# 2Direction: Theoretically Faster Distributed Training with Bidirectional Communication Compression

Alexander Tyurin

KAUST

Saudi Arabia

alexandertiurin@gmail.com

Peter Richtárik

KAUST

Saudi Arabia

richtarik@gmail.com

# Abstract

We consider distributed convex optimization problems in the regime when the communication between the server and the workers is expensive in both uplink and downlink directions. We develop a new and provably accelerated method, which we call 2Direction, based on fast bidirectional compressed communication and a new bespoke error-feedback mechanism which may be of independent interest. Indeed, we find that the EF and EF21-P mechanisms (Seide et al., 2014; Gruntkowska et al., 2023) that have considerable success in the design of efficient non-accelerated methods are not appropriate for accelerated methods.
In particular, we prove that 2Direction improves the previous state-of-the-art communication complexity $\widetilde{\Theta}\left(K\times \left(L / \alpha \mu +L_{\mathrm{max}}\omega /n\mu +\omega\right)\right)$ (Gruntkowska et al., 2023) to $\widetilde{\Theta} (K\times (\sqrt{L(\omega + 1) / \alpha\mu} +\sqrt{L_{\mathrm{max}}\omega^2 / n\mu} +1 / \alpha +\omega))$ in the $\mu$-strongly-convex setting, where $L$ and $L_{\mathrm{max}}$ are smoothness constants, $n$ is the number of workers, $\omega$ and $\alpha$ are the compression errors of the Rand$K$ and Top$K$ sparsifiers (taken as examples), and $K$ is the number of coordinates/bits that the server and workers send to each other. Moreover, our method is the first that improves upon the communication complexity of the vanilla accelerated gradient descent (AGD) method (Nesterov, 2018). We obtain similar improvements in the general convex regime as well. Finally, our theoretical findings are corroborated by experimental evidence.

# 1 Introduction

We consider convex optimization problems in the centralized distributed setting. These types of problems appear in federated learning (Konečný et al., 2016; McMahan et al., 2017) and distributed optimization (Ramesh et al., 2021). In this setting, one of the main problems is the communication bottleneck: the connection link between the server and the workers can be very slow. We focus our attention on methods that aim to address this issue by applying lossy compression to the communicated messages (Alistarh et al., 2017; Mishchenko et al., 2019; Gruntkowska et al., 2023).

# 1.1 The problem

Formally, we consider the optimization problem

$$
\min_{x \in \mathbb{R}^d} \left\{ f(x) := \frac{1}{n} \sum_{i=1}^n f_i(x) \right\}, \tag{1}
$$

where $n$ is the number of workers and $f_i:\mathbb{R}^d\to \mathbb{R}$ are smooth convex functions for all $i\in [n]\coloneqq \{1,\ldots ,n\}$.
We consider the centralized distributed optimization setting in which each $i^{\mathrm{th}}$ worker contains the function $f_i$, and all workers are directly connected to a server (Kairouz et al., 2021). In general, we want to find a (possibly random) point $\widehat{x}$ such that $\mathbb{E}[f(\widehat{x})] - f(x^*) \leq \varepsilon$, where $x^*$ is an optimal point. In the strongly convex setup, we also want to guarantee that $\mathbb{E}[\|\widetilde{x} - x^*\|^2] \leq \varepsilon$ for some point $\widetilde{x}$.

Virtually all other theoretical works in this genre assume that, compared to the worker-to-server (w2s) communication cost, the server-to-workers (s2w) broadcast is so fast that it can be ignored. We lift this limitation and instead associate a relative cost $r \in [0,1]$ with the two directions of communication. If $r = 0$, then s2w communication is free; if $r = 1$, then w2s communication is free; and if $r = 1/2$, then the s2w and w2s costs are equal. All our theoretical results hold for any $r \in [0,1]$. We formalize and elaborate upon this setup in Section 2.

# 1.2 Assumptions

Throughout the paper we rely on several standard assumptions on the functions $f_i$ and $f$.

Assumption 1.1. Functions $f_i$ are $L_i$-smooth, i.e., $\|\nabla f_i(x) - \nabla f_i(y)\| \leq L_i\|x - y\|$ for all $x, y \in \mathbb{R}^d$, for all $i \in [n]$. We let $L_{\max} \coloneqq \max_{i\in[n]} L_i$. Further, let $\widehat{L} > 0$ be a constant such that $\frac{1}{n}\sum_{i=1}^n \|\nabla f_i(x) - \nabla f_i(y)\|^2 \leq \widehat{L}^2\|x - y\|^2$ for all $x, y \in \mathbb{R}^d$.

Note that if the functions $f_i$ are $L_i$-smooth for all $i \in [n]$, then $\widehat{L} \leq L_{\max}$.

Assumption 1.2. Function $f$ is $L$-smooth, i.e., $\|\nabla f(x) - \nabla f(y)\| \leq L\|x - y\|$ for all $x, y \in \mathbb{R}^d$.

Assumption 1.3.
Functions $f_i$ are convex for all $i \in [n]$, and $f$ is $\mu$-strongly convex with $\mu \geq 0$, attaining a minimum at some point $x^* \in \mathbb{R}^d$.

It is known that the above smoothness constants are related in the following way.

Lemma 1.4 (Gruntkowska et al. (2023)). If Assumptions 1.2, 1.1 and 1.3 hold, then $\widehat{L} \leq L_{\max} \leq nL$ and $L \leq \widehat{L} \leq \sqrt{L_{\max} L}$.

# 2 Motivation: From Unidirectional to Bidirectional Compression

In this work, we distinguish between worker-to-server (w2s=uplink) and server-to-worker (s2w=downlink) communication costs, and define the w2s and s2w communication complexities of methods in the following natural way.

Definition 2.1. For a centralized distributed method $\mathcal{M}$ aiming to solve problem (1), the communication complexity $\mathfrak{m}_{\mathcal{M}}^{\mathrm{w2s}}$ is the expected number of coordinates/floats that each worker sends to the server to solve problem (1). The quantity $\mathfrak{m}_{\mathcal{M}}^{\mathrm{s2w}}$ is the expected number of floats/coordinates the server broadcasts to the workers to solve problem (1). If $\mathfrak{m}_{\mathcal{M}}^{\mathrm{w2s}} = \mathfrak{m}_{\mathcal{M}}^{\mathrm{s2w}}$, then we use the simplified notation $\mathfrak{m}_{\mathcal{M}} := \mathfrak{m}_{\mathcal{M}}^{\mathrm{s2w}} = \mathfrak{m}_{\mathcal{M}}^{\mathrm{w2s}}$.

Let us illustrate the above concepts on the simplest baseline: vanilla gradient descent (GD). It is well known (Nesterov, 2018) that for $L$-smooth, $\mu$-strongly convex problems, GD returns an $\varepsilon$-solution after $\mathcal{O}\left((L/\mu)\log(1/\varepsilon)\right)$ iterations. In each iteration, the workers and the server communicate all $\Theta(d)$ coordinates to each other (since no compression is applied). Therefore, the communication complexity of GD is $\mathfrak{m}_{\mathrm{GD}} = \Theta\left((dL/\mu)\log(1/\varepsilon)\right)$.
The same reasoning applies to the accelerated gradient method (AGD) (Nesterov, 2018), whose communication complexity is $\mathfrak{m}_{\mathrm{AGD}} = \Theta\left(d\sqrt{L/\mu}\log(1/\varepsilon)\right)$.

# 2.1 Compression mappings

In the literature, researchers often use the following two families of compressors:

Definition 2.2. A (possibly) stochastic mapping $\mathcal{C}:\mathbb{R}^d\to \mathbb{R}^d$ is a biased compressor if there exists $\alpha \in (0,1]$ such that

$$
\mathbb{E}\left[\|\mathcal{C}(x) - x\|^2\right] \leq (1 - \alpha)\|x\|^2, \quad \forall x \in \mathbb{R}^d. \tag{2}
$$

Definition 2.3. A stochastic mapping $\mathcal{C}:\mathbb{R}^d\to \mathbb{R}^d$ is an unbiased compressor if there exists $\omega \geq 0$ such that

$$
\mathbb{E}[\mathcal{C}(x)] = x, \quad \mathbb{E}\left[\|\mathcal{C}(x) - x\|^2\right] \leq \omega\|x\|^2, \quad \forall x \in \mathbb{R}^d. \tag{3}
$$

Table 1: Communication Rounds in the Strongly Convex Case. The number of communication rounds and round costs needed to get an $\varepsilon$-solution $(\mathbb{E}\left[\|\widehat{x} - x^*\|^2\right] \leq \varepsilon)$, up to logarithmic factors. The table shows the most relevant bidirectionally compressed methods, ordered by the total communication complexity # Communication Rounds $\times$ Round Cost (see (4) for details).

i. The parameter $r$ weights the importance/speed of the uplink and downlink connections. When $r = 1/2$, the uplink and downlink speeds are equal.
ii. The parameters $K_{\omega}$ and $K_{\alpha}$ are the expected densities (Definition 2.5) of the compressors $\mathcal{C}^D \in \mathbb{U}(\omega)$ and $\mathcal{C}^P \in \mathbb{B}(\alpha)$(a) used by the workers and the server, respectively.
Less formally, $K_{\omega}$ and $K_{\alpha}$ are the number of coordinates/bits that the workers and the server send to each other in each communication round. + +
| Method | # Communication Rounds | Round Cost(c) |
|---|---|---|
| Dore, Artemis, MURANA(a) (Liu et al., 2020; Philippenko and Dieuleveut, 2020; Condat and Richtárik, 2022) | \(\widetilde{\Omega}\left(\frac{\omega}{\alpha n}\frac{L_{\max}}{\mu}\right)\)(f) | \((1-r)K_{\omega}+rK_{\alpha}\) |
| MCM(a) (Philippenko and Dieuleveut, 2021) | \(\widetilde{\Omega}\left(\left(\frac{1}{\alpha^{3/2}}+\frac{\omega^{1/2}}{\alpha\sqrt{n}}+\frac{\omega}{n}\right)\frac{L_{\max}}{\mu}\right)\)(f) | \((1-r)K_{\omega}+rK_{\alpha}\) |
| GD (Nesterov, 2018) | \(\frac{L}{\mu}\) | \(d\) |
| EF21-P + DIANA (Gruntkowska et al., 2023) | \(\frac{L}{\alpha\mu}+\frac{L_{\max}\omega}{n\mu}+\omega\) | \((1-r)K_{\omega}+rK_{\alpha}\) |
| AGD (Nesterov, 2018) | \(\sqrt{\frac{L}{\mu}}\) | \(d\) |
| 2Direction (Remark 5.3)(b), (Theorem 5.2) | \(\sqrt{\frac{L(\omega+1)}{\alpha\mu}}+\sqrt{\frac{L_{\max}\omega^2}{n\mu}}+\frac{1}{\alpha}+\omega\) | \((1-r)K_{\omega}+rK_{\alpha}\) |
| 2Direction (Remark 5.5)(b), (Theorem 5.4) (requires \(L_{\max}/L\))(d) | \(\sqrt{\frac{L\max\{1,\,r(\omega+1)\}}{\alpha\mu}}+\sqrt{\frac{L^{2/3}L_{\max}^{1/3}(\omega+1)}{\alpha n^{1/3}\mu}}+\sqrt{\frac{L^{1/2}L_{\max}^{1/2}(\omega+1)^{3/2}}{\sqrt{\alpha n}\,\mu}}+\sqrt{\frac{L_{\max}\omega^2}{n\mu}}+\frac{1}{\alpha}+\omega\) | \((1-r)K_{\omega}+rK_{\alpha}\) |
(a) The Dore, Artemis, MURANA, and MCM methods do not support biased compressors for server-to-worker compression. In these methods, the error $\alpha$ equals $1/(\omega_{\mathrm{s}}+1)$, where the error $\omega_{\mathrm{s}}$ is a parameter of the unbiased compressor used for server-to-worker compression. For these methods, we define $1/(\omega_{\mathrm{s}}+1)$ as $\alpha$ to make comparison with EF21-P + DIANA and 2Direction easy.
(b) In this table, we present the simplified iteration complexity of 2Direction assuming that $r \leq 1/2$ and $\omega + 1 = \Theta\left(d/K_{\omega}\right)$. The full complexities are in (13) and (14). In Section 6, we show that 2Direction has no worse total communication complexity than EF21-P + DIANA for all $r \in [0,1]$ and for any choice of compressors.
(c) We define the Round Cost of a method $\mathcal{M}$ as a constant such that $\mathfrak{m}_{\mathcal{M}}^r = \#\text{Communication Rounds} \times \text{Round Cost}$, where $\mathfrak{m}_{\mathcal{M}}^r$ is the total communication complexity (4).
(d) 2Direction can have an even better total communication complexity if the algorithm can use the ratio $L_{\max}/L$ when selecting the parameters $\tau$ and $p$ in Algorithm 1. For instance, this is the case if we assume that $L_{\max} = L$, as was done by Li et al. (2020); Li and Richtárik (2021), for example.
(f) The notation $\widetilde{\Omega}(\cdot)$ means "at least up to logarithmic factors."

We will make use of the following assumption.

Assumption 2.4. The randomness in all compressors used in our method is drawn independently.

Let us denote the set of mappings satisfying Definitions 2.2 and 2.3 by $\mathbb{B}(\alpha)$ and $\mathbb{U}(\omega)$, respectively. The family of biased compressors $\mathbb{B}$ is wider. Indeed, it is well known that if $\mathcal{C} \in \mathbb{U}(\omega)$, then $\frac{1}{\omega+1}\cdot\mathcal{C} \in \mathbb{B}(1/(\omega+1))$.
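To make the two compressor classes concrete, here is a small numpy sketch (ours, not the authors' code) of the standard Top-$K$ and Rand-$K$ sparsifiers, which are formally introduced next: Top-$K$ satisfies the contraction bound (2) with $\alpha = K/d$ deterministically, and Rand-$K$ is unbiased in the sense of (3) with $\omega = d/K - 1$.

```python
import numpy as np

def top_k(x, k):
    """Top-K sparsifier: keep the k largest-magnitude entries (biased, in B(k/d))."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]  # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out

def rand_k(x, k, rng):
    """Rand-K sparsifier: keep k uniformly chosen entries, scaled by d/k (unbiased, in U(d/k - 1))."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = (d / k) * x[idx]
    return out

rng = np.random.default_rng(0)
d, k = 100, 10
x = rng.standard_normal(d)

# Contraction (2) with alpha = k/d holds deterministically for Top-K:
assert np.linalg.norm(top_k(x, k) - x) ** 2 <= (1 - k / d) * np.linalg.norm(x) ** 2

# Unbiasedness (3): the empirical mean of Rand-K outputs approaches x.
avg = np.mean([rand_k(x, k, rng) for _ in range(5000)], axis=0)
assert np.mean(np.abs(avg - x)) < 0.1
```

The Top-K check is exact: the discarded $d-k$ smallest squared entries are at most a $(1 - k/d)$ fraction of $\|x\|^2$, while the Rand-K check is a Monte Carlo approximation of the expectation.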
The canonical sparsification operators belonging to these classes are $\operatorname{Top}K \in \mathbb{B}(K/d)$ and $\operatorname{Rand}K \in \mathbb{U}(d/K - 1)$. The former outputs the $K$ largest values (in magnitude) of the input vector, while the latter outputs $K$ random values of the input vector, scaled by $d/K$ (Beznosikov et al., 2020). Following (Gorbunov et al., 2021; Tyurin and Richtárik, 2023), we now define the expected density of a sparsifier as a way to formalize its compression performance.

Definition 2.5. The expected density of a sparsifier $\mathcal{C}:\mathbb{R}^d\to\mathbb{R}^d$ is the quantity $K_{\mathcal{C}} \coloneqq \sup_{x\in\mathbb{R}^d}\mathbb{E}\left[\|\mathcal{C}(x)\|_0\right]$, where $\|y\|_0$ is the number of non-zero components of $y\in\mathbb{R}^d$.

Trivially, for the Rand$K$ and Top$K$ sparsifiers we have $K_{\mathcal{C}} = K$.

# 2.2 Unidirectional (i.e., w2s) compression

As mentioned in the introduction, virtually all theoretical works in the area of compressed communication ignore the s2w communication cost and instead aim to minimize $\mathfrak{m}_{\mathcal{M}}^{\mathrm{w2s}}$. Algorithmic work related to methods that only perform w2s compression has a long history, and this area is relatively well understood (Alistarh et al., 2017; Mishchenko et al., 2019; Richtárik et al., 2021).

We refer to the work of Gruntkowska et al. (2023) for a detailed discussion of the communication complexities of non-accelerated methods in the convex and non-convex settings. For instance, using Rand$K$, the DIANA method of Mishchenko et al. (2019) provably improves the communication complexity of GD to $\mathfrak{m}_{\mathrm{DIANA}}^{\mathrm{w2s}} = \widetilde{\Theta}\left(d + KL/\mu + dL_{\max}/n\mu\right)$. Accelerated methods focusing on w2s compression are also well investigated. For example, Li et al.
(2020) and Li and Richtárik (2021) developed accelerated methods, which are based on (Mishchenko et al., 2019; Kovalev et al., 2020), and provably improve the w2s complexity of DIANA. Moreover, using Rand$K$ with $K \leq d/n$, ADIANA improves the communication complexity of AGD to $\mathfrak{m}_{\mathrm{ADIANA}}^{\mathrm{w2s}} = \widetilde{\Theta}\left(d + d\sqrt{L_{\max}/n\mu}\right)$.

# 2.3 Bidirectional (i.e., w2s and s2w) compression

The methods mentioned in Section 2.2 do not perform server-to-workers (s2w) compression, and one can show that their s2w communication complexities are worse than $\mathfrak{m}_{\mathrm{AGD}} = \widetilde{\Theta}(d\sqrt{L/\mu})$. For example, using Rand$K$, the s2w communication complexity of ADIANA is at least $\mathfrak{m}_{\mathrm{ADIANA}}^{\mathrm{s2w}} = \widetilde{\Omega}(d \times \omega) = \widetilde{\Omega}(d^2/K)$, which can be $d/K$ times larger than in GD or AGD. Instead of $\mathfrak{m}_{\mathcal{M}}^{\mathrm{w2s}}$, methods performing bidirectional compression attempt to minimize the total communication complexity, which we define as a convex combination of the w2s and s2w communication complexities:

$$
\mathfrak{m}_{\mathcal{M}}^r := (1-r)\mathfrak{m}_{\mathcal{M}}^{\mathrm{w2s}} + r\mathfrak{m}_{\mathcal{M}}^{\mathrm{s2w}}. \tag{4}
$$

The parameter $r \in [0,1]$ weights the importance of the uplink (w2s) and downlink (s2w) connections. Methods from Section 2.2 assume that $r = 0$, thus ignoring the s2w communication cost. On the other hand, when $r = 1/2$, the uplink and downlink communication speeds are equal. By considering any $r \in [0,1]$, our methods and findings are applicable to more situations arising in practice. Obviously, $\mathfrak{m}_{\mathrm{GD}}^r = \mathfrak{m}_{\mathrm{GD}}$ and $\mathfrak{m}_{\mathrm{AGD}}^r = \mathfrak{m}_{\mathrm{AGD}}$ for all $r \in [0,1]$. Recently, Gruntkowska et al.
(2023) proposed the EF21-P + DIANA method. This is the first method supporting bidirectional compression that provably improves both the w2s and s2w complexities of GD: $\mathfrak{m}_{\mathrm{EF21\text{-}P+DIANA}}^r \leq \mathfrak{m}_{\mathrm{GD}}$ for all $r \in [0,1]$. Bidirectional methods designed before EF21-P + DIANA, including (Tang et al., 2020; Liu et al., 2020; Philippenko and Dieuleveut, 2021), do not guarantee a total communication complexity better than that of GD. The EF21-P + DIANA method is not an accelerated method and, in the worst case, can have communication complexities worse than those of AGD when the condition number $L/\mu$ is large.

# 3 Contributions

Motivated by the above discussion, in this work we aim to address the following

# Main Problem:

Is it possible to develop a method supporting bidirectional communication compression that improves the current best theoretical total communication complexity of EF21-P + DIANA, and guarantees a total communication complexity no worse than the communication complexity $\mathfrak{m}_{\mathrm{AGD}} = \widetilde{\Theta}(d\sqrt{L/\mu})$ of AGD, while improving on AGD in at least some regimes?

A) We develop a new fast method (2Direction; see Algorithm 1) supporting bidirectional communication compression. Our analysis leads to new state-of-the-art complexity rates in the centralized distributed setting (see Table 1), and as a byproduct, we answer Main Problem in the affirmative.
B) Gruntkowska et al. (2023) proposed to use the EF21-P error-feedback mechanism (8) to improve the convergence rates of non-accelerated methods supporting bidirectional communication compression. EF21-P is a reparameterization of the celebrated EF mechanism (Seide et al., 2014). We tried to use EF21-P in our method as well, but failed.
Our failures indicated that a fundamentally new approach is needed, and this eventually led us to design a new error-feedback mechanism (9) that is more appropriate for accelerated methods. We believe that this is a contribution of independent interest that might motivate future growth in the area.

C) Unlike previous theoretical works (Li et al., 2020; Li and Richtárik, 2021) on accelerated methods, we present a unified analysis in both the $\mu$-strongly-convex and general convex cases. Moreover, in the general convex setting and low accuracy regimes, our analysis improves the rate $\mathcal{O}\left(1/\varepsilon^{1/3}\right)$ of Li and Richtárik (2021) to $\mathcal{O}\left(\log(1/\varepsilon)\right)$ (see details in Section R).
D) Even though our central goal was to obtain new SOTA theoretical communication complexities for centralized distributed optimization, we show that the newly developed algorithm enjoys faster communication in practice as well (see details in Section Q).
Algorithm 1 2Direction: A Fast Gradient Method Supporting Bidirectional Compression

1: Parameters: Lipschitz-like parameter $\bar{L} > 0$; strong-convexity parameter $\mu \geq 0$; probability $p \in (0,1]$; parameter $\Gamma_0 \geq 1$; momentum $\tau \in (0,1]$; contraction parameter $\alpha \in (0,1]$ from (2); initial point $x^0 \in \mathbb{R}^d$; initial gradient shifts $h_1^0, \ldots, h_n^0 \in \mathbb{R}^d$; gradient shifts $k^0 \in \mathbb{R}^d$ and $v^0 \in \mathbb{R}^d$
2: Initialize $\beta = 1/(\omega+1)$, $w^0 = z^0 = u^0 = x^0$, and $h^0 = \frac{1}{n}\sum_{i=1}^n h_i^0$
3: for $t = 0, 1, \ldots, T-1$ do
4: $\Gamma_{t+1}, \gamma_{t+1}, \theta_{t+1} = \text{CalculateLearningRates}(\Gamma_t, \bar{L}, \mu, p, \alpha, \tau, \beta)$  Get learning rates using Algorithm 2
5: for $i = 1, \ldots, n$ in parallel do
6: $y^{t+1} = \theta_{t+1} w^t + (1 - \theta_{t+1}) z^t$
7: $m_i^{t,y} = \mathcal{C}_i^{D,y}(\nabla f_i(y^{t+1}) - h_i^t)$  Worker $i$ compresses the shifted gradient via the compressor $\mathcal{C}_i^{D,y} \in \mathbb{U}(\omega)$
8: Send compressed message $m_i^{t,y}$ to the server
9: end for
10: $g^{t+1} = h^t + \frac{1}{n}\sum_{i=1}^n m_i^{t,y}$
11: $u^{t+1} = \arg\min_{x\in\mathbb{R}^d} \langle g^{t+1}, x\rangle + \frac{\bar{L}+\Gamma_t\mu}{2\gamma_{t+1}}\|x - u^t\|^2 + \frac{\mu}{2}\|x - y^{t+1}\|^2$  A gradient-like descent step
12: $q^{t+1} = \arg\min_{x\in\mathbb{R}^d} \langle k^t, x\rangle + \frac{\bar{L}+\Gamma_t\mu}{2\gamma_{t+1}}\|x - w^t\|^2 + \frac{\mu}{2}\|x - y^{t+1}\|^2$
13: $p^{t+1} = \mathcal{C}^P(u^{t+1} - q^{t+1})$  Server compresses the shifted model via the compressor $\mathcal{C}^P \in \mathbb{B}(\alpha)$
14: $w^{t+1} = q^{t+1} + p^{t+1}$
15: $x^{t+1} = \theta_{t+1} u^{t+1} + (1 - \theta_{t+1}) z^t$
16: Send compressed message $p^{t+1}$ to all $n$ workers
17: Flip a coin $c^t \sim \text{Bernoulli}(p)$
18: $k^{t+1} = \left\{\begin{array}{ll} v^t, & c^t = 1 \\ k^t, & c^t = 0 \end{array}\right.$ and $z^{t+1} = \left\{\begin{array}{ll} x^{t+1}, & c^t = 1 \\ z^t, & c^t = 0 \end{array}\right.$
19: if $c^t = 1$ then
20: Broadcast non-compressed messages $x^{t+1}$ and $k^{t+1}$ to all $n$ workers  With small probability $p$!
21: end if
22: for $i = 1, \ldots, n$ in parallel do
23: $q^{t+1} = \arg\min_{x\in\mathbb{R}^d} \langle k^t, x\rangle + \frac{\bar{L}+\Gamma_t\mu}{2\gamma_{t+1}}\|x - w^t\|^2 + \frac{\mu}{2}\|x - y^{t+1}\|^2$
24: $w^{t+1} = q^{t+1} + p^{t+1}$
25: $z^{t+1} = \left\{\begin{array}{ll} x^{t+1}, & c^t = 1 \\ z^t, & c^t = 0 \end{array}\right.$
26: $m_i^{t,z} = \mathcal{C}_i^{D,z}(\nabla f_i(z^{t+1}) - h_i^t)$  Worker $i$ compresses the shifted gradient via the compressor $\mathcal{C}_i^{D,z} \in \mathbb{U}(\omega)$
27: $h_i^{t+1} = h_i^t + \beta m_i^{t,z}$
28: Send compressed message $m_i^{t,z}$ to the server
29: end for
30: $v^{t+1} = (1-\tau)v^t + \tau\left(h^t + \frac{1}{n}\sum_{i=1}^n m_i^{t,z}\right)$
31: $h^{t+1} = h^t + \beta\frac{1}{n}\sum_{i=1}^n m_i^{t,z}$
32: end for

# 4 New Method: 2Direction

In order to provide an answer to Main Problem, at the beginning of our research journey we hoped that a rather straightforward approach might bear fruit.
In particular, we considered the current state-of-the-art methods ADIANA (Algorithm 3) (Li et al., 2020), CANITA (Li and Richtárik, 2021) and EF21-P + DIANA (Algorithm 4) (Gruntkowska et al., 2023), and tried to combine the EF21-P compression

# Algorithm 2 CalculateLearningRates

1: Parameters: element $\Gamma_t$; parameter $\bar{L} > 0$; strong-convexity parameter $\mu \geq 0$; probability $p$; contraction parameter $\alpha$; momentum $\tau$; parameter $\beta$
2: Find the largest root $\bar{\theta}_{t+1}$ of the quadratic equation

$$
p\bar{L}\Gamma_t\bar{\theta}_{t+1}^2 + p(\bar{L}+\Gamma_t\mu)\bar{\theta}_{t+1} - (\bar{L}+\Gamma_t\mu) = 0
$$

3: $\theta_{\min} = \frac{1}{4}\min\left\{1, \frac{\alpha}{p}, \frac{\tau}{p}, \frac{\beta}{p}\right\}$; $\quad \theta_{t+1} = \min\{\bar{\theta}_{t+1}, \theta_{\min}\}$; $\quad \gamma_{t+1} = \frac{p\theta_{t+1}\Gamma_t}{1 - p\theta_{t+1}}$; $\quad \Gamma_{t+1} = \Gamma_t + \gamma_{t+1}$

mechanism on the server side with the ADIANA (accelerated DIANA) compression mechanism on the workers' side. In short, we were aiming to develop an "EF21-P + ADIANA" method. Note that while EF21-P + DIANA provides the current SOTA communication complexity among all methods supporting bidirectional compression, the method is not "accelerated". On the other hand, while ADIANA (in the strongly convex regime) and CANITA (in the convex regime) are "accelerated", they support unidirectional (uplink) compression only.

In Sections B and C we list the ADIANA and EF21-P + DIANA methods, respectively. One can see that in order to calculate $x^{t+1}$, $y^{t+1}$, and $z^{t+1}$ in ADIANA (Algorithm 3), it is sufficient for the server to broadcast the point $u^{t+1}$. At first sight, it seems that we might be able to develop an "EF21-P + ADIANA" method by replacing Line 15 in Algorithm 3 with Lines 12, 13, 14, and 16 from Algorithm 4.
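As an aside, Algorithm 2 above is easy to implement directly: the quadratic equation for $\bar{\theta}_{t+1}$ is solved in closed form, and the remaining quantities follow. A minimal Python sketch (an illustration of the pseudocode, not the authors' code):

```python
import math

def calculate_learning_rates(Gamma_t, L_bar, mu, p, alpha, tau, beta):
    """One call of Algorithm 2: returns (Gamma_{t+1}, gamma_{t+1}, theta_{t+1})."""
    # Largest root of p*L_bar*Gamma_t * th^2 + p*(L_bar + Gamma_t*mu) * th - (L_bar + Gamma_t*mu) = 0.
    a = p * L_bar * Gamma_t
    b = p * (L_bar + Gamma_t * mu)
    c = -(L_bar + Gamma_t * mu)
    theta_bar = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    theta_min = 0.25 * min(1.0, alpha / p, tau / p, beta / p)
    theta = min(theta_bar, theta_min)
    gamma = p * theta * Gamma_t / (1 - p * theta)  # p*theta < 1 because theta <= 1/4 * min{1, alpha/p, ...}
    return Gamma_t + gamma, gamma, theta
```

Since $a > 0$ and $c < 0$, the discriminant is positive and the `+` root is the largest (and positive) one; the cap $\theta_{t+1} \leq \theta_{\min} \leq 1/4$ keeps $1 - p\theta_{t+1}$ bounded away from zero.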
With these changes, we can try to calculate $x^{t+1}$ and $y^{t+1}$ using the formulas

$$
y^{t+1} = \theta_{t+1} w^t + (1 - \theta_{t+1}) z^t, \tag{5}
$$

$$
x^{t+1} = \theta_{t+1} w^{t+1} + (1 - \theta_{t+1}) z^t, \tag{6}
$$

$$
w^{t+1} = w^t + \mathcal{C}^P\left(u^{t+1} - w^t\right), \tag{7}
$$

instead of Lines 7 and 17 in Algorithm 3. Unfortunately, all our attempts to make this work failed, and we now believe that this "naive" approach will not lead to a resolution of Main Problem. Let us briefly explain why we think so, and how we ultimately managed to resolve Main Problem.

$\spadesuit$ The first issue arises from the fact that the point $x^{t+1}$, and, consequently, the point $z^{t+1}$, depend on $w^{t+1}$ instead of $u^{t+1}$, and thus the error from the primal (i.e., server) compressor $\mathcal{C}^P$ affects them. In our proofs, we do not know how to prove a good convergence rate with (6). Therefore, we decided to use the original update (Line 17 from Algorithm 3) instead. We can do this almost for free because in Algorithm 3 the point $x^{t+1}$ is only used in Line 18 (Algorithm 3) with small probability $p$. In the final version of our algorithm 2Direction (see Algorithm 1), we broadcast a non-compressed message $x^{t+1}$ with probability $p$. In Section 6, we show that $p$ is so small that these rare non-compressed messages do not affect the total communication complexity of Algorithm 1.
$\spadesuit$ The second issue comes from the observation that we cannot perform the same trick for the point $y^{t+1}$, since it is required in each iteration of Algorithm 3. We tried to use (5) and (7), but this still does not work. Deeper understanding of this can only be gained by a detailed examination of our proof and the proofs of (Li et al., 2020; Gruntkowska et al., 2023).
One way to explain the difficulty is to observe that in non-accelerated methods (Gorbunov et al., 2020; Gruntkowska et al., 2023), the variance-reducing shifts $h^t$ converge to the fixed vector $\nabla f(x^*)$, while in the accelerated methods (Li et al., 2020; Li and Richtárik, 2021), these shifts $h^t$ converge to the non-fixed vectors $\nabla f(z^t)$ in the corresponding Lyapunov functions. Assume that $\mu = 0$. Then, instead of the EF21-P mechanism

$$
w^{t+1} = w^t + \mathcal{C}^P\left(u^{t+1} - w^t\right) \stackrel{\text{Line 13 in Alg. 3}}{=} w^t + \mathcal{C}^P\left(u^t - \frac{\gamma_{t+1}}{L} g^{t+1} - w^t\right) \tag{8}
$$

$$
\stackrel{\nabla f(x^*) = 0}{=} w^t - \frac{\gamma_{t+1}}{L}\nabla f(x^*) + \mathcal{C}^P\left(u^t - \frac{\gamma_{t+1}}{L}\left(g^{t+1} - \nabla f(x^*)\right) - w^t\right),
$$

we propose to perform the step

$$
w^{t+1} = w^t - \frac{\gamma_{t+1}}{\bar{L}}\nabla f(z^t) + \mathcal{C}^P\left(u^t - \frac{\gamma_{t+1}}{\bar{L}}\left(g^{t+1} - \nabla f(z^t)\right) - w^t\right) = q^{t+1} + \mathcal{C}^P(u^{t+1} - q^{t+1}), \quad \text{where} \tag{9}
$$

$$
u^{t+1} = \underset{x\in\mathbb{R}^d}{\arg\min}\left\langle g^{t+1}, x\right\rangle + \frac{\bar{L}}{2\gamma_{t+1}}\left\|x - u^t\right\|^2, \quad q^{t+1} = \underset{x\in\mathbb{R}^d}{\arg\min}\left\langle \nabla f(z^t), x\right\rangle + \frac{\bar{L}}{2\gamma_{t+1}}\left\|x - u^t\right\|^2.
$$

Unlike (8), step (9) resolves all our previous problems, and we were able to obtain new SOTA rates.

$\spadesuit$ However, step (9) is not implementable since the server and the nodes need to know the vector $\nabla f(z^t)$.
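For intuition, here is a numpy sketch of one step of mechanism (9) in the idealized case $\mu = 0$ where $\nabla f(z^t)$ is assumed to be available exactly; the helper `top_k` stands in for the biased server compressor $\mathcal{C}^P$ (this is an illustration, not the authors' code):

```python
import numpy as np

def top_k(x, k):
    """Stand-in for the biased server compressor C^P (Top-K sparsifier)."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def step_9(u_t, g, grad_z, gamma, L_bar, k):
    """One step of mechanism (9) with mu = 0.
    u^{t+1} and q^{t+1} are the two closed-form argmins, and the server
    compresses only their difference before forming w^{t+1}."""
    u_next = u_t - (gamma / L_bar) * g        # argmin <g^{t+1}, x> + (L_bar/(2 gamma)) ||x - u^t||^2
    q_next = u_t - (gamma / L_bar) * grad_z   # same step, driven by grad f(z^t)
    w_next = q_next + top_k(u_next - q_next, k)
    return u_next, q_next, w_next
```

Note that $u^{t+1} - q^{t+1} = -\frac{\gamma_{t+1}}{\bar{L}}(g^{t+1} - \nabla f(z^t))$, so the compressed vector is small whenever the gradient estimator $g^{t+1}$ is close to $\nabla f(z^t)$, which is the point of the shift.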
The last crucial observation is the same as with the points $x^t$ and $z^t$: the vector $\nabla f(z^t)$ changes only with probability $p$, since the point $z^t$ changes with probability $p$. Intuitively, this means that it is easier to communicate $\nabla f(z^t)$ between the server and the workers. We do this using two auxiliary control vectors, $v^t$ and $k^t$. The former "learns" the value of $\nabla f(z^t)$ in Line 30 (Algorithm 1), and the latter is used in Line 14 (Algorithm 1) instead of $\nabla f(z^t)$. Then, when the algorithm updates $z^{t+1}$, it also updates $k^t$ in Line 18 (Algorithm 1), and the updated non-compressed vector $k^t$ is broadcast to the workers.

The described changes appear in Lines 6, 12, 13, 14, 18, 20, 23, 24 and 30 of our new Algorithm 1. Other steps of the algorithm correspond to the original ADIANA method (Algorithm 3). Remarkably, all these new steps are only required to substitute a single Line 15 of Algorithm 3!

# 5 Theoretical Communication Complexity of 2Direction

Having outlined our thought process when developing 2Direction (Algorithm 1), we are now ready to present our theoretical iteration and communication complexity results. Note that 2Direction depends on two hyper-parameters, the probability $p$ (used in Lines 4 and 17) and the momentum $\tau$ (used in Lines 4 and 30). Further, while Li et al. (2020); Li and Richtárik (2021) assume a strong relationship between $L$ and $L_{\max}$ ($L = L_{\max}$), Gruntkowska et al. (2023) differentiate between $L$ and $L_{\max}$, and thus perform a more general analysis of their method. In order to perform a fair comparison to the above results, we have decided to minimize the total communication complexity $\mathfrak{m}_{\mathcal{M}}^r$ as a function of the hyper-parameters $p$ and $\tau$, depending on whether the ratio $L_{\max}/L$ is known or not.

Defining $R^2 \coloneqq \left\| x^0 - x^* \right\|^2$, Theorems E.12 and E.14 state that 2Direction converges after

$$
T := \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L}{\alpha \tau \mu}}, \sqrt{\frac{\sqrt{L L_{\max}}\,(\omega + 1)\sqrt{\omega\tau}}{\alpha\sqrt{n}\,\mu}}, \sqrt{\frac{\sqrt{L L_{\max}}\sqrt{\omega + 1}\sqrt{\omega\tau}}{\alpha\sqrt{p}\sqrt{n}\,\mu}}, \right.\right. \tag{10}
$$

$$
\left.\left. \sqrt{\frac{L_{\max}\,\omega(\omega + 1)^{2} p}{n\mu}}, \sqrt{\frac{L_{\max}\,\omega}{n p \mu}}, \frac{1}{\alpha}, \frac{1}{\tau}, (\omega + 1), \frac{1}{p}\right\}\right), \quad \text{or}
$$

$$
\begin{array}{l}
T_{\mathrm{gc}} := \Theta\left(\max\left\{\sqrt{\frac{L R^{2}}{\alpha p \varepsilon}}, \sqrt{\frac{L R^{2}}{\alpha \tau \varepsilon}}, \sqrt{\frac{\sqrt{L L_{\max}}\,(\omega + 1)\sqrt{\omega\tau}\, R^{2}}{\alpha\sqrt{n}\,\varepsilon}}, \sqrt{\frac{\sqrt{L L_{\max}}\sqrt{\omega + 1}\sqrt{\omega\tau}\, R^{2}}{\alpha\sqrt{p}\sqrt{n}\,\varepsilon}}, \right.\right. \tag{11} \\
\left.\left. \sqrt{\frac{L_{\max}\,\omega(\omega + 1)^{2} p R^{2}}{n\varepsilon}}, \sqrt{\frac{L_{\max}\,\omega R^{2}}{n p \varepsilon}}\right\}\right) + \widetilde{\Theta}\left(\max\left\{\frac{1}{\alpha}, \frac{1}{\tau}, (\omega + 1), \frac{1}{p}\right\}\right)
\end{array}
$$

iterations, in the $\mu$-strongly convex and general convex regimes, respectively. These complexities depend on two hyper-parameters: $p \in (0,1]$ and $\tau \in (0,1]$. For simplicity, in what follows we consider the $\mu$-strongly convex case only$^4$. While the iteration complexities (10) and (11) are clearly important, in the context of our paper, optimizing the communication complexity (4) is more important.
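To build intuition for how the hyper-parameters trade off, the expression inside the $\widetilde{\Theta}$ in (10) can be evaluated numerically, up to the absolute constants and logarithmic factors that $\widetilde{\Theta}$ hides (the function name is ours):

```python
import math


def iteration_bound(L, L_max, mu, n, omega, alpha, p, tau):
    """Evaluate the max{...} inside the Theta-tilde bound (10) for the
    mu-strongly-convex iteration complexity, up to absolute constants and
    the logarithmic factors hidden by the Theta-tilde notation."""
    terms = [
        math.sqrt(L / (alpha * p * mu)),
        math.sqrt(L / (alpha * tau * mu)),
        math.sqrt(math.sqrt(L * L_max) * (omega + 1) * math.sqrt(omega * tau)
                  / (alpha * math.sqrt(n) * mu)),
        math.sqrt(math.sqrt(L * L_max) * math.sqrt(omega + 1) * math.sqrt(omega * tau)
                  / (alpha * math.sqrt(p) * math.sqrt(n) * mu)),
        math.sqrt(L_max * omega * (omega + 1) ** 2 * p / (n * mu)),
        math.sqrt(L_max * omega / (n * p * mu)),
        1 / alpha,
        1 / tau,
        omega + 1,
        1 / p,
    ]
    return max(terms)
```

For example, shrinking $p$ inflates the $\sqrt{L/(\alpha p \mu)}$ and $1/p$ terms while deflating others, which is exactly the tension that the optimal choices of $p$ and $\tau$ derived below must balance.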
In the following simple theorem, we give expressions for the communication complexities of 2Direction, taking into account both workers-to-server (w2s) and server-to-workers (s2w) communication.

Theorem 5.1. Assume that $\mathcal{C}_i^{D,\cdot}$ and $\mathcal{C}^P$ have expected densities equal to $K_{\omega}$ and $K_{\alpha}$, respectively (see Definition 2.5). In view of Theorem E.12, in expectation, the w2s and s2w communication complexities are equal to

$$
\mathfrak{m}_{\mathrm{new}}^{\mathrm{w2s}} = \widetilde{\Theta}\left(K_{\omega} \times T + d\right) \quad \text{and} \quad \mathfrak{m}_{\mathrm{new}}^{\mathrm{s2w}} = \widetilde{\Theta}\left(\left(K_{\alpha} + p d\right) \times T + d\right). \tag{12}
$$

Proof. The first complexity in (12) follows because the w2s communication involves $\mathcal{C}_i^{D,y}(\cdot)$ and $\mathcal{C}_i^{D,z}(\cdot)$ only. The second complexity in (12) follows because the s2w communication involves $\mathcal{C}^P(\cdot)$, plus two non-compressed vectors $x^{t+1}$ and $k^{t+1}$ with probability $p$. The term $d$ comes from the fact that non-compressed vectors are communicated in the initialization phase.

# 5.1 The ratio $L_{\mathrm{max}} / L$ is not known

In the following theorem, we consider the regime when the exact value of $L_{\max} / L$ is not known. Hence, we seek to find $p$ and $\tau$ that minimize the worst case of $\mathfrak{m}_{\mathrm{new}}^r$ (see (4)) w.r.t. $L_{\max} \in [L, nL]$.

Theorem 5.2. Choose $r \in [0,1]$ and let $\mu_{\omega,\alpha}^r \coloneqq \frac{rd}{(1-r)K_\omega + rK_\alpha}$. In view of Theorem 5.1, the values $p = \min\left\{\frac{1}{\omega+1}, \frac{1}{\mu_{\omega,\alpha}^r}\right\}$ and $\tau = \frac{p^{1/3}}{(\omega+1)^{2/3}}$ minimize $\max_{L_{\mathrm{max}} \in [L,nL]} \mathfrak{m}_{\mathrm{new}}^r$.
This choice leads to the following number of communication rounds:

$$
T^{\mathrm{realistic}} := \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L \max\left\{\omega + 1, \mu_{\omega,\alpha}^{r}\right\}}{\alpha \mu}}, \sqrt{\frac{L_{\max}\,\omega \max\left\{\omega + 1, \mu_{\omega,\alpha}^{r}\right\}}{n \mu}}, \frac{1}{\alpha}, (\omega + 1), \mu_{\omega,\alpha}^{r}\right\}\right). \tag{13}
$$

The total communication complexity thus equals $\mathfrak{m}_{\mathrm{realistic}}^r = \widetilde{\Theta}\left(\left((1 - r)K_\omega + rK_\alpha\right)T^{\mathrm{realistic}} + d\right)$.

Remark 5.3. To simplify the rate (13) and understand the quantity $\mu_{\omega,\alpha}^{r}$, let $\mathcal{C}_i^{D,\cdot}$ be the RandK sparsifier$^5$ and consider the case when the s2w communication is not slower than the w2s communication, i.e., $r \leq 1/2$. Then $T^{\mathrm{realistic}} = \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L(\omega + 1)}{\alpha\mu}}, \sqrt{\frac{L_{\max}\omega(\omega + 1)}{n\mu}}, \frac{1}{\alpha}, (\omega + 1)\right\}\right)$ and $\mu_{\omega,\alpha}^{r} \leq \omega + 1$. Indeed, this follows from $r \leq 1/2$ and the fact that $\omega + 1 = d / K_{\omega}$ for the RandK compressor: $\mu_{\omega,\alpha}^{r} := \frac{rd}{(1 - r)K_{\omega} + rK_{\alpha}} \leq \frac{r}{1 - r} \times \frac{d}{K_{\omega}} \leq \omega + 1$.

# 5.2 The ratio $L_{\max} / L$ is known

We now consider the case when we have information about the ratio $L_{\mathrm{max}} / L$.

Theorem 5.4. Choose $r \in [0,1]$, and let $\mu_{\omega,\alpha}^r \coloneqq \frac{rd}{(1 - r)K_\omega + rK_\alpha}$. In view of Theorem 5.1, the values $p$ and $\tau$ given by (63) and (58), respectively, minimize $\mathfrak{m}_{\mathrm{new}}^r$ from (10).
This choice leads to the following number of communication rounds:

$$
\begin{array}{l}
T^{\mathrm{optimistic}} = \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L \max\{1, \mu_{\omega,\alpha}^{r}\}}{\alpha \mu}}, \sqrt{\frac{L^{2/3} L_{\max}^{1/3}(\omega + 1)}{\alpha n^{1/3} \mu}}, \sqrt{\frac{L^{1/2} L_{\max}^{1/2}(\omega + 1)^{3/2}}{\sqrt{\alpha n}\,\mu}}, \right.\right. \tag{14} \\
\left.\left. \sqrt{\frac{L_{\max}\,\omega \max\{\omega + 1, \mu_{\omega,\alpha}^{r}\}}{n \mu}}, \frac{1}{\alpha}, (\omega + 1), \mu_{\omega,\alpha}^{r}\right\}\right).
\end{array}
$$

The total communication complexity thus equals $\mathfrak{m}_{\mathrm{optimistic}}^r = \widetilde{\Theta}\left(\left((1 - r)K_\omega + rK_\alpha\right)T^{\mathrm{optimistic}} + d\right)$.

Note that information about $L_{\mathrm{max}} / L$ leads to a better rate than in Theorem 5.2.

Remark 5.5. To simplify the rate (14), let $\mathcal{C}_i^{D,\cdot}$ be the RandK sparsifier and consider the case when the s2w communication is not slower than the w2s communication, i.e., $r \leq 1/2$. Then $T^{\mathrm{optimistic}} = \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L\max\{1, r(\omega + 1)\}}{\alpha\mu}}, \sqrt{\frac{L^{2/3}L_{\max}^{1/3}(\omega + 1)}{\alpha n^{1/3}\mu}}, \sqrt{\frac{L^{1/2}L_{\max}^{1/2}(\omega + 1)^{3/2}}{\sqrt{\alpha n}\,\mu}}, \sqrt{\frac{L_{\max}\omega(\omega + 1)}{n\mu}}, \frac{1}{\alpha}, (\omega + 1)\right\}\right)$. Indeed, this follows from $r \leq 1/2$ and the fact that $\omega + 1 = d / K_{\omega}$ for the RandK compressor: $\mu_{\omega,\alpha}^{r} \coloneqq \frac{rd}{(1 - r)K_{\omega} + rK_{\alpha}} \leq \frac{r}{1 - r} \times \frac{d}{K_{\omega}} \leq 2r(\omega + 1)$.

# 6 Theoretical Comparison with Previous State of the Art

We now show that the communication complexity of 2Direction is always no worse than that of EF21-P + DIANA and AGD.
Crucially, in some regimes, it can be substantially better. Furthermore, we show that if the s2w communication cost is zero (i.e., if $r = 0$), then 2Direction achieves the same communication complexity as ADIANA (Li et al., 2020) (see Section S).

Comparison with EF21-P + DIANA. The EF21-P + DIANA method has communication complexities equal to

$$
\mathfrak{m}_{\mathrm{EF21-P+DIANA}}^{\mathrm{w2s}} = \widetilde{\Theta}\left(K_{\omega} \times T^{\mathrm{EF21-P+DIANA}} + d\right) \quad \text{and} \quad \mathfrak{m}_{\mathrm{EF21-P+DIANA}}^{\mathrm{s2w}} = \widetilde{\Theta}\left(K_{\alpha} \times T^{\mathrm{EF21-P+DIANA}} + d\right),
$$

where $K_{\omega}$ and $K_{\alpha}$ are the expected densities of $\mathcal{C}_i^D$ and $\mathcal{C}^P$ in Algorithm 4. The last term $d$ comes from the fact that EF21-P + DIANA sends non-compressed vectors in the initialization phase. Let us define $K_{\omega,\alpha}^{r} := (1 - r)K_{\omega} + rK_{\alpha}$. Therefore, the total communication complexity equals

$$
\mathfrak{m}_{\mathrm{EF21-P+DIANA}}^{r} = \widetilde{\Theta}\left(K_{\omega,\alpha}^{r}\left(\frac{L}{\alpha\mu} + \frac{\omega L_{\max}}{n\mu} + \omega\right) + d\right). \tag{15}
$$

Theorem 5.2 ensures that the total communication complexity of 2Direction is

$$
\mathfrak{m}_{\mathrm{realistic}}^{r} = \widetilde{\Theta}\left(K_{\omega,\alpha}^{r}\left(\sqrt{\frac{L\max\left\{\omega + 1, \mu_{\omega,\alpha}^{r}\right\}}{\alpha\mu}} + \sqrt{\frac{L_{\max}\,\omega\max\left\{\omega + 1, \mu_{\omega,\alpha}^{r}\right\}}{n\mu}} + \frac{1}{\alpha} + \omega + \mu_{\omega,\alpha}^{r}\right) + d\right). \tag{16}
$$

One can see that (16) is an accelerated rate; it has a much better dependence on the condition numbers $L/\mu$ and $L_{\mathrm{max}}/\mu$. In Section E.8, we prove the following simple theorem, which shows that 2Direction is no worse than EF21-P + DIANA.

Theorem 6.1. For all $r \in [0,1]$, $\mathfrak{m}_{\mathrm{realistic}}^r = \widetilde{\mathcal{O}}\left(\mathfrak{m}_{\mathrm{EF21-P+DIANA}}^r\right)$.

Comparison with AGD. To compare the abstract complexity (16) with the non-abstract complexity $\widetilde{\Theta}(d\sqrt{L/\mu})$, we take the RandK and TopK compressors in Algorithm 1.

Theorem 6.2. For all $r \in [0,1]$ and for all $K \in [d]$, let us take the RandK and TopK compressors with the parameters (expected densities) i) $K_{\omega} = K$ and $K_{\alpha} = \min\left\{\left\lceil \frac{1-r}{r}K\right\rceil, d\right\}$ for $r \in [0, 1/2]$, ii) $K_{\omega} = \min\left\{\left\lceil \frac{r}{1-r}K\right\rceil, d\right\}$ and $K_{\alpha} = K$ for $r \in (1/2, 1]$. Then we have $\mathfrak{m}_{\mathrm{realistic}}^r = \widetilde{\mathcal{O}}(\mathfrak{m}_{\mathrm{AGD}})$.

Theorem 6.2 states that the total communication complexity of 2Direction is no worse than that of AGD. It can be strictly better in the regimes when $\alpha > K/d$ (for $\mathrm{Top}K$), $L_{\max} < nL$, and $n > 1$.

# 7 Proof Sketch

Once we settled on the final version of Algorithm 1, the beginning of the proof is fairly standard (Li et al., 2020; Tyurin and Richtárik, 2023; Gruntkowska et al., 2023). We prove a descent lemma (Lemma E.4) and lemmas that control the convergence of the auxiliary sequences (Lemmas E.5, E.6, E.7, E.8). Using these lemmas, we construct the Lyapunov function (45) with the coefficients $\kappa$, $\rho$, $\lambda$ and $\nu_{t} \geq 0$ for all $t \geq 0$.

One of the main problems was to find appropriate $\bar{L}$, $\kappa$, $\rho$, $\lambda$ and $\nu_{t} \geq 0$ such that we obtain convergence.
In more detail, to obtain convergence it suffices to find $\bar{L}$, $\kappa$, $\rho$, $\lambda$ and $\nu_{t} \geq 0$ such that (46), (47) and (48) hold. Using the SymPy symbolic computation library (Meurer et al., 2017), we found appropriate $\kappa$, $\rho$, $\lambda$ and $\nu_{t} \geq 0$ ((90), (88), (82), (83)) for which these inequalities hold. But that is not all: to obtain convergence, we also need bounds on the parameter $\bar{L}$, which essentially determine the speed of convergence. In raw form, the symbolic computations produced a huge number of bounds on $\bar{L}$ (see Sections I, J, L, N). Most of them are clearly redundant, so we had to identify the essential ones. After a close look at the bounds on $\bar{L}$, we found that requiring (44) is sufficient to ensure that all inequalities from Sections I, J, L and N hold (see Section O for details).

![](images/b722a4c3774fd404ac1aac50d1d7bc21248e195540b651294341485ae7b57081.jpg)
Figure 1: Roadmap to our resolution of Main Problem.

# 8 Limitations and Future Work

In contrast to Algorithm 4 (EF21-P + DIANA), in which the server always broadcasts compressed vectors, in our Algorithm 1 (2Direction) the server needs to broadcast non-compressed vectors with
+ +# Acknowledgements + +This work of P. Richtárik and A. Tyurin was supported by the KAUST Baseline Research Scheme (KAUST BRF) and the KAUST Extreme Computing Research Center (KAUST ECRC), and the work of P. Richtárik was supported by the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI). + +# References + +Alistarh, D., Grubic, D., Li, J., Tomioka, R., and Vojnovic, M. (2017). QSGD: Communication-efficient SGD via gradient quantization and encoding. In Advances in Neural Information Processing Systems (NIPS), pages 1709-1720. +Beznosikov, A., Horvath, S., Richtárik, P., and Safaryan, M. (2020). On biased compression for distributed learning. arXiv preprint arXiv:2002.12410. +Chang, C.-C. and Lin, C.-J. (2011). LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1-27. +Condat, L. and Richtárik, P. (2022). Murana: A generic framework for stochastic variance-reduced optimization. In Mathematical and Scientific Machine Learning, pages 81-96. PMLR. +Gorbunov, E., Burlachenko, K., Li, Z., and Richtárik, P. (2021). MARINA: Faster non-convex distributed learning with compression. In 38th International Conference on Machine Learning. +Gorbunov, E., Hanzely, F., and Richtárik, P. (2020). A unified theory of SGD: Variance reduction, sampling, quantization and coordinate descent. In International Conference on Artificial Intelligence and Statistics, pages 680-690. PMLR. +Gruntkowska, K., Tyurin, A., and Richtárik, P. (2023). EF21-P and friends: Improved theoretical communication complexity for distributed optimization with bidirectional compression. In International Conference on Machine Learning. +Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Dennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., et al. (2021). Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1-2):1-210. 
Konečný, J., McMahan, H. B., Yu, F. X., Richtárik, P., Suresh, A. T., and Bacon, D. (2016). Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492.
Kovalev, D., Horváth, S., and Richtárik, P. (2020). Don't jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop. In Algorithmic Learning Theory, pages 451-467. PMLR.
Krizhevsky, A., Hinton, G., et al. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto, Toronto.
Lan, G. (2020). First-order and stochastic optimization methods for machine learning. Springer.
Lan, G., Li, Z., and Zhou, Y. (2019). A unified variance-reduced accelerated gradient method for convex optimization. Advances in Neural Information Processing Systems, 32.
Li, Z., Kovalev, D., Qian, X., and Richtárik, P. (2020). Acceleration for compressed gradient descent in distributed and federated optimization. In International Conference on Machine Learning.

Li, Z. and Richtárik, P. (2021). CANITA: Faster rates for distributed convex optimization with communication compression. Advances in Neural Information Processing Systems, 34:13770-13781.
Liu, X., Li, Y., Tang, J., and Yan, M. (2020). A double residual compression algorithm for efficient distributed learning. In International Conference on Artificial Intelligence and Statistics, pages 133-143. PMLR.
McMahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273-1282. PMLR.
Meurer, A., Smith, C. P., Paprocki, M., Čertík, O., Kirpichev, S. B., Rocklin, M., Kumar, A., Ivanov, S., Moore, J. K., Singh, S., Rathnayake, T., Vig, S., Granger, B. E., Muller, R. P., Bonazzi, F., Gupta, H., Vats, S., Johansson, F., Pedregosa, F., Curry, M. J., Terrel, A.
R., Roučka, Š., Saboo, A., Fernando, I., Kulal, S., Cimrman, R., and Scopatz, A. (2017). SymPy: symbolic computing in Python. PeerJ Computer Science, 3:e103.
Mishchenko, K., Gorbunov, E., Takáč, M., and Richtárik, P. (2019). Distributed learning with compressed gradient differences. arXiv preprint arXiv:1901.09269.
Nesterov, Y. (2018). Lectures on convex optimization, volume 137. Springer.
Philippenko, C. and Dieuleveut, A. (2020). Artemis: tight convergence guarantees for bidirectional compression in federated learning. arXiv preprint arXiv:2006.14591.
Philippenko, C. and Dieuleveut, A. (2021). Preserved central model for faster bidirectional compression in distributed settings. Advances in Neural Information Processing Systems, 34:2387-2399.
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. (2021). Zero-shot text-to-image generation. arXiv preprint arXiv:2102.12092.
Richtárik, P., Sokolov, I., and Fatkhullin, I. (2021). EF21: A new, simpler, theoretically better, and practically faster error feedback. In Advances in Neural Information Processing Systems.
Seide, F., Fu, H., Droppo, J., Li, G., and Yu, D. (2014). 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Fifteenth Annual Conference of the International Speech Communication Association.
Stonyakin, F., Tyurin, A., Gasnikov, A., Dvurechensky, P., Agafonov, A., Dvinskikh, D., Alkousa, M., Pasechnyuk, D., Artamonov, S., and Piskunova, V. (2021). Inexact model: a framework for optimization and variational inequalities. Optimization Methods and Software, 36(6):1155-1201.
Tang, H., Lian, X., Yu, C., Zhang, T., and Liu, J. (2020). DoubleSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression. In Proceedings of the 36th International Conference on Machine Learning (ICML).
Tyurin, A. and Richtárik, P. (2023).
DASHA: Distributed nonconvex optimization with communication compression and optimal oracle complexity. International Conference on Learning Representations (ICLR).

# Appendix

# Contents

- 1 Introduction
  - 1.1 The problem
  - 1.2 Assumptions
- 2 Motivation: From Unidirectional to Bidirectional Compression
  - 2.1 Compression mappings
  - 2.2 Unidirectional (i.e., w2s) compression
  - 2.3 Bidirectional (i.e., w2s and s2w) compression
- 3 Contributions
- 4 New Method: 2Direction
- 5 Theoretical Communication Complexity of 2Direction
  - 5.1 The ratio $L_{\mathrm{max}} / L$ is not known
  - 5.2 The ratio $L_{\mathrm{max}} / L$ is known
- 6 Theoretical Comparison with Previous State of the Art
- 7 Proof Sketch
- 8 Limitations and Future Work
- A Table of Notations
- B The Original ADIANA Algorithm
- C The Original EF21-P + DIANA Algorithm
- D Useful Identities and Inequalities
- E Proofs of Theorems
  - E.1 Analysis of learning rates
  - E.2 Generic lemmas
  - E.3 Construction of the Lyapunov function
  - E.4 Main theorem
  - E.5 Strongly-convex case
  - E.6 General convex case
  - E.7 Choosing optimal parameters
  - E.8 Comparison with EF21 + DIANA
  - E.9 Comparison with AGD
- F Auxiliary Inequalities For $\bar{L}$
- G Proof of Lemma E.10 (First Symbolically Computed)
- H Proof of Lemma E.11 (Second Symbolically Computed)
- I Symbolically Computed Constraints for $\bar{L}$ Such That The Term w.r.t. $\kappa$ is less or equal $1/2$ in (87)
- J Symbolically Computed Constraints for $\bar{L}$ Such That The Term w.r.t. $\rho$ is less or equal $1/2$ in (89)
- K Symbolically Computed Expression (91)
- L Symbolically Computed Constraints for $\bar{L}$ Such That The Inequality from Section K Holds
- M Symbolically Computed Expression (92)
- N Symbolically Computed Constraints for $\bar{L}$ Such That The Inequality from Section M Holds
- O Symbolical Check That The Constraints from Sections I, J, L and N Follow From The Constraint (44)
- P Jupyter Notebook for Symbolic Computations
  - P.1 Fileutils.py
- Q Experiments
  - Q.1 Setup
  - Q.2 Results
- R Convergence Rate of CANITA obtained by Li and Richtárik (2021)
- S Comparison with ADIANA

# A Table of Notations
| Notation | Meaning |
| --- | --- |
| $g = \mathcal{O}(f)$ | there exists $C > 0$ such that $g(z) \leq C \times f(z)$ for all $z \in Z$ |
| $g = \Omega(f)$ | there exists $C > 0$ such that $g(z) \geq C \times f(z)$ for all $z \in Z$ |
| $g = \Theta(f)$ | $g = \mathcal{O}(f)$ and $g = \Omega(f)$ |
| $g = \widetilde{\mathcal{O}}(f)$ | there exists $C > 0$ such that $g(z) \leq C \times f(z) \times \log(\mathrm{poly}(z))$ for all $z \in Z$ |
| $g = \widetilde{\Omega}(f)$ | there exists $C > 0$ such that $g(z) \geq C \times f(z) \times \log(\mathrm{poly}(z))$ for all $z \in Z$ |
| $g = \widetilde{\Theta}(f)$ | $g = \widetilde{\mathcal{O}}(f)$ and $g = \widetilde{\Omega}(f)$ |
| $\{a, \ldots, b\}$ | the set $\{i \in \mathbb{Z} \mid a \leq i \leq b\}$ |
| $[n]$ | $\{1, \ldots, n\}$ |

# B The Original ADIANA Algorithm

In this section, we present the ADIANA algorithm from (Li et al., 2020). In the following method, the notations, parameterization, and order of steps can be slightly different, but the general idea is the same.

Algorithm 3 Accelerated DIANA (ADIANA) by Li et al. (2020)
1: Parameters: Lipschitz-like parameter $\bar{L} > 0$, strong-convexity parameter $\mu \geq 0$, probability $p$, parameter $\Gamma_0$, initial point $x^0 \in \mathbb{R}^d$, initial gradient shifts $h_1^0, \dots, h_n^0 \in \mathbb{R}^d$
2: Initialize $\beta = 1/(\omega + 1)$ and $z^0 = u^0 = x^0$
3: Initialize $h^0 = \frac{1}{n}\sum_{i = 1}^{n}h_i^0$
4: for $t = 0,1,\ldots,T - 1$ do
5: $\Gamma_{t+1}, \gamma_{t+1}, \theta_{t+1} =$ CalculateOriginalLearningRates(..)
6: for $i = 1,\dots,n$ in parallel do
7: $y^{t+1} = \theta_{t+1}u^{t} + (1 - \theta_{t+1})z^{t}$
8: $m_i^{t,y} = \mathcal{C}_i^{D,y}(\nabla f_i(y^{t+1}) - h_i^t)$ Worker i compresses the shifted gradient via the compressor $\mathcal{C}_i^{D,y} \in \mathbb{U}(\omega)$
9: Send compressed message $m_i^{t,y}$ to the server
10: end for
11: $m^{t,y} = \frac{1}{n}\sum_{i = 1}^{n}m_i^{t,y}$ Server averages the messages
12: $g^{t+1} = h^t + m^{t,y} \equiv \frac{1}{n}\sum_{i = 1}^{n}(h_i^t + m_i^{t,y})$
13: $u^{t+1} = \arg\min_{x\in\mathbb{R}^d}\langle g^{t+1},x\rangle + \frac{\bar{L} + \Gamma_t\mu}{2\gamma_{t+1}}\left\|x - u^t\right\|^2 + \frac{\mu}{2}\left\|x - y^{t+1}\right\|^2$ Server does a gradient-like descent step
14: Flip a coin $c^t \sim$ Bernoulli($p$)
15: Broadcast the non-compressed message $u^{t+1}$ to all $n$ workers
16: for $i = 1,\dots,n$ in parallel do
17: $x^{t+1} = \theta_{t+1}u^{t+1} + (1 - \theta_{t+1})z^t$
18: $z^{t+1} = \left\{ \begin{array}{ll}x^{t+1}, & c^t = 1\\ z^t, & c^t = 0 \end{array} \right.$
19: $m_i^{t,z} = \mathcal{C}_i^{D,z}(\nabla f_i(z^{t+1}) - h_i^t)$ Worker i compresses the shifted gradient via the compressor
$\mathcal{C}_i^{D,z}\in \mathbb{U}(\omega)$ +20: $h_i^{t + 1} = h_i^t +\beta m_i^{t,z}$ Update the control variables +21: Send compressed message $m_i^{t,z}$ to the server +22: end for +23: $h^{t + 1} = h^t +\beta \frac{1}{n}\sum_{i = 1}^{n}m_i^{t,z}$ Server averages the messages + +# C The Original EF21-P + DIANA Algorithm + +In this section, we present the EF21-P + DIANA algorithm from (Gruntkowska et al., 2023). In the following method, the notations, parameterization, and order of steps can be slightly different, but the general idea is the same. + +Algorithm 4 EF21-P + DIANA by Gruntkowska et al. (2023) +1: Parameters: learning rates $\gamma >0$ and $\beta >0$ , initial model $u^0\in \mathbb{R}^d$ , initial gradient shifts $h_1^0,\dots,h_n^0\in$ $\mathbb{R}^d$ , average of the initial gradient shifts $h^0 = \frac{1}{n}\sum_{i = 1}^{n}h_i^0$ , initial model shift $w^{0} = u^{0}\in \mathbb{R}^{d}$ +2: for $t = 0,1,\ldots ,T - 1$ do +3: for $i = 1,\dots ,n$ in parallel do +4: $m_i^t = \mathcal{C}_i^D (\nabla f_i(w^t) - h_i^t)$ Worker i compresses the shifted gradient via the dual compressor $\mathcal{C}_i^D\in \mathbb{U}(\omega)$ +5: Send compressed message $m_i^t$ to the server +6: $h_i^{t + 1} = h_i^t +\beta m_i^t$ Worker i updates its local gradient shift with stepsize $\beta$ +7: end for +8: $m^t = \frac{1}{n}\sum_{i = 1}^n m_i^t$ Server averages the n messages received from the workers +9: $h^{t + 1} = h^t +\beta m^t$ Server updates the average gradient shift so that $h^t = \frac{1}{n}\sum_{i = 1}^n h_i^t$ +10: $g^{t} = h^{t} + m^{t}$ Server computes the gradient estimator +11: $u^{t + 1} = u^{t} - \gamma g^{t}$ Server takes a gradient-type step with stepsize $\gamma$ +12: $p^{t + 1} = \mathcal{C}^P (u^{t + 1} - w^t)$ Server compresses the shifted model via the primal compressor $\mathcal{C}^P\in \mathbb{B}(\alpha)$ +13: $w^{t + 1} = w^{t} + p^{t + 1}$ Server updates the model shift +14: Broadcast compressed message $p^{t + 1}$ to all n workers +15: for $i = 
1,\dots ,n$ in parallel do +16: $w^{t + 1} = w^{t} + p^{t + 1}$ Worker i updates its local copy of the model shift +17: end for +18: end for + +# D Useful Identities and Inequalities + +For all $x,y,x_1,\ldots ,x_n\in \mathbb{R}^d$ $s > 0$ and $\alpha \in (0,1]$ we have: + +$$ +\left\| x + y \right\| ^ {2} \leq (1 + s) \left\| x \right\| ^ {2} + \left(1 + s ^ {- 1}\right) \| y \| ^ {2}, \tag {17} +$$ + +$$ +\left\| x + y \right\| ^ {2} \leq 2 \left\| x \right\| ^ {2} + 2 \left\| y \right\| ^ {2}, \tag {18} +$$ + +$$ +\langle x, y \rangle \leq \frac {\| x \| ^ {2}}{2 s} + \frac {s \| y \| ^ {2}}{2}, \tag {19} +$$ + +$$ +(1 - \alpha) \left(1 + \frac {\alpha}{2}\right) \leq 1 - \frac {\alpha}{2}, \tag {20} +$$ + +$$ +(1 - \alpha) \left(1 + \frac {2}{\alpha}\right) \leq \frac {2}{\alpha}, \tag {21} +$$ + +$$ +\langle a, b \rangle = \frac {1}{2} \left(\| a \| ^ {2} + \| b \| ^ {2} - \| a - b \| ^ {2}\right). \tag {22} +$$ + +Variance decomposition: For any random vector $X \in \mathbb{R}^d$ and any non-random $c \in \mathbb{R}^d$ , we have + +$$ +\mathbb {E} \left[ \| X - c \| ^ {2} \right] = \mathbb {E} \left[ \| X - \mathbb {E} [ X ] \| ^ {2} \right] + \| \mathbb {E} [ X ] - c \| ^ {2}. \tag {23} +$$ + +Lemma D.1 (Nesterov (2018)). Let $f: \mathbb{R}^d \to \mathbb{R}$ be a function for which Assumptions 1.2 and 1.3 are satisfied. Then for all $x, y \in \mathbb{R}^d$ we have: + +$$ +\left\| \nabla f (x) - \nabla f (y) \right\| ^ {2} \leq 2 L (f (x) - f (y) - \langle \nabla f (y), x - y \rangle). \tag {24} +$$ + +# E Proofs of Theorems + +# E.1 Analysis of learning rates + +In this section, we establish inequalities for the sequences from Algorithm 2. + +Lemma E.1. Suppose that parameter $\bar{L} > 0$ , strong-convexity parameter $\mu \geq 0$ , probability $p \in (0,1]$ , $\bar{L} \geq \mu$ , and $\Gamma_0 \geq 1$ . Then the sequences generated by Algorithm 2 have the following properties: + +1. 
The quantities $\theta_{t + 1},\gamma_{t + 1}$ , and $\Gamma_{t + 1}$ are well-defined and $\theta_{t + 1},\gamma_{t + 1}\geq 0$ for all $t\geq 0$ +2. $\gamma_{t + 1} = p\theta_{t + 1}\Gamma_{t + 1}$ for all $t\geq 0$ +3. $\bar{L}\theta_{t + 1}\gamma_{t + 1}\leq (\bar{L} +\Gamma_t\mu)$ for all $t\geq 0$ + +4. + +$$ +\Gamma_ {t} \geq \frac {\Gamma_ {0}}{2} \exp \left(t \min \left\{\sqrt {\frac {p \mu}{4 \bar {L}}}, p \theta_ {\min } \right\}\right) +$$ + +for all $t\geq 0$ + +5. + +$$ +\Gamma_ {t} \geq \left\{ \begin{array}{l l} \frac {\Gamma_ {0}}{2} \exp \left(t p \theta_ {\min}\right), & t < \bar {t} \\ \frac {1}{4 p \theta_ {\min} ^ {2}} + \frac {p (t - \bar {t}) ^ {2}}{1 6}, & t \geq \bar {t}. \end{array} \right. +$$ + +where $\bar{t} := \max \left\{\left\lceil \frac{1}{p\theta_{\min}}\log \frac{1}{2\Gamma_0p\theta_{\min}^2}\right\rceil ,0\right\}$ . + +6. $\{\theta_{t + 1}\}_{t = 0}^{\infty}$ is a non-increasing sequence. + +Proof. + +1. Note that $\bar{\theta}_{t + 1}$ is the largest root of + +$$ +p \bar {L} \Gamma_ {t} \bar {\theta} _ {t + 1} ^ {2} + p (\bar {L} + \Gamma_ {t} \mu) \bar {\theta} _ {t + 1} - (\bar {L} + \Gamma_ {t} \mu) = 0. \tag {25} +$$ + +We fix $t \geq 0$ . Assume that $\Gamma_t > 0$ . + +Then + +$$ +p \bar {L} \Gamma_ {t} \times 0 + p (\bar {L} + \Gamma_ {t} \mu) \times 0 - (\bar {L} + \Gamma_ {t} \mu) < 0. +$$ + +Therefore, the largest root $\bar{\theta}_{t + 1}$ is well-defined and $\bar{\theta}_{t + 1} \geq 0$ , and $\theta_{t + 1} = \min \{\bar{\theta}_{t + 1}, \theta_{\min}\} \geq 0$ . Next, $\gamma_{t + 1}$ is well-defined and + +$$ +\gamma_ {t + 1} = p \theta_ {t + 1} \Gamma_ {t} / (1 - p \theta_ {t + 1}) \geq 0 +$$ + +since $p\theta_{t + 1} \in [0,1 / 4]$ . Finally, $\Gamma_{t + 1} = \Gamma_t + \gamma_{t + 1} > 0$ . We showed that, for all $t \geq 0$ , if $\Gamma_t > 0$ , then $\theta_{t + 1}, \gamma_{t + 1} \geq 0$ and $\Gamma_{t + 1} > 0$ . 
Note that $\Gamma_0 > 0$ , thus $\theta_{t + 1}, \gamma_{t + 1} \geq 0$ and $\Gamma_{t + 1} > 0$ for all $t \geq 0$ . + +2. From the definition of $\gamma_{t + 1}$ and $\Gamma_{t + 1}$ , we have + +$$ +\left(1 - p \theta_ {t + 1}\right) \gamma_ {t + 1} = p \theta_ {t + 1} \Gamma_ {t}, +$$ + +which is equivalent to + +$$ +\gamma_ {t + 1} = p \theta_ {t + 1} (\Gamma_ {t} + \gamma_ {t + 1}) = p \theta_ {t + 1} \Gamma_ {t + 1}. +$$ + +3. Recall again that $\bar{\theta}_{t + 1}\geq 0$ is the largest root of + +$$ +p \bar {L} \Gamma_ {t} \bar {\theta} _ {t + 1} ^ {2} + p (\bar {L} + \Gamma_ {t} \mu) \bar {\theta} _ {t + 1} - (\bar {L} + \Gamma_ {t} \mu) = 0. +$$ + +If $\bar{\theta}_{t + 1}\leq \theta_{\mathrm{min}}$ , then + +$$ +p \bar {L} \Gamma_ {t} \theta_ {t + 1} ^ {2} + p (\bar {L} + \Gamma_ {t} \mu) \theta_ {t + 1} - (\bar {L} + \Gamma_ {t} \mu) = 0. +$$ + +Otherwise, if $\bar{\theta}_{t + 1} > \theta_{\mathrm{min}}$ , since + +$$ +p \bar {L} \Gamma_ {t} \times 0 + p (\bar {L} + \Gamma_ {t} \mu) \times 0 - (\bar {L} + \Gamma_ {t} \mu) < 0, +$$ + +and $\theta_{t + 1} = \theta_{\min} < \bar{\theta}_{t + 1}$ , then + +$$ +p \bar {L} \Gamma_ {t} \theta_ {t + 1} ^ {2} + p (\bar {L} + \Gamma_ {t} \mu) \theta_ {t + 1} - (\bar {L} + \Gamma_ {t} \mu) \leq 0. \tag {26} +$$ + +In all cases, the inequality (26) holds. From this inequality, we can get + +$$ +p \bar {L} \Gamma_ {t} \theta_ {t + 1} ^ {2} \leq (\bar {L} + \Gamma_ {t} \mu) (1 - p \theta_ {t + 1}) +$$ + +and + +$$ +\bar {L} \theta_ {t + 1} \frac {p \theta_ {t + 1} \Gamma_ {t}}{(1 - p \theta_ {t + 1})} \leq (\bar {L} + \Gamma_ {t} \mu). +$$ + +Using the definition of $\gamma_{t + 1}$ , we obtain + +$$ +\bar {L} \theta_ {t + 1} \gamma_ {t + 1} \leq (\bar {L} + \Gamma_ {t} \mu) +$$ + +for all $t\geq 0$ + +4. 
Let us find the largest root of the quadratic equation (25): + +$$ +\bar {\theta} _ {t + 1} := \frac {- p (\bar {L} + \Gamma_ {t} \mu) + \sqrt {p ^ {2} (\bar {L} + \Gamma_ {t} \mu) ^ {2} + 4 p \bar {L} \Gamma_ {t} (\bar {L} + \Gamma_ {t} \mu)}}{2 p \bar {L} \Gamma_ {t}}. \tag {27} +$$ + +Let us define $a \coloneqq p^2 (\bar{L} + \Gamma_t \mu)^2$ and $b \coloneqq 4\bar{L} p(\bar{L} + \Gamma_t \mu) \Gamma_t$ , then + +$$ +\bar {\theta} _ {t + 1} = \frac {- \sqrt {a} + \sqrt {a + b}}{2 \bar {L} p \Gamma_ {t}} +$$ + +Since $\Gamma_t \geq 1$ for all $t \geq 0$ , and $\bar{L} \geq \mu$ , we have + +$$ +a = p ^ {2} \left(\bar {L} + \Gamma_ {t} \mu\right) ^ {2} \leq p \left(\bar {L} + \Gamma_ {t} \mu\right) ^ {2} \leq p \left(\bar {L} + \Gamma_ {t} \mu\right) \left(\Gamma_ {t} \bar {L} + \Gamma_ {t} \mu\right) \leq 2 \bar {L} p \left(\bar {L} + \Gamma_ {t} \mu\right) \Gamma_ {t} = \frac {b}{2}. +$$ + +Using $\sqrt{x + y} \geq \left(\sqrt{x} + \sqrt{y}\right) / \sqrt{2}$ for all $x, y \geq 0$ , and $\sqrt{b} \geq \sqrt{2}\sqrt{a}$ we have + +$$ +\begin{array}{l} \bar {\theta} _ {t + 1} = \frac {- \sqrt {a} + \sqrt {a + b}}{2 \bar {L} p \Gamma_ {t}} \geq \frac {- \sqrt {a} + \frac {1}{\sqrt {2}} \sqrt {a} + \frac {1}{\sqrt {2}} \sqrt {b}}{2 \bar {L} p \Gamma_ {t}} \\ = \frac {\left(\frac {1}{\sqrt {2}} - 1\right) \sqrt {a} + \frac {1}{\sqrt {2}} \left(\frac {1}{\sqrt {2}} + 1 - \frac {1}{\sqrt {2}}\right) \sqrt {b}}{2 \bar {L} p \Gamma_ {t}} \geq \frac {\frac {1}{\sqrt {2}} \left(\frac {1}{\sqrt {2}}\right) \sqrt {b}}{2 \bar {L} p \Gamma_ {t}} = \frac {\sqrt {b}}{4 \bar {L} p \Gamma_ {t}}. 
\\ \end{array} +$$ + +Therefore, + +$$ +\bar {\theta} _ {t + 1} \geq \frac {\sqrt {4 \bar {L} p (\bar {L} + \Gamma_ {t} \mu) \Gamma_ {t}}}{4 \bar {L} p \Gamma_ {t}} = \sqrt {\frac {(\bar {L} + \Gamma_ {t} \mu)}{4 \bar {L} p \Gamma_ {t}}} \geq \max \left\{\sqrt {\frac {1}{4 p \Gamma_ {t}}}, \sqrt {\frac {\mu}{4 p \bar {L}}} \right\} +$$ + +and + +$$ +\theta_ {t + 1} \geq \min \left\{\max \left\{\sqrt {\frac {1}{4 p \Gamma_ {t}}}, \sqrt {\frac {\mu}{4 p \bar {L}}} \right\}, \theta_ {\min } \right\}. \tag {28} +$$ + +Next, since $p\theta_{t + 1}\in [0,1 / 4]$, we have + +$$ +\gamma_ {t + 1} := p \theta_ {t + 1} \Gamma_ {t} / (1 - p \theta_ {t + 1}) \geq p \theta_ {t + 1} \Gamma_ {t} (1 + p \theta_ {t + 1}) \tag {29} +$$ + +and + +$$ +\Gamma_ {t + 1} := \Gamma_ {t} + \gamma_ {t + 1} \geq \left(1 + p \theta_ {t + 1} + p ^ {2} \theta_ {t + 1} ^ {2}\right) \Gamma_ {t}. +$$ + +Using (28) and (29), we obtain + +$$ +\begin{array}{l} \Gamma_ {t + 1} \geq \left(1 + p \min \left\{\sqrt {\frac {\mu}{4 p \bar {L}}}, \theta_ {\min } \right\}\right) \Gamma_ {t} = \left(1 + \min \left\{\sqrt {\frac {p \mu}{4 \bar {L}}}, p \theta_ {\min } \right\}\right) \Gamma_ {t} \\ \geq \Gamma_ {0} \left(1 + \min \left\{\sqrt {\frac {p \mu}{4 \bar {L}}}, p \theta_ {\min } \right\}\right) ^ {t + 1} \geq \frac {\Gamma_ {0}}{2} \exp \left((t + 1) \min \left\{\sqrt {\frac {p \mu}{4 \bar {L}}}, p \theta_ {\min } \right\}\right), \tag {30} \\ \end{array} +$$ + +where we use that $1 + x \geq e^{x} / 2$ for all $x \in [0,1]$. + +5. Using (28) and (29), we have + +$$ +\begin{array}{l} \Gamma_ {t + 1} \geq \left(1 + p \theta_ {t + 1} + p ^ {2} \theta_ {t + 1} ^ {2}\right) \Gamma_ {t} \\ \geq \Gamma_ {t} + p \min \left\{\sqrt {\frac {1}{4 p \Gamma_ {t}}}, \theta_ {\min } \right\} \Gamma_ {t} + p ^ {2} \min \left\{\sqrt {\frac {1}{4 p \Gamma_ {t}}}, \theta_ {\min } \right\} ^ {2} \Gamma_ {t}. \\ \end{array} +$$ + +The sequence $\Gamma_{t}$ is strictly increasing.
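As a sanity check, the recursions analyzed above can be simulated numerically. The following Python sketch uses arbitrary placeholder constants (not taken from the paper) chosen to satisfy the standing assumptions ($\Gamma_0 \geq 1$, $\mu \leq \bar{L}$, $p \in (0,1]$, $p\theta_{\min} \leq 1/4$); it iterates the largest root of the quadratic (25) and verifies points 2 and 3, the lower bound (28), and the monotonicity properties claimed in this proof:

```python
import math

# Placeholder constants (illustrative only); they satisfy the assumptions:
# Gamma_0 >= 1, mu <= L_bar, p in (0, 1], p * theta_min <= 1/4.
L_bar, mu, p, theta_min = 10.0, 0.1, 0.5, 0.4
Gamma = 1.0  # Gamma_0
prev_theta_bar = float("inf")

for t in range(300):
    # Largest root theta_bar of the quadratic (25):
    #   p*L_bar*Gamma * x^2 + p*(L_bar + Gamma*mu) * x - (L_bar + Gamma*mu) = 0.
    qa = p * L_bar * Gamma
    qb = p * (L_bar + Gamma * mu)
    qc = -(L_bar + Gamma * mu)
    theta_bar = (-qb + math.sqrt(qb * qb - 4 * qa * qc)) / (2 * qa)
    theta = min(theta_bar, theta_min)
    gamma = p * theta * Gamma / (1 - p * theta)
    Gamma_next = Gamma + gamma

    # Point 2: gamma_{t+1} = p * theta_{t+1} * Gamma_{t+1}.
    assert math.isclose(gamma, p * theta * Gamma_next, rel_tol=1e-9)
    # Point 3: L_bar * theta_{t+1} * gamma_{t+1} <= L_bar + Gamma_t * mu.
    assert L_bar * theta * gamma <= (L_bar + Gamma * mu) * (1 + 1e-9)
    # Inequality (28): lower bound on theta_{t+1}.
    lower = min(max(math.sqrt(1 / (4 * p * Gamma)),
                    math.sqrt(mu / (4 * p * L_bar))), theta_min)
    assert theta >= lower * (1 - 1e-9)
    # Point 6: theta_bar is non-increasing; Gamma_t strictly increases.
    assert theta_bar <= prev_theta_bar + 1e-12
    assert Gamma_next > Gamma
    prev_theta_bar, Gamma = theta_bar, Gamma_next
```

With these placeholder values all assertions pass, and the simulation also confirms that $\Gamma_t$ grows monotonically, in line with the claims above.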
Thus, there exists a minimal $\widehat{t} \geq 0$ such that $\Gamma_{\widehat{t}} \geq (4p\theta_{\min}^2)^{-1}$. For all $0 \leq t < \widehat{t}$, it holds that $\Gamma_t < (4p\theta_{\min}^2)^{-1}$, $\sqrt{\frac{1}{4p\Gamma_t}} > \theta_{\min}$, and + +$$ +\Gamma_ {t + 1} \geq \Gamma_ {t} + p \theta_ {\min } \Gamma_ {t} + p ^ {2} \theta_ {\min } ^ {2} \Gamma_ {t} \geq \Gamma_ {t} + p \theta_ {\min } \Gamma_ {t} \geq \Gamma_ {0} (1 + p \theta_ {\min }) ^ {t + 1} \geq \frac {\Gamma_ {0}}{2} \exp ((t + 1) p \theta_ {\min }). \tag {31} +$$ + +Therefore, if $\widehat{t} > 0$, then + +$$ +\frac {1}{4 p \theta_ {\min } ^ {2}} > \Gamma_ {\widehat {t} - 1} \geq \frac {\Gamma_ {0}}{2} \exp \left((\widehat {t} - 1) p \theta_ {\min }\right). +$$ + +Thus, we have the following bound for $\widehat{t}$: + +$$ +\widehat {t} \leq \bar {t} := \max \left\{\left\lceil \frac {1}{p \theta_ {\min }} \log \frac {1}{2 \Gamma_ {0} p \theta_ {\min } ^ {2}} \right\rceil , 0 \right\}. +$$ + +For all $t \geq \widehat{t}$, we have $\sqrt{\frac{1}{4p\Gamma_t}} \leq \theta_{\min}$ and + +$$ +\Gamma_ {t + 1} \geq \Gamma_ {t} + p \sqrt {\frac {1}{4 p \Gamma_ {t}}} \Gamma_ {t} + p ^ {2} \frac {1}{4 p \Gamma_ {t}} \Gamma_ {t} = \Gamma_ {t} + \sqrt {\frac {p}{4}} \sqrt {\Gamma_ {t}} + \frac {p}{4}. +$$ + +Using mathematical induction, let us show that + +$$ +\Gamma_ {t} \geq \frac {1}{4 p \theta_ {\min } ^ {2}} + \frac {p (t - \widehat {t}) ^ {2}}{16} \tag {32} +$$ + +for all $t \geq \widehat{t}$. For $t = \widehat{t}$, the claim holds since $\Gamma_{\widehat{t}} \geq \left(4p\theta_{\min}^2\right)^{-1}$ by the definition of $\widehat{t}$.
Next, assume that (32) holds for some $t \geq \widehat{t}$; then + +$$ +\begin{array}{l} \Gamma_ {t + 1} \geq \Gamma_ {t} + \frac {\sqrt {p}}{2} \sqrt {\Gamma_ {t}} + \frac {p}{4} \\ \geq \frac {1}{4 p \theta_ {\min } ^ {2}} + \frac {p (t - \widehat {t}) ^ {2}}{16} + \frac {\sqrt {p}}{2} \sqrt {\frac {1}{4 p \theta_ {\min } ^ {2}} + \frac {p (t - \widehat {t}) ^ {2}}{16}} + \frac {p}{4} \\ \geq \frac {1}{4 p \theta_ {\min } ^ {2}} + \frac {p (t - \widehat {t}) ^ {2}}{16} + \frac {p (t - \widehat {t})}{8} + \frac {p}{4} \\ \geq \frac {1}{4 p \theta_ {\min } ^ {2}} + \frac {p (t - \widehat {t} + 1) ^ {2}}{16}. \\ \end{array} +$$ + +This completes the induction. Combining (31) and (32), we obtain a unified bound on $\Gamma_t$: + +$$ +\Gamma_ {t} \geq \min \left\{\frac {\Gamma_ {0}}{2} \exp (t p \theta_ {\min }) , \frac {1}{4 p \theta_ {\min } ^ {2}} + \frac {p (t - \widehat {t}) ^ {2}}{16} \right\} +$$ + +for all $t \geq 0$. Also, if $t < \bar{t}$, then the first term in the minimum is less than or equal to the second one. Therefore, + +$$ +\Gamma_ {t} \geq \left\{ \begin{array}{l l} \frac {\Gamma_ {0}}{2} \exp (t p \theta_ {\min }) , & t < \bar {t} \\ \min \left\{\frac {\Gamma_ {0}}{2} \exp (t p \theta_ {\min }) , \frac {1}{4 p \theta_ {\min } ^ {2}} + \frac {p (t - \widehat {t}) ^ {2}}{16} \right\}, & t \geq \bar {t}. \end{array} \right. \tag {33} +$$ + +Let us bound the second term. Recall that $\bar{t} \geq \widehat{t}$; thus, if $t \geq \bar{t}$, then $(t - \widehat{t})^2 \geq (t - \bar{t})^2$. Hence, we can replace $\widehat{t}$ with $\bar{t}$ in (33) and get + +$$ +\Gamma_ {t} \geq \left\{ \begin{array}{l l} \frac {\Gamma_ {0}}{2} \exp \left(t p \theta_ {\min }\right), & t < \bar {t} \\ \min \left\{\frac {\Gamma_ {0}}{2} \exp \left(t p \theta_ {\min }\right), \frac {1}{4 p \theta_ {\min } ^ {2}} + \frac {p (t - \bar {t}) ^ {2}}{16} \right\}, & t \geq \bar {t}. \end{array} \right.
+$$ + +Using the Taylor expansion of the exponential function at the point $\bar{t}$, we get + +$$ +\begin{array}{l} \frac {\Gamma_ {0}}{2} \exp (t p \theta_ {\min }) \geq \frac {\Gamma_ {0}}{2} \exp (\bar {t} p \theta_ {\min }) + \frac {\Gamma_ {0}}{2} p ^ {2} \theta_ {\min } ^ {2} \exp (\bar {t} p \theta_ {\min }) \frac {(t - \bar {t}) ^ {2}}{2} \\ \geq \frac {1}{4 p \theta_ {\min } ^ {2}} + \frac {p}{8} (t - \bar {t}) ^ {2} \geq \frac {1}{4 p \theta_ {\min } ^ {2}} + \frac {p}{16} (t - \bar {t}) ^ {2} \\ \end{array} +$$ + +for all $t \geq \bar{t}$. Finally, we can conclude that + +$$ +\Gamma_ {t} \geq \left\{ \begin{array}{l l} \frac {\Gamma_ {0}}{2} \exp \left(t p \theta_ {\min }\right), & t < \bar {t} \\ \frac {1}{4 p \theta_ {\min } ^ {2}} + \frac {p (t - \bar {t}) ^ {2}}{16}, & t \geq \bar {t}. \end{array} \right. +$$ + +6. Let us rewrite (27): + +$$ +\begin{array}{l} \bar {\theta} _ {t + 1} = \frac {- p (\bar {L} + \Gamma_ {t} \mu) + \sqrt {p ^ {2} (\bar {L} + \Gamma_ {t} \mu) ^ {2} + 4 p \bar {L} \Gamma_ {t} (\bar {L} + \Gamma_ {t} \mu)}}{2 p \bar {L} \Gamma_ {t}} \\ = - \frac {1}{2} \left(\frac {1}{\Gamma_ {t}} + \frac {\mu}{\bar {L}}\right) + \sqrt {\frac {1}{4} \left(\frac {1}{\Gamma_ {t}} + \frac {\mu}{\bar {L}}\right) ^ {2} + \frac {1}{p} \left(\frac {1}{\Gamma_ {t}} + \frac {\mu}{\bar {L}}\right)}. \\ \end{array} +$$ + +Let us temporarily denote $a_{t} \coloneqq \frac{1}{\Gamma_{t}} + \frac{\mu}{\bar{L}}$ for all $t \geq 0$. Note that $a_{t}$ is a non-increasing sequence, since $\Gamma_{t+1} \geq \Gamma_{t}$ for all $t \geq 0$. Therefore, + +$$ +\bar {\theta} _ {t + 1} = - \frac {1}{2} a _ {t} + \sqrt {\frac {1}{4} a _ {t} ^ {2} + \frac {1}{p} a _ {t}}. +$$ + +Let us take the derivative of the right-hand side w.r.t.
$a_{t}$ and check that it is nonnegative: + +$$ +\begin{array}{l} - \frac {1}{2} + \frac {\frac {1}{2} a _ {t} + \frac {1}{p}}{2 \sqrt {\frac {1}{4} a _ {t} ^ {2} + \frac {1}{p} a _ {t}}} \geq 0 \Leftrightarrow \frac {1}{2} a _ {t} + \frac {1}{p} \geq \sqrt {\frac {1}{4} a _ {t} ^ {2} + \frac {1}{p} a _ {t}} \\ \Leftrightarrow \frac {1}{4} a _ {t} ^ {2} + \frac {1}{p} a _ {t} + \frac {1}{p ^ {2}} \geq \frac {1}{4} a _ {t} ^ {2} + \frac {1}{p} a _ {t} \Leftrightarrow \frac {1}{p ^ {2}} \geq 0. \\ \end{array} +$$ + +Thus, $\bar{\theta}_{t + 1}$ is non-decreasing in $a_{t}$. Since the sequence $a_{t}$ is non-increasing in $t$, the sequence $\bar{\theta}_{t + 1}$ is non-increasing in $t$. It remains to use that $\theta_{t + 1}$ is the minimum of $\bar{\theta}_{t + 1}$ and the constant $\theta_{\min}$. + +# E.2 Generic lemmas + +First, we prove a well-known lemma from the theory of accelerated methods (Lan, 2020; Stonyakin et al., 2021). + +Lemma E.2. Let us take vectors $a, b, g \in \mathbb{R}^d$, scalars $\alpha, \beta \geq 0$, and + +$$ +u = \operatorname* {argmin} _ {x \in \mathbb {R} ^ {d}} \left\langle g, x \right\rangle + \frac {\alpha}{2} \left\| x - a \right\| ^ {2} + \frac {\beta}{2} \left\| x - b \right\| ^ {2}. +$$ + +Then + +$$ +\langle g, x \rangle + \frac {\alpha}{2} \| x - a \| ^ {2} + \frac {\beta}{2} \| x - b \| ^ {2} \geq \langle g, u \rangle + \frac {\alpha}{2} \| u - a \| ^ {2} + \frac {\beta}{2} \| u - b \| ^ {2} + \frac {\alpha + \beta}{2} \| x - u \| ^ {2} \tag {34} +$$ + +for all $x \in \mathbb{R}^d$. + +Proof. The function + +$$ +\widehat {f} (x) := \langle g, x \rangle + \frac {\alpha}{2} \| x - a \| ^ {2} + \frac {\beta}{2} \| x - b \| ^ {2} +$$ + +is strongly convex with parameter $\alpha + \beta$.
From strong convexity and the optimality of $u$ (which implies $\nabla \widehat{f}(u) = 0$), we obtain + +$$ +\widehat {f} (x) \geq \widehat {f} (u) + \left\langle \nabla \widehat {f} (u), x - u \right\rangle + \frac {\alpha + \beta}{2} \| x - u \| ^ {2} = \widehat {f} (u) + \frac {\alpha + \beta}{2} \| x - u \| ^ {2} +$$ + +for all $x \in \mathbb{R}^d$. This inequality is equivalent to (34). + +□ + +We assume that the conditional expectation $\mathbb{E}_t[\cdot]$ is conditioned on the randomness from the first $t$ iterations. Also, let us define $D_{f}(x,y) \coloneqq f(x) - f(y) - \langle \nabla f(y), x - y\rangle$. + +Lemma E.3. Suppose that Assumptions 1.2, 1.1, 1.3 and 2.4 hold. For Algorithm 1, the following inequality holds: + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \left\| g ^ {t + 1} - \nabla f (y ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \frac {2 \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t}\right) - h _ {i} ^ {t} \right\| ^ {2} + \frac {4 \omega L _ {\max }}{n} \left(f \left(z ^ {t}\right) - f \left(y ^ {t + 1}\right) - \left\langle \nabla f \left(y ^ {t + 1}\right), z ^ {t} - y ^ {t + 1} \right\rangle\right). \tag {35} \\ \end{array} +$$ + +Proof.
Using the definition of $g^{t + 1}$, we have + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \left\| g ^ {t + 1} - \nabla f (y ^ {t + 1}) \right\| ^ {2} \right] \\ = \mathbb {E} _ {t} \left[ \left\| h ^ {t} + \frac {1}{n} \sum_ {i = 1} ^ {n} \mathcal {C} _ {i} ^ {D, y} \left(\nabla f _ {i} \left(y ^ {t + 1}\right) - h _ {i} ^ {t}\right) - \nabla f \left(y ^ {t + 1}\right) \right\| ^ {2} \right] \\ = \frac {1}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathbb {E} _ {t} \left[ \left\| \mathcal {C} _ {i} ^ {D, y} \left(\nabla f _ {i} \left(y ^ {t + 1}\right) - h _ {i} ^ {t}\right) - \left(\nabla f _ {i} \left(y ^ {t + 1}\right) - h _ {i} ^ {t}\right) \right\| ^ {2} \right], \\ \end{array} +$$ + +where we use the independence and the unbiasedness of the compressors (see Assumption 2.4). Using Definition 2.3 and Assumption 1.1, we have + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \left\| g ^ {t + 1} - \nabla f (y ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \frac {\omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (y ^ {t + 1}) - h _ {i} ^ {t} \right\| ^ {2} \\ \stackrel {(18)} {\leq} \frac {2 \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (z ^ {t}) - h _ {i} ^ {t} \right\| ^ {2} + \frac {2 \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (y ^ {t + 1}) - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} \\ \stackrel {\text{Lemma D.1}} {\leq} \frac {2 \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (z ^ {t}) - h _ {i} ^ {t} \right\| ^ {2} + \frac {2 \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} 2 L _ {i} \left(f _ {i} (z ^ {t}) - f _ {i} (y ^ {t + 1}) - \left\langle \nabla f _ {i} (y ^ {t + 1}), z ^ {t} - y ^ {t + 1} \right\rangle\right). \\ \end{array} +$$ + +Using that $L_{\max} = \max_{i\in [n]}L_i$, we obtain (35). + +□ + +Lemma E.4. Suppose that Assumptions 1.2, 1.1, 1.3 and 2.4 hold.
For Algorithm 1, the following inequality holds: + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f \left(z ^ {t + 1}\right) - f \left(x ^ {*}\right) \right] \\ \leq \left(1 - p \theta_ {t + 1}\right) \left(f (z ^ {t}) - f \left(x ^ {*}\right)\right) \\ + \frac {2 p \omega}{n \bar {L}} \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(z ^ {t}\right) \right\| ^ {2}\right) \\ + p \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| u ^ {t} - x ^ {*} \right\| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - x ^ {*} \right\| ^ {2} \right]\right) \\ \end{array} +$$ + +$$ +\begin{array}{l} + p \left(\frac {4 \omega L _ {\operatorname* {m a x}}}{n \bar {L}} + \theta_ {t + 1} - 1\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \\ + \frac {p L}{2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] - \frac {p \theta_ {t + 1} ^ {2} \bar {L}}{2} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right]. \\ \end{array} +$$ + +Proof. Using Assumption 1.2, we have + +$$ +f (x ^ {t + 1}) - f (x ^ {*}) \leq f (y ^ {t + 1}) - f (x ^ {*}) + \left\langle \nabla f (y ^ {t + 1}), x ^ {t + 1} - y ^ {t + 1} \right\rangle + \frac {L}{2} \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2}. 
+$$ + +Using the definition of $x^{t + 1}$, we obtain + +$$ +\begin{array}{l} f (x ^ {t + 1}) - f (x ^ {*}) \leq (1 - \theta_ {t + 1}) \left(f (y ^ {t + 1}) - f (x ^ {*}) + \left\langle \nabla f (y ^ {t + 1}), z ^ {t} - y ^ {t + 1} \right\rangle\right) \\ + \theta_ {t + 1} (f (y ^ {t + 1}) - f (x ^ {*}) + \left\langle \nabla f (y ^ {t + 1}), u ^ {t + 1} - y ^ {t + 1} \right\rangle) \\ + \frac {L}{2} \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \\ = \left(1 - \theta_ {t + 1}\right) \left(f \left(y ^ {t + 1}\right) - f \left(x ^ {*}\right) + \left\langle \nabla f \left(y ^ {t + 1}\right), z ^ {t} - y ^ {t + 1} \right\rangle\right) \tag {36} \\ + \theta_ {t + 1} (f (y ^ {t + 1}) - f (x ^ {*}) + \left\langle g ^ {t + 1}, u ^ {t + 1} - y ^ {t + 1} \right\rangle) \\ + \theta_ {t + 1} \left(\left\langle \nabla f (y ^ {t + 1}) - g ^ {t + 1}, u ^ {t + 1} - y ^ {t + 1} \right\rangle\right) \\ + \frac {L}{2} \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2}, \\ \end{array} +$$ + +where in the last equality we added and subtracted $g^{t + 1}$. Using the definition of $u^{t + 1}$ and Lemma E.2 with $x = x^*$, we have + +$$ +\begin{array}{l} \left\langle g ^ {t + 1}, u ^ {t + 1} - y ^ {t + 1} \right\rangle \leq \left\langle g ^ {t + 1}, x ^ {*} - y ^ {t + 1} \right\rangle + \frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| x ^ {*} - u ^ {t} \right\| ^ {2} + \frac {\mu}{2} \left\| x ^ {*} - y ^ {t + 1} \right\| ^ {2} \\ - \frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} - \frac {\mu}{2} \left\| u ^ {t} - y ^ {t + 1} \right\| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \left\| x ^ {*} - u ^ {t + 1} \right\| ^ {2}. \\ \end{array} +$$ + +Here we use the fact that $\Gamma_{t + 1} = \Gamma_t + \gamma_{t + 1}$.
Since $\left\| u^t - y^{t + 1} \right\|^2 \geq 0$, we have + +$$ +\begin{array}{l} \left\langle g ^ {t + 1}, u ^ {t + 1} - y ^ {t + 1} \right\rangle \leq \left\langle g ^ {t + 1}, x ^ {*} - y ^ {t + 1} \right\rangle + \frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| x ^ {*} - u ^ {t} \right\| ^ {2} + \frac {\mu}{2} \left\| x ^ {*} - y ^ {t + 1} \right\| ^ {2} \\ - \frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \left\| x ^ {*} - u ^ {t + 1} \right\| ^ {2}. \\ \end{array} +$$ + +Substituting this inequality into (36), we get + +$$ +\begin{array}{l} f (x ^ {t + 1}) - f (x ^ {*}) \\ \leq \left(1 - \theta_ {t + 1}\right) \left(f \left(y ^ {t + 1}\right) - f \left(x ^ {*}\right) + \left\langle \nabla f \left(y ^ {t + 1}\right), z ^ {t} - y ^ {t + 1} \right\rangle\right) \\ + \theta_ {t + 1} (f (y ^ {t + 1}) - f (x ^ {*}) + \langle g ^ {t + 1}, x ^ {*} - y ^ {t + 1} \rangle) \\ + \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \| x ^ {*} - u ^ {t} \| ^ {2} + \frac {\mu}{2} \| x ^ {*} - y ^ {t + 1} \| ^ {2} - \frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \| u ^ {t + 1} - u ^ {t} \| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \| x ^ {*} - u ^ {t + 1} \| ^ {2}\right) \\ + \theta_ {t + 1} \left(\left\langle \nabla f (y ^ {t + 1}) - g ^ {t + 1}, u ^ {t + 1} - y ^ {t + 1} \right\rangle\right) \\ + \frac {L}{2} \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2}.
\\ \end{array} +$$ + +Using $\mu$ -strong convexity, we have + +$$ +f (x ^ {*}) \geq f (y ^ {t + 1}) + \left\langle \nabla f (y ^ {t + 1}), x ^ {*} - y ^ {t + 1} \right\rangle + \frac {\mu}{2} \left\| x ^ {*} - y ^ {t + 1} \right\| ^ {2} +$$ + +and + +$$ +\begin{array}{l} f (x ^ {t + 1}) - f (x ^ {*}) \\ \leq \left(1 - \theta_ {t + 1}\right) \left(f \left(y ^ {t + 1}\right) - f \left(x ^ {*}\right) + \left\langle \nabla f \left(y ^ {t + 1}\right), z ^ {t} - y ^ {t + 1} \right\rangle\right) \\ \end{array} +$$ + +$$ +\begin{array}{l} + \theta_ {t + 1} \left(\left\langle g ^ {t + 1} - \nabla f (y ^ {t + 1}), x ^ {*} - y ^ {t + 1} \right\rangle\right) \\ + \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \| x ^ {*} - u ^ {t} \| ^ {2} - \frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \| u ^ {t + 1} - u ^ {t} \| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \| x ^ {*} - u ^ {t + 1} \| ^ {2}\right) \\ + \theta_ {t + 1} \left(\left\langle \nabla f (y ^ {t + 1}) - g ^ {t + 1}, u ^ {t + 1} - y ^ {t + 1} \right\rangle\right) \\ + \frac {L}{2} \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2}. \\ \end{array} +$$ + +Let us take the conditional expectation $\mathbb{E}_t[\cdot ]$ conditioned on the randomness from the first $t$ iterations: + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f (x ^ {t + 1}) - f (x ^ {*}) \right] \\ \leq \left(1 - \theta_ {t + 1}\right) \left(f \left(y ^ {t + 1}\right) - f \left(x ^ {*}\right) + \left\langle \nabla f \left(y ^ {t + 1}\right), z ^ {t} - y ^ {t + 1} \right\rangle\right) \\ + \theta_ {t + 1} \left(\left\langle \mathbb {E} _ {t} \left[ g ^ {t + 1} - \nabla f (y ^ {t + 1}) \right], x ^ {*} - y ^ {t + 1} \right\rangle\right) \\ \left. \right. 
+ \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| x ^ {*} - u ^ {t} \right\| ^ {2} - \frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[\left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right] - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[\left\| x ^ {*} - u ^ {t + 1} \right\| ^ {2} \right]\right) \\ + \theta_ {t + 1} \mathbb {E} _ {t} \left[ \left\langle \nabla f (y ^ {t + 1}) - g ^ {t + 1}, u ^ {t + 1} - y ^ {t + 1} \right\rangle \right] \\ + \frac {L}{2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] \\ = \left(1 - \theta_ {t + 1}\right) \left(f \left(y ^ {t + 1}\right) - f \left(x ^ {*}\right) + \left\langle \nabla f \left(y ^ {t + 1}\right), z ^ {t} - y ^ {t + 1} \right\rangle\right) \\ + \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \| x ^ {*} - u ^ {t} \| ^ {2} - \frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - u ^ {t} \| ^ {2} \right] - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \| x ^ {*} - u ^ {t + 1} \| ^ {2} \right]\right) \\ + \theta_ {t + 1} \mathbb {E} _ {t} \left[ \left\langle \nabla f (y ^ {t + 1}) - g ^ {t + 1}, u ^ {t + 1} - y ^ {t + 1} \right\rangle \right] \\ + \frac {L}{2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right], \tag {37} \\ \end{array} +$$ + +where we use that $\mathbb{E}_t\left[g^{t + 1}\right] = \nabla f(y^{t + 1})$. We can find $u^{t + 1}$ analytically and obtain + +$$ +u ^ {t + 1} = \frac {\bar {L} + \Gamma_ {t} \mu}{\bar {L} + \Gamma_ {t + 1} \mu} u ^ {t} + \frac {\mu \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} y ^ {t + 1} - \frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} g ^ {t + 1}.
+$$ + +Therefore, using that $\mathbb{E}_t\left[g^{t + 1}\right] = \nabla f(y^{t + 1})$ and that $u^t$ and $y^{t + 1}$ are conditionally nonrandom, we obtain + +$$ +\mathbb {E} _ {t} \left[ \left\langle \nabla f (y ^ {t + 1}) - g ^ {t + 1}, u ^ {t + 1} - y ^ {t + 1} \right\rangle \right] = \frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \mathbb {E} _ {t} \left[ \left\| g ^ {t + 1} - \nabla f (y ^ {t + 1}) \right\| ^ {2} \right]. \tag {38} +$$ + +Combining (35) from Lemma E.3 with (37) and (38), one can get + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f (x ^ {t + 1}) - f (x ^ {*}) \right] \\ \leq \left(1 - \theta_ {t + 1}\right) \left(f \left(y ^ {t + 1}\right) - f \left(x ^ {*}\right) + \left\langle \nabla f \left(y ^ {t + 1}\right), z ^ {t} - y ^ {t + 1} \right\rangle\right) \\ + \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \| x ^ {*} - u ^ {t} \| ^ {2} - \frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - u ^ {t} \| ^ {2} \right] - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \| x ^ {*} - u ^ {t + 1} \| ^ {2} \right]\right) \\ + \frac {\theta_ {t + 1} \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \left(\frac {2 \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t}\right) - h _ {i} ^ {t} \right\| ^ {2} + \frac {4 \omega L _ {\max}}{n} \left(f \left(z ^ {t}\right) - f \left(y ^ {t + 1}\right) - \left\langle \nabla f \left(y ^ {t + 1}\right), z ^ {t} - y ^ {t + 1} \right\rangle\right)\right) \\ + \frac {L}{2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right].
\\ \end{array} +$$ + +Using the notation $D_{f}(x,y)\coloneqq f(x) - f(y) - \langle \nabla f(y),x - y\rangle$ , we get + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f (x ^ {t + 1}) - f (x ^ {*}) \right] \\ \leq \left(1 - \theta_ {t + 1}\right) \left(f (z ^ {t}) - f (x ^ {*}) - D _ {f} \left(z ^ {t}, y ^ {t + 1}\right)\right) \\ + \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| x ^ {*} - u ^ {t} \right\| ^ {2} - \frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right] - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \left\| x ^ {*} - u ^ {t + 1} \right\| ^ {2} \right]\right) \\ + \frac {\theta_ {t + 1} \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \left(\frac {2 \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t}\right) - h _ {i} ^ {t} \right\| ^ {2} + \frac {4 \omega L _ {\max }}{n} D _ {f} \left(z ^ {t}, y ^ {t + 1}\right)\right) \\ + \frac {L}{2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] \\ = \left(1 - \theta_ {t + 1}\right) \left(f \left(z ^ {t}\right) - f \left(x ^ {*}\right)\right) \\ + \frac {\theta_ {t + 1} \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \frac {2 \omega}{n} \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t}\right) - h _ {i} ^ {t} \right\| ^ {2}\right) \\ + \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| x ^ {*} - u ^ {t} \right\| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \left\| x ^ {*} - u ^ {t + 1} \right\| ^ {2} \right]\right) \\ + \left(\frac {\theta_ {t + 1} \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \frac {4 \omega L _ {\max }}{n} + \theta_ {t + 1} - 1\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \\ + \frac {L}{2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] - \frac 
{\theta_ {t + 1} (\bar {L} + \Gamma_ {t} \mu)}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right]. \\ \end{array} +$$ + +In the last equality, we simply regrouped the terms. Using the definition of $z^{t + 1}$ , we get + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f \left(z ^ {t + 1}\right) - f \left(x ^ {*}\right) \right] = p \mathbb {E} _ {t} \left[ f \left(x ^ {t + 1}\right) - f \left(x ^ {*}\right) \right] + (1 - p) \left(f \left(z ^ {t}\right) - f \left(x ^ {*}\right)\right) \\ \leq p \left(1 - \theta_ {t + 1}\right) \left(f \left(z ^ {t}\right) - f \left(x ^ {*}\right)\right) \\ + p \frac {\theta_ {t + 1} \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \frac {2 \omega}{n} \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (z ^ {t}) - h _ {i} ^ {t} \right\| ^ {2}\right) \\ + p \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \| x ^ {*} - u ^ {t} \| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \| x ^ {*} - u ^ {t + 1} \| ^ {2} \right]\right) \\ + p \left(\frac {\theta_ {t + 1} \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \frac {4 \omega L _ {\max}}{n} + \theta_ {t + 1} - 1\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \\ + \frac {p L}{2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] - \frac {p \theta_ {t + 1} (\bar {L} + \Gamma_ {t} \mu)}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right] + (1 - p) (f (z ^ {t}) - f (x ^ {*})) \\ = \left(1 - p \theta_ {t + 1}\right) \left(f \left(z ^ {t}\right) - f \left(x ^ {*}\right)\right) \\ + p \frac {\theta_ {t + 1} \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \frac {2 \omega}{n} \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t}\right) - h _ {i} ^ {t} \right\| ^ {2}\right) \\ + p \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \| x ^ {*} - u ^ {t} 
\| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \| x ^ {*} - u ^ {t + 1} \| ^ {2} \right]\right) \\ + p \left(\frac {\theta_ {t + 1} \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \frac {4 \omega L _ {\max}}{n} + \theta_ {t + 1} - 1\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \\ + \frac {p L}{2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] - \frac {p \theta_ {t + 1} (\bar {L} + \Gamma_ {t} \mu)}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right]. \\ \end{array} +$$ + +In the last equality, we grouped the terms with $f(z^t) - f(x^*)$. In Algorithm 2, we choose the learning rates so that (see Lemma E.1) + +$$ +\bar {L} \theta_ {t + 1} \gamma_ {t + 1} \leq \bar {L} + \Gamma_ {t} \mu . +$$ + +Since $\Gamma_{t + 1}\geq \Gamma_t$ for all $t\in \mathbb{N}_0$, we have + +$$ +\frac {\theta_ {t + 1} \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \leq \frac {\theta_ {t + 1} \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t} \mu} \leq \frac {1}{\bar {L}} +$$ + +and + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f \left(z ^ {t + 1}\right) - f \left(x ^ {*}\right) \right] \\ \leq \left(1 - p \theta_ {t + 1}\right) \left(f \left(z ^ {t}\right) - f \left(x ^ {*}\right)\right) \\ + p \frac {2 \omega}{n \bar {L}} \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t}\right) - h _ {i} ^ {t} \right\| ^ {2}\right) \\ + p \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| x ^ {*} - u ^ {t} \right\| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \left\| x ^ {*} - u ^ {t + 1} \right\| ^ {2} \right]\right) \\ + p \left(\frac {4 \omega L _ {\max}}{n \bar {L}} + \theta_ {t + 1} - 1\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \\ + \frac {p L}{2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] - \frac {p \theta_ {t
+ 1} ^ {2} \bar {L}}{2} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right]. \\ \end{array} +$$ + +□ + +# E.3 Construction of the Lyapunov function + +In this section, we provide lemmas that will help us to construct a Lyapunov function. + +Lemma E.5. Suppose that Assumptions 1.2, 1.1, 1.3 and 2.4 hold, and let the parameter $\beta \leq \frac{1}{\omega + 1}$. Then, for Algorithm 1, the following inequality holds: + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(z ^ {t + 1}\right) \right\| ^ {2} \right] \quad (39) \\ \leq 8 p \left(1 + \frac {p}{\beta}\right) L _ {\max } D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) + 4 p \left(1 + \frac {p}{\beta}\right) \widehat {L} ^ {2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] + \left(1 - \frac {\beta}{2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(z ^ {t}\right) \right\| ^ {2}. \quad (40) \\ \end{array} +$$ + +Proof.
Using the definition of $h_i^{t+1}$ , we have + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left| \left| h _ {i} ^ {t + 1} - \nabla f _ {i} (z ^ {t + 1}) \right| \right| ^ {2} \right] \\ = \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} + \beta \mathcal {C} _ {i} ^ {D, z} \left(\nabla f _ {i} \left(z ^ {t + 1}\right) - h _ {i} ^ {t}\right) - \nabla f _ {i} \left(z ^ {t + 1}\right) \right\| ^ {2} \right] \\ = \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(z ^ {t + 1}\right) \right\| ^ {2} \right] \\ + \mathbb {E} _ {t} \left[ \frac {2 \beta}{n} \sum_ {i = 1} ^ {n} \left\langle h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t + 1}), \mathcal {C} _ {i} ^ {D, z} (\nabla f _ {i} (z ^ {t + 1}) - h _ {i} ^ {t}) \right\rangle + \frac {\beta^ {2}}{n} \sum_ {i = 1} ^ {n} \left\| \mathcal {C} _ {i} ^ {D, z} (\nabla f _ {i} (z ^ {t + 1}) - h _ {i} ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Note that $\mathbb{E}_{\mathcal{C}}\left[\mathcal{C}_i^{D,z}(\nabla f_i(z^{t + 1}) - h_i^t)\right] = \nabla f_i(z^{t + 1}) - h_i^t$ and + +$$ +\mathbb {E} _ {\mathcal {C}} \left[ \left\| \mathcal {C} _ {i} ^ {D, z} \left(\nabla f _ {i} \left(z ^ {t + 1}\right) - h _ {i} ^ {t}\right) \right\| ^ {2} \right] \leq (\omega + 1) \left\| \nabla f _ {i} \left(z ^ {t + 1}\right) - h _ {i} ^ {t} \right\| ^ {2}, +$$ + +where $\mathbb{E}_C[\cdot ]$ is a conditional expectation that is conditioned on $z^{t + 1}$ and $h_i^t$ . 
Therefore, + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(z ^ {t + 1}\right) \right\| ^ {2} \right] \\ + \mathbb {E} _ {t} \left[ \frac {2 \beta}{n} \sum_ {i = 1} ^ {n} \left\langle h _ {i} ^ {t} - \nabla f _ {i} \left(z ^ {t + 1}\right), \nabla f _ {i} \left(z ^ {t + 1}\right) - h _ {i} ^ {t} \right\rangle + \frac {\beta^ {2} (\omega + 1)}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t + 1}\right) - h _ {i} ^ {t} \right\| ^ {2} \right] \\ = \left(1 - 2 \beta + \beta^ {2} (\omega + 1)\right) \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t + 1}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Since $\beta \leq \frac{1}{\omega + 1}$, we have $1 - 2\beta + \beta^2(\omega + 1) \leq 1 - \beta$ and + +$$ +\mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (z ^ {t + 1}) \right\| ^ {2} \right] \leq (1 - \beta) \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t + 1}) \right\| ^ {2} \right].
$$

Next, we use the definition of $z^{t + 1}$ and obtain

$$
\begin{array}{l} \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(z ^ {t + 1}\right) \right\| ^ {2} \right] \\ \leq (1 - \beta) p \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] + (1 - \beta) (1 - p) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(z ^ {t}\right) \right\| ^ {2} \\ \stackrel {(17)} {\leq} \left(1 + \frac {2 p}{\beta}\right) (1 - \beta) p \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] + \left(1 + \frac {\beta}{2 p}\right) (1 - \beta) p \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(z ^ {t}\right) \right\| ^ {2} \\ + (1 - \beta) (1 - p) \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(z ^ {t}\right) \right\| ^ {2} \right] \\ = \left(1 + \frac {2 p}{\beta}\right) (1 - \beta) p \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] + (1 - \beta) \left(1 + \frac {\beta}{2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(z ^ {t}\right) \right\| ^ {2}.
\\ \end{array}
$$

Using $1 - \beta \leq 1$ , (18) and (20), we get

$$
\begin{array}{l} \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(z ^ {t + 1}\right) \right\| ^ {2} \right] \\ \leq 4 p \left(1 + \frac {p}{\beta}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t}\right) - \nabla f _ {i} \left(y ^ {t + 1}\right) \right\| ^ {2} + 4 p \left(1 + \frac {p}{\beta}\right) \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(y ^ {t + 1}\right) \right\| ^ {2} \right] \\ + \left(1 - \frac {\beta}{2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2}. \\ \end{array}
$$

From Assumptions 1.1 and 1.3 and Lemma D.1, we obtain

$$
\begin{array}{l} \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(z ^ {t + 1}\right) \right\| ^ {2} \right] \\ \leq 8 p \left(1 + \frac {p}{\beta}\right) L _ {\max } \left(f (z ^ {t}) - f (y ^ {t + 1}) - \left\langle \nabla f (y ^ {t + 1}), z ^ {t} - y ^ {t + 1} \right\rangle\right) + 4 p \left(1 + \frac {p}{\beta}\right) \widehat {L} ^ {2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] \\ + \left(1 - \frac {\beta}{2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} \\ = 8 p \left(1 + \frac {p}{\beta}\right) L _ {\max } D _ {f} (z ^ {t}, y ^ {t + 1}) + 4 p \left(1 + \frac {p}{\beta}\right) \widehat {L} ^ {2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] + \left(1 - \frac {\beta}{2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2}. \\ \end{array}
$$

□

Lemma E.6. Suppose that Assumptions 1.2, 1.1, 1.3 and 2.4 hold.
Then, for Algorithm 1, the following inequality holds: + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \left\| w ^ {t + 1} - u ^ {t + 1} \right\| ^ {2} \right] \leq \left(1 - \frac {\alpha}{2}\right) \left\| w ^ {t} - u ^ {t} \right\| ^ {2} + \frac {4}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ + \frac {2 \omega}{n} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t}\right) - h _ {i} ^ {t} \right\| ^ {2} + \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {4 \omega L _ {\max }}{n} + \frac {8 L}{\alpha}\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right). \tag {41} \\ \end{array} +$$ + +Proof. Using the definition of $w^{t + 1}$ and Definition 2.2, we get the following inequality + +$$ +\mathbb {E} _ {t} \left[ \left\| w ^ {t + 1} - u ^ {t + 1} \right\| ^ {2} \right] = \mathbb {E} _ {t} \left[ \left\| \mathcal {C} ^ {P} (u ^ {t + 1} - q ^ {t + 1}) - (u ^ {t + 1} - q ^ {t + 1}) \right\| ^ {2} \right] \leq (1 - \alpha) \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - q ^ {t + 1} \right\| ^ {2} \right]. +$$ + +We can find the analytical formulas for $u^{t + 1}$ and $q^{t + 1}$ , and obtain that + +$$ +u ^ {t + 1} = \frac {\bar {L} + \Gamma_ {t} \mu}{\bar {L} + \Gamma_ {t + 1} \mu} u ^ {t} + \frac {\mu \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} y ^ {t + 1} - \frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} g ^ {t + 1} +$$ + +and + +$$ +q ^ {t + 1} = \frac {\bar {L} + \Gamma_ {t} \mu}{\bar {L} + \Gamma_ {t + 1} \mu} w ^ {t} + \frac {\mu \gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} y ^ {t + 1} - \frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} k ^ {t}. 
+$$ + +Therefore, + +$$ +\mathbb {E} _ {t} \left[ \left\| w ^ {t + 1} - u ^ {t + 1} \right\| ^ {2} \right] +$$ + +$$ +\leq (1 - \alpha) \mathbb {E} _ {t} \left[ \left\| \frac {\bar {L} + \Gamma_ {t} \mu}{\bar {L} + \Gamma_ {t + 1} \mu} \left(w ^ {t} - u ^ {t}\right) - \frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \left(k ^ {t} - g ^ {t + 1}\right) \right\| ^ {2} \right] +$$ + +$$ +\stackrel {(2 3)} {=} (1 - \alpha) \left\| \frac {\bar {L} + \Gamma_ {t} \mu}{\bar {L} + \Gamma_ {t + 1} \mu} \left(w ^ {t} - u ^ {t}\right) - \frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu} \left(k ^ {t} - \nabla f \left(y ^ {t + 1}\right)\right) \right\| ^ {2} +$$ + +$$ ++ (1 - \alpha) \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \mathbb {E} _ {t} \left[ \left\| g ^ {t + 1} - \nabla f \left(y ^ {t + 1}\right) \right\| ^ {2} \right] +$$ + +$$ +\begin{array}{l} \stackrel {(1 7)} {\leq} \left(1 - \frac {\alpha}{2}\right) \left(\frac {\bar {L} + \Gamma_ {t} \mu}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \| w ^ {t} - u ^ {t} \| ^ {2} + \frac {2}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \| k ^ {t} - \nabla f (y ^ {t + 1}) \| ^ {2} \\ + (1 - \alpha) \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \mathbb {E} _ {t} \left[ \left\| g ^ {t + 1} - \nabla f \left(y ^ {t + 1}\right) \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \stackrel {(1 8)} {\leq} \left(1 - \frac {\alpha}{2}\right) \left(\frac {\bar {L} + \Gamma_ {t} \mu}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \| w ^ {t} - u ^ {t} \| ^ {2} \\ + \frac {4}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \| k ^ {t} - \nabla f (z ^ {t}) \| ^ {2} \\ + \frac {4}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left\| \nabla f (z ^ {t}) - \nabla f (y ^ {t + 1}) \right\| ^ {2} \\ + (1 - \alpha) \left(\frac {\gamma_ {t + 1}}{\bar {L} + 
\Gamma_ {t + 1} \mu}\right) ^ {2} \mathbb {E} _ {t} \left[ \left\| g ^ {t + 1} - \nabla f \left(y ^ {t + 1}\right) \right\| ^ {2} \right]. \\ \end{array} +$$ + +One can substitute (35) from Lemma E.3 to the last inequality and get + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \left| \left| w ^ {t + 1} - u ^ {t + 1} \right| \right| ^ {2} \right] \\ \leq \left(1 - \frac {\alpha}{2}\right) \left(\frac {\bar {L} + \Gamma_ {t} \mu}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \| w ^ {t} - u ^ {t} \| ^ {2} \\ + \frac {4}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ \end{array} +$$ + +$$ +\begin{array}{l} + \frac {4}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left\| \nabla f (z ^ {t}) - \nabla f (y ^ {t + 1}) \right\| ^ {2} \\ + (1 - \alpha) \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {2 \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (z ^ {t}) - h _ {i} ^ {t} \right\| ^ {2} + \frac {4 \omega L _ {\max}}{n} D _ {f} (z ^ {t}, y ^ {t + 1})\right). 
\\ \end{array}
$$

Using Lemma D.1 and $1 - \alpha \leq 1$ , we obtain

$$
\begin{array}{l} \mathbb {E} _ {t} \left[ \left\| w ^ {t + 1} - u ^ {t + 1} \right\| ^ {2} \right] \\ \leq \left(1 - \frac {\alpha}{2}\right) \left(\frac {\bar {L} + \Gamma_ {t} \mu}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \| w ^ {t} - u ^ {t} \| ^ {2} \\ + \frac {4}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ + \frac {2 \omega}{n} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (z ^ {t}) - h _ {i} ^ {t} \right\| ^ {2} + \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {4 \omega L _ {\max}}{n} + \frac {8 L}{\alpha}\right) D _ {f} (z ^ {t}, y ^ {t + 1}) \\ \leq \left(1 - \frac {\alpha}{2}\right) \left\| w ^ {t} - u ^ {t} \right\| ^ {2} \\ + \frac {4}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ + \frac {2 \omega}{n} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (z ^ {t}) - h _ {i} ^ {t} \right\| ^ {2} + \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {4 \omega L _ {\max}}{n} + \frac {8 L}{\alpha}\right) D _ {f} (z ^ {t}, y ^ {t + 1}), \\ \end{array}
$$

where we use that $\Gamma_{t + 1}\geq \Gamma_t$ for all $t\geq 0$ .

□

Lemma E.7. Suppose that Assumptions 1.2 and 1.3 hold.
Then, for Algorithm 1, the following inequality holds:

$$
\begin{array}{l} \mathbb {E} _ {t} \left[ \left\| k ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq 2 p \mathbb {E} _ {t} \left[ \left\| v ^ {t} - \nabla f \left(z ^ {t}\right) \right\| ^ {2} \right] + 8 p L D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) + 4 p L ^ {2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] + (1 - p) \left\| k ^ {t} - \nabla f \left(z ^ {t}\right) \right\| ^ {2}. \tag {42} \\ \end{array}
$$

Proof. Note that $k^{t + 1}$ and $z^{t + 1}$ are coupled by the same random variable $c^t$ . Therefore,

$$
\begin{array}{l} \mathbb {E} _ {t} \left[ \left\| k ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ = p \mathbb {E} _ {t} \left[ \left\| v ^ {t} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + (1 - p) \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ \leq 2 p \mathbb {E} _ {t} \left[ \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \right] + 4 p \left\| \nabla f (z ^ {t}) - \nabla f (y ^ {t + 1}) \right\| ^ {2} + 4 p \mathbb {E} _ {t} \left[ \left\| \nabla f (y ^ {t + 1}) - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \\ + (1 - p) \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ \leq 2 p \mathbb {E} _ {t} \left[ \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \right] + 8 p L D _ {f} (z ^ {t}, y ^ {t + 1}) + 4 p L ^ {2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] + (1 - p) \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2}, \\ \end{array}
$$

where we use Assumption 1.2 and Lemma D.1.

□

Lemma E.8. Suppose that Assumptions 1.2, 1.1 and 1.3 hold. Let the momentum $\tau \in (0,1]$ and the probability $p \in (0,1]$ .
Then, for Algorithm 1, the following inequality holds:

$$
\begin{array}{l} \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \left(1 - \frac {\tau}{2}\right) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} + \frac {2 \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} \\ + \left(4 p \left(1 + \frac {2 p}{\tau}\right) L + \frac {8 p \tau^ {2} \omega L _ {\max }}{n}\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) + \left(2 p \left(1 + \frac {2 p}{\tau}\right) L ^ {2} + \frac {4 p \tau^ {2} \omega \widehat {L} ^ {2}}{n}\right) \mathbb {E} _ {t} \left[ \| x ^ {t + 1} - y ^ {t + 1} \| ^ {2} \right]. \tag {43} \\ \end{array}
$$

Proof. Using the definition of $v^{t + 1}$ , we get

$$
\mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] = \mathbb {E} _ {t} \left[ \left\| (1 - \tau) v ^ {t} + \tau \left(h ^ {t} + \frac {1}{n} \sum_ {i = 1} ^ {n} \mathcal {C} _ {i} ^ {D, z} (\nabla f _ {i} (z ^ {t + 1}) - h _ {i} ^ {t})\right) - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right].
$$

Assumption 2.4, including the independence and the unbiasedness of the compressors, ensures that

$$
\begin{array}{l} \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \stackrel {(23)} {=} (1 - \tau) ^ {2} \mathbb {E} _ {t} \left[ \left\| v ^ {t} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \tau^ {2} \mathbb {E} _ {t} \left[ \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \mathcal {C} _ {i} ^ {D, z} (\nabla f _ {i} (z ^ {t + 1}) - h _ {i} ^ {t}) - (\nabla f (z ^ {t + 1}) - h ^ {t}) \right\| ^ {2} \right] \\ = (1 - \tau) ^ {2} \mathbb {E} _ {t} \left[ \left\| v ^ {t} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \frac {\tau^ {2}}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathbb {E} _ {t} \left[ \left\| \mathcal {C} _ {i} ^ {D, z} \left(\nabla f _ {i} (z ^ {t + 1}) - h _ {i} ^ {t}\right) - (\nabla f _ {i} (z ^ {t + 1}) - h _ {i} ^ {t}) \right\| ^ {2} \right] \\ \leq (1 - \tau) ^ {2} \mathbb {E} _ {t} \left[ \left\| v ^ {t} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \frac {\tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathbb {E} _ {t} \left[ \left\| \nabla f _ {i} (z ^ {t + 1}) - h _ {i} ^ {t} \right\| ^ {2} \right].
\\ \end{array}
$$

Using the definition of $z^{t + 1}$ , we have

$$
\begin{array}{l} \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq (1 - \tau) ^ {2} (1 - p) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} + (1 - \tau) ^ {2} p \mathbb {E} _ {t} \left[ \left\| v ^ {t} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \frac {(1 - p) \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} + \frac {p \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathbb {E} _ {t} \left[ \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \end{array}
$$

$$
\begin{array}{l} \stackrel {(17),(18)} {\leq} (1 - \tau) ^ {2} (1 - p) \| v ^ {t} - \nabla f (z ^ {t}) \| ^ {2} \\ + (1 - \tau) ^ {2} \left(1 + \frac {\tau}{2 p}\right) p \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} + (1 - \tau) ^ {2} \left(1 + \frac {2 p}{\tau}\right) p \mathbb {E} _ {t} \left[ \left\| \nabla f (z ^ {t}) - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \frac {(1 - p) \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} \\ + \frac {2 p \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathbb {E} _ {t} \left[ \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} \right] + \frac {2 p \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathbb {E} _ {t} \left[ \left\| \nabla f _ {i} (z ^ {t}) - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ = (1 - \tau) ^ {2} \left(1 + \frac {\tau}{2}\right) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} + (1 - \tau) ^ {2} \left(1 + \frac {2 p}{\tau}\right) p \mathbb {E} _ {t} \left[ \left\| \nabla f (z ^ {t}) - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \frac {(1 + p) \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} + \frac {2 p \tau^ {2} \omega}{n ^ {2}}
\sum_ {i = 1} ^ {n} \mathbb {E} _ {t} \left[ \left\| \nabla f _ {i} (z ^ {t}) - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Using $0 \leq 1 - \tau \leq 1, p \in (0,1]$ and (20), we get + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \left| \left| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right| \right| ^ {2} \right] \\ \leq \left(1 - \frac {\tau}{2}\right) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} + \left(1 + \frac {2 p}{\tau}\right) p \mathbb {E} _ {t} \left[ \left\| \nabla f (z ^ {t}) - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \frac {2 \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} + \frac {2 p \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathbb {E} _ {t} \left[ \left\| \nabla f _ {i} (z ^ {t}) - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \stackrel {(1 8)} {\leq} \left(1 - \frac {\tau}{2}\right) \| v ^ {t} - \nabla f (z ^ {t}) \| ^ {2} \\ + 2 p \left(1 + \frac {2 p}{\tau}\right) \left\| \nabla f (z ^ {t}) - \nabla f (y ^ {t + 1}) \right\| ^ {2} + 2 p \left(1 + \frac {2 p}{\tau}\right) \mathbb {E} _ {t} \left[ \left\| \nabla f (y ^ {t + 1}) - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} + \frac {2 \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} \\ + \frac {4 p \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (z ^ {t}) - \nabla f _ {i} (y ^ {t + 1}) \right\| ^ {2} + \frac {4 p \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathbb {E} _ {t} \left[ \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (y ^ {t + 1}) \right\| ^ {2} \right]. 
\\ \end{array}
$$

It remains to use Assumptions 1.2 and 1.1 with Lemma D.1 to obtain

$$
\begin{array}{l} \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \left(1 - \frac {\tau}{2}\right) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ + 4 p \left(1 + \frac {2 p}{\tau}\right) L D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) + 2 p \left(1 + \frac {2 p}{\tau}\right) L ^ {2} \mathbb {E} _ {t} \left[ \left\| y ^ {t + 1} - x ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} \\ + \frac {8 p \tau^ {2} \omega L _ {\max}}{n} D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) + \frac {4 p \tau^ {2} \omega \widehat {L} ^ {2}}{n} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] \\ = \left(1 - \frac {\tau}{2}\right) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} + \frac {2 \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} \\ + \left(4 p \left(1 + \frac {2 p}{\tau}\right) L + \frac {8 p \tau^ {2} \omega L _ {\max}}{n}\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) + \left(2 p \left(1 + \frac {2 p}{\tau}\right) L ^ {2} + \frac {4 p \tau^ {2} \omega \widehat {L} ^ {2}}{n}\right) \mathbb {E} _ {t} \left[\left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right]. \\ \end{array}
$$

□

# E.4 Main theorem

Theorem E.9. Suppose that Assumptions 1.2, 1.1, 1.3, 2.4 hold.
Let

$$
\bar {L} = 660508 \times \max \left\{\frac {L}{\alpha}, \frac {L p}{\alpha \tau}, \frac {\sqrt {L L _ {\max}} p \sqrt {\omega \tau}}{\alpha \beta \sqrt {n}}, \frac {\sqrt {L L _ {\max}} \sqrt {p} \sqrt {\omega \tau}}{\alpha \sqrt {\beta} \sqrt {n}}, \frac {L _ {\max} \omega p ^ {2}}{\beta^ {2} n}, \frac {L _ {\max} \omega}{n} \right\}, \tag {44}
$$

$\beta \leq \frac{1}{\omega + 1}$ , and $\theta_{\mathrm{min}} = \frac{1}{4} \min \left\{1, \frac{\alpha}{p}, \frac{\tau}{p}, \frac{\beta}{p}\right\}$ . For all $t \geq 0$ , Algorithm 1 guarantees that

$$
\begin{array}{l} \Gamma_ {t + 1} \left(\mathbb {E} [ f (z ^ {t + 1}) - f (x ^ {*}) ] + \kappa \mathbb {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| h _ {i} ^ {t + 1} - \nabla f _ {i} (z ^ {t + 1}) \| ^ {2} \right] + \nu_ {t + 1} \mathbb {E} \left[ \| w ^ {t + 1} - u ^ {t + 1} \| ^ {2} \right] \right. \\ \left. + \rho \mathbb {E} \left[ \left\| k ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \lambda \mathbb {E} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right]\right) + \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2} \mathbb {E} \left[ \left\| u ^ {t + 1} - x ^ {*} \right\| ^ {2} \right] \\ \leq \Gamma_ {t} \left(\mathbb {E} \left[ f (z ^ {t}) - f (x ^ {*}) \right] + \kappa \mathbb {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} \right] + \nu_ {t} \mathbb {E} \left[ \left\| w ^ {t} - u ^ {t} \right\| ^ {2} \right] \right. \\ \left. + \rho \mathbb {E} \left[ \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \right] + \lambda \mathbb {E} \left[ \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \right]\right) + \frac {\bar {L} + \Gamma_ {t} \mu}{2} \mathbb {E} \left[ \left\| u ^ {t} - x ^ {*} \right\| ^ {2} \right] \tag {45} \\ \end{array}
$$

for some $\kappa, \rho, \lambda, \nu_t \geq 0$ .

Proof.
We fix some constants $\kappa, \rho, \lambda, \nu_t \geq 0$ for all $t \geq 0$ that we define later. By combining Lemma E.4 with $\kappa \times (39)$ from Lemma E.5, $\nu_t \times (41)$ from Lemma E.6, $\rho \times (42)$ from Lemma E.7, and $\lambda \times (43)$ from Lemma E.8, we get the following inequality: + +$$ +\mathbb {E} _ {t} \left[ f (z ^ {t + 1}) - f (x ^ {*}) \right] + \kappa \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (z ^ {t + 1}) \right\| ^ {2} \right] + \nu_ {t} \mathbb {E} _ {t} \left[ \left\| w ^ {t + 1} - u ^ {t + 1} \right\| ^ {2} \right] +$$ + +$$ +\begin{array}{l} + \rho \mathbb {E} _ {t} \left[ \left\| k ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \lambda \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \left(1 - p \theta_ {t + 1}\right) \left(f \left(z ^ {t}\right) - f \left(x ^ {*}\right)\right) \\ + \frac {2 p \omega}{n \bar {L}} \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2}\right) \\ + p \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| u ^ {t} - x ^ {*} \right\| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - x ^ {*} \| ^ {2} \right]\right) \\ + p \left(\frac {4 \omega L _ {\max}}{n \bar {L}} + \theta_ {t + 1} - 1\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \\ + \frac {p \bar {L}}{2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] - \frac {p \theta_ {t + 1} ^ {2} \bar {L}}{2} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right] \\ + \kappa \left(8 p \left(1 + \frac {p}{\beta}\right) L _ {\max } D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) + 4 p \left(1 + \frac {p}{\beta}\right) \widehat {L} ^ {2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] + \left(1 - \frac 
{\beta}{2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(z ^ {t}\right) \right\| ^ {2}\right) \\ + \nu_ {t} \left(\left(1 - \frac {\alpha}{2}\right) \left\| w ^ {t} - u ^ {t} \right\| ^ {2} + \frac {4}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \right. \\ + \frac {2 \omega}{n} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(z ^ {t}\right) - h _ {i} ^ {t} \right\| ^ {2} + \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {4 \omega L _ {\max}}{n} + \frac {8 L}{\alpha}\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \\ + \rho \left(2 p \mathbb {E} _ {t} \left[ \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \right] + 8 p L D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) + 4 p L ^ {2} \mathbb {E} _ {t} \left[ \left\| x ^ {t + 1} - y ^ {t + 1} \right\| ^ {2} \right] + (1 - p) \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2}\right) \\ + \lambda \left(\left(1 - \frac {\tau}{2}\right) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} + \frac {2 \tau^ {2} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2} \right. \\ \left. + \left(4 p \left(1 + \frac {2 p}{\tau}\right) L + \frac {8 p \tau^ {2} \omega L _ {\max}}{n}\right) D _ {f} (z ^ {t}, y ^ {t + 1}) + \left(2 p \left(1 + \frac {2 p}{\tau}\right) L ^ {2} + \frac {4 p \tau^ {2} \omega \widehat {L} ^ {2}}{n}\right) \mathbb {E} _ {t} \left[ \| x ^ {t + 1} - y ^ {t + 1} \| ^ {2} \right]\right). 
\\ \end{array} +$$ + +We regroup the terms and obtain + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f (z ^ {t + 1}) - f (x ^ {*}) \right] + \kappa \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (z ^ {t + 1}) \right\| ^ {2} \right] + \nu_ {t} \mathbb {E} _ {t} \left[ \left\| w ^ {t + 1} - u ^ {t + 1} \right\| ^ {2} \right] \\ + \rho \mathbb {E} _ {t} \left[ \left\| k ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \lambda \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \left(1 - p \theta_ {t + 1}\right) \left(f \left(z ^ {t}\right) - f \left(x ^ {*}\right)\right) \\ + p \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \| u ^ {t} - x ^ {*} \| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - x ^ {*} \| ^ {2} \right]\right) \\ + p \left(\frac {4 \omega L _ {\max}}{n \bar {L}} + \kappa 8 \left(1 + \frac {p}{\beta}\right) L _ {\max} + \nu_ {t} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {4 \omega L _ {\max}}{p n} + \frac {8 L}{p \alpha}\right) + \rho 8 L + \right. \\ + \lambda \left(4 \left(1 + \frac {2 p}{\tau}\right) L + \frac {8 \tau^ {2} \omega L _ {\max }}{n}\right) + \theta_ {t + 1} - 1\left. \right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \\ + \left(\frac {p L}{2} + \kappa 4 p \left(1 + \frac {p}{\beta}\right) \widehat {L} ^ {2} + \rho 4 p L ^ {2} + \lambda \left(2 p \left(1 + \frac {2 p}{\tau}\right) L ^ {2} + \frac {4 p \tau^ {2} \omega \widehat {L} ^ {2}}{n}\right)\right) \mathbb {E} _ {t} \left[ \| x ^ {t + 1} - y ^ {t + 1} \| ^ {2} \right] \\ - \frac {p \theta_ {t + 1} ^ {2} \bar {L}}{2} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right] \\ + \nu_ {t} \left(1 - \frac {\alpha}{2}\right) \left\| w ^ {t} - u ^ {t} \right\| ^ {2} \\ \left. 
+ \left(\nu_ {t} \frac {4}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} + \rho (1 - p)\right) \| k ^ {t} - \nabla f (z ^ {t}) \| ^ {2} \right. \\ \end{array} +$$ + +$$ +\begin{array}{l} + \left(\rho 2 p + \lambda \left(1 - \frac {\tau}{2}\right)\right) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ + \left(\frac {2 p \omega}{n \bar {L}} + \nu_ {t} \frac {2 \omega}{n} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} + \lambda \frac {2 \tau^ {2} \omega}{n} + \kappa \left(1 - \frac {\beta}{2}\right)\right) \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2}\right). \\ \end{array} +$$ + +Using $x^{t + 1} - y^{t + 1} = \theta_{t + 1}\left(u^{t + 1} - w^t\right)$ , we get (we mark the changes with color) + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f (z ^ {t + 1}) - f (x ^ {*}) \right] + \kappa \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (z ^ {t + 1}) \right\| ^ {2} \right] + \nu_ {t} \mathbb {E} _ {t} \left[ \left\| w ^ {t + 1} - u ^ {t + 1} \right\| ^ {2} \right] \\ + \rho \mathbb {E} _ {t} \left[ \left\| k ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \lambda \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \left(1 - p \theta_ {t + 1}\right) \left(f \left(z ^ {t}\right) - f \left(x ^ {*}\right)\right) \\ + p \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| u ^ {t} - x ^ {*} \right\| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - x ^ {*} \right\| ^ {2} \right]\right) \\ + p \left(\frac {4 \omega L _ {\max}}{n \bar {L}} + \kappa 8 \left(1 + \frac {p}{\beta}\right) L _ {\max} + \nu_ {t} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {4 \omega L _ {\max}}{p n} + \frac {8 L}{p 
\alpha}\right) + \rho 8 L + \right. \\ + \lambda \left(4 \left(1 + \frac {2 p}{\tau}\right) L + \frac {8 \tau^ {2} \omega L _ {\max }}{n}\right) + \theta_ {t + 1} - 1\left. \right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \\ + \theta_ {t + 1} ^ {2} \left(\frac {p L}{2} + \kappa 4 p \left(1 + \frac {p}{\beta}\right) \widehat {L} ^ {2} + \rho 4 p L ^ {2} + \lambda \left(2 p \left(1 + \frac {2 p}{\tau}\right) L ^ {2} + \frac {4 p \tau^ {2} \omega \widehat {L} ^ {2}}{n}\right)\right) \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - w ^ {t} \| ^ {2} \right] \\ - \frac {p \theta_ {t + 1} ^ {2} \bar {L}}{2} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right] \\ + \nu_ {t} \left(1 - \frac {\alpha}{2}\right) \left\| w ^ {t} - u ^ {t} \right\| ^ {2} \\ \left. + \left(\nu_ {t} \frac {4}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} + \rho (1 - p)\right) \| k ^ {t} - \nabla f (z ^ {t}) \| ^ {2} \right. \\ + \left(\rho 2 p + \lambda \left(1 - \frac {\tau}{2}\right)\right) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ \left. \right. + \left(\frac {2 p \omega}{n \bar {L}} + \nu_ {t} \frac {2 \omega}{n} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} + \lambda \frac {2 \tau^ {2} \omega}{n} + \kappa \left(1 - \frac {\beta}{2}\right)\right)\left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2}\right). 
\\ \end{array} +$$ + +The inequality (17) implies $\left\| u^{t + 1} - w^t\right\|^2\leq 2\left\| u^{t + 1} - u^t\right\|^2 +2\left\| u^t -w^t\right\|^2$ and + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f (z ^ {t + 1}) - f (x ^ {*}) \right] + \kappa \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (z ^ {t + 1}) \right\| ^ {2} \right] + \nu_ {t} \mathbb {E} _ {t} \left[ \left\| w ^ {t + 1} - u ^ {t + 1} \right\| ^ {2} \right] \\ + \rho \mathbb {E} _ {t} \left[ \left\| k ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \lambda \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \left(1 - p \theta_ {t + 1}\right) \left(f \left(z ^ {t}\right) - f \left(x ^ {*}\right)\right) \\ + p \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| u ^ {t} - x ^ {*} \right\| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - x ^ {*} \right\| ^ {2} \right]\right) \\ + p \left(\frac {4 \omega L _ {\max}}{n \bar {L}} + \kappa 8 \left(1 + \frac {p}{\beta}\right) L _ {\max} + \nu_ {t} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {4 \omega L _ {\max}}{p n} + \frac {8 L}{p \alpha}\right) + \rho 8 L + \right. \\ + \lambda \left(4 \left(1 + \frac {2 p}{\tau}\right) L + \frac {8 \tau^ {2} \omega L _ {\max }}{n}\right) + \theta_ {t + 1} - 1\left. 
\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \\ + 2 \theta_ {t + 1} ^ {2} \left(\frac {p L}{2} + \kappa 4 p \left(1 + \frac {p}{\beta}\right) \widehat {L} ^ {2} + \rho 4 p L ^ {2} + \lambda \left(2 p \left(1 + \frac {2 p}{\tau}\right) L ^ {2} + \frac {4 p \tau^ {2} \omega \widehat {L} ^ {2}}{n}\right)\right) \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - u ^ {t} \| ^ {2} \right] \\ - \frac {p \theta_ {t + 1} ^ {2} \bar {L}}{2} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \left. + \left(2 \theta_ {t + 1} ^ {2} \left(\frac {p L}{2} + \kappa 4 p \left(1 + \frac {p}{\beta}\right) \widehat {L} ^ {2} + \rho 4 p L ^ {2} + \lambda \left(2 p \left(1 + \frac {2 p}{\tau}\right) L ^ {2} + \frac {4 p \tau^ {2} \omega \widehat {L} ^ {2}}{n}\right)\right) + \nu_ {t} \left(1 - \frac {\alpha}{2}\right)\right) \| w ^ {t} - u ^ {t} \| ^ {2} \right. \\ \left. + \left(\nu_ {t} \frac {4}{\alpha} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} + \rho (1 - p)\right) \| k ^ {t} - \nabla f (z ^ {t}) \| ^ {2} \right. \\ + \left(\rho 2 p + \lambda \left(1 - \frac {\tau}{2}\right)\right) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ \left. \right. + \left(\frac {2 p \omega}{n \bar {L}} + \nu_ {t} \frac {2 \omega}{n} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} + \lambda \frac {2 \tau^ {2} \omega}{n} + \kappa \left(1 - \frac {\beta}{2}\right)\right)\left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2}\right). 
\\ \end{array}
$$

Now, we want to find appropriate $\kappa$ , $\rho$ , $\lambda$ , and $\nu_{t}$ such that

$$
\begin{array}{l} 2 \theta_{t+1}^{2} \left(\frac{p L}{2} + \kappa 4 p \left(1 + \frac{p}{\beta}\right) \widehat{L}^{2} + \rho 4 p L^{2} + \lambda \left(2 p \left(1 + \frac{2 p}{\tau}\right) L^{2} + \frac{4 p \tau^{2} \omega \widehat{L}^{2}}{n}\right)\right) + \nu_{t} \left(1 - \frac{\alpha}{2}\right) \leq \nu_{t} \left(1 - \frac{\alpha}{4}\right), \\ \nu_{t} \frac{4}{\alpha} \left(\frac{\gamma_{t+1}}{\bar{L} + \Gamma_{t+1} \mu}\right)^{2} + \rho (1 - p) \leq \rho \left(1 - \frac{p}{2}\right), \\ \rho 2 p + \lambda \left(1 - \frac{\tau}{2}\right) \leq \lambda \left(1 - \frac{\tau}{4}\right), \\ \frac{2 p \omega}{n \bar{L}} + \nu_{t} \frac{2 \omega}{n} \left(\frac{\gamma_{t+1}}{\bar{L} + \Gamma_{t+1} \mu}\right)^{2} + \lambda \frac{2 \tau^{2} \omega}{n} + \kappa \left(1 - \frac{\beta}{2}\right) \leq \kappa \left(1 - \frac{\beta}{4}\right). \tag{46} \\ \end{array}
$$

We analyze the inequalities (46) in the following lemma:

Lemma E.10 (First Symbolically Computed). Assume that for the parameter $\bar{L}$ , the inequalities from Sections I and J hold. Then, for all $t \geq 0$ , there exist $\rho$ defined in (90), $\kappa$ in (88), $\lambda$ in (82), and $\nu_t$ in (83) such that (46) holds.

We prove the lemma separately in Section G.
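To see that the system (46) is satisfiable, it can help to check it numerically. The following sketch is only an illustration: all numeric values below are assumptions chosen for this check (the actual $\rho$ , $\kappa$ , $\lambda$ , and $\nu_t$ used in the proof are defined in (90), (88), (82), and (83)). It resolves the inequalities in the order suggested by their structure: $\rho$ from the second, $\lambda$ from the third, $\kappa$ from the fourth, and then verifies the first for a small $\theta_{t+1}$ .

```python
# Numeric sanity check of the four contraction inequalities in (46).
# All parameter values below are illustrative assumptions for this sketch;
# the actual rho, kappa, lambda_, nu are defined in (90), (88), (82), (83).

# problem/algorithm constants (assumed)
n, omega = 10, 1.0
p, alpha, tau, beta = 0.5, 0.5, 0.5, 0.5
L, L_max, L_hat, L_bar = 1.0, 2.0, 1.5, 100.0
s = 0.01          # stands for gamma_{t+1} / (L_bar + Gamma_{t+1} * mu)
nu = 1.0          # nu_t
theta = 0.001     # theta_{t+1}, a small step-size parameter

# choose the remaining weights with a factor-2 slack, resolving (46) in order:
# rho from the 2nd inequality, lambda_ from the 3rd, kappa from the 4th
rho = 2 * (8 * nu * s**2) / (alpha * p)
lambda_ = 2 * (8 * rho * p) / tau
kappa = 2 * (4 / beta) * (2 * p * omega / (n * L_bar)
                          + nu * (2 * omega / n) * s**2
                          + lambda_ * 2 * tau**2 * omega / n)

# left-hand side of the 1st inequality, which scales with theta^2
A1 = 2 * theta**2 * (p * L / 2 + kappa * 4 * p * (1 + p / beta) * L_hat**2
                     + rho * 4 * p * L**2
                     + lambda_ * (2 * p * (1 + 2 * p / tau) * L**2
                                  + 4 * p * tau**2 * omega * L_hat**2 / n))

ok = [
    A1 + nu * (1 - alpha / 2) <= nu * (1 - alpha / 4),
    nu * (4 / alpha) * s**2 + rho * (1 - p) <= rho * (1 - p / 2),
    rho * 2 * p + lambda_ * (1 - tau / 2) <= lambda_ * (1 - tau / 4),
    2 * p * omega / (n * L_bar) + nu * (2 * omega / n) * s**2
    + lambda_ * 2 * tau**2 * omega / n + kappa * (1 - beta / 2)
    <= kappa * (1 - beta / 4),
]
print(all(ok))
```

The factor-2 slack makes the second, third, and fourth inequalities hold strictly; the first then holds because its extra term scales with $\theta_{t+1}^2$ .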
Using the lemma, we have + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f (z ^ {t + 1}) - f (x ^ {*}) \right] + \kappa \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (z ^ {t + 1}) \right\| ^ {2} \right] + \nu_ {t} \mathbb {E} _ {t} \left[ \left\| w ^ {t + 1} - u ^ {t + 1} \right\| ^ {2} \right] \\ + \rho \mathbb {E} _ {t} \left[ \left\| k ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \lambda \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \left(1 - p \theta_ {t + 1}\right) \left(f (z ^ {t}) - f \left(x ^ {*}\right)\right) \\ + p \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \left\| u ^ {t} - x ^ {*} \right\| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - x ^ {*} \right\| ^ {2} \right]\right) \\ + p \left(\frac {4 \omega L _ {\max}}{n \bar {L}} + \kappa 8 \left(1 + \frac {p}{\beta}\right) L _ {\max} + \nu_ {t} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {4 \omega L _ {\max}}{p n} + \frac {8 L}{p \alpha}\right) + \rho 8 L + \right. \\ + \lambda \left(4 \left(1 + \frac {2 p}{\tau}\right) L + \frac {8 \tau^ {2} \omega L _ {\max}}{n}\right) + \theta_ {t + 1} - 1\left. 
\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \\ + 2 \theta_ {t + 1} ^ {2} \left(\frac {p L}{2} + \kappa 4 p \left(1 + \frac {p}{\beta}\right) \widehat {L} ^ {2} + \rho 4 p L ^ {2} + \lambda \left(2 p \left(1 + \frac {2 p}{\tau}\right) L ^ {2} + \frac {4 p \tau^ {2} \omega \widehat {L} ^ {2}}{n}\right)\right) \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - u ^ {t} \| ^ {2} \right] \\ - \frac {p \theta_ {t + 1} ^ {2} \bar {L}}{2} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right] \\ + \nu_ {t} \left(1 - \frac {\alpha}{4}\right) \left\| w ^ {t} - u ^ {t} \right\| ^ {2} \\ + \rho \left(1 - \frac {p}{2}\right) \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ + \lambda \left(1 - \frac {\tau}{4}\right) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ + \kappa \left(1 - \frac {\beta}{4}\right) \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(z ^ {t}\right) \right\| ^ {2}\right). \\ \end{array} +$$ + +Let us separately analyze the terms w.r.t. $D_{f}(z^{t},y^{t + 1})$ and $\mathbb{E}_t\left[\| u^{t + 1} - u^t\| ^2\right]$ + +Lemma E.11 (Second Symbolically Computed). Consider the parameters $\rho$ , $\kappa$ , $\lambda$ , and $\nu_{t}$ from Lemma E.10. Assume that for the parameter $\bar{L}$ , the inequalities from Sections $L$ and $N$ hold, and the step size $\theta_{t+1} \leq 1/4$ for all $t \geq 0$ . Then, for all $t \geq 0$ , the following inequalities are satisfied: + +$$ +\begin{array}{l} p \left(\frac {4 \omega L _ {\max}}{n \bar {L}} + \kappa 8 \left(1 + \frac {p}{\beta}\right) L _ {\max} + \nu_ {t} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {4 \omega L _ {\max}}{p n} + \frac {8 L}{p \alpha}\right) + \rho 8 L + \right. \tag {47} \\ + \lambda \left(4 \left(1 + \frac {2 p}{\tau}\right) L + \frac {8 \tau^ {2} \omega L _ {\max}}{n}\right) + \theta_ {t + 1} - 1\left. 
\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \leq 0 \\ \end{array} +$$ + +and + +$$ +\begin{array}{l} 2 \theta_ {t + 1} ^ {2} \left(\frac {p L}{2} + \kappa 4 p \left(1 + \frac {p}{\beta}\right) \widehat {L} ^ {2} + \rho 4 p L ^ {2} + \lambda \left(2 p \left(1 + \frac {2 p}{\tau}\right) L ^ {2} + \frac {4 p \tau^ {2} \omega \widehat {L} ^ {2}}{n}\right)\right) \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - u ^ {t} \| ^ {2} \right] \\ - \frac {p \theta_ {t + 1} ^ {2} \bar {L}}{2} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right] \leq 0. \tag {48} \\ \end{array} +$$ + +We prove Lemma E.11 in Section H. Using the lemma, we get + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f \left(z ^ {t + 1}\right) - f \left(x ^ {*}\right) \right] + \kappa \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left| \left| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(z ^ {t + 1}\right) \right| \right| ^ {2} \right] + \nu_ {t} \mathbb {E} _ {t} \left[ \left| \left| w ^ {t + 1} - u ^ {t + 1} \right| \right| ^ {2} \right] \\ + \rho \mathbb {E} _ {t} \left[ \left\| k ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \lambda \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \left(1 - p \theta_ {t + 1}\right) \left(f \left(z ^ {t}\right) - f \left(x ^ {*}\right)\right) \\ + p \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \| u ^ {t} - x ^ {*} \| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - x ^ {*} \| ^ {2} \right]\right) \\ + \nu_ {t} \left(1 - \frac {\alpha}{4}\right) \left\| w ^ {t} - u ^ {t} \right\| ^ {2} \\ + \rho \left(1 - \frac {p}{2}\right) \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ + \lambda \left(1 - \frac {\tau}{4}\right) \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} \\ + \kappa \left(1 - \frac {\beta}{4}\right) \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f 
_ {i} (z ^ {t}) \right\| ^ {2}\right). \\ \end{array} +$$ + +Note that $0 \leq \nu_{t+1} \leq \nu_t$ for all $t \geq 0$ , since $\theta_{t+1}$ is a non-increasing sequence (see Lemma E.1 and the definition of $\nu_t$ in (83)). Using + +$$ +\theta_ {t + 1} \leq \frac {1}{4} \min \left\{1, \frac {\alpha}{p}, \frac {\tau}{p}, \frac {\beta}{p} \right\} +$$ + +for all $t\geq 0$ , we obtain + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ f (z ^ {t + 1}) - f (x ^ {*}) \right] + \kappa \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (z ^ {t + 1}) \right\| ^ {2} \right] + \nu_ {t + 1} \mathbb {E} _ {t} \left[ \left\| w ^ {t + 1} - u ^ {t + 1} \right\| ^ {2} \right] \\ + \rho \mathbb {E} _ {t} \left[ \left\| k ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \lambda \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \left(1 - p \theta_ {t + 1}\right) \left(f (z ^ {t}) - f (x ^ {*}) + \kappa \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \right\| ^ {2}\right) + \nu_ {t} \| w ^ {t} - u ^ {t} \| ^ {2} \right. \\ \left. + \rho \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} + \lambda \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2}\right) \\ \end{array} +$$ + +$$ +\left. + p \theta_ {t + 1} \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2 \gamma_ {t + 1}} \| u ^ {t} - x ^ {*} \| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2 \gamma_ {t + 1}} \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - x ^ {*} \| ^ {2} \right]\right)\right). 
+$$ + +Let us multiply the inequality by $\frac{\gamma_{t + 1}}{p\theta_{t + 1}}$ + +$$ +\begin{array}{l} \frac {\gamma_ {t + 1}}{p \theta_ {t + 1}} \left(\mathbb {E} _ {t} [ f (z ^ {t + 1}) - f (x ^ {*}) ] + \kappa \mathbb {E} _ {t} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| h _ {i} ^ {t + 1} - \nabla f _ {i} (z ^ {t + 1}) \| ^ {2} \right] + \nu_ {t + 1} \mathbb {E} _ {t} [ \| w ^ {t + 1} - u ^ {t + 1} \| ^ {2} ] \right. \\ \left. + \rho \mathbb {E} _ {t} \left[ \left\| k ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right] + \lambda \mathbb {E} _ {t} \left[ \left\| v ^ {t + 1} - \nabla f (z ^ {t + 1}) \right\| ^ {2} \right]\right) \\ \leq \left(\frac {\gamma_ {t + 1}}{p \theta_ {t + 1}} - \gamma_ {t + 1}\right) \left(f (z ^ {t}) - f (x ^ {*}) + \kappa \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \| h _ {i} ^ {t} - \nabla f _ {i} (z ^ {t}) \| ^ {2}\right) + \nu_ {t} \| w ^ {t} - u ^ {t} \| ^ {2} \right. \\ \left. + \rho \left\| k ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2} + \lambda \left\| v ^ {t} - \nabla f (z ^ {t}) \right\| ^ {2}\right) \\ \left. \right. + \left(\frac {\bar {L} + \Gamma_ {t} \mu}{2} \| u ^ {t} - x ^ {*} \| ^ {2} - \frac {\bar {L} + \Gamma_ {t + 1} \mu}{2} \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - x ^ {*} \| ^ {2} \right]\right). \\ \end{array} +$$ + +It is left to use that $\Gamma_{t + 1} \coloneqq \Gamma_t + \gamma_{t + 1}$ and $\gamma_{t + 1} = p\theta_{t + 1}\Gamma_{t + 1}$ (see Lemma E.1) and take the full expectation to obtain (45). + +In the proof we require that, for the parameter $\bar{L}$ , the inequalities from Sections I, J, L and N hold. In Section O, we show that these inequalities follow from (44). + +# E.5 Strongly-convex case + +Theorem E.12. Suppose that Assumptions 1.2, 1.1, 1.3, 2.4 hold. 
Let

$$
\bar{L} = 660508 \times \max \left\{\frac{L}{\alpha}, \frac{L p}{\alpha \tau}, \frac{\sqrt{L L_{\max}}\, p \sqrt{\omega \tau}}{\alpha \beta \sqrt{n}}, \frac{\sqrt{L L_{\max}} \sqrt{p} \sqrt{\omega \tau}}{\alpha \sqrt{\beta} \sqrt{n}}, \frac{L_{\max} \omega p^{2}}{\beta^{2} n}, \frac{L_{\max} \omega}{n} \right\}, \tag{49}
$$

$\beta = \frac{1}{\omega + 1}$ , $\theta_{\mathrm{min}} = \frac{1}{4}\min \left\{1,\frac{\alpha}{p},\frac{\tau}{p},\frac{\beta}{p}\right\}$ , $h_i^0 = \nabla f_i(z^0)$ for all $i\in [n]$ , $w^0 = u^0$ , $k^0 = \nabla f(z^0)$ , $v^{0} = \nabla f(z^{0})$ , and $\Gamma_0\geq 1$ . Then Algorithm 1 guarantees that

$$
\mathbb{E}\left[f\left(z^{T}\right) - f\left(x^{*}\right)\right] + \frac{\mu}{2} \mathbb{E}\left[\left\|u^{T} - x^{*}\right\|^{2}\right] \leq 2 \exp\left(-\frac{T}{Q}\right)\left(\left(f\left(z^{0}\right) - f\left(x^{*}\right)\right) + \left(\frac{\bar{L}}{\Gamma_{0}} + \mu\right)\left\|u^{0} - x^{*}\right\|^{2}\right), \tag{50}
$$

where

$$
\begin{array}{l} Q := 2 \times \sqrt{660508} \times \\ \max \left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L}{\alpha \tau \mu}}, \sqrt{\frac{\sqrt{L L_{\max}} (\omega + 1) \sqrt{\omega \tau}}{\alpha \sqrt{n} \mu}}, \sqrt{\frac{\sqrt{L L_{\max}} \sqrt{\omega + 1} \sqrt{\omega \tau}}{\alpha \sqrt{p} \sqrt{n} \mu}}, \sqrt{\frac{L_{\max} \omega (\omega + 1)^{2} p}{n \mu}}, \sqrt{\frac{L_{\max} \omega}{n p \mu}}, \right. \\ \left. \frac{1}{\alpha}, \frac{1}{\tau}, (\omega + 1), \frac{1}{p} \right\}. \\ \end{array}
$$

Remark E.13. Up to a constant factor of 2, one can see that the optimal $\Gamma_0$ in (50) from Theorem E.12 equals $\Gamma_0 = \bar{L} /\mu$ .
But the dependence on $\Gamma_0$ is under the logarithm, so if the dependence on the logarithm is not critical, one can take any $\Gamma_0\geq 1$ . + +Proof. All conditions from Theorem E.9 are satisfied. Let us sum the inequality (45) for $t = 0$ to $T - 1$ : + +$$ +\begin{array}{l} \Gamma_ {T} \left(\mathbb {E} \left[ f (z ^ {T}) - f (x ^ {*}) \right] + \kappa \mathbb {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| h _ {i} ^ {T} - \nabla f _ {i} (z ^ {T}) \| ^ {2} \right] + \nu_ {T} \mathbb {E} \left[ \| w ^ {T} - u ^ {T} \| ^ {2} \right] \right. \\ \left. + \rho \mathbb {E} \left[ \left\| k ^ {T} - \nabla f (z ^ {T}) \right\| ^ {2} \right] + \lambda \mathbb {E} \left[ \left\| v ^ {T} - \nabla f (z ^ {T}) \right\| ^ {2} \right]\right) \\ \end{array} +$$ + +$$ +\begin{array}{l} \leq \Gamma_ {0} \left(\mathbb {E} \left[ f (z ^ {0}) - f (x ^ {*}) \right] + \kappa \mathbb {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (z ^ {0}) \right\| ^ {2} \right] + \nu_ {0} \mathbb {E} \left[ \| w ^ {0} - u ^ {0} \| ^ {2} \right] \right. \\ \left. + \rho \mathbb {E} \left[ \left\| k ^ {0} - \nabla f (z ^ {0}) \right\| ^ {2} \right] + \lambda \mathbb {E} \left[ \left\| v ^ {0} - \nabla f (z ^ {0}) \right\| ^ {2} \right]\right) \\ \left. \right. + \left(\frac {\bar {L} + \Gamma_ {0} \mu}{2} \mathbb {E} \left[\left\| u ^ {0} - x ^ {*} \right\| ^ {2} \right] - \frac {\bar {L} + \Gamma_ {T} \mu}{2} \mathbb {E} \left[\left\| u ^ {T} - x ^ {*} \right\| ^ {2} \right]\right). \\ \end{array} +$$ + +Using the initial conditions and the non-negativity of the terms, we get + +$$ +\Gamma_ {T} \mathbb {E} \left[ f (z ^ {T}) - f (x ^ {*}) \right] + \frac {\Gamma_ {T} \mu}{2} \mathbb {E} \left[ \| u ^ {T} - x ^ {*} \| ^ {2} \right] \leq \Gamma_ {0} \left(f (z ^ {0}) - f (x ^ {*})\right) + \frac {\bar {L} + \Gamma_ {0} \mu}{2} \| u ^ {0} - x ^ {*} \| ^ {2}. 
$$

Using Lemma E.1, we have

$$
\begin{array}{l} \mathbb{E}\left[f(z^{T}) - f(x^{*})\right] + \frac{\mu}{2} \mathbb{E}\left[\left\|u^{T} - x^{*}\right\|^{2}\right] \\ \leq \exp\left(-T \min\left\{\sqrt{\frac{p \mu}{4 \bar{L}}}, p \theta_{\min} \right\}\right)\left(2\left(f(z^{0}) - f(x^{*})\right) + \left(\frac{\bar{L}}{\Gamma_{0}} + \mu\right)\|u^{0} - x^{*}\|^{2}\right). \\ \end{array}
$$

It is left to use the definitions of $\bar{L}$ and $\theta_{\min}$ .

![](images/22f9d583f97c1436f36e3b801a6fbc1a379e76720170314c2c42bb625d83a451.jpg)

# E.6 General convex case

Let us use Theorem E.9 to analyze the general convex case ( $\mu$ can possibly be equal to zero):

Theorem E.14. Suppose that Assumptions 1.2, 1.1, 1.3, 2.4 hold. Let

$$
\bar{L} = 660508 \times \max \left\{\frac{L}{\alpha}, \frac{L p}{\alpha \tau}, \frac{\sqrt{L L_{\max}}\, p \sqrt{\omega \tau}}{\alpha \beta \sqrt{n}}, \frac{\sqrt{L L_{\max}} \sqrt{p} \sqrt{\omega \tau}}{\alpha \sqrt{\beta} \sqrt{n}}, \frac{L_{\max} \omega p^{2}}{\beta^{2} n}, \frac{L_{\max} \omega}{n} \right\}, \tag{51}
$$

$\beta = \frac{1}{\omega + 1}$ , $\theta_{\min} = \frac{1}{4}\min\left\{1, \frac{\alpha}{p}, \frac{\tau}{p}, \frac{\beta}{p}\right\}$ , $h_i^0 = \nabla f_i(z^0)$ for all $i \in [n]$ , $w^0 = u^0$ , $k^0 = \nabla f(z^0)$ , $v^0 = \nabla f(z^0)$ , and $\Gamma_0 \in [1, \bar{L}/L]$ . Then Algorithm 1 returns an $\varepsilon$ -solution, i.e., $\mathbb{E}\left[f(z^T)\right] - f(x^*) \leq \varepsilon$ , after

$$
T = \left\{\begin{array}{ll} \Theta\left(\frac{1}{p \theta_{\min}} \log \frac{\bar{L}\left\|z^{0} - x^{*}\right\|^{2}}{\Gamma_{0} \varepsilon}\right), & \frac{\bar{L}\left\|z^{0} - x^{*}\right\|^{2}}{\varepsilon} < \frac{1}{p \theta_{\min}^{2}} \\ \Theta\left(\max\left\{\frac{1}{p \theta_{\min}} \log \frac{1}{\Gamma_{0} p \theta_{\min}^{2}}, 0 \right\} + Q \sqrt{\frac{\left\|z^{0} - x^{*}\right\|^{2}}{\varepsilon}}\right), & \text{otherwise} \end{array}\right. \tag{52}
$$

iterations, where

$$
Q := \Theta\left(\max\left\{\sqrt{\frac{L}{\alpha p}}, \sqrt{\frac{L}{\alpha \tau}}, \sqrt{\frac{\sqrt{L L_{\max}} (\omega + 1) \sqrt{\omega \tau}}{\alpha \sqrt{n}}}, \sqrt{\frac{\sqrt{L L_{\max}} \sqrt{\omega + 1} \sqrt{\omega \tau}}{\alpha \sqrt{p} \sqrt{n}}}, \sqrt{\frac{L_{\max} \omega (\omega + 1)^{2} p}{n}}, \sqrt{\frac{L_{\max} \omega}{n p}} \right\}\right).
$$

Remark E.15. One can see that the optimal $\Gamma_0$ in (52) from Theorem E.14 equals $\Gamma_0 = \bar{L} / L$ . But the dependence on $\Gamma_0$ is under the logarithm, so if the dependence on the logarithm is not critical, one can take any $\Gamma_0 \in [1, \bar{L} / L]$ .

Proof. All conditions from Theorem E.9 are satisfied. Let us sum the inequality (45) for $t = 0$ to $T - 1$ :

$$
\begin{array}{l} \Gamma_{T}\left(\mathbb{E}\left[f(z^{T}) - f(x^{*})\right] + \kappa \mathbb{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \|h_{i}^{T} - \nabla f_{i}(z^{T})\|^{2}\right] + \nu_{T} \mathbb{E}\left[\|w^{T} - u^{T}\|^{2}\right]\right. \\ \left.
+ \rho \mathbb {E} \left[ \left\| k ^ {T} - \nabla f (z ^ {T}) \right\| ^ {2} \right] + \lambda \mathbb {E} \left[ \left\| v ^ {T} - \nabla f (z ^ {T}) \right\| ^ {2} \right]\right) \\ \leq \Gamma_ {0} \left(\mathbb {E} \left[ f (z ^ {0}) - f (x ^ {*}) \right] + \kappa \mathbb {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (z ^ {0}) \right\| ^ {2} \right] + \nu_ {0} \mathbb {E} \left[ \| w ^ {0} - u ^ {0} \| ^ {2} \right] \right. \\ \end{array} +$$ + +$$ +\begin{array}{l} \left. + \rho \mathbb {E} \left[ \left\| k ^ {0} - \nabla f (z ^ {0}) \right\| ^ {2} \right] + \lambda \mathbb {E} \left[ \left\| v ^ {0} - \nabla f (z ^ {0}) \right\| ^ {2} \right]\right) \\ \left. \right. + \left(\frac {\bar {L} + \Gamma_ {0} \mu}{2} \mathbb {E} \left[\left\| u ^ {0} - x ^ {*} \right\| ^ {2} \right] - \frac {\bar {L} + \Gamma_ {T} \mu}{2} \mathbb {E} \left[\left\| u ^ {T} - x ^ {*} \right\| ^ {2} \right]\right). \\ \end{array} +$$ + +Using the initial conditions, the non-negativity of the terms, and $\Gamma_0\leq \bar{L} /L$ , we get + +$$ +\begin{array}{l} \Gamma_ {T} \mathbb {E} \left[ f (z ^ {T}) - f (x ^ {*}) \right] \leq \Gamma_ {0} \left(f (z ^ {0}) - f (x ^ {*})\right) + \frac {\bar {L} + \Gamma_ {0} \mu}{2} \| u ^ {0} - x ^ {*} \| ^ {2} \\ \leq \frac {\bar {L}}{L} \left(f (z ^ {0}) - f (x ^ {*})\right) + \bar {L} \| z ^ {0} - x ^ {*} \| ^ {2}. \\ \end{array} +$$ + +Using the $L$ -smoothness and $\mu \leq L \leq \bar{L}$ , we have + +$$ +\Gamma_ {T} \mathbb {E} \left[ f (z ^ {T}) - f (x ^ {*}) \right] \leq \bar {L} \left\| z ^ {0} - x ^ {*} \right\| ^ {2} + \bar {L} \left\| z ^ {0} - x ^ {*} \right\| ^ {2} \leq 2 \bar {L} \left\| z ^ {0} - x ^ {*} \right\| ^ {2}. 
$$

Using Lemma E.1, we have

$$
\left\{\begin{array}{ll} \frac{\Gamma_{0}}{2} \exp\left(T p \theta_{\min}\right) \mathbb{E}\left[f(z^{T}) - f(x^{*})\right] \leq 2 \bar{L}\left\|z^{0} - x^{*}\right\|^{2}, & T < \bar{t} \\ \frac{p (T - \bar{t})^{2}}{16} \mathbb{E}\left[f(z^{T}) - f(x^{*})\right] \leq 2 \bar{L}\left\|z^{0} - x^{*}\right\|^{2}, & T \geq \bar{t}, \end{array}\right.
$$

where $\bar{t} := \max \left\{\left\lceil \frac{1}{p\theta_{\min}}\log \frac{1}{2\Gamma_0p\theta_{\min}^2}\right\rceil ,0\right\}$ . The last inequalities guarantee that Algorithm 1 returns an $\varepsilon$ -solution after (52) iterations.

![](images/90c05d0b62c332cbef79b369a6cee80e528d86ba3a5cdf63850608899f1e84d7.jpg)

# E.7 Choosing optimal parameters

Theorem 5.2. Choose $r \in [0,1]$ and let $\mu_{\omega,\alpha}^r \coloneqq \frac{rd}{(1-r)K_\omega + rK_\alpha}$ . In view of Theorem 5.1, the values $p = \min \left\{\frac{1}{\omega+1}, \frac{1}{\mu_{\omega,\alpha}^r}\right\}$ and $\tau = \frac{p^{1/3}}{(\omega+1)^{2/3}}$ minimize $\max_{L_{\mathrm{max}} \in [L,nL]} \mathfrak{m}_{\mathrm{new}}^r$ . This choice leads to the following number of communication rounds:

$$
T^{\text{realistic}} := \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L \max\left\{\omega + 1, \mu_{\omega,\alpha}^{r}\right\}}{\alpha \mu}}, \sqrt{\frac{L_{\max} \omega \max\left\{\omega + 1, \mu_{\omega,\alpha}^{r}\right\}}{n \mu}}, \frac{1}{\alpha}, (\omega + 1), \mu_{\omega,\alpha}^{r}\right\}\right). \tag{13}
$$

The total communication complexity thus equals $\mathfrak{m}_{\mathrm{realistic}}^r = \widetilde{\Theta}\left(\left((1 - r)K_\omega + rK_\alpha\right)T^{\text{realistic}} + d\right)$ .

Proof. We implicitly assume that $p \in (0,1]$ and $\tau \in (0,1]$ .
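As a quick aside, the parameter choice stated in Theorem 5.2 is easy to evaluate explicitly. The sketch below simply implements the displayed formulas for $\mu^r_{\omega,\alpha}$ , $p$ , and $\tau$ ; the numeric inputs ( $d$ , $K_\omega$ , $K_\alpha$ , $\omega$ , $r$ ) are illustrative assumptions.

```python
# Sketch of the parameter choice from Theorem 5.2 (all inputs illustrative):
# mu_r = r*d / ((1-r)*K_omega + r*K_alpha),
# p    = min{1/(omega+1), 1/mu_r},
# tau  = p^(1/3) / (omega+1)^(2/3).

def params_theorem_5_2(r, d, K_omega, K_alpha, omega):
    mu_r = r * d / ((1 - r) * K_omega + r * K_alpha)
    p = min(1.0 / (omega + 1), 1.0 / mu_r)
    tau = p ** (1 / 3) / (omega + 1) ** (2 / 3)
    return mu_r, p, tau

# illustrative numbers: d = 1000 coordinates, K_omega = 100, K_alpha = 200
mu_r, p, tau = params_theorem_5_2(r=0.5, d=1000, K_omega=100, K_alpha=200, omega=3)
print(mu_r, p, tau)  # mu_r ~ 3.33, p = 0.25, tau ~ 0.25
```

Note that both $p$ and $\tau$ land in $(0,1]$ , matching the implicit assumption made at the start of the proof.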
Using (12) and (4), we have

$$
\operatorname*{arg min}_{p,\tau}\max_{L_{\max}\in [L,nL]}\mathfrak{m}^{r}_{\text{new}} = \operatorname*{arg min}_{p,\tau}\max_{L_{\max}\in [L,nL]}\widetilde{\Theta}\left((1 - r)K_{\omega}T + r\left(K_{\alpha} + pd\right)T\right).
$$

Note that only $T$ depends on $L_{\mathrm{max}}$ and $\tau$ . We have

$$
\min_{\tau}\max_{L_{\max}\in [L,nL]}T
$$

$$
\begin{array}{l} \stackrel{(10)}{=} \min_{\tau} \max_{L_{\max} \in [L, nL]} \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L}{\alpha \tau \mu}}, \sqrt{\frac{\sqrt{L L_{\max}} (\omega + 1) \sqrt{\omega \tau}}{\alpha \sqrt{n} \mu}}, \right.\right. \\ \left.\left. \sqrt{\frac{\sqrt{L L_{\max}} \sqrt{\omega + 1} \sqrt{\omega \tau}}{\alpha \sqrt{p} \sqrt{n} \mu}}, \sqrt{\frac{L_{\max} \omega (\omega + 1)^{2} p}{n \mu}}, \sqrt{\frac{L_{\max} \omega}{n p \mu}}, \frac{1}{\alpha}, \frac{1}{\tau}, (\omega + 1), \frac{1}{p} \right\}\right) \\ \end{array}
$$

$$
\begin{array}{l} = \min_{\tau} \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L}{\alpha \tau \mu}}, \sqrt{\frac{L (\omega + 1) \sqrt{\omega \tau}}{\alpha \mu}}, \sqrt{\frac{L \sqrt{\omega + 1} \sqrt{\omega \tau}}{\alpha \sqrt{p} \mu}}, \sqrt{\frac{L \omega (\omega + 1)^{2} p}{\mu}}, \sqrt{\frac{L \omega}{p \mu}}, \right.\right. \\ \left.\left. \frac{1}{\alpha}, \frac{1}{\tau}, (\omega + 1), \frac{1}{p} \right\}\right). \\ \end{array}
$$

The last term attains the minimum when

$$
\tau = \min \left\{\frac{1}{\omega + 1}, \frac{p^{1/3}}{(\omega + 1)^{2/3}} \right\}.
\tag{53}
$$

Therefore, we get

$$
T^{\prime} := \min_{\tau}\max_{L_{\max}\in [L,nL]}T
$$

$$
= \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L (\omega + 1)}{\alpha \mu}}, \sqrt{\frac{L (\omega + 1)^{2/3}}{\alpha p^{1/3} \mu}}, \sqrt{\frac{L \omega (\omega + 1)^{2} p}{\mu}}, \sqrt{\frac{L \omega}{p \mu}}, \frac{1}{\alpha}, (\omega + 1), \frac{1}{p} \right\}\right),
$$

where we use that

$$
\frac{1}{\tau} = \max \left\{\omega + 1, \frac{(\omega + 1)^{2/3}}{p^{1/3}} \right\} \leq \max \left\{\omega + 1, \frac{2}{3} (\omega + 1) + \frac{1}{3 p} \right\} = \Theta\left(\max \left\{(\omega + 1), \frac{1}{p} \right\}\right).
$$

It is left to find

$$
\begin{array}{l} \underset{p}{\arg\min} \left(\underset{\tau}{\min} \underset{L_{\max} \in [L, nL]}{\max} \mathfrak{m}_{\text{new}}^{r}\right) = \underset{p}{\arg\min} \widetilde{\Theta}\left((1 - r) K_{\omega} T^{\prime} + r \left(K_{\alpha} + p d\right) T^{\prime}\right) \\ = \underset{p}{\arg\min} \widetilde{\Theta}\left(A \times T^{\prime} + B \times p T^{\prime}\right), \tag{55} \\ \end{array}
$$

where $A \coloneqq (1 - r)K_{\omega} + rK_{\alpha} \geq 0$ and $B \coloneqq rd \geq 0$ . Note that $A$ and $B$ do not depend on $p$ . If $p \geq A/B$ , then $\widetilde{\Theta}\left(AT^{\prime} + BpT^{\prime}\right) = \widetilde{\Theta}\left(BpT^{\prime}\right)$ . The term $pT^{\prime}$ is a non-decreasing function of $p$ ; hence, for $p \geq A/B$ , the minimum is attained at $p = A/B$ . Thus the argmin (55) is equivalent to

$$
\operatorname*{arg min}_{p \in Q_{0}} \widetilde{\Theta}\left(A \times T^{\prime} + B \times p T^{\prime}\right) = \operatorname*{arg min}_{p \in Q_{0}} \widetilde{\Theta}\left(T^{\prime}\right),
$$

where $Q_0 \coloneqq \left\{p \mid p \leq \frac{A}{B}\right\}$ .
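The elimination of the $1/\tau$ term above rests on the weighted AM-GM step $(\omega+1)^{2/3} p^{-1/3} \leq \frac{2}{3}(\omega+1) + \frac{1}{3p}$ , i.e., $a^{2/3}b^{1/3} \leq \frac{2}{3}a + \frac{1}{3}b$ with $a = \omega+1$ and $b = 1/p$ . A quick numeric check (a sketch; the grid of test values is an arbitrary choice):

```python
# Check (omega+1)^(2/3) / p^(1/3) <= (2/3)*(omega+1) + 1/(3*p) on a grid,
# i.e., the weighted AM-GM step used to bound 1/tau.
for omega in [0, 1, 4, 31, 1000]:
    for p in [1e-4, 1e-2, 0.1, 0.5, 1.0]:
        a, b = omega + 1, 1.0 / p
        lhs = a ** (2 / 3) * b ** (1 / 3)
        rhs = (2 / 3) * a + (1 / 3) * b
        assert lhs <= rhs + 1e-9, (omega, p)
print("weighted AM-GM check passed")
```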
Thus, we have

$$
\underset{p \in Q_{0}}{\arg\min} \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L (\omega + 1)}{\alpha \mu}}, \sqrt{\frac{L (\omega + 1)^{2/3}}{\alpha p^{1/3} \mu}}, \sqrt{\frac{L \omega (\omega + 1)^{2} p}{\mu}}, \sqrt{\frac{L \omega}{p \mu}}, \frac{1}{\alpha}, (\omega + 1), \frac{1}{p} \right\}\right).
$$

The next observation is that the function under the argmin is non-decreasing in $p$ for $p \geq \frac{1}{\omega + 1}$ . It means that the minimum is attained at some point $p \in Q_1 := \left\{p \mid p \leq \frac{1}{\omega + 1}, p \in Q_0\right\}$ . Using this information, we can eliminate the redundant terms (for instance, $A \sqrt{\frac{L(\omega + 1)}{\alpha \mu}} \leq A \sqrt{\frac{L}{\alpha p \mu}}$ for $p \in Q_1$ ) and get an equivalent argmin

$$
\underset{p \in Q_{1}}{\arg\min} \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L \omega}{p \mu}}, \frac{1}{\alpha}, \frac{1}{p} \right\}\right).
$$

The last term attains its minimum at

$$
p = \min \left\{\frac{1}{\omega + 1}, \frac{A}{B} \right\}. \tag{56}
$$

Using (56) and (53), an optimal $\tau$ is

$$
\tau = \frac{p^{1/3}}{(\omega + 1)^{2/3}}. \tag{57}
$$

We substitute (56) and (57) into (10) and obtain (13). We use Lemma 1.4 to eliminate the redundant terms. Note that $\mu_{\omega, \alpha}^{r} \coloneqq B / A$ .

Using (12) and (56), we have

$$
\mathfrak{m}_{\text{realistic}}^{r} = (1 - r) K_{\omega} T^{\text{realistic}} + r \left(K_{\alpha} + p d\right) T^{\text{realistic}} + d
$$

$$
\begin{array}{l} \leq (1 - r) K_{\omega} T^{\text{realistic}} + r \left(K_{\alpha} + \frac{(1 - r) K_{\omega} + r K_{\alpha}}{r d} d\right) T^{\text{realistic}} + d \\ = 2 (1 - r) K_{\omega} T^{\text{realistic}} + 2 r K_{\alpha} T^{\text{realistic}} + d.
\\ \end{array}
$$

Note that

$$
\begin{array}{l} \mathfrak{m}_{\text{realistic}}^{r} = (1 - r) K_{\omega} T^{\text{realistic}} + r \left(K_{\alpha} + p d\right) T^{\text{realistic}} + d \\ \geq (1 - r) K_{\omega} T^{\text{realistic}} + r K_{\alpha} T^{\text{realistic}} + d. \\ \end{array}
$$

![](images/02c7a433f367c85a03ad5df44762c7a6717f3c9bd36a05475cba76b3bd9ff259.jpg)

Theorem 5.4. Choose $r \in [0,1]$ , and let $\mu_{\omega, \alpha}^r \coloneqq \frac{rd}{(1 - r)K_\omega + rK_\alpha}$ . In view of Theorem 5.1, the values $p$ and $\tau$ given by (63) and (58), respectively, minimize $\mathfrak{m}_{\mathrm{new}}^r$ from (10). This choice leads to the following number of communication rounds:

$$
\begin{array}{l} T^{\text{optimistic}} = \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L \max\left\{1, \mu_{\omega,\alpha}^{r}\right\}}{\alpha \mu}}, \sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)}{\alpha n^{1/3} \mu}}, \sqrt{\frac{L^{1/2} L_{\max}^{1/2} (\omega + 1)^{3/2}}{\sqrt{\alpha n} \mu}}, \right.\right. \tag{14} \\ \left.\left. \sqrt{\frac{L_{\max} \omega \max\{\omega + 1, \mu_{\omega,\alpha}^{r}\}}{n \mu}}, \frac{1}{\alpha}, (\omega + 1), \mu_{\omega,\alpha}^{r} \right\}\right). \\ \end{array}
$$

The total communication complexity thus equals $\mathfrak{m}_{\mathrm{optimistic}}^r = \widetilde{\Theta}\left(\left((1 - r) K_\omega + r K_\alpha\right) T^{\text{optimistic}} + d\right)$ .

Proof. We implicitly assume that $p \in (0,1]$ and $\tau \in (0,1]$ . We start the proof as in Theorem 5.2. Using (12) and (4), we have

$$
\operatorname*{arg min}_{p,\tau}\mathfrak{m}^{r}_{\mathrm{new}} = \operatorname*{arg min}_{p,\tau}\widetilde{\Theta}\left((1 - r)K_{\omega}T + r\left(K_{\alpha} + pd\right)T\right).
$$

Unlike Theorem 5.2, we know the ratio $L_{\mathrm{max}} / L$ , thus

$$
\begin{array}{l} \min_{\tau} T \stackrel{(10)}{=} \min_{\tau} \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L}{\alpha \tau \mu}}, \sqrt{\frac{\sqrt{L L_{\max}} (\omega + 1) \sqrt{\omega \tau}}{\alpha \sqrt{n} \mu}}, \right.\right. \\ \left.\left. \sqrt{\frac{\sqrt{L L_{\max}} \sqrt{\omega + 1} \sqrt{\omega \tau}}{\alpha \sqrt{p} \sqrt{n} \mu}}, \sqrt{\frac{L_{\max} \omega (\omega + 1)^{2} p}{n \mu}}, \sqrt{\frac{L_{\max} \omega}{n p \mu}}, \frac{1}{\alpha}, \frac{1}{\tau}, (\omega + 1), \frac{1}{p} \right\}\right) \\ \end{array}
$$

The last term attains the minimum when

$$
\tau = \min \left\{1, \left(\frac{L n}{L_{\max}}\right)^{1/3} \min \left\{\frac{1}{\omega + 1}, \frac{p^{1/3}}{(\omega + 1)^{2/3}} \right\} \right\}. \tag{58}
$$

Therefore, we get

$$
\begin{array}{l} T^{\prime} := \min_{\tau} T \\ = \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)}{\alpha n^{1/3} \mu}}, \sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)^{2/3}}{\alpha n^{1/3} p^{1/3} \mu}}, \sqrt{\frac{L_{\max} \omega (\omega + 1)^{2} p}{n \mu}}, \sqrt{\frac{L_{\max} \omega}{n p \mu}}, \frac{1}{\alpha}, \omega + 1, \frac{1}{p} \right\}\right).
\tag{59} \\ \end{array}
$$

It is left to find

$$
\begin{array}{l} \underset{p}{\arg\min} \left(\min_{\tau} \mathfrak{m}_{\mathrm{new}}^{r}\right) = \underset{p}{\arg\min} \widetilde{\Theta}\left((1 - r) K_{\omega} T^{\prime} + r \left(K_{\alpha} + p d\right) T^{\prime}\right) \\ = \underset{p}{\arg\min} \widetilde{\Theta}\left(A \times T^{\prime} + B \times p T^{\prime}\right), \tag{60} \\ \end{array}
$$

where $A \coloneqq (1 - r)K_{\omega} + rK_{\alpha} \geq 0$ and $B \coloneqq rd \geq 0$ . Note that $A$ and $B$ do not depend on $p$ . If $p \geq A/B$ , then $\widetilde{\Theta}\left(AT^{\prime} + BpT^{\prime}\right) = \widetilde{\Theta}\left(BpT^{\prime}\right)$ . The term $pT^{\prime}$ is a non-decreasing function of $p$ ; hence, for $p \geq A/B$ , the minimum is attained at $p = A/B$ . Thus the argmin (60) is equivalent to

$$
\operatorname*{arg min}_{p\in Q_{0}}\widetilde{\Theta}\left(A\times T^{\prime} + B\times pT^{\prime}\right) = \operatorname*{arg min}_{p\in Q_{0}}\widetilde{\Theta}\left(T^{\prime}\right),
$$

where $Q_0 \coloneqq \left\{p \mid p \leq \frac{A}{B}\right\}$ . Next, we have

$$
\begin{array}{l} \operatorname*{arg min}_{p\in Q_{0}}\widetilde{\Theta}\left(T^{\prime}\right) \\ = \operatorname*{arg min}_{p\in Q_{0}} \\ \end{array}
$$

$$
\widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)}{\alpha n^{1/3} \mu}}, \sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)^{2/3}}{\alpha n^{1/3} p^{1/3} \mu}}, \sqrt{\frac{L_{\max} \omega (\omega + 1)^{2} p}{n \mu}}, \sqrt{\frac{L_{\max} \omega}{n p \mu}}, \frac{1}{\alpha}, (\omega + 1), \frac{1}{p} \right\}\right). \tag{61}
$$

For $p \geq \left(\frac{Ln}{L_{\max}}\right)^{1/3} \frac{1}{\omega + 1}$, using Lemma 1.4, we have $p \geq \frac{1}{\omega + 1}$ and

$$
\begin{array}{l} \sqrt{\frac{L}{\alpha p \mu}} \leq \sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)}{\alpha n^{1/3} \mu}}, \\ \sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)^{2/3}}{\alpha n^{1/3} p^{1/3} \mu}} \leq \sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)}{\alpha n^{1/3} \mu}}, \\ \sqrt{\frac{L_{\max} \omega}{n p \mu}} \leq \sqrt{\frac{L_{\max} \omega (\omega + 1)^{2} p}{n \mu}}, \\ \frac{1}{p} \leq \left(\frac{L_{\max}}{L n}\right)^{1/3} (\omega + 1) \leq (\omega + 1). \\ \end{array}
$$

This means that for $p \geq \left(\frac{Ln}{L_{\max}}\right)^{1/3} \frac{1}{\omega + 1}$, the argmin (61) is equivalent to

$$
\underset{p \in Q_{0}}{\arg\min} \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)}{\alpha n^{1/3} \mu}}, \sqrt{\frac{L_{\max} \omega (\omega + 1)^{2} p}{n \mu}}, \frac{1}{\alpha}, (\omega + 1) \right\}\right).
$$

Since the only $p$-dependent term is non-decreasing in $p$, over the region $p \geq \left(\frac{Ln}{L_{\max}}\right)^{1/3} \frac{1}{\omega + 1}$ the minimum is attained at the boundary point $p = \left(\frac{Ln}{L_{\max}}\right)^{1/3} \frac{1}{\omega + 1}$. Let us define

$$
Q_{1} := \left\{p \,\middle|\, p \leq \left(\frac{L n}{L_{\max}}\right)^{1/3} \frac{1}{\omega + 1}, \; p \in Q_{0} \right\}. 
$$

The last observation means that the argmin (61) is equivalent to

$$
\begin{array}{l} \underset{p \in Q_{1}}{\arg\min} \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)}{\alpha n^{1/3} \mu}}, \sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)^{2/3}}{\alpha n^{1/3} p^{1/3} \mu}}, \sqrt{\frac{L_{\max} \omega (\omega + 1)^{2} p}{n \mu}}, \sqrt{\frac{L_{\max} \omega}{n p \mu}}, \frac{1}{\alpha}, (\omega + 1), \frac{1}{p} \right\}\right) \\ = \underset{p \in Q_{1}}{\arg\min} \widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L_{\max} \omega (\omega + 1)^{2} p}{n \mu}}, \sqrt{\frac{L_{\max} \omega}{n p \mu}}, \frac{1}{\alpha}, (\omega + 1), \frac{1}{p} \right\}\right), \tag{62} \\ \end{array}
$$

where we eliminate the redundant terms using the additional information $p \leq \left(\frac{Ln}{L_{\max}}\right)^{1/3} \frac{1}{\omega + 1}$. In particular,

$$
\sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)}{\alpha n^{1/3} \mu}} \leq \sqrt{\frac{L}{\alpha p \mu}} \quad \text{and} \quad \sqrt{\frac{L^{2/3} L_{\max}^{1/3} (\omega + 1)^{2/3}}{\alpha n^{1/3} p^{1/3} \mu}} \leq \sqrt{\frac{L}{\alpha p \mu}}
$$

for all $p \leq \left(\frac{Ln}{L_{\max}}\right)^{1/3} \frac{1}{\omega + 1}$. Without the condition $p \in Q_1$, the argmin (62) attains its minimum at the point

$$
p = \max\left\{\frac{1}{\omega + 1}, \left(\frac{L n}{L_{\max}}\right)^{1/2} \frac{1}{\sqrt{\alpha} (\omega + 1)^{3/2}} \right\}. 
$$

Considering the condition $p \in Q_1$ and $\mu_{\omega, \alpha}^r \coloneqq B / A$, we have

$$
p = \min\left\{1, \frac{1}{\mu_{\omega, \alpha}^{r}}, \left(\frac{L n}{L_{\max}}\right)^{1/3} \frac{1}{\omega + 1}, \max\left\{\frac{1}{\omega + 1}, \left(\frac{L n}{L_{\max}}\right)^{1/2} \frac{1}{\sqrt{\alpha} (\omega + 1)^{3/2}} \right\} \right\}. \tag{63}
$$

It remains to carefully substitute (63) into

$$
\widetilde{\Theta}\left(\max\left\{\sqrt{\frac{L}{\alpha p \mu}}, \sqrt{\frac{L_{\max} \omega}{n p \mu}}, \frac{1}{\alpha}, (\omega + 1), \frac{1}{p} \right\}\right)
$$

and obtain (14). The proof of $\mathfrak{m}_{\mathrm{realistic}}^r = \widetilde{\Theta}\left(\left((1 - r)K_\omega + rK_\alpha\right)T_{\mathrm{realistic}}\right)$ is the same as in Theorem 5.2.

# E.8 Comparison with EF21-P + DIANA

Theorem 6.1. For all $r \in [0,1]$, $\mathfrak{m}_{\mathrm{realistic}}^r = \tilde{\mathcal{O}}\left(\mathfrak{m}_{\mathrm{EF21-P + DIANA}}^r\right)$.

Proof. 
Using the inequality of arithmetic and geometric means, i.e., $\sqrt{xy} \leq \frac{x + y}{2}$ for all $x, y \geq 0$ , and $L \geq \mu$ , we have + +$$ +\begin{array}{l} \mathfrak {m} _ {\text {r e a l i s t i c}} ^ {r} = \widetilde {\Theta} \left(K _ {\omega , \alpha} ^ {r} \left(\sqrt {\frac {L (\omega + 1)}{\alpha \mu}} + \sqrt {\frac {L _ {\operatorname* {m a x}} \omega (\omega + 1)}{n \mu}} + \sqrt {\frac {L \mu_ {\omega , \alpha} ^ {r}}{\alpha \mu}} + \sqrt {\frac {L _ {\operatorname* {m a x}} \omega \mu_ {\omega , \alpha} ^ {r}}{n \mu}} + \frac {1}{\alpha} + \omega + \mu_ {\omega , \alpha} ^ {r}\right) + d\right) \\ = \widetilde {\mathcal {O}} \left(K _ {\omega , \alpha} ^ {r} \left(\frac {L}{\alpha \mu} + \frac {L _ {\mathrm {m a x}} \omega}{n \mu} + \omega + \sqrt {\frac {L \mu_ {\omega , \alpha} ^ {r}}{\alpha \mu}} + \sqrt {\frac {L _ {\mathrm {m a x}} \omega \mu_ {\omega , \alpha} ^ {r}}{n \mu}} + \mu_ {\omega , \alpha} ^ {r}\right) + d\right). \\ \end{array} +$$ + +From the definition of $\mu_{\omega ,\alpha}^{r}\coloneqq {}^{rd} / K_{\omega ,\alpha}^{r}$ , we get + +$$ +\mathfrak {m} _ {\text {r e a l i s t i c}} ^ {r} = \widetilde {\mathcal {O}} \left(K _ {\omega , \alpha} ^ {r} \left(\frac {L}{\alpha \mu} + \frac {L _ {\operatorname* {m a x}} \omega}{n \mu} + \omega\right) + \left(\sqrt {\frac {L K _ {\omega , \alpha} ^ {r} \times r d}{\alpha \mu}} + \sqrt {\frac {L _ {\operatorname* {m a x}} \omega K _ {\omega , \alpha} ^ {r} \times r d}{n \mu}} + r d\right) + d\right). +$$ + +Using the inequality of arithmetic and geometric means again and $r \leq 1$ , we obtain + +$$ +\mathfrak {m} _ {\text {r e a l i s t i c}} ^ {r} = \widetilde {\mathcal {O}} \left(K _ {\omega , \alpha} ^ {r} \left(\frac {L}{\alpha \mu} + \frac {L _ {\max} \omega}{n \mu} + \omega\right) + K _ {\omega , \alpha} ^ {r} \left(\frac {L}{\alpha \mu} + \frac {L _ {\max} \omega}{n \mu}\right) + d\right). 
$$

The last equality means that $\mathfrak{m}_{\mathrm{realistic}}^r = \tilde{\mathcal{O}}\left(\mathfrak{m}_{\mathrm{EF21-P + DIANA}}^r\right)$ for all $r\in [0,1]$.

# E.9 Comparison with AGD

Theorem 6.2. For all $r \in [0,1]$ and for all $K \in [d]$, let us take the RandK and TopK compressors with the parameters (expected densities) i) $K_{\omega} = K$ and $K_{\alpha} = \min \{\lceil \frac{1 - r}{r} K\rceil, d\}$ for $r \in [0, 1/2]$, ii) $K_{\omega} = \min \{\lceil \frac{r}{1 - r} K\rceil, d\}$ and $K_{\alpha} = K$ for $r \in (1/2, 1]$. Then we have $\mathfrak{m}_{\mathrm{realistic}}^r = \tilde{\mathcal{O}}(\mathfrak{m}_{\mathrm{AGD}})$.

Proof. First, consider $r \in [0, 1/2]$; then $K_{\omega} = K$ and $K_{\alpha} = \min \{\lceil \frac{1 - r}{r} K \rceil, d\}$. Therefore, we have

$$
K_{\omega, \alpha}^{r} := (1 - r) K_{\omega} + r K_{\alpha} \leq (1 - r) K + r \lceil \tfrac{1 - r}{r} K \rceil \leq 3 (1 - r) K
$$

and

$$
K_{\omega, \alpha}^{r} := (1 - r) K_{\omega} + r K_{\alpha} \geq (1 - r) K.
$$

Using this observation, we obtain $\mu_{\omega, \alpha}^{r} \coloneqq \frac{rd}{K_{\omega,\alpha}^{r}} \leq \frac{rd}{(1 - r)K} \leq \frac{d}{K}$. Note that $\alpha \geq K_{\alpha} / d$ and $\omega \leq d / K_{\omega} - 1$ for $\mathrm{Top}K$ and $\mathrm{Rand}K$. Thus $\alpha \geq \min\left\{\frac{(1 - r)K}{rd}, 1\right\}$ and $\omega \leq \frac{d}{K} - 1$. We substitute these bounds into (16) and obtain

$$
\mathfrak{m}_{\mathrm{realistic}}^{r} = \widetilde{\mathcal{O}}\bigg((1 - r) K \bigg(\sqrt{\frac{d}{K}} \sqrt{\frac{L}{\mu}} + \frac{d}{K} \sqrt{\frac{r L}{(1 - r) \mu}} + \frac{d}{K} \sqrt{\frac{L_{\max}}{n \mu}} + \frac{r d}{(1 - r) K} + \frac{d}{K} + \frac{r d}{(1 - r) K} \bigg) + d \bigg). 
$$

Since $r \in [0, 1/2]$ and $K \leq d$, one can easily show that

$$
\mathfrak{m}_{\mathrm{realistic}}^{r} = \widetilde{\mathcal{O}}\left(d \sqrt{\frac{L}{\mu}} + d \sqrt{\frac{L_{\max}}{n \mu}} + d\right).
$$

It remains to apply Lemma 1.4 to get $\mathfrak{m}_{\mathrm{realistic}}^r = \widetilde{\mathcal{O}}\left(d\sqrt{\frac{L}{\mu}}\right) = \widetilde{\mathcal{O}}\left(\mathfrak{m}_{\mathrm{AGD}}\right)$ for all $r \in [0, 1/2]$.

Assume now that $r \in (1/2, 1]$; then $K_{\omega} = \min \{\lceil \frac{r}{1 - r} K \rceil, d\}$ and $K_{\alpha} = K$. Using the same reasoning, we have

$$
K_{\omega, \alpha}^{r} := (1 - r) K_{\omega} + r K_{\alpha} \leq (1 - r) \lceil \tfrac{r}{1 - r} K \rceil + r K \leq 3 r K,
$$

$$
K_{\omega, \alpha}^{r} := (1 - r) K_{\omega} + r K_{\alpha} \geq r K,
$$

$$
\mu_{\omega, \alpha}^{r} := \frac{r d}{K_{\omega, \alpha}^{r}} \leq \frac{d}{K},
$$

$$
\alpha \geq \frac{K}{d} \quad \text{and} \quad \omega \leq \max\left\{\frac{(1 - r) d}{r K}, 1 \right\} - 1.
$$

By substituting these inequalities into (16), we obtain

$$
\mathfrak{m}_{\mathrm{realistic}}^{r} = \widetilde{\mathcal{O}}\left(r K \left(\frac{d}{K} \sqrt{\frac{L}{\mu}} + \frac{d}{K} \sqrt{\frac{(1 - r) L_{\max} \omega}{r n \mu}} + \frac{d}{K} + \frac{(1 - r) d}{r K} + \frac{d}{K}\right) + d\right).
$$

Using Lemma 1.4, one can easily show that $\mathfrak{m}_{\mathrm{realistic}}^r = \widetilde{\mathcal{O}}\left(d\sqrt{\frac{L}{\mu}}\right) = \widetilde{\mathcal{O}}\left(\mathfrak{m}_{\mathrm{AGD}}\right)$ for all $r \in (1/2, 1]$.

# F Auxiliary Inequalities For $\bar{L}$

We now prove useful bounds for $\bar{L}$.

Lemma F.1 (Auxiliary Inequalities). Assume that the constraints (44) hold, and let $c = 660508$. 
Then

$$
\bar{L} \geq c \frac{L_{\max} \omega p^{2}}{\beta^{2} n} \quad (64) \quad \bar{L} \geq c \frac{L_{\max} \omega p}{\beta n} \quad (65) \quad \bar{L} \geq c \frac{L_{\max} \omega}{n} \tag{66}
$$

$$
\bar{L} \geq c \frac{\sqrt{L L_{\max}} p \sqrt{\omega \tau}}{\alpha \beta \sqrt{n}} \quad (67) \quad \bar{L} \geq c \frac{\sqrt{L L_{\max}} \sqrt{p} \sqrt{\omega \tau}}{\alpha \sqrt{\beta} \sqrt{n}} \quad (68) \quad \bar{L} \geq c \frac{\sqrt{L L_{\max}} \sqrt{p} \sqrt{\omega \tau}}{\alpha \sqrt{n}} \quad (69)
$$

$$
\bar{L} \geq c \frac{\widehat{L} p \sqrt{\omega \tau}}{\alpha \beta \sqrt{n}} \quad (70) \quad \bar{L} \geq c \frac{\widehat{L} \sqrt{p} \sqrt{\omega \tau}}{\alpha \sqrt{\beta} \sqrt{n}} \quad (71) \quad \bar{L} \geq c \frac{\widehat{L} \sqrt{p} \sqrt{\omega \tau}}{\alpha \sqrt{n}} \quad (72)
$$

$$
\bar{L} \geq c \frac{\widehat{L} p \sqrt{\omega}}{\sqrt{\alpha} \beta \sqrt{n}} \quad (73) \quad \bar{L} \geq c \frac{\widehat{L} \sqrt{p} \sqrt{\omega}}{\sqrt{\alpha} \sqrt{\beta} \sqrt{n}} \quad (74)
$$

$$
\bar{L} \geq c \frac{\widehat{L} p \sqrt{\omega}}{\beta \sqrt{n}} \quad (75) \quad \bar{L} \geq c \frac{\widehat{L} \sqrt{p \omega}}{\sqrt{\beta n}} \tag{76}
$$

$$
\bar{L} \geq c \frac{L}{\alpha} \quad (77) \quad \bar{L} \geq c \frac{L p}{\alpha \tau} \quad (78) \quad \bar{L} \geq c L \tag{79}
$$

$$
\bar{L} \geq c \left(\frac{L \hat{L}^{2} \omega p^{4}}{\alpha^{2} \beta^{2} n \tau^{2}}\right)^{1/3} \quad (80) \quad \bar{L} \geq c \left(\frac{L \hat{L}^{2} \omega p^{3}}{\alpha^{2} \beta n \tau^{2}}\right)^{1/3} \tag{81}
$$

Proof. The inequalities (64) and (66) follow from (44). 
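The remaining inequalities are obtained by combining the constraints (44) via the AM–GM inequality. As a quick numeric sanity check of the splitting pattern used below to derive (65) from (64) and (66) (the values of $L_{\max}$, $\omega$, $p$, $\beta$, $n$ are illustrative, not from the paper):

```python
import random

# AM-GM: x*y <= x**2/2 + y**2/2. Applied with x = p/beta and y = 1, and
# multiplied through by L_max*omega/n, it gives exactly the splitting
#   L_max*omega*p/(beta*n) <= (1/2)*L_max*omega*p**2/(beta**2*n) + (1/2)*L_max*omega/n.
random.seed(0)
for _ in range(1000):
    L_max = random.uniform(0.1, 100.0)  # hypothetical smoothness constant
    omega = random.uniform(0.0, 50.0)   # hypothetical compressor parameter
    p = random.uniform(0.01, 1.0)       # probability parameter p in (0, 1]
    beta = random.uniform(0.01, 1.0)    # parameter beta in (0, 1]
    n = random.randint(1, 100)          # number of workers
    lhs = L_max * omega * p / (beta * n)
    rhs = 0.5 * L_max * omega * p**2 / (beta**2 * n) + 0.5 * L_max * omega / n
    assert lhs <= rhs + 1e-12
print("AM-GM splitting verified on 1000 random instances")
```

The same check applies verbatim to every other AM–GM split in this proof by swapping in the corresponding $x$ and $y$.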
The inequality (65) follows from (64) and (66): + +$$ +c \frac {L _ {\max} \omega p}{\beta n} \leq c \frac {L _ {\max} \omega}{n} \left(\frac {1}{2} \times \frac {p ^ {2}}{\beta^ {2}} + \frac {1}{2} \times 1 ^ {2}\right) = \frac {c}{2} \times \frac {L _ {\max} \omega p ^ {2}}{\beta^ {2} n} + \frac {c}{2} \times \frac {L _ {\max} \omega}{n} \leq \bar {L}. +$$ + +The inequalities (67) and (68) follow from (44). The inequality (69) follows from (68) and $\beta \in (0,1]$ : + +$$ +\bar {L} \geq c \frac {\sqrt {L L _ {\operatorname* {m a x}}} \sqrt {p} \sqrt {\omega \tau}}{\alpha \sqrt {\beta} \sqrt {n}} \geq c \frac {\sqrt {L L _ {\operatorname* {m a x}}} \sqrt {p} \sqrt {\omega \tau}}{\alpha \sqrt {n}}. +$$ + +Using Lemma 1.4, (67), (68), and (69), the inequalities (70), (71), and (72) follow from + +$$ +\bar {L} \geq c \frac {\sqrt {L L _ {\mathrm {m a x}}} p \sqrt {\omega \tau}}{\alpha \beta \sqrt {n}} \geq c \frac {\hat {L} p \sqrt {\omega \tau}}{\alpha \beta \sqrt {n}}, +$$ + +$$ +\bar {L} \geq c \frac {\sqrt {L L _ {\mathrm {m a x}}} \sqrt {p} \sqrt {\omega \tau}}{\alpha \sqrt {\beta} \sqrt {n}} \geq c \frac {\hat {L} \sqrt {p} \sqrt {\omega \tau}}{\alpha \sqrt {\beta} \sqrt {n}}, +$$ + +$$ +\bar {L} \geq c \frac {\sqrt {L L _ {\mathrm {m a x}}} \sqrt {p} \sqrt {\omega \tau}}{\alpha \sqrt {n}} \geq c \frac {\hat {L} \sqrt {p} \sqrt {\omega \tau}}{\alpha \sqrt {n}}. +$$ + +Next, using Lemma 1.4, and $\frac{x + y}{2} \geq \sqrt{xy}$ for all $x, y \geq 0$ , the inequality (73) follows from + +$$ +c \frac {\widehat {L} p \sqrt {\omega}}{\sqrt {\alpha} \beta \sqrt {n}} \leq c \frac {\sqrt {L L _ {\max}} p \sqrt {\omega}}{\sqrt {\alpha} \beta \sqrt {n}} \leq \frac {c}{2} \times \frac {L}{\alpha} + \frac {c}{2} \times \frac {L _ {\max} p ^ {2} \omega}{\beta^ {2} n} \stackrel {(4 4)} {\leq} \bar {L}. 
$$

The inequality (74) follows from

$$
c \frac{\widehat{L} \sqrt{p} \sqrt{\omega}}{\sqrt{\alpha} \sqrt{\beta} \sqrt{n}} \leq c \frac{\sqrt{L L_{\max}} \sqrt{p} \sqrt{\omega}}{\sqrt{\alpha} \sqrt{\beta} \sqrt{n}} \leq \frac{c}{2} \times \frac{L}{\alpha} + \frac{c}{2} \times \frac{L_{\max} p \omega}{\beta n} \stackrel{(44), (65)}{\leq} \bar{L}.
$$

The inequalities (75) and (76) follow from (73), (74), and $\alpha \in (0,1]$:

$$
\bar{L} \geq c \frac{\widehat{L} p \sqrt{\omega}}{\sqrt{\alpha} \beta \sqrt{n}} \geq c \frac{\widehat{L} p \sqrt{\omega}}{\beta \sqrt{n}},
$$

$$
\bar{L} \geq c \frac{\widehat{L} \sqrt{p} \sqrt{\omega}}{\sqrt{\alpha} \sqrt{\beta} \sqrt{n}} \geq c \frac{\widehat{L} \sqrt{p} \sqrt{\omega}}{\sqrt{\beta} \sqrt{n}}.
$$

The inequalities (77) and (78) follow from (44), and (79) follows from (77) and $\alpha \in (0,1]$. Using Lemma 1.4, and $\frac{x + y + z}{3} \geq (xyz)^{1/3}$ for all $x, y, z \geq 0$, the inequalities (80) and (81) follow from

$$
c \left(\frac{L \widehat{L}^{2} \omega p^{4}}{\alpha^{2} \beta^{2} n \tau^{2}}\right)^{1/3} \leq c \left(\frac{L^{2} L_{\max} \omega p^{4}}{\alpha^{2} \beta^{2} n \tau^{2}}\right)^{1/3} \leq \frac{c}{3} \times \frac{L p}{\alpha \tau} + \frac{c}{3} \times \frac{L p}{\alpha \tau} + \frac{c}{3} \times \frac{L_{\max} \omega p^{2}}{\beta^{2} n} \stackrel{(44)}{\leq} \bar{L},
$$

$$
c \left(\frac{L \widehat{L}^{2} \omega p^{3}}{\alpha^{2} \beta n \tau^{2}}\right)^{1/3} \leq c \left(\frac{L^{2} L_{\max} \omega p^{3}}{\alpha^{2} \beta n \tau^{2}}\right)^{1/3} \leq \frac{c}{3} \times \frac{L p}{\alpha \tau} + \frac{c}{3} \times \frac{L p}{\alpha \tau} + \frac{c}{3} \times \frac{L_{\max} \omega p}{\beta n} \stackrel{(44), (65)}{\leq} \bar{L}.
$$

□

# G Proof of Lemma E.10 (First Symbolically Computed)

We use the notations from the proof of Theorem E.9. 

Lemma E.10 (First Symbolically Computed). Assume that for the parameter $\bar{L}$, the inequalities from Sections I and J hold. Then, for all $t \geq 0$, there exist $\rho$ in (90), $\kappa$ in (88), $\lambda$ in (82), and $\nu_{t}$ in (83) such that (46) holds.

Proof. The inequalities (46) are equivalent to

$$
\frac{8 \theta_{t+1}^{2}}{\alpha} \left(\frac{p L}{2} + \kappa 4 p \left(1 + \frac{p}{\beta}\right) \widehat{L}^{2} + \rho 4 p L^{2} + \lambda \left(2 p \left(1 + \frac{2 p}{\tau}\right) L^{2} + \frac{4 p \tau^{2} \omega \widehat{L}^{2}}{n}\right)\right) \leq \nu_{t},
$$

$$
\nu_{t} \frac{8}{\alpha p} \left(\frac{\gamma_{t+1}}{\bar{L} + \Gamma_{t+1} \mu}\right)^{2} \leq \rho,
$$

$$
\rho \frac{8 p}{\tau} \leq \lambda,
$$

$$
\frac{8 p \omega}{n \bar{L} \beta} + \nu_{t} \frac{8 \omega}{n \beta} \left(\frac{\gamma_{t+1}}{\bar{L} + \Gamma_{t+1} \mu}\right)^{2} + \lambda \frac{8 \tau^{2} \omega}{n \beta} \leq \kappa.
$$

Let us take

$$
\lambda := \rho \frac{8 p}{\tau} \tag{82}
$$

to ensure that the third inequality holds. It remains to find the parameters such that

$$
\frac{8 \theta_{t+1}^{2}}{\alpha} \left(\frac{p L}{2} + \kappa 4 p \left(1 + \frac{p}{\beta}\right) \widehat{L}^{2} + \rho 4 p L^{2} + \rho \frac{8 p}{\tau} \left(2 p \left(1 + \frac{2 p}{\tau}\right) L^{2} + \frac{4 p \tau^{2} \omega \widehat{L}^{2}}{n}\right)\right) \leq \nu_{t},
$$

$$
\nu_{t} \frac{8}{\alpha p} \left(\frac{\gamma_{t+1}}{\bar{L} + \Gamma_{t+1} \mu}\right)^{2} \leq \rho,
$$

$$
\frac{8 p \omega}{n \bar{L} \beta} + \nu_{t} \frac{8 \omega}{n \beta} \left(\frac{\gamma_{t+1}}{\bar{L} + \Gamma_{t+1} \mu}\right)^{2} + \rho \frac{8 p}{\tau} \cdot \frac{8 \tau^{2} \omega}{n \beta} \leq \kappa. 
$$

Let us take

$$
\nu_{t} := \theta_{t+1}^{2} \widehat{\nu}(\kappa, \rho), \tag{83}
$$

where we additionally define

$$
\widehat{\nu} \equiv \widehat{\nu}(\kappa, \rho) := \frac{8}{\alpha} \left(\frac{p L}{2} + \kappa 4 p \left(1 + \frac{p}{\beta}\right) \widehat{L}^{2} + \rho 4 p L^{2} + \rho \frac{8 p}{\tau} \left(2 p \left(1 + \frac{2 p}{\tau}\right) L^{2} + \frac{4 p \tau^{2} \omega \widehat{L}^{2}}{n}\right)\right), \tag{84}
$$

to ensure that the first inequality holds. It remains to find the parameters $\kappa$ and $\rho$ such that

$$
\widehat{\nu}(\kappa, \rho) \frac{8}{\alpha p} \left(\frac{\gamma_{t+1} \theta_{t+1}}{\bar{L} + \Gamma_{t+1} \mu}\right)^{2} \leq \rho,
$$

$$
\frac{8 p \omega}{n \bar{L} \beta} + \widehat{\nu}(\kappa, \rho) \frac{8 \omega}{n \beta} \left(\frac{\gamma_{t+1} \theta_{t+1}}{\bar{L} + \Gamma_{t+1} \mu}\right)^{2} + \rho \frac{8 p}{\tau} \cdot \frac{8 \tau^{2} \omega}{n \beta} \leq \kappa.
$$

Using Lemma E.1, we have $\frac{\gamma_{t+1}\theta_{t+1}}{\overline{L} + \Gamma_{t+1}\mu} \leq \frac{\gamma_{t+1}\theta_{t+1}}{\overline{L} + \Gamma_{t}\mu} \leq \frac{1}{\overline{L}}$, so it is sufficient to show that the following stronger inequalities hold:

$$
\widehat{\nu}(\kappa, \rho) \frac{8}{\alpha p \bar{L}^{2}} \leq \rho, \tag{85}
$$

$$
\frac{8 p \omega}{n \bar{L} \beta} + \widehat{\nu}(\kappa, \rho) \frac{8 \omega}{n \beta \bar{L}^{2}} + \rho \frac{8 p}{\tau} \cdot \frac{8 \tau^{2} \omega}{n \beta} \leq \kappa. \tag{86}
$$

From this point, all formulas in this lemma are generated by the script from Section P (see Section 4 in Section P). We use the SymPy library (Meurer et al., 2017). 
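For readers who wish to reproduce this style of symbolic manipulation, here is a minimal SymPy sketch of substituting a definition into a left-hand side, expanding, and grouping the terms with respect to $\kappa$. The expression below is a toy example, not the actual $\widehat{\nu}$ from (84):

```python
import sympy as sp

# Toy analogue of the symbolic pipeline used in this section: expand an
# expression and collect its terms w.r.t. kappa. All symbols and the
# expression itself are illustrative, not the paper's actual quantities.
kappa, rho, p, tau, Lbar = sp.symbols('kappa rho p tau Lbar', positive=True)

lhs = sp.expand((kappa * p + rho / tau) * (p + 1) / Lbar**2)
grouped = sp.collect(lhs, kappa)

# The coefficient multiplying kappa (the "bracket" that is later bounded).
bracket = grouped.coeff(kappa, 1)
print(bracket)  # mathematically equals (p**2 + p)/Lbar**2
```

Bounding such a bracket by $1/2$ and moving the remaining terms to the definition of $\kappa$ is exactly the step performed next.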

Using the definition of $\widehat{\nu}$, the left-hand side of (86) equals

$$
\begin{array}{l} \frac{2048 L^{2} \omega p^{3} \rho}{\bar{L}^{2} \alpha \beta n \tau^{2}} + \frac{1024 L^{2} \omega p^{2} \rho}{\bar{L}^{2} \alpha \beta n \tau} + \frac{256 L^{2} \omega p \rho}{\bar{L}^{2} \alpha \beta n} + \frac{32 L \omega p}{\bar{L}^{2} \alpha \beta n} \tag{87} \\ + \kappa \left(\frac{256 \hat{L}^{2} \omega p}{\bar{L}^{2} \alpha \beta n} + \frac{256 \hat{L}^{2} \omega p^{2}}{\bar{L}^{2} \alpha \beta^{2} n}\right) + \frac{64 \omega p \rho \tau}{\beta n} + \frac{8 \omega p}{\bar{L} \beta n} + \frac{2048 \hat{L}^{2} \omega^{2} p^{2} \rho \tau}{\bar{L}^{2} \alpha \beta n^{2}}, \\ \end{array}
$$

where we grouped the terms w.r.t. $\kappa$. Let us take $\bar{L}$ such that the bracket is less than or equal to $1/2$. We define the corresponding constraints in Section I. Therefore, (86) holds if

$$
\kappa := \frac{4096 L^{2} \omega p^{3} \rho}{\bar{L}^{2} \alpha \beta n \tau^{2}} + \frac{2048 L^{2} \omega p^{2} \rho}{\bar{L}^{2} \alpha \beta n \tau} + \frac{512 L^{2} \omega p \rho}{\bar{L}^{2} \alpha \beta n} + \frac{64 L \omega p}{\bar{L}^{2} \alpha \beta n} + \frac{128 \omega p \rho \tau}{\beta n} + \frac{16 \omega p}{\bar{L} \beta n} + \frac{4096 \hat{L}^{2} \omega^{2} p^{2} \rho \tau}{\bar{L}^{2} \alpha \beta n^{2}}. \tag{88} 
+$$ + +Using the definition of $\widehat{\nu}$ and $\kappa$ , the left hand side of (85) equals + +$$ +\begin{array}{l} \frac {3 2 L}{\bar {L} ^ {2} \alpha^ {2}} + \frac {1 6 3 8 4 L \hat {L} ^ {2} \omega p}{\bar {L} ^ {4} \alpha^ {3} \beta n} + \frac {1 6 3 8 4 L \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n} \\ + \rho \left(\frac {2 0 4 8 L ^ {2} p ^ {2}}{\bar {L} ^ {2} \alpha^ {2} \tau^ {2}} + \frac {1 0 2 4 L ^ {2} p}{\bar {L} ^ {2} \alpha^ {2} \tau} + \frac {2 5 6 L ^ {2}}{\bar {L} ^ {2} \alpha^ {2}} + \frac {1 0 4 8 5 7 6 L ^ {2} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {4} \alpha^ {3} \beta n \tau^ {2}} + \frac {5 2 4 2 8 8 L ^ {2} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} \beta n \tau}\right) \\ + \frac {1 3 1 0 7 2 L ^ {2} \hat {L} ^ {2} \omega p}{\bar {L} ^ {4} \alpha^ {3} \beta n} + \frac {1 0 4 8 5 7 6 L ^ {2} \hat {L} ^ {2} \omega p ^ {4}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n \tau^ {2}} + \frac {5 2 4 2 8 8 L ^ {2} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n \tau} \tag {89} \\ + \frac {1 3 1 0 7 2 L ^ {2} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n} + \frac {2 0 4 8 \hat {L} ^ {2} \omega p \tau}{\bar {L} ^ {2} \alpha^ {2} n} + \frac {3 2 7 6 8 \hat {L} ^ {2} \omega p \tau}{\bar {L} ^ {2} \alpha^ {2} \beta n} + \frac {3 2 7 6 8 \hat {L} ^ {2} \omega p ^ {2} \tau}{\bar {L} ^ {2} \alpha^ {2} \beta^ {2} n} \\ \left. + \frac {1 0 4 8 5 7 6 \hat {L} ^ {4} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta n ^ {2}} + \frac {1 0 4 8 5 7 6 \hat {L} ^ {4} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n ^ {2}}\right) + \frac {4 0 9 6 \hat {L} ^ {2} \omega p}{\bar {L} ^ {3} \alpha^ {2} \beta n} + \frac {4 0 9 6 \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {3} \alpha^ {2} \beta^ {2} n}, \\ \end{array} +$$ + +where we grouped the terms w.r.t. $\rho$ . Let us take $\bar{L}$ such that the bracket is less or equal to $1/2$ . 
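The constraints in Sections I and J follow a simple recipe: a bracket that is a sum of $m$ nonnegative terms is at most $1/2$ whenever each individual term is at most $1/(2m)$. For the fourteen terms grouped with $\rho$, this yields the threshold $1/28$ used in Section J. A minimal sketch of this recipe (generic nonnegative terms, not the paper's actual bracket):

```python
# Recipe behind the symbolically computed constraints: require each of the
# m nonnegative terms of a bracket to be <= 1/(2*m), so the sum is <= 1/2.
# The term values below are hypothetical placeholders.
def per_term_thresholds(num_terms):
    """Threshold 1/(2*m) for each of m nonnegative terms; meeting all of
    them guarantees that the sum of the terms is at most 1/2."""
    return [1.0 / (2 * num_terms)] * num_terms

terms = [0.01, 0.02, 0.005, 0.03]             # hypothetical nonnegative terms
thresholds = per_term_thresholds(len(terms))  # each threshold is 1/8 here
if all(t <= b for t, b in zip(terms, thresholds)):
    assert sum(terms) <= 0.5
```

With fourteen terms the recipe reproduces the $1/28$ thresholds of (95)-(108).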
We define the corresponding constraints in Section J. Therefore, (85) holds if

$$
\rho := \frac{64 L}{\bar{L}^{2} \alpha^{2}} + \frac{32768 L \hat{L}^{2} \omega p}{\bar{L}^{4} \alpha^{3} \beta n} + \frac{32768 L \hat{L}^{2} \omega p^{2}}{\bar{L}^{4} \alpha^{3} \beta^{2} n} + \frac{8192 \hat{L}^{2} \omega p}{\bar{L}^{3} \alpha^{2} \beta n} + \frac{8192 \hat{L}^{2} \omega p^{2}}{\bar{L}^{3} \alpha^{2} \beta^{2} n}. \tag{90}
$$

Finally, under the constraints from Sections I and J on $\bar{L}$, the choices of parameters (90), (88), (83), and (82) ensure that (46) holds.

# H Proof of Lemma E.11 (Second Symbolically Computed)

We use the notations from the proof of Theorem E.9.

Lemma E.11 (Second Symbolically Computed). Consider the parameters $\rho$, $\kappa$, $\lambda$, and $\nu_{t}$ from Lemma E.10. Assume that for the parameter $\bar{L}$, the inequalities from Sections L and N hold, and the step size $\theta_{t+1} \leq 1/4$ for all $t \geq 0$. Then, for all $t \geq 0$, the following inequalities are satisfied:

$$
\begin{array}{l} p \left(\frac{4 \omega L_{\max}}{n \bar{L}} + \kappa 8 \left(1 + \frac{p}{\beta}\right) L_{\max} + \nu_{t} \left(\frac{\gamma_{t+1}}{\bar{L} + \Gamma_{t+1} \mu}\right)^{2} \left(\frac{4 \omega L_{\max}}{p n} + \frac{8 L}{p \alpha}\right) + \rho 8 L + \right. \tag{47} \\ + \lambda \left(4 \left(1 + \frac{2 p}{\tau}\right) L + \frac{8 \tau^{2} \omega L_{\max}}{n}\right) + \theta_{t+1} - 1\left. 
\right) D _ {f} \left(z ^ {t}, y ^ {t + 1}\right) \leq 0 \\ \end{array} +$$ + +and + +$$ +\begin{array}{l} 2 \theta_ {t + 1} ^ {2} \left(\frac {p L}{2} + \kappa 4 p \left(1 + \frac {p}{\beta}\right) \widehat {L} ^ {2} + \rho 4 p L ^ {2} + \lambda \left(2 p \left(1 + \frac {2 p}{\tau}\right) L ^ {2} + \frac {4 p \tau^ {2} \omega \widehat {L} ^ {2}}{n}\right)\right) \mathbb {E} _ {t} \left[ \| u ^ {t + 1} - u ^ {t} \| ^ {2} \right] \\ - \frac {p \theta_ {t + 1} ^ {2} \bar {L}}{2} \mathbb {E} _ {t} \left[ \left\| u ^ {t + 1} - u ^ {t} \right\| ^ {2} \right] \leq 0. \tag {48} \\ \end{array} +$$ + +Proof. Since $p \geq 0$ and $D_{f}(z^{t},y^{t + 1}) \geq 0$ for all $t \geq 0$ , the inequality (47) is satisfied if + +$$ +\begin{array}{l} \frac {4 \omega L _ {\max}}{n \bar {L}} + \kappa 8 \left(1 + \frac {p}{\beta}\right) L _ {\max} + \nu_ {t} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {4 \omega L _ {\max}}{p n} + \frac {8 L}{p \alpha}\right) + \rho 8 L + \\ + \lambda \left(4 \left(1 + \frac {2 p}{\tau}\right) L + \frac {8 \tau^ {2} \omega L _ {\max}}{n}\right) + \theta_ {t + 1} - 1 \leq 0. \\ \end{array} +$$ + +Note that $\theta_{t + 1}\leq \frac{1}{4}$ for all $t\geq 0$ . Therefore, it is sufficient to show that + +$$ +\begin{array}{l} \frac {4 \omega L _ {\operatorname* {m a x}}}{n \bar {L}} + \kappa 8 \left(1 + \frac {p}{\beta}\right) L _ {\operatorname * {m a x}} + \nu_ {t} \left(\frac {\gamma_ {t + 1}}{\bar {L} + \Gamma_ {t + 1} \mu}\right) ^ {2} \left(\frac {4 \omega L _ {\operatorname* {m a x}}}{p n} + \frac {8 L}{p \alpha}\right) + \rho 8 L + \\ + \lambda \left(4 \left(1 + \frac {2 p}{\tau}\right) L + \frac {8 \tau^ {2} \omega L _ {\mathrm {m a x}}}{n}\right) \leq \frac {3}{4}. 
\\ \end{array}
$$

In view of (83), we have to show that

$$
\begin{array}{l} \frac{4 \omega L_{\max}}{n \bar{L}} + \kappa 8 \left(1 + \frac{p}{\beta}\right) L_{\max} + \widehat{\nu} \left(\frac{\gamma_{t+1} \theta_{t+1}}{\bar{L} + \Gamma_{t+1} \mu}\right)^{2} \left(\frac{4 \omega L_{\max}}{p n} + \frac{8 L}{p \alpha}\right) + \rho 8 L + \\ + \lambda \left(4 \left(1 + \frac{2 p}{\tau}\right) L + \frac{8 \tau^{2} \omega L_{\max}}{n}\right) \leq \frac{3}{4}. \\ \end{array}
$$

Using Lemma E.1, we have $\frac{\gamma_{t+1}\theta_{t+1}}{\bar{L} + \Gamma_{t+1}\mu} \leq \frac{\gamma_{t+1}\theta_{t+1}}{\bar{L} + \Gamma_{t}\mu} \leq \frac{1}{\bar{L}}$, so it is sufficient to show that

$$
\begin{array}{l} \frac{4 \omega L_{\max}}{n \bar{L}} + \kappa 8 \left(1 + \frac{p}{\beta}\right) L_{\max} + \frac{\widehat{\nu}}{\bar{L}^{2}} \left(\frac{4 \omega L_{\max}}{p n} + \frac{8 L}{p \alpha}\right) + \rho 8 L + \tag{91} \\ + \lambda \left(4 \left(1 + \frac{2 p}{\tau}\right) L + \frac{8 \tau^{2} \omega L_{\max}}{n}\right) \leq \frac{3}{4}. \\ \end{array}
$$

From this point, all formulas in this lemma are generated by the script from Section P (see Section 5 in Section P).

Let us substitute (90), (88), (82), and (84) into the last inequality and obtain the inequality from Section K. The conditions from Section L ensure that the inequality from Section K holds. It remains to prove (48). Since $p \geq 0$, $\mathbb{E}_t\left[\| u^{t+1} - u^t\|^2\right] \geq 0$, and $\theta_{t+1}^2 \geq 0$ for all $t \geq 0$, the inequality (48) holds if

$$
\frac{4}{\bar{L}} \left(\frac{L}{2} + \kappa 4 \left(1 + \frac{p}{\beta}\right) \widehat{L}^{2} + \rho 4 L^{2} + \lambda \left(2 \left(1 + \frac{2 p}{\tau}\right) L^{2} + \frac{4 \tau^{2} \omega \widehat{L}^{2}}{n}\right)\right) \leq 1. 
\tag{92}
$$

Let us substitute (90), (88), and (82) into the last inequality and obtain the inequality from Section M. The inequality from Section M holds if $\bar{L}$ satisfies the inequalities from Section N.

# I Symbolically Computed Constraints for $\bar{L}$ Such That The Term w.r.t. $\kappa$ is less or equal $1/2$ in (87)

$$
\frac{256 \hat{L}^{2} \omega p}{\bar{L}^{2} \alpha \beta n} \leq \frac{1}{4} \tag{94}
$$

# J Symbolically Computed Constraints for $\bar{L}$ Such That The Term w.r.t. $\rho$ is less or equal $1/2$ in (89)

$$
\frac{256 L^{2}}{\bar{L}^{2} \alpha^{2}} \leq \frac{1}{28} \quad (95) \quad \frac{1024 L^{2} p}{\bar{L}^{2} \alpha^{2} \tau} \leq \frac{1}{28} \quad (96) \quad \frac{2048 L^{2} p^{2}}{\bar{L}^{2} \alpha^{2} \tau^{2}} \leq \frac{1}{28} \tag{97}
$$

$$
\frac{2048 \hat{L}^{2} \omega p \tau}{\bar{L}^{2} \alpha^{2} n} \leq \frac{1}{28} \quad (98) \quad \frac{32768 \hat{L}^{2} \omega p \tau}{\bar{L}^{2} \alpha^{2} \beta n} \leq \frac{1}{28} \quad (99) \quad \frac{32768 \hat{L}^{2} \omega p^{2} \tau}{\bar{L}^{2} \alpha^{2} \beta^{2} n} \leq \frac{1}{28} \tag{100}
$$

$$
\frac{131072 L^{2} \hat{L}^{2} \omega p}{\bar{L}^{4} \alpha^{3} \beta n} \leq \frac{1}{28} \quad (101) \quad \frac{131072 L^{2} \hat{L}^{2} \omega p^{2}}{\bar{L}^{4} \alpha^{3} \beta^{2} n} \leq \frac{1}{28} \quad (102) \quad \frac{1048576 \hat{L}^{4} \omega^{2} p^{2} \tau}{\bar{L}^{4} \alpha^{3} \beta n^{2}} \leq \frac{1}{28} \quad (103)
$$

$$
\frac{1048576 \hat{L}^{4} \omega^{2} p^{3} \tau}{\bar{L}^{4} \alpha^{3} \beta^{2} n^{2}} \leq \frac{1}{28} \quad (104) \quad \frac{524288 L^{2} \hat{L}^{2} \omega p^{2}}{\bar{L}^{4} \alpha^{3} \beta n \tau} \leq \frac{1}{28} \quad (105) \quad \frac{524288 L^{2} \hat{L}^{2} \omega p^{3}}{\bar{L} 
^ {4} \alpha^ {3} \beta^ {2} n \tau} \leq \frac {1}{2 8} \quad (1 0 6) +$$ + +$$ +\frac {1 0 4 8 5 7 6 L ^ {2} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {4} \alpha^ {3} \beta n \tau^ {2}} \leq \frac {1}{2 8} \quad (1 0 7) \quad \frac {1 0 4 8 5 7 6 L ^ {2} \hat {L} ^ {2} \omega p ^ {4}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n \tau^ {2}} \leq \frac {1}{2 8} \quad (1 0 8) +$$ + +# K Symbolically Computed Expression (91) + +$$ +\begin{array}{l} \frac {5 4 4 L ^ {2}}{\bar {L} ^ {2} \alpha^ {2}} + \frac {1 6 3 8 4 L ^ {4}}{\bar {L} ^ {4} \alpha^ {4}} + \frac {4 L _ {\max} \omega}{\bar {L} n} \\ + \frac {2 0 4 8 L ^ {2} p}{\bar {L} ^ {2} \alpha^ {2} \tau} + \frac {4 0 9 6 L ^ {2} p ^ {2}}{\bar {L} ^ {2} \alpha^ {2} \tau^ {2}} + \frac {6 5 5 3 6 L ^ {4} p}{\bar {L} ^ {4} \alpha^ {4} \tau} \\ + \frac {1 3 1 0 7 2 L ^ {4} p ^ {2}}{\bar {L} ^ {4} \alpha^ {4} \tau^ {2}} + \frac {1 6 L L _ {\max} \omega}{\bar {L} ^ {2} \alpha n} + \frac {1 2 8 L _ {\max} \omega p}{\bar {L} \beta n} \\ + \frac {1 2 8 L _ {\max} \omega p ^ {2}}{\bar {L} \beta^ {2} n} + \frac {8 1 9 2 L ^ {3} L _ {\max} \omega}{\bar {L} ^ {4} \alpha^ {3} n} + \frac {5 1 2 L L _ {\max} \omega p}{\bar {L} ^ {2} \alpha \beta n} \\ + \frac {5 1 2 L L _ {\max} \omega p ^ {2}}{\bar {L} ^ {2} \alpha \beta^ {2} n} + \frac {2 0 4 8 L _ {\max} \hat {L} ^ {2} \omega^ {2} p}{\bar {L} ^ {3} \alpha \beta n ^ {2}} + \frac {2 0 4 8 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {3} \alpha \beta^ {2} n ^ {2}} \\ + \frac {4 0 9 6 L L _ {\operatorname* {m a x}} \omega p \tau}{\bar {L} ^ {2} \alpha^ {2} n} + \frac {3 2 7 6 8 L ^ {3} L _ {\operatorname* {m a x}} \omega p}{\bar {L} ^ {4} \alpha^ {3} n \tau} + \frac {6 5 5 3 6 L ^ {3} L _ {\operatorname* {m a x}} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} n \tau^ {2}} \\ + \frac {6 9 6 3 2 L \hat {L} ^ {2} \omega p}{\bar {L} ^ {3} \alpha^ {2} \beta n} + \frac {6 9 6 3 2 L \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {3} \alpha^ {2} \beta^ {2} n} + \frac {1 3 1 0 7 2 L ^ {2} 
\hat {L} ^ {2} \omega p \tau}{\bar {L} ^ {4} \alpha^ {4} n} \\ + \frac {2 6 2 1 4 4 L ^ {3} L _ {\max} \omega p}{\bar {L} ^ {4} \alpha^ {3} \beta n} + \frac {2 6 2 1 4 4 L ^ {3} L _ {\max} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n} + \frac {2 7 8 5 2 8 L ^ {2} \hat {L} ^ {2} \omega p}{\bar {L} ^ {4} \alpha^ {3} \beta n} \\ + \frac {2 7 8 5 2 8 L ^ {2} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n} + \frac {2 0 9 7 1 5 2 L ^ {3} \hat {L} ^ {2} \omega p}{\bar {L} ^ {5} \alpha^ {4} \beta n} + \frac {2 0 9 7 1 5 2 L ^ {3} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {5} \alpha^ {4} \beta^ {2} n} \\ + \frac {1 6 7 7 7 2 1 6 L ^ {4} \hat {L} ^ {2} \omega p}{\bar {L} ^ {6} \alpha^ {5} \beta n} + \frac {1 6 7 7 7 2 1 6 L ^ {4} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {6} \alpha^ {5} \beta^ {2} n} + \frac {1 0 7 3 7 4 1 8 2 4 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {4} n ^ {2}} \\ + \frac {1 0 7 3 7 4 1 8 2 4 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {2}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {2} n ^ {2}} + \frac {2 1 4 7 4 8 3 6 4 8 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {3}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {3} n ^ {2}} + \frac {4 2 9 4 9 6 7 2 9 6 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {4} n ^ {2}} \\ \end{array} +$$ + +$$ +\begin{array}{l} + \frac {4 2 9 4 9 6 7 2 9 6 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {2}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {2} n ^ {2}} + \frac {8 5 8 9 9 3 4 5 9 2 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {3}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {3} n ^ {2}} + \frac {8 1 9 2 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p}{\bar {L} ^ {4} \alpha^ {2} \beta n ^ {2}} \\ + \frac {8 1 9 2 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {4} \alpha^ {2} \beta^ {2} n ^ {2}} + \frac {6 5 5 3 6 L L _ {\max} \omega p \tau}{\bar {L} ^ {2} \alpha^ {2} \beta n} + \frac {6 5 5 3 6 L L _ {\max} \omega p ^ {2} \tau}{\bar {L} ^ {2} \alpha^ {2} \beta^ {2} n} 
\\ + \frac {65536 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p \tau}{\bar {L} ^ {4} \alpha^ {3} n ^ {2}} + \frac {262144 L \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {3} \alpha^ {2} \beta n \tau} + \frac {262144 L \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {3} \alpha^ {2} \beta^ {2} n \tau} \\ + \frac {524288 L \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {3} \alpha^ {2} \beta n \tau^ {2}} + \frac {524288 L \hat {L} ^ {2} \omega p ^ {4}}{\bar {L} ^ {3} \alpha^ {2} \beta^ {2} n \tau^ {2}} + \frac {524288 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {3} \alpha^ {2} \beta n ^ {2}} \\ + \frac {524288 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {3} \alpha^ {2} \beta^ {2} n ^ {2}} + \frac {1048576 L ^ {3} L _ {\max} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} \beta n \tau} + \frac {1048576 L ^ {3} L _ {\max} \omega p ^ {3}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n \tau} \\ + \frac {1048576 L ^ {2} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} \beta n \tau} + \frac {1048576 L ^ {2} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n \tau} + \frac {1048576 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p}{\bar {L} ^ {5} \alpha^ {3} \beta n ^ {2}} \\ + \frac {2097152 L ^ {3} L _ {\max} \omega p ^ {3}}{\bar {L} ^ {4} \alpha^ {3} \beta n \tau^ {2}} + \frac {2097152 L ^ {3} L _ {\max} \omega p ^ {4}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n \tau^ {2}} + \frac {2097152 L ^ {2} \hat {L} ^ {2} \omega p \tau}{\bar {L} ^ {4} \alpha^ {4} \beta n} \\ + \frac {2097152 L ^ {2} \hat {L} ^ {2} \omega p ^ {2} \tau}{\bar {L} ^ {4} \alpha^ {4} \beta^ {2} n} + \frac {2097152 L ^ {2} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {4} \alpha^ {3} \beta n \tau^ {2}} + \frac {2097152 L ^ {2} \hat {L} ^ {2} \omega p ^ {4}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n \tau^ {2}} \\ + \frac {8388608 L ^ {3} \hat {L} ^ {2} \omega p ^ 
{2}}{\bar {L} ^ {5} \alpha^ {4} \beta n \tau} + \frac {8388608 L ^ {3} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {5} \alpha^ {4} \beta^ {2} n \tau} + \frac {8388608 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p}{\bar {L} ^ {6} \alpha^ {4} \beta n ^ {2}} \\ + \frac {8388608 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4} \tau}{\bar {L} ^ {3} \alpha^ {2} \beta^ {4} n ^ {2}} + \frac {8388608 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {3} \alpha^ {2} \beta^ {2} n ^ {2}} + \frac {8388608 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {2} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta n ^ {3}} \\ + \frac {16777216 L ^ {3} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {5} \alpha^ {4} \beta n \tau^ {2}} + \frac {16777216 L ^ {3} \hat {L} ^ {2} \omega p ^ {4}}{\bar {L} ^ {5} \alpha^ {4} \beta^ {2} n \tau^ {2}} + \frac {16777216 L \hat {L} ^ {4} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {5} \alpha^ {4} \beta n ^ {2}} \\ + \frac {16777216 L \hat {L} ^ {4} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {5} \alpha^ {4} \beta^ {2} n ^ {2}} + \frac {16777216 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {3} \alpha^ {2} \beta^ {3} n ^ {2}} + \frac {33554432 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {4} n ^ {2}} \\ + \frac {34603008 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {2} n ^ {2}} + \frac {67108864 L ^ {4} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {6} \alpha^ {5} \beta n \tau} + \frac {67108864 L ^ {4} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {6} \alpha^ {5} \beta^ {2} n \tau} \\ + \frac {67108864 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {3} n ^ {2}} + \frac {134217728 L ^ {4} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {6} \alpha^ {5} \beta n \tau^ {2}} + \frac {134217728 L ^ {4} \hat {L} ^ {2} \omega p ^ {4}}{\bar {L} ^ {6} \alpha^ {5} \beta^ {2} n \tau^ {2}} \\ + \frac {134217728 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {4} n ^ {3}} + \frac {134217728 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {2} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {2} n ^ {3}} + \frac {134217728 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {4} n ^ {2}} \\ + \frac {134217728 L ^ {2} \hat {L} ^ {4} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {6} \alpha^ {5} \beta n ^ {2}} + \frac {134217728 L ^ {2} \hat {L} ^ {4} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {6} \alpha^ {5} \beta^ {2} n ^ {2}} + \frac {142606336 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {2} n ^ {2}} \\ + \frac {268435456 L \hat {L} ^ {4} \omega^ {2} p ^ {4} \tau}{\bar {L} ^ {5} \alpha^ {4} \beta^ {4} n ^ {2}} + \frac {268435456 L \hat {L} ^ {4} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {5} \alpha^ {4} \beta^ {2} n ^ {2}} + \frac {268435456 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {5} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {4} n ^ {3}} \\ + \frac {268435456 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {3} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {3} n ^ {3}} + \frac {268435456 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {3} n ^ {2}} + \frac {276824064 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {3} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {2} n ^ {3}} \\ + \frac {536870912 L \hat {L} ^ {4} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {5} \alpha^ {4} \beta^ {3} n ^ {2}} + \frac {536870912 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {3} n ^ {3}} + \frac {536870912 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4}}{\bar {L} ^ {7} \alpha^ {4} \beta^ {4} n ^ {3}} \\ + \frac 
{536870912 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {2}}{\bar {L} ^ {7} \alpha^ {4} \beta^ {2} n ^ {3}} + \frac {1073741824 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {3}}{\bar {L} ^ {7} \alpha^ {4} \beta^ {3} n ^ {3}} + \frac {1073741824 L ^ {2} \hat {L} ^ {4} \omega^ {2} p ^ {4} \tau}{\bar {L} ^ {6} \alpha^ {5} \beta^ {4} n ^ {2}} \\ + \frac {1073741824 L ^ {2} \hat {L} ^ {4} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {6} \alpha^ {5} \beta^ {2} n ^ {2}} + \frac {2147483648 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {4} n ^ {3}} + \frac {2147483648 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {2}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {2} n ^ {3}} \\ + \frac {2147483648 L ^ {2} \hat {L} ^ {4} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {6} \alpha^ {5} \beta^ {3} n ^ {2}} + \frac {4294967296 L _ {\max} \hat {L} ^ {6} \omega^ {4} p ^ {5} \tau}{\bar {L} ^ {7} \alpha^ {4} \beta^ {4} n ^ {4}} + \frac {4294967296 L _ {\max} \hat {L} ^ {6} \omega^ {4} p ^ {3} \tau}{\bar {L} ^ {7} \alpha^ {4} \beta^ {2} n ^ {4}} \\ \end{array} +$$ + +$$ +\begin{array}{l} + \frac {4294967296 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {3}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {3} n ^ {3}} + \frac {4294967296 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {5}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {4} n ^ {2} \tau} + \frac {4294967296 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {3}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {2} n ^ {2} \tau} \\ + \frac {8589934592 L \hat {L} ^ {6} \omega^ {3} p ^ {5} \tau}{\bar {L} ^ {7} \alpha^ {5} \beta^ {4} n ^ {3}} + \frac {8589934592 L \hat {L} ^ {6} \omega^ {3} p ^ {3} \tau}{\bar {L} ^ {7} \alpha^ {5} \beta^ {2} n ^ {3}} + \frac {8589934592 L _ {\max} \hat {L} ^ {6} \omega^ {4} p ^ {4} \tau}{\bar {L} ^ {7} \alpha^ {4} \beta^ {3} n ^ {4}} \\ + \frac {8589934592 L ^ {3} \hat {L} ^ 
{4} \omega^ {2} p ^ {6}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {4} n ^ {2} \tau^ {2}} + \frac {8589934592 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {3} n ^ {2} \tau} + \frac {8589934592 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {2} n ^ {2} \tau^ {2}} \\ + \frac {17179869184 L \hat {L} ^ {6} \omega^ {3} p ^ {4} \tau}{\bar {L} ^ {7} \alpha^ {5} \beta^ {3} n ^ {3}} + \frac {17179869184 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {5}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {3} n ^ {2} \tau^ {2}} + \frac {17179869184 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {5}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {4} n ^ {2} \tau} \\ + \frac {17179869184 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {3}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {2} n ^ {2} \tau} + \frac {34359738368 L ^ {2} \hat {L} ^ {6} \omega^ {3} p ^ {5} \tau}{\bar {L} ^ {8} \alpha^ {6} \beta^ {4} n ^ {3}} + \frac {34359738368 L ^ {2} \hat {L} ^ {6} \omega^ {3} p ^ {3} \tau}{\bar {L} ^ {8} \alpha^ {6} \beta^ {2} n ^ {3}} \\ + \frac {34359738368 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {6}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {4} n ^ {2} \tau^ {2}} + \frac {34359738368 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {3} n ^ {2} \tau} + \frac {34359738368 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {2} n ^ {2} \tau^ {2}} \\ + \frac {68719476736 L ^ {2} \hat {L} ^ {6} \omega^ {3} p ^ {4} \tau}{\bar {L} ^ {8} \alpha^ {6} \beta^ {3} n ^ {3}} + \frac {68719476736 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {5}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {3} n ^ {2} \tau^ {2}} + \frac {1048576 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p \tau}{\bar {L} ^ {4} \alpha^ {3} \beta n ^ {2}} \\ + \frac {4194304 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta n ^ {2}} + 
\frac {4194304 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n ^ {2}} + \frac {4194304 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {5} \alpha^ {3} \beta n ^ {2} \tau} \\ + \frac {8388608 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3}}{\bar {L} ^ {5} \alpha^ {3} \beta n ^ {2} \tau^ {2}} + \frac {33554432 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta^ {4} n ^ {2}} + \frac {33554432 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {6} \alpha^ {4} \beta n ^ {2} \tau} \\ + \frac {34603008 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n ^ {2}} + \frac {67108864 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta^ {3} n ^ {2}} + \frac {67108864 L L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {2} \tau}{\bar {L} ^ {6} \alpha^ {4} \beta n ^ {3}} \\ + \frac {67108864 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3}}{\bar {L} ^ {6} \alpha^ {4} \beta n ^ {2} \tau^ {2}} + \frac {134217728 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {5}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {4} n ^ {2} \tau} + \frac {138412032 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {2} n ^ {2} \tau} \\ + \frac {268435456 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {6}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {4} n ^ {2} \tau^ {2}} + \frac {268435456 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {3} n ^ {2} \tau} + \frac {276824064 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {2} n ^ {2} \tau^ {2}} \\ + \frac {536870912 L L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4} \tau}{\bar {L} ^ {6} \alpha^ {4} \beta^ 
{4} n ^ {3}} + \frac {536870912 L L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {2} \tau}{\bar {L} ^ {6} \alpha^ {4} \beta^ {2} n ^ {3}} + \frac {536870912 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {5}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {3} n ^ {2} \tau^ {2}} \\ + \frac {536870912 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {5}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {4} n ^ {2} \tau} + \frac {570425344 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {2} n ^ {2} \tau} + \frac {1073741824 L L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {5} \tau}{\bar {L} ^ {6} \alpha^ {4} \beta^ {4} n ^ {3}} \\ + \frac {1073741824 L L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {3} \tau}{\bar {L} ^ {6} \alpha^ {4} \beta^ {3} n ^ {3}} + \frac {1073741824 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {6}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {4} n ^ {2} \tau^ {2}} + \frac {1073741824 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {3} n ^ {2} \tau} \\ + \frac {1140850688 L L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {3} \tau}{\bar {L} ^ {6} \alpha^ {4} \beta^ {2} n ^ {3}} + \frac {1140850688 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {2} n ^ {2} \tau^ {2}} + \frac {2147483648 L L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4} \tau}{\bar {L} ^ {6} \alpha^ {4} \beta^ {3} n ^ {3}} \\ + \frac {2147483648 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {5}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {3} n ^ {2} \tau^ {2}} + \frac {2147483648 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {5}}{\bar {L} ^ {7} \alpha^ {4} \beta^ {4} n ^ {3} \tau} + \frac {2147483648 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {3}}{\bar {L} ^ {7} 
\alpha^ {4} \beta^ {2} n ^ {3} \tau} \\ + \frac {4294967296 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {6}}{\bar {L} ^ {7} \alpha^ {4} \beta^ {4} n ^ {3} \tau^ {2}} + \frac {4294967296 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4}}{\bar {L} ^ {7} \alpha^ {4} \beta^ {3} n ^ {3} \tau} + \frac {4294967296 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4}}{\bar {L} ^ {7} \alpha^ {4} \beta^ {2} n ^ {3} \tau^ {2}} \\ + \frac {8589934592 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {5}}{\bar {L} ^ {7} \alpha^ {4} \beta^ {3} n ^ {3} \tau^ {2}} + \frac {8589934592 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {5}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {4} n ^ {3} \tau} + \frac {8589934592 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {3}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {2} n ^ {3} \tau} \\ + \frac {17179869184 L L _ {\max} \hat {L} ^ {6} \omega^ {4} p ^ {5} \tau}{\bar {L} ^ {8} \alpha^ {5} \beta^ {4} n ^ {4}} + \frac {17179869184 L L _ {\max} \hat {L} ^ {6} \omega^ {4} p ^ {3} \tau}{\bar {L} ^ {8} \alpha^ {5} \beta^ {2} n ^ {4}} + \frac {17179869184 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {6}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {4} n ^ {3} \tau^ {2}} \\ + \frac {17179869184 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {3} n ^ {3} \tau} + \frac {17179869184 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {2} n ^ {3} \tau^ {2}} + \frac {34359738368 L L _ {\max} \hat {L} ^ {6} \omega^ {4} p ^ {4} \tau}{\bar {L} ^ {8} \alpha^ {5} \beta^ {3} n ^ {4}} \\ + \frac {34359738368 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {5}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {3} n ^ {3} \tau^ {2}} \leq \frac {3}{4} \\ \end{array} +$$ + +# L 
Symbolically Computed Constraints for $\bar{L}$ Such That The Inequality from Section K Holds + +$$ +\frac {544 L ^ {2}}{\bar {L} ^ {2} \alpha^ {2}} \leq \frac {1}{326} \quad (109) \quad \frac {16384 L ^ {4}}{\bar {L} ^ {4} \alpha^ {4}} \leq \frac {1}{326} \quad (110) \quad \frac {4 L _ {\max} \omega}{\bar {L} n} \leq \frac {1}{326} \tag {111} +$$ + +$$ +\frac {2048 L ^ {2} p}{\bar {L} ^ {2} \alpha^ {2} \tau} \leq \frac {1}{326} \quad (112) \quad \frac {4096 L ^ {2} p ^ {2}}{\bar {L} ^ {2} \alpha^ {2} \tau^ {2}} \leq \frac {1}{326} \quad (113) \quad \frac {65536 L ^ {4} p}{\bar {L} ^ {4} \alpha^ {4} \tau} \leq \frac {1}{326} \tag {114} +$$ + +$$ +\frac {131072 L ^ {4} p ^ {2}}{\bar {L} ^ {4} \alpha^ {4} \tau^ {2}} \leq \frac {1}{326} \quad (115) \quad \frac {16 L L _ {\max} \omega}{\bar {L} ^ {2} \alpha n} \leq \frac {1}{326} \quad (116) \quad \frac {128 L _ {\max} \omega p}{\bar {L} \beta n} \leq \frac {1}{326} \tag {117} +$$ + +$$ +\frac {128 L _ {\max} \omega p ^ {2}}{\bar {L} \beta^ {2} n} \leq \frac {1}{326} \quad (118) \quad \frac {8192 L ^ {3} L _ {\max} \omega}{\bar {L} ^ {4} \alpha^ {3} n} \leq \frac {1}{326} \quad (119) \quad \frac {512 L L _ {\max} \omega p}{\bar {L} ^ {2} \alpha \beta n} \leq \frac {1}{326} \tag {120} +$$ + +$$ +\frac {512 L L _ {\max} \omega p ^ {2}}{\bar {L} ^ {2} \alpha \beta^ {2} n} \leq \frac {1}{326} \quad (121) \quad \frac {2048 L _ {\max} \hat {L} ^ {2} \omega^ {2} p}{\bar {L} ^ {3} \alpha \beta n ^ {2}} \leq \frac {1}{326} \quad (122) \quad \frac {2048 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {3} \alpha \beta^ {2} n ^ {2}} \leq \frac {1}{326} \quad (123) +$$ + +$$ +\frac {4096 L L _ {\max} \omega p \tau}{\bar {L} ^ {2} \alpha^ {2} n} \leq \frac {1}{326} \quad (124) \quad \frac {32768 L ^ {3} L _ {\max} \omega p}{\bar {L} ^ {4} \alpha^ {3} n \tau} \leq \frac {1}{326} \quad (125) \quad \frac {65536 L ^ 
{3} L _ {\max} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} n \tau^ {2}} \leq \frac {1}{326} \quad (126) +$$ + +$$ +\frac {69632 L \hat {L} ^ {2} \omega p}{\bar {L} ^ {3} \alpha^ {2} \beta n} \leq \frac {1}{326} \quad (127) \quad \frac {69632 L \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {3} \alpha^ {2} \beta^ {2} n} \leq \frac {1}{326} \quad (128) \quad \frac {131072 L ^ {2} \hat {L} ^ {2} \omega p \tau}{\bar {L} ^ {4} \alpha^ {4} n} \leq \frac {1}{326} \quad (129) +$$ + +$$ +\frac {262144 L ^ {3} L _ {\max} \omega p}{\bar {L} ^ {4} \alpha^ {3} \beta n} \leq \frac {1}{326} (130) \quad \frac {262144 L ^ {3} L _ {\max} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n} \leq \frac {1}{326} (131) \quad \frac {278528 L ^ {2} \hat {L} ^ {2} \omega p}{\bar {L} ^ {4} \alpha^ {3} \beta n} \leq \frac {1}{326} (132) +$$ + +$$ +\frac {278528 L ^ {2} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n} \leq \frac {1}{326} \quad (133) \quad \frac {2097152 L ^ {3} \hat {L} ^ {2} \omega p}{\bar {L} ^ {5} \alpha^ {4} \beta n} \leq \frac {1}{326} \quad (134) \quad \frac {2097152 L ^ {3} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {5} \alpha^ {4} \beta^ {2} n} \leq \frac {1}{326} \quad (135) +$$ + +$$ +\frac {16777216 L ^ {4} \hat {L} ^ {2} \omega p}{\bar {L} ^ {6} \alpha^ {5} \beta n} \leq \frac {1}{326} (136) \quad \frac {16777216 L ^ {4} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {6} \alpha^ {5} \beta^ {2} n} \leq \frac {1}{326} (137) \quad \frac {1073741824 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {4} n ^ {2}} \leq \frac {1}{326} \tag {138} +$$ + +$$ +\frac {1073741824 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {2}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {2} n ^ {2}} \leq \frac {1}{326} \quad \frac {2147483648 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {3}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {3} n ^ 
{2}} \leq \frac {1}{326} \quad \frac {4294967296 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {4} n ^ {2}} \leq \frac {1}{326} \tag {141} +$$ + +$$ +\frac {4294967296 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {2}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {2} n ^ {2}} \leq \frac {1}{326} \quad \frac {8589934592 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {3}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {3} n ^ {2}} \leq \frac {1}{326} \quad \frac {8192 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p}{\bar {L} ^ {4} \alpha^ {2} \beta n ^ {2}} \leq \frac {1}{326} \tag {144} +$$ + +$$ +\frac {8192 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {4} \alpha^ {2} \beta^ {2} n ^ {2}} \leq \frac {1}{326} (145) \quad \frac {65536 L L _ {\max} \omega p \tau}{\bar {L} ^ {2} \alpha^ {2} \beta n} \leq \frac {1}{326} (146) \quad \frac {65536 L L _ {\max} \omega p ^ {2} \tau}{\bar {L} ^ {2} \alpha^ {2} \beta^ {2} n} \leq \frac {1}{326} (147) +$$ + +$$ +\frac {65536 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p \tau}{\bar {L} ^ {4} \alpha^ {3} n ^ {2}} \leq \frac {1}{326} \quad \frac {262144 L \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {3} \alpha^ {2} \beta n \tau} \leq \frac {1}{326} \quad (149) \quad \frac {262144 L \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {3} \alpha^ {2} \beta^ {2} n \tau} \leq \frac {1}{326} \tag {150} +$$ + +$$ +\frac {524288 L \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {3} \alpha^ {2} \beta n \tau^ {2}} \leq \frac {1}{326} \quad (151) \quad \frac {524288 L \hat {L} ^ {2} \omega p ^ {4}}{\bar {L} ^ {3} \alpha^ {2} \beta^ {2} n \tau^ {2}} \leq \frac {1}{326} \quad (152) \quad \frac {524288 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {3} \alpha^ {2} \beta n ^ {2}} \leq \frac {1}{326} \tag {153} +$$ + +$$ +\frac {524288 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {3} \alpha^ {2} \beta^ {2} n ^ {2}} 
\leq \frac {1}{326} \quad \frac {1048576 L ^ {3} L _ {\max} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} \beta n \tau} \leq \frac {1}{326} \quad \frac {1048576 L ^ {3} L _ {\max} \omega p ^ {3}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n \tau} \leq \frac {1}{326} \tag {156} +$$ + +$$ +\frac {1048576 L ^ {2} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {4} \alpha^ {3} \beta n \tau} \leq \frac {1}{326} (157) \quad \frac {1048576 L ^ {2} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n \tau} \leq \frac {1}{326} (158) \quad \frac {1048576 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p}{\bar {L} ^ {5} \alpha^ {3} \beta n ^ {2}} \leq \frac {1}{326} (159) +$$ + +$$ +\frac {2097152 L ^ {3} L _ {\max} \omega p ^ {3}}{\bar {L} ^ {4} \alpha^ {3} \beta n \tau^ {2}} \leq \frac {1}{326} \quad \frac {2097152 L ^ {3} L _ {\max} \omega p ^ {4}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n \tau^ {2}} \leq \frac {1}{326} \quad \frac {2097152 L ^ {2} \hat {L} ^ {2} \omega p \tau}{\bar {L} ^ {4} \alpha^ {4} \beta n} \leq \frac {1}{326} (162) +$$ + +$$ +\frac {2097152 L ^ {2} \hat {L} ^ {2} \omega p ^ {2} \tau}{\bar {L} ^ {4} \alpha^ {4} \beta^ {2} n} \leq \frac {1}{326} (163) \quad \frac {2097152 L ^ {2} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {4} \alpha^ {3} \beta n \tau^ {2}} \leq \frac {1}{326} (164) \quad \frac {2097152 L ^ {2} \hat {L} ^ {2} \omega p ^ {4}}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n \tau^ {2}} \leq \frac {1}{326} (165) +$$ + +$$ +\frac {8388608 L ^ {3} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {5} \alpha^ {4} \beta n \tau} \leq \frac {1}{326} (166) \quad \frac {8388608 L ^ {3} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {5} \alpha^ {4} \beta^ {2} n \tau} \leq \frac {1}{326} (167) \quad \frac {8388608 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p}{\bar {L} ^ {6} \alpha^ {4} \beta n ^ {2}} \leq \frac {1}{326} (168) +$$ + 
+$$ +\frac {8388608 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4} \tau}{\bar {L} ^ {3} \alpha^ {2} \beta^ {4} n ^ {2}} \leq \frac {1}{326} \quad \frac {8388608 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {3} \alpha^ {2} \beta^ {2} n ^ {2}} \leq \frac {1}{326} \quad \frac {8388608 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {2} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta n ^ {3}} \leq \frac {1}{326} \tag {171} +$$ + +$$ +\frac {16777216 L ^ {3} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {5} \alpha^ {4} \beta n \tau^ {2}} \leq \frac {1}{326} (172) \quad \frac {16777216 L ^ {3} \hat {L} ^ {2} \omega p ^ {4}}{\bar {L} ^ {5} \alpha^ {4} \beta^ {2} n \tau^ {2}} \leq \frac {1}{326} (173) \quad \frac {16777216 L \hat {L} ^ {4} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {5} \alpha^ {4} \beta n ^ {2}} \leq \frac {1}{326} (174) +$$ + +$$ +\frac {16777216 L \hat {L} ^ {4} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {5} \alpha^ {4} \beta^ {2} n ^ {2}} \leq \frac {1}{326} \quad \frac {16777216 L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {3} \alpha^ {2} \beta^ {3} n ^ {2}} \leq \frac {1}{326} \quad \frac {33554432 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {4} n ^ {2}} \leq \frac {1}{326} \tag {177} +$$ + +$$ +\frac {34603008 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {2} n ^ {2}} \leq \frac {1}{326} \quad \frac {67108864 L ^ {4} \hat {L} ^ {2} \omega p ^ {2}}{\bar {L} ^ {6} \alpha^ {5} \beta n \tau} \leq \frac {1}{326} (179) \quad \frac {67108864 L ^ {4} \hat {L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {6} \alpha^ {5} \beta^ {2} n \tau} \leq \frac {1}{326} (180) +$$ + +$$ +\frac {67108864 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {3} n ^ {2}} \leq \frac {1}{326} \quad \frac {134217728 L ^ {4} \hat 
{L} ^ {2} \omega p ^ {3}}{\bar {L} ^ {6} \alpha^ {5} \beta n \tau^ {2}} \leq \frac {1}{326} \quad \frac {134217728 L ^ {4} \hat {L} ^ {2} \omega p ^ {4}}{\bar {L} ^ {6} \alpha^ {5} \beta^ {2} n \tau^ {2}} \leq \frac {1}{326} \tag {183} +$$ + +$$ +\frac {134217728 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {4} n ^ {3}} \leq \frac {1}{326} \quad \frac {134217728 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {2} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {2} n ^ {3}} \leq \frac {1}{326} \quad \frac {134217728 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {4} n ^ {2}} \leq \frac {1}{326} \tag {186} +$$ + +$$ +\frac {134217728 L ^ {2} \hat {L} ^ {4} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {6} \alpha^ {5} \beta n ^ {2}} \leq \frac {1}{326} \quad \frac {134217728 L ^ {2} \hat {L} ^ {4} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {6} \alpha^ {5} \beta^ {2} n ^ {2}} \leq \frac {1}{326} \quad \frac {142606336 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {2} n ^ {2}} \leq \frac {1}{326} \tag {189} +$$ + +$$ +\frac {268435456 L \hat {L} ^ {4} \omega^ {2} p ^ {4} \tau}{\bar {L} ^ {5} \alpha^ {4} \beta^ {4} n ^ {2}} \leq \frac {1}{326} \quad \frac {268435456 L \hat {L} ^ {4} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {5} \alpha^ {4} \beta^ {2} n ^ {2}} \leq \frac {1}{326} \quad \frac {268435456 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {5} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {4} n ^ {3}} \leq \frac {1}{326} \tag {190} +$$ + +$$ +\frac {268435456 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {3} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {3} n ^ {3}} \leq \frac {1}{326} \quad \frac {268435456 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3}}{\bar {L} ^ {6} \alpha^ {4} \beta^ {3} n ^ {2}} \leq \frac {1}{326} \quad \frac {276824064 L _ {\max} 
\hat {L} ^ {4} \omega^ {3} p ^ {3} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {2} n ^ {3}} \leq \frac {1}{326} \tag {195} +$$ + +$$ +\frac {536870912 L \hat {L} ^ {4} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {5} \alpha^ {4} \beta^ {3} n ^ {2}} \leq \frac {1}{326} \quad \frac {536870912 L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4} \tau}{\bar {L} ^ {5} \alpha^ {3} \beta^ {3} n ^ {3}} \leq \frac {1}{326} \quad \frac {536870912 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4}}{\bar {L} ^ {7} \alpha^ {4} \beta^ {4} n ^ {3}} \leq \frac {1}{326} \tag {198} +$$ + +$$ +\frac {536870912 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {2}}{\bar {L} ^ {7} \alpha^ {4} \beta^ {2} n ^ {3}} \leq \frac {1}{326} \quad \frac {1073741824 L ^ {2} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {3}}{\bar {L} ^ {7} \alpha^ {4} \beta^ {3} n ^ {3}} \leq \frac {1}{326} \quad \frac {1073741824 L ^ {2} \hat {L} ^ {4} \omega^ {2} p ^ {4} \tau}{\bar {L} ^ {6} \alpha^ {5} \beta^ {4} n ^ {2}} \leq \frac {1}{326} \tag {201} +$$ + +$$ +\frac {1073741824 L ^ {2} \hat {L} ^ {4} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {6} \alpha^ {5} \beta^ {2} n ^ {2}} \leq \frac {1}{326} \quad \frac {2147483648 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {4} n ^ {3}} \leq \frac {1}{326} \quad \frac {2147483648 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {2}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {2} n ^ {3}} \leq \frac {1}{326} \tag {204} +$$ + +$$ +\frac {2147483648 L ^ {2} \hat {L} ^ {4} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {6} \alpha^ {5} \beta^ {3} n ^ {2}} \leq \frac {1}{326} \quad \frac {4294967296 L _ {\max} \hat {L} ^ {6} \omega^ {4} p ^ {5} \tau}{\bar {L} ^ {7} \alpha^ {4} \beta^ {4} n ^ {4}} \leq \frac {1}{326} \quad \frac {4294967296 L _ {\max} \hat {L} ^ {6} \omega^ {4} p ^ {3} \tau}{\bar {L} ^ {7} \alpha^ {4} \beta^ {2} n ^ {4}} 
\leq \frac {1}{326} \tag {207} +$$ + +$$ +\frac {4294967296 L ^ {3} L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {3}}{\bar {L} ^ {8} \alpha^ {5} \beta^ {3} n ^ {3}} \leq \frac {1}{326} \quad \frac {4294967296 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {5}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {4} n ^ {2} \tau} \leq \frac {1}{326} \quad \frac {4294967296 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {3}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {2} n ^ {2} \tau} \leq \frac {1}{326} \tag {210} +$$ + +$$ +\frac {8589934592 L \hat {L} ^ {6} \omega^ {3} p ^ {5} \tau}{\bar {L} ^ {7} \alpha^ {5} \beta^ {4} n ^ {3}} \leq \frac {1}{326} \quad \frac {8589934592 L \hat {L} ^ {6} \omega^ {3} p ^ {3} \tau}{\bar {L} ^ {7} \alpha^ {5} \beta^ {2} n ^ {3}} \leq \frac {1}{326} \quad \frac {8589934592 L _ {\max} \hat {L} ^ {6} \omega^ {4} p ^ {4} \tau}{\bar {L} ^ {7} \alpha^ {4} \beta^ {3} n ^ {4}} \leq \frac {1}{326} \tag {213} +$$ + +$$ +\frac {8589934592 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {6}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {4} n ^ {2} \tau^ {2}} \leq \frac {1}{326} \quad \frac {8589934592 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {3} n ^ {2} \tau} \leq \frac {1}{326} \quad \frac {8589934592 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {2} n ^ {2} \tau^ {2}} \leq \frac {1}{326} \tag {216} +$$ + +$$ +\frac {17179869184 L \hat {L} ^ {6} \omega^ {3} p ^ {4} \tau}{\bar {L} ^ {7} \alpha^ {5} \beta^ {3} n ^ {3}} \leq \frac {1}{326} \quad \frac {17179869184 L ^ {3} \hat {L} ^ {4} \omega^ {2} p ^ {5}}{\bar {L} ^ {7} \alpha^ {5} \beta^ {3} n ^ {2} \tau^ {2}} \leq \frac {1}{326} \quad \frac {17179869184 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {5}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {4} n ^ {2} \tau} \leq \frac {1}{326} \tag {219} +$$ + +$$ +\frac {17179869184 L ^ {4} \hat {L} ^ {4} \omega^ {2} p 
^ {3}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {2} n ^ {2} \tau} \leq \frac {1}{326} \quad \frac {34359738368 L ^ {2} \hat {L} ^ {6} \omega^ {3} p ^ {5} \tau}{\bar {L} ^ {8} \alpha^ {6} \beta^ {4} n ^ {3}} \leq \frac {1}{326} \quad \frac {34359738368 L ^ {2} \hat {L} ^ {6} \omega^ {3} p ^ {3} \tau}{\bar {L} ^ {8} \alpha^ {6} \beta^ {2} n ^ {3}} \leq \frac {1}{326} \tag {222} +$$ + +$$ +\frac {34359738368 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {6}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {4} n ^ {2} \tau^ {2}} \leq \frac {1}{326} \quad \frac {34359738368 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {3} n ^ {2} \tau} \leq \frac {1}{326} \quad \frac {34359738368 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {4}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {2} n ^ {2} \tau^ {2}} \leq \frac {1}{326} \tag {225} +$$ + +$$ +\frac {68719476736 L ^ {2} \hat {L} ^ {6} \omega^ {3} p ^ {4} \tau}{\bar {L} ^ {8} \alpha^ {6} \beta^ {3} n ^ {3}} \leq \frac {1}{326} \quad \frac {68719476736 L ^ {4} \hat {L} ^ {4} \omega^ {2} p ^ {5}}{\bar {L} ^ {8} \alpha^ {6} \beta^ {3} n ^ {2} \tau^ {2}} \leq \frac {1}{326} \quad \frac {1048576 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p \tau}{\bar {L} ^ {4} \alpha^ {3} \beta n ^ {2}} \leq \frac {1}{326} \tag {228} +$$ + +$$ +\frac {4194304 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta n ^ {2}} \leq \frac {1}{326} \quad \frac {4194304 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n ^ {2}} \leq \frac {1}{326} \quad \frac {4194304 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {5} \alpha^ {3} \beta n ^ {2} \tau} \leq \frac {1}{326} \tag {231} +$$ + +$$ +\frac {8388608 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3}}{\bar {L} ^ {5} \alpha^ {3} \beta n ^ {2} \tau^ {2}} \leq \frac {1}{326} \quad \frac {33554432 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta^ {4} n ^ {2}} \leq \frac {1}{326} \quad \frac {33554432 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2}}{\bar {L} ^ {6} \alpha^ {4} \beta n ^ {2} \tau} \leq \frac {1}{326} \tag {234} +$$ + +$$ +\frac {34603008 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {2} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta^ {2} n ^ {2}} \leq \frac {1}{326} \quad \frac {67108864 L L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3} \tau}{\bar {L} ^ {4} \alpha^ {3} \beta^ {3} n ^ {2}} \leq \frac {1}{326} \quad \frac {67108864 L L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {2} \tau}{\bar {L} ^ {6} \alpha^ {4} \beta n ^ {3}} \leq \frac {1}{326} \tag {237} +$$ + +$$ +\frac {67108864 L ^ {3} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3}}{\bar {L} ^ {6} \alpha^ {4} \beta n ^ {2} \tau^ {2}} \leq \frac {1}{326} \quad \frac {134217728 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {5}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {4} n ^ {2} \tau} \leq \frac {1}{326} \quad \frac {138412032 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {3}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {2} n ^ {2} \tau} \leq \frac {1}{326} \tag {240} +$$ + +$$ +\frac {268435456 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {6}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {4} n ^ {2} \tau^ {2}} \leq \frac {1}{326} \quad \frac {268435456 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {3} n ^ {2} \tau} \leq \frac {1}{326} \quad \frac {276824064 L ^ {2} L _ {\max} \hat {L} ^ {2} \omega^ {2} p ^ {4}}{\bar {L} ^ {5} \alpha^ {3} \beta^ {2} n ^ {2} \tau^ {2}} \leq \frac {1}{326} \tag {243} +$$ + +$$ +\frac {536870912 L L _ {\max} \hat {L} ^ {4} \omega^ {3} p ^ {4} \tau}{\bar {L} ^ {6} \alpha^ {4} \beta^ {4} n ^ {3}} \leq \frac {1}{326} \quad \frac {536870912 L L _ {\max} \hat {L} ^ {4} \omega^ 
{3} p^2 \tau}{\bar{L}^6 \alpha^4 \beta^2 n^3} \leq \frac{1}{326} \quad \frac{536870912 L^2 L_{\max} \hat{L}^2 \omega^2 p^5}{\bar{L}^5 \alpha^3 \beta^3 n^2 \tau^2} \leq \frac{1}{326} \tag{246}
$$

$$
\frac{536870912 L^3 L_{\max} \hat{L}^2 \omega^2 p^5}{\bar{L}^6 \alpha^4 \beta^4 n^2 \tau} \leq \frac{1}{326} \quad \frac{570425344 L^3 L_{\max} \hat{L}^2 \omega^2 p^3}{\bar{L}^6 \alpha^4 \beta^2 n^2 \tau} \leq \frac{1}{326} \quad \frac{1073741824 L L_{\max} \hat{L}^4 \omega^3 p^5 \tau}{\bar{L}^6 \alpha^4 \beta^4 n^3} \leq \frac{1}{326} \tag{249}
$$

$$
\frac{1073741824 L L_{\max} \hat{L}^4 \omega^3 p^3 \tau}{\bar{L}^6 \alpha^4 \beta^3 n^3} \leq \frac{1}{326} \quad \frac{1073741824 L^3 L_{\max} \hat{L}^2 \omega^2 p^6}{\bar{L}^6 \alpha^4 \beta^4 n^2 \tau^2} \leq \frac{1}{326} \quad \frac{1073741824 L^3 L_{\max} \hat{L}^2 \omega^2 p^4}{\bar{L}^6 \alpha^4 \beta^3 n^2 \tau} \leq \frac{1}{326} \tag{252}
$$

$$
\frac{1140850688 L L_{\max} \hat{L}^4 \omega^3 p^3 \tau}{\bar{L}^6 \alpha^4 \beta^2 n^3} \leq \frac{1}{326} \quad \frac{1140850688 L^3 L_{\max} \hat{L}^2 \omega^2 p^4}{\bar{L}^6 \alpha^4 \beta^2 n^2 \tau^2} \leq \frac{1}{326} \quad \frac{2147483648 L L_{\max} \hat{L}^4 \omega^3 p^4 \tau}{\bar{L}^6 \alpha^4 \beta^3 n^3} \leq \frac{1}{326} \tag{255}
$$

$$
\frac{2147483648 L^3 L_{\max} \hat{L}^2 \omega^2 p^5}{\bar{L}^6 \alpha^4 \beta^3 n^2 \tau^2} \leq \frac{1}{326} \quad \frac{2147483648 L^2 L_{\max} \hat{L}^4 \omega^3 p^5}{\bar{L}^7 \alpha^4 \beta^4 n^3 \tau} \leq \frac{1}{326} \quad \frac{2147483648 L^2 L_{\max} \hat{L}^4 \omega^3 p^3}{\bar{L}^7 \alpha^4 \beta^2 n^3 \tau} \leq \frac{1}{326} \tag{258}
$$

$$
\frac{4294967296 L^2 L_{\max} \hat{L}^4 \omega^3 p^6}{\bar{L}^7 \alpha^4 \beta^4 n^3 \tau^2} \leq \frac{1}{326} \quad \frac{4294967296 L^2 L_{\max} \hat{L}^4 \omega^3 p^4}{\bar{L}^7 \alpha^4 \beta^3 n^3 \tau} \leq \frac{1}{326} \quad \frac{4294967296 L^2 L_{\max} \hat{L}^4 \omega^3 p^4}{\bar{L}^7 \alpha^4 \beta^2 n^3 \tau^2} \leq \frac{1}{326} \tag{261}
$$

$$
\frac{8589934592 L^2 L_{\max} \hat{L}^4 \omega^3 p^5}{\bar{L}^7 \alpha^4 \beta^3 n^3 \tau^2} \leq \frac{1}{326} \quad \frac{8589934592 L^3 L_{\max} \hat{L}^4 \omega^3 p^5}{\bar{L}^8 \alpha^5 \beta^4 n^3 \tau} \leq \frac{1}{326} \quad \frac{8589934592 L^3 L_{\max} \hat{L}^4 \omega^3 p^3}{\bar{L}^8 \alpha^5 \beta^2 n^3 \tau} \leq \frac{1}{326} \tag{264}
$$

$$
\frac{17179869184 L L_{\max} \hat{L}^6 \omega^4 p^5 \tau}{\bar{L}^8 \alpha^5 \beta^4 n^4} \leq \frac{1}{326} \quad \frac{17179869184 L L_{\max} \hat{L}^6 \omega^4 p^3 \tau}{\bar{L}^8 \alpha^5 \beta^2 n^4} \leq \frac{1}{326} \quad \frac{17179869184 L^3 L_{\max} \hat{L}^4 \omega^3 p^6}{\bar{L}^8 \alpha^5 \beta^4 n^3 \tau^2} \leq \frac{1}{326} \tag{267}
$$

$$
\frac{17179869184 L^3 L_{\max} \hat{L}^4 \omega^3 p^4}{\bar{L}^8 \alpha^5 \beta^3 n^3 \tau} \leq \frac{1}{326} \quad \frac{17179869184 L^3 L_{\max
} \hat{L}^4 \omega^3 p^4}{\bar{L}^8 \alpha^5 \beta^2 n^3 \tau^2} \leq \frac{1}{326} \quad \frac{34359738368 L L_{\max} \hat{L}^6 \omega^4 p^4 \tau}{\bar{L}^8 \alpha^5 \beta^3 n^4} \leq \frac{1}{326} \tag{270}
$$

$$
\frac{34359738368 L^3 L_{\max} \hat{L}^4 \omega^3 p^5}{\bar{L}^8 \alpha^5 \beta^3 n^3 \tau^2} \leq \frac{1}{326} \tag{271}
$$

# M Symbolically Computed Expression (92)

$$
\begin{array}{l} \frac{2 L}{\bar{L}} + \frac{1024 L^3}{\bar{L}^3 \alpha^2} + \frac{4096 L^3 p}{\bar{L}^3 \alpha^2 \tau} \\ + \frac{8192 L^3 p^2}{\bar{L}^3 \alpha^2 \tau^2} + \frac{256 \hat{L}^2 \omega p}{\bar{L}^2 \beta n} + \frac{256 \hat{L}^2 \omega p^2}{\bar{L}^2 \beta^2 n} \\ + \frac{1024 L \hat{L}^2 \omega p}{\bar{L}^3 \alpha \beta n} + \frac{1024 L \hat{L}^2 \omega p^2}{\bar{L}^3 \alpha \beta^2 n} + \frac{8192 L \hat{L}^2 \omega p \tau}{\bar{L}^3 \alpha^2 n} \\ + \frac{131072 L^2 \hat{L}^2 \omega p}{\bar{L}^4 \alpha^2 \beta n} + \frac{131072 L^2 \hat{L}^2 \omega p^2}{\bar{L}^4 \alpha^2 \beta^2 n} + \frac{1048576 L^3 \hat{L}^2 \omega p}{\bar{L}^5 \alpha^3 \beta n} \\ + \frac{1048576 L^3 \hat{L}^2 \omega p^2}{\bar{L}^5 \alpha^3 \beta^2 n} + \frac{1048576 \hat{L}^4 \omega^2 p^2 \tau}{\bar{L}^4 \alpha^2 \beta n^2} + \frac{1048576 \hat{L}^4 \omega^2 p^3 \tau}{\bar{L}^4 \alpha^2 \beta^2 n^2} \\ + \frac{16777216 \hat{L}^4 \omega^2 p^4 \tau}{\bar{L}^4 \alpha^2 \beta^4 n^2} + \frac{16777216 \hat{L}^4 \omega^2 p^2 \tau}{\bar{L}^4 \alpha^2 \beta^2 n^2} + \frac{33554432 \hat{L}^4 \omega^2 p^3 \tau}{\bar{L}^4 \alpha^2 \beta^3 n^2} \\ + \frac{67108864 L^2 \hat{L}^4 \omega^2 p^4}{\bar{L}^6 \alpha^3 \beta^4 n^2} + \frac{67108864 L^2 \hat{L}^4 \omega^2 p^2}{\bar{L}^6 \alpha^3 \beta^2 n^2} + \frac{134217728 L^2 \hat{L}^4 \omega^2 p^3}{\bar{L}^6 \alpha^3 \beta^3 n^2} \\ + \frac{268435456 L^3 \hat{L}^4 \omega^2 p^4}{\bar{L}^7 \alpha^4 \beta^4 n^2} + \frac{268435456 L^3 \hat{L}^4 \omega^2 p^2}{\bar{L}^7 \alpha^4 \beta^2 n^2} + \frac{536870912 \hat{L}^6 \omega^3 p^5 \tau}{\bar{L}^6 \alpha^3 \beta^4 n^3} \\ + \frac{536870912 \hat{L}^6 \omega^3 p^3 \tau}{\bar{L}^6 \alpha^3 \beta^2 n^3} + \frac{536870912 L^3 \hat{L}^4 \omega^2 p^3}{\bar{L}^7 \alpha^4 \beta^3 n^2} + \frac{1073741824 \hat{L}^6 \omega^3 p^4 \tau}{\bar{L}^6 \alpha^3 \beta^3 n^3} \\ + \frac{131072 L \hat{L}^2 \omega p \tau}{\bar{L}^3 \alpha^2 \beta n} + \frac{131072 L \hat{L}^2 \omega p^2 \tau}{\bar{L}^3 \alpha^2 \beta^2 n} + \frac{524288 L^2 \hat{L}^2 \omega p^2}{\bar{L}^4 \alpha^2 \beta n \tau} \\ + \frac{524288 L^2 \hat{L}^2 \omega p^3}{\bar{L}^4 \alpha^2 \beta^2 n \tau} + \frac{1048576 L^2 \hat{L}^2 \omega p^3}{\bar{L}^4 \alpha^2 \beta n \tau^2} + \frac{1048576 L^2 \hat{L}^2 \omega p^4}{\bar{L}^4 \alpha^2 \beta^2 n \tau^2} \\ + \frac{4194304 L^3 \hat{L}^2 \omega p^2}{\bar{L}^5 \alpha^3 \beta n \tau} + \frac{4194304 L^3 \hat{L}^2 \omega p^3}{\bar{L}^5 \alpha^3 \beta^2 n \tau} + \frac{83886
08 L^3 \hat{L}^2 \omega p^3}{\bar{L}^5 \alpha^3 \beta n \tau^2} \\ \end{array}
$$

$$
\begin{array}{l} + \frac{8388608 L^3 \hat{L}^2 \omega p^4}{\bar{L}^5 \alpha^3 \beta^2 n \tau^2} + \frac{8388608 L \hat{L}^4 \omega^2 p^2 \tau}{\bar{L}^5 \alpha^3 \beta n^2} + \frac{8388608 L \hat{L}^4 \omega^2 p^3 \tau}{\bar{L}^5 \alpha^3 \beta^2 n^2} \\ + \frac{67108864 L \hat{L}^4 \omega^2 p^4 \tau}{\bar{L}^5 \alpha^3 \beta^4 n^2} + \frac{67108864 L \hat{L}^4 \omega^2 p^2 \tau}{\bar{L}^5 \alpha^3 \beta^2 n^2} + \frac{134217728 L \hat{L}^4 \omega^2 p^3 \tau}{\bar{L}^5 \alpha^3 \beta^3 n^2} \\ + \frac{268435456 L^2 \hat{L}^4 \omega^2 p^5}{\bar{L}^6 \alpha^3 \beta^4 n^2 \tau} + \frac{268435456 L^2 \hat{L}^4 \omega^2 p^3}{\bar{L}^6 \alpha^3 \beta^2 n^2 \tau} + \frac{536870912 L^2 \hat{L}^4 \omega^2 p^6}{\bar{L}^6 \alpha^3 \beta^4 n^2 \tau^2} \\ + \frac{536870912 L^2 \hat{L}^4 \omega^2 p^4}{\bar{L}^6 \alpha^3 \beta^3 n^2 \tau} + \frac{536870912 L^2 \hat{L}^4 \omega^2 p^4}{\bar{L}^6 \alpha^3 \beta^2 n^2 \tau^2} + \frac{1073741824 L^2 \hat{L}^4 \omega^2 p^5}{\bar{L}^6 \alpha^3 \beta^3 n^2 \tau^2} \\ + \frac{1073741824 L^3 \hat{L}^4 \omega^2 p^5}{\bar{L}^7 \alpha^4 \beta^4 n^2 \tau} + \frac{1073741824 L^3 \hat{L}^4 \omega^2 p^3}{\bar{L}^7 \alpha^4 \beta^2 n^2 \tau} + \frac{2147483648 L \hat{L}^6 \omega^3 p^5 \tau}{\bar{L}^7 \alpha^4 \beta^4 n^3} \\ + \frac{2147483648 L \hat{L}^6 \omega^3 p^3 \tau}{\bar{L}^7 \alpha^4 \beta^2 n^3} + \frac{2147483648 L^3 \hat{L}^4 \omega^2 p^6}{\bar{L}^7 \alpha^4 \beta^4 n^2 \tau^2} + \frac{2147483648 L^3 \hat{L}^4 \omega^2 p^4}{\bar{L}^7 \alpha^4 \beta^3 n^2 \tau} \\ + \frac{2147483648 L^3 \hat{L}^4 \omega^2 p^4}{\bar{L}^7 \alpha^4 \beta^2 n^2 \tau^2} + \frac{4294967296 L \hat{L}^6 \omega^3 p^4 \tau}{\bar{L}^7 \alpha^4 \beta^3 n^3} + \frac{4294967296 L^3 \hat{L}^4 \omega^2 p^5}{\bar{L}^7 \alpha^4 \beta^3 n^2 \tau^2} \leq 1 \\ \end{array}
$$

# N Symbolically Computed Constraints for $\bar{L}$ Such That The Inequality from Section M Holds

$$
\frac{2 L}{\bar{L}} \leq \frac{1}{114} \quad (272) \quad \frac{1024 L^3}{\bar{L}^3 \alpha^2} \leq \frac{1}{114} \quad (273) \quad \frac{4096 L^3 p}{\bar{L}^3 \alpha^2 \tau} \leq \frac{1}{114} \tag{274}
$$

$$
\frac{8192 L^3 p^2}{\bar{L}^3 \alpha^2 \tau^2} \leq \frac{1}{114} \quad (275) \quad \frac{256 \hat{L}^2 \omega p}{\bar{L}^2 \beta n} \leq \frac{1}{114} \quad (276) \quad \frac{256 \hat{L}^2 \omega p^2}{\bar{L}^2 \beta^2 n} \leq \frac{1}{114} \tag{277}
$$

$$
\frac{1024 L \hat{L}^2 \omega p}{\bar{L}^3 \alpha \beta n} \leq \frac{1}{114} \quad (278) \quad \frac{1024 L \hat{L}^2 \omega p^2}{\bar{L}^3 \alpha \beta^2 n} \leq \frac{1}{114} \quad (279) \quad \frac{8192 L \hat{L}^2 \omega p \tau}{\bar{L}^3 \alpha^2 n} \leq \frac{1}{114} \tag{280}
$$

$$
\frac{131072 L^2 \hat{L}^2 \omega p}{\bar{L}^4 \alpha^2 \beta n} \leq \frac{1}{114} \quad (281) \quad \frac{131072 L^2 \hat{L}^2 \omega p^2}{\bar{L}^
{4} \alpha^2 \beta^2 n} \leq \frac{1}{114} \quad (282) \quad \frac{1048576 L^3 \hat{L}^2 \omega p}{\bar{L}^5 \alpha^3 \beta n} \leq \frac{1}{114} \tag{283}
$$

$$
\frac{1048576 L^3 \hat{L}^2 \omega p^2}{\bar{L}^5 \alpha^3 \beta^2 n} \leq \frac{1}{114} \quad (284) \quad \frac{1048576 \hat{L}^4 \omega^2 p^2 \tau}{\bar{L}^4 \alpha^2 \beta n^2} \leq \frac{1}{114} \quad (285) \quad \frac{1048576 \hat{L}^4 \omega^2 p^3 \tau}{\bar{L}^4 \alpha^2 \beta^2 n^2} \leq \frac{1}{114} \tag{286}
$$

$$
\frac{16777216 \hat{L}^4 \omega^2 p^4 \tau}{\bar{L}^4 \alpha^2 \beta^4 n^2} \leq \frac{1}{114} \quad (287) \quad \frac{16777216 \hat{L}^4 \omega^2 p^2 \tau}{\bar{L}^4 \alpha^2 \beta^2 n^2} \leq \frac{1}{114} \quad (288) \quad \frac{33554432 \hat{L}^4 \omega^2 p^3 \tau}{\bar{L}^4 \alpha^2 \beta^3 n^2} \leq \frac{1}{114} \tag{289}
$$

$$
\frac{67108864 L^2 \hat{L}^4 \omega^2 p^4}{\bar{L}^6 \alpha^3 \beta^4 n^2} \leq \frac{1}{114} \quad (290) \quad \frac{67108864 L^2 \hat{L}^4 \omega^2 p^2}{\bar{L}^6 \alpha^3 \beta^2 n^2} \leq \frac{1}{114} \quad (291) \quad \frac{134217728 L^2 \hat{L}^4 \omega^2 p^3}{\bar{L}^6 \alpha^3 \beta^3 n^2} \leq \frac{1}{114} \tag{292}
$$

$$
\frac{268435456 L^3 \hat{L}^4 \omega^2 p^4}{\bar{L}^7 \alpha^4 \beta^4 n^2} \leq \frac{1}{114} \tag{293}
$$

$$
\frac{268435456 L^3 \hat{L}^4 \omega^2 p^2}{\bar{L}^7 \alpha^4 \beta^2 n^2} \leq \frac{1}{114} \tag{294}
$$

$$
\frac{536870912 \hat{L}^6 \omega^3 p^5 \tau}{\bar{L}^6 \alpha^3 \beta^4 n^3} \leq \frac{1}{114} \tag{295}
$$

$$
\frac{536870912 \hat{L}^6 \omega^3 p^3 \tau}{\bar{L}^6 \alpha^3 \beta^2 n^3} \leq \frac{1}{114} \tag{296}
$$

$$
\frac{536870912 L^3 \hat{L}^4 \omega^2 p^3}{\bar{L}^7 \alpha^4 \beta^3 n^2} \leq \frac{1}{114} \tag{297}
$$

$$
\frac{1073741824 \hat{L}^6 \omega^3 p^4 \tau}{\bar{L}^6 \alpha^3 \beta^3 n^3} \leq \frac{1}{114} \tag{298}
$$

$$
\frac{131072 L \hat{L}^2 \omega p \tau}{\bar{L}^3 \alpha^2 \beta n} \leq \frac{1}{114} \tag{299}
$$

$$
\frac{131072 L \hat{L}^2 \omega p^2 \tau}{\bar{L}^3 \alpha^2 \beta^2 n} \leq \frac{1}{114} \tag{300}
$$

$$
\frac{524288 L^2 \hat{L}^2 \omega p^2}{\bar{L}^4 \alpha^2 \beta n \tau} \leq \frac{1}{114} \tag{301}
$$

$$
\frac{524288 L^2 \hat{L}^2 \omega p^3}{\bar{L}^4 \alpha^2 \beta^2 n \tau} \leq \frac{1}{114} \tag{302}
$$

$$
\frac{1048576 L^2 \hat{L}^2 \omega p^3}{\bar{L}^4 \alpha^2 \beta n \tau^2} \leq \frac{1}{114} \tag{303}
$$

$$
\frac{1048576 L^2 \hat{L}^2 \omega p^4}{\bar{L}^4 \alpha^2 \beta^2 n \tau^2} \leq \frac{1}{114} \tag{304}
$$

$$
\frac{4194304 L^3 \hat{L}^2 \omega p^2}{\bar{L}^5 \alpha^3 \beta n \tau} \leq \frac{1}{114} \tag{305}
$$

$$
\frac{4194304 L^3 \hat{L}^2 \omega p^3}{\bar{L}^5 \alpha^3 \beta^2 n \tau} \leq \frac{1}{114} \tag{306}
$$

$$
\frac{8388608 L^3 \hat{L}^2 \omega p^3}{\bar{L}^5 \alpha^3 \beta n \tau^2} \leq \frac{1}{114} \tag{307}
$$

$$
\frac{8388608 L^3 \hat{L}^2 \omega p^4}{\bar{L}^5 \alpha^3 \beta^2 n \tau^2} \leq \frac{1}{114} \tag{308}
$$

$$
\frac{8388608 L
\hat{L}^4 \omega^2 p^2 \tau}{\bar{L}^5 \alpha^3 \beta n^2} \leq \frac{1}{114} \tag{309}
$$

$$
\frac{8388608 L \hat{L}^4 \omega^2 p^3 \tau}{\bar{L}^5 \alpha^3 \beta^2 n^2} \leq \frac{1}{114} \tag{310}
$$

$$
\frac{67108864 L \hat{L}^4 \omega^2 p^4 \tau}{\bar{L}^5 \alpha^3 \beta^4 n^2} \leq \frac{1}{114} \tag{311}
$$

$$
\frac{67108864 L \hat{L}^4 \omega^2 p^2 \tau}{\bar{L}^5 \alpha^3 \beta^2 n^2} \leq \frac{1}{114} \tag{312}
$$

$$
\frac{134217728 L \hat{L}^4 \omega^2 p^3 \tau}{\bar{L}^5 \alpha^3 \beta^3 n^2} \leq \frac{1}{114} \tag{313}
$$

$$
\frac{268435456 L^2 \hat{L}^4 \omega^2 p^5}{\bar{L}^6 \alpha^3 \beta^4 n^2 \tau} \leq \frac{1}{114} \tag{314}
$$

$$
\frac{268435456 L^2 \hat{L}^4 \omega^2 p^3}{\bar{L}^6 \alpha^3 \beta^2 n^2 \tau} \leq \frac{1}{114} \tag{315}
$$

$$
\frac{536870912 L^2 \hat{L}^4 \omega^2 p^6}{\bar{L}^6 \alpha^3 \beta^4 n^2 \tau^2} \leq \frac{1}{114} \tag{316}
$$

$$
\frac{536870912 L^2 \hat{L}^4 \omega^2 p^4}{\bar{L}^6 \alpha^3 \beta^3 n^2 \tau} \leq \frac{1}{114} \tag{317}
$$

$$
\frac{536870912 L^2 \hat{L}^4 \omega^2 p^4}{\bar{L}^6 \alpha^3 \beta^2 n^2 \tau^2} \leq \frac{1}{114} \tag{318}
$$

$$
\frac{1073741824 L^2 \hat{L}^4 \omega^2 p^5}{\bar{L}^6 \alpha^3 \beta^3 n^2 \tau^2} \leq \frac{1}{114} \tag{319}
$$

$$
\frac{1073741824 L^3 \hat{L}^4 \omega^2 p^5}{\bar{L}^7 \alpha^4 \beta^4 n^2 \tau} \leq \frac{1}{114} \tag{320}
$$

$$
\frac{1073741824 L^3 \hat{L}^4 \omega^2 p^
{3}}{\bar{L}^7 \alpha^4 \beta^2 n^2 \tau} \leq \frac{1}{114} \tag{321}
$$

$$
\frac{2147483648 L \hat{L}^6 \omega^3 p^5 \tau}{\bar{L}^7 \alpha^4 \beta^4 n^3} \leq \frac{1}{114} \tag{322}
$$

$$
\frac{2147483648 L \hat{L}^6 \omega^3 p^3 \tau}{\bar{L}^7 \alpha^4 \beta^2 n^3} \leq \frac{1}{114} \quad (323) \quad \frac{2147483648 L^3 \hat{L}^4 \omega^2 p^6}{\bar{L}^7 \alpha^4 \beta^4 n^2 \tau^2} \leq \frac{1}{114} \quad (324) \quad \frac{2147483648 L^3 \hat{L}^4 \omega^2 p^4}{\bar{L}^7 \alpha^4 \beta^3 n^2 \tau} \leq \frac{1}{114} \tag{325}
$$

$$
\frac{2147483648 L^3 \hat{L}^4 \omega^2 p^4}{\bar{L}^7 \alpha^4 \beta^2 n^2 \tau^2} \leq \frac{1}{114} \quad (326) \quad \frac{4294967296 L \hat{L}^6 \omega^3 p^4 \tau}{\bar{L}^7 \alpha^4 \beta^3 n^3} \leq \frac{1}{114} \quad (327) \quad \frac{4294967296 L^3 \hat{L}^4 \omega^2 p^5}{\bar{L}^7 \alpha^4 \beta^3 n^2 \tau^2} \leq \frac{1}{114} \tag{328}
$$

# O Symbolic Check That The Constraints from Sections I, J, L and N Follow From The Constraint (44)

Note that the inequalities from Lemma F.1 follow from (44). Therefore, the inequalities from Sections I, J, L and N follow from (44) if they follow from the inequalities from Lemma F.1. We now present these implications. The results are checked and generated using the script in Section P (see Section 6 in Section P).

(93) follows from (74), (74).
(94) follows from (73), (73).
(95) follows from (77), (77).
(96) follows from (78), (77).
(97) follows from (78), (78).
(98) follows from (72), (72).
(99) follows from (71), (71).
(100) follows from (70), (70).
(101) follows from (77), (77), (74), (74).
(102) follows from (77), (77), (73), (73).
(103) follows from (74), (74), (72), (72).
+(104) follows from (73), (73), (72), (72). +(105) follows from (78), (77), (74), (74). +(106) follows from (78), (77), (73), (73). +(107) follows from (78), (78), (74), (74). +(108) follows from (78), (78), (73), (73). +(109) follows from (77), (77). +(110) follows from (77), (77), (77), (77). +(111) follows from (66). +(112) follows from (78), (77). +(113) follows from (78), (78). +(114) follows from (78), (77), (77), (77). +(115) follows from (78), (78), (77), (77). +(116) follows from (77), (66). +(117) follows from (65). +(118) follows from (64). +(119) follows from (77), (77), (77), (66). + +(120) follows from (77), (65). +(121) follows from (77), (64). +(122) follows from (74), (74), (66). +(123) follows from (74), (74), (65). +(124) follows from (69), (69). +(125) follows from (78), (77), (77), (66). +(126) follows from (78), (78), (77), (66). +(127) follows from (77), (74), (74). +(128) follows from (77), (73), (73). +(129) follows from (77), (77), (72), (72). +(130) follows from (77), (77), (77), (65). +(131) follows from (77), (77), (77), (64). +(132) follows from (77), (77), (74), (74). +(133) follows from (77), (77), (73), (73). +(134) follows from (77), (77), (77), (74), (74). +(135) follows from (77), (77), (77), (73), (73). +(136) follows from (77), (77), (77), (77), (74), (74). +(137) follows from (77), (77), (77), (77), (73), (73). +(138) follows from (77), (77), (77), (73), (73), (73), (73). +(139) follows from (77), (77), (77), (74), (74), (74), (74). +(140) follows from (77), (77), (77), (74), (74), (73), (73). +(141) follows from (77), (77), (77), (77), (73), (73), (73), (73). +(142) follows from (77), (77), (77), (77), (74), (74), (74), (74). +(143) follows from (77), (77), (77), (77), (74), (74), (73), (73). +(144) follows from (77), (74), (74), (66). +(145) follows from (77), (74), (74), (65). +(146) follows from (68), (68). +(147) follows from (67), (67). +(148) follows from (77), (72), (72), (66). +(149) follows from (78), (74), (74). 
+(150) follows from (78), (73), (73). +(151) follows from (81), (81), (81). +(152) follows from (80), (80), (80). +(153) follows from (72), (72), (65). +(154) follows from (72), (72), (64). +(155) follows from (78), (77), (77), (65). +(156) follows from (78), (77), (77), (64). +(157) follows from (78), (77), (74), (74). +(158) follows from (78), (77), (73), (73). +(159) follows from (77), (77), (74), (74), (66). +(160) follows from (78), (78), (77), (65). +(161) follows from (78), (78), (77), (64). +(162) follows from (77), (77), (71), (71). +(163) follows from (77), (77), (70), (70). +(164) follows from (78), (78), (74), (74). +(165) follows from (78), (78), (73), (73). + +(166) follows from (78), (77), (77), (74), (74). +(167) follows from (78), (77), (77), (73), (73). +(168) follows from (77), (77), (77), (74), (74), (66). +(169) follows from (70), (70), (64). +(170) follows from (71), (71), (65). +(171) follows from (74), (74), (72), (72), (66). +(172) follows from (78), (78), (77), (74), (74). +(173) follows from (78), (78), (77), (73), (73). +(174) follows from (77), (74), (74), (72), (72). +(175) follows from (77), (73), (73), (72), (72). +(176) follows from (71), (71), (64). +(177) follows from (77), (77), (73), (73), (64). +(178) follows from (77), (77), (74), (74), (65). +(179) follows from (78), (77), (77), (77), (74), (74). +(180) follows from (78), (77), (77), (77), (73), (73). +(181) follows from (77), (77), (74), (74), (64). +(182) follows from (78), (78), (77), (77), (74), (74). +(183) follows from (78), (78), (77), (77), (73), (73). +(184) follows from (74), (74), (71), (71), (64). +(185) follows from (74), (74), (71), (71), (66). +(186) follows from (77), (77), (77), (73), (73), (64). +(187) follows from (77), (77), (74), (74), (72), (72). +(188) follows from (79), (78), (71), (71), (71), (71). +(189) follows from (77), (77), (77), (74), (74), (65). +(190) follows from (77), (73), (73), (70), (70). +(191) follows from (77), (74), (74), (71), (71). 
+(192) follows from (73), (73), (72), (72), (64). +(193) follows from (74), (74), (71), (71), (65). +(194) follows from (77), (77), (77), (74), (74), (64). +(195) follows from (74), (74), (72), (72), (65). +(196) follows from (77), (74), (74), (70), (70). +(197) follows from (74), (74), (72), (72), (64). +(198) follows from (77), (77), (74), (74), (74), (74), (64). +(199) follows from (77), (77), (74), (74), (74), (74), (66). +(200) follows from (77), (77), (74), (74), (74), (74), (65). +(201) follows from (77), (77), (73), (73), (70), (70). +(202) follows from (77), (77), (74), (74), (71), (71). +(203) follows from (77), (77), (77), (74), (74), (74), (74), (64). +(204) follows from (77), (77), (77), (74), (74), (74), (74), (66). +(205) follows from (77), (77), (74), (74), (70), (70). +(206) follows from (74), (74), (74), (74), (72), (72), (64). +(207) follows from (74), (74), (74), (74), (72), (72), (66). +(208) follows from (77), (77), (77), (74), (74), (74), (74), (65). +(209) follows from (78), (77), (77), (73), (73), (73), (73). +(210) follows from (78), (77), (77), (74), (74), (74), (74). +(211) follows from (78), (76), (76), (71), (71), (70), (70). + +(212) follows from (77), (74), (74), (74), (74), (72), (72). +(213) follows from (74), (74), (74), (74), (72), (72), (65). +(214) follows from (78), (78), (77), (73), (73), (73), (73). +(215) follows from (78), (77), (77), (74), (74), (73), (73). +(216) follows from (78), (78), (77), (74), (74), (74), (74). +(217) follows from (78), (76), (76), (71), (71), (71), (71). +(218) follows from (78), (78), (77), (74), (74), (73), (73). +(219) follows from (78), (77), (77), (77), (73), (73), (73), (73). +(220) follows from (78), (77), (77), (77), (74), (74), (74). +(221) follows from (79), (78), (74), (74), (71), (71), (70), (70). +(222) follows from (77), (77), (74), (74), (74), (74), (72), (72). +(223) follows from (78), (78), (77), (77), (73), (73), (73), (73). 
+(224) follows from (78), (77), (77), (77), (74), (74), (73), (73). +(225) follows from (78), (78), (77), (77), (74), (74), (74), (74). +(226) follows from (79), (78), (74), (74), (71), (71), (71), (71). +(227) follows from (78), (78), (77), (77), (74), (74), (73), (73). +(228) follows from (77), (71), (71), (66). +(229) follows from (77), (72), (72), (65). +(230) follows from (77), (72), (72), (64). +(231) follows from (78), (77), (74), (74), (66). +(232) follows from (78), (78), (74), (74), (66). +(233) follows from (77), (70), (70), (64). +(234) follows from (78), (77), (77), (74), (74), (66). +(235) follows from (77), (71), (71), (65). +(236) follows from (77), (71), (71), (64). +(237) follows from (77), (74), (74), (72), (72), (66). +(238) follows from (78), (78), (77), (74), (74), (66). +(239) follows from (78), (77), (73), (73), (64). +(240) follows from (78), (77), (74), (74), (65). +(241) follows from (78), (78), (73), (73), (64). +(242) follows from (78), (77), (74), (74), (64). +(243) follows from (78), (78), (74), (74), (65). +(244) follows from (77), (74), (74), (71), (71), (64). +(245) follows from (77), (74), (74), (71), (71), (66). +(246) follows from (78), (78), (74), (74), (64). +(247) follows from (78), (77), (77), (73), (73), (64). +(248) follows from (78), (77), (77), (74), (74), (65). +(249) follows from (77), (73), (73), (72), (72), (64). +(250) follows from (77), (74), (74), (71), (71), (65). +(251) follows from (78), (78), (77), (73), (73), (64). +(252) follows from (78), (77), (77), (74), (74), (64). +(253) follows from (77), (74), (74), (72), (72), (65). +(254) follows from (78), (78), (77), (74), (74), (65). +(255) follows from (77), (74), (74), (72), (72), (64). +(256) follows from (78), (78), (77), (74), (74), (64). +(257) follows from (78), (77), (74), (74), (74), (74), (64). + +(258) follows from (78), (77), (74), (74), (74), (74), (66). +(259) follows from (78), (78), (74), (74), (74), (74), (64). 
+(260) follows from (78), (77), (74), (74), (74), (74), (65). +(261) follows from (78), (78), (74), (74), (74), (74), (66). +(262) follows from (78), (78), (74), (74), (74), (74), (65). +(263) follows from (78), (77), (77), (74), (74), (74), (74), (64). +(264) follows from (78), (77), (77), (74), (74), (74), (74), (66). +(265) follows from (77), (74), (74), (74), (74), (72), (72), (64). +(266) follows from (77), (74), (74), (74), (74), (72), (72), (66). +(267) follows from (78), (78), (77), (74), (74), (74), (74), (64). +(268) follows from (78), (77), (77), (74), (74), (74), (74), (65). +(269) follows from (78), (78), (77), (74), (74), (74), (74), (66). +(270) follows from (77), (74), (74), (74), (74), (72), (72), (65). +(271) follows from (78), (78), (77), (74), (74), (74), (74), (65). +(272) follows from (79). +(273) follows from (79), (77), (77). +(274) follows from (79), (78), (77). +(275) follows from (79), (78), (78). +(276) follows from (76), (76). +(277) follows from (75), (75). +(278) follows from (79), (74), (74). +(279) follows from (79), (73), (73). +(280) follows from (79), (72), (72). +(281) follows from (79), (77), (74), (74). +(282) follows from (79), (77), (73), (73). +(283) follows from (79), (77), (77), (74), (74). +(284) follows from (79), (77), (77), (73), (73). +(285) follows from (76), (76), (72), (72). +(286) follows from (75), (75), (72), (72). +(287) follows from (75), (75), (70), (70). +(288) follows from (76), (76), (71), (71). +(289) follows from (76), (76), (70), (70). +(290) follows from (79), (77), (73), (73), (73), (73). +(291) follows from (79), (77), (74), (74), (74), (74). +(292) follows from (79), (77), (74), (74), (73), (73). +(293) follows from (79), (77), (77), (73), (73), (73), (73). +(294) follows from (79), (77), (77), (74), (74), (74), (74). +(295) follows from (75), (75), (73), (73), (72), (72). +(296) follows from (76), (76), (74), (74), (72), (72). +(297) follows from (79), (77), (77), (74), (74), (73), (73). 
+(298) follows from (76), (76), (73), (73), (72), (72). +(299) follows from (79), (71), (71). +(300) follows from (79), (70), (70). +(301) follows from (79), (78), (74), (74). +(302) follows from (79), (78), (73), (73). +(303) follows from (78), (78), (76), (76). + +(304) follows from (78), (78), (75), (75). +(305) follows from (79), (78), (77), (74), (74). +(306) follows from (79), (78), (77), (73), (73). +(307) follows from (79), (78), (78), (74), (74). +(308) follows from (79), (78), (78), (73), (73). +(309) follows from (79), (74), (74), (72), (72). +(310) follows from (79), (73), (73), (72), (72). +(311) follows from (79), (73), (73), (70), (70). +(312) follows from (79), (74), (74), (71), (71). +(313) follows from (79), (74), (74), (70), (70). +(314) follows from (79), (78), (73), (73), (73), (73). +(315) follows from (79), (78), (74), (74), (74), (74). +(316) follows from (78), (78), (75), (75), (73), (73). +(317) follows from (79), (78), (74), (74), (73), (73). +(318) follows from (78), (78), (76), (76), (74), (74). +(319) follows from (78), (78), (76), (76), (73), (73). +(320) follows from (79), (78), (77), (73), (73), (73), (73). +(321) follows from (79), (78), (77), (74), (74), (74), (74). +(322) follows from (79), (73), (73), (73), (73), (72), (72). +(323) follows from (79), (74), (74), (74), (74), (72), (72). +(324) follows from (79), (78), (78), (73), (73), (73), (73). +(325) follows from (79), (78), (77), (74), (74), (73), (73). +(326) follows from (79), (78), (78), (74), (74), (74), (74). +(327) follows from (79), (74), (74), (73), (73), (72), (72). +(328) follows from (79), (78), (78), (74), (74), (73), (73). 
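The implications above are all checked the same way: each composite term factors into left-hand sides of the base inequalities from Lemma F.1, so it is bounded by the product of their individual bounds. The sketch below illustrates this idea in sympy; the two base bounds (mirroring the shape of (272) and (273)) and the helper `bound_product` are illustrative stand-ins, not the paper's actual script from Section P.

```python
import sympy

L, L_bar, alpha = sympy.symbols('L Lbar alpha', positive=True)

# Illustrative base bounds: each factor is known to be at most 1/114,
# in the spirit of inequalities such as 2L/Lbar <= 1/114.
base_bounds = {
    2 * L / L_bar: sympy.Rational(1, 114),
    1024 * L**3 / (L_bar**3 * alpha**2): sympy.Rational(1, 114),
}

def bound_product(factors):
    """Bound a product of base factors by the product of their known bounds."""
    bound = sympy.Integer(1)
    for factor in factors:
        bound *= base_bounds[factor]
    return bound

# A composite term that factors into two base factors is bounded by
# (1/114)^2 = 1/12996, which certifies any target threshold >= 1/12996.
composite_bound = bound_product([2 * L / L_bar,
                                 1024 * L**3 / (L_bar**3 * alpha**2)])
assert composite_bound < sympy.Rational(1, 114)
```

Because every bound is a rational number, the final comparison is exact; no floating-point tolerance is involved.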
# Jupyter Notebook for Symbolic Computations

# 1 Import Necessary Libraries

```python
import os
from IPython.display import display
import sympy
from sympy import Symbol
from utils import FileWriter, ConstraintsAggregator
from utils import get_factors, get_term, search_factors, latex_repres_of_inequality
```

# 2 Initialize a File for Results

```python
file_path = '.../paper/result.txt'
if os.path.exists(file_path):
    os.remove(file_path)
fw = FileWriter(file_path)
```

# 3 Initialize Symbols From the Paper

```python
rho = Symbol('rho', nonnegative=True)
kappa = Symbol('kappa', nonnegative=True)
lmbda = Symbol('lambda', nonnegative=True)  # `lambda` is a Python keyword
omega = Symbol('omega', nonnegative=True)
p = Symbol('p', positive=True)
tau = Symbol('tau', positive=True)
beta = Symbol('beta', positive=True)
alpha = Symbol('alpha', positive=True)
n = Symbol('n', positive=True)
L = Symbol('L', positive=True)
L_hat = Symbol(r'\hat{L}', positive=True)
L_max = Symbol(r'L_{\max}', positive=True)
L_bar = Symbol(r'\bar{L}', positive=True)
```

# 4 Assistant for "First Symbolically Computed" Lemma

# 4.1 Calculate Expressions

```python
nu_hat_exp = (8 / alpha) * (p * L / 2
                            + kappa * 4 * p * (1 + p / beta) * L_hat**2
                            + rho * 4 * p * L**2
                            + rho * (8 * p / tau) * (2 * p * (1 + 2 * p / tau) * L**2
                                                     + 4 * p * tau**2 * omega * L_hat**2 / n))
nu_hat = Symbol(r'\hat{\nu}', nonnegative=True)
# The left-hand sides of the inequalities
rho_lhs = nu_hat * (8 / (alpha * p * L_bar**2))
kappa_lhs = ((8 * p * omega) / (n * L_bar * beta)
             + nu_hat * (8 * omega) / (n * beta * L_bar**2)
             + rho * (8 * p) / tau * (8 * tau**2 * omega) / (n * beta))
```

# 4.2 Display Them

```python
display(nu_hat_exp)
display(rho_lhs)
display(kappa_lhs)
```

# 4.3 Symbolically Calculate The Steps From The Proof

```python
constraints_agg = ConstraintsAggregator()
rho_lhs = rho_lhs.subs(nu_hat, nu_hat_exp)
kappa_lhs = kappa_lhs.subs(nu_hat, nu_hat_exp)
# Group terms w.r.t. kappa
kappa_lhs_poly = sympy.poly(kappa_lhs, kappa)
kappa_coef = kappa_lhs_poly.all_coeffs()
kappa_lhs = kappa_lhs.expand().collect(kappa)
fw.write(sympy.latex(kappa_lhs) + ",\\label{eq:kappa_expand}")
# Find conditions under which the coefficient near kappa is <= 1/2
terms = sympy.expand(kappa_coef[0]).args
latex_string = constraints_agg.add_constraints(terms)
fw.write(latex_string)
```

```python
# Define kappa
kappa_solution = (2 * kappa_coef[1]).expand()
fw.write("\\kappa \\eqdef " + sympy.latex(kappa_solution) + ".\\label{eq:kappa_sol}")
# Group terms w.r.t. rho
rho_lhs = rho_lhs.subs(kappa, kappa_solution)
rho_lhs = rho_lhs.expand().collect(rho)
fw.write(sympy.latex(rho_lhs) + ",\\label{eq:rho_expand}")
# Find conditions under which the coefficient near rho is <= 1/2
rho_lhs_poly = sympy.poly(rho_lhs, rho)
rho_coef = rho_lhs_poly.all_coeffs()
terms = sympy.expand(rho_coef[0]).args
latex_string = constraints_agg.add_constraints(terms)
fw.write(latex_string)
# Define rho
rho_solution = (2 * rho_coef[1]).expand()
fw.write("\\rho \\eqdef " + sympy.latex(rho_solution) + ".\\label{eq:rho_sol}")
```

# 5 Assistant for "Second Symbolically Computed" Lemma

# 5.1 First Inequality

```python
bregman_coef = ((4 * omega * L_max) / (n * L_bar)
                + kappa * 8 * (1 + p / beta) * L_max
                + (nu_hat / L_bar**2) * (4 * omega * L_max / (p * n) + 8 * L / (p * alpha))
                + rho * 8 * L
                + lmbda * (4 * (1 + (2 * p) / tau) * L + 8 * tau**2 * omega * L_max / n))
display(bregman_coef)
lmbda_solution = rho * 8 * p / tau
# Substitute all known expressions into the term
bregman_coef = bregman_coef.subs(nu_hat, nu_hat_exp).subs(lmbda, lmbda_solution) \
                           .subs(kappa, kappa_solution).subs(rho, rho_solution)
bregman_coef = bregman_coef.expand().simplify().expand()
latex_string = latex_repres_of_inequality(bregman_coef)
fw.write(latex_string)
# Find conditions under which bregman_coef is <= 1/2
terms = bregman_coef.args
latex_string = constraints_agg.add_constraints(terms)
fw.write(latex_string)
```

# 5.2 Second Inequality

```python
dist_coef = (4 / L_bar) * (L / 2
                           + kappa * 4 * (1 + p / beta) * L_hat**2
                           + rho * 4 * L**2
                           + lmbda * (2 * (1 + 2 * p / tau) * L**2
                                      + 4 * tau**2 * omega * L_hat**2 / n))
display(dist_coef)
dist_coef = dist_coef.subs(lmbda, lmbda_solution).subs(kappa, kappa_solution).subs(rho, rho_solution)
dist_coef = dist_coef.expand().simplify().expand()
latex_string = latex_repres_of_inequality(dist_coef, rhs="1")
fw.write(latex_string)
terms = dist_coef.args
latex_string = constraints_agg.add_constraints(terms)
fw.write(latex_string)
```

# 6 Check That The Constraints Follow From The Inequalities from The "Auxiliary Inequalities" Lemma.
# 6.1 The Inequalities from The "Auxiliary Inequalities" Lemma:

```python
from collections import OrderedDict

proposals = OrderedDict([
    ("eq:lipt:max_2", (L_max * omega * p**2) / (beta**2 * n)),
    ("eq:lipt:max_1", (L_max * omega * p) / (beta * n)),
    ("eq:lipt:max", (L_max * omega) / n),
    ("eq:lipt:l_l_max_2", (sympy.sqrt(L * L_max) * p * sympy.sqrt(omega * tau)) / (alpha * beta * sympy.sqrt(n))),
    ("eq:lipt:l_l_max_1", (sympy.sqrt(L * L_max) * sympy.sqrt(p * omega * tau)) / (alpha * sympy.sqrt(beta * n))),
    ("eq:lipt:l_l_max_p", (sympy.sqrt(L * L_max) * sympy.sqrt(p * omega * tau)) / (alpha * sympy.sqrt(n))),
    ("eq:lipt:hat_2", (L_hat * p * sympy.sqrt(omega * tau)) / (alpha * beta * sympy.sqrt(n))),
    ("eq:lipt:hat_1", (L_hat * sympy.sqrt(p * omega * tau)) / (alpha * sympy.sqrt(beta * n))),
    ("eq:lipt:hat_p", (L_hat * sympy.sqrt(p * omega * tau)) / (alpha * sympy.sqrt(n))),
    ("eq:lipt:hat_alpha_2", (L_hat * p * sympy.sqrt(omega)) / (beta * sympy.sqrt(alpha * n))),
    ("eq:lipt:hat_alpha_1", (L_hat * sympy.sqrt(p * omega)) / (sympy.sqrt(beta * alpha * n))),
    ("eq:lipt:hat_no_alpha_2", (L_hat * p * sympy.sqrt(omega)) / (beta * sympy.sqrt(n))),
    ("eq:lipt:hat_no_alpha_1", (L_hat * sympy.sqrt(p * omega)) / (sympy.sqrt(beta * n))),
    ("eq:lipt:plain", L / alpha),
    ("eq:lipt:plain_p_alpha", (L * p) / (alpha * tau)),
    ("eq:lipt:plain_no_alpha", L),
    ("eq:lipt:double_lipt_2", sympy.cbrt((L * L_hat**2 * omega * p**4) / (alpha**2 * beta**2 * n * tau**2))),
    ("eq:lipt:double_lipt_1", sympy.cbrt((L * L_hat**2 * omega * p**3) / (alpha**2 * beta * n * tau**2))),
])
const = 660508
for k, proposal in proposals.items():
    proposals[k] = (const * proposal / L_bar).expand()
proposals_factors = [get_factors(proposal) for _, proposal in proposals.items()]
```

# 6.2 Search The Right Inequalities for Each Constraint

```python
# Takes ~1 hour on a laptop
import tqdm

constraints = constraints_agg.get_constraints()
constraints_inequalities = []
for i, factor in enumerate(tqdm.tqdm(constraints)):
    num_of_base_factors_to_use = -int(factor[L_bar])
    path = search_factors(factor, num_of_base_factors_to_use, proposals_factors,
                          pos_hints=[L, L_max, L_hat, omega, p], neg_hints=[alpha, n])
    constraints_inequalities.append(path)
    assert path is not None
```

```python
text = constraints_agg.prepare_text_from_proposals(proposals, constraints_inequalities)
fw.write(text)
```

# P.1 File utils.py

```python
import os
from collections import defaultdict
from copy import copy

import sympy


class _defaultdict_with_const(defaultdict):
    def __init__(self, *args, **kwargs):
        super(_defaultdict_with_const, self).__init__(*args, **kwargs)
        self.const = None

    def __copy__(self):
        obj = super(_defaultdict_with_const, self).__copy__()
        obj.const = self.const
        return obj


def get_factors(term):
    """Converts a sympy expression of type sympy.Mul to a dictionary with factors.
    Ex: A**2 / B**(1/2) -> {A: 2, B: -1/2}

    Args:
        term: sympy expression (sympy.Mul).
    """
    factors = _defaultdict_with_const(int)
    factors.const = 1
    const_assigned = False
    assert isinstance(term, sympy.Mul)
    for el in term.args:
        free_symbols = list(el.free_symbols)
        if len(free_symbols) == 0:
            assert not const_assigned
            const_assigned = True
            factors.const = int(el)
            continue
        assert len(free_symbols) == 1
        power = el.as_powers_dict()[free_symbols[0]]
        assert isinstance(power, sympy.Rational)
        factors[free_symbols[0]] = power
    return factors


def get_term(factors):
    """The inverse of get_factors: converts a dictionary with factors to a term.
    Ex: {A: 2, B: -1/2} -> A**2 / B**(1/2)

    Args:
        factors: dictionary with factors.
    """
    result = factors.const
    for k, v in factors.items():
        result = result * k**v
    return result


def search_factors(factors, num_of_base_factors_to_use, base_factors,
                   pos_hints=[], neg_hints=[]):
    """Checks whether it is possible to decompose `factors` using
    `num_of_base_factors_to_use` factors from `base_factors`.

    Args:
        factors: dictionary with factors.
        num_of_base_factors_to_use: number of base factors to use.
        base_factors: list of dictionaries with factors.
        pos_hints, neg_hints: hints that help the algorithm improve performance.

    Returns:
        A list with indices (with repetitions) of `base_factors` that compose
        `factors`. If that is not possible, the function returns None.
    """
    def _check_hints(factors_):
        for k in pos_hints:
            assert factors_[k] >= 0
        for k in neg_hints:
            assert factors_[k] <= 0

    _check_hints(factors)
    for base in base_factors:
        _check_hints(base)

    def _search_factors(factors_, num_left, path, base_factors_):
        if num_left == 0:
            return factors_.const <= 1 and all(factors_[k] == 0 for k in factors_)
        for index, choice in enumerate(base_factors_):
            factors_choice = copy(factors_)
            factors_choice.const = factors_choice.const / choice.const
            for k, v in choice.items():
                factors_choice[k] = factors_choice[k] - v
            skip = False
            for k in pos_hints:
                if factors_choice[k] < 0:
                    skip = True
                    break
            for k in neg_hints:
                if factors_choice[k] > 0:
                    skip = True
                    break
            if not skip and _search_factors(factors_choice, num_left - 1, path, base_factors_):
                path.append(index)
                return True
        return False

    path = []
    if _search_factors(factors, num_of_base_factors_to_use, path, base_factors):
        return path
    return None


class FileWriter(object):
    def __init__(self, file_path):
        self._file_path = file_path
        assert not os.path.exists(file_path)

    def write(self, text):
        with open(self._file_path, "a") as fd:
            fd.write("{}\n".format(text))


class ConstraintsAggregator(object):
    def __init__(self):
        self._constraints = []

    def get_constraints(self):
        return self._constraints

    def add_constraints(self, terms):
        # Registers the terms as constraints and returns their LaTeX representation.
        denom_constant = self._denom_constant(terms)
        terms_global_index = []
        for term in terms:
            terms_global_index.append(len(self._constraints))
            # Each constraint `term <= 1/denom_constant` is stored as the factors
            # of `denom_constant * term` (reconstructed from how it is used above).
            self._constraints.append(get_factors((denom_constant * term).expand()))
        return self._prepare_text(terms, terms_global_index)

    def prepare_text_from_proposals(self, proposals, constraints_inequalities):
        # assert len(constraints_inequalities) == len(self._constraints)
        text = ""
        proposals_labels = list(proposals.keys())
        for global_index, ineq in enumerate(constraints_inequalities):
            constraint_label = self._get_label(global_index)
            text += "&\\eqref{" + constraint_label + "} \\textnormal{ follows from } "
            for i, index_proposal in enumerate(ineq):
                text += "\\eqref{" + proposals_labels[index_proposal] + "}"
                text += "." if i == len(ineq) - 1 else ", "
            text += "\\\\"
        return text

    def _denom_constant(self, terms):
        return 2 * len(terms)

    def _get_label(self, global_index):
        return "eq:sympy:constraints:{}".format(global_index)

    def _prepare_text(self, terms, terms_global_index):
        text = ""
        denom_constant = self._denom_constant(terms)
        num_per_column = 3
        for i, (global_index, term) in enumerate(zip(terms_global_index, terms)):
            if i % num_per_column == 0:
                text += r"\begin{tabularx}{1.2\linewidth}{XXX}"
            text += r"\begin{equation}"
            text += sympy.latex(term) + r"\leq\frac{1}{" + str(denom_constant) + "}"
            text += r",\label{" + self._get_label(global_index) + "}"
            text += r"\end{equation}"
            if i % num_per_column == (num_per_column - 1):
                text += r"\end{tabularx}"
            else:
                text += "&"

        def ceildiv(a, b):
            return -(a // -b)

        # Pad the last row with empty cells and close the environment.
        for i in range(len(terms), ceildiv(len(terms), num_per_column) * num_per_column):
            if i % num_per_column == (num_per_column - 1):
                text += r"\end{tabularx}"
            else:
                text += "&"
        return text


def latex_repres_of_inequality(coef, rhs=r"\frac{3}{4}"):
    latex_string = "&"
    for i, term in enumerate(coef.args):
        latex_string = latex_string + sympy.latex(term)
        if i == len(coef.args) - 1:
            latex_string += r" \leq " + rhs
        else:
            if i % 3 == 2:
                latex_string += r" \\ & + "
            else:
                latex_string += " + "
    return latex_string
```

# Q Experiments

# Q.1 Setup

We now conduct experiments on the practical logistic regression task with LIBSVM datasets (Chang and Lin, 2011) (under the 3-clause BSD license). The experiments were implemented in Python 3.7.9. The distributed environment was emulated on machines with Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz. In each plot, we show the relationship between the total number of coordinates transmitted to and from the server and the function values.
The parameters of the algorithms are taken as suggested by the corresponding theory, except for the step sizes, which we fine-tune over the set $\{2^i \mid i \in [-20, 20]\}$ . For 2Direction, we use the parameters from Theorem 5.2 and fine-tune the step-size parameter $L$ .

We solve the logistic regression problem:

$$
f _ {i} (x _ {1}, \ldots , x _ {c}) := - \frac {1}{m} \sum_ {j = 1} ^ {m} \log \left(\frac {\exp \left(a _ {i j} ^ {\top} x _ {y _ {i j}}\right)}{\sum_ {y = 1} ^ {c} \exp \left(a _ {i j} ^ {\top} x _ {y}\right)}\right),
$$

where $x_{1},\ldots ,x_{c}\in \mathbb{R}^{d}$ , $c$ is the number of unique labels, $a_{ij}\in \mathbb{R}^d$ is the feature vector of a sample on the $i^{\mathrm{th}}$ worker, $y_{ij}$ is the corresponding label, and $m$ is the number of samples located on the $i^{\mathrm{th}}$ worker. The Rand $K$ compressor is used to compress information from the workers to the server, while the Top $K$ compressor is used to compress information from the server to the workers. The performance of the algorithms is compared on the CIFAR10 (Krizhevsky et al., 2009) (# of features: 3072, # of samples: 50,000) and real-sim (# of features: 20,958, # of samples: 72,309) datasets.

# Q.2 Results

In Figures 2, 3 and 4, we report the empirical communication complexities of our experiments. For each algorithm, we show the three best runs. The experiments are consistent with our theory: 2Direction enjoys faster convergence than EF21-P + DIANA and AGD.

![](images/474d46f8353f5343bc27f6a6e02432fae567a6ea90f37fc9e1955dde1b1d7314.jpg)
Figure 2: Logistic Regression with real-sim dataset. # of workers $n = 100$ . $K = 1000$ in all compressors.

![](images/3abaf32ea04645fd22b7068037c633179fb8186627f27a9c36e34d2b68873df7.jpg)

![](images/d5682923bce789be0f165c927f3b432555343910dc98ff91a06dd5a8e5fc99b8.jpg)
Figure 3: Logistic Regression with CIFAR10 dataset. # of workers $n = 10$ . $K = 1000$ in all compressors.
+ +![](images/b7e41cb5a73a98d92be3496f4846699c82d672c5964d2207d6165c63a147ddd1.jpg) + +![](images/7393862208f90973261634ad23ee4fec0ea7047024f9b1ead273452ebc3345e0.jpg) +Figure 4: Logistic Regression with CIFAR10 dataset. # of workers $n = 100$ . $K = 1000$ in all compressors. + +![](images/ee59bd8f63555cb41846a244665c29cf2a8bf1d3d8d80ed98e00e21560fa944d.jpg) + +# R Convergence Rate of CANITA obtained by Li and Richtárik (2021) + +In their Equation (54), Li and Richtárik (2021) derive the following bound for their CANITA method: + +$$ +\mathbb {E} \left[ F ^ {T + 1} \right] \leq \mathcal {O} \left(\max \left\{\frac {(1 + \omega) ^ {3}}{T ^ {3}}, \frac {(1 + b) (\beta + 3 / 2) L}{T ^ {2}} \right\}\right). +$$ + +In the regime when $\omega \geq n$ , choosing $b = \omega$ and $\beta = \Theta \left(\frac{\omega}{n}\right)$ in their Equation (10) gives + +$$ +\begin{array}{l} \mathbb {E} \left[ F ^ {T + 1} \right] \leq \mathcal {O} \left(\max \left\{\frac {(1 + \omega) ^ {3}}{T ^ {3}}, \frac {(1 + b) (\beta + 3 / 2) L}{T ^ {2}} \right\}\right) \\ = \mathcal {O} \left(\max \left\{\frac {(1 + \omega) ^ {3}}{T ^ {3}}, \frac {\omega (\omega / n + 3 / 2) L}{T ^ {2}} \right\}\right) \\ = \mathcal {O} \left(\max \left\{\frac {(1 + \omega) ^ {3}}{T ^ {3}}, \frac {\omega^ {2} L}{n T ^ {2}} \right\}\right). \\ \end{array} +$$ + +This means that the correct convergence rate of the CANITA method (Li and Richtárik, 2021) is + +$$ +T = \left\{ \begin{array}{l l} \Theta \left(\frac {\omega}{\varepsilon^ {1 / 3}} + \frac {\omega}{\sqrt {n}} \sqrt {\frac {L}{\varepsilon}}\right), & \omega \geq n, \\ \Theta \left(\frac {\omega}{\varepsilon^ {1 / 3}} + \left(1 + \frac {\omega^ {3 / 4}}{n ^ {1 / 4}}\right) \sqrt {\frac {L}{\varepsilon}}\right), & \omega < n. \end{array} \right. 
\tag {329} +$$ + +Comparing this result with our Theorem E.14 describing the convergence of our method 2Direction, one can see that in the low accuracy regimes (in particular, when $\frac{\omega}{\varepsilon^{1/3}}$ dominates in (329)), our result improves $\Theta\left(\frac{1}{\varepsilon^{1/3}}\right)$ to at least $\Theta\left(\log \frac{1}{\varepsilon}\right)$ . However, the dependence $\Theta\left(\log \frac{1}{\varepsilon}\right)$ should not be overly surprising as it was observed by Lan et al. (2019) already, albeit in a somewhat different context. + +# S Comparison with ADIANA + +We now want to check that our rate (14) restores the rate from (Li et al., 2020). Since ADIANA only compresses from the workers to the server, let us take $r = 0$ , the identity compressor operator $\mathcal{C}^P(x) = x$ for all $x \in \mathbb{R}^d$ , which does not perform compression, and, as in (Li et al., 2020), consider the optimistic case, when $L_{\max} = L$ . For this compressor, we have $\alpha = 1$ in (2). Note that $\mu_{\omega, \alpha}^r = 0$ . Thus the iteration complexity (14) equals + +$$ +\begin{array}{l} T ^ {\text {o p t i m i s t i c}} = \widetilde {\Theta} \left(\max \left\{\sqrt {\frac {L}{\mu}}, \sqrt {\frac {L (\omega + 1)}{n ^ {1 / 3} \mu}}, \sqrt {\frac {L (\omega + 1) ^ {3 / 2}}{\sqrt {n} \mu}}, \sqrt {\frac {L \omega (\omega + 1)}{n \mu}}, (\omega + 1) \right\}\right) \\ = \widetilde {\Theta} \left(\max \left\{\sqrt {\frac {L}{\mu}}, \sqrt {\frac {L (\omega + 1) ^ {3 / 2}}{\sqrt {n} \mu}}, \sqrt {\frac {L \omega (\omega + 1)}{n \mu}}, (\omega + 1) \right\}\right), \tag {330} \\ \end{array} +$$ + +where we use Young's inequality: $\sqrt{\frac{L}{\mu}}\sqrt{\frac{(\omega + 1)}{n^{1/3}}} \leq \sqrt{\frac{L}{\mu}}\sqrt{\frac{1}{3} \times 1^3 + \frac{2}{3}\frac{(\omega + 1)^{3/2}}{\sqrt{n}}}$ . Without the server-to-worker compression, Algorithm 1 has the same iteration (330) and communication complexity as (Li et al., 2020). 
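The Young's inequality step above can be verified numerically. The following quick sanity check is ours, not part of the paper:

```python
# Numerical check of the Young's inequality step used in (330):
#   sqrt((omega+1)/n^{1/3}) <= sqrt(1/3 + (2/3)(omega+1)^{3/2}/sqrt(n)),
# i.e. (omega+1)/n^{1/3} <= 1/3 + (2/3)(omega+1)^{3/2}/n^{1/2},
# which is the Young inequality x*y <= x^3/3 + (2/3)*y^{3/2}
# with x = 1 and y = (omega+1)/n^{1/3}.
def check(omega, n):
    lhs = (omega + 1) / n ** (1.0 / 3)
    rhs = 1.0 / 3 + (2.0 / 3) * (omega + 1) ** 1.5 / n ** 0.5
    return lhs <= rhs + 1e-12

assert all(check(o, n) for o in [0, 1, 5, 100, 10**4] for n in [1, 2, 10, 10**6])
```

Equality is attained at $\omega = 0$, $n = 1$, so the constant in the bound cannot be improved for this step.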
\ No newline at end of file diff --git a/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/images.zip b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..727a33bd5a7e29404e9668c34f329437050a77f4 --- /dev/null +++ b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14e60f93d4d810969d13381cf479718f8e8820afe258dc6cae69a5ffe9688315 +size 6567519 diff --git a/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/layout.json b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..30c364473b77e95cb31d11c5ef3b17306dc351b2 --- /dev/null +++ b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d789e4a742c14d2c24f3d96e2029645db59d5700a74fc2bb42f21b55025f1449 +size 2642224 diff --git a/3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_content_list.json b/3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..cec55e4f7ac7062aa3a175332c95aa6bbbd0f5b1 --- /dev/null +++ b/3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ad9b52f502e93b13ec103d3cb22b20919e6ccddda189ca0ada7765c62433ee4 +size 112934 diff --git a/3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_model.json 
b/3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e0f6e29404e63452cf6550275d93dbe7e93660d2 --- /dev/null +++ b/3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:adc88d95759ddeae2b5249a970703804ae9c41c93f3a356b870596ce9fea0b53 +size 141698 diff --git a/3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_origin.pdf b/3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..08474cc01543efa40809b526e24e46d5c44d8a77 --- /dev/null +++ b/3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70386fa17f4c2eb36626e882548c4a3d4016d34e2538167697b7f001867f68ec +size 5030621 diff --git a/3dawarevisualquestionansweringaboutpartsposesandocclusions/full.md b/3dawarevisualquestionansweringaboutpartsposesandocclusions/full.md new file mode 100644 index 0000000000000000000000000000000000000000..378c9e78e0b4f97a706d2fb3c7f14edce4d01e25 --- /dev/null +++ b/3dawarevisualquestionansweringaboutpartsposesandocclusions/full.md @@ -0,0 +1,454 @@ +# 3D-Aware Visual Question Answering about Parts, Poses and Occlusions + +Xingrui Wang $^{1}$ Wufei Ma $^{1*}$ Zhuowan Li $^{1\dagger}$ Adam Kortylewski $^{2,3}$ Alan Yuille $^{1}$ $^{1}$ Johns Hopkins University $^{2}$ Max Planck Institute for Informatics $^{3}$ University of Freiburg +{xwang378, wma27, zli110, ayuille1}@jhu.edu akortyle@mpi-inf.mpg.de + +# Abstract + +Despite rapid progress in Visual question answering (VQA), existing datasets and models mainly focus on testing reasoning in 2D. 
However, it is important that VQA models also understand the 3D structure of visual scenes, for example to support tasks like navigation or manipulation. This includes an understanding of the 3D object pose, their parts and occlusions. In this work, we introduce the task of 3D-aware VQA, which focuses on challenging questions that require a compositional reasoning over the 3D structure of visual scenes. We address 3D-aware VQA from both the dataset and the model perspective. First, we introduce Super-CLEVR-3D, a compositional reasoning dataset that contains questions about object parts, their 3D poses, and occlusions. Second, we propose PO3D-VQA, a 3D-aware VQA model that marries two powerful ideas: probabilistic neural symbolic program execution for reasoning and deep neural networks with 3D generative representations of objects for robust visual recognition. Our experimental results show our model PO3D-VQA outperforms existing methods significantly, but we still observe a significant performance gap compared to 2D VQA benchmarks, indicating that 3D-aware VQA remains an important open research area. The code is available at https://github.com/XingruiWang/3D-Aware-VQA. + +# 1 Introduction + +Visual question answering (VQA) is a challenging task that requires an in-depth understanding of vision and language, as well as multi-modal reasoning. Various benchmarks and models have been proposed to tackle this challenging task, but they mainly focus on 2D questions about objects, attributes, or 2D spatial relationships. However, it is important that VQA models understand the 3D structure of scenes, in order to support tasks like autonomous navigation and manipulation. + +An inherent property of human vision is that we can naturally answer questions that require a comprehensive understanding of the 3D structure in images. For example, humans can answer the questions shown in Fig. 1, which ask about the object parts, their 3D poses, and occlusions. 
However, current VQA models, which often rely on 2D bounding boxes to encode a visual scene [2, 59, 25], struggle to answer such questions reliably (as can be seen from our experiments). We hypothesize this is caused by the lack of understanding of the 3D structure of images.

In this work, we introduce the task of 3D-aware VQA, where answering the questions requires compositional reasoning over the 3D structure of the visual scenes. More specifically, we focus on challenging questions that require multi-step reasoning about the object-part hierarchy, the 3D poses of the objects, and the occlusion relationships between objects or parts.

![](images/d954c9cbe56e36658cfc7b91ecd77a2cf5d5e43dad1505727ddc78f22c845fb6.jpg)
Figure 1: Examples from Super-CLEVR-3D. We introduce the task of 3D-aware VQA, which requires 3D understanding of the image, including the parts, 3D poses, and occlusions.

Part. Q: What is the name of the brown part of the large rubber thing? A: Wheel. Q: What is the material of the trunk that belongs to the same object as the purple part? A: Metallic.

3D Pose. Q: Which direction is the double bus facing? A: Left. Q: What is the color of the small object which faces to the right? A: Truck.

Occlusion. Q: Is the bumper of the purple SUV occluded? A: No. Q: What is the size of the aeroplane whose wing is occluded? A: Small.

Program for the question "What is the size of the aeroplane whose wing is occluded?": filter[aeroplane], obj_to_part, filter_part[wing], filter_occludee, part_to_obj, query_size.

We address the challenging 3D-aware VQA task from both the dataset and the model perspective. From the dataset perspective, we introduce Super-CLEVR-3D, which extends the Super-CLEVR dataset [32] with 3D-aware questions. Given the visual scenes from Super-CLEVR that contain randomly placed vehicles of various categories, we define a set of 3D-aware reasoning operations and automatically generate 3D questions based on these operations. Fig.
1 shows examples of the images, questions and the underlying 3D operations for the questions. From the model perspective, we introduce PO3D-VQA, a VQA model that marries two powerful ideas: probabilistic neural symbolic program execution for reasoning and a deep neural network with 3D generative representations of objects for robust visual scene parsing. Our model first recovers a 3D scene representation from the image and a program from the question, and subsequently executes the program on the 3D scene representation to obtain an answer using a probabilistic reasoning process that takes into account the confidence of predictions from the neural network. We refer to our system as PO3D-VQA, which stands for Parts, Poses, and Occlusions in 3D Visual Question Answering. + +On Super-CLEVR-3D, we experiment with existing representative models, their variants, and our model PO3D-VQA. The results show that our model outperforms existing methods significantly, leading to an improvement in accuracy of more than $11\%$ , which shows the advantage of the generative 3D scene parser and the probabilistic neural symbolic reasoning process. Moreover, further analysis on questions with different difficulty levels reveals that the improvements of our model are even greater on harder questions with heavy occlusions and small part sizes. Our results indicate that a reliable 3D understanding, together with the modular reasoning procedure, produces a desirable 3D-aware VQA model. + +In summary, our contributions are as follows. (1) We introduce the challenging task of 3D-aware VQA and propose the Super-CLEVR-3D dataset, where 3D visual understanding about parts, 3D poses, and occlusions are required. (2) We propose a 3D-aware neural modular model PO3D-VQA that conducts probabilistic reasoning in a step-wise modular procedure based on robust 3D scene parsing. 
(3) With experiments, we show that 3D-aware knowledge and modular reasoning are crucial for 3D-aware VQA, and suggest future VQA methods take 3D understanding into account. + +# 2 Related Work + +Visual Question Answering (VQA). Rapid progress has been made in VQA [4] in both the datasets and the models. To solve the challenging VQA datasets [15, 61, 17, 45] with real images, multiple models are developed including two-stream feature fusion [2, 14, 28, 55, 23, 44, 30] or transformer-based pretraining [48, 36, 31, 59, 25]. However, the real datasets are shown to suffer from spurious correlations and biases [42, 16, 41, 1, 15, 26, 27]. Alternatively, synthetic datasets like CLEVR [24] and Super-CLEVR [32], are developed to study the compositional reasoning ability of VQA systems, which are also extended to study other vision-and-language tasks [34, 29, 53, 58, 6, 47, 20]. The synthetic datasets promote the development of neural modular methods [3, 54, 40, 22], where the reasoning is done in a modular step-by-step manner. It is shown that the modular methods have nice properties including interpretability, data efficiency [54, 40], better robustness [32] and strong performance on synthetic images [54]. However, most existing methods rely on region features [2, 59] extracted using 2D object detectors [46] for image encoding, which is not 3D-aware. We follow the works on the synthetic dataset and enhance the modular methods with 3D understanding. + +VQA in 3D. Multiple existing works study VQA under the 3D setting, such as SimVQA [8], SQA3D [39], 3DMV-VQA [19], CLEVR-3D [51], ScanQA [52], 3DQA [52], and EmbodiedQA [13], which focus on question answering on the 3D visual scenes like real 3D scans [39, 51, 5, 52], simulated 3D environments [9, 13], or multi-view images [19]. PTR [20] is a synthetic VQA dataset that requires part-based reasoning about physics, analogy and geometry. 
Our setting differs from these works because we focus on 3D in the questions instead of 3D in the visual scenes, since our 3D-aware questions explicitly query the 3D information that can be inferred from the 2D input images.

3D scene understanding. One popular approach for scene understanding is to use CLIP features pretrained on large-scale text-image pairs and segment the 2D scene into semantic regions [10, 43]. However, these methods lack a 3D understanding of the scene and cannot be used to answer 3D-related questions. Another approach is to adopt category-level 6D pose estimation methods that can locate objects in the image and estimate their 3D formulations. Previous approaches include classification-based methods that extend a Faster R-CNN model for 6D pose estimation [60, 38] and compositional models that predict 6D poses with analysis-by-synthesis [38]. We also note the rapid progress of 3D vision-language foundation models, which excel in multiple 3D vision-language understanding tasks [19, 37, 21]. Still, we focus on compositional reasoning, which brings more interpretability and robustness [32].

# 3 Super-CLEVR-3D Dataset

To study 3D-aware VQA, we propose the Super-CLEVR-3D dataset, which contains questions explicitly asking about the 3D object configurations of the image. The images are rendered using scenes from the Super-CLEVR dataset [32], which is a VQA dataset containing synthetic scenes of randomly placed vehicles from 5 categories (car, plane, bicycle, motorbike, bus) with various sub-types (e.g. different types of cars) and attributes (color, material, size). The questions are generated by instantiating question templates based on the image scenes, using a pipeline similar to Super-CLEVR. In Super-CLEVR-3D, three types of 3D-aware questions are introduced: part questions, 3D pose questions, and occlusion questions.
In the following, we describe these three types of questions and the new operations we introduce for 3D-aware questions about object parts, 3D poses, and occlusions. Examples of the dataset are shown in Fig. 1.

Part questions. While the original Super-CLEVR dataset refers to objects using their holistic names or attributes, objects are complex and have hierarchical parts, as studied in recent works [33, 11, 20]. Therefore, we introduce part-based questions, which use parts to identify objects (e.g. "which vehicle has red door") or query about object parts (e.g. "what color is the door of the car"). To enable the generation of part-based questions, we introduce two new operations into the reasoning programs: part_to_object(\cdot), which finds the objects containing the given part, and object_to_part(\cdot), which selects all the parts of the given object. We also modify some existing operations (i.e. filter, query and unique), enabling them to operate on both the object level and the part level. With those reasoning operations, we collect 9 part-based templates and instantiate them with the image scene graph to generate questions.

3D pose questions. Super-CLEVR-3D asks questions about the 3D poses of objects (e.g. "which direction is the car facing in"), or the pair-wise pose relationships between objects (e.g. "which object has vertical direction with the red car"). The pose of an individual object (e.g. "facing left") can be processed in a similar way as attributes like colors, so we extend the existing attribute-related operations like filter and query to include pose as well. For pair-wise pose relationships between objects, we add three operations, i.e. samePose, oppositePose and verticalPose, to deal with the three types of pose relationships between objects. For example, oppositePose(\cdot) returns the objects that are in the opposite pose direction to the given object. 17 templates are collected to generate 3D pose questions.
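The three pair-wise pose operations can be sketched as follows. This is an illustrative reconstruction, not the dataset's actual generation code: the angle tolerance and the dictionary-based object format are our assumptions.

```python
def pose_relation(azimuth_a, azimuth_b, tol=10.0):
    """Classify the pair-wise pose relation between two objects from the
    difference of their azimuth angles (in degrees)."""
    diff = abs(azimuth_a - azimuth_b) % 360.0
    diff = min(diff, 360.0 - diff)  # fold the difference into [0, 180]
    if diff <= tol:
        return "samePose"
    if abs(diff - 180.0) <= tol:
        return "oppositePose"
    if abs(diff - 90.0) <= tol:
        return "verticalPose"
    return None  # no clear pose relation


def oppositePose(objects, query, tol=10.0):
    """oppositePose(.): objects facing the opposite direction of `query`."""
    return [o for o in objects if o is not query and
            pose_relation(o["azimuth"], query["azimuth"], tol) == "oppositePose"]
```

samePose and verticalPose would filter on the corresponding labels in the same way.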
+ +Occlusion questions. Occlusion questions ask about the occlusion between entities (i.e. objects or parts). Similar to 3D poses, occlusion can also be regarded as either an attribute for an entity (e.g. "which object is occluded"), or as a relationship between entities (e.g. "which object occludes the car door"). We extend the attribute-related operations, and introduce new operations to handle the pair-wise occlusion relationships: filter_occludee which filters the entities that are being occluded, relate_occluding which finds the entities that are occluded by the given entity, and relate_occluded which finds the entities that are occluding the given entity. Using these operations, 35 templates are collected to generate the occlusion questions. + +# 4 Method + +![](images/ba0f8a3ac3b55955724e88d19064cf351bea25032aedd0b72d5c102e8fe6f558.jpg) +Figure 2: An overview of our model PO3D-VQA. The image is parsed into 3D-aware scene representations (blue box) using our proposed scene parser based on the idea of render-and-compare (green box). The question is parsed into a program composed of reasoning operations (orange box). Then the operations are executed on the 3D-aware scene representations to predict the answer. + +In this section, we introduce PO3D-VQA, which is a parse-then-execute modular model for 3D-aware VQA. The overview of our system is shown in Fig. 2. We first parse the image into a scene graph representation that is aware of 3D information like object parts, 3D poses and occlusion relations, then we parse the question into a reasoning program and execute the program on the derived scene representations in a probabilistic manner. In Sec. 4.1, we define the scene representation required; in Sec. 4.2, we describe how we parse the image into the scene representation based on a multi-class 6D pose estimation model with non-trivial extensions; in Sec. 4.3, we describe how the question is executed on the derived scene representation to predict the answer. 
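The parse-then-execute flow described above can be sketched as follows; the operation registry and the toy scene are hypothetical stand-ins for the actual scene parser and question parser:

```python
def execute_program(program, scene):
    """Execute a sequence of reasoning operations (op_name, args) on a
    parsed scene; each operation consumes the previous intermediate result."""
    state = None
    for op_name, args in program:
        state = OPERATIONS[op_name](scene, state, *args)
    return state  # the final state is the predicted answer


# Toy operation registry: just enough to run a one-question demo.
OPERATIONS = {
    "filter_color": lambda scene, state, color:
        [o for o in (state if state is not None else scene["objects"])
         if o["color"] == color],
    "query_shape": lambda scene, state: state[0]["shape"],
}

scene = {"objects": [{"shape": "car", "color": "red"},
                     {"shape": "bus", "color": "blue"}]}
program = [("filter_color", ("red",)), ("query_shape", ())]
print(execute_program(program, scene))  # -> car
```

In PO3D-VQA the operations act on probability scores rather than hard labels, but the control flow is the same.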
# 4.1 3D-aware scene representation

Given an input image $I$ , we parse it into a 3D-aware scene representation $R$ that contains the objects $(O)$ with attributes $(A^o)$ , the parts $(P)$ with attributes $(A^p)$ , the hierarchical relationships between objects and parts $(H)$ , and the occlusion relationships between them $(S)$ . The attributes include the 3D poses and locations of objects or parts, as well as their colors, materials, and sizes. The scene representation $R = \{O, P, A^o, A^p, H, S\}$ is comprehensive, so we can directly execute the symbolic reasoning module on this representation without taking the image into account any further.

In more detail, objects are represented as a matrix $O \in \mathbb{R}^{n \times N_{obj}}$ containing the probability scores of each object being a certain instance, where $n$ is the number of objects in the given image and $N_{obj}$ is the number of all possible object categories in the dataset (i.e. the vocabulary size of the objects). Similarly, parts are represented as $P \in \mathbb{R}^{p \times N_{prt}}$ , where $p$ is the number of parts in the image and $N_{prt}$ is the vocabulary size of the object parts. The object-part hierarchy is represented by a binary matrix $H \in \mathbb{R}^{n \times p}$ , where $H_{ij} = 1$ if the object $i$ contains the part $j$ and $H_{ij} = 0$ otherwise. The attributes $A^o \in \mathbb{R}^{n \times N_{att}}$ and $A^p \in \mathbb{R}^{p \times N_{att}}$ contain the probability scores of each object or part having a certain attribute, or the values of its bounding box. Here $N_{att}$ is the number of attributes, including the 3D poses, location coordinates, colors, materials and sizes. Occlusion relationships are represented by $S \in \mathbb{R}^{(n + p) \times n}$ , where each element $S_{ij}$ represents the score of object (or part) $i$ being occluded by object $j$ .
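For concreteness, the representation above can be held in plain arrays. The sizes and scores below are toy values, and the two helper queries are our own illustrations of how $H$ and $S$ are used, not the paper's code:

```python
import numpy as np

n, p = 2, 3          # objects and parts in a toy scene
N_obj, N_prt = 5, 8  # vocabulary sizes (toy values)

O = np.zeros((n, N_obj))      # object category scores
P = np.zeros((p, N_prt))      # part category scores
H = np.array([[1, 1, 0],      # object 0 contains parts 0 and 1
              [0, 0, 1]])     # object 1 contains part 2
S = np.zeros((n + p, n))      # S[i, j]: score of entity i occluded by object j
S[1, 0] = 0.9                 # object 1 is occluded by object 0


def part_to_object(part_idx):
    """Indices of the objects containing the given part (a column of H)."""
    return np.nonzero(H[:, part_idx])[0]


def occlusion_score(entity_idx):
    """Score of an entity (object or part) being occluded: sum_j S_ij."""
    return S[entity_idx].sum()

print(part_to_object(2))    # -> [1]
print(occlusion_score(1))   # -> 0.9
```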
# 4.2 Multi-class 6D Scene Parsing

While most existing VQA methods [2, 59] encode the image using pretrained object detectors like Faster-RCNN [46], we build our 6D-aware scene parser in a different way, based on the idea of analysis-by-synthesis through inverse rendering [49], which has the following advantages: first, the model prediction is more robust [49], as the render-and-compare process can naturally integrate a robust reconstruction loss to avoid distortion through occlusion; second, while the object parts

![](images/7fa9370f50279c608fb0575348cdadc7eb1d0d898b9b87e55b1825ecccea7613.jpg)
(a)

![](images/5557163981351de79fc2145d0d3641dd9af729868c3bec003f78b03315b70f51.jpg)
(b)

![](images/b5cbe2dc57a35c9a177001b13a51800f832a2281daebf9dac25618be1604a618.jpg)
(c)

![](images/8591ee78fa143ed4bb14506f3b8dc1b978942b592ff92f959c9a98979a3d5923.jpg)
(d)

![](images/b898434242c22ff7d31d90fa4efc34cc118490e62ece5872be41d51d282956db.jpg)
(e)

![](images/927bcfbfe88b42eac37da4383cbbb26deda4d3e0ab1316c9ca60805a2787793f.jpg)
(II)
bus
Figure 3: Visualization of intermediate steps in our scene parser. Given an image (a), per-category feature activation maps (shown in II) are computed through render-and-compare. Then the category-wise competition (3D-NMS) is performed (results shown in b) and a post-filtering step is taken to remove mis-detected objects (c). Based on the pose estimation results (d), we project the 3D object mesh back onto the image to locate parts and occlusions (e).
![](images/fc1f8e3e335d29a2aa57d7c708930748f6de4025aa90765872dfe7844dbd9961.jpg)
aeroplane

![](images/2ab4e105582699b02b24e8de373fe57463f071e69455bd1c57672a5472abaa71.jpg)
bicycle

![](images/4ba78faa756cb9fc2e33a9e38f95caac29ae392c3ceebf92ff7059e466f9e459.jpg)
motorbike

![](images/bf53e32f1e084381938576cbe0d3982c9f0e40da3e0084f8dbbbc43e985ae37b.jpg)
car

are usually very challenging for Faster-RCNN to detect due to their small size, they can be located much more easily using the 3D object shape: we first find the object and estimate its 3D pose, and subsequently locate the parts on the 3D object shape (as shown in our experimental evaluation).

However, we observe two open challenges for applying existing 6D pose estimators that follow a render-and-compare approach [38, 49]: (a) these pose estimators assume that the object class is known, but in Super-CLEVR-3D the scene parser must learn to estimate the object class jointly with the pose; and (b) the scenes in Super-CLEVR-3D are very dense, containing multiple close-by objects that occlude each other. In order to address these two challenges, we introduce several improvements over [38] that enable it to be integrated into a 3D-aware VQA model.

In the following, we first describe neural meshes [49, 38], which were proposed in prior work for pose estimation of single objects following an analysis-by-synthesis approach. Subsequently, we extend this method to complex scenes with densely located and possibly occluded objects to obtain a coherent scene representation, including object parts and attributes.

Preliminaries. Our work builds on and significantly extends Neural Meshes [38], which were introduced for 6D pose estimation through inverse rendering. The task is to jointly estimate the 6D pose (2D location, distance to the camera and 3D pose) of objects in an image.
An object category is represented with a category-level mesh [49] $M_y = \{v_n \in \mathbb{R}^3\}_{n=1}^N$ and a neural texture $T_y \in \mathbb{R}^{N \times c}$ on the surface of the mesh $M_y$ , where $c$ is the dimension of the feature and $y$ is the object category. Given the object 3D pose in camera view $\alpha$ , we can render the neural mesh model $O_y = \{M_y, T_y\}$ into a feature map with soft rasterization [35]: $F_y(\alpha) = \Re(O_y, \alpha)$ . Following prior work in pose estimation [49], we formulate the render-and-compare process as an optimization of the likelihood model:

$$
p\left(F \mid O_y, \alpha_y, B\right) = \prod_{i \in \mathcal{FG}} p\left(f_i \mid O_y, \alpha_y\right) \prod_{i \in \mathcal{BG}} p\left(f_i' \mid B\right) \tag{1}
$$

where $\mathcal{FG}$ and $\mathcal{BG}$ are the sets of foreground and background locations on the 2D feature map and $f_i$ is the feature vector of $F$ at location $i$ . Here the foreground and background likelihoods are modeled as Gaussian distributions.

To train the feature extractor $\Phi$ , the neural texture $\{T_y\}$ and the background model $B$ jointly, we utilize the EM-type learning strategy originally introduced for keypoint detection in CoKe [7]. Specifically, the feature extractor is trained using stochastic gradient descent, while the parameters of the generative model $\{T_y\}$ and $B$ are trained using a momentum update after every gradient step of the feature extractor, which was found to stabilize training convergence.

At inference time, the object pose $\alpha$ can be inferred by minimizing the negative log-likelihood w.r.t. $\alpha$ using gradient descent [38].

Multi-object competition with 3D-NMS. We extend Neural Meshes to predict the 6D object pose and class label in complex multi-object scenes.
In particular, we introduce 3D-Non-Maximum-Suppression (3D-NMS) into the maximum likelihood inference process. This introduces a competition between Neural Meshes of different categories in explaining the feature map. In contrast to classical 2D-NMS, our 3D-NMS also takes into account the distance of each object to the camera and hence naturally enables reasoning about occlusions of objects in the scene.

We denote the 6D pose as $\gamma = \{x, l\}$ , where $x = \{\alpha, \beta\}$ represents the 3D object pose $\alpha$ and the object distance to the camera $\beta$ , and $l$ is the 2D object location in the feature map. We first detect the 6D poses of each object category independently and apply 2D-NMS such that for each 2D location $l'$ in a neighborhood defined by radius $r$ , the predicted 6D pose $\{x, l\}$ yields the largest activation:

$$
\max_x p(F \mid x, l) \quad \text{s.t.} \quad p(F \mid x, l) > p(F \mid x, l'), \ \forall l' \in \left\{l' \mid 0 < |l' - l| < r\right\} \tag{2}
$$

We enable multi-category 6D pose estimation by extending this formulation to a 3D non-maximum suppression (3D-NMS). Using $\mathcal{Y}$ to represent the set of all object categories, we model the category label $y$ from a generative perspective:

$$
\max_x p(F \mid x, l, y) \quad \text{s.t.} \quad p(F \mid x, l, y) > p(F \mid x, l', y), \ \forall l' \in \left\{l' \mid 0 < |l' - l| < r\right\} \tag{3}
$$

$$
\text{and} \quad p(F \mid x, l, y) > p(F \mid x, l, y'), \ \forall y' \neq y \in \mathcal{Y} \tag{4}
$$

Dense scene parsing with greedy proposal generation. Typically, object detection in complex scenes requires well-chosen thresholds and detection hyperparameters. Our render-and-compare approach enables us to avoid tedious hyperparameter tuning by adopting a greedy approach to maximize the model likelihood (Eq. (1)) using a greedy proposal strategy.
In particular, we optimize the likelihood greedily by starting from the object proposal that explains the largest part of the image with the highest likelihood, and subsequently updating the likelihood of the overlapping proposals, taking into account that at every pixel of the feature map only one object can be visible [56]. Formally, given a list of object proposals $\{o_i = (O_{y,i},\alpha_{y,i})\}_{i = 1}^k$ (with predicted category label $y$ and 6D pose $\alpha$ ), we first order the object proposals based on their likelihood score $s = p(F|o_i,B)$ such that $s_i \geq s_j$ for $i < j$ . Based on this ordering, we greedily update the 6D pose $\alpha_j$ and the corresponding proposal likelihood for object $o_j$ by masking out the foreground regions of the previous objects $o_i$ with $1 \leq i \leq j - 1$ . In this way, we can largely avoid missing close-by objects or duplicated detections.

Part and attribute prediction. Given the predicted location and pose of each object, we project the object mesh back onto the image to get the locations of each part. To predict the attributes for the objects and parts, we crop the region containing the object or part from the RGB image, and train an additional CNN classifier using the cropped patches to predict the attributes (color, size, material) and the fine-grained classes (i.e. different sub-types of cars) of each patch using a cross-entropy loss. The reason why this additional CNN classifier is needed, instead of re-using the features from the 6D pose estimator, is that the pose estimation features are learned to be invariant to scale and texture changes, which makes them unsuitable for attribute prediction.

Post-filtering. Finally, we post-process the located objects using the fine-grained CNN classifier. We compare the category labels predicted by the 6D pose estimator with the ones predicted by the CNN classifier, and remove the objects for which these two predictions do not agree.
This post-filtering step helps with the duplicated detections that cannot be fully resolved by the 3D-NMS.

Summary. Fig. 2 provides an overview of our scene parser and Fig. 3 visualizes the intermediate results. With the idea of render-and-compare (shown in the green box of Fig. 2), the model first computes an activation map for each possible object category (Fig. 3II). Next, to infer the category for each object, the category-wise competition 3D-NMS is performed (Fig. 3b) and a post-filtering step is taken to remove mis-detected objects (Fig. 3c). Fig. 3d shows the 6D pose estimation results. To predict parts, we project the 3D object mesh back onto the image to locate parts based on the projected objects (Fig. 3e). In this way, the input image can be parsed into a 3D-aware representation, which is ready for question reasoning with program execution.

# 4.3 Program execution

After the 3D-aware scene representations are predicted for the given image, the question is parsed into a reasoning program, which is then executed on the scene representation to predict the answer. The question parsing follows previous work [54], where an LSTM sequence-to-sequence model is trained to parse the question into its corresponding program. Like PNSVQA [32], each operation in the program is executed on the scene representation in a probabilistic way. In the following, we describe the execution of the new operations we introduced.

The part-related operators are implemented by querying the object-part hierarchy matrix $H$ , so that the object containing a given part (part_to_object) and the parts belonging to the given object (object_to_part) can be determined. The pose-related operators are based on the estimated 3D pose in the object attributes $A^o$ . For the filter and query operations regarding pose, the 3D poses are quantized into four directions (left, right, front, back).
For the pair-wise pose relationships, the azimuth angle between two objects is used to determine the same/opposite/vertical directions. The occlusion-related operations are implemented by querying the occlusion matrix $S$ . Based on the occlusion scores $S_{ij}$ , representing whether entity $i$ is occluded by entity $j$ , we can compute the score of one entity being occluded as $\sum_{j} S_{ij}$ (filter_occludee), find the entities that occlude a given entity (relate_occluded), or find the entities that are occluded by a given entity (relate_occluding).

# 5 Experiments

# 5.1 Evaluated methods

We compare our model with three representative VQA models: FiLM [44], mDETR [25], and PNSVQA [32]. Additionally, we introduce a variant of PNSVQA, PNSVQA+Projection, to analyze the benefit of our generative 6D pose estimation approach.

FiLM [44] Feature-wise Linear Modulation is a representative two-stream feature fusion method. The FiLM model merges the question features extracted with a GRU [12] and the image features extracted with a CNN, and predicts answers based on the merged features.

mDETR [25] mDETR is a pretrained text-guided object detector based on transformers. The model is pretrained with 1.3M image and text pairs and shows strong performance when finetuned on downstream tasks like referring expression understanding or VQA.

PNSVQA [32] PNSVQA is a SoTA neural symbolic VQA model. It parses the scene using Mask R-CNN [18] and an attribute extraction network, then executes the reasoning program on the parsed visual scenes while taking into account the uncertainty of the scene parser.
To extend PNSVQA to the 3D questions in Super-CLEVR-3D, we add a regression head in the attribute extraction network to predict the 3D pose for each object; parts are detected in a similar way as objects by predicting 2D bounding boxes; the part-object associations and occlusions are computed using intersection-over-union: a part belongs to an intersected object if the part label matches the object label, otherwise it is occluded by this object.

PNSVQA+Projection Similar to PNSVQA, this model predicts the 6D poses, categories and attributes using Mask R-CNN and the attribute extraction network. The difference is that the parts and occlusions are predicted by projecting the 3D object models onto the image using the predicted 6D pose and category (the same way we find parts and occlusions in our model). This model helps us ablate the influence of the two components in our model, i.e. 6D pose prediction by render-and-compare, and part/occlusion detection with mesh projection.

# 5.2 Experiment setup

Dataset. Our Super-CLEVR-3D dataset shares the same visual scenes with the Super-CLEVR dataset. We re-render the images with more annotations recorded (camera parameters, part annotations, occlusion maps). The dataset splits follow the Super-CLEVR dataset: 20k images for training, 5k for validation, and 5k for testing. For question generation, we create 9 templates for part questions, 17 templates for pose questions, and 35 templates for occlusion questions (with and without parts). For each of the three types, 8 to 10 questions are generated for each image by randomly sampling the templates. We ensure that the questions are not ill-posed and cannot be answered by taking shortcuts, i.e. the questions contain no redundant reasoning steps, following the no-redundancy setting in [32]. More details, including the list of question templates, can be found in the Appendix.

Implementation details.
We train the 6D pose estimator and the CNN attribute classifier separately. We train the 6D pose estimator (including the contrastive feature backbone and the neural mesh models for each of the 5 classes) for 15k iterations with batch size 15, which takes around 2 hours on an NVIDIA RTX A5000 for each class. The attribute classifier, which is a ResNet50, is shared between objects and parts. It is trained for 100 epochs with batch size 64. During inference, it takes 22s for 6D pose estimation and 10s for object mesh projection for all the objects in one image. During inference of the 6D pose estimator, we assume theta is 0. During 3D-NMS filtering, we choose the radius $r$ as 2, and we also filter the object proposals with a threshold of 15 on the score map.

# 5.3 Quantitative Results

We trained our model and the baselines on Super-CLEVR-3D's training split and report answer accuracies on the test split in Tab. 1. Accuracies for each question type are reported separately.

Table 1: Model accuracies on the Super-CLEVR-3D testing split, reported for each question type, i.e. questions about parts, 3D poses, occlusions between objects, and occlusions between objects and parts.
| | Mean | Part | Pose | Occ. | Part+Occ. |
| --- | --- | --- | --- | --- | --- |
| FiLM [44] | 50.53 | 38.24 | 67.82 | 51.41 | 44.66 |
| mDETR [25] | 55.72 | 41.52 | 71.76 | 64.99 | 50.47 |
| PNSVQA [32] | 64.39 | 50.61 | 87.78 | 65.80 | 53.35 |
| PNSVQA+Projection | 68.15 | 56.30 | 86.70 | 70.70 | 58.90 |
| PO3D-VQA (Ours) | 75.64 | 71.85 | 86.40 | 76.90 | 67.40 |
Comparison with baselines. First, among all the baseline methods, the neural symbolic method PNSVQA performs the best (64.4% accuracy), outperforming the end-to-end methods mDETR and FiLM by a large margin ( $>8\%$ ). This shows the advantage of the step-wise modular reasoning procedure, which agrees with the findings in prior works that modular methods excel on simulated benchmarks that require long-trace reasoning. Second, our model achieves 75.6% average accuracy, which significantly outperforms all the evaluated models. In particular, comparing our PO3D-VQA with its 2D counterpart PNSVQA, we see that the injection of 3D knowledge brings a large performance boost of 11%, suggesting the importance of 3D understanding.

Comparison with PNSVQA variants. By analyzing the results of the PNSVQA variants (PNSVQA, PNSVQA+Projection, and our PO3D-VQA), we show (a) the benefit of estimating object 3D poses using our analysis-by-synthesis method over regression and (b) the benefit of object-part structure knowledge. First, by detecting parts using 3D model projection, PNSVQA+Projection improves the PNSVQA results by $4\%$ , which indicates that locating parts based on objects using the object-part structure knowledge is beneficial. Second, by estimating object 6D poses with our generative render-and-compare method, our PO3D-VQA outperforms PNSVQA+Projection by $7\%$ (from $68.2\%$ to $75.6\%$ ), showing the advantage of our render-and-compare model. Moreover, looking at the per-type results, we find that the improvement of our PO3D-VQA is most significant on the part-related questions ( $21\%$ improvement over PNSVQA) and the part-with-occlusion questions ( $14\%$ ), while the accuracy on pose-related questions does not improve. The reason is that part and occlusion predictions require precise pose predictions for accurate mesh projection, while the pose questions only require a rough pose to determine the facing direction.
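As a concrete illustration of the last point, a rough facing direction can be read off the azimuth alone; the bin boundaries below are our own assumption, not taken from the paper:

```python
def facing_direction(azimuth_deg):
    """Quantize an azimuth angle (degrees) into the four facing
    directions used by the pose-related filter/query operations."""
    a = azimuth_deg % 360.0
    if a < 45.0 or a >= 315.0:
        return "front"
    if a < 135.0:
        return "left"
    if a < 225.0:
        return "back"
    return "right"
```

With 90-degree-wide bins, even a pose estimate that is off by 20 degrees usually lands in the correct bin, which is why pose questions are less sensitive to pose precision than part and occlusion questions.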
# 5.4 Analysis and discussions

To further analyze the advantage of PO3D-VQA over the other PNSVQA variants, we compare the models on questions of different difficulty levels. We show that the benefit of our model is most significant on hard questions. In Fig. 4, we plot the relative accuracy drop ${}^{3}$ of each model on questions with different occlusion ratios and questions with different part sizes.

![](images/9785c0b763ac4ef5aae9f2eee771ee4f3d33598ab74be86b9e3ec506baefe6bb.jpg)
(a) Pose wrt. Occlusion Ratio

![](images/6a1a7808025a53076f7af6956af01793d91a24de74f865aaf2b2aee1deb7f085.jpg)
(b) Part wrt. Part Size
Figure 4: Analysis on questions of different difficulty levels. The plots show the relative accuracy drop of the models, on pose questions w.r.t. different occlusion ratios (a), on part questions w.r.t. different part sizes (b), and on part+occlusion questions w.r.t. different part sizes (c).

![](images/d86930506110bc6556c827effc0fe49b995132f49766f4a99f0bd00e65e585ad.jpg)
(c) Part + Occlusion wrt. Part Size

Questions with different occlusion ratios. We sort pose-related questions into different sub-groups based on their occlusion ratios and evaluate the models on each of the sub-groups. The occlusion ratio $r$ of a question is the minimum of the occlusion ratios of all the objects in its reasoning trace. We choose $r$ from $0\%$ to $30\%$ , in increments of $5\%$ . The results are shown in Fig. 4 (a). Our PO3D-VQA is much more robust to occlusions compared to the other two methods: while the performances of all three models decrease as the occlusion ratio increases, the relative drop of ours is much smaller than that of the others. The results show that our render-and-compare scene parser is more robust to heavy occlusions compared with the discriminative methods.

Questions with different part sizes. Questions about small parts are harder than the ones about larger parts.
We sort the questions into different part size intervals $(s, t)$ , where the largest part that the question refers to has an area (number of pixels occupied) larger than $s$ and smaller than $t$ . We compare the models on the part questions and the part+occlusion questions with different part sizes in Fig. 4 (b) and (c). In (b), the accuracy drop of PO3D-VQA is smaller than that of PNSVQA+Projection and PNSVQA when parts get smaller. In (c), PNSVQA+Projection is slightly better than our model, and both are better than the original PNSVQA.

In summary, by sorting questions into different difficulty levels based on occlusion ratios and part sizes, we show the advantage of our PO3D-VQA on harder questions, indicating that our model is robust to occlusions and small part sizes.

Qualitative results. Fig. 5 shows examples of predictions for our model and the PNSVQA variants. In (a), the question asks about occlusion, but with a slight error in the pose prediction, PNSVQA+Projection misses the occluded bus and predicts the wrong answer, while our model is correct with an accurate pose. In (b), the question refers to the heavily occluded minivan that is difficult to detect, but our model gets the correct prediction thanks to its robustness to occlusions.

(a)
Q: What is the material of the gray object that is occluded? A: rubber
![](images/c25428474835f1455d2e42274a042dd8feb0481fe1c87ea7eeb88417bd82b819.jpg)
Ours: rubber
X PNSVQA+Proj: metal
X PNSVQA: metal
(b)
Q: Which direction is the minivan facing? A: left
Figure 5: Examples of models' predictions. Our model (a) predicts the object pose accurately and (b) is robust to heavy occlusions. Red boxes are for visualization only.
![](images/10d0a62c36e1bd105597f1c1894ee0cb35802268d9a3a5eb9cf325fcaf662f8e.jpg)
Ours: left
PNSVQA+Proj.: right
PNSVQA: front

![](images/425ee9a82844dd239d0f220b691486a7f692378cbd4d2305bf2cc40f3021671f.jpg)
Ours

![](images/08b63c487f0596745b237c96f6ae9cde7be501c9bdff58f3c1efe1c7a4a6eac7.jpg)

![](images/57b7a7daf89189b5bd2b08961658212a2f1a79be34fa87d5ac564a87793811f0.jpg)
PNSVQA+Projection

![](images/3b920f4a1b817cb325d5ae3e0720b815d29d8261b04353009c323ff351b4104b.jpg)

Limitations and failure cases. Due to the difficulties of collecting real images with compositional scenes and 3D annotations, our work is currently limited by its synthetic nature. PO3D-VQA sometimes fails to detect multiple objects if they are from the same category and heavily overlap (see Appendix D for more visualizations). 3D-NMS can effectively improve the dense scene parsing results when objects are from different categories, but conceptually it is limited when objects are from the same category. However, 6D pose estimation in dense scenes is a challenging problem, and many current works on 6D pose estimation still focus on simple scenes with single objects [38, 50, 57].

# 6 Further Discussion

In this section, we discuss two meaningful extensions of our work: the incorporation of z-direction questions and the application of our model to real-world images.

Z-direction questions. While the proposed Super-CLEVR-3D dataset has been designed with 3D-aware questions, all objects within it are placed on the same surface. Introducing variability in the z direction can further enrich our dataset with more comprehensive 3D spatial relationships.

We consider a scenario where objects of the aeroplane category are at different elevations, introducing the $z$ dimension into the spatial relationships (see Fig. 6). This allows us to formulate questions that probe the model's understanding of height relationships and depth perception.
We create a subset containing 100 images and 379 questions and test our PO3D-VQA model directly on it without retraining the 6D parser. On this dataset, our PO3D-VQA model achieves $90.33\%$ accuracy on height relationship questions and $78.89\%$ on depth-related questions, suggesting that our model can successfully handle questions about height. As the baseline models only use the bounding box to determine the spatial relationship between objects, they are not able to determine the height relationships.

![](images/aea234a674cf1b45c10bf7f7bce8c5bb11ae651f51084dab8553d6e49311c8f6.jpg)
Height question: There is a blue object that is below the biplane; what shape is it? Answer: Tandem
Depth question: Is the biplane closer than the red motorbike? Answer: Yes

![](images/ad7af7995390d2805c9a82f17fca0d9f1098cbd21474faa515176af0701c5216.jpg)
Height question: How many objects are above the shiny bicycle? Answer: 2
Depth question: Does the truck have a greater distance than the shiny bus? Answer: Yes
Figure 6: Example images and questions of objects with different elevations.

Extension to real-world images. While our PO3D-VQA model has demonstrated impressive performance on the synthetic Super-CLEVR-3D dataset, an essential research direction is extending it to real images or other 3D VQA datasets (such as GQA and FE-3DGQA). However, it is not trivial to truly evaluate it on these real-world problems; a primary challenge is the lack of 3D annotations and the highly articulated categories (like the human body) in these datasets.

However, we show that our PO3D-VQA model can, in principle, work on realistic images. We manually generate several realistic image samples using the vehicle objects (e.g. car, bus, bicycle) from ImageNet with 3D annotations (see Fig. 7) and real-image backgrounds. In this experiment, the pose estimator is trained on the PASCAL3D+ dataset, and is used to predict the poses of objects from the image before pasting, as shown in (b).
The attribute (color) prediction module is trained on Super-CLEVR-3D and the object shapes are predicted by a ResNet trained on ImageNet. Our model can correctly predict answers to questions about the object pose, parts, and occlusions, e.g. "Which object is occluded by the mountain bike".

![](images/3c7ab2b2a4edf2ff65398d92ba9b2cd094a90606baa0cd9b03174dc124524d23.jpg)
(a1)

![](images/600c20adbaf2f3ca09c96b6bddac41516e57f07da80f08e58ec2129f29b7ae38.jpg)
(b1)
[Pose] Q: Which direction does the mountain bike face to? Our: Left Q: Which direction does the race car face to? Our: Right [Part] Q: What is the color of the fin that belongs to the aeroplane? Our: Red Q: What's the shape of the object that has a wing? Our: Aeroplane [Occ.] Q: Which object is occluded by the mountain bike? Our: Trolleybus Q: What is the color of the occluded object? Our: Red (c1) (c2)
Figure 7: Examples of results on realistic images. Given a realistic image (a1, a2), our model can successfully estimate the 6D poses of objects (b1, b2) and answer the 3D-aware questions (c1, c2).

![](images/bffd1785ca170127d044c23e025de558ef0757818398575e8de9b3e97c0c84a7.jpg)
(a2)

![](images/06cbd9042bc610025cb12f0bae3bfc4e891a3952855a01564d311ead2207761e.jpg)
(b2)
With the dataset, the model, and the experiments, we highlight the benefit of symbolic execution and the importance of 3D understanding for 3D-aware VQA.

# Acknowledgements

We thank the anonymous reviewers for their valuable comments. We thank Qing Liu, Chenxi Liu, Elias Stengel-Eskin, and Benjamin Van Durme for the helpful discussions on an early version of the project. This work is supported by the Office of Naval Research with grants N00014-23-1-2641 and N00014-21-1-2812. A. Kortylewski acknowledges support via his Emmy Noether Research Group funded by the German Science Foundation (DFG) under Grant No. 468670075.

# References

[1] Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. Analyzing the behavior of visual question answering models. arXiv preprint arXiv:1606.07356, 2016.
[2] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077-6086, 2018.
[3] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 39-48, 2016.
[4] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425-2433, 2015.
[5] Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, and Motoaki Kawanabe. Scanqa: 3d question answering for spatial scene understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19129-19139, 2022.
[6] Dzmitry Bahdanau, Harm de Vries, Timothy J O'Donnell, Shikhar Murty, Philippe Beaudoin, Yoshua Bengio, and Aaron Courville. Closure: Assessing systematic generalization of clevr models.
arXiv preprint arXiv:1912.05783, 2019.
[7] Yutong Bai, Angtian Wang, Adam Kortylewski, and Alan Yuille. Coke: Contrastive learning for robust keypoint detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 65-74, 2023.
[8] Remi Cadene, Corentin Dancette, Matthieu Cord, Devi Parikh, et al. Rubi: Reducing unimodal biases for visual question answering. Advances in neural information processing systems, 32, 2019.
[9] Paola Cascante-Bonilla, Hui Wu, Letao Wang, Rogerio S Feris, and Vicente Ordonez. Simvqa: Exploring simulated environments for visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5056-5066, 2022.
[10] Runnan Chen, Youquan Liu, Lingdong Kong, Xinge Zhu, Yuexin Ma, Yikang Li, Yuenan Hou, Yu Qiao, and Wenping Wang. Clip2scene: Towards label-efficient 3d scene understanding by clip. arXiv preprint arXiv:2301.04926, 2023.
[11] Xianjie Chen, Roozbeh Mottaghi, Xiaobai Liu, Sanja Fidler, Raquel Urtasun, and Alan Yuille. Detect what you can: Detecting and representing objects using holistic models and body parts. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1971-1978, 2014.
[12] Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[13] Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1-10, 2018.
[14] Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847, 2016.
[15] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[16] Vipul Gupta, Zhuowan Li, Adam Kortylewski, Chenyu Zhang, Yingwei Li, and Alan Yuille. Swapmix: Diagnosing and regularizing the over-reliance on visual context in visual question answering. arXiv preprint arXiv:2204.02285, 2022.
[17] Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3608-3617, 2018.
[18] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961-2969, 2017.
[19] Yining Hong, Chunru Lin, Yilun Du, Zhenfang Chen, Joshua B Tenenbaum, and Chuang Gan. 3d concept learning and reasoning from multi-view images. arXiv preprint arXiv:2303.11327, 2023.
[20] Yining Hong, Li Yi, Josh Tenenbaum, Antonio Torralba, and Chuang Gan. Ptr: A benchmark for part-based conceptual, relational, and physical reasoning. Advances in Neural Information Processing Systems, 34, 2021.
[21] Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. arXiv preprint arXiv:2307.12981, 2023.
[22] Ronghang Hu, Jacob Andreas, Trevor Darrell, and Kate Saenko. Explainable neural computation via stack neural module networks. In Proceedings of the European conference on computer vision (ECCV), pages 53-69, 2018.
[23] Drew A Hudson and Christopher D Manning. Compositional attention networks for machine reasoning. In International Conference on Learning Representations, 2018.
[24] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2901-2910, 2017.
[25] Aishwarya Kamath, Mannat Singh, Yann LeCun, Gabriel Synnaeve, Ishan Misra, and Nicolas Carion. Mdetr: Modulated detection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1780-1790, 2021.
[26] Corentin Kervadec, Grigory Antipov, Moez Baccouche, and Christian Wolf. Roses are red, violets are blue... but should vqa expect them to? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2776-2785, 2021.
[27] Corentin Kervadec, Theo Jaunet, Grigory Antipov, Moez Baccouche, Romain Vuillemot, and Christian Wolf. How transferable are reasoning patterns in vqa? 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4205-4214, 2021.
[28] Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. Bilinear attention networks. Advances in Neural Information Processing Systems, 31, 2018.
[29] Satwik Kottur, José MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. Clevr-dialog: A diagnostic dataset for multi-round reasoning in visual dialog. arXiv preprint arXiv:1903.03166, 2019.
[30] Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu. Relation-aware graph attention network for visual question answering. In Proceedings of the IEEE/CVF international conference on computer vision, pages 10313-10322, 2019.
[31] Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. Oscar: Object-semantics aligned pre-training for vision-language tasks. ECCV 2020, 2020.
+[32] Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei Ma, Benjamin Van Durme, and Alan Yuille. Super-clevr: A virtual benchmark to diagnose domain robustness in visual reasoning. arXiv preprint arXiv:2212.00259, 2022. +[33] Qing Liu, Adam Kortylewski, Zhishuai Zhang, Zizhang Li, Mengqi Guo, Qihao Liu, Xiaoding Yuan, Jiteng Mu, Weichao Qiu, and Alan Yuille. Learning part segmentation through unsupervised domain adaptation from synthetic vehicles. In CVPR, 2022. +[34] Runtao Liu, Chenxi Liu, Yutong Bai, and Alan L Yuille. Clevr-ref+: Diagnosing visual reasoning with referring expressions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4185-4194, 2019. +[35] Shichen Liu, Tianye Li, Weikai Chen, and Hao Li. Soft rasterizer: A differentiable renderer for image-based 3d reasoning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7708-7717, 2019. +[36] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32, 2019. +[37] Tiange Luo, Chris Rockwell, Honglak Lee, and Justin Johnson. Scalable 3d captioning with pretrained models. arXiv preprint arXiv:2306.07279, 2023. + +[38] Wufei Ma, Angtian Wang, Alan Yuille, and Adam Kortylewski. Robust category-level 6d pose estimation with coarse-to-fine rendering of neural features. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part IX, pages 492-508. Springer, 2022. +[39] Xiaojian Ma, Silong Yong, Zilong Zheng, Qing Li, Yitao Liang, Song-Chun Zhu, and Siyuan Huang. Sqa3d: Situated question answering in 3d scenes. In International Conference on Learning Representations, 2023. +[40] Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B. Tenenbaum, and Jiajun Wu. 
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision. In International Conference on Learning Representations, 2019. +[41] Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, and Ji-Rong Wen. Counterfactual vqa: A cause-effect look at language bias. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12700-12710, June 2021. +[42] Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xiansheng Hua, and Ji-Rong Wen. Counterfactual vqa: A cause-effect look at language bias. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12695-12705, 2021. +[43] Songyou Peng, Kyle Genova, Chiyu "Max" Jiang, Andrea Tagliasacchi, Marc Pollefeys, and Thomas Funkhouser. Openscene: 3d scene understanding with open vocabularies. In CVPR, 2023. +[44] Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. +[45] Mengye Ren, Ryan Kiros, and Richard Zemel. Exploring models and data for image question answering. Advances in neural information processing systems, 28, 2015. +[46] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28, 2015. +[47] Leonard Salewski, A Koepke, Hendrik Lensch, and Zeynep Akata. Clevr-x: A visual reasoning dataset for natural language explanations. In International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers, pages 69-88. Springer, 2022. +[48] Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490, 2019. +[49] Angtian Wang, Adam Kortylewski, and Alan Yuille. 
Nemo: Neural mesh models of contrastive features for robust 3d pose estimation. In International Conference on Learning Representations, 2021. +[50] Yu Xiang, Roozbeh Mottaghi, and Silvio Savarese. Beyond Pascal: A benchmark for 3d object detection in the wild. In IEEE winter conference on applications of computer vision, pages 75-82. IEEE, 2014. +[51] Xu Yan, Zhihao Yuan, Yuhao Du, Yinghong Liao, Yao Guo, Zhen Li, and Shuguang Cui. Clevr3d: Compositional language and elementary visual reasoning for question answering in 3d real-world scenes. arXiv preprint arXiv:2112.11691, 2021. +[52] Shuquan Ye, Dongdong Chen, Songfang Han, and Jing Liao. 3d question answering. IEEE Transactions on Visualization and Computer Graphics, 2022. +[53] Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, and Joshua B Tenenbaum. Clevrer: Collision events for video representation and reasoning. arXiv preprint arXiv:1910.01442, 2019. +[54] Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Joshua B Tenenbaum. Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. In Advances in Neural Information Processing Systems (NIPS), 2018. +[55] Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6281–6290, 2019. +[56] Xiaoding Yuan, Adam Kortylewski, Yihong Sun, and Alan Yuille. Robust instance segmentation through reasoning about multi-object occlusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11141-11150, 2021. + +[57] Yanjie Ze and Xiaolong Wang. Category-level 6d object pose estimation in the wild: A semi-supervised learning approach and a new dataset. Advances in Neural Information Processing Systems, 35:27469-27483, 2022. +[58] Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. 
Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5317-5327, 2019.
[59] Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Making visual representations matter in vision-language models. CVPR 2021, 2021.
[60] Xingyi Zhou, Arjun Karpur, Linjie Luo, and Qixing Huang. Starmap for category-agnostic keypoint and viewpoint estimation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 318-334, 2018.
[61] Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7w: Grounded question answering in images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4995-5004, 2016.

# A Dataset Details

# A.1 Part list

In Super-CLEVR-3D, the parts of each object are listed in Tab. 2.

# A.2 Question templates

Part Questions We collect 9 part-based templates for generating the part-based questions, as shown in Tab. 4. In the table, <attribute> denotes one attribute from shape, material, color or size to be queried, and <object> (or <object 1>, <object 2>) denotes one object to be filtered with a combination of shape, material, color, and size. Different from the pose and occlusion questions, we do not query the size of the object.

3D Pose Questions We design 17 3D-pose-based templates for question generation (as shown in Tab. 5). The 17 templates consist of: 1 template querying the pose itself; 4 templates querying shape, material, color, or size, where the pose is among the filtering conditions; and 12 templates querying shape, material, color, or size, where a pose relationship is the filtering condition.

Occlusion Questions There are 35 templates for occlusion question generation, as shown in Tab. 6, covering occlusion of objects and occlusion of parts.

The occlusion of objects consists of occlusion status and occlusion relationship.
For the occlusion status of an object, there are 4 templates to query the shape, color, material, and size respectively. There are 2 occlusion relationships of objects (occluded and occluding), and each of them has 4 templates.

Similarly, we create templates about the occlusion status and occlusion relationships of parts. The only difference between objects and parts is that parts have only 3 attributes to be queried: shape (name), material, and color.

# A.3 Statistics

In total, we generate 314,988 part questions, 314,986 pose questions, 228,397 occlusion questions, and 314,988 occlusion questions with parts.

In Fig. 8, we show the distributions of all attributes of objects, including categories, colors, sizes, and materials.

![](images/60469839bd0ebdfeb8ab9a042a7379f0b06de9f7b08c55ba530e97762c67eb3c.jpg)
Figure 8: Distributions of all the attributes of objects, including categories, colors, sizes, and materials.

# B Implementation details for the baselines

FiLM and mDETR are trained with the default settings of their official implementations. FiLM is trained for 100k iterations with batch size 256. mDETR is trained for 30 epochs with batch size 64 using 2 GPUs for both the grounding stage and the answer classification stage.

For P-NSVQA, we first train a MaskRCNN for 30k iterations with batch size 16 to detect the objects and parts, then train the attribute extraction model (using a ResNet-50 backbone) for 100 epochs with batch size 64. Different fully connected (FC) layers are used for the different types of questions: the part questions and occlusion questions have 4 FC layers for shape, material, color, and size classification (the parts also have size annotations in the generated scene files, but these are meaningless for question answering).
The pose question includes pose prediction of an object, so we add a new FC layer with 1 output dimension to predict the rotation, trained with an MSE loss. For the different types of questions (part, pose, and occlusion), the MaskRCNN and the attribute extraction model are trained separately.

In the PNSVQA+Projection baseline, we first train a MaskRCNN to detect all of the objects and predict their 3D poses (azimuth, elevation, and theta) without category labels in the scene. This MaskRCNN is trained for 15,000 iterations with batch size 8, using an SGD optimizer with a learning rate of 0.02, momentum of 0.9, and weight decay of 0.0001. Then, with the same setting as our PO3D-VQA, we train a CNN to classify the attributes of objects and parts.

# C Detailed results of Analysis

As an extension of Section 5.4 in the main paper, we include the numerical values of accuracy and relative drop for the pose, part, and occlusion + part questions with respect to occlusion ratio or part size. The results are shown in Tab. 7, Tab. 9, and Tab. 8.

# D Failure cases

Fig. 9 shows examples of failure cases of our PO3D-VQA, as described in Section 5.4 of the main paper. In (a) and (b), PO3D-VQA misses the bicycle behind when the two bicycles overlap heavily; the same holds for the two motorbikes in (c) and (d).

![](images/2198fb95ba84f423f9b286911203b9cabb9f7879c94e63e5c2566eac36fe26d6.jpg)
(a)

![](images/b215ba34835a47da4eab3f2a07ea58ff001fc0022f62e8d2768134148f1f2ccf.jpg)
(b)
Figure 9: Failure cases of our PO3D-VQA. (a) and (c) are the input images with the objects missed by the model. (b) and (d) are the re-projection results from the model.

![](images/5205f4227568aec91e226f4b9a635d160690549c1fbc01f2ec83ac30b674cc6c.jpg)
(c)

![](images/f3e65e35a0bf16f4301e7a42822fbd23051e6ccbe9fe869c0a9ffc99bba6b768.jpg)
(d)

Table 2: List of objects and parts.
| shape | part list |
| --- | --- |
| airliner | left door, front wheel, fin, right engine, propeller, back left wheel, left engine, back right wheel, left tailplane, right door, right tailplane, right wing, left wing |
| biplane | front wheel, fin, propeller, left tailplane, right tailplane, right wing, left wing |
| jet | left door, front wheel, fin, right engine, propeller, back left wheel, left engine, back right wheel, left tailplane, right tailplane, right wing, left wing |
| fighter | fin, right engine, left engine, left tailplane, right tailplane, right wing, left wing |
| utility bike | left handle, brake system, front wheel, left pedal, right handle, back wheel, saddle, carrier, fork, right crank arm, front fender, drive chain, back fender, left crank arm, side stand, right pedal |
| tandem bike | rearlight, front wheel, back wheel, fork, front fender, back fender |
| road bike | left handle, brake system, front wheel, left pedal, right handle, back wheel, saddle, fork, right crank arm, drive chain, left crank arm, right pedal |
| mountain bike | left handle, brake system, front wheel, left pedal, right handle, back wheel, saddle, fork, right crank arm, drive chain, left crank arm, right pedal |
| articulated bus | left tail light, front license plate, front right door, back bumper, right head light, front left wheel, left mirror, right tail light, back right door, back left wheel, back right wheel, back license plate, front right wheel, left head light, right mirror, trunk, mid right door, roof |
| double bus | left tail light, front license plate, front right door, front bumper, back bumper, right head light, front left wheel, left mirror, right tail light, back left wheel, back right wheel, back license plate, mid left door, front left door, front right wheel, left head light, right mirror, trunk, mid right door, roof |
| regular bus | left tail light, front license plate, front right door, front bumper, back bumper, right head light, front left wheel, left mirror, right tail light, back right door, back left wheel, back right wheel, back license plate, front right wheel, left head light, right mirror, trunk, mid right door, roof |
| school bus | left tail light, front license plate, front right door, front bumper, back bumper, right head light, front left wheel, left mirror, right tail light, back left wheel, back right wheel, back license plate, mid left door, front right wheel, left head light, right mirror, roof |
| truck | front left door, left tail light, left head light, back right wheel, right head light, front bumper, right mirror, front license plate, front right wheel, back bumper, left mirror, back left wheel, right tail light, hood, trunk, front left wheel, roof, front right door |
| suv | front left door, left tail light, left head light, back left door, back right wheel, right head light, front bumper, right mirror, front right wheel, back bumper, left mirror, back left wheel, right tail light, hood, trunk, front left wheel, roof, front right door, back license plate |
| minivan | front left door, left tail light, left head light, back left door, back right wheel, right head light, front bumper, right mirror, front license plate, front right wheel, back bumper, left mirror, back left wheel, right tail light, hood, trunk, front left wheel, back right door, roof, front right door, back license plate |
| sedan | front left door, left tail light, left head light, back left door, back right wheel, right head light, front bumper, right mirror, front license plate, front right wheel, back bumper, left mirror, back left wheel, right tail light, hood, trunk, front left wheel, back right door, roof, front right door, back license plate |
| wagon | front left door, left tail light, left head light, back left door, back right wheel, right head light, front bumper, right mirror, front license plate, front right wheel, back bumper, left mirror, back left wheel, right tail light, hood, trunk, front left wheel, back right door, roof, front right door, back license plate |
| chopper | left handle, center headlight, front wheel, right handle, back wheel, center taillight, left mirror, gas tank, front fender, fork, drive chain, left footrest, right mirror, windscreen, engine, back fender, right exhaust, seat, panel, right footrest |
| scooter | left handle, center headlight, front wheel, right handle, back cover, back wheel, center taillight, left mirror, front cover, fork, drive chain, right mirror, engine, left exhaust, back fender, seat, panel |
| cruiser | left handle, center headlight, right headlight, right taillight, front wheel, right handle, back cover, back wheel, left taillight, left mirror, left headlight, gas tank, front cover, front fender, fork, drive chain, left footrest, license plate, right mirror, windscreen, left exhaust, back fender, right exhaust, seat, panel, right footrest |
| dirtbike | left handle, front wheel, right handle, back cover, back wheel, gas tank, front cover, front fender, fork, drive chain, left footrest, engine, right exhaust, seat, panel, right footrest |

Table 4: Templates of parts questions
| Templates | Count |
| --- | --- |
| What is the <attribute> of the <part> of the <object>? | 3 |
| What is the <attribute> of the <object> that has a <part>? | 3 |
| What is the <attribute> of the <part 1> that belongs to the same object as the <part 2>? | 3 |

Table 5: Templates of pose questions
| Templates | Count |
| --- | --- |
| Which direction the <object> is facing? | 1 |
| What is the <attribute> of the <object> which face to the <direction>? | 4 |
| What is the <attribute> of the <object 1> that faces the same direction as a <object 2>? | 4 |
| What is the <attribute> of the <object 1> that faces the opposite direction as a <object 2>? | 4 |
| What is the <attribute> of the <object 1> that faces the vertical direction as a <object 2>? | 4 |

Table 6: Templates of occlusion questions
| Templates | Count |
| --- | --- |
| What is the <attribute> of the <object> that is occluded? | 4 |
| What is the <attribute> of the <object 1> that is occluded by the <object 2>? | 4 |
| What is the <attribute> of the <object 1> that occludes the <object 2>? | 4 |
| Is the <part> of the <object> occluded? | 1 |
| Which part of the <object> is occluded? | 1 |
| What is the <attribute> of the <object> whose <part> is occluded? | 4 |
| What is the <attribute> of the <part> which belongs to an occluded <object>? | 3 |
| What is the <attribute> of the <part 1> which belongs to the <object> whose <part 2> is occluded? | 3 |
| Is the <part> of the <object 1> occluded by the <object 2>? | 1 |
| What is the <attribute> of the <object 1> whose <part> is occluded by the <object 2>? | 4 |
| What is the <attribute> of the <part> which belongs to <object 1> which is occluded by the <object 2>? | 3 |
| What is the <attribute> of the <part 1> which belongs to the same object whose <part 2> is occluded by the <object 2>? | 3 |

Table 7: Accuracy value and relative drop for pose questions w.r.t. occlusion ratio
| Method | Metric | 0 | 5 | 10 | 15 | 20 | 25 | 30 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PNSVQA | Accuracy | 87.43 | 74.09 | 74.09 | 63.16 | 62.01 | 60.33 | 58.52 |
| PNSVQA | Drop | 0.00% | 15.26% | 15.26% | 27.76% | 29.08% | 31.00% | 33.07% |
| PNSVQA + Projection | Accuracy | 86.30 | 74.61 | 67.20 | 66.78 | 60.26 | 56.52 | 55.56 |
| PNSVQA + Projection | Drop | 0.00% | 13.54% | 22.13% | 22.62% | 30.17% | 34.51% | 35.63% |
| Ours | Accuracy | 86.43 | 86.05 | 84.32 | 75.00 | 79.44 | 73.22 | 67.98 |
| Ours | Drop | 0.00% | 0.44% | 2.44% | 13.22% | 8.09% | 15.28% | 21.35% |

Table 8: Accuracy value and relative drop for occlusion + part questions w.r.t. part size
| Method | Metric | max | 300 | 150 | 100 | 50 | 20 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PNSVQA | Accuracy | 58.18 | 54.98 | 54.05 | 52.09 | 45.20 | 21.28 |
| PNSVQA | Drop | 0.00% | 5.49% | 7.10% | 10.47% | 22.31% | 63.43% |
| PNSVQA + Projection | Accuracy | 61.85 | 50.64 | 56.77 | 53.97 | 55.29 | 45.83 |
| PNSVQA + Projection | Drop | 0.00% | 18.11% | 8.20% | 12.74% | 10.60% | 25.89% |
| Ours | Accuracy | 81.68 | 75.32 | 77.20 | 71.54 | 67.00 | 53.19 |
| Ours | Drop | 0.00% | 7.78% | 5.49% | 12.41% | 17.97% | 34.88% |

Table 9: Accuracy value and relative drop for part questions w.r.t. part size
| Method | Metric | max | 300 | 150 | 100 | 50 | 20 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PNSVQA | Accuracy | 57.31 | 51.00 | 37.50 | 44.18 | 40.85 | 29.73 |
| PNSVQA | Drop | 0.00% | 11.02% | 34.57% | 22.92% | 28.73% | 48.12% |
| PNSVQA + Projection | Accuracy | 58.89 | 57.54 | 42.64 | 43.20 | 46.73 | 38.67 |
| PNSVQA + Projection | Drop | 0.00% | 2.30% | 27.60% | 26.65% | 20.65% | 34.34% |
| Ours | Accuracy | 64.04 | 64.80 | 60.16 | 57.03 | 49.05 | 55.41 |
| Ours | Drop | 0.00% | -1.19% | 6.06% | 10.94% | 23.41% | 13.48% |
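The Drop rows in Tables 7-9 are the relative accuracy drop with respect to the easiest setting (occlusion ratio 0 or maximal part size). A minimal sketch of this computation (`relative_drop` is a hypothetical helper, not the paper's code; the input values are the PNSVQA accuracies from Table 7):

```python
def relative_drop(accuracies):
    """Relative drop (%) of each accuracy w.r.t. the first (easiest) setting."""
    base = accuracies[0]
    return [round(100 * (base - acc) / base, 2) for acc in accuracies]

# PNSVQA pose-question accuracies for occlusion ratios 0..30 (Table 7).
pnsvqa_pose = [87.43, 74.09, 74.09, 63.16, 62.01, 60.33, 58.52]
print(relative_drop(pnsvqa_pose))
```

Up to last-digit rounding of the published accuracies, this reproduces the corresponding Drop row (0.00%, 15.26%, 15.26%, 27.76%, 29.08%, 31.00%, 33.07%).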
# 3D Copy-Paste: Physically Plausible Object Insertion for Monocular 3D Detection

Yunhao Ge $^{\diamond \dagger}$ , Hong-Xing Yu $^{\diamond}$ , Cheng Zhao $^{\S}$ , Yuliang Guo $^{\S}$ , Xinyu Huang $^{\S}$ , Liu Ren $^{\S}$ , Laurent Itti $^{\dagger}$ , Jiajun Wu $^{\diamond}$

$^{\diamond}$ Stanford University $^{\dagger}$ University of Southern California

$^{\S}$ Bosch Research North America, Bosch Center for Artificial Intelligence (BCAI)

{yunhaoge, koven, jiajunwu}@cs.stanford.edu {yunhaoge, itti}@usc.edu

{Cheng.Zhao, Yuliang.Guo2, Xinyu.Huang, Liu.Ren}@us.bosch.com
# Abstract

A major challenge in monocular 3D object detection is the limited diversity and quantity of objects in real datasets. While augmenting real scenes with virtual objects holds promise to improve both the diversity and quantity of the objects, it remains elusive due to the lack of an effective 3D object insertion method in complex real captured scenes. In this work, we study augmenting complex real indoor scenes with virtual objects for monocular 3D object detection. The main challenge is to automatically identify plausible physical properties for virtual assets (e.g., locations, appearances, sizes, etc.) in cluttered real scenes. To address this challenge, we propose a physically plausible indoor 3D object insertion approach to automatically copy virtual objects and paste them into real scenes. The resulting objects in scenes have 3D bounding boxes with plausible physical locations and appearances. In particular, our method first identifies physically feasible locations and poses for the inserted objects to prevent collisions with the existing room layout. Subsequently, it estimates spatially-varying illumination for the insertion location, enabling the immersive blending of the virtual objects into the original scene with plausible appearances and cast shadows. We show that our augmentation method significantly improves existing monocular 3D object detection models and achieves state-of-the-art performance. For the first time, we demonstrate that a physically plausible 3D object insertion, serving as a generative data augmentation technique, can lead to significant improvements for discriminative downstream tasks such as monocular 3D object detection. Project website: https://gyhandy.github.io/3D-Copy-Paste/.

# 1 Introduction

Monocular indoor 3D object detection methods have shown promising results in various applications such as robotics and augmented reality [Yang and Scherer, 2019, Chen et al., 2017].
However, the deployment of these methods is potentially constrained by the limited diversity and quantity of objects in existing real datasets. For example, in the SUN RGB-D dataset [Song et al., 2015], the bathtub category has fewer than 500 annotations, compared to over 19,000 for the chair category. This may be due to the difficulty of acquiring and labeling substantial indoor scene datasets with diverse 3D object annotations [Silberman et al., 2012, Song et al., 2015, Dai et al., 2017].

Data augmentation techniques have been widely utilized in 2D detection and segmentation tasks to improve the diversity and quantity of the available training data [Dwibedi et al., 2017, Ge et al., 2022a, Ghiasi et al., 2021, Ge et al., 2022b, 2023]. However, it is non-trivial to scale 2D augmentation methods to 3D scenes due to physical constraints in real 3D scenes. In particular, technical challenges

![](images/574c73e0078ea4b7eb8370a6463eb7d0e10b010668202dac3c6337684cf3b4b5.jpg)
Figure 1: Overall pipeline of physically plausible object insertion for monocular 3D object detection: Our approach copies external 3D objects (e.g., from Objaverse [Deitke et al., 2022]) and pastes them into indoor scene datasets (e.g., SUN RGB-D [Song et al., 2015]) in a physically plausible manner. The augmented indoor scene dataset, enriched with inserted 3D objects, is then used to train monocular 3D object detection models, resulting in significant performance improvements.

emerge especially in how to maintain physical plausibility for: (1) Collision and Occlusion Handling: In 3D data augmentation, handling collisions between objects is more challenging than in 2D data. Properly managing collisions is essential to prevent artifacts and ensure that objects appear as natural and coherent parts of the scene. (2) Illumination and Shading: For 3D data, augmenting objects requires careful consideration of the lighting conditions in the scene to create realistic shading and reflections.
This involves estimating the spatially-varying illumination and adapting the appearance of the inserted objects to maintain visual coherence. (3) Geometric Consistency: In 3D data augmentation, maintaining geometric consistency is crucial to ensure that the augmented objects fit naturally within the scene. Unlike 2D augmentation, which deals with flat images, 3D augmentation must consider spatial relationships, object orientations, and their interaction with the surrounding environment.

In this paper, we explore a novel approach, 3D Copy-Paste, to achieve 3D data augmentation in indoor scenes. We employ physically plausible indoor 3D object insertion to automatically generate large-scale annotated 3D objects with both plausible physical location and illumination. Unlike outdoor scenarios, indoor environments present unique challenges: (1) complex spatial layouts, notably cluttered backgrounds and limited space for object placement, which require a meticulously crafted method for automated object positioning (ensuring realistic position, size, and pose), and (2) intricate lighting effects, such as soft shadows, inter-reflections, and long-range light source dependency, which necessitate sophisticated lighting considerations for harmonious object insertion.

Fig. 1 shows our overall pipeline. In our approach, we take advantage of existing large-scale 3D object datasets, from which we copy simulated 3D objects and paste them into real scenes. To address the challenges associated with creating physically plausible insertions, we employ a three-step process. First, we analyze the scene by identifying all suitable planes for 3D object insertion. Next, we estimate the object's pose and size, taking into account the insertion site to prevent collisions. Lastly, we estimate the spatially-varying illumination to render realistic shading and shadows for the inserted object, ensuring that it is seamlessly blended into the scene.
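The three-step process above can be wired together roughly as follows; every function body here is a hypothetical stub standing in for the actual modules (plane reconstruction, constrained parameter search, and illumination-aware rendering), not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Insertion:
    position: tuple  # bottom-center (x, y, z) of the inserted object
    size: float      # 3D bounding-box height
    pose: float      # rotation about the Z-axis, in radians

def find_ground_plane(scene):
    # Step 1 (stub): reconstruct planes and pick the floor.
    return scene["floor"]

def search_insertion_params(floor, obj):
    # Step 2 (stub): collision-aware search over position, size, and pose.
    cx, cy, cz = floor["center"]
    return Insertion(position=(cx, cy, cz), size=obj["height"], pose=0.0)

def relight_and_render(scene, obj, params):
    # Step 3 (stub): retrieve the local environment map and render the object.
    return {"image": scene["image"], "bbox3d": params}

def insert_object(scene, obj):
    floor = find_ground_plane(scene)
    params = search_insertion_params(floor, obj)
    return relight_and_render(scene, obj, params)
```

The three stages are deliberately kept as separate functions, mirroring the modular pipeline of Fig. 1.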
Our proposed method augments existing indoor scene datasets, such as SUN RGB-D [Song et al., 2015], by incorporating large-scale 3D object datasets like Objaverse [Deitke et al., 2022] using our 3D Copy-Paste approach. Our method is an offline augmentation method that creates a new augmented dataset. The monocular 3D object detection model, ImVoxelNet [Rukhovich et al., 2022], trained on this augmented dataset, achieves new state-of-the-art performance on the challenging SUN RGB-D dataset. We systematically evaluate the influence of the inserted objects' physical position and illumination on the downstream performance of the final monocular 3D object detection model. Our results suggest that physically plausible 3D object insertion can serve as an effective generative data augmentation technique, leading to state-of-the-art performance in discriminative downstream tasks such as monocular 3D object detection.

We make three main contributions: (1) We introduce 3D Copy-Paste, a novel physically plausible indoor object insertion technique for automatically generating large-scale annotated 3D objects. This approach ensures the plausibility of the objects' physical location, size, pose, and illumination within the scene. (2) We demonstrate that training a monocular 3D object detection model on a dataset augmented using our 3D Copy-Paste technique results in state-of-the-art performance. Our results show that a physically plausible 3D object insertion method can serve as an effective generative data augmentation technique, leading to significant improvements in discriminative downstream monocular 3D object detection tasks. (3) We conduct a systematic evaluation of the effect of the location and illumination of the inserted objects on the performance of the downstream monocular 3D object detection model. This analysis provides valuable insights into the role of these factors in the overall effectiveness of our proposed approach.
# 2 Related Works

# 2.1 Monocular 3D Object Detection

Monocular 3D object detection estimates the 3D location, orientation, and dimensions (3D bounding box) of objects from a single 2D image. It has garnered significant attention in recent years due to its potential applications in autonomous driving, robotics, and augmented reality. There are many works on monocular 3D detection in driving scenarios, such as 3DOP [Chen et al., 2015], MLFusion [Xu and Chen, 2018], M3D-RPN [Brazil and Liu, 2019], MonoDIS [Simonelli et al., 2019], Pseudo-LiDAR [Wang et al., 2019], FCOS3D [Wang et al., 2021], SMOKE [Liu et al., 2020], RTM3D [Li et al., 2020a], PGD [Wang et al., 2022a], and CaDDN [Reading et al., 2021]. Specifically, Geometry-based Approaches: MV3D [Chen et al., 2017] utilized both LiDAR-based point clouds and geometric cues from images for 3D object detection. Mousavian et al. [2017] introduced a method that regresses object properties such as dimensions, orientation, and location from 2D bounding boxes using geometric constraints. In the context of indoor scenes, multi-task learning has gained traction. Recent studies, including PointFusion by Xu et al. [2018], have amalgamated 3D object detection with tasks like depth estimation or semantic segmentation to improve performance. Total3D [Nie et al., 2020] and Implicit3D [Zhang et al., 2021] use end-to-end solutions to jointly reconstruct room layout, object bounding boxes, and meshes from a single image. ImVoxelNet [Rukhovich et al., 2022] achieves state-of-the-art performance by using the image-voxels projection for monocular 3D object detection.

# 2.2 3D Data Augmentation

Data augmentation in 3D has become increasingly vital for enhancing performance across various 3D perception tasks. Most work focuses on outdoor scenes [Zhang et al., 2020, Lian et al., 2022, Abu Alhaija et al., 2018, Chen et al., 2021, Tong et al., 2023]. Geometric Transformations: Wu et al.
[2015] applied rotations, translations, and scaling to augment the ModelNet dataset, improving classification and retrieval tasks. Point Cloud Augmentation: Engelcke et al. [2017] proposed techniques such as random point removal, Gaussian noise, and point cloud interpolation for augmenting LiDAR datasets, enhancing object detection and segmentation performance. Generative Model-based Augmentation: Smith and Meger [2017] used a conditional GAN to generate diverse and realistic 3D objects. Similarly, Achlioptas et al. [2018] employed a VAE for learning a generative model of 3D shapes for shape completion and exploration tasks. However, while 3D generative models can achieve object-level augmentation, they are not scalable to scene-level augmentation. 2D generative models can produce highly realistic images, but they do not provide physically plausible 3D labels. 3D Common Corruptions [Kar et al., 2022] uses 3D information to generate real-world corruptions for 2D datasets, which can evaluate model robustness and be used as data augmentation for model training, but it does not support 3D detection because it does not introduce new 3D object content.

# 2.3 Illumination Estimation

Illumination estimation is a critical focus within computer vision research, given its crucial role in various applications. Li et al. [2020b] addressed the inverse rendering problem for complex indoor scenes, estimating spatially-varying lighting, SVBRDF, and shape from a single image. Meanwhile, a differentiable ray tracing method combined with deep learning was proposed for the learning-based inverse rendering of indoor scenes [Zhu et al., 2022]. Additionally, research has been conducted on using deep learning for indoor lighting estimation, with methods like Deep Parametric Indoor Lighting Estimation offering enhanced accuracy and efficiency [Gardner et al., 2019].
Furthermore,

![](images/9f3af0f0c97f2d171400bc750f3dc99ebdf7d343aa14dd101ba4b74febdfce88.jpg)
Figure 2: 3D Copy-Paste method overview: Our method (a) processes the input RGB image and depth data to reconstruct floor planes that can accommodate inserted objects. (b) Using the reconstructed planes and information about objects in the original scene, we estimate a physically plausible position, pose, and size for the inserted objects, ensuring they do not collide with existing objects. (c) We predict the spatially-varying lighting of the scene. (d) By registering the insertion position determined in (b) to the spatially-varying lighting, our light estimation module (d) refines an HDR environment map to represent the lighting information for the inserted objects. (e) The insertion rendering module takes the position, pose, size, and lighting as input and inserts a 3D object into the real scene, adjusting the object's lighting and shadows accordingly to ensure it seamlessly integrates as a natural and coherent part of the scene.

Wang et al. [2022b] introduced Neural Light Field Estimation, a method that effectively models complex lighting conditions for virtual object insertion in street scenes. These studies underscore the potential of machine learning in improving illumination estimation capabilities in rendering and computer vision tasks.

# 3 Methods

This section presents our proposed physically plausible indoor 3D object insertion approach. Fig. 2 shows our 3D Copy-Paste method overview. Section 3.1 addresses the question of "where and how to place the object", detailing the process of estimating suitable insertion positions, poses, and sizes for the objects while avoiding collisions with existing objects. Section 3.2 explains "what illumination should we add to the object": estimate the scene's spatially-varying illumination and render the inserted objects with realistic lighting and shadows.
Section 3.3 describes how we create an augmented dataset using the inserted objects and train monocular 3D object detection models.

# 3.1 Where and how: Physically Plausible Position, Pose, and Size Estimation

This section describes handling the first challenge of avoiding collisions during insertion by estimating physically plausible position, pose, and size parameters.

# 3.1.1 Ground Plane Selection

Given a scene and a 3D object to insert, the initial question is where to place the object. To accommodate a new object, we must identify and understand the available regions where the object can be situated. We perform plane reconstruction to comprehend the scene's layout, and subsequently we estimate physically plausible key parameters such as position, size, and pose. Fig. 2(a) presents an overview of our plane reconstruction and selection module, which takes an RGB image and depth data as input and predicts all potential planes, then narrows down to the ground plane.

To get a rough plane reconstruction, we followed the plane extraction method using Agglomerative Hierarchical Clustering (AHC) described in Feng et al. [2014]. There are three main steps: (1)
Specifically, we first partition the entire dense mesh into different planar clusters based on the planes extracted with AHC, treating them as plane primitives. We then create a texture patch for each plane and sample points on it, followed by executing a global optimization process to maximize the photometric consistency of sampled points across frames by optimizing camera poses, plane parameters, and texture colors. Further, we optimize the mesh geometry by maximizing consistency between geometry and plane primitives, further preserving the original scene's sharp features, such as edges and corners of plane intersections. Finally, we get the reconstructed plane with the geometry parameters (e.g., surface normal). + +To select a proper plane for insertion, we first identify all horizontal planes based on surface direction and the standard deviation along the Z-axis. Specifically, there are two constraints for considering a plane as horizontal: (1) The plane must have a surface normal aligned with the positive direction of the Z-axis (opposite of the gravity vector), and (2) the standard deviation along the Z-axis should be smaller than a predefined threshold. In our scenario, we aim to insert furniture into the scene, such as the ten interest classes in the SUN RGB-D dataset [Song et al., 2015]: sofa, bed, chair, desk, table, nightstand, dresser, bookshelf, toilet, and bathtub. Consequently, we must identify the floor plane by selecting the horizontal plane with the lowest average Z value among all detected horizontal planes. + +# 3.1.2 Constrained Insertion Parameter Search + +To address the question of where and how to place the object, we estimate specific insertion parameters: position $(p)$ , size $(s)$ , and pose $(o)$ . We propose an efficient constrained insertion parameter searching algorithm to calculate plausible insertion parameters while avoiding collisions with existing objects in the scene (Algorithm 1). 
Given the reconstructed floor plane, we first determine the search space for each parameter. For position, we want the inserted object to touch the floor, so we find the 3D bounding box of the object and calculate the center of the bottom surface $(p)$ as the optimization parameter for position. To prevent potential collisions between the inserted object and existing assets in the original scene, we search for a suitable position around the center of the reconstructed floor. As shown in Fig. 2(b), we first calculate the floor's center $c \gets (c_x, c_y, c_z)$ and set a search square, which uses twice the floor's standard deviation along the X axis, $\sigma_x$ , and the Y axis, $\sigma_y$ , as the square's width and length. The insertion position is sampled from a uniform distribution inside the search square: $p_x \sim \mathcal{U}[c_x - \sigma_x, c_x + \sigma_x]$ and $p_y \sim \mathcal{U}[c_y - \sigma_y, c_y + \sigma_y]$ , giving $p \gets (p_x, p_y, c_z)$ . For size $(s)$ , we use the height of the 3D bounding box of the object as the optimization parameter. For each object category, we first calculate the mean $m_h$ and standard deviation $\sigma_h$ of the heights of objects belonging to the same category in the original scene dataset. We then assume the height follows a normal distribution and sample a height from this distribution: $s \sim \mathcal{N}(m_h, \sigma_h)$ . For the pose $(o)$ , we only allow the object to rotate along the Z-axis to maintain its stability. The optimization parameter is the rotation angle along the Z-axis, which follows a uniform distribution: $o \sim \mathcal{U}[-\pi, \pi]$ .

Algorithm 1 details the Constrained Insertion Parameter Search algorithm. We first set a search budget: $k$ search iterations. For each iteration, we randomly sample each parameter (position, size, and pose) from its corresponding search space and calculate the inserted object's bounding box based on the sampled parameters.
We then check for collisions with existing objects and quantitatively evaluate the degree of collisions. A direct approach for collision checking is to convert the inserted object into a point cloud and then calculate the overlap with existing objects' point clouds. However, this method is time-consuming due to the large number of points involved. We simplify the problem by converting the original 3D collision into a 2D collision to speed up the collision check. Since the inserted objects are on the floor, if two objects collide, their 3D bounding box projections on the top view would also often collide (but not always, e.g., when an object may be placed under a table; we here ignore these candidate placements). In other words, we disregard the absolute value of the 3D volume and use the 2D collision projection as a relative collision score. Utilizing an efficient collision check allows us to set a relatively large search iteration number, such as $k = 1000$ , while still maintaining a limited search time (less than 0.5 seconds). 
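The 2D top-view collision score described above might look like the following sketch; for brevity it uses axis-aligned footprints `(cx, cy, w, l)`, whereas the actual method projects the rotated 3D bounding boxes:

```python
def topview_overlap(box_a, box_b):
    """Overlap area of two axis-aligned top-view rectangles.

    Each box is (cx, cy, w, l): footprint center and extents on the floor.
    """
    ax, ay, aw, al = box_a
    bx, by, bw, bl = box_b
    # Width of the intersection along each axis (negative when disjoint).
    dx = min(ax + aw / 2, bx + bw / 2) - max(ax - aw / 2, bx - bw / 2)
    dy = min(ay + al / 2, by + bl / 2) - max(ay - al / 2, by - bl / 2)
    return max(dx, 0.0) * max(dy, 0.0)

def collision_score(candidate, existing_boxes):
    # Relative collision score: total footprint overlap with existing
    # objects; a score of 0 means the candidate placement is collision-free.
    return sum(topview_overlap(candidate, b) for b in existing_boxes)
```

Because each check is a few arithmetic operations rather than a point-cloud intersection, a budget of $k = 1000$ samples remains cheap.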
We also consider a resize factor $r$ to shrink the size of the inserted object, to handle inserting a large object in a small empty-floor scenario.

Algorithm 1: Constrained Insertion Parameter Search

Input: An RGBD image of the scene, a reconstructed floor, and a 3D object belonging to the class of interest $j$.
Output: Position ($\hat{p}$: 3D bounding box bottom center), size ($\hat{s}$: 3D bounding box height), and pose ($\hat{o}$: orientation along the Z-axis).

1. Compute position search constraints: floor center $c \gets (c_x, c_y, c_z)$, standard deviations $\sigma_x$ and $\sigma_y$.
2. Initialize search parameters: $k \gets 1000$, degree of collision $\hat{l} \gets \infty$.
3. For $i \in \{1, 2, \dots, k\}$:
   1. Sample position: $p_x \sim \mathcal{U}[c_x - \sigma_x, c_x + \sigma_x]$ and $p_y \sim \mathcal{U}[c_y - \sigma_y, c_y + \sigma_y]$, $p \gets (p_x, p_y, c_z)$.
   2. Sample size: $s \sim \mathcal{N}(m_h, \sigma_h)$ and resize factor $r' \sim \mathcal{U}[1, r]$, then $s \gets s / r'$, where $m_h$ and $\sigma_h$ are the mean and standard deviation of object height in class $j$ in the raw dataset.
   3. Sample pose: $o \sim \mathcal{U}[-\pi, \pi]$.
   4. Calculate the 3D bounding box $x_{\mathrm{3D}}$ based on the sampled insertion parameters $(p, s, o)$.
   5. Project the 3D bounding box to a 2D bounding box $x_{\mathrm{2D}}$ in the top view.
   6. Calculate the collision score $l = F(x_{\mathrm{2D}})$ with existing objects in the scene.
   7. If $l = 0$: return $p$, $s$, $o$.
   8. If $l < \hat{l}$: $\hat{p} \gets p$, $\hat{s} \gets s$, $\hat{o} \gets o$, $\hat{l} \gets l$.
4. Return $\hat{p}$, $\hat{s}$, $\hat{o}$.

During the search, we terminate the process if we find an insertion with a collision score of 0; otherwise, we continue to track the best insertion with the lowest collision score and return it after completing $k$ search iterations.

# 3.2 What Illumination is on the Object

# 3.2.1 Spatially-varying Illumination Estimation and Retrieval

To answer the question of what kind of illumination should be cast on the object, we first need to estimate the spatially-varying illumination of the scene.
This process involves encapsulating intricate global interactions at each spatial location. To achieve this, we utilize the deep inverse rendering framework proposed by Li et al. [2020b]. Initially, we estimate intermediate geometric features such as albedo, normal, depth, and roughness. Subsequently, a LightNet structure, consisting of an encoder-decoder setup, ingests the raw image and the predicted intermediate features. This, in turn, enables the estimation of spatially-varying lighting across the scene.

As depicted in Fig. 2(c), the estimated spatially-varying illumination is represented as environment maps. Specifically, each $4 \times 4$ pixel region in the raw image is associated with an environment map, which captures the appearance of the surrounding environment and is used for reflection, refraction, or global illumination. These maps are spherical (equirectangular), representing the environment on a single 2D texture. The X-axis corresponds to longitude, and the Y-axis corresponds to latitude. Each point on the texture corresponds to a specific latitude and longitude on a sphere.

To obtain the environment map associated with the position of the inserted object, we register and retrieve the corresponding environment map based on the estimated position after performing the constrained insertion parameter search.

# 3.2.2 Environment Map Refinement

Coordinate transformation. The environment map estimated for the inserted object is based on the local coordinates of the insertion position. In particular, it establishes a coordinate system where the surface normal is designated as the Z-axis. In order to apply this map for relighting the inserted object using a rendering engine (such as Blender), it becomes necessary to transform the environment map to align with Blender's coordinate system.

Latitude completion.
The estimated environment map only contains latitudes in the range $(0, \pi/2)$ because the inverse rendering method cannot estimate the illumination beneath the surface. As shown in Fig. 2(d), we complete the entire environment map by filling in artificial values for the lower half.

Table 1: Statistics of external 3D objects from Objaverse [Deitke et al., 2022].
| Category | Bed | Table | Sofa | Chair | Desk | Dresser | Nightstand | Bookshelf | Toilet | Bathtub |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Number | 190 | 854 | 361 | 934 | 317 | 52 | 13 | 99 | 142 | 24 |
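The environment-map refinement can be sketched with NumPy: pad the unobserved lower hemisphere with an artificial constant (latitude completion) and apply a gamma-style LDR-to-HDR lift $I_{\mathrm{HDR}} = I_{\mathrm{LDR}}^{\gamma}$. The fill value and the default $\gamma$ here are assumed placeholders, not values from the paper:

```python
import numpy as np

def refine_env_map(env_ldr, gamma=2.2, fill_value=0.0):
    """Complete and refine an equirectangular environment map (sketch).

    `env_ldr` has shape (H, W, 3) and covers latitudes (0, pi/2) only.
    We pad the unobserved lower hemisphere with an artificial constant,
    then lift LDR to HDR via I_HDR = I_LDR ** gamma.
    """
    lower = np.full_like(env_ldr, fill_value)          # artificial lower half
    full = np.concatenate([env_ldr, lower], axis=0)    # stack along latitude
    return full ** gamma
```

The exponentiation increases contrast, which is what sharpens the rendered shadows relative to the raw LDR prediction.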
Intensity refinement. The estimated environment map is in Low Dynamic Range (LDR) format, lacking High Dynamic Range (HDR) details and high contrast. If we use the predicted values directly, the rendered shadows appear relatively fuzzy. We refine the values by adjusting the scale in log space to estimate the HDR values: $I_{\mathrm{HDR}} = I_{\mathrm{LDR}}^{\gamma}$ , where $\gamma$ is a hyperparameter.

Finally, we input the HDR environment map, after transformation and refinement, along with the position, size, and pose, into an insertion renderer (e.g., Blender). This allows us to obtain the inserted image with 3D bounding boxes serving as ground truth.

# 3.3 Dataset Augmentation with Insertion and Downstream Model Training

Given an indoor scene dataset and a set of interest classes $\mathcal{C}$ for potential insertion, we can identify an external 3D object set $\mathcal{E}$ that falls within these classes of interest. Before any insertion, we calculate the statistical parameters of each class of interest that we aim to augment. For every class $j\in \mathcal{C}$ , we assume the size parameter (for instance, the height) follows a Gaussian distribution. We then calculate the mean and standard deviation of this size parameter to guide the insertion of external objects. Here are the detailed steps for insertion: for each scene within the indoor scene dataset, we randomly select a category $j$ from the class-of-interest set $\mathcal{C}$ . Next, we randomly choose an instance from the external 3D object set $\mathcal{E}$ that belongs to the selected class $j$ . We then utilize our physically plausible insertion method (Algorithm 1) to integrate this external 3D object into the scene. We can train any downstream monocular 3D object detection model with the augmented dataset because we automatically obtain the 3D annotations of the inserted objects.
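The offline augmentation loop above might be sketched as follows; `insert_fn` is a hypothetical stand-in for one full physically plausible insertion (Algorithm 1 plus relighting), not part of the paper's code:

```python
import random

def augment_dataset(scenes, assets_by_class, class_stats, insert_fn, seed=0):
    """Offline dataset augmentation (simplified sketch).

    `assets_by_class` maps each interest class to its external 3D assets,
    `class_stats` maps it to (mean_height, std_height) computed from the
    raw dataset, and `insert_fn(scene, asset, cls, mean_h, std_h)` performs
    one insertion and returns the augmented scene with its 3D annotation.
    """
    rng = random.Random(seed)
    augmented = []
    for scene in scenes:
        cls = rng.choice(sorted(assets_by_class))    # random interest class
        asset = rng.choice(assets_by_class[cls])     # random instance of it
        mean_h, std_h = class_stats[cls]             # size prior for sampling
        augmented.append(insert_fn(scene, asset, cls, mean_h, std_h))
    return augmented
```

Because the output is a new dataset on disk rather than an on-the-fly transform, any detector can consume it without modification.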
# 4 Experiments

This section presents experiments to assess the effectiveness of our proposed physically plausible 3D object insertion method and evaluate how different insertion parameters affect the final performance of monocular 3D object detection.

# 4.1 Dataset and Model Setting

Indoor scene dataset. We utilize the SUN RGB-D dataset [Song et al., 2015] as our primary resource for indoor scenes. It is one of the most challenging benchmarks in indoor scene understanding. SUN RGB-D comprises 10,335 RGB-D images captured using four distinct sensors. The dataset is divided into 5,285 training scenes and 5,050 test scenes. Furthermore, it includes 146,617 2D polygons and 58,657 3D bounding boxes, providing a comprehensive dataset for our research.

We also use the ScanNet dataset [Dai et al., 2017]. ScanNet v2 is a large-scale RGB-D video dataset, which contains 1,201 videos/scenes in the training set and 312 scenes in the validation set. Adapting it for monocular 3D object detection, we utilized one RGB-D image per video, amounting to 1,201 RGB-D images for training and 312 for validation. We compute the ground-truth 3D bounding box label for each of our used views from the provided scene-level labels, as some objects in the scene may not be visible from our monocular viewpoint.

External 3D object assets. The quality of 3D objects is crucial for effective insertion. Hence, we use Objaverse [Deitke et al., 2022], a robust dataset with over 800,000 annotated 3D objects. Using word parsing, we extract objects that align with the classes of interest for monocular 3D object detection within SUN RGB-D. Table 1 shows the selected Objaverse data for each SUN RGB-D class.

Monocular 3D object detection model. We focus on the challenging task of monocular 3D object detection that relies solely on a single RGB image as input. We employ ImVoxelNet, which achieves state-of-the-art performance on the raw SUN RGB-D dataset using only a single RGB image as input.
Other existing methods either resort to using additional modalities and multiple datasets for extra supervision or exhibit underwhelming performance. For the purpose of monocular 3D object

Table 2: ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset with different object insertion methods. When inserting randomly, the accuracy of the downstream object detector drops, i.e., the detector suffers from random insertions (which may have collisions, occlusions, incorrect lighting, etc.). In contrast, by applying only physically plausible position, size, and pose, performance significantly improves ($41.80\%$). Further, when plausible lighting and shadows are added, our 3D Copy-Paste improves the accuracy of the downstream detector to a new state-of-the-art ($43.79\%$). We use mAP ($\%$) with a 0.25 IoU threshold.
| Setting | Insertion Position, Pose, Size | Insertion Illumination | mAP@0.25 |
| --- | --- | --- | --- |
| ImVoxelNet | N/A | N/A | 40.96 |
| ImVoxelNet + random insert | Random | Camera point light | 37.02 |
| ImVoxelNet + 3D Copy-Paste (w/o light) | Plausible position, size, pose | Camera point light | 41.80 |
| ImVoxelNet + 3D Copy-Paste | Plausible position, size, pose | Plausible dynamic light | 43.79 |
Table 3: Per-class average precision (AP) of ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset.
| Setting | mAP@0.25 | bed | chair | sofa | table | bkshf | desk | bathtub | toilet | dresser | nightstand |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ImVoxelNet | 40.96 | 72.0 | 55.6 | 53.0 | 41.1 | 7.6 | 21.5 | 29.6 | 76.7 | 19.0 | 33.4 |
| ImVoxelNet + 3D Copy-Paste | 43.79 | 72.6 | 57.1 | 55.1 | 41.8 | 7.1 | 24.1 | 40.2 | 80.7 | 22.3 | 36.9 |
detection, we train the same ImVoxelNet model on the original SUN RGB-D dataset and its various versions, each augmented via a different insertion method. All mAP results are mAP@0.25.

# 4.2 Physically-plausible position, pose, size, and illumination lead to better monocular detection performance

Our 3D Copy-Paste focuses on solving two challenges: (1) Where and how to put the object: we estimate the object's position, orientation, and size for insertion while ensuring no collisions. (2) What illumination is on the object: we estimate the spatially-varying illumination and apply realistic lighting and shadows to the object rendering. The following experiments evaluate the model performance.

Table 2 presents the results of monocular 3D object detection on the SUN RGB-D dataset, utilizing various object insertion augmentation techniques. The first row is the performance of ImVoxelNet trained on the raw SUN RGB-D dataset without any insertion. The "ImVoxelNet + random insert" row displays results achieved through a naive 3D object insertion without applying physically plausible constraints (random location and a camera point light). This approach led to a drop in accuracy from $40.96\%$ to $37.02\%$ , likely due to the lack of physical plausibility causing severe collisions and occlusions in the final image. The "ImVoxelNet + 3D Copy-Paste (w/o light)" row showcases the performance after implementing our method for estimating only a physically plausible insertion position, pose, and size. Despite using a rudimentary camera point light, this approach outperforms "ImVoxelNet" without any insertion, and also outperforms the naive "ImVoxelNet + random insert" by $+4.78\%$ . This result shows that applying plausible geometry is essential for downstream tasks and makes 3D data augmentation useful over a naive, random augmentation.
After further applying physically plausible dynamic light, our proposed "ImVoxelNet + 3D Copy-Paste" further improved the performance and achieved a new state-of-the-art, surpassing ImVoxelNet without insertion by $+2.83\%$ on the monocular 3D object detection task. This performance improvement suggests that our 3D Copy-Paste insertion can serve as an efficient data augmentation method to positively benefit downstream 3D object detection tasks. Table 3 shows detailed SUN RGB-D monocular 3D object detection results with ImVoxelNet for each individual object category.

Table 4 presents the results of monocular 3D object detection on the ScanNet dataset. We utilized one RGB-D image per video: 1,201 for training and 312 for validation. We compute the ground-truth 3D bounding box label for each of our used views from the provided scene-level labels. For the baseline, we train an ImVoxelNet monocular 3D object detection model on the training set and test on the validation set. For our method, there are 8 overlapping categories (sofa, bookshelf, chair, table, bed, desk, toilet, bathtub) between the 18 classes of ScanNet and our collected Objaverse data. We use our 3D Copy-Paste to augment the training set and train an ImVoxelNet model. All the training parameters are the same as for training on the SUN RGB-D dataset. We show the results as the average accuracy of

Table 4: ImVoxelNet 3D monocular object detection performance on the ScanNet dataset with different object insertion methods.
| Setting | mAP@0.25 | bed | chair | sofa | table | bkshf | desk | bathtub | toilet |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ImVoxelNet | 14.1 | 25.7 | 7.9 | 13.2 | 7.8 | 4.2 | 20.5 | 22.1 | 11.5 |
| ImVoxelNet + 3D Copy-Paste | 16.9 | 27.7 | 12.7 | 10.0 | 10.8 | 9.2 | 26.2 | 29.2 | 9.0 |
Table 5: ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset with different illumination during insertion rendering. All experiments use the same ImVoxelNet model, and insertion also uses our proposed physically plausible position, size, and pose.
| Setting | Light source type | Intensity | Direction | With shadow? | mAP@0.25 |
| --- | --- | --- | --- | --- | --- |
| Point Light 1 | Point | 100W | Camera position | Yes | 41.80 |
| Point Light 2 | Point | 100W | Side (left) | Yes | 42.38 |
| Area Light 1 | Area | 100W | Camera position | Yes | 42.67 |
| Area Light 2 | Area | 100W | Side (left) | Yes | 42.02 |
| Spot Light 1 | Spot | 100W | Camera position | Yes | 40.92 |
| Spot Light 2 | Spot | 100W | Side (left) | Yes | 42.10 |
| Sun Light 1 | Sun | 5 | Camera position | Yes | 42.11 |
| Sun Light 2 | Sun | 5 | Side (left) | Yes | 41.21 |
| Ours (Dynamic Light) | Estimated plausible light | Dynamic | Dynamic | No | 41.83 |
| Ours (Dynamic Light) | Estimated plausible light | Dynamic | Dynamic | Yes | 43.79 |
the 8 overlapping classes (mAP@0.25) in Table 4. Our 3D Copy-Paste improves ImVoxelNet by $2.8\%$ mAP.

# 4.3 Ablation study on the influence of insertion illumination and position on monocular 3D object detection

We first explore the influence of the illumination of inserted objects on the downstream monocular 3D object detection task. Table 5 shows the ImVoxelNet performance on SUN RGB-D with different illumination settings during 3D Copy-Paste. To eliminate the influence of other insertion parameters, we fix the estimated position, pose, and size for each scene across all experiments in Table 5.

Fig. 3 provides a visualization of the effects of various light sources and light parameters during the insertion rendering process. The corresponding monocular 3D object detection results are presented in Table 5. These illustrate how lighting not only impacts the visual perception of the inserted object from a human observer's standpoint but also considerably affects the performance of downstream detection tasks. Thus, an accurate and physically plausible lighting estimation is crucial both for understanding the scene and for the practical application of downstream detection tasks.

Table 2 shows the importance of physical position, pose, and size (local context) for monocular 3D object detection. We also explored the importance of the global context to the detection performance. The global context here means the semantic relationship of the inserted object to the whole scene. For instance, inserting a toilet into a living room may not satisfy the global context. We propose a plausible global context insertion method where the inserted object class considers the global scene information. We could also select the inserted class based on the floor size: insert larger objects (e.g., bed, bookshelf) only on a large floor. Table 6 shows results for the different settings.
We find that considering the global context during insertion is on par with random category selection, suggesting that the downstream detection model may not be sensitive to it.

# 4.4 Qualitative Analysis

Fig. 4 shows the qualitative results of monocular 3D object detection on the SUN RGB-D dataset. Our method demonstrates enhanced capabilities in detecting objects with significant occlusion, provides improved pose estimation, and effectively suppresses false positives.

![](images/29bbf7bd82805e0e832dfa3199d79ce5e2640804b05761967c201e4a425f3df1.jpg)
Figure 3: Visualization of different illumination on inserted objects.

Table 6: Ablation study of global context influence on ImVoxelNet monocular 3D object detection performance on SUN RGB-D.
| Method | Follow global context? | Select class based on empty size? | mAP@0.25 |
| --- | --- | --- | --- |
| ImVoxelNet + 3D Copy-Paste | Yes | No | 43.75 |
| ImVoxelNet + 3D Copy-Paste | Yes | Yes | 43.74 |
| ImVoxelNet + 3D Copy-Paste | No | Yes | 42.50 |
| ImVoxelNet + 3D Copy-Paste | No | No | 43.79 |
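The two heuristics ablated in Table 6 can be sketched as a simple class-selection rule. This is an illustrative reconstruction only: the scene-to-class map, the area threshold, and all names are assumptions, not values from the paper.

```python
# Illustrative sketch of the class-selection heuristics ablated in
# Table 6: optionally restrict candidate insertion classes to those
# consistent with the scene type (global context), and/or gate large
# object classes on the available floor area. Thresholds and the
# scene-to-class map are hypothetical.
import random

SCENE_CLASSES = {                     # hypothetical global-context map
    "bedroom": ["bed", "desk", "chair", "dresser", "nightstand"],
    "bathroom": ["toilet", "bathtub"],
}
LARGE_CLASSES = {"bed", "bookshelf", "sofa"}   # need a large floor

def pick_insert_class(all_classes, scene_type=None, floor_area=None,
                      min_large_area=6.0, rng=random):
    # Global context: keep only classes plausible for this scene type.
    if scene_type is not None:
        cands = list(SCENE_CLASSES.get(scene_type, all_classes))
    else:
        cands = list(all_classes)
    # Floor-size gate: drop large classes when the empty floor is small.
    if floor_area is not None and floor_area < min_large_area:
        cands = [c for c in cands if c not in LARGE_CLASSES] or cands
    return rng.choice(cands)
```

With both arguments left as `None`, this degenerates to the "No / No" random-selection row, which Table 6 shows performs on par with the context-aware variants.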
+ +![](images/283939afcdcc51354a98b9c15690997c9c287d89efac285e01b8ad39405d0412.jpg) +Figure 4: Qualitative results on the SUN RGB-D dataset. + +# 5 Conclusion and Discussion + +Our work addresses the challenge of scarce large-scale annotated datasets for monocular 3D object detection by proposing a physically plausible indoor 3D object insertion approach. This technique allows us to effectively augment existing indoor scene datasets, such as SUN RGB-D, with large-scale annotated 3D objects that have both plausible physical location and illumination. The resulting augmented dataset enables training a monocular 3D object model that achieves new state-of-the-art performance. Our approach carefully considers physically feasible locations, sizes, and poses for inserted objects, avoiding collisions with the existing room layout, and estimates spatially-varying illumination to seamlessly integrate the objects into the original scene. We also systematically evaluate the impact of the physical position and illumination of the inserted objects on the performance of the final monocular 3D object detection model. This paper is the first to demonstrate that physically plausible 3D object insertion can serve as an effective generative data augmentation technique, leading to state-of-the-art performance in discriminative downstream tasks like monocular 3D object detection. Our findings highlight the potential of 3D data augmentation in improving the performance of 3D perception tasks, opening up new avenues for research and practical applications. + +Acknowledgments. This work is in part supported by Bosch, Ford, ONR MURI N00014-22-1-2740, NSF CCRI #2120095, Amazon ML Ph.D. Fellowship, National Science Foundation (award 2318101), C-BRIC (one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA) and the Army Research Office (W911NF2020053). 
The authors affirm that the views expressed herein are solely their own, and do not represent the views of the United States government or any agency thereof.

# References

H. Abu Alhaija, S. K. Mustikovela, L. Mescheder, A. Geiger, and C. Rother. Augmented reality meets computer vision: Efficient data generation for urban driving scenes. International Journal of Computer Vision, 126:961-972, 2018.
P. Achlioptas, O. Diamanti, I. Mitliagkas, and L. Guibas. Learning representations and generative models for 3d point clouds. In International conference on machine learning, pages 40-49. PMLR, 2018.
G. Brazil and X. Liu. M3d-rpn: Monocular 3d region proposal network for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9287-9296, 2019.
X. Chen, K. Kundu, Y. Zhu, A. G. Berneshawi, H. Ma, S. Fidler, and R. Urtasun. 3d object proposals for accurate object class detection. Advances in neural information processing systems, 28, 2015.
X. Chen, H. Ma, J. Wan, B. Li, and T. Xia. Multi-view 3d object detection network for autonomous driving. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1907-1915, 2017.
Y. Chen, F. Rong, S. Duggal, S. Wang, X. Yan, S. Manivasagam, S. Xue, E. Yumer, and R. Urtasun. Geosim: Realistic video simulation via geometry-aware composition for self-driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7230-7240, 2021.
A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5828-5839, 2017.
M. Deitke, D. Schwenk, J. Salvador, L. Weihs, O. Michel, E. VanderBilt, L. Schmidt, K. Ehsani, A. Kembhavi, and A. Farhadi. Objaverse: A universe of annotated 3d objects. arXiv preprint arXiv:2212.08051, 2022.
D. Dwibedi, I. Misra, and M. Hebert.
Cut, paste and learn: Surprisingly easy synthesis for instance detection. In Proceedings of the IEEE international conference on computer vision, pages 1301-1310, 2017.
M. Engelcke, D. Rao, D. Z. Wang, C. H. Tong, and I. Posner. Vote3deep: Fast object detection in 3d point clouds using efficient convolutional neural networks. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 1355-1361. IEEE, 2017.
C. Feng, Y. Taguchi, and V. R. Kamat. Fast plane extraction in organized point clouds using agglomerative hierarchical clustering. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 6218-6225. IEEE, 2014.
M.-A. Gardner, Y. Hold-Geoffroy, K. Sunkavalli, C. Gagné, and J.-F. Lalonde. Deep parametric indoor lighting estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7175-7183, 2019.
Y. Ge, H. Behl, J. Xu, S. Gunasekar, N. Joshi, Y. Song, X. Wang, L. Itti, and V. Vineet. Neural-sim: Learning to generate training data with nerf. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIII, pages 477-493. Springer, 2022a.

Y. Ge, J. Xu, B. N. Zhao, L. Itti, and V. Vineet. Dall-e for detection: Language-driven context image synthesis for object detection. arXiv preprint arXiv:2206.09592, 2022b.
Y. Ge, J. Xu, B. N. Zhao, N. Joshi, L. Itti, and V. Vineet. Beyond generation: Harnessing text to image models for object detection and segmentation. arXiv preprint arXiv:2309.05956, 2023.
G. Ghiasi, Y. Cui, A. Srinivas, R. Qian, T.-Y. Lin, E. D. Cubuk, Q. V. Le, and B. Zoph. Simple copy-paste is a strong data augmentation method for instance segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2918-2928, 2021.
O. F. Kar, T. Yeo, A. Atanov, and A. Zamir. 3d common corruptions and data augmentation.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18963-18974, 2022. +P. Li, H. Zhao, P. Liu, and F. Cao. Rtm3d: Real-time monocular 3d detection from object keypoints for autonomous driving. In European Conference on Computer Vision, pages 644-660. Springer, 2020a. +Z. Li, M. Shafiei, R. Ramamoorthi, K. Sunkavalli, and M. Chandraker. Inverse rendering for complex indoor scenes: Shape, spatially-varying lighting and svbrdf from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2475-2484, 2020b. +Q. Lian, B. Ye, R. Xu, W. Yao, and T. Zhang. Exploring geometric consistency for monocular 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1685-1694, 2022. +Z. Liu, Z. Wu, and R. Tóth. Smoke: Single-stage monocular 3d object detection via keypoint estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 996-997, 2020. +A. Mousavian, D. Anguelov, J. Flynn, and J. Kosecka. 3d bounding box estimation using deep learning and geometry. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 7074-7082, 2017. +Y. Nie, X. Han, S. Guo, Y. Zheng, J. Chang, and J. J. Zhang. Total3dunderstanding: Joint layout, object pose and mesh reconstruction for indoor scenes from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 55-64, 2020. +C. Reading, A. Harakeh, J. Chae, and S. L. Waslander. Categorical depth distribution network for monocular 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8555-8564, 2021. +D. Rukhovich, A. Vorontsova, and A. Konushin. Imvoxelnet: Image to voxels projection for monocular and multi-view general-purpose 3d object detection. 
In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2397-2406, 2022. +N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from rgbd images. ECCV(5), 7576:746-760, 2012. +A. Simonelli, S. R. Bulo, L. Porzi, M. López-Antequera, and P. Kontschieder. Disentangling monocular 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1991-1999, 2019. +E. J. Smith and D. Meger. Improved adversarial systems for 3d object generation and reconstruction. In Conference on Robot Learning, pages 87-96. PMLR, 2017. +S. Song, S. P. Lichtenberg, and J. Xiao. Sun rgb-d: A rgb-d scene understanding benchmark suite. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 567-576, 2015. +X. Sun, J. Wu, X. Zhang, Z. Zhang, C. Zhang, T. Xue, J. B. Tenenbaum, and W. T. Freeman. Pix3d: Dataset and methods for single-image 3d shape modeling. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2974-2983, 2018. + +W. Tong, J. Xie, T. Li, H. Deng, X. Geng, R. Zhou, D. Yang, B. Dai, L. Lu, and H. Li. 3d data augmentation for driving scenes on camera. arXiv preprint arXiv:2303.10340, 2023. +C. Wang and X. Guo. Plane-based optimization of geometry and texture for rgb-d reconstruction of indoor scenes. In 2018 International Conference on 3D Vision (3DV), pages 533-541. IEEE, 2018. +T. Wang, X. Zhu, J. Pang, and D. Lin. Fcos3d: Fully convolutional one-stage monocular 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 913-922, 2021. +T. Wang, Z. Xinge, J. Pang, and D. Lin. Probabilistic and geometric depth: Detecting objects in perspective. In Conference on Robot Learning, pages 1475-1485. PMLR, 2022a. +Y. Wang, W.-L. Chao, D. Garg, B. Hariharan, M. Campbell, and K. Q. Weinberger. 
Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8445-8453, 2019. +Z. Wang, W. Chen, D. Acuna, J. Kautz, and S. Fidler. Neural light field estimation for street scenes with differentiable virtual object insertion. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part II, pages 380-397. Springer, 2022b. +Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912-1920, 2015. +B. Xu and Z. Chen. Multi-level fusion based 3d object detection from monocular images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2345-2353, 2018. +D. Xu, D. Anguelov, and A. Jain. Pointfusion: Deep sensor fusion for 3d bounding box estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 244-253, 2018. +S. Yang and S. Scherer. Cubeslam: Monocular 3-d object slam. IEEE Transactions on Robotics, 35 (4):925-938, 2019. +C. Zhang, Z. Cui, Y. Zhang, B. Zeng, M. Pollefeys, and S. Liu. Holistic 3d scene understanding from a single image with implicit representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8833-8842, 2021. +W. Zhang, Z. Wang, and C. C. Loy. Exploring data augmentation for multi-modality 3d object detection. arXiv preprint arXiv:2012.12741, 2020. +J. Zhu, F. Luan, Y. Huo, Z. Lin, Z. Zhong, D. Xi, R. Wang, H. Bao, J. Zheng, and R. Tang. Learning-based inverse rendering of complex indoor scenes with differentiable monte carlo raytracing. In SIGGRAPH Asia 2022 Conference Papers, pages 1-8, 2022. 
# A Experiments on more Monocular 3D Object Detection methods

In our main paper, we utilize ImVoxelNet [Rukhovich et al., 2022] for monocular 3D object detection. To show the robustness of our 3D Copy-Paste across different downstream detection methods, we conducted additional experiments with another monocular 3D object detection model: Implicit3DUnderstanding (Im3D [Zhang et al., 2021]). The Im3D model predicts object 3D shapes, bounding boxes, and scene layout within a unified pipeline. Training this model necessitates not only the SUN RGB-D dataset but also the Pix3D dataset [Sun et al., 2018], which supplies 3D mesh supervision. The Im3D training process consists of two stages. In stage one, the individual modules - the Layout Estimation Network, Object Detection Network, Local Implicit Embedding Network, and Scene Graph Convolutional Network - are pretrained separately. In stage two, all these modules undergo joint training. We incorporate our 3D Copy-Paste method only during this second stage of joint training, and it is exclusively applied to the 10 SUN RGB-D categories we used in the main paper. We implemented our experiment following the official Im3D guidelines.

Table 7 displays the Im3D results for monocular 3D object detection on the SUN RGB-D dataset, adhering to the same ten categories outlined in the main paper. Im3D without insertion attained a mean average precision (mAP) detection performance of $42.13\%$. After applying our 3D Copy-Paste method, which encompasses physically plausible insertion position, pose, size, and light, the monocular 3D object detection mAP performance increased to $43.34\%$. These results further substantiate the robustness and effectiveness of our proposed method.

Table 7: Im3D [Zhang et al., 2021] 3D monocular object detection performance on the SUN RGB-D dataset (same 10 categories as the main paper).
| Setting | Insertion Position, Pose, Size | Insertion Illumination | mAP |
| --- | --- | --- | --- |
| Im3D | N/A | N/A | 42.13 |
| Im3D + 3D Copy-Paste | Plausible position, size, pose | Plausible dynamic light | 43.34 |
# B More experiment details

We run the same experiments multiple times with different random seeds. Table 8 shows the results of Table 2 in the main paper with error ranges.

Table 8: ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset with different object insertion methods (with error ranges).
| Setting | Insertion Position, Pose, Size | Insertion Illumination | mAP@0.25 |
| --- | --- | --- | --- |
| ImVoxelNet | N/A | N/A | 40.96 ± 0.4 |
| ImVoxelNet + random insert | Random | Camera point light | 37.02 ± 0.4 |
| ImVoxelNet + 3D Copy-Paste (w/o light) | Plausible position, size, pose | Camera point light | 41.80 ± 0.3 |
| ImVoxelNet + 3D Copy-Paste | Plausible position, size, pose | Plausible dynamic light | 43.79 ± 0.4 |
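The ± values summarize repeated runs with different random seeds. The exact spread measure is not stated in the paper, so the sketch below shows two plausible choices (half of the min-max range, and the sample standard deviation) over hypothetical per-seed scores:

```python
# Aggregate a metric over repeated runs with different random seeds.
# Both spread measures are plausible readings of the "± 0.4" notation;
# the paper does not specify which one is used, and the per-seed values
# here are hypothetical.
from statistics import mean, stdev

def summarize(runs):
    """Return (mean, half of min-max range, sample std), rounded to 2 dp."""
    m = mean(runs)
    half_range = (max(runs) - min(runs)) / 2
    return round(m, 2), round(half_range, 2), round(stdev(runs), 2)

per_seed_map = [43.4, 43.9, 44.1]    # hypothetical mAP@0.25 per seed
m, hr, sd = summarize(per_seed_map)  # reported as e.g. "43.8 ± 0.35"
```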
We also show results with mAP@0.15 on the SUN RGB-D dataset (Table 9); our method shows consistent improvements.

Table 9: ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset with mAP@0.15.
| Setting | Insertion Position, Pose, Size | Insertion Illumination | mAP@0.15 |
| --- | --- | --- | --- |
| ImVoxelNet | N/A | N/A | 48.45 |
| ImVoxelNet + 3D Copy-Paste | Plausible position, size, pose | Plausible dynamic light | 51.16 |
# C Discussion on Limitations and Broader Impact

Limitations. Our method, while effective, does have certain limitations. A key constraint is its reliance on the availability of external 3D objects, particularly for uncommon categories where sufficient 3D assets may not be readily available. This limitation could potentially impact the performance of downstream tasks. Moreover, the quality of inserted objects can also affect the results. Possible strategies to address this limitation include leveraging techniques like Neural Radiance Fields (NeRF) to construct higher-quality 3D assets for different categories.

Broader Impact. Our proposed 3D Copy-Paste method demonstrates that physically plausible 3D object insertion can serve as an effective generative data augmentation technique, leading to state-of-the-art performance in discriminative downstream tasks like monocular 3D object detection. The implications of this work are profound for both the computer graphics and computer vision communities. From a graphics perspective, our method demonstrates that more accurate 3D property estimation, reconstruction, and inverse rendering techniques can generate more plausible 3D assets and better scene understanding. These assets not only look visually compelling but can also effectively contribute to downstream computer vision tasks. From a computer vision perspective, it encourages us to utilize synthetic data more effectively to tackle challenges in downstream fields, including computer vision and robotics.
# 3D Indoor Instance Segmentation in an Open-World

Mohamed El Amine Boudjoghra1, Salwa K. Al Khatib1, Jean Lahoud1, Hisham Cholakkal1, Rao Muhammad Anwer1,2, Salman Khan1,3, Fahad Shahbaz Khan1,4

1Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), $^{2}$ Aalto University, $^{3}$ Australian National University, $^{4}$ Linköping University

{mohamed.boudjoghra, salwa.khatib, jean.lahoud, hisham.cholakkal, rao.anwer, salman.khan, fahad.khan}@mbzuai.ac.ae

# Abstract

Existing 3D instance segmentation methods typically assume that all semantic classes to be segmented would be available during training and only seen categories are segmented at inference.
We argue that such a closed-world assumption is restrictive, and explore for the first time 3D indoor instance segmentation in an open-world setting, where the model is allowed to distinguish a set of known classes, identify an unknown object as unknown, and then incrementally learn the semantic category of the unknown once the corresponding category labels become available. To this end, we introduce an open-world 3D indoor instance segmentation method in which an auto-labeling scheme produces pseudo-labels during training and induces separation between known and unknown category labels. We further improve the pseudo-label quality at inference by adjusting the unknown class probability based on the objectness score distribution. We also introduce carefully curated open-world splits leveraging realistic scenarios based on the inherent object distribution, region-based indoor scene exploration, and the randomness aspect of open-world classes. Extensive experiments reveal the efficacy of the proposed contributions, leading to promising open-world 3D instance segmentation performance. Code and splits are available at: https://github.com/aminebdj/3D-OWIS.

# 1 Introduction

3D semantic instance segmentation aims at identifying objects in a given 3D scene, represented by a point cloud or mesh, by providing object instance-level categorization and semantic labels. The ability to segment objects in the 3D domain has numerous vision applications, including robotics, augmented reality, and autonomous driving. Following the developments in sensors that acquire depth information, a variety of datasets providing instance-level annotations has been presented in the literature. In view of the availability of large-scale 3D datasets and the advances in deep learning methods, various 3D instance segmentation methods have been proposed in recent years.
The dependence of 3D instance segmentation methods on available datasets has a major drawback: a fixed set of object labels (vocabulary) is learned. However, object classes in the real world are plentiful, and many unseen/unknown classes can be present at inference. Current methods that learn on a fixed set not only discard the unknown classes but also supervise them to be labeled as background. This prevents intelligent recognition systems from identifying unknown or novel objects that are not part of the background. Given the importance of identifying unknown objects, recent works have explored the open-world learning setting for 2D object detection [18, 11, 28, 33]. In the open-world setting, a model is expected to identify unknown objects, and once new classes are labeled, the new set is desired to be incrementally learned without retraining [18]. While previous methods have mostly targeted open-world 2D object detection, the setting is yet to be explored in the 3D domain. The main challenge lies in understanding how objects appear in 3D in order to separate them from the background and other object categories.

![](images/7ca002e1dfce5522ba5750eba00f9e60a7945cce812a1ce81e4b7ebba6cbb487.jpg)
Figure 1: 3D instance segmentation in an open-world. During each iterative learning phase, the model detects unknown objects, and a human operator gradually assigns labels to some of them and incorporates them into the pre-existing knowledge base for further training.

3D instance segmentation in the open world, illustrated in Fig. 1, offers more flexibility, allowing the model to identify unknown objects and request annotations for these novel classes from an oracle for further training. However, this approach presents several challenges: (i) the lack of annotations for unknown classes, necessitating quality pseudo-labeling techniques; (ii) the similarity between predicted features of known and unknown classes, requiring separation techniques for improved prediction; and (iii) the need for a more reliable objectness scoring method to differentiate between good and bad predicted masks for 3D point clouds.

In this work, we investigate a novel problem setting, namely Open-World Indoor 3D Instance Segmentation, which aims at segmenting objects of unknown classes while incrementally adding new classes. We define real-world protocols and splits to test the ability of 3D instance segmentation methods to identify unknown objects. In the proposed setup, unknown object labels are also added incrementally to the set of known classes, akin to real-world incremental learning scenarios. We propose an unknown object identifier with a probability correction scheme that enables improved recognition of objects. To the best of our knowledge, we are the first to explore 3D instance segmentation in an open-world setting. The key contributions of our work are:

- We propose the first open-world 3D indoor instance segmentation method with a dedicated mechanism for accurate identification of 3D unknown objects. We employ an auto-labeling scheme to generate pseudo-labels during training and induce separation in the query embedding space to delineate known and unknown class labels. At inference, we further improve the quality of pseudo-labels by adjusting the probability of unknown classes based on the distribution of the objectness scores.
- We introduce carefully curated open-world splits, having known vs. unknown and then incremental learning over the span of 200 classes, for a rigorous evaluation of open-world 3D indoor segmentation.
Our proposed splits leverage different realistic scenarios, such as the inherent (frequency-based) distribution of object classes, the various class types encountered during the exploration of indoor areas (region-based), and the randomness aspect of object classes in the open world. Extensive experiments reveal the merits of the proposed contributions towards bridging the performance gap between our method and the oracle.

# 2 Related Work

3D semantic instance segmentation: The segmentation of instances in 3D scenes has been approached from various angles. Grouping-based or clustering-based techniques use a bottom-up pipeline, learning an embedding in the latent space to help cluster the object points [4, 13, 14, 17, 20, 21, 34, 38]. Proposal-based methods work in a top-down fashion, first detecting 3D bounding boxes, then segmenting the object region within the box [10, 15, 22, 36, 37]. Recently, spurred by related 2D work [5, 6], the transformer design [31] has also been applied for the purpose of segmenting 3D instances [29, 30]. Other methods present weakly-supervised alternatives to methods that use dense annotations in order to lower the cost of annotating 3D data [7, 16, 35]. While all these methods aim to improve the quality of 3D instance segmentation, they are trained on a known set of semantic labels. In contrast, our proposed method aims at segmenting objects with both known and unknown class labels.

![](images/17f1a80c18524ba7df89c37aa9f404e38ea9331f1dc9d67c299da1449a630479.jpg)
Figure 2: Proposed open-world 3D instance segmentation pipeline. From left to right: 3D instance segmentation model, where the point cloud goes through a 3D convolutional backbone. The extracted feature maps are used in the transformer decoder to refine some initial queries, which then pass through two MLPs to generate label and mask predictions.
The Contrastive Clustering block takes the refined queries, the prediction masks, and labels to further process the queries by assigning a target or an unknown pseudo-label in the Query Processing module, then storing them in a Query Store to update the class prototypes, which are in turn used for contrastive clustering. During inference, the queries are used to correct the probability of the predicted labels based on their reachability to the known class prototypes.

Open-world object recognition: Open-world object recognition was introduced in [2], where the Nearest Mean Classifier was extended to an open-world setting. In the direction of open-world object detection, many studies [41, 18, 11, 25] have been conducted in the past. In [18], pseudo-labels for the unknowns are generated to perform contrastive clustering during training for a better separation between known and unknown classes, and an energy-based unknown class identifier was proposed to detect the unknown classes based on the energy of the logits of the known classes. For incremental learning, they adopted exemplar replay to alleviate catastrophic forgetting of old classes. For the same task as [18], [11] used a transformer-based model and proposed another way of generating unknown pseudo-labels, using a new method of objectness estimation, and introduced a foreground objectness branch that separates the background from the foreground. For the task of outdoor 3D point cloud semantic segmentation, [3] proposed a model that predicts old, novel, and unknown classes from three separate classification heads. The model is trained on the labels of the known classes and on pseudo-labels for old classes generated by the same model to alleviate catastrophic forgetting, while the unknown class is assigned the second-highest score for better unknown class segmentation.
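The prototype maintenance and contrastive clustering used in the pipeline of Fig. 2 can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the EMA momentum, the hinge margin, and the Euclidean distance are all assumptions.

```python
# Illustrative reconstruction of prototype-based contrastive clustering
# over query embeddings. Each class keeps a prototype updated as a
# running mean of its queries; queries are pulled toward their own
# prototype and pushed away from the others. Hyperparameters are
# assumed, not taken from the paper.
import math

def update_prototype(proto, query, momentum=0.9):
    """Exponential-moving-average update of a class prototype."""
    return [momentum * p + (1 - momentum) * q for p, q in zip(proto, query)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(query, label, prototypes, margin=1.0):
    """Pull a query toward its class prototype; push it at least
    `margin` away from every other class prototype (hinge)."""
    pull = dist(query, prototypes[label]) ** 2
    push = sum(max(0.0, margin - dist(query, p)) ** 2
               for l, p in prototypes.items() if l != label)
    return pull + push
```

At inference, the "reachability" correction mentioned above would then compare a query's distance to the nearest known-class prototype against such prototypes to adjust the unknown-class probability.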
Other methods proposed in [40, 12, 39] primarily focus on enhancing the generalizability of 3D models to novel classes by leveraging supervision from 2D vision-language models for object recognition and 3D semantic segmentation tasks. However, these approaches exhibit several limitations: (i) the 3D model's performance becomes dependent on the 2D vision-language model; (ii) the 3D geometric properties of unseen objects in the training data are neglected during training; (iii) there is no avenue for improving the model's performance on novel classes when new labels are introduced; and (iv) the training process requires pairs of images and corresponding 3D scenes.

# 3 Closed-world 3D Instance Segmentation

We adopt the state-of-the-art 3D instance segmentation model Mask3D [29] as our baseline. It is a hybrid model that combines convolutional neural networks (CNNs) with transformers to learn class-agnostic masks and labels for instance separation. The backbone of Mask3D is CNN-based and extracts feature maps from multiple levels, while the transformer-based decoder refines $n_{Q} \in \mathbb{N}$ instance queries $Q = \{q_{j} \in \mathbb{R}^{D} \mid j \in (1, \dots, n_{Q})\}$ using the extracted feature maps. The model is trained with a cross-entropy loss for semantic class labels and a binary cross-entropy loss for instance masks.

# 4 Open-World 3D Instance Segmentation

# 4.1 Problem formulation

We start by formulating the problem setting of open-world 3D instance segmentation. At a task $\mathcal{T}^t$, there exists a set of known object categories $\mathcal{K}^t = \{1,2,\dots,C\}$ and a set of unknown object categories $\mathcal{U}^t = \{C + 1,\ldots \}$ that may be encountered at inference time. The training dataset $\mathcal{D}^t = \{\mathbf{X}^t,\mathbf{Y}^t\}$ includes samples from the classes $\mathcal{K}^t$.
The input set $\mathbf{X}^t = \{\mathbf{P}_1,\dots,\mathbf{P}_M\}$ consists of $M$ point clouds, where $\mathbf{P}_i\in \mathbb{R}^{N\times 3}$ is a quantized point cloud of $N$ voxels, each carrying the average RGB color of the points within it. The corresponding labels are $\mathbf{Y}^t = \{\mathbf{Y}_1,\dots,\mathbf{Y}_M\}$, where $\mathbf{Y}_i = \{\mathbf{y}_1,\dots,\mathbf{y}_k\}$ encodes $k$ object instances. Each object instance $\mathbf{y}_i = [\mathbf{B}_i,l_i]$ comprises a binary mask $\mathbf{B}_i\in \{0,1\}^N$ and a corresponding class label $l_{i}\in \mathcal{K}^{t}$.

In our problem setting, $\mathcal{M}_C$ is a 3D instance segmentation model trained on $C$ object categories that, at test time, can recognize instances from these classes, in addition to instances from new classes not seen during training, by classifying them as unknown. The detected unknown instances can be used by a human user to identify a set of $n$ new classes not previously trained on, which can be incrementally added to the learner; the learner then updates itself to produce $\mathcal{M}_{C + n}$ without explicitly retraining on previously seen classes. At this point, in task $\mathcal{T}^{t + 1}$, the known object categories are $\mathcal{K}^{t + 1} = \mathcal{K}^t\cup \{C + 1,\dots,C + n\}$. This process repeats throughout the lifespan of the instance segmentation model, which continuously improves itself by incorporating information from new classes until it reaches the maximum number of classes it can learn. In the rest of the paper, we assign the unknown class the label $\mathbf{0}$.

# 4.2 Open-world scenarios

In order to simulate different realistic scenarios that might be encountered in an open world, we propose three different ways of grouping classes under three tasks.
These scenarios split scenes based on the inherent distribution (frequency-based) of object classes, the class types encountered during the exploration of indoor areas (region-based), and the randomness of object classes in the open world.

Table 1: Statistics of each split across the three tasks. For each task we report the number of known classes and the count of instances (3D objects) in the training and validation sets, as well as the number of non-empty scenes used during training and validation.
|  | Split A |  |  | Split B |  |  | Split C |  |  |
|---|---|---|---|---|---|---|---|---|---|
|  | Task 1 | Task 2 | Task 3 | Task 1 | Task 2 | Task 3 | Task 1 | Task 2 | Task 3 |
| Classes count | 64 | 68 | 66 | 73 | 55 | 70 | 66 | 66 | 66 |
| Train instances | 24224 | 3791 | 1612 | 15327 | 8177 | 6123 | 13483 | 8239 | 7905 |
| Validation instances | 6539 | 1000 | 428 | 4177 | 2261 | 1529 | 3776 | 2102 | 2089 |
| Train scenes | 1201 | 924 | 627 | 1201 | 1002 | 895 | 1169 | 1089 | 1159 |
| Validation scenes | 312 | 242 | 165 | 312 | 264 | 236 | 307 | 273 | 300 |
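The per-split totals in Table 1 can be sanity-checked programmatically: each split partitions the same 198-class label set and the same pools of train/validation instances, so the row sums must agree across splits. A small script with the values copied from the table:

```python
# Per-split statistics from Table 1: one list per split, one entry per task.
classes = {"A": [64, 68, 66], "B": [73, 55, 70], "C": [66, 66, 66]}
train_instances = {"A": [24224, 3791, 1612], "B": [15327, 8177, 6123], "C": [13483, 8239, 7905]}
val_instances = {"A": [6539, 1000, 428], "B": [4177, 2261, 1529], "C": [3776, 2102, 2089]}

# Every split partitions the same label set and instance pool,
# so the per-split totals must be identical.
assert {sum(v) for v in classes.values()} == {198}
assert {sum(v) for v in train_instances.values()} == {29627}
assert {sum(v) for v in val_instances.values()} == {7967}
print("Table 1 totals are consistent across the three splits")
```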
![](images/128f8d9981c9b7280e70727004892c8e764c3410d0cf763245cd7230b5a90367.jpg)
Figure 3: Point-wise count for each class across the three tasks under the three open-world scenarios.

![](images/0467f2f711e1d4248f87252a19da273e75998036e9b0bc2e48c7b985e393b1bf.jpg)

![](images/fdee196c05d99da65b384ce3603783f9b983ca6df5897080e3584c62c8bcff73.jpg)

Split A (Instance frequency-based): We introduce a split that leverages the inherent distribution of objects, with known classes being more prevalent than unknown categories. Task $\mathcal{T}^1$ encompasses all the head classes as defined in the ScanNet200 benchmark [8, 27], while tasks $\mathcal{T}^2$ and $\mathcal{T}^3$ group the common and tail classes, respectively. This division allows us to effectively capture the varying frequency and significance of object categories within the dataset.

Split B (Region-based): In this split, our objective is to replicate the diverse class types encountered during indoor exploration. We argue that an ideal model for a robot moving indoors should segment both classes it knows and classes it has not seen before, and should keep learning to segment new classes over time. This partition draws inspiration from the sequence of classes that a robot might encounter when navigating indoor spaces. To achieve this, we group classes that are likely to be encountered together when first accessing an indoor space and that share similar scenes. We first assign each class to the scene type where it predominantly occurs, and then divide the classes into three distinct groups, corresponding to the three tasks.

Split C (Random sampling of classes): This third split introduces a different challenge inspired by the randomness of the open world, where tasks can exhibit arbitrary levels of class imbalance. To create this split, we randomly shuffled the classes and sampled without replacement, selecting 66 classes for each of the three tasks.
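The Split C protocol can be reproduced in a few lines. This is an illustrative sketch only: the seed, the use of Python's `random` module, and indexing the 198 ScanNet200 instance classes as `0..197` are our assumptions; the paper specifies only shuffling and sampling 66 classes per task without replacement.

```python
import random

def random_class_split(class_ids, n_tasks=3, per_task=66, seed=0):
    """Shuffle the label set and partition it into disjoint per-task groups,
    i.e. sampling without replacement as in Split C."""
    ids = list(class_ids)
    random.Random(seed).shuffle(ids)
    return [ids[i * per_task:(i + 1) * per_task] for i in range(n_tasks)]

tasks = random_class_split(range(198))
assert [len(t) for t in tasks] == [66, 66, 66]
# Disjoint partition: no class appears in two tasks.
assert len(set().union(*tasks)) == 198
```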
# 4.3 Generating pseudo-labels for the unknown classes

Because of the wide range of classes in an open-world setting, an auto-labeler is used as an alternative to manual labeling. It makes use of the existing target labels of the available ground-truth classes (known classes) to generate pseudo-labels for the unknown class during training. In [18], the model is assumed to be class-agnostic, so unknown objects are predicted as known with high confidence. The authors therefore proposed to use the predictions with top-k confidence scores that do not intersect with the ground truth as pseudo-labels for the unknown class. In our study, we show that top-k pseudo-label selection can severely harm the performance of the model on both the known and unknown classes. Hence, we propose a Confidence Thresholding (CT) based selection of pseudo-labels and show that the performance on the known and unknown classes increases by a large margin in terms of mean Average Precision (mAP).

The auto-labeler unit, depicted in Fig. 2, is used to generate unknown pseudo-labels. It takes a set of predicted binary masks $\mathbf{B} = \{\mathbf{B}_i \mid i \in (1, \dots, n_Q)\}$, where $n_Q$ is the number of queries, $\mathbf{B}_i = \mathbb{1}(M_i > 0.5)$ is the mask from a single query, and $M_i = \{m_{i,j} \in [0,1] \mid j \in (1, \dots, N)\}$ is a heat map measuring the similarity between the query $q_i \in \mathbb{R}^D$ and the features of the $N$ voxels extracted from the high-resolution level of the backbone.

Moreover, each query $q_{j}$ encodes semantic information and can generate a class prediction $\mathbb{P}_{cls}(q_j) = \{\mathbb{P}_{cls}(c;q_j)\mid c\in (0,1,\dots,|\mathcal{K}^t |)\}$ using a classification head (see Fig. 2). Subsequently, an objectness confidence score is assigned to each prediction following Eq. 1:

$$
s_{j} = s_{cls,j} \cdot \frac{M_{j} \cdot \mathbb{1}(M_{j} > 0.5)^{T}}{|\mathbb{1}(M_{j} > 0.5)|_{1}} \tag{1}
$$

where $s_{cls,j} \in \mathbb{R}$ is the maximum output probability of the classification head $\mathbb{P}_{cls}(q_j)$, and $\mathbb{1}$ is the indicator function. After scoring the predictions, the auto-labeler returns the $m$ pseudo-labels $\tilde{\mathbf{Y}} = \{\tilde{\mathbf{y}}_i = [\tilde{\mathbf{B}}_i, \mathbf{0}] \mid i \in (1, \dots, m)\}$ whose confidence is above a threshold and whose masks have low IoU with the target masks of the known classes.

# 4.4 Query target assignment and contrastive clustering

Similar to [18], we utilize contrastive clustering to enhance the separation of classes within the query embedding space. To achieve this, we employ a set of query prototypes denoted as $\mathcal{Q}_p = \{\mathbf{q}_i \in \mathbb{R}^D \mid i \in (0, 1, \dots, |\mathcal{K}^t|)\}$, where $\mathbf{q}_0$ denotes the prototype of the unknown class. We apply a contrastive loss that attracts queries to the prototype of their own class while pushing them away from the prototypes of other classes, as illustrated in Fig. 2. Since the queries are used to determine the class of the objects (see the inference block in Fig. 2), the class prototypes are expected to hold general semantic knowledge of their corresponding classes.

Hungarian matching is performed in the Assign target to query module, depicted in Fig. 2, where the matched prediction-target indices are used to assign labels to the queries that generated the matched predictions. The labeled queries are then stored in a query store $\mathcal{Q}_{\text{store}}$, a queue with a maximum capacity, which is used to update the query prototypes $\mathcal{Q}_p$ via an exponential moving average.

A hinge embedding loss is utilized according to Eq. 2.
This loss ensures that queries belonging to class $c$, denoted $q_{c}$, are pulled towards their corresponding class prototype $\mathbf{q}_c$ while being pushed away from the prototypes of other classes.

$$
\mathcal{L}_{\text{cont}}(q_{c}) = \sum_{i = 0}^{|\mathcal{K}^{t}|} \ell(q_{c}, \mathbf{q}_{i}) \tag{2}
$$

$$
\ell(q_{c}, \mathbf{q}_{i}) = \begin{cases} \|q_{c} - \mathbf{q}_{i}\|_{2} & i = c \\ \max(0, \Delta - \|q_{c} - \mathbf{q}_{i}\|_{2}) & i \neq c \end{cases}
$$

where $\Delta$ is the contrastive clustering margin.

# 4.5 Reachability-based probability correction (PC)

In [23], an architecture that can deal with long-tail distributions and unknown class prediction for open-world object recognition was proposed, where unknown classes are assumed to be very different in color and texture from the known classes, without any prior on the unknown classes. However, we show in Fig. 6 that many unknown instances hold features similar to the known ones.

In our method, we relax the strict assumption of high dissimilarity between unknown and known classes and correct the predicted output probability based on two characteristics of a feature

![](images/d21028e2347e3673153363ef5b2192503c2e9ad8020765ff754da0010b6a093c.jpg)
Figure 4: Illustration of the region in the query embedding space where the class probability is corrected.

from an unknown object: (1) it has to be far from the nearest known class, as features of the unknown class are expected to be pushed away from the prototypes of the known classes after applying contrastive clustering, and (2) the feature should correspond to an object that is not of a known class. We show that applying this approach during inference considerably boosts the performance of the model on the unknown class by compensating for the weak pseudo-labels provided by the auto-labeler.
Our probability correction scheme is the following:

$$
\mathbb{P}(\mathbf{0}; q_{j}) = \mathbb{P}_{\text{cls}}(\mathbf{0}; q_{j}) \cup \mathbb{P}_{\text{corr}}(\mathbf{0}; q_{j}) \tag{3}
$$

where $\mathbb{P}_{cls}$ is the probability from the classification head, and $\mathbb{P}_{corr}$ is the correction probability. We base our intuition on the fact that unknown classes have high objectness scores, which places them not too far from the prototypes of the known classes. To model this behavior we choose

$$
\mathbb{P}_{corr}(\mathbf{0}; q_{j}) = \mathbb{P}_{corr}(\mathbf{0}; o, q_{j}) \cdot \mathbb{P}_{corr}(o; q_{j})
$$

where $\mathbb{P}_{corr}(o; q_j)$ is the likelihood that the query corresponds to an object that is not known (either background or a true unknown). Since the query prototypes encode class-specific information, we propose the following method to measure the objectness of a query given all prototypes of the known classes; it assigns a high objectness probability if the query is close to only a few known classes. This probability distribution defines the objectness of unknown objects around a certain boundary from the prototypes as follows.
$$
\mathbb{P}_{corr}(o; q_{j}) = 1 - \sum_{k = 1}^{|\mathcal{K}^{t}|} \mathbb{P}_{cls}(k; q_{j})
$$

![](images/1dc61c33416a7ba8687b5defcc54adb07b52436b5e1662558e916d522e455336.jpg)

![](images/2665a34cb34f9efb53aab6598273aae7931b783ff82052e2f00d29e5a353e441.jpg)

![](images/0b401f42a4b29d96d36e03104a9165869a95dff90091f4ec84eb95c4b3a9fb11.jpg)

![](images/545b5d751319679c9b202d14d3c90df6d3c921488b734de71702833a8c10d757.jpg)
Ground Truth

![](images/3628792dcc43d969f434a4f87d7593211e5424500693592df1d7abcf7aea7970.jpg)
3D-OWIS-PC-CT

![](images/169b7cb558600ad55df4c6054a6a318c4cfb31f0693df5dc4418415bd6c1327d.jpg)
3D-OWIS
Figure 5: Qualitative 3D instance segmentation results on ScanNet200 validation scenes. Points highlighted in blue belong to unknown classes and those highlighted in green belong to known classes. We show the performance of our model in retrieving the unknown class objects compared to 3D-OWIS-PC-CT for the three scenes.

while $\mathbb{P}_{corr}(\mathbf{0};o,q_j)$ is the probability of the query being an unknown object, which is higher the further the query is from the nearest prototype of the known classes:

$$
\mathbb{P}_{\text{corr}}(\mathbf{0}; o, q_{j}) = \sigma\left(\frac{\gamma(q_{j}) - a}{b}\right); \quad \gamma(q_{j}) = \min_{\mathbf{q}_{i}} \|q_{j} - \mathbf{q}_{i}\|_{2}
$$

Here $\sigma$ is the sigmoid function, $\gamma(q_j)$ is the reachability of the query $q_j$, $\mathbf{q}_i$ is the prototype of the $i^{th}$ class, and $a, b$ are the shift and scale of the sigmoid function, chosen to ensure $\mathbb{P}_{corr}(\mathbf{0}; o, q_j, \gamma(q_j) = 0) = 0.05$ and $\mathbb{P}_{corr}(\mathbf{0}; o, q_j, \gamma(q_j) = \frac{\Delta}{2}) = 0.95$ for a contrastive clustering margin $\Delta$.
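The two boundary conditions pin down $a$ and $b$ in closed form: $\sigma(x) = 0.05$ at $x = -\ln 19$ and $\sigma(x) = 0.95$ at $x = \ln 19$, so solving $(0 - a)/b = -\ln 19$ and $(\Delta/2 - a)/b = \ln 19$ gives $a = \Delta/4$ and $b = \Delta/(4\ln 19)$. A small sketch verifying this derivation (the margin value is illustrative):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def shift_and_scale(delta, lo=0.05, hi=0.95):
    """Solve sigmoid((0 - a)/b) = lo and sigmoid((delta/2 - a)/b) = hi.
    Two linear equations in (a, b): -a/b = logit(lo), (delta/2 - a)/b = logit(hi)."""
    b = (delta / 2.0) / (logit(hi) - logit(lo))  # = delta / (4 ln 19) for 0.05/0.95
    a = -b * logit(lo)                           # = delta / 4
    return a, b

delta = 1.0  # illustrative contrastive clustering margin
a, b = shift_and_scale(delta)
assert abs(a - delta / 4) < 1e-12
assert abs(b - delta / (4 * math.log(19))) < 1e-12
# The boundary conditions are satisfied.
assert abs(sigmoid((0 - a) / b) - 0.05) < 1e-9
assert abs(sigmoid((delta / 2 - a) / b) - 0.95) < 1e-9
```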
We finally normalize the probabilities from the classification head of the known classes as follows:

$$
\mathbb{P}(c; q_{j}) = \frac{\mathbb{P}_{cls}(c; q_{j})}{\sum_{l \in \mathcal{K}^{t}} \mathbb{P}_{cls}(l; q_{j})} \left(1 - \mathbb{P}(\mathbf{0}; q_{j})\right)
$$

# 4.6 Alleviating catastrophic forgetting for incremental learning

Following the success of exemplar replay in avoiding catastrophic forgetting of old classes during incremental learning for object detection [18, 11, 41], we adopt it for incremental learning in 3D instance segmentation: we use exemplars from the classes of the previous task to fine-tune the model trained on the novel classes. In our setting, we use the same dataset for the three tasks and mask the classes of the previous task when training on the novel classes of the current task. As a result, the novel classes of the current task may be encountered again when replaying exemplars from the previous task, as the same scenes are used during fine-tuning.

# 5 Experiments

# 5.1 Open-world evaluation protocol

We use our proposed class splits, which mimic challenges commonly faced in the open world, to ensure a rigorous performance evaluation of 3D instance segmentation models.

Evaluation metrics. We adopt three common evaluation metrics: wilderness impact (WI) [9], absolute open set error (A-OSE) [26], and the recall of the unknown classes (U-Recall) [1, 24, 11].

Table 2: State-of-the-art comparison for the 3D-OWIS model. We compare performance under the three open-world scenarios, where 3D-OWIS-PC-CT is our model 3D-OWIS without Probability Correction (PC) and Confidence Thresholding (CT).
We rely on the metrics used in the open-world literature: A-OSE, which quantifies the number of unknown objects misclassified as one of the known classes; WI, which measures the impact of the unknown class on the precision of the model on the known classes; and U-Recall, which evaluates the model's ability to recover unknown objects. 3D-OWIS performs remarkably better than the other models under all scenarios when dealing with the known classes, with superior performance on the unknown objects in splits A and B and slightly lower performance in split C. We also provide a closed-setting comparison between Mask3D and Oracle (ours with access to unknown labels).
| Task IDs (→) | Task 1 |  |  |  |  | Task 2 |  |  |  |  |  | Task 3 |  |  |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|  | WI (↓) | A-OSE (↓) | U-Recall (↑) | mAP (↑) Current known | mAP (↑) All | WI (↓) | A-OSE (↓) | U-Recall (↑) | mAP (↑) Prev. known | mAP (↑) Current known | mAP (↑) All | mAP (↑) Prev. known | mAP (↑) Current known | mAP (↑) All |
| **Split A** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Oracle | 0.129 | 2275 | 5.94 | 38.75 | 38.60 | 0.031 | 124 | 5.40 | 38.25 | 20.91 | 29.40 | 29.58 | 17.78 | 26.10 |
| Mask3D [29] | – | – | – | 39.12 | 39.12 | – | – | – | 38.30 | 20.57 | 29.15 | 28.61 | 18.33 | 25.58 |
| 3D-OW-DETR [11] | 0.547 | 721 | 22.14 | 35.56 | 35.05 | 0.282 | 253 | 26.24 | 18.18 | 13.62 | 15.76 | 21.56 | 8.38 | 17.67 |
| 3D-OWIS-PC-CT | 1.589 | 707 | 30.72 | 37.50 | 37.00 | 0.000 | 40 | 4.75 | 11.00 | 17.30 | 14.10 | 21.40 | 8.00 | 17.50 |
| Ours: 3D-OWIS | 0.397 | 607 | 34.75 | 40.2 | 39.7 | 0.007 | 126 | 27.03 | 29.40 | 16.40 | 22.70 | 20.20 | 15.20 | 18.70 |
| **Split B** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Oracle | 1.126 | 939 | 70.31 | 24.57 | 24.80 | 0.180 | 441 | 73.16 | 25.50 | 20.30 | 23.40 | 23.40 | 30.40 | 26.00 |
| Mask3D [29] | – | – | – | 23.48 | 23.48 | – | – | – | 21.81 | 18.91 | 20.37 | 24.20 | 29.22 | 26.06 |
| 3D-OW-DETR [11] | 3.229 | 1935 | 17.18 | 20.00 | 19.73 | 2.053 | 1389 | 33.31 | 12.36 | 13.86 | 12.93 | 7.27 | 18.96 | 11.62 |
| 3D-OWIS-PC-CT | 3.133 | 1895 | 21.67 | 18.94 | 18.70 | 3.169 | 1081 | 26.63 | 18.00 | 16.40 | 17.20 | 17.30 | 20.10 | 18.30 |
| Ours: 3D-OWIS | 3.684 | 1780 | 24.79 | 23.60 | 23.30 | 0.755 | 581 | 24.21 | 18.70 | 17.30 | 17.90 | 18.70 | 24.60 | 20.90 |
| **Split C** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Oracle | 1.039 | 651 | 71.61 | 23.30 | 23.6 | 0.249 | 591 | 62.83 | 20.50 | 18.40 | 19.60 | 25.30 | 28.20 | 26.30 |
| Mask3D [29] | – | – | – | 20.82 | 21.15 | – | – | – | 22.67 | 26.67 | 24.13 | 25.41 | 25.21 | 25.35 |
| 3D-OW-DETR [11] | 1.463 | 1517 | 13.00 | 14.81 | 14.59 | 1.330 | 847 | 16.04 | 8.00 | 17.41 | 12.40 | 8.81 | 15.63 | 11.01 |
| 3D-OWIS-PC-CT | 2.901 | 1752 | 15.66 | 15.00 | 14.80 | 1.799 | 666 | 15.99 | 13.50 | 19.70 | 16.40 | 17.50 | 17.70 | 17.50 |
| Ours: 3D-OWIS | 0.419 | 1294 | 14.34 | 18.00 | 17.60 | 0.152 | 303 | 15.80 | 13.90 | 22.20 | 17.80 | 17.80 | 17.70 | 17.80 |
These metrics evaluate the performance of our model on the unknown classes and provide a fair comparison with and without our contributions. For the known classes, we use mean Average Precision (mAP). WI measures the impact of the unknown classes on the precision of the model at a specific confidence level. Ideally, WI is nil, i.e., no unknown objects are predicted as known. For our evaluation, we report WI at 0.5 confidence. It is computed as follows: $\mathrm{WI} = \frac{P_{\mathcal{K}}}{P_{\mathcal{K}\cup \mathcal{U}}} - 1$.

We also report A-OSE, the count of unknown instances misclassified as one of the known classes, and U-Recall at 0.5 IoU, which reflects the ability of the model to recover unknown objects.

# 5.2 Implementation details

We adapt Mask3D [29] for the task of open-world instance segmentation by adding an extra prediction output for the unknown class. During training, we assign an ignore label to the classes of the future and previous tasks, while during evaluation we keep the labels of the previous task and assign the unknown class label to the classes of the future task. For contrastive clustering, we use the indices obtained after matching the predictions with the targets via Hungarian matching to assign labels to the queries and store them in the Query Store $\mathcal{Q}_{\text{store}}$. The store is then averaged per class and used to update the prototypes every 10 iterations for the hinge loss computation. Finally, we use 40 exemplars per class on average for incremental learning. The classes of the current task are kept during class exemplar replay since we use the same dataset for the three tasks.

# 5.3 Open-world results

Table 2 provides a comprehensive performance comparison between the Oracle, our implementation of [11] as 3D-OW-DETR, 3D-OWIS, and 3D-OWIS-PC-CT, i.e., our model without the Probability Correction (PC) and Confidence Thresholding (CT) components.
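The two open-set metrics reported in these tables reduce to a precision ratio and simple counting; a minimal sketch with hypothetical numbers (function names are ours, not from any released code):

```python
def wilderness_impact(precision_known_only, precision_with_unknown):
    """WI = P_K / P_{K∪U} - 1: relative precision drop on the known classes
    when unknown objects enter the evaluation set (0 is ideal)."""
    return precision_known_only / precision_with_unknown - 1.0

def absolute_open_set_error(pred_labels, true_is_unknown, unknown_label=0):
    """A-OSE: number of unknown instances predicted as one of the known classes."""
    return sum(1 for p, u in zip(pred_labels, true_is_unknown)
               if u and p != unknown_label)

# Hypothetical numbers: precision 0.80 on known-only data drops to 0.72
# once unknown objects are present.
assert abs(wilderness_impact(0.80, 0.72) - 0.1111) < 1e-3
# Three unknown instances; one is correctly flagged as unknown (label 0),
# so two count towards A-OSE.
assert absolute_open_set_error([3, 0, 7], [True, True, True]) == 2
```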
Table 3: Extensive ablation of the added components. We perform the ablation by adding the Probability Correction (PC) and Confidence Thresholding (CT) components to 3D-OWIS-PC-CT, and compare performance in terms of mAP, U-Recall, WI, and A-OSE. Even though 3D-OWIS retrieves the unknown classes well without PC and CT, as reflected by the high U-Recall, it still performs poorly on the known classes, as shown by the high WI and A-OSE. This negative impact on the known classes accumulates over the tasks and results in a further reduction in mAP. When adding CT, the performance on the known classes improves considerably and remains consistent throughout the incremental learning process. Probability correction (PC) significantly improves the U-Recall in all cases; even though it shows lower performance in terms of WI and A-OSE, the overall mAP slightly improves or remains higher by a large margin compared to 3D-OWIS-PC-CT. This shows that adding PC and CT gives the best compromise in performance on both known and unknown classes.
| w/ Finetuning | CT | PC | Task 1 |  |  |  |  | Task 2 |  |  |  |  |  | Task 3 |  |  |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|  |  |  | WI (↓) | A-OSE (↓) | U-Recall (↑) | mAP (↑) Current known | mAP (↑) All | WI (↓) | A-OSE (↓) | U-Recall (↑) | mAP (↑) Prev. known | mAP (↑) Current known | mAP (↑) All | mAP (↑) Prev. known | mAP (↑) Current known | mAP (↑) All |
| **Split A** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| × | × | × | 1.589 | 707 | 30.72 | 37.50 | 37.00 | 0.870 | 321 | 19.42 | 0.00 | 16.74 | 8.40 | 0.00 | 9.30 | 2.80 |
| × | ✓ | × | 0.237 | 443 | 30.00 | 40.30 | 39.70 | 0.306 | 129 | 14.96 | 0.00 | 21.00 | 10.50 | 0.00 | 17.45 | 5.20 |
| ✓ | × | × | 1.589 | 707 | 30.72 | 37.50 | 37.00 | 0.000 | 40 | 4.75 | 11.00 | 17.30 | 14.10 | 21.40 | 8.00 | 17.50 |
| ✓ | ✓ | × | 0.237 | 443 | 30.00 | 40.30 | 39.70 | 0.004 | 102 | 23.62 | 29.22 | 15.80 | 22.30 | 19.70 | 15.70 | 18.50 |
| ✓ | ✓ | ✓ | 0.398 | 607 | 34.75 | 40.2 | 39.70 | 0.007 | 126 | 27.03 | 29.40 | 16.40 | 22.70 | No unknown labels for evaluation |  |  |
| **Split B** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| × | × | × | 3.133 | 1895 | 21.67 | 18.94 | 18.70 | 1.828 | 291 | 7.20 | 0.00 | 15.40 | 6.60 | 0.00 | 20.20 | 7.50 |
| × | ✓ | × | 2.147 | 21.70 | 21.70 | 23.80 | 23.50 | 1.563 | 375 | 13.08 | 0.00 | 18.30 | 7.90 | 0.00 | 25.40 | 9.40 |
| ✓ | × | × | 3.219 | 1905 | 21.70 | 18.94 | 18.70 | 3.169 | 1081 | 26.63 | 18.00 | 16.40 | 17.20 | 17.30 | 20.10 | 18.30 |
| ✓ | ✓ | × | 2.147 | 1397 | 21.70 | 23.80 | 23.50 | 0.466 | 413 | 20.90 | 18.60 | 16.90 | 17.70 | 18.50 | 24.20 | 20.60 |
| ✓ | ✓ | ✓ | 3.684 | 1780 | 24.79 | 23.6 | 23.30 | 0.755 | 581 | 24.21 | 18.70 | 17.30 | 17.90 | No unknown labels for evaluation |  |  |
| **Split C** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| × | × | × | 2.901 | 1752 | 15.66 | 15.00 | 14.80 | 6.294 | 857 | 11.05 | 0.00 | 15.70 | 7.50 | 0.00 | 14.60 | 4.70 |
| × | ✓ | × | 0.227 | 828 | 11.44 | 18.70 | 18.40 | 1.361 | 365 | 10.16 | 0.00 | 19.50 | 9.40 | 0.00 | 19.10 | 6.20 |
| ✓ | × | × | 2.901 | 1752 | 15.66 | 15.00 | 14.80 | 1.799 | 666 | 15.99 | 13.50 | 19.70 | 16.40 | 17.50 | 17.70 | 17.50 |
| ✓ | ✓ | × | 0.227 | 828 | 11.44 | 18.70 | 18.40 | 0.088 | 208 | 12.63 | 14.50 | 22.10 | 18.00 | 17.80 | 17.70 | 17.80 |
| ✓ | ✓ | ✓ | 0.419 | 1294 | 14.34 | 18.00 | 17.60 | 0.152 | 303 | 15.80 | 13.90 | 22.20 | 17.80 | No unknown labels for evaluation |  |  |
Across all scenarios and tasks, 3D-OWIS-PC-CT consistently exhibits inferior performance in terms of mAP. Additionally, it demonstrates considerably lower U-Recall in splits A and B, with slightly higher U-Recall in split C. Of particular note, our 3D-OWIS demonstrates remarkable proficiency in preserving knowledge of the previous classes after fine-tuning. This proficiency is attributed to better pseudo-label selection for the unknown classes. 3D-OWIS outperforms 3D-OWIS-PC-CT in most cases while minimizing the impact of the unknown classes on the known classes, as evidenced by lower WI and A-OSE scores and higher mAP.

Table 4 presents a comparison between our model, 3D-OWIS, and our implementations of two methods, GGN [32] and OLN [19]. For OLN, we adapt Mask3D and train it with the mask loss only. For GGN, we train a Minkowski backbone to predict affinity maps and use connected components to generate class-agnostic proposals. These results underscore the effectiveness and potential of our approach in addressing the three proposed open-world challenges.

# 5.4 Incremental learning results

Our model's performance in incremental learning is evaluated based on its ability to preserve knowledge of previous classes. With the utilization of exemplar replay, the 3D-OWIS model demonstrates significant improvement in mAP on previous classes. Table 2 presents the results, indicating that our model consistently outperforms the others in terms of mean Average Precision (mAP) on the previous classes in all cases.

# 5.5 Discussion and analysis

Ablation study. We show in Table 3 that the 3D-OWIS-PC-CT model performs poorly on the known classes because of the high number of low-quality pseudo-labels generated by the auto-labeler, which is also reflected in the high wilderness impact and absolute open set error.
The U-Recall drops considerably when fine-tuning 3D-OWIS-PC-CT, while the WI and A-OSE either decrease or increase together with the mAP on the unknown class. In contrast, our model limits training to the best pseudo-labels, which maintains good performance on the known classes in all cases, before and after fine-tuning, and also achieves results on the unknown class comparable to 3D-OWIS-PC-CT in most cases. Adding the probability correction module helps improve the U-Recall while keeping the mAP of the known classes well above that of 3D-OWIS-PC-CT. However, it results in an increase in WI and A-OSE because of the increase in false positives among the known classes.

Table 4: Open-world instance segmentation comparison. We provide the results of our implementations of two 2D open-world instance segmentation methods and show that our model performs comparatively better across all metrics.

| Split A, Task 1 | WI (↓) | A-OSE (↓) | U-Recall (↑) | mAP (↑) Current known | mAP (↑) All |
|---|---|---|---|---|---|
| 3D-GGN [32] | 15.68 | 1452 | 21.33 | 20.51 | 20.12 |
| 3D-OLN [19] | – | – | 2.45 | – | – |
| Ours: 3D-OWIS | 0.397 | 607 | 34.75 | 40.2 | 39.7 |

tSNE analysis. The tSNE plot in Fig. 6 illustrates the below-par performance of 3D-OWIS-PC-CT in clustering the unknown classes: most queries still carry features representative of the known classes. This behavior is a result of the weak supervision of the unknown class, which shows the need for correcting the predictions and explains the improvement in U-Recall when applying the probability correction, with negligible deterioration in the known-classes mAP in most cases.

Qualitative analysis. Fig. 5 shows that 3D-OWIS is able to correctly identify background and unknown objects as unknown. Note also the second scene, where predictions are corrected from known to unknown without affecting the predictions of the known classes.

![](images/24eb60ca403c95b6e8b5a7f1f4d270696dcd384b5a61793b2eb4ddeffd03bcd2.jpg)
Figure 6: tSNE visualization of the queries for known & unknown classes.

# 6 Limitations

Confidence Thresholding (CT) enhances the performance of the model on known classes; nonetheless, it diminishes the model's capacity to segment unknown classes, mainly due to its reliance on a smaller number of pseudo-labels during training. Additionally, the effectiveness of Probability Correction (PC) is contingent upon the inherent characteristics of the clusters of the known classes. In scenarios characterized by data imbalance, probability correction may deteriorate when applied to undersampled classes.

# 7 Conclusion

In this paper, we address the challenge of 3D instance segmentation in open-world scenarios, a novel problem formulation.
We propose an innovative approach that incorporates an unknown object identifier to detect objects not present in the training set. To facilitate evaluation and experimentation, we present three dataset splits of ScanNet200 based on different criteria for selecting unknown objects. Our experimental results demonstrate that our proposed unknown object identifier significantly improves the detection of unknown objects across various tasks and dataset splits. This work contributes to advancing the localization and segmentation of 3D objects in real-world environments and paves the way for more robust and adaptable vision systems. + +Acknowledgement The computational resources were provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), partially funded by the Swedish Research Council through grant agreement No. 2022-06725, and by the Berzelius resource, provided by the Knut and Alice Wallenberg Foundation at the National Supercomputer Center. + +# Appendix + +# A Scalability of 3D-OWIS + +We show in Table 5 that 3D-OWIS can accommodate a large number of classes without a major increase in model size. + +Table 5: Demonstrating the scalability of 3D-OWIS with respect to the maximum number of classes it can learn. + +
| # of classes | 200 | 1000 | 5000 | 10000 | 50000 | 100000 |
| --- | --- | --- | --- | --- | --- | --- |
| Size of 3D-OWIS | 39.7M | 39.8M | 40.7M | 41.9M | 50.9M | 62.2M |
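The sizes in Table 5 grow almost linearly in the number of classes. A quick sanity check of that claim (the interpretation that only a final per-class layer grows with the label space is our assumption, not something stated in the paper):

```python
# Model sizes from Table 5 (parameters), indexed by the maximum number of classes.
sizes = {200: 39.7e6, 1000: 39.8e6, 5000: 40.7e6,
         10000: 41.9e6, 50000: 50.9e6, 100000: 62.2e6}

# Per-class parameter cost estimated from the two extremes.
per_class = (sizes[100000] - sizes[200]) / (100000 - 200)   # ~225 params/class

# A linear fit through the endpoints reproduces the intermediate entries well,
# consistent with only a small per-class head growing with the label space.
predicted_50k = sizes[200] + per_class * (50000 - 200)      # ~50.9M
```

With roughly 225 parameters per class, even the jump from 200 to 100000 classes adds only about 22.5M parameters, which matches the table.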
+ +# B Additional details on Split B + +We utilize the 20 scene types present in the ScanNet200 dataset to distribute the 200 classes over the three tasks. Initially, we establish a notion of similarity between two scene types by assessing the extent of their shared classes. This similarity is quantified through the intersection over the union $(IoU)$ metric, which measures the ratio of common classes to the total count of unique classes across both scenes. By employing this metric, we identify scene types that exhibit a substantial $IoU$ , indicating a higher degree of similarity. The similarity matrix, depicted in Fig. 7, showcases the relationships between the 20 scene types within the ScanNet200 dataset. + +Subsequently, we employed three criteria to group the classes: $(i)$ the likelihood of encountering them first when accessing an indoor area, $(ii)$ their affiliation with similar scene types, and $(iii)$ the proximity in the number of known classes across tasks. By taking these factors into consideration, we arrived at the split of scenes presented in Table 6. + +![](images/fb3e69b22fec04cac8af818823617525804d7845e239e81f60caa0a2d6e78899.jpg) +Figure 7: Similarity matrix between the 20 scene types in ScanNet200 dataset. We show the ratio of common classes to the total count of unique classes between two scene types. + +Table 6: Frequently occurring scene when training during the three tasks in Split B. Scene types are grouped into tasks based on three criteria: (i) the likelihood of encountering the classes within the scene types when entering an indoor area, (ii) similarity of scene types containing the classes, and (iii) consistency in the overall number of classes within the scene types across all tasks. This grouping ensures a cohesive organization of scene types for effective evaluation of 3D instance segmentation models integrated with tasks such as robot navigation within indoor environments. + +
Split B

| Task 1 | Task 2 | Task 3 |
| --- | --- | --- |
| Bedroom / Hotel, Kitchen | Computer Cluster, Mail Room | Game room, Office |
| Dining Room, Bathroom | Misc., Hallway | Apartment |
| Lounge, Closet | Gym, Classroom | Lobby |
| Garage, Library | Conference Room, Stairs | |
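The scene-type similarity of Fig. 7 is plain set IoU (Jaccard similarity) over the classes each scene type contains. A minimal sketch, with a made-up scene-to-class mapping standing in for the actual ScanNet200 one:

```python
from itertools import combinations

def scene_similarity(scene_classes):
    """IoU (Jaccard) similarity between every pair of scene types:
    |shared classes| / |union of classes across both scenes|."""
    sims = {}
    for a, b in combinations(sorted(scene_classes), 2):
        inter = scene_classes[a] & scene_classes[b]
        union = scene_classes[a] | scene_classes[b]
        sims[(a, b)] = len(inter) / len(union) if union else 0.0
    return sims

# Toy class sets (hypothetical, not the real ScanNet200 mapping):
scenes = {
    "Kitchen": {"oven", "sink", "counter", "chair"},
    "Dining Room": {"table", "chair", "counter"},
    "Office": {"desk", "chair", "monitor"},
}
sims = scene_similarity(scenes)
# Kitchen and Dining Room share {chair, counter} out of 5 unique classes -> 0.4
```

Scene types with a high entry in this matrix are the ones grouped into the same task above.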
+ +# C Additional details on the experimentation + +Training: We train the model on the entire ScanNet200 dataset for all tasks. In Task 1, objects belonging to the classes from Task 2 and Task 3 are masked, excluding them from the learning process. Moving to Task 2, we utilize the last saved checkpoint of the model from Task 1 as a starting point and mask the objects with labels that correspond to the current known classes of Task 1 and Task 3. This allows the model to focus solely on learning and distinguishing the specific objects associated with the current task. Finally, Task 3 builds upon the progress made in Task 2. We load the + +latest checkpoint of the model from Task 2 and incorporate an exemplar replay. Similar to Task 2, the objects with labels belonging to the known classes in Task 1 and Task 2 are masked during training. This step further refines the model's understanding and discrimination abilities for the specific objects relevant to the current task. + +Evaluation: To conduct the evaluation during a task, we assign the "unknown" label to the known classes from all the future tasks. + +# D Additional qualitative results + +# D.1 Unknown objects identification + +The qualitative results depicted in Fig. 10, 12, 13, and 11 highlight the superior performance of our contribution in retrieving unknown objects. Across the majority of scenes, our model consistently corrects the mispredicted unknown classes while preserving the accuracy of known objects, thus demonstrating its robustness and effectiveness. + +# D.2 Learning novel classes + +Fig. 8 and Fig. 9 illustrate the sequential process of learning novel classes after identifying unknown objects from the previous task. In Fig. 8, we demonstrate the effectiveness of our method in successfully retrieving unknown classes in all tasks. Additionally, in Fig. 9, we highlight the potential of exemplar replay in retaining knowledge of the old classes after learning the novel classes in Task 2 and Task 3. 
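The per-task masking described in Appendix C amounts to a simple label remap applied at training and evaluation time. A minimal sketch (the class-ID sets and the ignore/unknown sentinels are illustrative choices, not the paper's actual values):

```python
IGNORE_ID = -100   # excluded from the training loss (hypothetical sentinel)
UNKNOWN_ID = -1    # scored as the "unknown" class at evaluation time

def training_labels(labels, current_classes):
    """Keep only the current task's classes; everything else (previously
    learned classes and future-task classes) is masked out of the loss."""
    return [y if y in current_classes else IGNORE_ID for y in labels]

def evaluation_labels(labels, known_so_far):
    """Classes that belong to future tasks are evaluated as 'unknown'."""
    return [y if y in known_so_far else UNKNOWN_ID for y in labels]

# Task 2 example: classes {3, 4} are novel; {1, 2} came from Task 1; 5 is future.
print(training_labels([1, 3, 5, 4], current_classes={3, 4}))      # [-100, 3, -100, 4]
print(evaluation_labels([1, 3, 5, 4], known_so_far={1, 2, 3, 4})) # [1, 3, -1, 4]
```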
+ +![](images/474df57eccd8d68a42ff10fbcd10d26a31eb03949ef79726792e6366d27bbb2f.jpg) +Figure 8: Illustration of the process of unknown identification and learning novel classes. We use orange circles to highlight the differences between 3D-OWIS and 3D-OWIS-PC-CT. The objects depicted in green represent the known classes, while those in blue represent the unknown objects. The gray objects correspond to the background. The qualitative results demonstrate that 3D-OWIS outperforms 3D-OWIS-PC-CT in retrieving unknown objects. Notably, 3D-OWIS correctly identifies the background objects as unknown, whereas 3D-OWIS-PC-CT misclassifies them as known objects. + +Table 7: Proposed distribution of ScanNet200 classes across tasks for each split. We show the classes that are known when training the model during a specific task for the three splits. + +
Split ASplit BSplit C
Task 1Task 2Task 3Task 1Task 2Task 3Task 1Task 2Task 3Task 2Task 3
tv standcushionpaperalarm clockguitarbarbasketironing boardmattress
curtainend tableplatebackpackpaper towel rollbaskettrash candivinertoaster
blindsdining tablesoap dispenserbagbookbathroom cabinetstair railovenstool
shower curtainkeyboardbucketbedbookshelfbathroom countertoaster ovendish rackplant
bookshelfbagclockblanketcartbathroom stalllaundry hampershower doorfolded chair
tvtoilet paperguitarcase of water bottlesfurniturebathroom stall doorbulletmini fridgemicrowave
kitchen cabinetprintertoilet paper holderceilingblackboardbathroom vanitydining tablebicyclecushion
pillowblanketspeakerclosetprojectorbathtubstuffed animallaptopbench
lampmicrowavecupcloset doorseatbottlebathroom vanityarmchairsoap dispenser
dressershoepaper towel rollcloset wallfolded chairbroomcockcouchstorage organizer
monitorcomputer towerbarclothesoffice chairclothes dryerceilingcoffee kettleshower curtain
objectbottletoastercoat rackprojector screencushionpotted plantcountercart
ceilingbinironing boardcontainerwhiteboarddoorframeluggagestructurekitchen counter
boardotomansoap dishcurtainbinfire alarmclutter wallpipetowel
stovebenchtoilet paper dispenserdoorbuckethair dryerdeskbowblackboard
closet wallbasketfire extinguisherdresserbuttonhandicap barduggershower curtain rodTV
coachfanballdumbbellcoperledgeobjectssofa chairprinter
office chairlaptophatfanmachinelight switchrailclothes dryerstand
kitchen counterpersonhatguitar casemailboxmattissue boxcoffee tablerack
showerpaper towel dispensershower curtain rodhatpaper cuttermirrorplatestairsbathroom counter
closetpaper towel dispenserpaper cutterironing boardprimerpaper towel dispenserkeyboardtoilet seat cover dispensercloset rod
dungeonoventraylampcolumnplungerhatmachinebottle
doorframeracktoaster ovenlaptopstorage containerscalecopierpaper bagrange hood
sofa chairpianomouselaundry basketblindsshowersheetbookpurse
mailboxsuitcasetoilet seat cover dispenserlaundry hamperstructureshower curtainbedblindscandle
nightstandrailstorage containerluggagewater bottleshower curtain rodpaper towel dispensermonitorperson
washing machinecontainerscalemattressballshower doorfire extinguishershower wallcoffee maker
picturetelephonetissue boxmini fridgeboardshower floorpaper towel rollcurtainlight switch
bookstandlight switchnightstandboxshower headbackpackclosetstorage container
sinklightcrateobjectcabinetshower wallwater bottletelephonebathroom stall door
recycling binlaundry basketpower outletpillowcd casesinkstovebean bagkitchen floor
tablepipesignposterceiling lightsoap dishlaundry basketbucketrefrigerator
backpackseatprojectorpower outletclocksoap dispenseralarm clocksignrefrigerator
toiletbicycleplungerpursecomputer towertoiletheadphonesmirrortube
copierladderstuffed animalrackcuptoilet paperpianoclocktoilet paper holder
counterjacketheadphonesrecycling bindesktoilet paper dispenserguitarnightstandceiling light
stoolstorage binbroomshelfdividertoilet paper holderbagtv standpicture
refrigeratorcoffee makerguitar caseshoefile cabinettoilet seat cover dispenserdoorhandicap barend table
windowdishwasherdustpansignheadphonestowelspeakerpostercloset door
file cabinetmachinehair dryerstorage binkeyboardtrash binwater coolerblanketfile cabinet
chairmatwater bottlepower outletmonitorwashing machinecupcupcrate
plantwindowsillhandicap bartissue boxmousedustpanwater pitcherrecycling bintoilet paper dispenser
coffee tablebullet boardpursetissue boxpaperlaundry detergentdumbbelllamppillow
stairsfireplaceventwardrobepersonstuffed animalfurniturescalemat
armchairmini fridgeshower floordecorationpower stripstuffed animaldoorhandicap barend table
cabinetwater coolerwater heaterarmchairradiatorbowldoorhandicap barend table
bathroom vanityshower doorbowlstorage binkeyboardcell phonetoilethandicap barend table
chairpatelpaper bagcandletelephonecoffee kettleplungerotomancontainer
mirrorlidgealarm clockchairtraycoffee makershowerpapersleet
blackboardfurnituremusic standchairtubecounterbarpowder stripjacket
trash cancartlaundry detergentcoffee tablewindow silldishwasherfire extinguisherfireplacedresser
stair railcorrectiondumbbellcouchpipedish racksuitcasedoufframedustpan
boxcloset doortubedining tablepipefire extinguishercabinettoilettable
toiletvacuum cleanercd caseend tablestair railkitchen cabinetboardtoiletprojector
dinnerdishwaretoilet screenfireplacestairskitchen countertoilethandy detergenttoilet
clothesrange hoodcoffee kettlecupovenoventoiletcleaning machinetoilet
whiteboardprojector screenshower headkeyboard piano
beddivisionkeyboard pianolight
bathroom countertoilet countercase of water bottlesmusic standplate
clotheslaundry hampercoat rackottoman
wardrobetoilet stall doorfolded chairpiano
clothes dryerceiling lightfire alarmpicture
radiatortrash binpower strippillar
shelftrash bincolor cardpotted plant
radiatorstructurepostertable
shelfstorage organizerpotted plantvacuum cleaner
+ +![](images/56852bc2e499c5a531d15975391b6174aeeaac341de141d15e73ed7f0dd680a6.jpg) +Figure 9: Alleviating catastrophic forgetting during incremental learning. The capability of 3D-OWIS in retaining knowledge of the previously known classes after learning the new one is demonstrated across Task 2 and Task 3 for both scenes, where all objects of old known classes are still being predicted as known. + +![](images/38793c857c3d279ce2d1cbd4f86f694d593fb2a3dd1c1fae025465f54ea15066.jpg) +Figure 10: Qualitative results. The objects depicted in green represent the known classes, while the ones in blue represent the "unknown" class, and the gray objects represent the background. To emphasize the differences between 3D-OWIS and 3D-OWIS-PC-CT, we highlight them with orange circles. + +![](images/bdf04d60cb832b572d7aa34999a8120641b688de8d97b3653b77955237a6d3e3.jpg) +Ground Truth + +![](images/2759b3305ecd6c27cb55df3d3a2ad781a76be32235e7a991b1230b1f0e1db6ee.jpg) +3D-OWIS-PC-CT + +![](images/3d2bf2ff94f763d09db4bc21a9ab3bf7c169f997a6d73d189c9d10a6463fb441.jpg) +3D-OWIS + +![](images/eebb70cb9db0b9519b509499b7438c00e2f17105e0874e005d9175cc04ade200.jpg) + +![](images/a14f09233f9605d29d34fff06b3bdffd09fa1a8fb7f8429b8a8772bcfb905e25.jpg) + +![](images/d22e73ea59cb01ea2eef10bef3b0b6772213e17747987b231b1da001b47fac35.jpg) + +![](images/f2137822f1ea72a197ce8570411130de2079d02e6be892f826b69c46415c5afd.jpg) + +![](images/c17ce6ccba7863c563f8cf98f0a439c7f14e148162ea1151862d6de5c409acd2.jpg) + +![](images/56e6d46415c4f6ef6a00bd0f343acda8c869244cba4db7a797dddd26e7f5ab38.jpg) + +![](images/01c5a871e7789244adb062737bf34e1368c6b176c3ab52bf64854254185553ec.jpg) + +![](images/7aed76e909f4cae28e13e1f9e4260008de5c37ea4eff7e4c9f569f583488a6db.jpg) + +![](images/163a7a769078238a6d1dbc79f58257e5646316d1954bd4190c1f96a687e38fc4.jpg) + +![](images/0cb6c69ede3f8adec9b24314fba8300a2ebcc1d40b63b9e98460ae596db33c1f.jpg) + +![](images/0b374660047e50a6b1afecbf7c601f23526a60ee0c5683724836d830b870d35a.jpg) + 
+![](images/ba7923c0f82f9d3e4ca98b5db85b8c77625fd6ed68275fed038acaff183a86c9.jpg) + +![](images/4e59196bf321ded453f148d35ecbe39ea287ede5d7904b73e916353d22f13a56.jpg) +Figure 11: Additional qualitative results We demonstrate the better performance of our model in accurately identifying background objects (depicted in gray) as unknown (represented by the blue color), and also correcting the predictions from known class to unknown class. This capability greatly reduces the misclassification of background objects as known objects, leading to improved overall classification accuracy. + +![](images/52f4be168223491adc2e1ce4ef3727bdba76d8a302058d3f9853683047dfa3f9.jpg) + +![](images/dd4ccd69c74b093046a3910d05f8059e60657e32f52f1f9e46dbbd3c7772ca7e.jpg) + +![](images/2fe9487a323f067256a6a291867c5c2ff7149e6617c2a9d27fcb7937a2e8bed6.jpg) +Ground Truth + +![](images/3c7a7a7478c3b12875f78e0e1681312d6a428dcafcea91ec6a400c9524cc4526.jpg) +3D-OWIS-PC-CT + +![](images/cbd2a0094e4ec4efb7421edba53981c3b541c3a5449a1e613404724437bea23f.jpg) +3D-OWIS + +![](images/67aa3d0968db2b87be482b1889b6e26e55e1a80536b685a2b24e1541aa007a50.jpg) + +![](images/4161f8915770c57426233e946901e74ae8aa19bacf9889c460e5c44a95e89037.jpg) + +![](images/37a9ec0b611eab8a1d69e97841947d471b77a2c9a4bbd0dd53e296d70396964b.jpg) + +![](images/1dc43acdf13f200357cccea28a52d4880ab0a693b03a7268af8352e8818bce3c.jpg) + +![](images/726b47d281c1a12a5fa76bb56d0f5d29c71c42606483eda3f08593ac8f11100c.jpg) + +![](images/5deaa85f8ab6980746708ed8564903029b7ee6def4aa54d8d3b283d1d1d4e948.jpg) + +![](images/9c3b631162c0d4daea7bd0d4cf16af9c3a3705ec8fc678952d8dfe408ff56dae.jpg) + +![](images/6f43c98692388bc5c8c91f08cab038cfc69ce055da57fc8691289fe4fce0512a.jpg) + +![](images/a5c9c763d99ad2664c5264371fe014bbdfc93589bfb94d33ee9d768cbde00d3c.jpg) + +![](images/be12ef636e694725ec2b16005d1a4346497d64409f4052fca601e4af1f6b08fa.jpg) + +![](images/62da61aaa0f508521790d602a56a598ab3f7506a73250afa63d0c06b9dfc54f7.jpg) + 
+![](images/555a99c0071de297d956f18f6e8e70c723f6e5c36647a829604be2fbcd751887.jpg) + +![](images/27c81a3498e0a0e3e3ef1c279df88743c8eef31727b82e7e714023eb9b8bad98.jpg) +Figure 12: Additional qualitative results + +![](images/f751ae665b730952836b361e6092e9a3c2cf522f57e8f9362aefb0daa4563684.jpg) + +![](images/6e097c04ed91052c9269278d30712346355395ae88f9f1cb6e1cc0fb76e83fc6.jpg) + +![](images/040e7aed043228b97b541e6b76ec64ad11163cb10efb42337dcebfaa4a959261.jpg) +Figure 13: Additional qualitative results \ No newline at end of file diff --git a/3dindoorinstancesegmentationinanopenworld/images.zip b/3dindoorinstancesegmentationinanopenworld/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6c2554dc5f06b71369316029ce2ad53c4099a41c --- /dev/null +++ b/3dindoorinstancesegmentationinanopenworld/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f1b9fafa64c527bf5f3514ddf215f0b20e19ec9da852787b06ecc45626eac07 +size 1665115 diff --git a/3dindoorinstancesegmentationinanopenworld/layout.json b/3dindoorinstancesegmentationinanopenworld/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..486d9113f68535b27e0f89bea0913669b4346d47 --- /dev/null +++ b/3dindoorinstancesegmentationinanopenworld/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ecdd31d2ff5546d463b1a8b6cd9609c6880f54067d3a81891482c9f58d5fb6a5 +size 487562 diff --git a/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_content_list.json b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8eceac98d0a772451734838ec0b50355af809267 --- /dev/null +++ 
b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f64a5e3b45bd34296e0b49b0a9ab82da32fc781a827060762a6c5a35460dfa20 +size 103078 diff --git a/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_model.json b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_model.json new file mode 100644 index 0000000000000000000000000000000000000000..86d23ec665301a5c74c22f3f5e5ecca6b6f0b51b --- /dev/null +++ b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a1e758f397ca42fe482e35dcd51e0504749676150030bf8d6c64a5d14d50a7b +size 128294 diff --git a/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_origin.pdf b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6147373b19f4b60beec0b77ddc45ff4fd9365a68 --- /dev/null +++ b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bb7064ad0e30db731e540d1ab6db8102905daefebaec03234248b8d8e66c8df +size 8361109 diff --git a/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/full.md b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..a323b7bc2083d42fcb60fdde7bc76c67f9cb2427 --- /dev/null +++ b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/full.md @@ -0,0 +1,441 @@ +# 3D-IntPhys: Towards More Generalized 3D-grounded Visual Intuitive Physics under Challenging Scenes + +Haotian Xue + +Antonio Torralba + +Joshua Tenenbaum + +Daniel Yamins$^{3}$ + +Yunzhu Li$^{3,4*}$ + +Hsiao-Yu Tung* + +$^{1}$ Georgia Tech + +$^{2}$ MIT + +$^{3}$ Stanford University + +$^{4}$ UIUC + +# Abstract + +Given a visual scene, humans have strong intuitions about how a scene can evolve over time under given actions. The intuition, often termed visual intuitive physics, is a critical ability that allows us to make effective plans to manipulate the scene to achieve desired outcomes without relying on extensive trial and error. In this paper, we present a framework capable of learning 3D-grounded visual intuitive physics models from videos of complex scenes. Our method is composed of a conditional Neural Radiance Field (NeRF)-style visual frontend and a 3D point-based dynamics prediction backend, using which we can impose strong relational and structural inductive biases to capture the structure of the underlying environment. Unlike existing point-based intuitive dynamics works that rely on the supervision of dense point trajectories from simulators, we relax the requirements and only assume access to multi-view RGB images and (imperfect) instance masks acquired using a color prior. This enables the proposed model to handle scenarios where accurate point estimation and tracking are hard or impossible. We generate datasets including three challenging scenarios involving fluid, granular materials, and rigid objects in simulation. The datasets do not include any dense particle information, so most previous 3D-based intuitive physics pipelines cannot be applied to them.
We show our model can make long-horizon future predictions by learning from raw images and significantly outperforms models that do not employ an explicit 3D representation space. We also show that, once trained, our model can achieve strong generalization in complex scenarios under extrapolation settings. The code is released at https://github.com/xavihart/3D-IntPhys. + +# 1 Introduction + +Humans can achieve a strong intuitive understanding of the 3D physical world around us simply from visual perception [5, 8, 52, 51, 46, 11]. Because we constantly interact physically with the environment, this intuitive physical understanding applies to objects of a wide variety of materials [6, 56]. For example, after watching videos of water pouring and doing the task ourselves, we can develop a mental model of the interaction process and predict how the water will move when we apply actions like tilting or shaking the cup (Figure 1). The ability to predict the future evolution of the physical environment is extremely useful for humans to plan our behavior and perform everyday manipulation tasks. It is thus desirable to develop computational tools that learn 3D-grounded models of the world purely from visual observations and that generalize to objects with complicated physical properties like fluids and granular materials. + +![](images/895d3dc39c2d25900d9e4fba8793dfa5212feac4b242ab0866f7890a3bdf59e3.jpg) +Figure 1: Visual Intuitive Physics Grounded in 3D Space. Humans have a strong intuitive understanding of the physical environment. We can predict how the environment would evolve when applying specific actions. This ability is rooted in our understanding of 3D and applies to objects of diverse materials, which is essential when planning our behavior to achieve specific goals.
In this work, we are the first to leverage a combination of implicit neural representation and explicit 3D particle representation to build 3D-grounded visual intuitive physics models of challenging scenes that apply to objects with complicated physical properties, such as fluids, rigid objects, and granular materials. + +![](images/0660be69ab178dae853cd48ee7386e413f19c1a47e5d744a661875b9403b8cc9.jpg) + +![](images/55a2b90bf56b1e04504a8615cbd353e1764a8ed8a5bf8c2fa71d262389692b2d.jpg) + +There has been a series of works on learning intuitive physics models of the environment from data. However, most existing work either focuses on 2D environments [60, 1, 21, 64, 20, 4, 44, 28, 67, 24, 23, 49, 33, 19, 31, 58, 22, 12, 65] or has to make strong assumptions about the accessible information of the underlying environment [36, 35, 47, 43, 70, 54, 48, 7, 2, 26] (e.g., full-state information of the fluids represented as points). These limitations prevent their use in tasks requiring an explicit 3D understanding of the environment and make them hard to extend to more complicated real-world environments where only visual observations are available. There are works aiming to address this issue by learning a 3D-grounded representation of the environment and modeling the dynamics in a latent vector space [34, 32]. However, these models typically encode the entire scene into one single vector. Such a design does not capture the structure of the underlying systems, limiting its generalization to compositional systems or systems of different sizes (e.g., unseen container shapes or different numbers of floating ice cubes). + +In this work, we propose 3D Visual Intuitive Physics (3D-IntPhys), a framework that learns intuitive physics models of the environment with explicit 3D and compositional structure from visual inputs.
+ +Specifically, the model consists of (1) a perception module based on conditional Neural Radiance Fields (NeRF) [41, 68] that transforms the input images and instance masks into 3D point representations and (2) a dynamics module instantiated as graph neural networks that models the interactions between the points and predicts their evolution over time. Despite advances in graph-based dynamics networks [47, 36], existing methods require strong supervision provided by ground-truth 3D point trajectories, which are hard to obtain in most real setups. To tackle the problem, we train the dynamics model using (1) a distribution-based loss function measuring the difference between the predicted point sets and the actual point distributions at future timesteps and (2) a spacing loss to avoid degenerate point-set predictions. Our perception module learns spatially equivariant representations of the environment grounded in 3D space, which are then transformed into points as a flexible representation to describe the system's state. Our dynamics module regards the point set as a graph and exploits the compositional structure of the point systems. + +These structures allow the model to capture the compositionality of the underlying environment, handle systems involving objects with complicated physical properties, and perform extrapolated generalization; our experiments show it greatly outperforms various baselines that lack a structured 3D representation space. + +# 2 Related Work + +Visual dynamics learning. Existing works learn to predict object motions from pixels using frame-centric features [1, 20, 4, 24, 23, 53, 30, 59, 69, 10, 25, 62] or object-centric features [21, 61, 28, 44, 27, 58, 14, 22, 45, 66], yet most works only demonstrate the learning in 2D scenes with objects + +![](images/6499542e8f80019e6084805a372086fbfa95d424c0a62837964d697d796268cc.jpg) +Figure 2: Overview of 3D Visual Intuitive Physics (3D-IntPhys).
Our model consists of two major components. Left: The perception module maps the visual observations into implicit neural representations of the environment. We then subsample from the reconstructed implicit volume to obtain a particle representation of the environment. Right: The dynamics module, instantiated as graph neural networks, models the interaction within and between the objects and predicts the evolution of the particle set. + +moving only on a 2D plane. We argue that one reason these existing methods are hard to apply to general 3D visual scenes is that they often operate on view-dependent features that can change dramatically with the camera viewpoint, which should not have any effect on the actual motion of the objects. Recent work by [9] has shown that only methods that use 3D view-invariant representations can pave the way toward human-level physics dynamics prediction in diverse scenarios. + +Researchers have attempted to learn object motion in 3D [55, 63, 40, 34]. [55] and [63] use object-centric volumetric representations inferred from RGB-D to predict object motion; yet these volumetric approaches have much higher computation costs than 2D methods due to the 4D representation bottleneck, which hinders them from scaling up to more complex scenes. [40] use self-supervised 3D keypoints and [15, 16] use implicit representations to model multi-object dynamics, but they cannot handle objects with high degrees of freedom like fluid and granular materials. [34] use neural implicit representations to reduce the potential computational cost, yet these works have not shown how the approach can generalize to unseen scenarios. Our work aims to solve the task of learning generalizable object dynamics in 3D by combining the generalization strength of input-feature-conditioned implicit representations and point-based dynamics models. + +Point-based dynamics models.
Existing works on point- and mesh-based dynamics models [36, 42, 57, 47, 43] have shown impressive results in predicting the dynamics of rigid objects, fluids [36, 42, 57, 47, 3, 13], deformable objects [36, 42, 47], and cloth [43, 38]. Most works require access to the full 3D states of the points during training and testing; yet such information is usually not accessible in a real-world setup. [35] learn a visual frontend to infer 3D point states from images, but still require 3D point states and trajectories at training time. [50] propose to learn point dynamics directly from vision, but they only consider elasto-plastic objects consisting of homogeneous materials. How to learn 3D point states and their motion from raw pixels remains an open question. Our paper builds the link from pixels to points using recent advances in unsupervised 3D inference from images with NeRF [41, 68].

# 3 Methods

We present 3D Visual Intuitive Physics (3D-IntPhys), a model that learns to simulate physical events from unlabeled images (Figure 2). 3D-IntPhys contains a perception module that transforms visual observations into a 3D point cloud capturing the object geometries (Section 3.1) and a point-based simulator that learns to simulate the rollout trajectories of the points (Section 3.2). The design choice of learning physics simulation in a 3D point representation space enables stronger simulation performance and generalization ability. The performance gain mainly comes from the fact that describing and learning objects' motion and interactions in 3D is easier than doing so in 2D, since objects live and move persistently in the 3D space. 3D-IntPhys also supports better
Although 3D-IntPhys learns to simulate in a 3D representation space, we show it can learn without any 3D supervision such as the dense point trajectories used in previous work [47, 36]. Dense point trajectories are hard and sometimes impossible to obtain in the real world, e.g., capturing the trajectory of each water point. 3D-IntPhys does not require such 3D supervision and can simply learn by observing videos of the scene evolution.

# 3.1 2D-to-3D Perception Module

Given a static scene, the perception module learns to transform one or a few posed RGB images, $\mathbf{I} = \{(I_i,\pi_i)\mid i\in \{1,2,\dots ,N_v\}\}$, taken from $N_{v}$ different views, into a 3D point cloud representation of the scene, $\mathbf{X}$. We train the model in an unsupervised manner through view reconstruction, using a dataset consisting of $N_{t}$ videos, where each video has $N_{f}$ frames, and each frame contains images taken from $N_{v}$ viewpoints.

Neural Radiance Field (NeRF). NeRF [41] learns to reconstruct a volumetric radiance field of a scene from unlabeled multi-view images. After training, the model predicts the RGB color $\mathbf{c}$ and the corresponding density $\sigma$ of a query 3D point $\mathbf{x} \in \mathbb{R}^3$ viewed from direction $\mathbf{d} \in \mathbb{R}^3$ with a function $(\mathbf{c}, \sigma) = f(\mathbf{x}, \mathbf{d})$. We can formulate a camera ray as $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$, where $\mathbf{o} \in \mathbb{R}^3$ is the origin of the ray. The volumetric radiance field can then be rendered into a 2D image via $\hat{\mathbf{C}}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\sigma(t)\mathbf{c}(t)\,dt$, where $T(t) = \exp(-\int_{t_n}^{t}\sigma(s)\,ds)$ handles occlusion. The rendering range is controlled by the depths of the near and far planes (i.e., $t_n$ and $t_f$).
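In practice, the rendering integral above is approximated by quadrature over discrete samples along each ray, as in NeRF [41]. The following is a minimal NumPy sketch of that discretization; the function name and the toy density/color field are our own illustration, not the authors' code:

```python
import numpy as np

def render_ray(sigma, rgb, t):
    """Quadrature of C(r) = ∫ T(t) σ(t) c(t) dt along one ray.

    sigma: (N,) densities at N samples along the ray
    rgb:   (N, 3) colors at those samples
    t:     (N,) sample depths in [t_n, t_f]
    """
    delta = np.diff(t, append=t[-1] + 1e10)       # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)          # opacity of each segment
    # T_i = prod_{j<i} (1 - alpha_j): accumulated transmittance (handles occlusion)
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = T * alpha                           # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)   # composited RGB

# Toy example: an opaque red segment in the middle of an otherwise empty ray.
t = np.linspace(2.0, 6.0, 64)
sigma = np.where((t > 3.5) & (t < 4.5), 50.0, 0.0)
rgb = np.tile([1.0, 0.0, 0.0], (64, 1))
print(render_ray(sigma, rgb, t))  # approximately [1, 0, 0]: the segment occludes the rest
```

The cumulative-product transmittance is the discrete counterpart of the $T(t)$ factor above: once the ray passes through a high-density region, later samples contribute almost nothing to the pixel.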
We can train NeRF through view prediction:

$$
\mathcal{L} = \sum_{\mathbf{r} \in \mathcal{R}(\mathbf{p})} \|\hat{\mathbf{C}}(\mathbf{r}) - \mathbf{C}(\mathbf{r})\|, \tag{1}
$$

where $\mathcal{R}(\mathbf{p})$ is the set of camera rays sampled from the target camera pose $\mathbf{p}$.

Image-conditioned NeRF. To infer the NeRF function from an image, previous work proposed to encode the input image into a vector with a CNN encoder as a conditioning input to the target NeRF function [34]. We found this type of architecture generally hard to train, and it does not generalize well. Instead, we adopt pixelNeRF [68], which conditions NeRF rendering on local features rather than global features. Given an image $I$ of a scene, pixelNeRF first extracts a feature volume using a CNN encoder, $\mathbf{W} = E(I)$. For a point $\mathbf{x}$ in the world coordinate frame, we retrieve its feature vector $\mathbf{W}(\pi(\mathbf{x}))$ by projecting $\mathbf{x}$ onto the image plane. PixelNeRF combines the feature vector with the 3D position of the point and predicts the RGB color and density:

$$
\mathbf{V}(\mathbf{x}) = (\mathbf{c}, \sigma) = f(\mathbf{x}, \mathbf{d}, \mathbf{W}(\pi(\mathbf{x}))). \tag{2}
$$

In the experiment section, we show that it is surprisingly effective to train a single general model that learns a conditional Neural Radiance Field applicable to all videos of one type of scene (e.g., FluidPour) across five different settings (e.g., the extrapolate setting), which provides a better 3D representation of the scene and greatly facilitates the learning of 3D intuitive physics.

Explicit 3D representation from pixelNeRF. From a few posed RGB images $\mathbf{I}$ of a scene $s$, we infer a set of points for each target object $O_{s}$ (such as fluid or a cube) in the scene.
We achieve this by first sampling a set of points according to the predicted occupancy measure and then clustering the points into objects using object segmentations. We found that sampling at low resolution hurts the quality of the reconstructed point cloud, producing objects with inaccurate shapes, while sampling at high resolution increases the cost of training the dynamics model since the input size grows. To speed up training while maintaining the quality of the reconstructed point cloud, we first infer the points at a higher resolution and then sparsify each point cloud using Farthest Point Sampling (FPS) [17]. Next, we cluster the inferred points into objects according to object segmentation masks. Since solving object segmentation in general is not the main focus of this paper, we resort to using color information to obtain the masks.

# 3.2 Point-Based Dynamics Learner

Given the point representation at the current time step, $\mathbf{X}_t$, the dynamics simulator predicts the points' evolution $T$ steps into the future, $\{\mathbf{X}_{t + 1},\mathbf{X}_{t + 2},\dots ,\mathbf{X}_{t + T}\}$, using graph-based networks [47, 36].

We first form a graph $(V,E)$ based on the distance between points: if the distance between two points is smaller than a threshold $\delta$, we include an edge between them. Each vertex $v_{i} = (\dot{x}_{i},a_{i}^{v})\in V$ contains the velocity of the point, $\dot{x}_i$, and point attributes, $a_i^v$, indicating the point's type. Each relation $(i,j)\in E$ has an associated relation attribute $a_{ij}^{e}$, indicating the type of relation and the relative distance between the connected points.

Spatial message passing and propagation.
At time step $t$, we perform message passing to update the point representations, $v_{i} \in V$, and relation representations, $(i,j) \in E$, in the graph:

$$
g_{ij,t} = Q_{e}\left(v_{i,t}, v_{j,t}, a_{ij}^{e}\right), \quad (i,j) \in E, \tag{3}
$$

$$
h_{i,t} = Q_{v}\left(v_{i,t}, \sum_{k \in \{j \mid (i,j) \in E\}} g_{ik,t}\right), \quad v_{i} \in V, \tag{4}
$$

where $Q_v$ and $Q_e$ are encoders for vertices and relations, respectively. Please refer to [7] for more details. Though this kind of message passing helps update the representations, it only shares one-hop information per step, limiting its ability to model the instantaneous propagation of forces. To improve long-range instantaneous effect propagation, we use multi-step message propagation as in [37, 36]. The propagation procedure is shown in Algorithm 1:

Algorithm 1: Point-based Dynamics Predictor
```txt
Data: current timestep t, point cloud V_t, vertex encoder Q_v, edge encoder Q_e,
      vertex propagator P_v, edge propagator P_e, state predictor f_s
Result: V_{t+1}
Form graph G_t = (V_t, E_t)
// message passing
g_{ij,t} = Q_e(v_{i,t}, v_{j,t}, a^e_{ij}),                        (i,j) in E_t
h_{i,t}  = Q_v(v_{i,t}, sum_{k in {j | (i,j) in E}} g_{ik,t}),     v_i in V_t
// message propagation
h^0_{i,t} = h_{i,t},  g^0_{ij,t} = g_{ij,t}
for l in {1, 2, ..., L}:
    g^l_{ij,t} = P_e(g^{l-1}_{ij,t}, h^{l-1}_{i,t}, h^{l-1}_{j,t}),       (i,j) in E_t
    h^l_{i,t}  = P_v(h^{l-1}_{i,t}, sum_{k in {j | (i,j) in E}} g^l_{ik,t}), v_i in V_t
// state prediction
v_{i,t+1} = f_s(h^L_{i,t})
V_{t+1} = {v_{i,t+1}}
```

where $P_{e}$ and $P_{v}$ are propagation functions for edges and vertices, respectively, $g_{ij,t}^{l}$ is the effect of relation $(i,j)$ at propagation step $l$, and $h_{i,t}^{l}$ is the hidden state of each point during propagation. Finally, after $L$ steps of propagation, we obtain the predicted states of the points at time step $t + 1$:

$$
\hat{v}_{i,t+1} = f_{s}\left(h_{i,t}^{L}\right). \tag{5}
$$

**Environments.** We assume that the surrounding environment (e.g., the table) is known and that the robot/tool/container is of known shape and fully actuated, so the model has access to their complete 3D state information. We convert the full 3D states into points by sampling on the 3D meshes and include these points in the prediction of the graph-based dynamics.

**Fluids, rigid bodies, and granular materials.** We distinguish different materials by using different point attributes $a_{i}^{v}$. We also set different relation attributes $a_{ij}^{e}$ in Equation 3 to distinguish different interactions (e.g., Rigid-Fluids, Fluids-Fluids, Granular-Pusher). For rigid objects, to ensure the object shapes remain consistent throughout the rollout predictions, we add a differentiable rigid constraint in the prediction head following [36].

**Training the dynamics model without point-level correspondence.** Since our perception model parses each RGB image into object-centric point clouds independently, there is no explicit one-to-one correspondence between points across frames. To handle this, we measure the Chamfer distance between the prediction $\hat{\mathbf{X}}_t = (\hat{V}_t,\hat{E}_t)$ from the dynamics network and the inferred point state $\mathbf{X}_t = (V_t,E_t)$ from the perception module and treat it as the objective function.
![](images/2df05e618b447a2fe908dc9469ab3844122b79c8b4a583aabc590312f92842e5.jpg)
Figure 3: Data Collection and Evaluation Setups. Left: We collect multi-view videos of the environment from six cameras. Right: We consider a diverse set of evaluation environments involving fluids, rigid objects, granular materials, and their interactions with the fully-actuated container and the environment. We evaluate the learned visual intuitive physics model in both interpolate settings (i.e., seen environments but with different action sequences) and extrapolate settings (i.e., unseen environments with different amounts of fluids, cubes, and granular pieces, and containers of different sizes).

The Chamfer distance between two point clouds $\hat{V}$ and $V$ is defined as:

$$
L_{c}(\hat{V}, V) = \frac{1}{|\hat{V}|} \sum_{x \in \hat{V}} \min_{y \in V} \|x - y\|_{2}^{2} + \frac{1}{|V|} \sum_{x \in V} \min_{y \in \hat{V}} \|x - y\|_{2}^{2}. \tag{6}
$$

We found that training the model with the Chamfer distance in dense scenes with granular materials often leads to predictions with unevenly distributed points, where some points stick too close to each other. To alleviate this issue, we further introduce a spacing loss $L_{s}$, which penalizes the gated distance (gated by $d_{\mathrm{min}}$) to the nearest neighbor of each point to ensure enough space between points:

$$
L_{s}(\hat{V}) = \sum_{v \in \hat{V}} \left(\operatorname{ReLU}\left(d_{\min} - \min_{v' \in \hat{V}\setminus \{v\}} \|v' - v\|_{2}^{2}\right)\right)^{2}. \tag{7}
$$

The one-step prediction loss for training the dynamics model is $L_{dy} = L_{c}(\hat{V}, V) + \sigma L_{s}(\hat{V})$, where $\sigma$ reweights the second term. To improve long-term rollout accuracy, we train the model with two-step predictions, feeding the first predicted state back into the model to generate the second. With the two-step loss, the model becomes more robust to errors generated from its own predictions. Finally, the $L_{dy}$ losses for all rollout steps are summed to obtain the final loss for the trajectory. More implementation details are included in the supplementary material.

# 4 Experiments

The experiment section aims to answer the following three questions. (1) How well can the visual inference module capture the content of the environment (i.e., can we use the learned representations to reconstruct the scene)? (2) How well does the proposed framework perform in scenes with objects of complicated physical properties (e.g., fluids, rigid and granular objects) compared to baselines without explicit 3D representations? (3) How well do the models generalize to extrapolate scenarios?

Datasets. We generated three simulated datasets using the physics simulator Nvidia FleX [39]. Each dataset represents one specific kind of manipulation scenario, where a robot arm interacts with rigid, fluid, and granular objects (Figure 3). For each of the three scenarios, we apply randomized input actions and vary properties of the objects in the scene, e.g., the shape of the container, the amount of water, and the color/number of cubes, to make the data diverse. To test the generalization capability of the trained model, we design extrapolate datasets generated from a set of parameters outside the training distribution.

a) FluidPour. This scenario contains a fully-actuated cup pouring fluid into a container.
We design the extrapolate dataset to have a larger container, a larger quantity of fluid, and different pouring actions.
b) FluidCubeShake. This scenario contains a fully-actuated container that moves on top of a table. Inside the container are fluids and cubes of diverse colors. We design the extrapolate dataset to have different container shapes, numbers of cubes, cube colors, and shaking actions.

![](images/f6b5f967b7f5ff6af7f556eb80e230ffcab68bae33c07e3c5a1c856de6084ead.jpg)
Figure 4: Qualitative Results of the Dynamics Module on Future Prediction. Here we visualize our model's predicted future evolution of the particle set compared with the NeRF-dy [34] baseline in both interpolate and extrapolate settings. Our method identifies the shape/distribution of the fluids, rigid objects, and granular pieces with much better accuracy than NeRF-dy. The future evolution predicted by our method also matches the ground truth much more closely and produces reasonable results even in extrapolate settings.

![](images/39f53279749a577ce688614ce31b237f78c79512e19fe1fddf433e326e271b29.jpg)

c) GranularPush. This environment contains a fully-actuated board pushing a pile of granular pieces. We design the extrapolate dataset to have a larger quantity of granular objects in the scene than the model has ever seen during training.

Baselines. We compare our method with two baselines, NeRF-dy [34] and an autoencoder (AE) (similar to GQN [18] augmented with a latent-space dynamics model). NeRF-dy is a 3D-aware framework that also learns intuitive physics from multi-view videos. However, instead of learning object dynamics with explicit and compositional 3D representations, it learns dynamics models with implicit 3D representations in the form of a single latent vector.
We also compare our method with an autoencoder-based reconstruction model (AE) [18] that can perform novel-view synthesis but is worse at handling 3D transformations than neural implicit representations. AE first learns scene representations through per-frame image reconstruction, and then it learns a dynamics model on top of the learned latent representations. All methods take RGB images and camera parameters as inputs. To incorporate object-level information, we perform color-based segmentation to obtain object masks as additional inputs to the baselines. The implementation details and parameter settings of our method can be found in the supplementary materials. + +
| Metrics | Model | FluidPour (InD) | FluidPour (OoD) | FluidCubeShake (InD) | FluidCubeShake (OoD) | GranularPush (InD) | GranularPush (OoD) |
|---|---|---|---|---|---|---|---|
| MSE (↓) | AE | 451.03 | 542.86 | 869.31 | 727.55 | 562.06 | 1537.2 |
| | NeRF-dy | 202.95 | 317.27 | 527.46 | 1585.97 | 481.95 | 1020.0 |
| | Ours | 111.66 | 124.33 | 66.52 | 81.38 | 147.97 | 646.85 |
| SSIM (↑) | AE | 0.86 | 0.84 | 0.71 | 0.86 | 0.81 | 0.62 |
| | NeRF-dy | 0.89 | 0.86 | 0.73 | 0.65 | 0.81 | 0.61 |
| | Ours | 0.90 | 0.89 | 0.94 | 0.93 | 0.89 | 0.69 |
Table 1: Quantitative Results of the Perception Module. We compare our method with an autoencoder (AE) and NeRF-dy [34], both given additional color-based instance masks. We measure the quality of rendered images by computing the Mean Squared Error (MSE) and Structural Similarity Index Measure (SSIM) against the ground truth. InD stands for in-distribution tests, and OoD stands for out-of-distribution tests.

![](images/9d7996bca1d22662deac2521577329bbd38348a5f49c081d72d1e71aca0ff857.jpg)
Figure 5: Qualitative Reconstruction Results of the Perception Module. The images generated by our method contain more visual details and are much better aligned with the ground truth. Our model is much better at handling large scene variations than NeRF-dy, especially in extrapolate settings.

# 4.1 Image Reconstruction From Learned Scene Representations

We test how well the perception modules capture scene information by evaluating the visual front-end of all models on their ability to reconstruct the observed scene from the inferred representations. We measure the difference between the reconstructed and ground-truth images with Mean Squared Error (MSE) and Structural Similarity (SSIM) at the pixel level (Table 1). Our perception module outperforms all baselines in all three environments. The performance gap widens further in extrapolate settings, especially in scenarios that involve complex interactions between rigid and deformable materials (see Figure 5 for qualitative comparisons).

# 4.2 Learned Visual Dynamics On In-Distribution Held-Out Scenes

Next, we compare long-term rollouts in the 3D space. We evaluate the models using the Chamfer distance between the predicted point cloud and the ground truth. For NeRF-dy, we decode the predicted rollout latent vectors into point clouds with the learned NeRF decoder. We exclude the comparison with AE since it is unclear how to decode its learned representations into point clouds.
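The Chamfer distance of Equation 6, used both as the training objective and as the evaluation metric here, and the spacing loss of Equation 7 can be sketched as follows; this is a NumPy illustration under our own naming, not the authors' implementation:

```python
import numpy as np

def chamfer(V_hat, V):
    """Eq. 6: symmetric mean squared nearest-neighbor distance between point sets."""
    # (|V_hat|, |V|) matrix of pairwise squared Euclidean distances
    d2 = ((V_hat[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def spacing_loss(V_hat, d_min=0.05):
    """Eq. 7: penalize points whose nearest neighbor is closer than d_min."""
    d2 = ((V_hat[:, None, :] - V_hat[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)              # exclude each point's self-distance
    nn = d2.min(axis=1)                        # squared distance to nearest neighbor
    return (np.maximum(d_min - nn, 0.0) ** 2).sum()   # ReLU gate, then square

# One-step loss L_dy = L_c + sigma * L_s on random toy point clouds
pred = np.random.rand(128, 3)
target = np.random.rand(256, 3)
loss = chamfer(pred, target) + 0.1 * spacing_loss(pred)  # sigma = 0.1 is illustrative
```

Because both terms depend only on nearest-neighbor distances, neither requires point-level correspondence across frames, which is exactly why the paper can train without ground-truth point trajectories.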
We show quantitative comparisons in Figure 6 and qualitative results in Figure 4.

![](images/44574cd6a2ab718edc01a41766864852f4c99331c25166894b25a0607b87be65.jpg)
Figure 6: Quantitative Results of the Dynamics Module. This figure compares our method and NeRF-dy [34] on their long-horizon open-loop future prediction loss. The loss is measured as the Chamfer distance between the predicted particle set evolution and the actual future. Our method outperforms the baseline in both interpolate and extrapolate settings, showing the benefits of explicit 3D modeling.

![](images/b7ce4620d54d09177c6b7ed2a53d4827397dc6a7f61b9fae0f034d39f60bb3f8.jpg)

![](images/aa7938459d51854d90f1c21843bda2d87a07c60fe92a472512ece5c6e91e155b.jpg)

![](images/6e54fd0687a3c8afdff8bfe2525dde5d60d975274cc88f08d54f6733d5da4faa.jpg)
Figure 7: Strong Generalization Ability of the Dynamics Module to Wider Pushers. We evaluate our dynamics model on unseen pusher widths in the GranularPush environment. The left part shows the 3D space, where red indicates granular materials, green shows the table and pusher, and the arrow shows how the pusher is about to move. The right part shows the top view of the rendered results.

![](images/8bb0e18560dec0f8fff56d9e256a61213500472976162673c7c698f28c0d93b6.jpg)

3D-IntPhys learns reasonable scene dynamics in all scenarios and significantly outperforms NeRF-dy. While NeRF-dy can learn relatively reasonable movements of fluids, it fails to learn complex dynamics such as the floating cube and the morphing of the granular materials. The results suggest that the proposed explicit 3D point-based representations are critical to learning complex multi-material dynamics.

# 4.3 Generalization on Out-of-Distribution Scenes

To test the generalization ability of the models, we introduce extrapolate settings for all three scenarios. See the "Extrapolate" results in Table 1 and Figures 4, 5, and 6.
The proposed 3D-IntPhys generalizes well to extrapolate settings at both the visual perception stage and the dynamics prediction stage, whereas NeRF-dy and the autoencoder both fail to generalize under extrapolate settings. For example, in FluidCubeShake, neither baseline can capture the number and color of the rigid cubes (Figure 5). In GranularPush, both baselines fail to capture the distribution of the granular materials. NeRF-dy performs much worse on extrapolate scenes than on in-distribution scenes, suggesting that incorporating 3D information explicitly, as opposed to implicitly, is much better at capturing the structure of the underlying environment and thus leads to better generalization. We further test our model on completely unseen changes to the environment: in the GranularPush environment, we extend the width of the pusher by factors of 2 and 5. Though the stretched pusher never appears in the training data, our model makes reasonable pushing predictions (see Figure 7).

# 5 Conclusions

In this work, we propose a 3D-aware and compositional framework, 3D-IntPhys, to learn intuitive physics from unlabeled visual inputs. Our framework works on complex scenes involving fluids, rigid objects, and granular materials, and generalizes to unseen scenes with containers of different sizes, more objects, or larger quantities of fluids and granular pieces. We show the proposed model outperforms baselines by a large margin, highlighting the importance of learning dynamics models in an explicit 3D representation space. The major limitation of our work is the assumption of access to object masks. However, with the progress on segmentation in the wild [71, 29], we believe it will be possible to obtain such masks in real-world 3D environments.
Our work is a pioneering step toward visual intuitive physics learning in complex scenes, and it is an exciting future direction to learn more complex intuitive physics from real-world data with the help of these large models.

# References

[1] P. Agrawal, A. V. Nair, P. Abbeel, J. Malik, and S. Levine. Learning to poke by poking: Experiential learning of intuitive physics. Advances in neural information processing systems, 29, 2016.
[2] A. Ajay, M. Bauza, J. Wu, N. Fazeli, J. B. Tenenbaum, A. Rodriguez, and L. P. Kaelbling. Combining physical simulators and object-based networks for control. CoRR, abs/1904.06580, 2019.
[3] K. R. Allen, T. Lopez-Guevara, K. L. Stachenfeld, A. Sanchez-Gonzalez, P. W. Battaglia, J. B. Hamrick, and T. Pfaff. Physical design using differentiable learned simulators. CoRR, abs/2202.00728, 2022.
[4] M. Babaeizadeh, M. T. Saffar, S. Nair, S. Levine, C. Finn, and D. Erhan. Fitvid: Overfitting in pixel-level video prediction. CoRR, abs/2106.13195, 2021.
[5] R. Baillargeon, E. S. Spelke, and S. Wasserman. Object permanence in five-month-old infants. Cognition, 20:191-208, 1985.
[6] C. Bates, I. Yildirim, J. B. Tenenbaum, and P. W. Battaglia. Modeling human intuitions about liquid flow with particle-based simulation. CoRR, abs/1809.01524, 2018.
[7] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. F. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, C. Gulçehre, H. F. Song, A. J. Ballard, J. Gilmer, G. E. Dahl, A. Vaswani, K. R. Allen, C. Nash, V. Langston, C. Dyer, N. Heess, D. Wierstra, P. Kohli, M. M. Botvinick, O. Vinyals, Y. Li, and R. Pascanu. Relational inductive biases, deep learning, and graph networks. CoRR, abs/1806.01261, 2018.
[8] P. W. Battaglia, J. B. Hamrick, and J. B. Tenenbaum. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110:18327-18332, 2013.
[9] D. M. Bear, E. Wang, D. Mrowca, F. J. Binder, H.-Y. F. Tung, R. Pramod, C. Holdaway, S. Tao, K. Smith, L. Fei-Fei, et al. Physion: Evaluating physical prediction from vision in humans and machines. arXiv preprint arXiv:2106.08261, 2021.
[10] Y. Burda, H. Edwards, D. Pathak, A. Storkey, T. Darrell, and A. A. Efros. Large-scale study of curiosity-driven learning. In ICLR, 2019.
[11] S. Carey and F. Xu. Infants' knowledge of objects: beyond object files and object tracking. Cognition, 80(1):179-213, 2001. Objects and Attention.
[12] M. B. Chang, T. D. Ullman, A. Torralba, and J. B. Tenenbaum. A compositional object-based approach to learning physical dynamics. CoRR, abs/1612.00341, 2016.
[13] F. de Avila Belbute-Peres, T. D. Economon, and J. Z. Kolter. Combining differentiable PDE solvers and graph neural networks for fluid flow prediction. CoRR, abs/2007.04439, 2020.
[14] D. Ding, F. Hill, A. Santoro, and M. M. Botvinick. Object-based attention for spatio-temporal reasoning: Outperforming neuro-symbolic models with flexible distributed architectures. CoRR, abs/2012.08508, 2020.
[15] D. Driess, J.-S. Ha, M. Toussaint, and R. Tedrake. Learning models as functionals of signed-distance fields for manipulation planning. In Conference on Robot Learning, pages 245-255. PMLR, 2022.
[16] D. Driess, Z. Huang, Y. Li, R. Tedrake, and M. Toussaint. Learning multi-object dynamics with compositional neural radiance fields. arXiv preprint arXiv:2202.11855, 2022.
[17] Y. Eldar, M. Lindenbaum, M. Porat, and Y. Zeevi. The farthest point strategy for progressive image sampling. IEEE Transactions on Image Processing, 6(9):1305-1315, 1997.
[18] S. A. Eslami, D. Jimenez Rezende, F. Besse, F. Viola, A. S. Morcos, M. Garnelo, A. Ruderman, A. A. Rusu, I. Danihelka, K. Gregor, et al. Neural scene representation and rendering. Science, 360(6394):1204-1210, 2018.
[19] C. Finn, I. J. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. CoRR, abs/1605.07157, 2016.
[20] C. Finn and S. Levine. Deep visual foresight for planning robot motion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2786-2793. IEEE, 2017.
[21] K. Fragkiadaki, P. Agrawal, S. Levine, and J. Malik. Learning visual predictive models of physics for playing billiards. In Y. Bengio and Y. LeCun, editors, 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
[22] R. Girdhar, L. Gustafson, A. Adcock, and L. van der Maaten. Forward prediction for physical reasoning. CoRR, abs/2006.10734, 2020.
[23] D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019.
[24] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latent dynamics for planning from pixels. In International conference on machine learning, pages 2555-2565. PMLR, 2019.
[25] D. Hafner, T. P. Lillicrap, J. Ba, and M. Norouzi. Dream to control: Learning behaviors by latent imagination. CoRR, abs/1912.01603, 2019.
[26] M. Janner, J. Fu, M. Zhang, and S. Levine. When to trust your model: Model-based policy optimization. CoRR, abs/1906.08253, 2019.
[27] M. Janner, S. Levine, W. T. Freeman, J. B. Tenenbaum, C. Finn, and J. Wu. Reasoning about physical interactions with object-oriented prediction and planning. In International Conference on Learning Representations, 2019.
[28] T. Kipf, E. van der Pol, and M. Welling. Contrastive learning of structured world models. arXiv preprint arXiv:1911.12247, 2019.
[29] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023.
[30] A. X. Lee, R. Zhang, F. Ebert, P. Abbeel, C. Finn, and S. Levine. Stochastic adversarial video prediction. CoRR, abs/1804.01523, 2018.
[31] A. Lerer, S. Gross, and R. Fergus. Learning physical intuition of block towers by example. CoRR, abs/1603.01312, 2016.
[32] T. Li, M. Slavcheva, M. Zollhoefer, S. Green, C. Lassner, C. Kim, T. Schmidt, S. Lovegrove, M. Goesele, and Z. Lv. Neural 3d video synthesis. arXiv preprint arXiv:2103.02597, 2021.
[33] W. Li, S. Azimi, A. Leonardis, and M. Fritz. To fall or not to fall: A visual approach to physical stability prediction. CoRR, abs/1604.00066, 2016.
[34] Y. Li, S. Li, V. Sitzmann, P. Agrawal, and A. Torralba. 3d neural scene representations for visuomotor control. arXiv preprint arXiv:2107.04004, 2021.
[35] Y. Li, T. Lin, K. Yi, D. Bear, D. L. Yamins, J. Wu, J. B. Tenenbaum, and A. Torralba. Visual grounding of learned physical models. In International Conference on Machine Learning, 2020.
[36] Y. Li, J. Wu, R. Tedrake, J. B. Tenenbaum, and A. Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. In ICLR, 2019.
[37] Y. Li, J. Wu, J.-Y. Zhu, J. B. Tenenbaum, A. Torralba, and R. Tedrake. Propagation networks for model-based control under partial observation. In 2019 International Conference on Robotics and Automation (ICRA), pages 1205-1211. IEEE, 2019.
[38] X. Lin, Y. Wang, Z. Huang, and D. Held. Learning visible connectivity dynamics for cloth smoothing. In Conference on Robot Learning, 2021.
[39] M. Macklin, M. Müller, N. Chentanez, and T.-Y. Kim. Unified particle physics for real-time applications. ACM Transactions on Graphics (TOG), 33(4):1-12, 2014.
[40] L. Manuelli, Y. Li, P. Florence, and R. Tedrake. Keypoints into the future: Self-supervised correspondence in model-based reinforcement learning. arXiv preprint arXiv:2009.05085, 2020.
[41] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European conference on computer vision, pages 405-421. Springer, 2020.
[42] D. Mrowca, C. Zhuang, E. Wang, N. Haber, L. Fei-Fei, J. B. Tenenbaum, and D. L. K. Yamins. Flexible neural representation for physics prediction. CoRR, abs/1806.08047, 2018.
[43] T. Pfaff, M. Fortunato, A. Sanchez-Gonzalez, and P. W. Battaglia. Learning mesh-based simulation with graph networks. In International Conference on Learning Representations, 2021.
[44] H. Qi, X. Wang, D. Pathak, Y. Ma, and J. Malik. Learning long-term visual dynamics with region proposal interaction networks. In ICLR, 2021.
[45] R. Riochet, J. Sivic, I. Laptev, and E. Dupoux. Occlusion resistant learning of intuitive physics from videos. CoRR, abs/2005.00069, 2020.
[46] A. N. Sanborn, V. K. Mansinghka, and T. L. Griffiths. Reconciling intuitive physics and newtonian mechanics for colliding objects. Psychological review, 120 2:411-37, 2013.
[47] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. Battaglia. Learning to simulate complex physics with graph networks. In International Conference on Machine Learning, pages 8459-8468. PMLR, 2020.
[48] A. Sanchez-Gonzalez, N. Heess, J. T. Springenberg, J. Merel, M. A. Riedmiller, R. Hadsell, and P. W. Battaglia. Graph networks as learnable physics engines for inference and control. CoRR, abs/1806.01242, 2018.
[49] J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lockhart, D. Hassabis, T. Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. Nature, 588(7839):604-609, 2020.
[50] H. Shi, H. Xu, Z. Huang, Y. Li, and J. Wu. Robocraft: Learning to see, simulate, and shape elasto-plastic objects with graph networks. arXiv preprint arXiv:2205.02909, 2022.
[51] K. Smith, L. Mei, S. Yao, J. Wu, E. Spelke, J. Tenenbaum, and T. Ullman. Modeling expectation violation in intuitive physics with coarse probabilistic object representations. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[52] E. S. Spelke. Principles of object perception. Cognitive Science, 14(1):29-56, 1990.
[53] H. Suh and R. Tedrake. The surprising effectiveness of linear models for visual foresight in object pile manipulation. In International Workshop on the Algorithmic Foundations of Robotics, pages 347-363. Springer, 2020.
[54] A. Tacchetti, H. F. Song, P. A. M. Mediano, V. F. Zambaldi, N. C. Rabinowitz, T. Graepel, M. M. Botvinick, and P. W. Battaglia. Relational forward models for multi-agent learning. CoRR, abs/1809.11044, 2018.
[55] H.-Y. F. Tung, Z. Xian, M. Prabhudesai, S. Lal, and K. Fragkiadaki. 3d-oes: Viewpoint-invariant object-factorized environment simulators. arXiv preprint arXiv:2011.06464, 2020.
[56] T. Ullman, E. Kosoy, I. Yildirim, A. A. Soltani, M. H. Siegel, J. Tenenbaum, and E. S. Spelke. Draping an elephant: Uncovering children's reasoning about cloth-covered objects. In Proceedings of the 41st Annual Conference of the Cognitive Science Society, pages 3008-3014, 2019.
[57] B. Ummenhofer, L. Prantl, N. Thuerey, and V. Koltun. Lagrangian fluid simulation with continuous convolutions. In International Conference on Learning Representations, 2019.
[58] R. Veerapaneni, J. D. Co-Reyes, M. Chang, M. Janner, C. Finn, J. Wu, J. B. Tenenbaum, and S. Levine. Entity abstraction in visual model-based reinforcement learning. CoRR, abs/1910.12827, 2019.
[59] C. Vondrick, H. Pirsiavash, and A. Torralba. Anticipating the future by watching unlabeled video. CoRR, abs/1504.08023, 2015.
[60] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. Advances in neural information processing systems, 28, 2015.
[61] N. Watters, D. Zoran, T. Weber, P. Battaglia, R. Pascanu, and A. Tacchetti. Visual interaction networks: Learning a physics simulator from video. Advances in neural information processing systems, 30, 2017.
[62] B. Wu, S. Nair, R. Martin-Martin, L. Fei-Fei, and C. Finn. Greedy hierarchical variational autoencoders for large-scale video prediction. CoRR, abs/2103.04174, 2021.
[63] Z. Xu, Z. He, J. Wu, and S. Song. Learning 3d dynamic scene representations for robot manipulation. In Conference on Robotic Learning (CoRL), 2020.
[64] Z. Xu, J. Wu, A. Zeng, J. B. Tenenbaum, and S. Song. Densephysnet: Learning dense physical object representations via multi-step dynamic interactions. In Robotics: Science and Systems (RSS), 2019.
[65] T. Xue, J. Wu, K. L. Bouman, and W. T. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In Advances in Neural Information Processing Systems, 2016.
[66] Y. Ye, D. Gandhi, A. Gupta, and S. Tulsiani. Object-centric forward modeling for model predictive control. In CoRL, 2019.
[67] Y. Ye, M. Singh, A. Gupta, and S. Tulsiani. Compositional video prediction. In International Conference on Computer Vision (ICCV), 2019.
[68] A. Yu, V. Ye, M. Tancik, and A. Kanazawa. pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4578-4587, 2021.
[69] M. Zhang, S. Vikram, L. Smith, P. Abbeel, M. J. Johnson, and S. Levine. SOLAR: deep structured latent representations for model-based reinforcement learning. CoRR, abs/1808.09105, 2018.
[70] R. Zhang, J. Wu, C. Zhang, W. T. Freeman, and J. B. Tenenbaum. A comparative evaluation of approximate probabilistic simulation and deep neural networks as accounts of human physical scene understanding. CoRR, abs/1605.01138, 2016.
[71] X. Zou, J. Yang, H. Zhang, F. Li, L. Li, J. Gao, and Y. J. Lee. Segment everything everywhere all at once. arXiv preprint arXiv:2304.06718, 2023.

# A Additional Results

To better understand the performance of our framework visually, we prepare test-time rollouts of our framework as well as those of various baselines in the supplementary video.
The video is published anonymously and can be accessed at https://sites.google.com/view/3d-intphys

# A.1 Ablation Study

We find that training the model with the Chamfer distance in dense scenes with granular materials often leads to predictions with unevenly distributed points, where some points stick too close to each other. To alleviate this issue, we introduce the spacing loss to penalize the distance between such points. We set the penalty threshold $d_{min}$ to 0.08 and the loss weight $\sigma$ to 10. We find that the spacing loss helps improve the performance of the dynamics learner, especially under the extrapolate setting, as shown in Figure 8. We provide qualitative results in the supplementary video.

![](images/bbca977cda5ec360e7e0137adcba7d69bea41b60fa5b0957e2e3183bd4689fbe.jpg)
Figure 8: Ablation Study on the Spacing Loss. Training dynamics models in the GranularPush scenario with the spacing loss results in better rollout prediction. The performance gap is even more substantial in the extrapolate setting.

![](images/358b2f922cf6cad6b16ee8477a77cdcaffd2b9341a44183fa9167a195cc2b70f.jpg)

# B Implementation Details

# B.1 Dataset Generation

Our datasets are generated with the NVIDIA Flex simulator. Each of the three scenarios (Pour, Shake, and Push) has 500 videos of trajectories taken from 6 views, with each trajectory consisting of 300 frames. We manually select the 6 views with reasonable coverage of the tabletop space to minimize occlusion. The 500 trials are generated from five different sets of environmental parameters, detailed in Table 3. We take the one set of parameters that lies outside the training distribution as the extrapolate dataset for evaluating model generalization. We randomly split the remaining four settings into train and test sets with a ratio of 0.8.
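The split described above can be sketched as follows (a hypothetical minimal version, assuming 5 parameter sets of 100 trials each with set 4 held out for extrapolation; the actual trial indexing is not specified in the paper):

```python
import random

def split_trials(n_sets=5, trials_per_set=100, extrapolate_set=4,
                 train_ratio=0.8, seed=0):
    """Hold out one parameter set for extrapolation; split the rest 80/20."""
    rng = random.Random(seed)
    train, test, extrapolate = [], [], []
    for s in range(n_sets):
        trials = [(s, t) for t in range(trials_per_set)]
        if s == extrapolate_set:
            extrapolate += trials  # never seen during training
            continue
        rng.shuffle(trials)
        k = int(train_ratio * len(trials))
        train += trials[:k]
        test += trials[k:]
    return train, test, extrapolate
```

With the numbers above, this yields 320 training, 80 test, and 100 extrapolate trials.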
Next, we provide more details for each scenario:

- In the FluidPour environment, we randomly initialize the position of the upper container and then generate random back-and-forth actions by tilting the container. The action space is the position and tilting angle of the upper container.
- In FluidCubeShake, we also randomly initialize the position of the container and the cubes inside it. We then generate random but smooth action sequences moving the container in the 2D plane. The action space is the x-y location of the container.
- In GranularPush, we randomly initialize the position of the granular pile. Then, for each push, we randomly generate the starting and ending positions of the pusher and move the pusher along a straight line, oriented perpendicular to the pushing direction. The action space is a four-number tuple specifying the starting and ending positions on the 2D plane.

The following table shows the moving range of the robot arms in the FluidPour and FluidCubeShake environments after normalizing the robot to the same size as in the real world (unit: centimeters).
| | X-Range | Y-Range | Z-Range |
| --- | --- | --- | --- |
| FluidPour | [-29.11, -12.66] | [42.00, 60.00] | [-7.78, 7.78] |
| FluidCubeShake | [-3.25, 42.25] | [19.25, 19.25] | [-24.50, 24.00] |

Table 2: Robot Action Space (centimeters): the range the robot arms can move in the FluidPour and FluidCubeShake environments.

![](images/87cbf2bccb71f3c0e93e360c2c4660bacab10c23997c7230dc017a7c3835adfb.jpg)
Figure 9: Illustration of the Environment Settings. In the FluidPour scenario, a robot arm holds a container and tries to pour some fluid into another container. In the FluidShake scenario, a robot moves a container with some fluid and cubes. We show the parameters for the container shape referred to in Table 3.
| Scene Name | Params | Env 1 | Env 2 | Env 3 | Env 4 | Extrapolate |
| --- | --- | --- | --- | --- | --- | --- |
| FluidPour | $X_2$ | 0.53 | 0.53 | 0.81 | 0.81 | 0.81 |
| | $Y_2$ | 0.53 | 0.81 | 0.53 | 0.81 | 0.81 |
| | $Z_2$ | 1.24 | 1.24 | 1.24 | 1.24 | 1.24 |
| | $X_1$ | 1.35 | 1.35 | 1.35 | 1.35 | 1.35 |
| | $Y_1$ | 1.35 | 1.35 | 1.35 | 1.35 | 1.35 |
| | $Z_1$ | 0.74 | 0.74 | 0.74 | 0.74 | 0.74 |
| | Amount of Water | 5125 | 5125 | 6125 | 5375 | 7625 |
| FluidCubeShake | $X_1$ | 0.88 | 0.88 | 1.32 | 1.32 | 1.32 |
| | $Y_1$ | 0.88 | 1.32 | 0.88 | 1.32 | 1.32 |
| | Cube Number | 1 | 1 | 2 | 2 | 3 |
| | Amount of Water | 2173 | 3322 | 3322 | 4858 | 4983 |
| GranularPush | Granular Number | 2197 | 4032 | 5832 | 9261 | 12167 |
Table 3: Scene Parameters for Generating the Interpolate and Extrapolate Datasets. We generate the datasets by varying the shape of the container, the amount of water, the number of cubes, and the quantity of the granular material. $Z_{i}, X_{i}, Y_{i}$ are the height, width, and depth of container $i$. Please refer to Figure 9 for more details.

For GranularPush, the pusher moves over the entire table; we do not report a specific range for this environment, as there is no robot arm to use as a reference.

Additional dataset samples. We show samples from the FluidPour, FluidCubeShake, and GranularPush datasets in Figures 10, 11, and 12, respectively. Note that all trajectories for the extrapolate settings are used only for testing and never appear during training. We include more samples from the datasets in video format in the supplementary video.

![](images/f5719c44f4fc8f80110db95462525a68204f7bbab2e022a954867c1288dcb6bf.jpg)
Figure 10: Samples from the FluidPour Dataset. We show sequences of frames over time with an interval of 20 frames. The sequences above the dashed line are interpolate data, and the bottom images illustrate the extrapolate data.

![](images/34b771a35bb59d5c833fd8f132fc168720b4022e6b13147cccc5e1e632c4e1de.jpg)
Figure 11: Samples from the FluidCubeShake Dataset. We show sequences of frames over time with an interval of 20 frames. The sequences above the dashed line are interpolate data, and the bottom images illustrate the extrapolate data.

# B.2 Model Architecture

Image-conditional NeRF. We follow the architectural design of [68]. For the feature encoder, we employ a ResNet-34 backbone to extract features. We take the outputs of the layers prior to the first four pooling layers, upsample them to a common size with bilinear interpolation, and concatenate the four feature maps. We initialize the feature extractor with ImageNet pre-trained weights.
For the NeRF function $f$, we use a fully-connected ResNet architecture with 5 ResNet blocks and a width of 512.

![](images/fe854b1c4543d387706d496a3c79406f37d8ad055f9275b18f7b1134af6e7a90.jpg)
Figure 12: Samples from the GranularPush Dataset. We show sequences of frames over time with an interval of 20 frames. The sequences above the dashed line are interpolate data, and the bottom images illustrate the extrapolate data.

Dynamics predictor. For the edge and vertex encoders, $Q_{e}$ and $Q_{v}$, we use 3-layer fully-connected networks with 150 hidden units, activated by ReLU. For the propagators, $P_{e}$ and $P_{v}$, we use a 1-layer fully-connected network followed by ReLU activation; the output dimension of the linear layer is 150.

Sampling 3D points from the trained visual perception module. We sample points on a $40 \times 40 \times 40$ grid from an area of $55\mathrm{cm} \times 55\mathrm{cm} \times 55\mathrm{cm}$ and $63\mathrm{cm} \times 63\mathrm{cm} \times 63\mathrm{cm}$ at the center of the table for FluidPour and FluidCubeShake, respectively, and on a $70 \times 70 \times 70$ grid from an area of $6\mathrm{cm} \times 6\mathrm{cm} \times 6\mathrm{cm}$ for GranularPush. We keep points whose density (measured by the occupancy in the predicted neural radiance field) is larger than 0.99. To reduce the total number of points, we subsample the inferred points with farthest point sampling (FPS) at a ratio of $5\%$ for FluidPour and $10\%$ for FluidCubeShake and GranularPush.

Graph building. We set the neighbour distance threshold $\delta$ to 0.2, 0.15, and 0.15 for FluidPour, FluidCubeShake, and GranularPush, respectively. We select the threshold so that each point has on average 20 to 30 neighbors. Since in FluidPour we sample points at a lower density (around 2,000 points/$m^2$), we use a larger threshold for this scenario. For FluidCubeShake and GranularPush, since the density is around 3,000 points/$m^2$, we cut the threshold down by $25\%$.
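A minimal NumPy sketch of the sampling and graph-building steps above (the `density_fn` argument stands in for the trained perception module's occupancy query, and the FPS subsampling step is omitted for brevity):

```python
import numpy as np

def sample_and_build_graph(density_fn, grid_min, grid_max, n=40,
                           density_thresh=0.99, delta=0.2):
    """Keep grid points whose predicted density exceeds the threshold, then
    connect every pair of surviving points within distance delta."""
    axes = [np.linspace(lo, hi, n) for lo, hi in zip(grid_min, grid_max)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
    pts = grid[density_fn(grid) > density_thresh]
    # Dense pairwise distances (fine at this scale; a KD-tree would be the
    # scalable choice for larger point sets).
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    mask = (d <= delta) & np.triu(np.ones(d.shape, dtype=bool), k=1)
    i, j = np.where(mask)
    return pts, np.stack([i, j], axis=1)
```

Each returned edge pair appears once (upper triangle); undirected message passing can duplicate edges in both directions.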
We found that if the threshold is too small, performance degrades significantly, since each particle only receives messages from a few neighbors (and misses out on the larger context). On the other hand, setting the threshold too large increases training time, since the graph has more edges. We found that setting the threshold at roughly the right scale generally leads to more effective training of a reasonable dynamics network.

# B.3 Training Details

The models are implemented in PyTorch. We train the perception module using the Adam optimizer with a learning rate of 1e-4, and we reduce the learning rate by $80\%$ when the performance on the validation set has stopped improving for 3 epochs. To compute the rendering loss when training the perception module, we sample 64 points along each ray in the scene and set the ray-batch size of the NeRF query function $f$ to $1024\times 32$. Training the perception module on a single scenario takes around 5 hours on one RTX-3090.

We train the dynamics simulator using the Adam optimizer with a learning rate of 1e-4, and we reduce the learning rate by $80\%$ when the performance on the validation set has stopped improving for 3 epochs. The batch size is set to 4. We train the model for 20, 30, and 40 epochs for FluidPour, FluidCubeShake, and GranularPush, respectively. It takes around $10 \sim 15$ hours to train the dynamics model in one environment on a single RTX-3090.

# B.4 Graph-Based Dynamics Model without Particle-level Correspondence

The velocity of an object provides critical information on how the object will move in the future; yet, we do not have access to such information when tracking the object is impossible. As described in Section 3.2, the attributes $a_i^v$ of a vertex $v_i$ in the built graph consist of (1) the velocity of this point in the past frames and (2) the attributes of the point (rigid, fluid, granular).
To get the velocity of a vertex $v$, we would need its positions in past frames. However, since the point clouds are inferred from each frame independently, we do not know how each point moves over time: we have no point correspondence between frames.

To address this problem, we leverage the fact that some objects in the scene are easier to track, and use the motion of these trackable objects to infer the motion of the untrackable units. We assume that we know the densely-labeled states of some known, fully-actuated shapes, such as desks and cups connected to the robot arms. Consider one specific scenario where a cup of water is poured into another cup. In this case, we have two different types of points: points for the fluid and points for the cups; we denote their states at time step $t$ as $V_{P}^{t} = \{v_{P,i}^{t}\}$ and $V_{S}^{t} = \{v_{S,i}^{t}\}$, respectively. For the particle encoder $Q_{v}$, if a particle belongs to the cups, the input of the particle encoder contains the $n_{s}$ history states before $t_0$: $\{V_S^{(t_0 - n_s):t_0}\}$. If the particle belongs to the water, we have no history states, so the input of $Q_{v}$ is all-zero.

By adding the relative position between receiver and sender points, we can pass the momentum of $V_{S}$ to $V_{P}$. Mirroring human intuition, we can get an intuitive prediction of the movement of the water by simply knowing the past movement of the cup, without knowing the past movement of the water.

Following [47], we use the velocities of points and their relative positions as inputs to the dynamics module instead of the absolute positions of the points. This makes the model translation-invariant, so the learned dynamics model can be shared across different spatial locations.
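A minimal sketch of the input construction described in this section (the helper name and exact feature layout are our assumptions, not the paper's implementation): trackable shape points carry finite-difference velocities computed from their known history, fluid/granular points get all-zero history, and edges carry translation-invariant relative positions.

```python
import numpy as np

def build_inputs(positions, is_shape, history, edges):
    """positions: (N, 3) current point positions; is_shape: (N,) bool mask of
    trackable, fully-actuated shape points; history: (n_s, N, 3) past positions
    (only meaningful where is_shape is True); edges: (E, 2) receiver/sender ids."""
    traj = np.concatenate([history, positions[None]], axis=0)  # (n_s+1, N, 3)
    vel = np.diff(traj, axis=0)                                # (n_s, N, 3)
    vel[:, ~is_shape] = 0.0      # untrackable points: all-zero history features
    node_feats = np.concatenate(
        [vel.transpose(1, 0, 2).reshape(len(positions), -1),
         is_shape[:, None].astype(float)],                     # point attribute
        axis=1)
    # Relative positions (not absolute ones) keep the model translation-invariant.
    rel = positions[edges[:, 0]] - positions[edges[:, 1]]
    edge_feats = np.concatenate(
        [rel, np.linalg.norm(rel, axis=1, keepdims=True)], axis=1)
    return node_feats, edge_feats
```

These per-node and per-edge features would then be fed to the encoders $Q_v$ and $Q_e$.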
# B.5 Inference Speed of Our Model

The prediction speed of the dynamics module depends on the number of input particles: it takes around 0.1s for graphs with around 300 nodes in FluidShake and FluidPour, and around 0.2s for scenes with $700+$ nodes in GranularPush.

For the visual module, the main time cost comes from NeRF sampling: it takes 0.2s to sample from the grid introduced in the experiment section of our paper. This is run in blocks with a block size of 1000, occupying 4GB of a V100 GPU, and it can be made even faster with larger blocks. The sub-sampling process (FPS, segmentation) is fast, since these operations are implemented in parallel and take less than 5ms.

# C Potential Societal Impact

Our work shows the possibility of learning dynamics models from raw sensory inputs, opening up opportunities to automate the design of differentiable physics engines through data-driven learning algorithms. The resulting system can potentially benefit many downstream tasks, including general scene understanding, robotic manipulation, the construction of 3D generative models, and inverse tasks like planning/control and inverse design. Furthermore, predictions from our model are highly interpretable, which makes it straightforward to explain model behaviors and re-purpose the outputs for other downstream applications.

Though data-driven approaches are potentially more scalable given enough data, concerns remain that it might be hard to ensure the robustness of the model under sensor noise and adversarial attacks. It is also unclear how to fully mitigate data biases. Therefore, bringing in advanced techniques from ML robustness will be one critical future avenue to pursue.

![](images/dcb7915087941b7f73e618b7cce4267711ed614ce3b585378c59a6a9e60374cb.jpg)
Figure 13: SAM Working on FluidCubeShake: recent large segmentation models can generate good masks for the different objects in the scene.
# D Important Clarifications and Discussions

Q: What is the input of 3D-IntPhys?

Video inputs and object instance masks based on the color assumption.

Q: What is the difference between our work and [34, 35, 16]?

1. Compared with NeRF-dy [34]: our method uses an explicit 3D representation instead of an implicit one, and we show in our paper that our method generalizes better than [34].
2. Compared with VGPL [35]: (1) [35] requires a ground-truth 3D particle set as supervision to train the visual frontend, while our 3D-IntPhys does not need particle-level supervision. (2) In [35], the 3D representation is a particle set, which is an ordered list; as a result, it can only generate a fixed number of points. In contrast, our method produces dense representations that are not limited to ordered sets, making it more flexible and adaptable to systems of varying sizes. (3) [35] assumes a dynamics model learned from simulation as a dynamics prior, while we learn the dynamics model from the data.
3. Compared with Comp-NeRF-dyn [16]: [16] only works on rigid objects and a rope that exhibits only slight deformation; most of the objects have no topological changes, and they are all constrained to move in a 2D plane. So while the object-centric dynamics model used in [16] can solve the tasks in their paper, the object-centric representation is not suitable for learning the complex dynamics of fluids or granular materials, as in our paper. Our settings contain much more diverse 3D dynamics of challenging materials, which cannot be handled by [16].

Q: Is the color segmentation of the fluid objects a reasonable assumption?

It should be noted that the color-based segmentation does not diminish the challenge of learning 3D intuitive physics, since the task focuses on learning complex visual dynamics from images.
We want to emphasize that the work focuses on learning complex visual dynamics from images, as opposed to solving object segmentation in general. Learning fluid dynamics from videos is a challenging task, and there are only a few existing works. NeRF-dy is the closest to ours, yet that model's generalization ability is limited. We have shown in the proposed work that we can significantly improve generalization by operating on a hybrid of implicit and explicit 3D representations, as opposed to purely implicit ones. We agree that object segmentation is a critical visual understanding problem, and solving it is an important next step toward a more general visual dynamics learning framework.

With recent advancements such as SAM [29] and SEEM [71], which focus on segmentation in real-world scenarios, the possibility of video segmentation without the need for annotations has emerged (as shown in Figure 13). This development paves the way for leveraging existing large-scale models to enhance the segmentation pipeline, offering great promise for future applications.

Q: Since the fluid initially has zero velocity, how can we predict the intuitive dynamics?

The intuition is that we can infer the water's movement from the container's movement. We assume that the initial velocity of the water is nearly zero, an assumption also used in [50], so the momentum can be gradually passed from the container to the water.

We propose this assumption so that the intuitive physics model can be learned from (1) particles sampled from the neural radiance field, which are not stable, and (2) point clouds without one-to-one correspondence. The results show that we can learn reasonable dynamics (water poured out from a cup, water falling in the container, cubes moving in water, and granular materials pushed away by a pusher). They also show the potential of distribution-based losses in learning visual dynamics.
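For concreteness, the distribution-based objective is the Chamfer distance used to train the dynamics model, complemented by the spacing loss of Section A.1; a minimal NumPy sketch (our simplification, not the paper's exact implementation):

```python
import numpy as np

def chamfer_distance(pred, target):
    """Symmetric Chamfer distance between two point sets: no correspondence
    needed, each point is matched to its nearest neighbour in the other set."""
    d2 = ((pred[:, None, :] - target[None, :, :]) ** 2).sum(-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def spacing_loss(pred, d_min=0.08, sigma=10.0):
    """Penalize predicted points whose nearest neighbour is closer than d_min,
    discouraging points from sticking too close to each other."""
    d2 = ((pred[:, None, :] - pred[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # ignore self-distances
    nn = np.sqrt(d2.min(axis=1))                 # nearest-neighbour distances
    return sigma * np.maximum(d_min - nn, 0.0).mean()
```

The Chamfer term matches the predicted point distribution to the target one without tracking individual particles; the spacing term regularizes how the predicted points spread out.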
\ No newline at end of file diff --git a/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/images.zip b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3e95c9d585706abfb1a9cd9dab156e757354bc15 --- /dev/null +++ b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc2cb47e0509aa77d598e3bbf608ff1534f77c533766ec891a9e40a802fe0ded +size 1048489 diff --git a/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/layout.json b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..55718bbf36cc47b2658a4a709085bfb8a25639b0 --- /dev/null +++ b/3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9446478059c03bfe8a345964e7801503c38f7efac9d0071b83715670917d46a0 +size 520402 diff --git a/3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_content_list.json b/3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f1527e45c1e310f54ba67ad57846f659fe9ff827 --- /dev/null +++ b/3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9db1ed9c0748e094102ae6e5ec3462c22dfbd68d1e1f2b36503d3e187932ecf +size 69483 diff --git a/3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_model.json 
b/3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_model.json new file mode 100644 index 0000000000000000000000000000000000000000..03670386d320c826f50250b04dcdd8c636eac921 --- /dev/null +++ b/3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c7cd884a1c8536fe90ac6be6afbababddc89b8ff0d54a116c65f75df679e998 +size 87348 diff --git a/3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_origin.pdf b/3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e7349b2a558b71e3593997ea111236dc5c78f4fd --- /dev/null +++ b/3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f1317b63cc8e7995c6b5de12f1e84c75f083dfc41900ec25b2ec7412e46eaac4 +size 4144084 diff --git a/3dllminjectingthe3dworldintolargelanguagemodels/full.md b/3dllminjectingthe3dworldintolargelanguagemodels/full.md new file mode 100644 index 0000000000000000000000000000000000000000..fd9ee7a7f9efb7b90040755ca01d06052a3fe947 --- /dev/null +++ b/3dllminjectingthe3dworldintolargelanguagemodels/full.md @@ -0,0 +1,246 @@ +# 3D-LLM: Injecting the 3D World into Large Language Models + +Yining Hong + +University of California, Los Angeles + +Haoyu Zhen + +Shanghai Jiao Tong University + +Peihao Chen + +South China University of Technology + +Shuhong Zheng + +University of Illinois Urbana-Champaign + +Yilun Du + +Massachusetts Institute of Technology + +Zhenfang Chen + +MIT-IBM Watson AI Lab + +Chuang Gan + +UMass Amherst and MIT-IBM Watson AI Lab + +# Abstract + +Large language models (LLMs) and Vision-Language Models (VLMs) have been proven to excel at multiple tasks, such as commonsense reasoning. 
Powerful as these models can be, they are not grounded in the 3D physical world, which involves richer concepts such as spatial relationships, affordances, physics, layout, and so on. In this work, we propose to inject the 3D world into large language models and introduce a whole new family of 3D-LLMs. Specifically, 3D-LLMs can take 3D point clouds and their features as input and perform a diverse set of 3D-related tasks, including captioning, dense captioning, 3D question answering, task decomposition, 3D grounding, 3D-assisted dialog, navigation, and so on. Using three types of prompting mechanisms that we design, we are able to collect over 1M 3D-language data pairs covering these tasks. To efficiently train 3D-LLMs, we first utilize a 3D feature extractor that obtains 3D features from rendered multi-view images. Then, we use 2D VLMs as our backbones to train our 3D-LLMs. By introducing a 3D localization mechanism, 3D-LLMs can better capture 3D spatial information. Experiments on the held-out evaluation datasets ScanQA, SQA3D, and 3DMV-VQA show that our model outperforms state-of-the-art baselines. In particular, experiments on ScanQA show that our model outperforms state-of-the-art baselines by a large margin (e.g., the BLEU-1 score surpasses the state-of-the-art score by $9\%$). Furthermore, experiments on our held-in datasets for 3D captioning, task composition, and 3D-assisted dialogue show that our model outperforms 2D VLMs. Qualitative examples also show that our model can perform more tasks beyond the scope of existing LLMs and VLMs. Project Page: https://vis-www.cs.umass.edu/3d11m/.

# 1 Introduction

In the past several years, we have witnessed a surge of large language models (LLMs) (e.g., GPT-4 [33]) that excel at multiple tasks, such as communication and commonsense reasoning.
Recent works have explored aligning images and videos with LLMs for a new generation of multi-modal LLMs (e.g., Flamingo [15], BLIP-2 [29]) that equip LLMs with the ability to understand and reason about 2D images. However, as powerful as these models can be in communication and reasoning, they are not grounded in the real 3D physical world, which involves richer concepts such as spatial relationships, affordances, physics, interaction, and so on. Therefore, such LLMs pale in comparison with the robots depicted in sci-fi movies: assistants that can understand 3D environments and perform reasoning and planning based on that 3D understanding.

![](images/14873319919fb6039fa4911382e3ae3f9adc70016587371a27bf881bfd2d4e4f.jpg)
Figure 1: Examples from our generated 3D-language data, which covers multiple 3D-related tasks.

To this end, we propose to inject the 3D world into large language models, and introduce a whole new family of 3D-LLMs that can take 3D representations (i.e., 3D point clouds with their features) as input and perform a series of 3D-related tasks. By taking the 3D representations of scenes as input, LLMs gain twofold advantages: (1) long-term memories about the entire scene can be stored in the holistic 3D representations, instead of episodic partial-view observations; (2) 3D properties such as affordances and spatial relationships can be reasoned about from 3D representations, far beyond the scope of language-based or 2D image-based LLMs.

One major challenge of training the proposed 3D-LLMs lies in data acquisition. Unlike the vast amount of paired 2D image-and-text data on the Internet, the scarcity of 3D data hinders the development of 3D-based foundation models. 3D data paired with language descriptions are even harder to obtain. To address this, we propose a set of unique data generation pipelines that can generate large-scale 3D data paired with language.
Specifically, we make use of ChatGPT [33] and devise three efficient prompting procedures for communication between 3D data and language. In this way, we are able to obtain approximately one million 3D-language data pairs covering a diverse set of tasks, including but not limited to 3D captioning, dense captioning, 3D question answering, 3D task decomposition, 3D grounding, 3D-assisted dialog, navigation, and so on, as shown in Figure 1.

The next challenge resides in how to obtain meaningful 3D features that align with language features for 3D-LLMs. One way is to train 3D encoders from scratch using a contrastive-learning paradigm similar to that used for aligning 2D images and language (e.g., CLIP [36]). However, this paradigm consumes tremendous amounts of data, time, and GPU resources. From another perspective, numerous recent works build 3D features from 2D multi-view images (e.g., ConceptFusion [24], 3D-CLR [20]). Inspired by this, we also utilize a 3D feature extractor that constructs 3D features from the 2D pretrained features of rendered multi-view images. Recently, quite a few visual-language models (e.g., BLIP-2 [29], Flamingo [15]) have utilized 2D pretrained CLIP features for training. Since our extracted 3D features are mapped to the same feature space as the 2D pretrained features, we can seamlessly use 2D VLMs as our backbones and feed in the 3D features for the efficient training of 3D-LLMs.

One crucial aspect of 3D-LLMs, different from vanilla LLMs and 2D VLMs, is that 3D-LLMs are expected to have an underlying 3D spatial sense. Thus, we develop a 3D localization mechanism that bridges the gap between language and spatial locations. Specifically, we append 3D position embeddings to the extracted 3D features to better encode spatial information.
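The position-embedding step just described can be sketched as follows (a hypothetical minimal version using sinusoidal embeddings of normalized coordinates; the actual embedding design in the model may differ):

```python
import numpy as np

def add_position_embeddings(feats, points, num_freqs=8):
    """feats: (N, C) extracted 3D point features; points: (N, 3) coordinates,
    assumed normalized to [0, 1]. Appends sinusoidal embeddings of the 3D
    coordinates so downstream layers can reason about spatial location."""
    freqs = 2.0 ** np.arange(num_freqs)              # (F,)
    ang = points[:, :, None] * freqs * np.pi         # (N, 3, F)
    pe = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)  # (N, 3, 2F)
    pe = pe.reshape(len(points), -1)                 # (N, 3 * 2F)
    return np.concatenate([feats, pe], axis=1)       # (N, C + 3 * 2F)
```

Each point's feature vector grows by $3 \times 2F$ dimensions (sin and cos at $F$ frequencies per axis).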
In addition, we append a series of location tokens to the 3D-LLMs' vocabulary, and localization can be trained by outputting location tokens given the language descriptions of specific objects in the scene. In this way, 3D-LLMs can better capture 3D spatial information.

To sum up, our paper makes the following contributions:

- We introduce a new family of 3D-based large language models (3D-LLMs) that can take 3D points with features and language prompts as input and perform a variety of 3D-related tasks. We focus on tasks beyond the scope of vanilla LLMs or 2D VLMs, such as tasks about holistic scene understanding, 3D spatial relationships, affordances, and 3D planning.
- We devise novel data collection pipelines that can generate large-scale 3D-language data. Based on these pipelines, we collect a dataset of over 1M 3D-language pairs covering a diverse set of 3D-related tasks, including but not limited to 3D captioning, dense captioning, 3D question answering, task decomposition, 3D grounding, 3D-assisted dialog, navigation, and so on.
- We use a 3D feature extractor that extracts meaningful 3D features from rendered multi-view images. We utilize 2D pretrained VLMs as our backbones for efficient training. We introduce a 3D localization mechanism so that the 3D-LLMs better capture 3D spatial information.
- Experiments on the held-out evaluation datasets ScanQA, SQA3D, and 3DMV-VQA show that our model outperforms state-of-the-art baselines. In particular, 3D-LLMs outperform baselines by a large margin on ScanQA (e.g., $9\%$ for BLEU-1 and $10\%$ for CIDEr). Experiments on held-in datasets for 3D captioning, task composition, and 3D-assisted dialogue show that our model outperforms 2D VLMs. Qualitative studies further demonstrate that our model is able to handle a diverse set of tasks.
- We release our 3D-LLMs, the 3D-language dataset, and language-aligned 3D features of the dataset for future research development ${}^{1}$.

# 2 Related Works

Large Language Models.
Our work is closely related to large language models [4, 14, 37, 10, 34] (LLMs) like GPT-3 [4] and PaLM [10], which are able to handle different language tasks with a single model and show strong generalization abilities. These models are typically trained on massive textual data with self-supervised training targets like predicting the next tokens [4, 37] or reconstructing the masked tokens [14, 38]. To better align these LLMs' predictions with human instructions and improve the models' generalization abilities on unseen tasks, a series of instruction tuning methods [35, 42] and datasets [11, 13] have been proposed. In this work, we aim to inject the 3D world into large language models, understanding rich 3D concepts such as spatial relations, affordances, and physics.

Vision-Language Pre-trained Models. Our work is also related to vision-language pre-trained models that connect images and natural language [30, 31, 18, 36, 25]. Some works [36, 25] train models from scratch with massive image-language pairs and apply them to downstream tasks like visual question answering [19, 47], captioning [7], and referring expression comprehension [46] with finetuning. Other researchers have connected pre-trained vision models and pre-trained LLMs

![](images/945f996c9a5c9375481ce577ac0c1f935e212e80b3111af4588e5b2d9ce451d8.jpg)
Figure 2: 3D-language data generation pipelines.

with additional learnable neural modules like perceiver [2] and QFormers [30], leveraging the perception abilities of pre-trained vision models and the reasoning and generalization capacities of LLMs. Inspired by these previous works, we plan to build an AI assistant that can understand the 3D world and perform corresponding 3D reasoning and planning. This is not trivial, and we need to overcome obstacles such as how to handle data sparsity, how to align the 3D world with 2D images, and how to capture 3D spatial information.

3D & Language.
Another line of research that is similar to ours is 3D and language [5, 45, 8, 20, 1, 16, 22, 3]. ScanQA [3] requires a model to answer questions related to the 3D world; ScanRefer [5] asks a model to localize a region that a text expression refers to; 3D captioning [8] tests models' abilities to generate captions describing 3D scenes. However, these 3D tasks and their corresponding models are usually task-specific and can only handle cases within the same distribution as their training sets, without generalization. Different from them, we aim to build a 3D model that can handle different tasks at the same time and enable new abilities like 3D-assisted dialog and task decomposition.

# 3 3D-Language Data Generation

The community has witnessed the proliferation of multi-modal data thanks to easy access to a tremendous amount of 2D image and text pairs on the internet. However, when it comes to 3D-related data, obtaining multimodal resources is not easy, due to not only the scarcity of 3D assets, but also the difficulty of providing language data for 3D assets. There are some existing datasets that contain 3D-language data (e.g., ScanQA [3], ScanRefer [5]). However, they are limited with regard to both quantity and diversity, restricted to only one task per dataset. How to generate a 3D-language dataset that can be utilized for all kinds of 3D-related tasks is well worth delving into.

Inspired by the recent success of large language models like GPT [33], we propose to leverage such models for 3D-language data collection. Specifically, as shown in Figure 2, we have three ways to prompt a text-only GPT for generating data. 1) Boxes-demonstration-instruction based prompting. We input the axis-aligned bounding boxes (AABB) of both the rooms and the objects in the 3D scenes, providing information about the semantics and spatial locations of the scene. We then provide specific instructions to the GPT model to generate diverse data. We give 0-3 few-shot demonstration examples to the GPT model, showing what kind of data it is instructed to generate. 2) ChatCaptioner based prompting. We utilize techniques similar to [48], in which ChatGPT is prompted to ask a series of informative questions about an image and BLIP-2 [29] answers the questions. In order to collect 3D-related data, we first sample several images from different views of a 3D scene. These images are fed into ChatGPT and BLIP-2 to obtain the caption of each image. We then leverage ChatGPT to summarize all these captions, which contain information about different regions, to form a global 3D description of the entire scene. 3) Revision based prompting. It can be used to transfer one type of 3D data to another.

![](images/e2c45a87bfa2e029cfb620bd7b4a8c8cfee73c780487ec6c0ac56607459dcd62.jpg)
Figure 3: Overview of our 3D-LLM framework. The first two columns show our 3D feature extractor. We first render a few multi-view images from the 3D scene, extract 2D dense features, and then construct 3D features from these multi-view images using three kinds of methods. Then, the 3D features and input language prompts are fed into the 3D-LLMs to generate responses.

Given the prompting pipelines, GPT is able to generate various types of 3D-language data, as summarized in Figure 1. More data generation details and prompt designs are shown in the Appendix.

We mainly establish our 3D-language dataset upon several 3D assets:

- Objaverse is a universe of 800K 3D objects. However, since the language descriptions were extracted from online sources and not examined by humans, most objects have very noisy descriptions (e.g., with urls) or no descriptions. We utilize ChatCaptioner based prompting to generate high-quality 3D-related descriptions for the scenes and revision based prompting to generate questions.
- ScanNet [12] is a richly-annotated dataset of approximately 1k 3D indoor scenes.
It provides semantics and bounding boxes of the objects in the scenes.
- Habitat-Matterport (HM3D) [39] is a dataset of 3D environments for embodied AI. HM3DSem [44] further adds semantic annotations and bounding boxes for more than 200 scenes of HM3D. We use the pre-segmented rooms of HM3D in 3D-CLR [20].

# 4 3D-LLM

# 4.1 Overview

In this section, we introduce how we train our 3D-LLMs. We argue that it's hard to train 3D-LLMs from scratch, since our collected 3D-language dataset is still not the size of the billion-scale image-language datasets used to train 2D VLMs. Furthermore, for 3D scenes, there are no available pretrained encoders like those for 2D images (e.g., CLIP ViT encoders). Thus, training 3D-language models from scratch is data-inefficient and resource-heavy. Recently, researchers have proposed to extract 3D features from 2D multi-view images [24, 20]. Using these alignment methods, we can use pretrained image encoders to extract image features and then map the features to the 3D data. Since the pretrained image features serve as inputs to 2D VLMs, the mapped 3D features, which lie in the same feature space, can also be seamlessly fed into pretrained 2D VLMs, which we use as our backbones to train 3D-LLMs. We also propose a 3D localization mechanism to boost the model's ability to capture 3D spatial information. Figure 3 shows our framework.

# 4.2 3D Feature Extractor

The first step of training 3D-LLMs is to build meaningful 3D features that can be aligned with language features. For 2D images, there exist feature extractors like CLIP, which learn visual models from language supervision. These models are pretrained using billion-scale internet data of image-language pairs. It's hard to pre-train such feature learners from scratch, since there are no 3D-language assets comparable to internet-scale image-language pairs in terms of quantity and diversity.
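To make the idea of lifting pretrained 2D features into 3D concrete, here is a minimal NumPy sketch of back-projecting pixel-aligned features through a pinhole camera into a featurized point cloud. This is a generic illustration, not the paper's implementation; the function name, camera conventions, and valid-depth mask are all assumptions.

```python
import numpy as np

def backproject_features(depth, feats, K, cam2world):
    """Lift per-pixel features into 3D. depth: (H, W), feats: (H, W, D),
    K: 3x3 pinhole intrinsics, cam2world: 4x4 camera-to-world pose (assumed)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))          # pixel coordinates
    x = (u - K[0, 2]) * depth / K[0, 0]                     # back-project with depth
    y = (v - K[1, 2]) * depth / K[1, 1]
    pts = np.stack([x, y, depth, np.ones_like(depth)], -1)  # homogeneous camera coords
    pts = pts.reshape(-1, 4) @ cam2world.T                  # move into the world frame
    valid = depth.reshape(-1) > 0                           # drop pixels without depth
    return pts[valid, :3], feats.reshape(-1, feats.shape[-1])[valid]
```

Accumulating the returned (points, features) pairs over all rendered views yields a per-point feature cloud in the same feature space as the 2D encoder.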
In contrast, numerous methods have been proposed to extract 3D features from 2D multi-view images [24, 20, 17, 21]. Inspired by these works, we extract features for 3D points by rendering the 3D scenes from several different views and constructing 3D features from the rendered image features.

We first extract pixel-aligned dense features for the rendered images following [24]. Then, we utilize three methods to construct 3D features from the rendered image features. These methods are designed for different types of 3D data.

- Direct Reconstruction. We directly reconstruct the point cloud from RGB-D images rendered from the 3D data using ground-truth camera matrices. The features are directly mapped to the reconstructed 3D points. This method is suitable for rendered RGB-D data with perfect camera poses and intrinsics.
- Feature Fusion. Similar to [24], we fuse 2D features into 3D maps using gradslam [27]. Different from dense mapping methods, the features are fused in addition to depths and colors. This method is suitable for 3D data with noisy depth-map renderings, or noisy camera poses and intrinsics.
- Neural Field. We utilize [20], which constructs a compact 3D representation using a neural voxel field [40]. Specifically, each voxel in the field has a feature in addition to density and color. We then align the 3D features along the rays with the 2D features at the corresponding pixels using an MSE loss. This method is for 3D data with RGB renderings but no depth data, and noisy camera poses and intrinsics.

In this way, we are able to obtain the $N \times \mathcal{D}_v$-dim 3D features of each 3D scene, where $N$ is the number of points in the point cloud and $\mathcal{D}_v$ is the feature dimension.

# 4.3 Training 3D-LLMs

# 4.3.1 2D VLMs as backbones

In addition to the feature extractor, training 3D-LLMs from scratch is also non-trivial. In fact, according to [29, 15], the training of 2D VLMs only begins to show "signs of life" after consuming half a billion images.
They usually use frozen, pre-trained image encoders such as CLIP to extract features for 2D images. Considering that, with the 3D feature extractor, the 3D features can be mapped into the same feature space as 2D images, it's reasonable to use these 2D VLMs as our backbones.

The perceiver architecture proposed by [23] leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to handle very large inputs of arbitrary size and thus tackle different modalities. This architecture is utilized in VLMs like Flamingo [15]. BLIP-2 [29] also utilizes a similar structure called QFormer. The 2D image features, output from frozen image encoders, are flattened and sent to the perceiver to generate a fixed-size input. Given that our 3D features lie in the same feature space as the 2D features thanks to the 3D feature extractor, and that the perceiver is able to handle inputs of arbitrary size with the same feature dimension, point cloud features of arbitrary size can also be fed into the perceiver. Therefore, we use the 3D feature extractor to extract the 3D features in the same feature space as the features of the frozen image encoders. Then, we use pretrained 2D VLMs as our backbones and input the aligned 3D features to train 3D-LLMs with our collected 3D-language dataset.

# 4.3.2 3D Localization Mechanism

Notice that since the 3D features are reconstructed via 2D pretrained feature extractors that have been aligned with language (e.g., CLIP [36] and EVA-CLIP [41]), localization can be performed by directly calculating the similarity between 3D features and language features. However, apart from building 3D features that can be aligned with language semantics, it's also essential that the model itself captures 3D spatial information. To this end, we propose a 3D localization mechanism that boosts 3D-LLMs' abilities to absorb spatial information.
It consists of two parts:

Augmenting 3D features with position embeddings Besides the 3D features aggregated from 2D multi-view features, we also add position embeddings to the features. Supposing the feature dimension is $\mathcal{D}_v$, we generate sin/cos position embeddings for each of the three dimensions, each with an embedding size of $\mathcal{D}_v / 3$. We concatenate the embeddings of all three dimensions and add them to the 3D features with a weight.

Augmenting LLM vocabularies with location tokens In order to align 3D spatial locations with LLMs, we propose to embed 3D locations in the vocabularies, following [6] and [43]. To be specific, the region to be grounded can be denoted as a sequence of discrete tokens representing the bounding box in the form of an AABB. The continuous corner coordinates of the bounding boxes are uniformly discretized into voxel integers as location tokens $\langle x_{min}, y_{min}, z_{min}, x_{max}, y_{max}, z_{max} \rangle$. After adding these additional location tokens, we unfreeze their weights in the input and output embeddings of the language models.

# 5 Experiments

We first introduce the architecture and the training and evaluation protocols. In Sec 5.1, we analyze the held-out experiments on the ScanQA [3], SQA3D [32], and 3DMV-VQA [20] datasets. Sec 5.2 covers more analysis on held-in evaluation and qualitative examples.

Architecture We experiment with three backbone 2D VLMs for 3D-LLMs: Flamingo 9B, BLIP-2 ViT-g OPT2.7B, and BLIP-2 ViT-g FlanT5-XL. For BLIP-2, during pre-training of the 3D-LLMs, we initialize the model from the BLIP-2 checkpoints released in the LAVIS library [28] and finetune the parameters of the QFormer. The 3D features are 1408-dim features, the same as the EVA-CLIP [41] hidden feature dimension used by BLIP-2. We keep most parts of the LLMs (i.e., OPT and FlanT5) frozen, except the weights for the newly-added location tokens in the input and output embeddings.
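The two parts of the localization mechanism described in Sec. 4.3.2 can be sketched as follows. The sin/cos frequency schedule, the mixing weight, and the number of discretization bins are assumptions made for illustration (the paper does not pin them down), and the function names are hypothetical:

```python
import numpy as np

def pos_embed_3d(xyz, dim, weight=0.1):
    """Sin/cos position embedding of size dim // 3 per axis, concatenated.
    xyz: (N, 3) coordinates normalized to [0, 1]; dim must be divisible by 6."""
    d = dim // 3
    freqs = 10000.0 ** (-np.arange(d // 2) / (d // 2))      # assumed schedule
    parts = []
    for axis in range(3):
        ang = xyz[:, axis:axis + 1] * freqs[None, :]        # (N, d // 2)
        parts.append(np.concatenate([np.sin(ang), np.cos(ang)], axis=-1))
    return weight * np.concatenate(parts, axis=-1)          # added to the 3D features

def bbox_to_location_tokens(aabb, room_aabb, n_bins=256):
    """Uniformly discretize AABB corners into voxel-integer location tokens
    <x_min, y_min, z_min, x_max, y_max, z_max>; the bin count is an assumption."""
    lo, hi = np.asarray(room_aabb[0], float), np.asarray(room_aabb[1], float)
    norm = (np.asarray(aabb, float).reshape(2, 3) - lo) / (hi - lo)
    return np.clip((norm * n_bins).astype(int), 0, n_bins - 1).reshape(-1).tolist()
```

The integer tokens returned by the second function are the ones whose input and output embedding weights are unfrozen during training.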
For Flamingo, we initialize the model from the Flamingo9B checkpoint released in the OpenFlamingo repository [2]. We finetune the parameters of the perceiver, the gated cross-attention layers, and the weights for the additional location tokens in the input and output embeddings. The 3D features are 1024-dim features, the same as the CLIP hidden feature dimension used by Flamingo. For generating class-agnostic (generic) object masks for the 2D pixel-aligned dense feature extraction, we follow [24] and use Mask2Former (M2F) [9] or the Segment Anything Model (SAM) [26].

Training & Evaluation Datasets & Protocols We split our datasets into two genres: held-in and held-out datasets. Specifically, our 3D-language data generation pipeline generates the held-in datasets of multiple tasks. We utilize the training sets of the held-in datasets for pre-training foundation 3D-LLMs, and their validation sets can be used for held-in evaluation. During pre-training, we mix the held-in datasets of all tasks. The models are trained with the standard language modeling loss to output responses. Held-out datasets, on the other hand, are not used in training the foundation 3D-LLMs. We use three held-out 3D question answering datasets for held-out evaluation: ScanQA, SQA3D, and 3DMV-VQA.

# 5.1 Held-Out Evaluation

# 5.1.1 Experiments on ScanQA

We finetune our pretrained 3D-LLMs on the ScanQA dataset and compare with baseline models.

Baselines & Evaluation Metrics We include representative baseline models on the benchmark. ScanQA is the state-of-the-art method on the benchmark; it uses VoteNet to obtain object proposals, which are then fused with language embeddings. ScanRefer+MCAN is a baseline that identifies the referred object, after which the MCAN model is applied to the image surrounding the localized object. VoteNet+MCAN detects objects in a 3D space, extracts their features, and uses them in a standard VQA model.
Notably, these baseline models all extract explicit object representations from a pretrained localization module. In addition to these baselines, we also design several LLM-based baselines. LLaVA is a visual instruction-tuning method that connects a vision encoder and an LLM for general-purpose visual and language understanding. We use its pretrained LLaVA-13B model for zero-shot evaluation on our dataset, with a single random image as input. ULIP encoders + LLMs use existing pre-trained 3D encoders with LLMs, enabling a comparison between 3D pre-trained encoders and 2D encoders for feature encoding. Single Image + Pretrained VLMs use our 2D VLM backbones (i.e., Flamingo and BLIP-2), replace the 3D inputs of 3D-LLMs with single-image features to train the models, and then finetune on the ScanQA dataset. Multi-View Image + Pretrained VLMs use our 2D VLM backbones, replace the 3D inputs of 3D-LLMs with concatenated features of multi-view images to train the models, and then finetune on the ScanQA dataset. We report BLEU, ROUGE-L, METEOR, and CIDEr for robust answer matching. We also use the exact match (EM) metric.
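Of these metrics, exact match is the simplest to state precisely: a prediction scores 1 only if it equals a ground-truth answer after normalization. A minimal sketch (the benchmark's official normalization rules may differ):

```python
def exact_match(prediction, gold_answers):
    """Return 1.0 if the normalized prediction equals any gold answer, else 0.0."""
    def normalize(s):
        # lowercase, strip a trailing period, collapse whitespace (assumed rules)
        return " ".join(s.lower().strip().rstrip(".").split())
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))
```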
| Model | B-1 | B-2 | B-3 | B-4 | METEOR | ROUGE-L | CIDEr | EM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VoteNet+MCAN* | 28.0 | 16.7 | 10.8 | 6.2 | 11.4 | 29.8 | 54.7 | 17.3 |
| ScanRefer+MCAN* | 26.9 | 16.6 | 11.6 | 7.9 | 11.5 | 30.0 | 55.4 | 18.6 |
| ScanQA* | 30.2 | 20.4 | 15.1 | 10.1 | 13.1 | 33.3 | 64.9 | 21.0 |
| LLaVA (zero-shot) | 7.1 | 2.6 | 0.9 | 0.3 | 10.5 | 12.3 | 5.7 | 0.0 |
| ULIP (PointMLP) + flant5 | 18.4 | 7.2 | 2.7 | 1.4 | 7.4 | 18.1 | 26.9 | 7.5 |
| ULIP (PointMLP) + opt | 19.1 | 7.3 | 2.7 | 1.9 | 7.4 | 18.2 | 28.0 | 8.4 |
| ULIP (PointBERT) + flant5 | 29.2 | 17.9 | 10.3 | 6.1 | 11.6 | 28.1 | 50.9 | 14.5 |
| ULIP (PointBERT) + opt | 28.8 | 16.9 | 9.7 | 5.9 | 11.3 | 27.9 | 50.5 | 13.8 |
| flamingo-SingleImage | 23.8 | 14.5 | 9.2 | 8.5 | 10.7 | 29.6 | 52.0 | 16.9 |
| flamingo-MultiView | 25.6 | 15.2 | 9.2 | 8.4 | 11.3 | 31.1 | 55.0 | 18.8 |
| BLIP2-flant5-SingleImage | 28.6 | 15.1 | 9.0 | 5.1 | 10.6 | 25.8 | 42.6 | 13.3 |
| BLIP2-flant5-MultiView | 29.7 | 16.2 | 9.8 | 5.9 | 11.3 | 26.6 | 45.7 | 13.6 |
| 3D-LLM (M2F, flamingo) | 30.3 | 17.8 | 12.0 | 7.2 | 12.2 | 32.3 | 59.2 | 20.4 |
| 3D-LLM (M2F, BLIP2-opt) | 35.9 | 22.5 | 16.0 | 9.4 | 13.8 | 34.0 | 63.8 | 19.3 |
| 3D-LLM (SAM, BLIP2-opt) | 35.0 | 21.7 | 15.5 | 9.5 | 14.0 | 34.5 | 67.1 | 19.8 |
| 3D-LLM (M2F, BLIP2-flant5) | 39.3 | 25.2 | 18.4 | 12.0 | 14.5 | 35.7 | 69.4 | 20.5 |
| 3D-LLM (SAM, BLIP2-flant5) | 37.5 | 24.1 | 17.6 | 12.9 | 15.1 | 37.5 | 74.5 | 21.2 |
Table 1: Experimental results on the ScanQA validation set. * means the model uses explicit object representations. B-1, B-2, B-3, B-4 denote BLEU-1, BLEU-2, BLEU-3, and BLEU-4, respectively. M2F denotes Mask2Former; SAM denotes Segment Anything.
| Method | Format | What | Is | How | Can | Which | Others | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Blind test | SQ → A | 26.75 | 63.34 | 43.44 | 69.53 | 37.89 | 43.41 | 43.65 |
| ScanQA (w/o $s^{\text{txt}}$) | VQ → A | 28.58 | 65.03 | 47.31 | 66.27 | 43.87 | 42.88 | 45.27 |
| ScanQA | VSQ → A | 31.64 | 63.80 | 46.02 | 69.53 | 43.87 | 45.34 | 46.58 |
| ScanQA + aux task | VSQ → AL | 33.48 | 66.10 | 42.37 | 69.53 | 43.02 | 46.40 | 47.20 |
| MCAN | VSQ → A | 28.86 | 59.66 | 44.09 | 68.34 | 40.74 | 40.46 | 43.42 |
| ClipBERT | VSQ → A | 30.24 | 60.12 | 38.71 | 63.31 | 42.45 | 42.71 | 43.31 |
| Unified QA | VSQ → A | 33.01 | 50.43 | 31.91 | 56.51 | 45.17 | 41.11 | 41.00 |
| Unified QA | VSQ → A | 27.58 | 47.99 | 34.05 | 59.47 | 40.91 | 39.77 | 38.71 |
| GPT-3 | VSQ → A | 39.67 | 45.99 | 40.47 | 45.56 | 36.08 | 38.42 | 41.00 |
| GPT-3 | VSQ → A | 28.90 | 46.42 | 28.05 | 40.24 | 30.11 | 36.07 | 34.57 |
| 3D-LLM | VSQ → A | 37.05 | 65.18 | 45.81 | 67.46 | 51.00 | 49.82 | 49.79 |
Table 2: Experimental results on the SQA3D test set. In the Format column, "V" denotes the 3D visual inputs, "S" the situation inputs, and "Q" and "A" the questions and answers, respectively. The columns What through Others break down the test set by question type. Here we use 3D-LLM (SAM, BLIP2-flant5).

Result Analysis We report our results on the ScanQA validation set in Table 1. We observe a significant increase across the evaluation metrics. For example, for BLEU-1, our model outperforms the state-of-the-art ScanQA model by $\sim 9\%$ on the validation set. For CIDEr, we report a $\sim 10\%$ gain over ScanQA, much higher than the other 3D-based baselines. These results show that by injecting 3D information into LLMs, the models can generate answers that are much more similar to the ground-truth answers. Furthermore, 3D-based baselines use object detectors like VoteNet to segment the objects and then send per-object features into their models, whereas our inputs are holistic 3D features without explicit object representations. This shows that our model can perform visual reasoning about objects and their relationships even without explicit object representations. We then examine whether 2D VLMs have the same ability. We find that when taking single-view or multi-view images as inputs, performance drops considerably compared to 3D-LLMs. Notably, multi-view images also contain information about the whole scene, yet they yield much lower performance than 3D-LLMs, probably because the features of multi-view images are disorganized and thus lose 3D-related information.

# 5.1.2 Experiments on SQA3D

SQA3D [32] requires the tested agent to first understand its situation (position, orientation, etc.) in the 3D scene as described by text, then reason about its surrounding environment and answer a question under that situation. We finetune our pretrained 3D-LLMs on the SQA3D dataset and compare with baseline models. We include all baseline models introduced by the original paper.
Specifically, ScanQA+aux task achieves the SOTA performance by adding two auxiliary tasks: predicting the position and rotation of the agent's situation. Table 2 shows the results. We can see that our 3D-LLM outperforms all baseline models by a large margin, even without training with auxiliary tasks and losses.
| Methods | Concept | Counting | Relation | Comparison | Overall |
| --- | --- | --- | --- | --- | --- |
| NS-VQA* | 59.8 | 21.5 | 33.4 | 61.6 | 38.0 |
| 3D-Feature+LSTM | 61.2 | 22.4 | 49.9 | 61.3 | 48.2 |
| 3D-CLR* | 66.1 | 41.3 | 57.6 | 72.3 | 57.7 |
| flamingo-SingleImage | 58.7 | 18.5 | 38.4 | 60.1 | 40.3 |
| flamingo-MultiView | 60.0 | 18.3 | 40.2 | 61.4 | 41.6 |
| BLIP-SingleImage | 58.0 | 20.4 | 42.3 | 62.3 | 43.1 |
| BLIP-MultiView | 61.9 | 21.1 | 48.0 | 62.3 | 47.1 |
| 3D-LLM (M2F, flamingo) | 68.9 | 32.4 | 61.6 | 68.3 | 58.6 |
| 3D-LLM (M2F, BLIP2-opt) | 63.4 | 30.7 | 57.6 | 65.2 | 54.9 |
| 3D-LLM (SAM, BLIP2-opt) | 73.4 | 24.5 | 63.2 | 77.6 | 61.5 |
| 3D-LLM (M2F, BLIP2-flanT5) | 68.1 | 31.4 | 55.1 | 69.7 | 54.6 |
| 3D-LLM (SAM, BLIP2-flanT5) | 76.3 | 30.2 | 64.3 | 80.2 | 64.0 |
Table 3: Experimental results on the 3DMV-VQA dataset. * denotes using explicit object representations and neuro-symbolic reasoning.

# 5.1.3 Experiments on 3DMV-VQA

We finetune our pretrained 3D-LLMs on the 3DMV-VQA dataset and compare with baseline models. We include all baseline models introduced by the original paper. Specifically, 3D-CLR [20] achieves the SOTA performance via neuro-symbolic reasoning based on 3D features.

Result Analysis Table 3 shows the performances on 3DMV-VQA. We can see that 3D-LLMs outperform the state-of-the-art baseline model on the concept and relation question types, as well as in overall performance. Our model also outperforms 3D-Feature+LSTM, demonstrating the power of LLMs over vanilla language models with similar 3D features as inputs. Overall, 3D-based methods outshine their 2D-based counterparts. Our 3D-LLMs outperform their corresponding 2D VLMs with image input, further demonstrating the importance of 3D representations for 3D-LLMs.
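Per-type accuracies such as those reported in Table 3 are simply exact-match accuracy grouped by question type, plus a sample-weighted overall score. A generic sketch (not the benchmark's official evaluation code; names are hypothetical):

```python
from collections import defaultdict

def accuracy_by_type(records):
    """records: iterable of (question_type, prediction, answer) triples.
    Returns ({type: accuracy}, overall accuracy over all samples)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for qtype, pred, ans in records:
        totals[qtype] += 1
        hits[qtype] += int(pred == ans)
    per_type = {t: hits[t] / totals[t] for t in totals}
    overall = sum(hits.values()) / sum(totals.values())  # weighted by sample count
    return per_type, overall
```

Note that the overall score is weighted by the number of samples per type, which is why a model can lead on most types yet trail overall when one frequent type dominates.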
| Tasks | Models | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | ROUGE-L |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 3D Captioning | flamingo-SingleImage | 29.0 | 17.9 | 12.5 | 12.1 | 12.4 | 28.2 |
| | flamingo-MultiView | 29.5 | 18.6 | 13.7 | 12.4 | 14.0 | 29.0 |
| | BLIP2-flant5-SingleImage | 30.3 | 18.3 | 14.5 | 12.0 | 13.1 | 30.9 |
| | BLIP2-flant5-MultiView | 34.4 | 23.9 | 18.0 | 14.1 | 17.5 | 35.7 |
| | 3D-LLM (flamingo) | 36.1 | 24.5 | 18.7 | 15.6 | 17.6 | 35.8 |
| | 3D-LLM (BLIP2-opt) | 35.7 | 26.7 | 20.3 | 15.9 | 18.7 | 40.1 |
| | 3D-LLM (BLIP2-t5) | 39.8 | 31.0 | 24.7 | 20.1 | 17.7 | 42.6 |
| | 3D-LLM (SAM, BLIP2-t5) | 44.5 | 38.6 | 29.5 | 24.2 | 22.1 | 45.4 |
| 3D-assisted Dialog | flant5 | 27.4 | 16.5 | 11.1 | 8.7 | 9.5 | 27.5 |
| | flamingo-SingleImage | 29.4 | 18.7 | 11.3 | 9.4 | 10.0 | 26.8 |
| | flamingo-MultiView | 30.6 | 21.3 | 11.9 | 9.1 | 10.4 | 27.9 |
| | BLIP2-flant5-SingleImage | 28.4 | 17.3 | 10.6 | 9.1 | 10.2 | 27.4 |
| | BLIP2-flant5-MultiView | 32.4 | 20.9 | 12.1 | 9.5 | 11.0 | 29.5 |
| | 3D-LLM (flamingo) | 35.0 | 22.8 | 15.4 | 10.6 | 16.0 | 34.2 |
| | 3D-LLM (BLIP2-opt) | 39.6 | 27.5 | 20.5 | 16.2 | 18.4 | 38.6 |
| | 3D-LLM (BLIP2-flant5) | 39.0 | 27.8 | 21.2 | 16.6 | 18.9 | 39.3 |
| | 3D-LLM (SAM, BLIP2-t5) | 40.5 | 29.4 | 23.9 | 21.4 | 19.6 | 40.8 |
| Task Decomposition | flant5 | 25.5 | 21.1 | 16.7 | 6.0 | 13.9 | 28.4 |
| | flamingo-SingleImage | 31.4 | 23.0 | 18.8 | 7.1 | 15.6 | 30.6 |
| | flamingo-MultiView | 33.1 | 24.7 | 21.4 | 7.3 | 16.1 | 33.2 |
| | BLIP2-flant5-SingleImage | 32.2 | 25.3 | 18.2 | 6.9 | 15.0 | 31.0 |
| | BLIP2-flant5-MultiView | 33.1 | 27.0 | 20.6 | 6.9 | 15.5 | 34.0 |
| | 3D-LLM (flamingo) | 32.9 | 25.6 | 20.2 | 6.4 | 16.0 | 33.5 |
| | 3D-LLM (BLIP2-opt) | 34.1 | 27.7 | 20.8 | 7.6 | 16.5 | 35.4 |
| | 3D-LLM (BLIP2-flant5) | 33.9 | 28.1 | 20.7 | 7.4 | 15.9 | 37.8 |
| | 3D-LLM (SAM, BLIP2-t5) | 31.6 | 22.3 | 17.2 | 8.8 | 14.0 | 38.3 |
Table 4: Experimental results on held-in datasets. 3D-LLMs outperform 2D VLMs.

# 5.2 More Extensive Evaluation

Held-In Evaluation We carry out experiments on held-in datasets of three tasks: 3D captioning, 3D-assisted dialog, and task decomposition. The baselines include the same 2D VLMs as for the held-out evaluation. We add one language-only baseline: FlanT5, which examines LLMs' ability to complete these tasks without any visual input. To evaluate the quality of responses, we include BLEU, ROUGE-L, METEOR, and CIDEr as our metrics. We report the held-in evaluation performances in Table 4. From the table, we can see that 3D-LLMs generate high-quality responses, outperforming both 2D VLMs and language-only LLMs.

Qualitative Examples In Figure 4, we show qualitative examples of 3D-LLM's predictions. We can see that our 3D-LLM is able to perform a variety of tasks.

![](images/aaa8c18d6150002b2e605fe3826c0580ad4a4de65be7875fc5e531fcef710cfd.jpg)
Figure 4: Qualitative examples of 3D-LLM's prediction.

# 6 Conclusion

In this paper, we propose a new family of 3D-LLMs that can take 3D representations as inputs and generate responses. We introduce a series of 3D-language data generation pipelines to generate a dataset of 1M 3D-language pairs to train our 3D-LLMs. Our 3D-LLMs leverage 2D pretrained VLMs as backbones and a novel 3D localization mechanism. Experiments show that our 3D-LLMs outperform state-of-the-art baseline models on the ScanQA dataset and can perform a diverse set of 3D-related tasks. A limitation is that the 3D feature extractor relies on 2D multi-view images, and thus all 3D scenes need to be rendered before they can be used to train 3D-LLMs, which introduces an additional rendering process.

# 7 Acknowledgements

This work was supported by the MIT-IBM Watson AI Lab, DARPA MCS, DSO grant DSOCO21072, and gift funding from MERL, Cisco, Sony, and Amazon.
We would also like to thank the computation support from AiMOS, a server cluster for the IBM Research AI Hardware Center. + +# References + +[1] P. Achlioptas, A. Abdelreheem, F. Xia, M. Elhoseiny, and L. J. Guibas. ReferIt3D: Neural listeners for fine-grained 3D object identification in real-world scenes. In ECCV, 2020. +[2] A. Awadalla, I. Gao, J. Gardner, J. Hessel, Y. Hanafy, W. Zhu, K. Marathe, Y. Bitton, S. Gadre, J. Jitsev, S. Kornblith, P. W. Koh, G. Ilharco, M. Wortman, and L. Schmidt. Openflamingo, Mar. 2023. +[3] D. Azuma, T. Miyanishi, S. Kurita, and M. Kawanabe. ScanQA: 3D question answering for spatial scene understanding. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19107-19117, 2022. +[4] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, pages 1877-1901, 2020. +[5] D. Z. Chen, A. X. Chang, and M. Nießner. ScanRefer: 3D object localization in RGB-D scans using natural language. 16th European Conference on Computer Vision (ECCV), 2020. +[6] T. Chen, S. Saxena, L. Li, D. J. Fleet, and G. E. Hinton. Pix2seq: A language modeling framework for object detection. ArXiv, abs/2109.10852, 2021. +[7] X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollar, and C. L. Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015. +[8] Z. Chen, A. Gholami, M. Nießner, and A. X. Chang. Scan2cap: Context-aware dense captioning in rgb-d scans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3193-3203, 2021. +[9] B. Cheng, A. Choudhuri, I. Misra, A. Kirillov, R. Girdhar, and A. G. Schwing. Mask2former for video instance segmentation. ArXiv, abs/2112.10764, 2021. +[10] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. 
Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. +[11] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. +[12] A. Dai, A. X. Chang, M. Savva, M. Halber, T. A. Funkhouser, and M. Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2432–2443, 2017. +[13] Databricks. Free dolly: Introducing the world's first truly open instruction-tuned llm, 2023. +[14] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. +[15] J.-B. A. et al. Flamingo: a visual language model for few-shot learning. 2022. +[16] M. Feng, Z. Li, Q. Li, L. Zhang, X. Zhang, G. Zhu, H. Zhang, Y. Wang, and A. S. Mian. Free-form description guided 3d visual graph network for object grounding in point cloud. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 3702-3711, 2021. +[17] S. Y. Gadre, M. Wortsman, G. Ilharco, L. Schmidt, and S. Song. CLIP on wheels: Zero-shot object navigation as object localization and exploration. ArXiv, abs/2203.10421, 2022. +[18] T. Gong, C. Lyu, S. Zhang, Y. Wang, M. Zheng, Q. Zhao, K. Liu, W. Zhang, P. Luo, and K. Chen. MultiModal-GPT: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790, 2023. + +[19] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017. +[20] Y. Hong, C. Lin, Y. Du, Z. Chen, J. B. Tenenbaum, and C. Gan. 3D concept learning and reasoning from multi-view images, 2023. +[21] C. Huang, O. Mees, A. Zeng, and W. Burgard. 
# 3D molecule generation by denoising voxel grids

Pedro O. Pinheiro, Joshua Rackers, Joseph Kleinhenz, Michael Maser, Omar Mahmood, Andrew Martin Watkins, Stephen Ra, Vishnu Sresht, Saeed Saremi

Prescient Design, Genentech

# Abstract

We propose a new score-based approach to generate 3D molecules represented as atomic densities on regular grids. First, we train a denoising neural network that learns to map from a smooth distribution of noisy molecules to the distribution of real molecules. Then, we follow the neural empirical Bayes framework [1] and generate molecules in two steps: (i) sample noisy density grids from a smooth distribution via underdamped Langevin Markov chain Monte Carlo, and (ii) recover the "clean" molecule by denoising the noisy grid in a single step. Our method, VoxMol, generates molecules in a fundamentally different way than the current state of the art (i.e., diffusion models applied to atom point clouds).
It differs in terms of the data representation, the noise model, the network architecture and the generative modeling algorithm. Our experiments show that VoxMol captures the distribution of drug-like molecules better than the state of the art, while being faster at generating samples.

# 1 Introduction

Finding novel molecules with desired properties is an important problem in chemistry, with applications in many scientific domains. In drug discovery in particular, standard computational approaches perform some sort of local search—by scoring and ranking molecules—around a region of the molecular space (chosen based on some prior domain knowledge). The space of possible drug-like molecules is prohibitively large (it scales exponentially with molecular size [2, 3], and is estimated to be around $10^{60}$ [4]); therefore, search in this space is very hard. Search-based approaches achieve some success in practice, but have severe limitations: we can only explore very small portions of the molecular space (on the order of billions to trillions of molecules), and these approaches cannot propose new molecules conditioned on some desiderata.

Generative models for molecules have been proposed to overcome these limitations and explore the molecular space more efficiently [5]. These approaches often consider one of the following types of molecule representations: (i) one-dimensional sequences such as SMILES [6] or SELFIES [7] (e.g., [8, 9, 10]), (ii) two-dimensional molecular graphs, where nodes represent atoms or molecular substructures and edges represent bonds between them (e.g., [11, 12, 13, 14]), or (iii) atoms as three-dimensional points in space. Molecules are entities lying in three-dimensional space; therefore, 3D representations are arguably the most complete ones—they contain information about atom types, their bonds and the molecular conformation.
Recent generative models consider molecules as a set of points in 3D Euclidean space and apply diffusion models to them [15, 16, 17, 18, 19, 20]. Point-cloud representations allow us to use equivariant graph neural networks [21, 22, 23, 24, 25]—known to be very effective in molecular discriminative tasks—as the diffusion model's denoising network. However, point-based diffusion approaches have some limitations when it comes to generative modeling. First, the number of atoms in the molecule (i.e., nodes of the 3D graph) to be diffused needs to be known beforehand. Second, atom types and their coordinates have very different distributions (categorical and continuous variables, respectively) and are treated separately. Because a score function is undefined on discrete distributions, some workaround is necessary. Finally, graph networks operate only on nodes and edges (single and pairwise interactions, respectively). Therefore, capturing long-range dependencies over multiple atoms (nodes) can become difficult as the number of atoms increases. This is related to the limitations of the message-passing formalism in graph neural networks [26].

![](images/7c158c828322bfa3fc24c422fdfc3f3f40391e788f24aed57e2c75d20aa65a23.jpg)
Figure 1: Voxelized molecules generated by our model and their corresponding molecular graphs. Left, samples from a model trained on the QM9 dataset ($32^{3}$ voxels). Right, samples from a model trained on GEOM-drugs ($64^{3}$ voxels). In both cases, each voxel is a cubic grid with side length of .25 Å. Each color represents a different atom (and a different channel in the voxel grid). Best seen in the digital version. See the appendix for more generated samples.
Higher-order message passing can alleviate this problem to a degree [27, 28], but these methods come at a significant computational cost and have been limited to third-order models [29] (see the next section for more discussion of the tradeoffs between model expressivity and built-in equivariance).

In this work we introduce VoxMol, a new score-based method to generate 3D molecules. Similar to [33], and unlike most recent approaches, we represent atoms as continuous (Gaussian-like) densities and molecules as a discretization of 3D space on voxel (i.e., a discrete unit of volume) grids. Voxelized representations allow us to use the same type of denoising architectures used in computer vision. These neural networks—the workhorse behind the success of score-based generative models on images, e.g. [34, 35, 36]—are very effective and scale very well with data.

We start by training a neural network to denoise noisy voxelized molecules. Noisy samples are created simply by adding Gaussian noise (with a fixed identity covariance matrix scaled by a large noise level) to each voxel in the molecular grid. This denoising network also parametrizes the score function of the smooth/noisy distribution. Note that, in contrast to diffusion models, the noise process we use here does not displace atoms. Then, we leverage the (learned) denoising network and generate molecules in two steps [1]: (i) (walk) sample noisy density grids from the smooth distribution via Langevin Markov chain Monte Carlo (MCMC), and (ii) (jump) recover "clean" molecules by denoising the noisy grid. This sampling scheme, referred to as walk-jump sampling in [1], has been successfully applied before to 2D natural images [37, 38] and 1D amino acid sequences [39].
Compared to point-cloud diffusion models, VoxMol is simpler to train, does not require knowing the number of atoms beforehand, and does not treat features as different distributions (continuous, categorical and ordinal for coordinates, atom types and formal charge, respectively)—we only use the "raw" voxelized molecule. Moreover, due to its expressive network architecture, our method scales better to large, drug-sized molecules. Figure 1 (and Figures 8 and 9 in the appendix) illustrates voxelized molecules and their corresponding molecular graphs generated by our model, trained on two different datasets. These samples show visually that our model learns the valences of atoms and the symmetries of molecules.

The main contributions of this work can be summarized as follows. We present VoxMol, a new score-based method for 3D molecule generation. The proposed method differs from current approaches—usually diffusion models on point clouds—in terms of the data representation, the noise model, the network architecture, and the generative modeling algorithm. We show in experiments that VoxMol performs slightly worse than the state of the art on a small dataset (QM9 [40]), while outperforming it by a large margin on a challenging, more realistic drug-like molecule dataset (GEOM-drugs [41]).
For instance, [44] proposes a GAN [45] on voxelized electron densities, while [46] leverages voxelized 3D pharmacophore features to train a pocket-conditional model. Similar to these works, our model also relies on discretization of 3D space. Like [33], we use a simple peak detection algorithm to extract atomic coordinates from the generated voxel grids. However, our method differs in the underlying generative modeling, architecture, datasets, input representations and evaluations.

Point cloud-based unconditional generation. Most recent models treat molecules as sets of points, where each node is associated with a particular atom type, its coordinates and potentially extra information like formal charge. Different modeling approaches have been proposed, e.g., [47, 48, 49] utilize autoregressive models to iteratively sample atoms, and [50, 51] use normalizing flows [52]. Hoogeboom et al. [15] propose E(3) Equivariant Diffusion Models (EDM), a diffusion [53]-based approach that performs considerably better than previous models on this task. EDMs learn to denoise a diffusion process (operating on both continuous and categorical data) and generate molecules by iteratively applying the denoising network to an initial noise. Several works have been built on top of EDM [54, 55, 20, 56]. For instance, Xu et al. [56] improve EDM by applying diffusion in a latent space instead of on the atomic coordinates, while MiDi [20] shows that EDM results can be improved by jointly generating the 3D conformation and the connectivity graph of molecules (in this setting, the model has access to both the 3D structure and the 2D connectivity graph).

Conditional 3D molecule generation. A related body of work is concerned with conditional generation. In many cases, conditional generation is built on top of unconditional generation methods.
Some authors propose to predict the 3D structure of a molecule given its molecular graph (the conformer generation task): VAEs [57, 58], normalizing flows [59], reinforcement learning [60], optimal transport [61], autoregressive models [62] and diffusion models [63, 16, 64] have been proposed for this task. Some works [65, 66] condition 3D generation on shape, while other works condition molecule generation on other structures. For instance, [17, 18, 19, 67] adapt (unconditional) diffusion models to condition on protein pockets, while [68] adapts their previous work [33] to condition voxelized structures on protein targets. Finally, [46] proposes a hybrid conditional generation model by modeling fragments/scaffolds with a point-cloud representation, and the 3D target structures and pharmacophore features [69] with voxel grids.

Comparison between voxel and point-cloud representations. Voxels have some advantages and disadvantages compared to point-cloud representations. First, voxels are straightforward generalizations of 2D pixels to 3D space; therefore, we can leverage machinery similar to that used in score-based generative modeling for images. These models are known to perform well and scale nicely with data. Second, message passing on graphs operates on single and pairwise interactions, while convolution filters (and potentially transformer layers applied to regular grids) can capture multiple local interactions by construction (see [70] for a discussion of the many-body representation hypothesis). Third, voxel representations have a higher memory footprint but more regular memory access than point-cloud representations [71]. We note, however, that developing models on drug-sized molecules (that is, molecules with size close to those in GEOM-drugs [41]) with reasonable resolution $(.1\text{--}.2\,\text{Å})$ is possible on current GPU hardware.
Fourth, recovering point coordinates from a discrete grid has no analytical solution; therefore, voxel-based models require an extra step to retrieve atomic coordinates. We show empirically that this is not a problem in practice, as we achieve competitive results even with a very simple peak detection algorithm.

Finally, graph networks are less expressive due to the message-passing formalism [26, 27], but are a better fit for built-in SE(3)-equivariant architectures (e.g., [21, 22, 23, 24, 25]). Rotation-equivariant 3D convolutional networks have been proposed [72, 73, 74], but current models do not scale as well as standard convnets, and it would be a challenge to apply them to drug-sized molecules. Built-in rotation equivariance is a good property to have; however, equivariance can also be learned with strong data augmentation and larger datasets [75, 76, 32]. In fact, concurrently to this work, [77] also shows that a built-in SE(3)-equivariant architecture is not necessary to generate molecules. Our experiments show that an expressive denoiser scales up better, allowing VoxMol to outperform the current state of the art on GEOM-drugs. However, we hope our results motivate exploration of more efficient SE(3)-equivariant convnet architectures.

# 3 Method

We follow previous work (e.g., [78, 33, 70, 79]) and represent atoms as continuous Gaussian-like atomic densities in 3D space, centered around their atomic coordinates. Molecules are voxelized by discretizing the 3D space around the atoms into voxel grids, where each atom type (element) is represented by a different grid channel. See the appendix for more information on how we discretize molecules. This discretization process gives us a dataset of $n$ voxelized molecules $\{x_{i}\}_{i=1}^{n}$, $x_{i} \in \mathbb{R}^{d}$, $d = c \times l^{3}$, where $l$ is the length of each grid edge and $c$ is the number of atom channels in the dataset.
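As a concrete illustration of this discretization, here is a minimal NumPy sketch (the grid size, the Gaussian width `sigma_a`, combining overlapping blobs with a max, and the 26-neighbor peak test are all illustrative choices, not the paper's exact procedure): atoms map to Gaussian blobs in per-element channels, and a simple local-maximum search recovers their voxel positions, in the spirit of Section 3.4.

```python
import numpy as np

def voxelize(atoms, channels=2, l=32, res=0.25, sigma_a=0.5):
    """Map atoms [(channel, x, y, z), ...] (Å, grid-centered coords) to a
    (channels, l, l, l) grid of Gaussian-like densities in [0, 1]."""
    grid = np.zeros((channels, l, l, l))
    ax = (np.arange(l) - l / 2) * res                 # voxel-center coordinates
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    for c, x, y, z in atoms:
        d2 = (X - x) ** 2 + (Y - y) ** 2 + (Z - z) ** 2
        # combine overlapping blobs with a max (one of several possible choices)
        grid[c] = np.maximum(grid[c], np.exp(-d2 / (2 * sigma_a ** 2)))
    return grid

def find_peaks(grid, thresh=0.1):
    """Recover (channel, i, j, k) indices of local maxima above thresh."""
    peaks = []
    n_ch, l = grid.shape[0], grid.shape[1]
    for c in range(n_ch):
        g = np.pad(grid[c], 1)                        # zero-pad the borders
        center = g[1:-1, 1:-1, 1:-1]
        is_max = center >= thresh
        for di in (-1, 0, 1):                          # compare to 26 neighbors
            for dj in (-1, 0, 1):
                for dk in (-1, 0, 1):
                    if di == dj == dk == 0:
                        continue
                    nb = g[1 + di:l + 1 + di, 1 + dj:l + 1 + dj, 1 + dk:l + 1 + dk]
                    is_max &= center >= nb
        for i, j, k in zip(*np.where(is_max)):
            peaks.append((c, int(i), int(j), int(k)))
    return peaks

# Round trip: two atoms in different channels are recovered at their voxels
# (index 16 is the grid center; 1.5 Å / 0.25 Å per voxel = 6 voxels offset).
atoms = [(0, 0.0, 0.0, 0.0), (1, 1.5, 0.0, 0.0)]
g = voxelize(atoms)
print(sorted(find_peaks(g)))  # → [(0, 16, 16, 16), (1, 22, 16, 16)]
```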
Each voxel in the grid can take values between 0 (far from all atoms) and 1 (at the center of an atom). Throughout our experiments, we consider a fixed resolution of .25 Å (we found it to be a good trade-off between accuracy and computation). Therefore, voxel grids occupy a volume of $(l/4)^{3}$ cubic Ångströms.

# 3.1 Background: neural empirical Bayes

Let $p(x)$ be an unknown distribution of voxelized molecules and $p(y)$ a smoother version of it, obtained by convolving $p(x)$ with an isotropic Gaussian kernel with known covariance $\sigma^2 I_d$. Equivalently, $Y = X + N$, where $X \sim p(x)$ and $N \sim \mathcal{N}(0, \sigma^2 I_d)$. Therefore, $Y$ is sampled from

$$
p(y) = \int_{\mathbb{R}^{d}} \frac{1}{(2\pi\sigma^{2})^{d/2}} \exp\left(-\frac{\|y - x\|^{2}}{2\sigma^{2}}\right) p(x)\, dx.
$$

This transformation smooths the density of $X$ while still preserving some of the structural information of the original voxel signals. Robbins [80] showed that if we observe $Y = y$, then the least-squares estimator of $X$ is the Bayes estimator, i.e., $\hat{x}(y) = \mathbb{E}[X \mid Y = y]$. Building on this result, Miyasawa [81] showed that, if the noising process is Gaussian (as in our case), then the least-squares estimator $\hat{x}(y)$ can be obtained purely from the (unnormalized) smoothed density $p(y)$:

$$
\hat{x}(y) = y + \sigma^{2} g(y), \tag{1}
$$

where $g(y) = \nabla_y \log p(y)$ is the score function [82] of $p(y)$. This equation tells us that, if we know $p(y)$ up to a normalizing constant (and therefore the score function associated with it), we can estimate the original signal $x$ only by observing its noisy version $y$. Equivalently, if we have access to the estimator $\hat{x}(y)$, we can compute the score function of $p(y)$ via (1).
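Identity (1) can be sanity-checked in a toy 1-D Gaussian case where both sides are available in closed form (the values of $\mu$, $\tau$, $\sigma$ below are illustrative): for $X \sim \mathcal{N}(\mu, \tau^2)$ the smoothed density is $p(y) = \mathcal{N}(\mu, \tau^2 + \sigma^2)$, so $g(y) = -(y-\mu)/(\tau^2+\sigma^2)$, and the posterior mean $\mathbb{E}[X \mid Y = y] = \mu + \frac{\tau^2}{\tau^2+\sigma^2}(y-\mu)$ coincides with $y + \sigma^2 g(y)$.

```python
import numpy as np

mu, tau, sigma = 1.0, 2.0, 0.9            # prior mean/std and noise level (illustrative)
ys = np.linspace(-5.0, 5.0, 101)          # a grid of noisy observations y

# Score of the smoothed density p(y) = N(mu, tau^2 + sigma^2)
g = -(ys - mu) / (tau**2 + sigma**2)

# Bayes (least-squares) estimator E[X | Y = y], known in closed form here
posterior_mean = mu + tau**2 / (tau**2 + sigma**2) * (ys - mu)

# Identity (1): xhat(y) = y + sigma^2 * g(y)
xhat = ys + sigma**2 * g

print(np.allclose(xhat, posterior_mean))  # → True
```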
Our generative model is based on the neural empirical Bayes (NEB) formalism [1]: we are interested in learning the score function of the smoothed density $p(y)$ and the least-squares estimator $\hat{x}(y)$ from a dataset of voxelized molecules $\{x_i\}_{i=1}^n$ sampled from the unknown $p(x)$. We leverage the (learned) estimator and score function to generate voxelized molecules in two steps: (i) sample $y_k \sim p(y)$ with Langevin MCMC [83], and (ii) generate clean samples with the least-squares estimator. The intuition is that it is much easier to sample from the smooth density than from the original distribution. See Saremi and Hyvärinen [1] for more details.

# 3.2 Denoising voxelized molecules

We parametrize the Bayes estimator of $X$ using a neural network with parameters $\theta$, denoted by $\hat{x}_{\theta}:\mathbb{R}^{d}\to \mathbb{R}^{d}$. Since the Bayes estimator is the least-squares estimator, learning reduces to a least-squares denoising objective:

$$
\mathcal{L}(\theta) = \mathbb{E}_{x \sim p(x),\, y \sim \mathcal{N}(x, \sigma^{2} I_{d})} \|x - \hat{x}_{\theta}(y)\|^{2}. \tag{2}
$$

![](images/a2a5536f5fe6182708a2ede8054c2d6a23720dbc8384ae71c7bb430879061223.jpg)
Figure 2: (a) A representation of our denoising training procedure. Each training sample (i.e., a voxelized molecule) is corrupted with isotropic Gaussian noise with a fixed noise level $\sigma$. The model is trained to recover clean voxel grids from the noisy version. To facilitate visualization, we threshold the grid values, $\hat{x} = \mathbb{1}_{\geq 1}(\hat{x})$. (b) Graphical model representation of the walk-jump sampling scheme. The dashed arrows represent the walk, an MCMC chain used to draw noisy samples from $p(y)$. The solid arrow represents the jump. Both walks and jumps leverage the trained denoising network.
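To see why minimizing (2) yields the Bayes estimator, consider a toy 1-D Gaussian data distribution (purely illustrative; VoxMol's denoiser is a 3D convnet, not a linear model). Restricted to the linear family $\hat{x}_a(y) = a\,y$, the least-squares fit over sampled (clean, noisy) pairs recovers the Bayes-optimal coefficient $\tau^2/(\tau^2+\sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, sigma, n = 1.0, 1.0, 200_000

x = rng.normal(0.0, tau, n)          # "clean" data, x ~ N(0, tau^2)
y = x + rng.normal(0.0, sigma, n)    # noisy observations, fixed noise level

# Least-squares denoiser restricted to xhat(y) = a * y:
#   argmin_a E ||x - a*y||^2   =>   a = E[x*y] / E[y^2]
a = (x * y).sum() / (y * y).sum()

print(a, tau**2 / (tau**2 + sigma**2))   # a ≈ 0.5, the Bayes-optimal coefficient
```

With the fitted denoiser in hand, $(\hat{x}_a(y) - y)/\sigma^2$ approximates the score of the smoothed density, which is the content of (3).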
Using (1), we obtain the following expression for the smoothed score function in terms of the denoising network:

$$
g_{\theta}(y) = \frac{1}{\sigma^{2}} \left(\hat{x}_{\theta}(y) - y\right). \tag{3}
$$

By minimizing the learning objective (2) we learn the optimal $\hat{x}_{\theta}$, and by using (3) we can compute the score function $g_{\theta}(y)\approx \nabla_y\log p(y)$.

We model the denoising network $\hat{x}_{\theta}$ with an encoder-decoder 3D convolutional network that maps every noised voxel on the grid to a clean version of it. Figure 2(a) shows a general overview of the denoising model. The noise level $\sigma$ is kept constant during training and is a key hyperparameter of the model. Note that in the empirical Bayes formalism, $\sigma$ can be any (large) value.

Compared to diffusion models, this training scheme is simpler, as the noise level is fixed during training. VoxMol requires neither noise scheduling nor temporal embeddings in the network layers. We observe empirically that single-step denoising is sufficient to reconstruct voxelized molecules (within the noise levels considered in this paper). Our hypothesis is that this is due to the nature of voxel signals, which contain much more "structure" than "texture" information in comparison to natural images.

# 3.3 Sampling voxelized molecules

We use the learned score function $g_{\theta}$ and the estimator $\hat{x}_{\theta}$ to sample.
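As a minimal illustration of this two-step procedure on a toy 1-D Gaussian, where the smoothed score $g(y) = -y/(\tau^2+\sigma^2)$ is analytic, the sketch below uses a simple Euler discretization of underdamped Langevin dynamics as a stand-in for the scheme used in the paper; all hyperparameter values are illustrative. Note that the drift $+u\,g(y)$ pulls samples toward high-density regions of $p(y)$.

```python
import numpy as np

rng = np.random.default_rng(1)
tau, sigma = 1.0, 1.0                    # data std and noise level (illustrative)
gamma, u, delta = 1.0, 1.0, 0.05         # friction, inverse mass, step size

def g(y):                                # score of p(y) = N(0, tau^2 + sigma^2)
    return -y / (tau**2 + sigma**2)

# Walk: Euler discretization of underdamped Langevin dynamics on p(y)
y, v = 0.0, 0.0
ys = []
for k in range(60_000):
    v += delta * (-gamma * v + u * g(y)) + np.sqrt(2 * gamma * u * delta) * rng.normal()
    y += delta * v
    if k >= 10_000:                      # discard burn-in
        ys.append(y)
ys = np.array(ys)

# Jump: denoise noisy samples in one step, x_k = y_k + sigma^2 * g(y_k)
x = ys + sigma**2 * g(ys)

print(ys.var())   # ≈ tau^2 + sigma^2 = 2, the variance of the smoothed density
```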
We follow the walk-jump sampling scheme [1, 37, 38, 39] to generate voxelized molecules $x_{k}$:

(i) (walk step) To sample noisy voxels from $p(y)$, we consider Langevin MCMC algorithms based on discretizing the underdamped Langevin diffusion [84]:

$$
\begin{aligned}
dv_{t} &= -\gamma v_{t}\, dt + u\, g_{\theta}(y_{t})\, dt + \sqrt{2\gamma u}\, dB_{t}, \\
dy_{t} &= v_{t}\, dt,
\end{aligned} \tag{4}
$$

where $B_{t}$ is the standard Brownian motion in $\mathbb{R}^d$, and $\gamma$ and $u$ are hyperparameters to tune (friction and inverse mass, respectively). We use the discretization algorithm proposed by Sachs et al. [85] to generate samples $y_{k}$, which requires a discretization step $\delta$. See the appendix for a description of the algorithm.

(ii) (jump step) At an arbitrary time step $k$, clean samples can be generated by estimating $X$ from $y_{k}$ with the denoising network, i.e., computing $x_{k} = \hat{x}_{\theta}(y_{k})$.

This approach allows us to approximately sample molecules from $p(x)$ without the need to compute (or approximate) $\nabla_{x}\log p(x)$. In fact, we run MCMC on the smooth density $p(y)$, which is easier to sample from and mixes faster than the original density $p(x)$ [1, 38, 86]. Figure 2(b) shows a schematic representation of the generation process. Following [37], we initialize the chains by adding uniform noise to Gaussian noise (with the same $\sigma$ used during training), i.e., $y_0 = N + U$, $N\sim \mathcal{N}(0,\sigma^2 I_d)$, $U\sim \mathcal{U}_d(0,1)$ (this was observed to mix faster in practice).

![](images/89ad952c0c7fc554a35d4abf9f92cba02ab386b649fcaeb51ca5183bc54d2d30.jpg)
Figure 3: Illustration of a walk-jump sampling chain. We run Langevin MCMC on the noisy distribution (walk) and estimate clean samples with the denoising network at an arbitrary time (jump).

The noise level plays a key role in this sampling framework.
If the noise level is low, denoising (the jump step) becomes easier, with lower variance, while sampling the "less smooth" $p(y)$ (the walk step) becomes harder. If the noise level is high, the opposite is true.

Figure 3 illustrates an example of a walk-jump sampling chain, where the generated molecules change gradually as we walk through the chain (clean samples are shown every ten steps, $\Delta k = 10$). This figure demonstrates the fast-mixing properties of our sampling scheme for generating 3D molecules. For instance, some atoms (or other structures, like rings) may appear, disappear or change as we move through the chain. Interestingly, we observed this behavior in most chains we inspected.

# 3.4 Recovering atomic coordinates from voxelized molecules

It is often useful to extract atomic coordinates from generated voxelized molecules (e.g., to validate atomic valences and bond types, or to compare with other models). We use a very simple algorithm (a simplified version of the approach used in [33]) to recover the set of atomic coordinates from generated voxel grids: first, we set to 0 all voxels with value less than .1, i.e., $x_{k} = \mathbb{1}_{\geq .1}(x_{k})$. Then we run a simple peak detection to locate the voxel at the center of each Gaussian blob (corresponding to the center of each atom). Finally, we run a simple gradient-descent coordinate optimization to find the set of points that best recreates the generated voxelized molecule. Once we have obtained the optimized atomic coordinates, we follow previous work [33, 18, 17, 20] and use standard cheminformatics software to determine the molecule's atomic bonds. Figure 4 shows our pipeline for recovering atomic coordinates and molecular graphs from generated voxelized molecules. See the appendix for more details.

# 4 Experiments

In this section, we evaluate the performance of our model on the task of unconditional 3D molecule generation.
Our approach is the first of its kind, and therefore the objective of our experiments is to show that (i) VoxMol is a feasible approach for unconditional generation (this is non-trivial) and (ii) it scales well with data, beating an established model on a large, drug-like dataset. In principle, VoxMol can be used for guided (or conditional) generation, an arguably more useful application in the molecular sciences (see the appendix for a discussion of how guidance can be used in generation).

We start with a description of our experimental setup, followed by results on two popular datasets for this problem. We then show ablation studies performed on different components of the model.

# 4.1 Experimental setup

Architecture. The denoising network $\hat{x}_{\theta}$ is used in both the walk and jump steps described above; therefore, its parametrization is very important to the performance of this approach. We use a 3D U-Net [87] architecture for our denoising network. We follow the same architecture recipe as DDPM [34], with two differences: we use 3D convnets instead of 2D, and we use fewer channels in all layers. The model has 4 levels of resolution, and we use self-attention at the two lowest resolutions. We augment the dataset during training by applying a random rotation and translation to every training sample. Our models are trained with noise level $\sigma = .9$, unless stated otherwise. We train with batch sizes of 128 and 64 (for QM9 and GEOM-drugs, respectively) and use AdamW [88] (learning rate $2 \times 10^{-5}$, weight decay $10^{-2}$) to optimize the weights. The weights are updated with an exponential moving average with a decay of .999. We use $\gamma = 1.0$, $u = 1.0$ and $\delta = .5$ for all our MCMC sampling. See the appendix for more details on the architecture, training and sampling.

![](images/1eb4021d1b51cdd88968dbf25a2f70f13ce89246131c85dc6033ae1ec645e740.jpg)
Figure 4: Pipeline for recovering atomic coordinates from voxel grids: (i) VoxMol generates voxelized molecules, (ii) atomic coordinates are extracted from the voxel grid with a simple peak detection algorithm, (iii) we use cheminformatics software to add atomic bonds and extract SMILES strings, molecular graphs, etc.

Datasets. We consider two popular datasets for this task: QM9 [40] and GEOM-drugs [41]. QM9 contains small molecules with up to 9 heavy atoms (29 if we count hydrogen atoms). GEOM-drugs contains multiple conformations for 430k drug-sized molecules; its molecules have 44 atoms on average (up to 181 atoms, and over $99\%$ are under 80 atoms). We use grids of dimension $32^3$ and $64^3$ for QM9 and GEOM-drugs, respectively. These volumes cover over $99.8\%$ of all points in both datasets. All our models treat hydrogens explicitly. For QM9, we consider all 5 chemical elements (C, H, O, N and F) present in the dataset. For GEOM-drugs, we consider 8 elements (C, H, O, N, F, S, Cl and Br); we ignore P, I and B, as they appear in less than $1\%$ of the molecules in the dataset. The input voxel grids are thus of dimension $\mathbb{R}^{5\times 32\times 32\times 32}$ and $\mathbb{R}^{8\times 64\times 64\times 64}$ for QM9 and GEOM-drugs, respectively. We perform the same pre-processing and dataset split as [20], ending up with 100K/20K/13K molecules for QM9 and 1.1M/146K/146K for GEOM-drugs (train/validation/test splits, respectively).

Baselines. We compare our method with two state-of-the-art approaches: GSchNet [47], a point-cloud autoregressive model, and EDM [15], a point-cloud diffusion-based model. We note that both methods rely on equivariant networks, while ours does not. Our results could potentially be improved by successfully exploiting equivariant 3D convolutional networks.
We also show results of $\mathrm{VoxMol}_{\mathrm{oracle}}$ in our main results, where we assume we have access to real samples from the noisy distribution: instead of performing MCMC to sample $y_{k}$, we sample molecules from the validation set and add noise to them. This baseline assumes perfect sampling of noisy samples (the walk step) and lets us assess the quality of our model in recovering clean samples. It serves as an upper bound for our model and allows us to disentangle the quality of the walk (sampling noisy samples) and jump (estimating clean molecules) steps.

All methods generate molecules as a set of atom types and their coordinates (in the case of voxelized molecules, we use the post-processing described above to get the atomic coordinates). We follow previous work [33, 18, 17, 20] and use standard cheminformatics software to determine the molecule's atomic bonds given the atomic coordinates. Using the same post-processing for all methods allows a more apples-to-apples comparison of the models.

Metrics. Most metrics we use to benchmark our model come from [20]. We draw 10,000 samples from each method and measure performance with the following metrics: stable mol and stable atom, the percentage of stable molecules and atoms, respectively, as defined in [15]; validity, the percentage of generated molecules that pass RDKit [90]'s sanitization filter; uniqueness, the proportion of valid molecules that have different canonical SMILES; valency $W_{1}$, the Wasserstein distance between the distributions of valencies in the generated and test sets; atoms TV and bonds TV, the total variation
between the distributions of atom types and bond types, respectively; bond length $W_1$ and bond angle $W_1$, the Wasserstein distances between the distributions of bond lengths and bond angles, respectively. Finally, we also report the strain energy metric proposed in [91]. This metric is defined as the difference between the internal energy of the generated molecule's pose and a relaxed pose of the molecule. The relaxation and the energy are computed using the Universal Force Field (UFF) [92] within RDKit. See appendix for more details about the metrics.

| | stable mol %↑ | stable atom %↑ | valid %↑ | unique %↑ | valency $W_1$↓ | atom TV↓ | bond TV↓ | bond len $W_1$↓ | bond ang $W_1$↓ |
|---|---|---|---|---|---|---|---|---|---|
| data | 98.7 | 99.8 | 98.9 | 99.9 | .001 | .003 | .000 | .000 | .120 |
| GSchNet | 92.0 | 98.7 | 98.1 | 94.5 | .049 | .042 | .041 | .005 | 1.68 |
| EDM | 97.9 | 99.8 | 99.0 | 98.5 | .011 | .021 | .002 | .001 | 0.44 |
| VoxMol$_{\text{no rot}}$ | 84.2 (±1.6) | 98.2 (±.3) | 98.1 (±.4) | 77.2 (±1.7) | .043 (±.0) | .171 (±.200) | .050 (±.010) | .007 (±.0) | 3.80 (±.7) |
| VoxMol | 89.3 (±.6) | 99.2 (±.1) | 98.7 (±.1) | 92.1 (±.3) | .023 (±.002) | .029 (±.009) | .009 (±.002) | .003 (±.002) | 1.96 (±.04) |
| VoxMol$_{\text{oracle}}$ | 90.1 | 99.3 | 98.9 | 99.9 | .024 | .009 | .002 | .001 | 0.37 |

Table 1: Results on QM9. We use 10,000 samples from each method. Our results are shown with mean/standard deviation across 3 runs.

# 4.2 Experimental results

Table 1 and Table 2 show results on QM9 and GEOM-drugs, respectively. We report results for models trained with and without data augmentation (VoxMol and VoxMol$_{\text{no rot}}$, respectively) and generate 10,000 samples with multiple MCMC chains. Each chain is initialized with 1,000 warm-up steps, as we observed empirically that this slightly improves the quality of generated samples. Samples are then generated every 500 walk steps (each chain running for at most 1,000 steps after the warm-up). Results for our models are shown with mean/standard deviation across three runs. The row data in both tables corresponds to molecules randomly sampled from the training set.

On QM9, VoxMol performs similarly to EDM on some metrics while performing worse on others (especially molecule stability, uniqueness and bond angles). On GEOM-drugs, a more challenging and realistic drug-like dataset, the results are very different: VoxMol outperforms EDM on eight out of nine metrics, often by a considerably large margin.

Figure 5(a,b) shows the cumulative distribution function (CDF) of strain energies for the generated molecules of different models on QM9 and GEOM-drugs, respectively.
The closer the CDF of a model's generated molecules is to that of the data (samples from the training set), the lower the strain energy of the generated molecules. The ground-truth data has a median strain energy of 43.87 and 54.95 kcal/mol for QM9 and GEOM-drugs, respectively. On QM9, all models have median strain energies in the same ballpark: 52.58, 66.32 and 56.54 kcal/mol for EDM, $\mathrm{VoxMol}_{\mathrm{no rot}}$ and $\mathrm{VoxMol}$, respectively. On GEOM-drugs, the molecules generated by VoxMol have considerably lower median strain energy than those of EDM: 951.23 kcal/mol for EDM versus 286.06 and 171.57 kcal/mol for $\mathrm{VoxMol}_{\mathrm{no rot}}$ and $\mathrm{VoxMol}$.

We observe, as expected, that augmenting the training data with random rotations and translations improves the performance of the model. The improvement is larger on QM9 (the smaller dataset) than on GEOM-drugs. In particular, the augmentations help to capture the distribution of bonds and angles between atoms and to generate more unique molecules. We note that, unlike EDM, our model does not require knowledge of the number of atoms beforehand (neither for training nor for sampling). In fact, Figure 6 shows that our model learns the approximate distribution of the number of atoms per molecule on both datasets. Implicitly learning this distribution can be particularly useful in applications related to in-painting (e.g., pocket conditioning, linking, scaffold conditioning). Finally, our method generates drug-like molecules in fewer iterations and is faster than EDM on average (see Table 3). EDM's sampling time scales quadratically with the number of atoms, while ours is constant in the number of atoms (but scales cubically with the grid dimension).

These results clearly show one of the main advantages of our approach: a more expressive model scales better with data. Architectural inductive biases (such as built-in SE(3) equivariance) are helpful in the setting of small datasets and small molecules.
However, in the large-scale regime, a more expressive model is more advantageous for capturing the modes of the distribution we want to model. Compared to VoxMol$_{\text{oracle}}$ results, we see that VoxMol can still be vastly improved. We can potentially close this gap by improving the quality of the denoising network (e.g., with a better architecture, training on more data, or efficient built-in SE(3)-equivariant CNNs).

![](images/65d4cf3b40d0f0a769c348f1ca0d97acb98c66e2aeb15c054033371fd13de84a.jpg)
![](images/0945746fc003f7553bea40f21301d7893d70cbf2edbd16d9c7bcc5ff4895b2ab.jpg)
Figure 5: The cumulative distribution function of strain energy of generated molecules on (a) QM9 and (b) GEOM-drugs. For each method, we use 10,000 molecules.

| | stable mol %↑ | stable atom %↑ | valid %↑ | unique %↑ | valency $W_1$↓ | atom TV↓ | bond TV↓ | bond len $W_1$↓ | bond ang $W_1$↓ |
|---|---|---|---|---|---|---|---|---|---|
| data | 99.9 | 99.9 | 99.8 | 100. | .001 | .001 | .025 | .000 | .050 |
| EDM | 40.3 | 97.8 | 87.8 | 99.9 | .285 | .212 | .048 | .002 | 6.42 |
| VoxMol$_{\text{no rot}}$ | 44.4 (±.1) | 96.6 (±.1) | 89.7 (±.2) | 99.9 (±.0) | .238 (±.001) | .025 (±.001) | .024 (±.001) | .004 (±.000) | 2.14 (±.02) |
| VoxMol | 75.0 (±.1) | 98.1 (±.3) | 93.4 (±.5) | 99.1 (±.2) | .254 (±.003) | .033 (±.041) | .036 (±.006) | .002 (±.001) | 0.64 (±.13) |
| VoxMol$_{\text{oracle}}$ | 81.9 | 99.0 | 94.7 | 97.4 | .253 | .002 | .024 | .001 | 0.31 |

Table 2: Results on GEOM-drugs. We use 10,000 samples from each method. Our results are shown with mean/standard deviation across 3 runs.

![](images/8d112d6da16be7b5ab3b72c77cfa7bd46f31d32342999ebce6e791f012f1192c.jpg)
![](images/5e2dbd5dc7ea351eb5e4c78761b6ce7f773a664f21015dcbff8dca806a823bd7.jpg)
Figure 6: Empirical distribution of the number of atoms per molecule on QM9 (left) and GEOM-drugs (right). We sample 10,000 molecules from the train set and generate the same number of VoxMol samples.

# 4.3 Ablation studies

Noise level $\sigma$. Unlike diffusion models, the noise level is fixed during training and sampling. It is an important hyperparameter, as it poses a trade-off between the quality of the walk step (Langevin MCMC) and the jump step (empirical Bayes). The ideal noise level is the highest value at which the network can still learn to denoise. We train models on QM9 with $\sigma \in \{.6, .7, \dots, 1.2\}$, while keeping all other hyperparameters the same. Figure 7(a,b,c) shows how the noise level $\sigma$ influences performance on the validation set. While most metrics improve as the noise level increases, others (like molecule stability and valency $W_1$) get worse beyond a certain value. We observe empirically that $\sigma = .9$ is the sweet spot that achieves the best overall performance on the validation set of QM9.

Number of steps $\Delta k$. Table 3 shows how VoxMol's performance on GEOM-drugs changes with the number of walk steps $\Delta k$ in the Langevin MCMC sampling. In this experiment, we use the same trained model and only change the number of steps during sampling.
Results of EDM are also shown for comparison (EDM always requires 1,000 diffusion steps for generation). We see that some metrics barely change, while others improve as $\Delta k$ increases. The average time (in seconds) to generate a molecule increases linearly with the number of steps, as expected. We observe that even with 500 steps, our model is still faster than EDM on average, while achieving better performance on these metrics. Remarkably, with only 50 steps, VoxMol already outperforms EDM on most metrics, while being an order of magnitude faster on average.

![](images/f1bc463bc94fad60062eb43ea09481cbebf55414387912b6a78a8d66a04e1f45.jpg)
![](images/6832e3fcd2fdf2be5b1db0a21a04bcdd22ffd59ef26ef99b1eacd9f7c7670f00.jpg)
![](images/0220035cc944cc2c930740814cdd440a7640943bcb919041dd74a2ccc351ecd5.jpg)
Figure 7: Effect of noise level $\sigma$ on generation quality. Models are trained on QM9 with different noise levels. Each plot shows two metrics: (a) molecule stability and uniqueness, (b) atom and bond TV, (c) valency and bond angle $W_1$.

| $\Delta k$ (n steps) | stable mol %↑ | stable atom %↑ | valid %↑ | unique %↑ | valency $W_1$↓ | atom TV↓ | bond TV↓ | bond len $W_1$↓ | bond ang $W_1$↓ | avg. t (s/mol.)↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | 78.9 | 98.7 | 96.3 | 87.8 | .250 | .073 | .102 | .002 | 1.18 | 0.90 |
| 100 | 78.6 | 98.6 | 95.5 | 94.3 | .256 | .050 | .101 | .002 | 1.62 | 1.64 |
| 200 | 77.9 | 98.4 | 94.4 | 98.6 | .253 | .037 | .104 | .002 | 1.02 | 3.17 |
| 500 | 76.7 | 98.2 | 93.8 | 99.2 | .252 | .043 | .042 | .002 | 0.56 | 7.55 |
| 1,000 | 75.5 | 98.4 | 93.4 | 99.8 | .257 | .029 | .050 | .002 | 0.79 | 14.9 |
| EDM | 40.3 | 97.8 | 87.8 | 99.9 | .285 | .212 | .048 | .002 | 6.42 | 9.35 |

Table 3: Effect of the number of walk steps $\Delta k$ on generation quality on GEOM-drugs (2,000 samples). EDM results are shown for comparison.

Atomic density radii. We also assess how the performance of the model changes with respect to the atomic radii used during the voxelization step (always keeping the grid resolution fixed at .25 Å). See appendix for how this is done. We tried four different values for the radii (the same for all elements): .25, .5, .75 and 1.0. We observe, across different versions of the model with different hyperparameters, that a fixed radius of .5 consistently outperforms the other values. Training does not converge with radius .25, and the quality of generated samples degrades as we increase the radius. We also tried Van der Waals radii (where each atom type has its own radius), but results did not improve either.

# 5 Conclusion

We introduce VoxMol, a novel score-based method for 3D molecule generation. This method generates molecules in a fundamentally different way than the current state of the art (i.e., diffusion models applied to atoms). The noise model used is also novel in the class of score-based generative models for molecules. We represent molecules on regular voxel grids, and VoxMol is trained to predict "clean" molecules from their noised counterparts. The denoising model (which approximates the score function of the smoothed density) is used to sample voxelized molecules with the walk-jump sampling strategy. Finally, atomic coordinates are retrieved by extracting the peaks from the generated voxel grids.
Our experiments show that VoxMol scales better with data and outperforms (by a large margin) a representative state-of-the-art point-cloud diffusion model on GEOM-drugs, while being faster at generating samples.

Broader impact. Generating molecules conditioned on some desiderata can have a huge impact in many different domains, such as drug discovery, biology, materials, agriculture and climate. This work deals with unconditional 3D molecule generation (in a purely algorithmic way): a problem that can be seen as an initial stepping stone (out of many) toward this long-term objective. We, as a society, need to find ways to use these technologies that are safe, ethical, accountable and exclusively beneficial to society. These are important concerns, and they need to be thought about at the same time as we design machine learning algorithms.

Acknowledgements. The authors would like to thank the whole Prescient Design team for helpful discussions and Genentech's HPC team for providing a reliable environment to train/analyse models.

# References

[1] Saeed Saremi and Aapo Hyvarinen. Neural empirical Bayes. JMLR, 2019.
[2] Tobias Fink, Heinz Bruggesser, and Jean-Louis Reymond. Virtual exploration of the small-molecule chemical universe below 160 daltons. Angewandte Chemie International Edition, 2005.
[3] Jiankun Lyu, Sheng Wang, Trent E Balius, Isha Singh, Anat Levit, Yurii S Moroz, Matthew J O'Meara, Tao Che, Enkhjargal Algaa, Kateryna Tolmachova, et al. Ultra-large library docking for discovering new chemotypes. Nature, 2019.
[4] Regine S Bohacek, Colin McMartin, and Wayne C Guida. The art and practice of structure-based drug design: a molecular modeling perspective. Medicinal Research Reviews, 1996.
[5] Camille Bilodeau, Wengong Jin, Tommi Jaakkola, Regina Barzilay, and Klavs F Jensen. Generative models for molecular discovery: Recent advances and challenges. Computational Molecular Science, 2022.
[6] David Weininger.
SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. Journal of Chemical Information and Computer Sciences, 1988.
[7] Mario Krenn, Florian Häse, AkshatKumar Nigam, Pascal Friederich, and Alan Aspuru-Guzik. Self-referencing embedded strings (SELFIES): A $100\%$ robust molecular string representation. Machine Learning: Science and Technology, 2020.
[8] Marwin HS Segler, Thierry Kogej, Christian Tyrchan, and Mark P Waller. Generating focused molecule libraries for drug discovery with recurrent neural networks. ACS Central Science, 2018.
[9] Thomas Blaschke, Marcus Olivecrona, Ola Engkvist, Jürgen Bajorath, and Hongming Chen. Application of generative autoencoder in de novo molecular design. Molecular Informatics, 2018.
[10] Gabriel Lima Guimaraes, Benjamin Sanchez-Lengeling, Carlos Outeiral, Pedro Luis Cunha Farias, and Alán Aspuru-Guzik. Objective-reinforced generative adversarial networks (ORGAN) for sequence generation models. arXiv preprint arXiv:1705.10843, 2017.
[11] Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. In ICML, 2018.
[12] Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, and Peter Battaglia. Learning deep generative models of graphs. arXiv preprint arXiv:1803.03324, 2018.
[13] Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, and Jure Leskovec. GraphRNN: Generating realistic graphs with deep auto-regressive models. In ICML, 2018.
[14] Omar Mahmood, Elman Mansimov, Richard Bonneau, and Kyunghyun Cho. Masked graph modeling for molecule generation. Nature Communications, 2021.
[15] Emiel Hoogeboom, Víctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3D. In ICML, 2022.
[16] Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, and Jian Tang. GeoDiff: A geometric diffusion model for molecular conformation generation. In ICLR, 2022.
[17] Ilia Igashov, Hannes Stärk, Clement Vignac, Victor Garcia Satorras, Pascal Frossard, Max Welling, Michael M Bronstein, and Bruno Correia. Equivariant 3D-conditional diffusion models for molecular linker design. In NeurIPS, AI for Science Workshop, 2022.
[18] Arne Schneuing, Yuanqi Du, Charles Harris, Arian Jamasb, Ilia Igashov, Weitao Du, Tom Blundell, Pietro Lió, Carla Gomes, Max Welling, et al. Structure-based drug design with equivariant diffusion models. arXiv preprint arXiv:2210.13695, 2022.
[19] Gabriele Corso, Hannes Stärk, Bowen Jing, Regina Barzilay, and Tommi S. Jaakkola. DiffDock: Diffusion steps, twists, and turns for molecular docking. In ICLR, 2023.
[20] Clement Vignac, Nagham Osman, Laura Toni, and Pascal Frossard. MiDi: Mixed graph and 3D denoising diffusion for molecule generation. arXiv preprint arXiv:2302.09048, 2023.
[21] Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds. arXiv preprint arXiv:1802.08219, 2018.
[22] Bowen Jing, Stephan Eismann, Patricia Suriana, Raphael JL Townshend, and Ron Dror. Learning from protein structure with geometric vector perceptrons. In ICLR, 2021.
[23] Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In ICML, 2021.
[24] Mario Geiger and Tess Smidt. e3nn: Euclidean neural networks. arXiv preprint arXiv:2207.09453, 2022.
[25] Yi-Lun Liao and Tess Smidt. Equiformer: Equivariant graph attention transformer for 3D atomistic graphs. In ICLR, 2023.
[26] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019.
[27] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In AAAI, 2019.
+[28] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. Neurips, 32, 2019. +[29] Ilyes Batatia, David P Kovacs, Gregor Simm, Christoph Ortner, and Gábor Csányi. MACE: Higher order equivariant message passing neural networks for fast and accurate force fields. Neurips, 35:11423-11436, 2022. +[30] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. +[31] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012. +[32] Nate Gruver, Marc Finzi, Micah Goldblum, and Andrew Gordon Wilson. The Lie derivative for measuring learned equivariance. arXiv preprint arXiv:2210.02984, 2022. +[33] Matthew Ragoza, Tomohide Masuda, and David Ryan Koes. Learning a continuous representation of 3d molecular structures with deep generative models. In Neurips, Structural Biology workshop, 2020. +[34] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In NeurIPS, 2020. +[35] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. NeurIPS, 2021. +[36] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022. +[37] Saeed Saremi and Rupesh Kumar Srivastava. Multimeasurement generative models. ICLR, 2022. +[38] Saeed Saremi, Rupesh Kumar Srivastava, and Francis Bach. Universal smoothed score functions for generative modeling. arXiv preprint arXiv:2303.11669, 2023. 
[39] Nathan C Frey, Dan Berenberg, Joseph Kleinhenz, Isidro Hotzel, Julien Lafrance-Vanasse, Ryan Lewis Kelly, Yan Wu, Arvind Rajpal, Stephen Ra, Richard Bonneau, Kyunghyun Cho, Andreas Loukas, Vladimir Gligorijevic, and Saeed Saremi. Learning protein family manifolds with smoothed energy-based models. In ICLR, Workshop on Physics for Machine Learning, 2023.
[40] Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. MoleculeNet: a benchmark for molecular machine learning. Chemical Science, 2018.
[41] Simon Axelrod and Rafael Gomez-Bombarelli. GEOM, energy-annotated molecular conformations for property prediction and molecular generation. Scientific Data, 2022.
[42] Miha Skalic, José Jiménez, Davide Sabbadin, and Gianni De Fabritiis. Shape-based generative modeling for de novo drug design. Journal of Chemical Information and Modeling, 2019.
[43] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[44] Lvwei Wang, Rong Bai, Xiaoxuan Shi, Wei Zhang, Yinuo Cui, Xiaoman Wang, Cheng Wang, Haoyu Chang, Yingsheng Zhang, Jielong Zhou, et al. A pocket-based 3D molecule generative model fueled by experimental electron density. Scientific Reports, 2022.
[45] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[46] Fergus Imrie, Thomas E Hadfield, Anthony R Bradley, and Charlotte M Deane. Deep generative design with 3D pharmacophoric constraints. Chemical Science, 2021.
[47] Niklas Gebauer, Michael Gastegger, and Kristof Schütt. Symmetry-adapted generation of 3D point sets for the targeted discovery of molecules. In NeurIPS, 2019.
[48] Niklas Gebauer, Michael Gastegger, and Kristof T Schütt. Generating equilibrium molecules with deep neural networks. arXiv preprint arXiv:1810.11347, 2018.
[49] Youzhi Luo and Shuiwang Ji.
An autoregressive flow model for 3d molecular geometry generation from scratch. In ICLR, 2022. +[50] Jonas Kohler, Leon Klein, and Frank Noé. Equivariant flows: exact likelihood generative learning for symmetric densities. In ICML, 2020. +[51] Victor Garcia Satorras, Emiel Hoogeboom, Fabian Fuchs, Ingmar Posner, and Max Welling. E (n) equivariant normalizing flows. In NeurIPS, 2021. +[52] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In ICML, 2015. +[53] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, 2015. +[54] Lei Huang, Hengtong Zhang, Tingyang Xu, and Ka-Chun Wong. Mdm: Molecular diffusion model for 3d molecule generation. arXiv preprint arXiv:2209.05710, 2022. +[55] Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, et al. Diffusion-based molecule generation with informative prior bridges. In NeurIPS, 2022. +[56] Minkai Xu, Alexander Powers, Ron Dror, Stefano Ermon, and Jure Leskovec. Geometric latent diffusion models for 3d molecule generation. In ICML, 2023. +[57] Elman Mansimov, Omar Mahmood, Seokho Kang, and Kyunghyun Cho. Molecular geometry prediction using a deep generative graph neural network. Scientific Reports, 2019. +[58] Gregor NC Simm and José Miguel Hernández-Lobato. A generative model for molecular distance geometry. ICML, 2020. +[59] Frank Noé, Simon Olsson, Jonas Köhler, and Hao Wu. Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. Science, 2019. +[60] Gregor NC Simm, Robert Pinsler, Gábor Csányi, and José Miguel Hernández-Lobato. Symmetry-aware actor-critic for 3d molecular design. arXiv preprint arXiv:2011.12747, 2020. +[61] Octavian Ganea, Lagnajit Pattanaik, Connor Coley, Regina Barzilay, Klavs Jensen, William Green, and Tommi Jaakkola. Geomol: Torsional geometric generation of molecular 3d conformer ensembles. In NeurIPS, 2021. 
+[62] Xingang Peng, Shitong Luo, Jiaqi Guan, Qi Xie, Jian Peng, and Jianzhu Ma. Pocket2mol: Efficient molecular sampling based on 3d protein pockets. In ICML, 2022. +[63] Chence Shi, Shitong Luo, Minkai Xu, and Jian Tang. Learning gradient fields for molecular conformation generation. In ICML, 2021. +[64] Bowen Jing, Gabriele Corso, Jeffrey Chang, Regina Barzilay, and Tommi Jaakkola. Torsional diffusion for molecular conformer generation. arXiv preprint arXiv:2206.01729, 2022. +[65] Siyu Long, Yi Zhou, Xinyu Dai, and Hao Zhou. Zero-shot 3d drug design by sketching and generating. In NeurIPS, 2022. +[66] Keir Adams and Connor W Coley. Equivariant shape-conditioned generation of 3d molecules for ligand-based drug design. arXiv preprint arXiv:2210.04893, 2022. +[67] Jiaqi Guan, Wesley Wei Qian, Xingang Peng, Yufeng Su, Jian Peng, and Jianzhu Ma. 3d equivariant diffusion for target-aware molecule generation and affinity prediction. *ICLR*, 2023. + +[68] Matthew Ragoza, Tomohide Masuda, and David Ryan Koes. Generating 3d molecules conditional on receptor binding sites with deep generative models. Chemical science, 2022. +[69] David Schaller, Dora Šribar, Theresa Noonan, Lihua Deng, Trung Ngoc Nguyen, Szymon Pach, David Machalz, Marcel Bermudez, and Gerhard Wolber. Next generation 3d pharmacophore modeling. Wiley Interdisciplinary Reviews: Computational Molecular Science, 2020. +[70] Raphael JL Townshend, Martin Vögele, Patricia Suriana, Alexander Derry, Alexander Powers, Yianni Laloudakis, Sidhika Balachandar, Bowen Jing, Brandon Anderson, Stephan Eismann, et al. Atom3d: Tasks on molecules in three dimensions. NeurIPS, 2020. +[71] Zhijian Liu, Haotian Tang, Yujun Lin, and Song Han. Point-voxel cnn for efficient 3d deep learning. In NeurIPS, 2019. +[72] Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco S Cohen. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. NeurIPS, 2018. 
+[73] Ivan Diaz, Mario Geiger, and Richard Iain McKinley. An end-to-end se (3)-equivariant segmentation network. arXiv preprint arXiv:2303.00351, 2023. +[74] Jiehong Lin, Hongyang Li, Ke Chen, Jiangbo Lu, and Kui Jia. Sparse steerable convolutions: An efficient learning of se (3)-equivariant features for estimation and tracking of object poses in 3d space. NeurIPS, 2021. +[75] Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In CVPR, 2015. +[76] Diane Bouchacourt, Mark Ibrahim, and Ari Morcos. Grounding inductive biases in natural images: invariance stems from variations in data. In NeurIPS, 2021. +[77] Daniel Flam-Shepherd and Alán Aspuru-Guzik. Language models can generate molecules, materials, and protein binding sites directly in three dimensions as xyz, cif, and pdb files. arXiv preprint arXiv:2305.05708, 2023. +[78] Matthew Ragoza, Joshua Hochuli, Elisa Idrobo, Jocelyn Sunseri, and David Ryan Koes. Protein-ligand scoring with convolutional neural networks. Journal of chemical information and modeling, 2017. +[79] Michael Maser and SE Reisman. 3d computer vision models predict dft-level homo-lumo gap energies from force-field-optimized geometries. ChemRvix, 2021. +[80] Herbert Ellis Robbins. An empirical Bayes approach to statistics. In Proc. 3rd Berkeley Symp. Math. Statist. Probab., 1956, 1956. +[81] Koichi Miyasawa. An empirical bayes estimator of the mean of a normal population. Bull. Inst. Internat. Statist, 1961. +[82] Aapo Hyvarinen. Estimation of non-normalized statistical models by score matching. JMLR, 2005. +[83] Giorgio Parisi. Correlation functions and computer simulations. *Nuclear Physics B*, 1981. +[84] Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, and Michael I. Jordan. Underdamped Langevin MCMC: A non-asymptotic analysis. In $COLT$ , 2018. +[85] Matthias Sachs, Benedict Leimkuhler, and Vincent Danos. 
Langevin dynamics with variable coefficients and nonconservative forces: from stationary states to numerical methods. Entropy, 2017. +[86] Saeed Saremi, Ji Won Park, and Francis Bach. Chain of log-concave Markov chains. arXiv preprint arXiv:2305.19473, 2023. +[87] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015. +[88] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In $ICLR$ , 2019. +[89] Noel M O'Boyle, Michael Banck, Craig A James, Chris Morley, Tim Vandermeersch, and Geoffrey R Hutchison. Open babel: An open chemical toolbox. Journal of cheminformatics, 2011. +[90] Greg Landrum. Rdkit: Open-source cheminformatics software, 2016. + +[91] Charles Harris, Kieran Didi, Arian R Jamasb, Chaitanya K Joshi, Simon V Mathis, Pietro Lio, and Tom Blundell. Benchmarking generated poses: How rational is structure-based drug design with generative models? arXiv preprint arXiv:2308.07413, 2023. +[92] Anthony K Rappé, Carla J Casewit, KS Colwell, William A Goddard III, and W Mason Skiff. Uff, a full periodic table force field for molecular mechanics and molecular dynamics simulations. Journal of the American chemical society, 1992. +[93] Lin Li, Chuan Li, and Emil Alexov. On the modeling of polar component of solvation energy using smooth gaussian-based dielectric function. Journal of Theoretical and Computational Chemistry, 2014. +[94] Gabriele Orlando, Daniele Raimondi, Ramon Duran-Romana, Yves Moreau, Joost Schymkowitz, and Frederic Rousseau. Pyuul provides an interface between biological structures and deep learning algorithms. Nature communications, 2022. +[95] Yuxin Wu and Kaiming He. Group normalization. In ECCV, 2018. +[96] Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 2018. 
[97] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.

# A Extra implementation details

# A.1 Voxel representation

Molecules in our datasets are converted into voxelized atomic densities. For each molecule, we consider a box around its center and divide it into discrete volume elements. We follow [93, 94] and first convert each atom (of each molecule) into a 3D Gaussian-like density:

$$
V_{a}(d, r_{a}) = \exp\left(-\frac{d^{2}}{(.93 \cdot r_{a})^{2}}\right), \tag{5}
$$

where $V_{a}$ is the fraction of volume occupied by atom $a$ of radius $r_a$ at distance $d$ from its center. Although we could consider a different radius for each element, in this work all atoms have the same radius $r_a = .5$ Å. The occupancy of each voxel in the grid is computed by integrating the occupancy generated by every atom in the molecule:

$$
\operatorname{Occ}_{i,j,k} = 1 - \prod_{n=1}^{N_{a}} \left(1 - V_{a_{n}}\left(\left\|C_{i,j,k} - x_{n}\right\|, r_{a_{n}}\right)\right), \tag{6}
$$

where $N_{a}$ is the number of atoms in the molecule, $a_{n}$ is the $n^{\text{th}}$ atom, $C_{i,j,k}$ are the coordinates of voxel $(i,j,k)$ in the grid and $x_{n}$ are the coordinates of the center of atom $n$ [93]. The occupancy takes the maximum value of 1 at the center of an atom and goes to 0 away from it. Each channel is treated independently of the others; channels do not interact nor share volumetric contributions. We use the Python package PyUUL [94] to generate the voxel grids from the raw molecules (.xyz or .sdf format).

We use grids with $32^{3}$ voxels on QM9 and $64^{3}$ on GEOM-drugs and place the molecules at the center of the grid.
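Equations (5) and (6) translate directly into a few lines of NumPy. The sketch below is illustrative (the function name, grid centering and single-channel simplification are our assumptions; the paper builds one such grid per element type):

```python
import numpy as np

def voxelize(coords, radii, grid_size=32, resolution=0.25):
    """Occupancy grid following Eqs. (5)-(6): per-atom Gaussian-like
    densities V_n, combined as Occ = 1 - prod_n (1 - V_n)."""
    # Voxel-center coordinates of a cubic grid centered at the origin.
    axis = (np.arange(grid_size) - (grid_size - 1) / 2.0) * resolution
    X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
    centers = np.stack([X, Y, Z], axis=-1)             # shape (G, G, G, 3)

    one_minus_occ = np.ones((grid_size,) * 3)
    for x_n, r_n in zip(coords, radii):
        d = np.linalg.norm(centers - x_n, axis=-1)     # distance to atom center
        V = np.exp(-d**2 / (0.93 * r_n) ** 2)          # Eq. (5)
        one_minus_occ *= 1.0 - V
    return 1.0 - one_minus_occ                         # Eq. (6)

# Single carbon-like atom at the origin: occupancy approaches 1 near the
# atom center and decays toward 0 with distance.
grid = voxelize(coords=np.zeros((1, 3)), radii=[0.5])
```

Stacking one such grid per chemical element yields the $\mathbb{R}^{5\times 32^3}$ (QM9) or $\mathbb{R}^{8\times 64^3}$ (GEOM-drugs) inputs described above.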
These volumes cover over $99\%$ of all points in the datasets. For QM9, we use all 5 chemical elements present in the dataset (C, H, O, N and F), while for GEOM-drugs, we use 8 (C, H, O, N, F, S, Cl and Br). We model hydrogens explicitly in all our experiments. Finally, the input voxel grids are of dimension $\mathbb{R}^{5\times 32\times 32\times 32}$ and $\mathbb{R}^{8\times 64\times 64\times 64}$ for QM9 and GEOM-drugs, respectively. We augment the dataset during training by applying random rotations and translations to the molecules. For rotation, we sample three Euler angles uniformly in $[0, 2\pi)$ and rotate each training sample. For translation, we randomly shift the center of the molecules along each of the three dimensions by a uniform shift sampled from $[0, .25]$ Å.

# A.2 Architecture

Our neural network architecture follows a standard encoder-decoder convnet design, with a recipe very similar to DDPM [34]. The model uses four levels of resolution: $32^3$ to $4^3$ for the QM9 dataset and $64^3$ to $8^3$ for the GEOM-drugs dataset. The input voxel grid is embedded into a 32-dimensional space with a grid projection layer (a 3D convnet with kernel size $3 \times 3 \times 3$). Each resolution (on both the encoder and decoder) has two convolutional residual blocks. Each block contains a group normalization [95] layer, followed by a SiLU [96] non-linearity and a 3D convnet (kernel size $3 \times 3 \times 3$). All convolutions have stride 1 and we pad the feature maps with 1 on each side. We use self-attention layers between the convolutional layers at the two lowest resolutions. We reduce (respectively, increase) the resolution in the encoder (decoder) with $2 \times 2 \times 2$ max-pooling (bilinear upsampling). The model has skip connections at each resolution that concatenate the encoder feature maps with the decoder feature maps.
We double the number of feature maps at each resolution, except the last, where we quadruple it. VoxMol has approximately 111M parameters. We also implemented a smaller version (with a reduced number of channels per layer) with around 30M parameters. These models achieve performance close to the base model and are faster to train and sample from.

# A.3 Built-in SE(3) equivariance experiments

In early experiments, we attempted to use an SE(3)-equivariant 3D U-Net based on steerable convnets [72] for denoising, but initial experiments were not successful. The hypothesis was that a built-in SE(3)-equivariant version of our model, $\mathrm{VoxMol_{equi}}$, would be advantageous over the non-equivariant version for the task of molecule generation. We start with the official implementation of [73] and tune several network hyperparameters (related to architecture, optimization and training) so that the network is able to achieve good denoising metrics on QM9. We then use the same procedure to generate samples as described in the main paper (only switching the network from the non-equivariant to the equivariant version). We tried different sampling hyperparameters, but we were never able to achieve the same performance as the non-equivariant VoxMol. Table 4 compares the results of the model with and without built-in SE(3) equivariance.
QM9stable mol %↑stable atom %↑valid %↑unique %↑valency W1↓atom TV↓bond TV↓bond len W1↓bond ang W1↓
VoxMol89.399.298.792.1.023.029.009.0031.96
VoxMolequi25.181.895.992.913.2.104.015.0155.31
+ +Table 4: Results on QM9 of our model without (VoxMol) and with (VoxMolequi) built-in SE(3) equivariance. + +There might be many reasons why this is the case: (i) the best reconstruction loss we found with the equivariant model is higher than the non-equivariant (approx. $9.4 \times 10^{-5}$ vs. $5.4 \times 10^{-5}$ MSE on val. set), (ii) the equivariant model needs more capacity to be competitive with the non-equivariant one (currently it has over $90 \times$ fewer parameters), (iii) something in the sampling procedure needs to be different on the equivariant version (unlikely). + +We hypothesize that if an equivariant version of VoxMol achieves similar (or lower) reconstruction loss as the vanilla version, it will probably achieve competitive/better results in the task of molecule generation. Finally, our equivariant implementation is less efficient (around $50 - 60\%$ slower) and consumes more memory than the original version. This poses, therefore, an extra challenge to scale up the size of the dataset and the size of the molecules (e.g., GEOM-drugs requires $64^{3}$ voxel grid). + +# A.4 Training and sampling + +The weights are optimized with batch size 128 and 64 (for QM9 and GEOM-drugs, respectively), AdamW optimizer $(\beta_{1} = .9, \beta_{2} = .999)$ , learning rate of $10^{-5}$ and weight decay of $10^{-2}$ . The models are trained for 500 epochs on QM9 and around 24 epochs on GEOM-drugs. We discretize the underdamped Langevin MCMC (Equation 4) with the algorithm proposed by Sachs et al. [85] (this has been applied on images before [37]). Algorithm 1 describes this process. + +Algorithm 1: Walk-jump sampling [1] using the discretization of Langevin diffusion by [85]. Lines 6-13 correspond to the walk step and line 14 to the jump step. 
+1: Input $\delta$ (step size), $u$ (inverse mass), $\gamma$ (friction), $K$ (steps taken) +2: Input Learned score function $g_{\theta}(y) \approx \nabla_y \log p(y)$ and noise level $\sigma$ +3: Output $\widehat{x}_K$ +4: $y_0 \sim \mathcal{N}(0, \sigma^2 I_d) + \mathcal{U}_d(0, 1)$ +5: $v_0 \gets 0$ +6: for $k = 0, \dots, K-1$ do +7: $y_{k+1} \gets y_k + \frac{\delta}{2} v_k$ +8: $g \gets g_{\theta}(y_{k+1})$ +9: $v_{k+1} \gets v_k + \frac{u\delta}{2} g$ +10: $\varepsilon \sim \mathcal{N}(0, I_d)$ +11: $v_{k+1} \gets \exp(-\gamma\delta) v_{k+1} + \frac{u\delta}{2} g + \sqrt{u(1 - \exp(-2\gamma\delta))}\varepsilon$ +12: $y_{k+1} \gets y_{k+1} + \frac{\delta}{2} v_{k+1}$ +13: end for +14: $\hat{x}_K \gets y_K + \sigma^2 g_{\theta}(y_K)$ + +We use $\gamma = 1.0$ , $u = 1.0$ , $\delta = .5$ for all samplings and we generate multiple chains in parallel (200 chains for QM9 and 100 for GEOM-drugs). We follow [37] and initialize the chains by adding uniform noise to the initial Gaussian noise (with the same $\sigma$ used during training), i.e., $y_0 = \mathcal{N}(0, \sigma^2 I_d) + \mathcal{U}_d(0, 1)$ (this was observed to mix faster in practice). + +All experiments and analysis on this paper were done on A100 GPUs and with PyTorch [97]. The models on QM9 were trained with 2 GPUs and the models on GEOM-drugs on 4 GPUs. + +
dsetcoordinates ref.stable mol %↑stable atom %↑valid %↑unique %↑valency W1↓atom TV↓bond TV↓bond len W1↓bond ang W1↓
QM9-80.598.598.193.3.051.028.005.0082.94
89.399.298.792.1.023.029.009.0031.96
GEOM-73.999.094.798.6.236.030.038.0082.92
74.998.193.499.2.254.033.036.002.63
+ +Table 5: Effect of coordinate refinement on QM9 and GEOM-drugs. We use 10,000 samples from each method. + +# A.5 Recovering atomic coordinates from voxel grid + +Figure 4 shows our pipeline to recover atomic coordinates and molecular graphs from generated voxelized molecules. In the first step, we use the model to "jump" to the data manifold generating a sample in the voxelized representation, $x_{k}$ . We set to 0 all voxels with value less than .1, i.e., $x_{k} = \mathbb{1}_{\geq .1}(x_{k})$ . We then apply a simple peak finding algorithm to find the voxel coordinates corresponding to the peaks in the generated sample. Our peak finding algorithm uses a maximum filter with a $3\times 3\times 3$ stencil to find local maxima. Note that this algorithm always returns points on the voxel grid and is therefore limited by the resolution of the discretization. + +In order to further refine the atomic coordinates we take advantage of the fact that our voxelization procedure is differentiable to perform gradient based optimization of the coordinates. Specifically we use L-BFGS to optimize the atomic coordinates based on the L2 norm of the reconstruction error in the voxel representation. Note, unlike some previous work [33] we perform peak detection and refinement in a single step and do not perform search over multiple possible numbers of atoms or atom identities. + +Table 5 shows the effect of coordinate refinement on molecule generation. We generate molecules on the same setting in the experimental section. + +Once we have obtained the optimized atomic coordinates, we follow previous work [33, 18, 17, 20] and use standard cheminformatics software to determine the molecule's atomic bonds. + +# A.6 Metrics + +Most of the metrics used to benchmark models come from $[20]^8$ . Below we describe the metrics: + +- Atom stability: the percentage of generated atoms with the right valency. 
This metric is computed on the raw 3D sample (before any postprocessing), therefore it is more stringent than validity. +- Molecule stability: the percentage of generated molecules where all its atoms are stable. +- Validity: The percentage of generated molecules that passes RDKit's sanitization filter. +- Uniqueness:. The proportion of valid molecules (defined above) that has a unique canonical SMILES (generated with RDKit) representation. +- Atoms TV: The total variation between the distribution of bond types in the generated and test set. We consider 5 atom types on QM9 and 8 atom types on GEOM-drugs. The histograms $\hat{h}_{\mathrm{atm}}$ and $h_{\mathrm{atm}}$ are generated by counting the number of each atom type on all molecules on both the generated and real sample set. The total variation is computed as: + +$$ +\text {A t o m s} \mathrm {T V} (\hat {h} _ {\mathrm {a t m}}, h _ {\mathrm {a t m}}) = \sum_ {x \in \text {a t o m t y p e s}} | \hat {h} _ {\mathrm {a t m}} (x) - h _ {\mathrm {a t m}} (x) | +$$ + +- Bonds TV: Similar to above, the histograms for real and generated samples are created by counting all bond types on all molecules. The total variation is computed as: + +$$ +\text {B o n d s} \mathrm {T V} (\hat {h} _ {\text {b o n d}}, h _ {\text {b o n d}}) = \sum_ {x \in \text {b o n d t y p e s}} | \hat {h} _ {\text {b o n d}} (x) - h _ {\text {b o n d}} (x) | +$$ + +- Valency $W_{1}$ : This is the weighted sum of the Wasserstein distance between the distribution of valencies for each atom type: + +$$ +\operatorname {V a l e n c y} \mathrm {W} _ {1} (\text {g e n e r a t e d}, \text {t a r g e t}) = \sum_ {x \in \text {a t o m t y p e s}} p (x) W _ {1} \left(\hat {h} _ {\text {v a l}} (x), h _ {\text {v a l}} (x)\right), +$$ + +where $\hat{h}_{\mathrm{val}}(x)$ and $h_{\mathrm{val}}(x)$ are the histogram of valencies for atom type $x$ for generated and holdout set samples, respectively. 
+ +- Bond length $W_{1}$ : The weighted sum of Wasserstein distance between the distribution of bond lengths for each bond type: + +$$ +\text {B o n d L e n W} _ {1} (\text {g e n e r a t e d}, \text {t a r g e t}) = \sum_ {b \in \text {b o n d t y p e s}} p (b) W _ {1} \left(\hat {h} _ {\text {d i s t}} (b), h _ {\text {d i s t}} (b)\right), +$$ + +where $\hat{h}_{\mathrm{dist}}(b)$ and $h_{\mathrm{dist}}(b)$ are the histogram of bond lengths for bond type $b$ , for generated and holdout set samples, respectively. + +- Bond angles $W_{1}$ : The weighted sum of Wasserstein distance between the distribution of bond angles (in degrees) for each atom type in the dataset: + +$$ +\text {B o n d A n g W} _ {1} (\text {g e n e r a t e d}, \text {t a r g e t}) = \sum_ {x \in \text {a t o m t y p e s}} p (x) W _ {1} \left(\hat {h} _ {\text {a n g}} (x), h _ {\text {a n g}} (x)\right), +$$ + +where $\hat{h}_{\mathrm{ang}}(x)$ and $h_{\mathrm{ang}}(x)$ are the histogram of angles for atom type $x$ for generated and holdout set samples, respectively. See [20] for how angles are measured. + +- Strain energy: The strain energy for a generated molecule is computed as the difference between the energy on the generated pose and the energy of a relaxed position. The relaxation and the energy are computed using UFF provided by RDKit. We use [91]'s implementation9. + +# A.7 Guiding the generation process + +Like diffusion models, our method also leverages (learned) score functions and relies on Langevin MCMC for sampling. Therefore, in theory we can condition VoxMol similarly to how it is done in diffusion models: by constraining the score function as we walk through the MCMC chain. In the case of diffusion models, the score function of all steps is constrained to guide the transition steps from noise to a (conditioned) sample. 
In VoxMol, the constrained score function would affect the "walk steps" (the Langevin MCMC steps): it would restrict the region where the chain samples noisy molecules $y$ to $p(y|c)$ (instead of $p(y))$ , $c$ is the condition (e.g., gradient of a classifier). The "jump step" (a forward pass of the denoising network over the noised molecules) is independent of the condition and remains unchangeable. + +Many of the innovations on conditioning diffusion models come from computer vision, where U-nets are usually used. Since VoxMol has the same architecture (albeit 3D instead of 2D), many of the conditioning techniques/tricks used in images may be more easily transferable. For example, we could in principle use the gradient of a classifier (trained jointly) to guide the sampling (using the same trick as in Dhariwal and Nichol [35]) or adapt gradient-free guidance ([34]). Pocket conditioning could also be possible, as in e.g., [18, 67], but utilizing voxel representations instead of point clouds and neural empirical Bayes instead of diffusion models. In-painting has also proven to work very well in 2D U-Nets, so it could potentially work with 3D U-Nets as well. These in-painting techniques could also be leveraged in the context of molecule generation on voxel grids, e.g., for linker generation, scaffold/fragment conditioning. + +# B Generated samples + +![](images/53e6f4723cb233a46bef5bccba06f437f3cd1ef6d08277401a515227355ef5e5.jpg) +Figure 8: Random generated samples from VoxMol trained on QM9 (passing RDKit's sanitization). Molecular graphs are generated with RDKit. + +![](images/b2615d2eeafb2711b9624e7502ff9460cf69c07466840efb4b617b74ec444682.jpg) +Figure 9: Random generated samples from VoxMol trained on GEOM-drugs (passing RDKit's sanitization). Molecular graphs are generated with RDKit. 
\ No newline at end of file diff --git a/3dmoleculegenerationbydenoisingvoxelgrids/images.zip b/3dmoleculegenerationbydenoisingvoxelgrids/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..88bedc1ce053848aae8a511872366dfa01d960d9 --- /dev/null +++ b/3dmoleculegenerationbydenoisingvoxelgrids/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4859b66ef363115ec285164e2ddbb61c9ef1081b6345a59cda1835afb556d743 +size 828147 diff --git a/3dmoleculegenerationbydenoisingvoxelgrids/layout.json b/3dmoleculegenerationbydenoisingvoxelgrids/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..880f60841178bbfa2e1627baf05b43a44591304c --- /dev/null +++ b/3dmoleculegenerationbydenoisingvoxelgrids/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dcf2b7085c815abf80fb6f178013f56fd686221f525076698d75d46fa4443c32 +size 628472 diff --git a/4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_content_list.json b/4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a932f295d27ca20a7c185ab38258ef381458ee6b --- /dev/null +++ b/4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5917ac3ff88ec36c19f0205f1b24a00009dd5280c90f6e00a60ad595099fc1d7 +size 78199 diff --git a/4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_model.json b/4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_model.json new file mode 100644 index 0000000000000000000000000000000000000000..8f4428621de28047557ec5efc5d11ca90be21436 --- /dev/null +++ b/4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:63ca9ca1177cfc6acfb7a49f762a12e0ffbfd2ae5585e5764a07694fe4bfce5a +size 101588 diff --git a/4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_origin.pdf b/4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1be32064ea9ed16121964cc32d87a442b9ad48d4 --- /dev/null +++ b/4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:479a0ced364432246b62a72b83f6c31210114666b329d7110c4cc73593a0d3ec +size 13878005 diff --git a/4dpanopticscenegraphgeneration/full.md b/4dpanopticscenegraphgeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..ea3f1c97d3307f783f435297e0d1a1f6a7db8b5c --- /dev/null +++ b/4dpanopticscenegraphgeneration/full.md @@ -0,0 +1,270 @@ +# 4D Panoptic Scene Graph Generation + +Jingkang Yang $^{1}$ , Jun Cen $^{2}$ , Wenxuan Peng $^{1}$ , Shuai Liu $^{3}$ , Fangzhou Hong $^{1}$ , Xiangtai Li $^{1}$ , Kaiyang Zhou $^{4}$ , Qifeng Chen $^{2}$ , Ziwei Liu $^{1}$ + +$^{1}$ S-Lab, Nanyang Technological University + +2The Hong Kong University of Science and Technology + +$^{3}$ Beijing University of Posts and Telecommunications $^{4}$ Hong Kong Baptist University + +https://github.com/Jingkang50/PSG4D + +![](images/75b3f03e48249328a44860e9db03098019c8e01d21180b72ac08ca0bb9900481.jpg) +(a) Visual Input from the 4D Dynamic World +(b) PSG-4D: 4D Panoptic Scene Graph + +![](images/a345ccd293fe97b6f5b9ee14f8490bcb7d0d836e161aed5751ca06bba0bd808c.jpg) +Figure 1: Conceptual illustration of PSG-4D. PSG-4D is essentially a spatiotemporal representation capturing not only fine-grained semantics in image pixels (i.e., panoptic segmentation masks) but also the temporal relational information (i.e., scene graphs). 
In (a) and (b), the model abstracts information streaming in RGB-D videos into (i) nodes that represent entities with accurate location and status information and (ii) edges that encapsulate the temporal relations. Such a rich 4D representation serves as a bridge between the PSG-4D system and a large language model, which greatly facilitates the decision-making process, as illustrated in (c). + +![](images/6ae545ca6553d8a70f0c60525f0c22ae60120edcace9f5b3975a574c14b3390a.jpg) +(c) Reasoning & Planning + +![](images/feb383c0baa9e12c71c71afc83e10627a5bf9b6962a6767a26eec527f606fae7.jpg) + +# Abstract + +We are living in a three-dimensional space while moving forward through a fourth dimension: time. To allow artificial intelligence to develop a comprehensive understanding of such a 4D environment, we introduce 4D Panoptic Scene Graph (PSG-4D), a new representation that bridges the raw visual data perceived in a dynamic 4D world and high-level visual understanding. Specifically, PSG-4D abstracts rich 4D sensory data into nodes, which represent entities with precise location and status information, and edges, which capture the temporal relations. To facilitate research in this new area, we build a richly annotated PSG-4D dataset consisting of 3K RGB-D videos with a total of 1M frames, each of which is labeled with 4D panoptic segmentation masks as well as fine-grained, dynamic scene graphs. To solve PSG-4D, we propose PSG4DFormer, a Transformer-based model that can predict panoptic segmentation masks, track masks along the time axis, and generate the corresponding scene graphs via a relation component. Extensive experiments on the new dataset show that our method can serve as a strong baseline for future research on PSG-4D. In the end, we provide a real-world application example to demonstrate how we can achieve dynamic scene understanding by integrating a large language model into our PSG-4D system. 
+ +# 1 Introduction + +The emergence of intelligent agents, autonomous systems, and robots demands a profound understanding of real-world environments [1, 2, 3, 4, 5, 6]. This understanding involves more than just recognizing individual objects - it requires an intricate understanding of the relationships between these objects. In this context, research on Scene Graph Generation (SGG) [7], has sought to provide a more detailed, relational perspective on scene understanding. In this approach, scene graphs represent objects as nodes and their relationships as edges, offering a more comprehensive and structured understanding of the scene [8, 7, 9, 10, 11]. Panoptic Scene Graph Generation (PSG) [12] expands the scope of SGG to encompass pixel-level precise object localization and comprehensive scene understanding, including background elements. Then PSG has been further extended to the domain of videos [13] with the inspiration from Video Scene Graph Generation (VidSGG) [14, 15]. + +The utility of scene graphs also extends into the realm of 3D perception, introducing the concept of 3D Scene Graphs (3DSG) [16, 17]. 3DSGs offer a precise representation of object locations and inter-object relationships within three-dimensional scenes [18, 19]. Despite these developments, the existing approaches have not fully integrated dynamic, spatio-temporal relationships, particularly those involving human-object and human-human interactions. Consider Figure 1 as an illustrative example. Traditional 3D scene graph methods may recognize the static elements of this scene, such as identifying a booth situated on the ground. However, a more ideal, advanced, and dynamic perception is required for real-world scenarios. For instance, a system should be capable of identifying a dynamic event like a person who has fallen off their bike, so that it could then comprehend the necessity to offer assistance, like helping the person stand up and stabilize their bike. 
+ +Therefore, our work takes a significant step towards a more comprehensive approach to sensing and understanding the world. We introduce a new task, the 4D Panoptic Scene Graph (PSG-4D), aiming to bridge the gap between raw visual inputs in a dynamic 4D world and high-level visual understanding. PSG-4D comprises two main elements: nodes, representing entities with accurate location and status information, and edges, denoting temporal relations. This task encapsulates both spatial and temporal dimensions, bringing us closer to a true understanding of the dynamic world. + +To facilitate research on this new task, we contribute an extensively annotated PSG-4D dataset that is composed of 2 sub-sets, PSG4D-GTA and PSG4D-HOI. The PSG4D-GTA subset consists of 67 RGB-D videos with a total of 28K frames, selected from the SAIL-VOS 3D dataset [20] collected from the video game Grand Theft Auto V (GTA-V) [21]. The PSG4D-HOI subset is a collection of 3K egocentric real-world videos sampled from the HOI4D dataset [22]. All frames in either of the subset are labeled with 4D panoptic segmentation masks as well as fine-grained, dynamic scene graphs. We believe this dataset will serve as a valuable resource for researchers in the field. + +To tackle this novel task, we propose a unified framework called PSG4DFormer. This unified structure encapsulates two primary components: a 4D Panoptic Segmentation model and a Relation model. The 4D Panoptic Segmentation model is designed to accommodate both RGB-D and point cloud data inputs, yielding a 4D panoptic segmentation. This output comprises 3D object masks, which are continuously tracked across temporal dimensions. Then, the Relation model accepts these 3D mask tubes and utilizes a spatial-temporal transformer architecture to delineate long-term dependencies and intricate inter-entity relationships, subsequently yielding a relational scene graph. 
Through extensive experiments, we demonstrate the effectiveness of the proposed PSG-4D task and the PSG4DFormer model. Our work constitutes a pivotal step towards a comprehensive understanding of dynamic environments, setting the stage for future research in this exciting and crucial area of study. + +In summary, we make the following contributions to the community: + +- A New Task: We propose a novel scene graph generation task focusing on the prediction of 4D panoptic scene graphs from RGB-D or point cloud video sequences. +- A New Dataset: We provide a PSG-4D dataset, which covers diverse viewpoints: (i) a third-view synthetic subset (PSG4D-GTA) and (ii) an egocentric real-world subset (PSG4D-HOI). +- A Unified Framework: We propose a unified two-stage model composed of a feature extractor and a relation learner. In addition, we offer demo support for both synthetic and real-world scenarios to facilitate future research and real-world applications. +- Open-Source Codebase: We open-source our codebase to facilitate future PSG-4D research. + +# 2 Related Work + +Scene Graph Generation (SGG) SGG transforms an image into a graph, where nodes represent objects and edges represent relationships [7]. Several datasets [23] and methods, including two-stage [8, 7, 9, 10, 11] and one-stage models [24, 12, 25], have been developed for SGG. Video scene graph generation (VidSGG) extends SGG to videos with notable datasets [14, 15, 26]. Despite progress, limitations remain in SGG and VidSGG due to noisy grounding annotations caused by coarse bounding box annotations and trivial relation definitions. Recent work on panoptic scene graph generation (PSG) [12, 27, 28, 29, 30, 31] has attempted to overcome these issues, and PVSG [13, 32] further extends it into the video domain. This paper presents an extension of PSG into a 4D dynamic world, meeting the needs of active agents for precise location and comprehensive scene understanding. 
+ +3D Scene Graph Generation 3D Scene Graphs (3DSGs) offer a precise 3D representation of object locations and inter-object relationships, making them a vital tool for intelligent agents operating in real-world environments [16, 17]. 3DSGs can be categorized into flat and hierarchical structures [33]. The former represents objects and relationships as a simple graph [18, 19], while the latter layers the structures of 3D scenes [34, 35]. Recent 3DSG techniques [19] employ PointNet [36] with 3D object detectors on point clouds or RGBD scans, generating 3D graphs via graph neural networks [18]. Some settings, such as Kimera [37], emphasize pairwise spatiotemporal status to facilitate task planning, while incremental 3DSG necessitates agents to progressively explore environments [38]. However, these graphs largely represent positional relations, lacking dynamic spatiotemporal relations like human-object interactions and human-human relations. + +4D Perception Research on 4D perception can be divided by the specific data format they use. The first one is RGB-D video, which can be easily obtained using cheap sensors, e.g. Kinect, and iPhone. With the additional depth data, more geometric and spatial information can be used for reliable and robust detection [39, 40, 41] and segmentation [42, 43, 44, 45]. For RGB-D video, the depth input is usually treated like images. But for point clouds video, 3D or higher dimension convolutions [46, 47, 48, 49] are more commonly used, especially on LiDAR point cloud videos for autonomous driving perception system. In this work, beyond the 4D panoptic segmentation, we focus on more daily scenes and pursue a more high-level and structured understanding of 4D scenes by building 4D scene graphs. + +# 3 The PSG-4D Problem + +The PSG-4D task is aimed at generating a dynamic scene graph, which describes a given 4D environment. In this context, each node corresponds to an object, while each edge represents a spatial-temporal relation. 
The PSG-4D model ingests either an RGB-D video sequence or a point cloud video sequence, subsequently outputting a PSG-4D scene graph $\mathbf{G}$ . This graph is composed of 4D object binary mask tubes $\mathbf{M}$ , object labels $\mathbf{O}$ , and relations $\mathbf{R}$ . + +The object binary mask tubes, $\mathbf{m}_i\in \{0,1\}^{T\times H\times W\times 4}$ , express the 3D location and extent of the tracked object $i$ over time $(T)$ in the case of an RGB-D sequence input, while $\mathbf{m}_i\in \{0,1\}^{T\times M\times 6}$ is used for point cloud video inputs. Here, 4 denotes RGB-D values, and 6 represents XYZ plus RGB values. M stands for the number of point clouds of interest. The object label, $o_i\in \mathbb{C}^O$ , designates the category of the object. The relation $r_i\in \mathbb{C}^R$ represents a subject and an object linked by a predicate class and a time period. $\mathbb{C}^O$ and $\mathbb{C}^R$ refer to the object and predicate classes, respectively. The PSG-4D task can be mathematically formulated as: + +$$ +\Pr (\mathbf {G} \mid \mathbf {I}) = \Pr (\mathbf {M}, \mathbf {O}, \mathbf {R} \mid \mathbf {I}), \tag {1} +$$ + +where $\mathbf{I}$ represents the input RGB-D video sequence or point cloud representation. + +Evaluation Metrics For evaluating the performance of the PSG-4D model, we employ the R@K and mR@K metrics, traditionally used in the scene graph generation tasks. R@K calculates the triplet recall, while mR@K computes the mean recall, both considering the top K triplets from the PSG-4D model. A successful recall of a ground-truth triplet must meet the following criteria: 1) correct category labels for the subject, object, and predicate; 2) a volume Intersection over Union (vIOU) greater than 0.5 between the predicted mask tubes and the ground-truth tubes. When these criteria are satisfied, a soft recall score is recorded, representing the time vIOU between the predicted and the ground-truth time periods. 
+ +Table 1: Illustration of the PSG-4D dataset and related datasets. Unlike the static 3D indoor scenes usually found in 3DSG datasets, the PSG-4D dataset introduces dynamic 3D videos, each annotated with panoptic segmentation. Various 3D video datasets were evaluated as potential sources for PSG-4D, resulting in the creation of two subsets: PSG4D-GTA and PSG4D-HOI. Regarding annotations, PS represents Panoptic Segmentation, BB represents Bounding Box, SS represents Semantic Segmentation, KP represents key points, and PC represents point clouds. TPV represents third-person-view. + +
DatasetTypeScaleView#ObjCls#RelClsAnnotationYear
3DSSG [18]3DSG363K RGB-D images, 1482 scans, 478 scenes,TPV534403D model, 3D graph2020
Rel3D [50]3DSG27K RGB-D images, 9990 3D ScenesTPV67303D model2020
ScanNet [51]3D Images2.5M RGB-D images, 1513 indoor scenesTPV20-SS, 3D model2017
Matterport 3D [52]3D Images194,400 RGB-D images, 90 building-scale scenesTPV40-SS, 3D model2017
Nuscenes [53]2D Video+PC1K videos (avg. 20s), 1.3M pointcloudsVehicle23-3D BB2020
WAYMO [54]2D Video+PC1.2K videos (avg. 20s), 177K pointcloudsVehicle20-2D BB, 3D BB2020
Sail-VOS 3D [55]3D Video484 videos, 238K RGB-D image, 6807 clipsegocentric178-SS, 3D model2021
HOI4D [56]3D Video4K videos, 2.4M RGB-D image, 610 indoor scenesegocentric1611PS, KP2022
EgoBody [57]3D Video125 videos, 199K RGB-D images, 15 indoor scenesegocentric, TPV36133D model, KP2022
PSG4D-GTAPSG4D67 videos (avg. 84s), 28K RGB-D images, 28.3B pointcloudsTPV3543PS, 4DSG2023
PSG4D-HOIPSG4D2973 videos (avg. 20s), 891K RGB-D images, 282 indoor scenesegocentric4615PS, 4DSG2023
+ +# 4 The PSG-4D Dataset + +This section outlines the development of the PSG-4D dataset. We begin by exploring existing datasets that inspired the creation of PSG-4D, followed by a presentation of its statistics, and finally a brief overview of the steps involved in its construction. + +# 4.1 Leveraging Existing Datasets for PSG-4D + +Rather than constructing the PSG-4D dataset from the ground up, we sought to evaluate whether currently available datasets could either directly support or be adapted for the PSG-4D task. As shown in Table 1, our initial exploration focused on 3D datasets, including 3D scene graph datasets like 3DSGG [18] and Rel3D [50], along with more conventional 3D datasets such as ScanNet [51] and Matterport 3D [52]. However, while these datasets can be used to reconstruct entire scenes and can generate 3D videos accordingly, the resulting scenes remain static and lack dynamic elements. + +We then shifted our focus to video datasets containing 3D information. Autonomous driving datasets such as Nuscenes [53] and WAYMO [54] incorporate point cloud videos, particularly bird's-eye view footage. Nevertheless, the vehicles within these scenes are only captured in 2D video. While this technically constitutes a dynamic 4D scene, it does not align well with the objectives of this study. The dynamic relations in traffic scenarios are relatively limited, and our goal is to develop a visual understanding model for embodied AI [58, 59, 60, 61] that captures 3D scenes from the agent's perspective, not a bird's-eye view. + +Another category of 3D videos uses RGB-D sequences as input, which can be easily converted into point clouds. This data format aligns perfectly with the operation of intelligent agents, mimicking human perception, which captures continuous RGB images with depth. Thankfully, recent datasets like SAIL-VOS 3D [55], HOI4D [56], and EgoBody [57] have adopted this approach. 
While SAIL-VOS 3D uses synthetic data from the GTA game [21], the HOI4D dataset captures egocentric RGB-D videos of simple tasks, such as tool picking. On the other hand, the EgoBody dataset [57] records office activities like conversations, but lacks segmentation annotation and is primarily intended for human pose reconstruction. Despite its wealth of videos, the object interaction in EgoBody is limited. In the medical domain, 4D-OR [60] excels in providing detailed depictions of surgical scenes, showcasing its specialized utility. To cater to a broader spectrum of research applications, we formulated the PSG-4D dataset, integrating the versatile strengths of the SAIL-VOS 3D [55] and HOI4D [56] datasets. + +# 4.2 Dataset Statistics + +Figure 2 presents a selection of four video frames, drawn from both the PSG4D-GTA and PSG4D-HOI datasets. Each frame is an RGB-D video with corresponding panoptic segmentation annotations. Underneath each scene, we depict the associated scene graph and statistical word clouds. Annotators constructed these scene graphs as triplets, complete with frame duration. The PSG4D-GTA dataset is + +![](images/4118cd43cb02474c9640b91f9269d824264cf4c23dafa2e7bd0b94614fac3273.jpg) +(a) PSG4D-GTA (Synthetic, Third-Person View) + +![](images/d9c23cfff3603bc197874f832c0f6d792c22b220af2d6fad93f924900368350d.jpg) +(b) PSG4D-HOI (Real-World, Egocentric) +Figure 2: The Examples and Word Clouds of PSG-4D dataset. The PSG-4D dataset contains 2 subsets, including (a) PSG4D-GTA selected from the SAIL-VOS 3D [20] dataset, and (b) PSG4D-HOI from HOI4D [22] dataset. We selected 4 frames of an example video from each subset. Each frame has aligned RGB and depth with panoptic segmentation annotation. The scene graph is annotated in the form of triplets. The word cloud for object and relation categories in each dataset is also represented. 
particularly noteworthy for its composition: it contains 67 videos with an average length of 84 seconds, amounting to 27,700 RGB-D images and 28.3 billion points, and covers 35 object categories and 43 relationship categories. This synthetic dataset was captured from a third-person perspective. In contrast, the PSG4D-HOI dataset is compiled from an egocentric perspective, providing a different context for analysis. It includes 2,973 videos with an average duration of 20 seconds, equating to 891,000 RGB-D images across 282 indoor scenes. This dataset includes 46 object categories and 15 object-object relationship categories, offering a diverse range of real-world data for the study. The combination of these two datasets offers a comprehensive understanding of 4D environments due to their complementary nature. A statistical overview of both datasets can be found in the final two rows of Table 1.

# 4.3 Dataset Construction Pipeline

As outlined in Section 4.1, PSG4D-GTA is built upon the SAIL-VOS 3D dataset, while PSG4D-HOI is derived from the HOI4D dataset. To adapt the SAIL-VOS 3D dataset for our purpose, we commenced with a comprehensive review of all 178 GTA videos within the dataset. This stage involved a meticulous elimination process to exclude videos containing NSFW content, resulting in a refined pool of 67 videos. The SAIL-VOS 3D dataset, which is equipped with 3D instance segmentation, required additional annotation for background elements to support panoptic segmentation. Leveraging the PVSG annotation pipeline, we employed an event detection method [62] to isolate the key frames. The background elements within these key frames were subsequently annotated using the pre-annotation provided by the SAM model [63]. Upon completion of key frame annotations, the AOT method [64] was utilized to propagate the segmentation across the entire video sequence.
The final step involved overlaying the instance segmentation on the stuff segmentation, thereby completing the process. The HOI4D dataset, devoid of NSFW content, already provides a 4D panoptic segmentation. Consequently, we included all videos from the HOI4D dataset in the PSG4D-HOI dataset without further modifications.

Upon completion of the 4D panoptic segmentation annotation, we proceeded to annotate the dynamic scene graph according to the masks. Although HOI4D includes action annotations concerning the person, it does not account for interactions between objects. Nevertheless, certain actions such as "pick up" are appropriately treated as predicates, and we automatically locate the key object in the video to form a subject-verb-object triplet. Once the automatically annotated dataset was prepared, we asked annotators to review and revise the pre-annotations to ensure accuracy. As SAIL-VOS 3D lacks relational annotations of any kind, we annotated its scene graphs from scratch. The entire annotation process was carried out diligently by the authors.

# 5 Methodology

This section details a unified pipeline, PSG4DFormer, for addressing the PSG-4D problem. As shown in Figure 3, our approach comprises two stages. The initial 4D panoptic segmentation stage aims to segment all 4D entities, including objects and background elements, in Figure 3 (a), with accurate temporal association in Figure 3 (b). We extract features for each object and obtain feature tubes according to the tracking results for subsequent relation modeling in Figure 3 (c).

![](images/d019444ed0209112cdb67b2496d9f8175f6ac2b05ebf3ac019ed4fe58d4b0c89.jpg)
(a) Frame-Level Panoptic Segmentation
(b) Tracking
(c) Inference $(\uparrow)$ and Training $(\downarrow)$ of Relation Model
Figure 3: Illustration of the PSG4DFormer pipeline. This unified pipeline supports both RGB-D and point cloud video inputs and is composed of two main components: 4D panoptic segmentation modeling and relation modeling. The first stage seeks to obtain the 4D panoptic segmentation mask for each object, along with its corresponding feature tube spanning the video length. This is accomplished with the aid of (a) frame-level panoptic segmentation and (b) a tracking model. The subsequent stage (c) employs a spatial-temporal transformer to predict pairwise relations based on all feature tubes derived from the first stage.

# 5.1 4D Panoptic Segmentation Modeling

As specified in Section 3, given a 3D video clip input, such as an RGB-D sequence $\mathbf{I} \in \mathbb{R}^{T \times H \times W \times 4}$ or a point cloud sequence $\mathbf{I} \in \mathbb{R}^{T \times M \times 6}$, the goal of the first stage is to segment and track every pixel, with no overlap between segments. For each video clip, the model predicts a set of outputs $(\mathbf{m}_i, \mathbf{q}_i, p_i)_{i=1}^N$, where $\mathbf{m}_i$ denotes the tracked object mask tube, $\mathbf{q}_i$ denotes the tracked feature tube, and $p_i$ represents the probability of the object belonging to each category. $N$ is the number of entities, encompassing both thing and stuff classes.

Frame-Level Panoptic Segmentation with RGB-D Sequence Given the dual input of RGB and depth images, we adopt a separation-and-aggregation gate (SA-Gate) [65] to efficiently blend information from both modalities. The combined feature set, enriched with data from both inputs, is then fed into a robust Mask2Former [4] for frame-level panoptic segmentation. In the inference stage, at frame $t$, given an RGB-D image $\mathbf{I}$, the Mask2Former with SA-Gate directly outputs a set of object query features $q_{i}^{t} \in \mathbb{R}^{d}, i = 1,\dots ,N$, with each $q_{i}^{t}$ representing one entity at frame $t$.

Frame-Level Panoptic Segmentation with Point Cloud Sequence Apart from perceiving point cloud sequences directly, 3D point cloud coordinates can be calculated and converted from RGB-D data.
This conversion involves computing the Normalized Device Coordinates (NDC) using the depth map and projecting the NDC to world coordinates using the provided transformation matrix. We retain only points with a depth below a defined threshold $\lambda$, discarding distant, less relevant elements such as far-off mountains. To leverage texture information from the image, point cloud coordinates can be augmented with the corresponding RGB values, creating a colorful point cloud representation $\mathbf{P} \in \mathbb{R}^{M \times 6}$, where $M$ is the total number of points in a frame.

We employ DKNet [66], a state-of-the-art indoor segmentation method, as our point cloud segmentation network. It processes input point clouds with a 3D UNet-like [67] backbone and uses sparse convolutions [68] for feature extraction. DKNet localizes instance centroids with a candidate mining branch and encodes each instance's information into an instance kernel $k_{i} \in \mathbb{R}^{d}$. These instance kernels $\{k_{i}\}_{i=1}^{N}$ are used as the weights of a few convolution layers to obtain the final instance masks.

Tracking After frame-level panoptic segmentation, we link frames with UniTrack [69] to obtain the final tracked video tubes for each clip, for either input modality. Specifically, instead of incorporating an additional appearance model for tracking-embedding extraction, we directly use the instance kernels $\{k_i\}_{i=1}^N$ from the DKNet segmentation step, or the object query features $\{q_i\}_{i=1}^N$ from Mask2Former, as the tracking embeddings for association. We find that the instance kernels are sufficiently distinctive for tracking, even when dealing with different objects of the same semantic class. This is primarily because each instance kernel is designed to maximize the response for a specific instance while suppressing the responses of all other instances, including those of the same semantic class.
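As an illustration, the embedding-based association described above can be sketched as follows. This is a minimal greedy cosine-similarity matcher written for this exposition; the function names and the similarity threshold are our own assumptions and are not taken from UniTrack [69] or the released code:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def associate(prev_embs, curr_embs, sim_thresh=0.5):
    """Greedily match current-frame embeddings (instance kernels or object
    queries) to previous-frame tracks by highest cosine similarity.
    Returns, for each current instance, the index of the matched previous
    track, or -1 when no track is similar enough (a new track starts)."""
    matches, used = [], set()
    for c in curr_embs:
        scores = [(cosine_sim(c, p), j) for j, p in enumerate(prev_embs) if j not in used]
        best = max(scores, default=(-1.0, -1))
        if best[0] >= sim_thresh:
            used.add(best[1])        # each previous track is claimed at most once
            matches.append(best[1])
        else:
            matches.append(-1)       # no sufficiently similar track: start a new one
    return matches
```

Chaining this association over consecutive frames links per-frame instances into tracks, from which the mask tubes and feature tubes are assembled.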
For a video sequence of length $T$, the obtained 4D feature tubes are denoted as $Q_i = \{q_i^t\}_{t=1}^T$.

# 5.2 Relation Modeling: 4D Scene Graph Generation

The object query tubes $Q_{i}$ and mask tubes $\mathbf{m}_i$ form a bridge between the first and second stages. These feature tubes first pass through a spatial-temporal transformer encoder, which augments them with global context information from the overall image and from the global temporal space.

Spatial-Temporal Transformer Encoder To infuse the feature tubes with additional temporal information and with characteristics of the other objects in the scene, we draw inspiration from the Spatial-Temporal Transformer [70]. A spatial encoder is employed first: for all objects co-occurring at the same time $t$, a two-layer transformer encoder is applied to the input comprising all object features specific to time frame $t$. The spatial encoding updates the object feature tubes into $\{\tilde{q}_i^t\}_{i=1}^N$. Subsequently, a temporal transformer encoder updates each object feature tube along the temporal dimension $T$. By leveraging both the spatial and temporal encoders, we obtain the final feature tubes $\{\hat{q}_i^t\}_{i=1}^N$, ready for relation training.

Relation Classification Training To train the relation model on the updated query tubes, a training set for relation learning must be constructed. Note that the relation annotation in the training set takes the form "object-1 relation object-2", with the mask tubes of both objects annotated. To start, we associate the updated query tubes with the ground-truth objects. For each ground-truth tube, we find the most suitable updated query tube by computing the video Intersection over Union (vIOU) between the ground-truth and predicted mask tubes, and assign that query feature tube to the respective object. A frame-level predicate classification is then performed with a lightweight fully-connected layer.
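The vIOU-based assignment described above can be sketched as follows, assuming boolean mask tubes of shape $T \times H \times W$; the helper names are illustrative and not taken from the released code:

```python
import numpy as np

def video_iou(tube_a, tube_b):
    """vIOU of two boolean mask tubes of shape (T, H, W): intersection
    and union are accumulated over all frames of the video."""
    inter = np.logical_and(tube_a, tube_b).sum()
    union = np.logical_or(tube_a, tube_b).sum()
    return inter / union if union > 0 else 0.0

def assign_tubes(gt_tubes, pred_tubes):
    """For each ground-truth mask tube, pick the predicted tube with the
    highest vIOU; returns one predicted index per ground-truth tube."""
    return [int(np.argmax([video_iou(gt, pr) for pr in pred_tubes]))
            for gt in gt_tubes]
```

Each ground-truth relation triplet can then be attached to the query tubes of its matched subject and object for predicate classification.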
The inference of the relation classification component simply computes the relation probability between pairs of $\hat{q}_i^t$ and $\hat{q}_j^t$.

# 6 Experiments

Table 2 presents the results of experiments conducted on the PSG-4D dataset. For RGB-D sequences, an ImageNet-pretrained ResNet-101 serves as both the RGB and the depth encoder. We set the training duration to 12 epochs. DKNet, trained from scratch, requires a longer training period of 200 epochs. In the second stage, both the spatial and temporal transformer encoders have two layers, and training continues for an additional 100 epochs. Besides the standard PSG4DFormer, we also examine variants with the temporal encoder removed (denoted as “/t”) and with the depth branch removed (denoted as “/d”). As a baseline, we use the 3DSGG model [18], which employs a GNN to encode frame-level object and relation information without considering temporal data.

Table 2: Main Results on PSG4D. Experimental results are reported on both the PSG4D-GTA and PSG4D-HOI datasets. In addition to comparing with traditional 3DSGG methods, we compare PSG4DFormer with its variants: a version with the temporal encoder removed (denoted as “/t”) and one with the depth branch removed (denoted as “/d”). Each cell reports R/mR@K.

| Input Type | Method | PSG4D-GTA R/mR@20 | PSG4D-GTA R/mR@50 | PSG4D-GTA R/mR@100 | PSG4D-HOI R/mR@20 | PSG4D-HOI R/mR@50 | PSG4D-HOI R/mR@100 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Point Cloud Sequence | #1 3DSGG [18] | 1.48 / 0.73 | 2.16 / 0.79 | 2.92 / 0.85 | 3.46 / 2.19 | 3.15 / 2.47 | 4.96 / 2.84 |
| Point Cloud Sequence | #2 PSG4DFormer/t | 2.25 / 1.03 | 2.67 / 1.72 | 3.14 / 2.05 | 3.26 / 2.04 | 3.16 / 2.35 | 4.18 / 2.64 |
| Point Cloud Sequence | #3 PSG4DFormer | 4.33 / 2.10 | 4.83 / 2.93 | 5.22 / 3.13 | 5.36 / 3.10 | 5.61 / 3.95 | 6.76 / 4.17 |
| RGB-D Sequence | #4 3DSGG [18] | 2.29 / 0.92 | 2.46 / 1.01 | 3.81 / 1.45 | 4.23 / 2.19 | 4.47 / 2.31 | 4.86 / 2.41 |
| RGB-D Sequence | #5 PSG4DFormer/t | 4.43 / 1.34 | 4.89 / 2.42 | 5.26 / 2.83 | 4.44 / 2.37 | 4.83 / 2.43 | 5.21 / 2.84 |
| RGB-D Sequence | #6 PSG4DFormer/d | 4.40 / 1.42 | 4.91 / 1.93 | 5.49 / 2.27 | 5.49 / 3.42 | 5.97 / 3.92 | 6.43 / 4.21 |
| RGB-D Sequence | #7 PSG4DFormer | 6.68 / 3.31 | 7.17 / 3.85 | 7.22 / 4.02 | 5.62 / 3.65 | 6.16 / 4.16 | 6.28 / 4.97 |

RGB-D vs. Point Cloud Input Table 2 is divided into two sections. The upper part (#1-#3) reports results for point cloud input, while the lower part (#4-#7) details results for the RGB-D sequence. The RGB-D sequence generally yields better results than the point cloud sequence, particularly on the PSG4D-GTA dataset. This can likely be attributed to the ResNet-101 backbone used for the RGB-D data, which, being pretrained on ImageNet, performs robustly on complex datasets such as PSG4D-GTA. Meanwhile, the PSG4D-HOI dataset offers a more consistent scenario with abundant training data, narrowing the performance gap between the point cloud and RGB-D methods.

Significance of Depth The results in Table 2 also allow us to evaluate the importance of depth in the RGB-D method. Specifically, we designed a variant of PSG4DFormer (marked as “/d”) that does not utilize the depth branch. In other words, both the depth encoder and the SA-Gate are removed, turning the pipeline into a standard video scene graph generation pipeline. The performance of this variant is inferior to the original, which highlights the significance of depth information in the scene graph generation task.

Necessity of Temporal Attention Table 2 includes two methods that do not utilize temporal attention. Specifically, the 3DSGG baseline learns interactions between static object features using a graph convolutional network, while PSG4DFormer/t removes the temporal transformer encoders. The results demonstrate that ignoring the temporal component leads to sub-optimal outcomes, emphasizing the importance of temporal attention in 4D scene graph generation.

# 7 Real-World Application

This section illustrates the deployment of the PSG-4D model in a real-world application, specifically within a service robot. It extends beyond theoretical concepts and computational models, delving into the practical integration and execution of this technology. As shown in Figure 4, the focus here is to demonstrate how the robot leverages the PSG-4D model (pretrained on PSG4D-HOI, RGB-D input) to interpret and respond to its surroundings effectively.

Interaction with Large Language Models Recent advances in large language models (LLMs) have demonstrated exceptional capabilities in reasoning and planning [71]. LLMs have been utilized as planners in numerous recent studies to bridge different modalities, paving the way for more intuitive and efficient human-machine interaction [72]. In this work, we employ GPT-4 [71] as the primary planner. Designed to align with human instruction, GPT-4 communicates with the robot by translating the raw scene graph representations into comprehensible human language. Therefore, the interaction begins with the prompt, "I am a service robot.
For every 30 seconds, I will give you what I have seen in the last 30 seconds. Please suggest what I could serve." Subsequently, every 30 seconds, the robot engages with GPT-4, providing an update: "In the past 30s, what I captured is: <...>, <...>." This enables GPT-4 to analyze the situation and provide appropriate feedback.

Post-Processing for Execution The effective deployment of the PSG-4D model necessitates a robust set of predefined actions that the robot can execute. Currently, the action list includes tasks such as picking up litter and engaging in conversation with individuals. After GPT-4 provides its suggestions, it is further prompted to select a suitable action from this predefined list for the robot to execute. However, the flexibility of this system allows for the expansion of the action list, paving the way for more complex and varied tasks in the future. To encourage community involvement and the development of fascinating applications, we also release the robot deployment module alongside the PSG4D codebase. The demo robot is priced at approximately $1.2K and comes equipped with an RGB-D sensor, a microphone, speakers, and a robotic arm.

![](images/896fe11c1fa6c301a950b1c8df80b04f956db607c2962ec51eaa5d077755587f.jpg)
(a) The RGB-D sequence captured by the robot.

![](images/2e9172670129e3cc6802c90015b923d2bad3af57ddebcea9db4c80c6aaf93e99.jpg)
(b) PSG-4D Parsing

![](images/565cbce4886b4895b84ac75fc29b7d6c99582549a443e75dd2fef3464f433d1c.jpg)
(c) Reasoning & Planning

![](images/3918f45ce857b31c060377a8f97ba7e2eb20d7634fa28d911c269fd9aefe18fb.jpg)
Figure 4: Demonstration of a Robot Deployed with the PSG-4D Model. The service robot interprets the RGB-D sequence shown in (a), where a man is seen drinking coffee and subsequently dropping the empty bottle on the ground. The robot processes this sequence, translating it into a 4D scene graph depicted in (b).
This graph comprises a set of temporally stamped triplets, with each object associated with a panoptic mask that accurately grounds it in 3D space. The robot regularly reports its PSG4D output to GPT-4, awaiting feedback and instructions. In this scenario, GPT-4 advises the robot to clean up the discarded bottle and remind the man about his action. This directive is translated into robot action, as visualized in (d).

![](images/e61a9cc3afbfc89114aae4c8a9d1fa4255bf408c560e6ec6b64d2aa8edbe0b93.jpg)
(d) Robot Reaction

# 8 Conclusion, Challenges, and Outlook

This paper presents a novel and demanding extension of traditional scene graph generation, 4D Panoptic Scene Graph Generation, which incorporates the spatio-temporal domain into the framework. We introduce a comprehensive framework, PSG4DFormer, capable of processing both RGB-D and point cloud sequences. The successful deployment of this pipeline in a practical service-robot scenario underscores its potential in real-world applications. However, these achievements also highlight the nascent state of this field, emphasizing the necessity for continued advancements to fully exploit the potential of 4D Panoptic Scene Graph Generation.

Challenges Despite encouraging results, we have also revealed several persistent challenges in 4D Panoptic Scene Graph Generation. Through our demonstration, we found that current models, whether trained on PSG4D-GTA or PSG4D-HOI, can handle only simple scenes and falter when faced with more complex real-world environments. Notably, there exist robust models trained in the 2D world; finding effective and efficient strategies to adapt these models to the 4D domain presents a compelling direction for future exploration.

Outlook Future work in this field presents several intriguing trajectories. There is a pressing need for more efficient algorithms for 4D Panoptic Scene Graph Generation that can handle larger and more diverse environments.
Equally important is the creation of comprehensive and diverse datasets that would allow more rigorous evaluation and foster advancements in model development. Particularly noteworthy is a recent Digital Twin dataset [73], which promises a high level of accuracy and photorealism, aligning seamlessly with the objectives of PSG4D. This dataset will be incorporated as the third subset of the PSG4D dataset, readily accessible from our codebase. In addition to robotics, as demonstrated by the practical application of PSG4DFormer, we are also exploring its potential as an autonomous player in the GTA game. In fact, our recent endeavor Octopus [58] strives to complete GTA missions by employing a visual-language programmer to generate executable action code. In contrast to such passive task completion, the application in this paper actively perceives and understands the environment, showcasing a shift towards autonomy in robotics. Furthermore, Octopus [58] utilizes a 4D scene graph structure to capture environmental information during visual-language programmer training, exemplifying a practical application of the PSG4D modality.

We eagerly anticipate future progress in the field of 4D Panoptic Scene Graph Generation and its potential to revolutionize our understanding of real-world dynamics.

Potential Negative Societal Impacts This work releases a dataset containing human behaviors, which may carry gender and social biases inherent in the data. Potential users are encouraged to consider the risks of overlooking ethical issues in imbalanced data, especially in underrepresented minority classes. Nevertheless, all NSFW content has been removed from the dataset.
# Acknowledgement

This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOE-T2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, the National Key R&D Program of China under grant number 2022ZD0161501, as well as cash and in-kind contributions from the industry partner(s).

# References

[1] Xiaojian Ma, Silong Yong, Zilong Zheng, Qing Li, Yitao Liang, Song-Chun Zhu, and Siyuan Huang. Sqa3d: Situated question answering in 3d scenes. arXiv preprint arXiv:2210.07474, 2022. 2
[2] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. 2
[3] Sonia Raychaudhuri, Tommaso Campari, Unnat Jain, Manolis Savva, and Angel X Chang. Reduce, reuse, recycle: Modular multi-object navigation. arXiv preprint arXiv:2304.03696, 2023. 2
[4] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In CVPR, 2022. 2, 6
[5] Xiangtai Li, Haobo Yuan, Wenwei Zhang, Guangliang Cheng, Jiangmiao Pang, and Chen Change Loy. Tube-link: A flexible cross tube baseline for universal video segmentation. In ICCV, 2023. 2
[6] Xiangtai Li, Henghui Ding, Wenwei Zhang, Haobo Yuan, Guangliang Cheng, Pang Jiangmiao, Kai Chen, Ziwei Liu, and Chen Change Loy. Transformer-based visual segmentation: A survey. arXiv pre-print, 2023. 2
[7] Danfei Xu, Yuke Zhu, Christopher B Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. In CVPR, 2017. 2, 3
[8] Kaihua Tang, Hanwang Zhang, Baoyuan Wu, Wenhan Luo, and Wei Liu. Learning to compose dynamic tree structures for visual contexts. In CVPR, 2019. 2, 3
[9] Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi.
Neural motifs: Scene graph parsing with global context. In CVPR, 2018. 2, 3 +[10] Mohammed Suhail, Abhay Mittal, Behjat Siddiquie, Chris Broaddus, Jayan Eledath, Gerard Medioni, and Leonid Sigal. Energy-based learning for scene graph generation. In CVPR, 2021. 2, 3 +[11] Yiwu Zhong, Jing Shi, Jianwei Yang, Chenliang Xu, and Yin Li. Learning to generate scene graph from natural language supervision. In ICCV, 2021. 2, 3 +[12] Jingkang Yang, Yi Zhe Ang, Zujin Guo, Kaiyang Zhou, Wayne Zhang, and Ziwei Liu. Panoptic scene graph generation. In European Conference on Computer Vision, pages 178-196. Springer, 2022. 2, 3 +[13] Jingkang Yang, Wenxuan Peng, Xiangtai Li, Zujin Guo, Liangyu Chen, Bo Li, Zheng Ma, Kaiyang Zhou, Wayne Zhang, Chen Change Loy, and Ziwei Liu. Panoptic video scene graph generation. In CVPR, 2023. 2, 3 +[14] Xindi Shang, Tongwei Ren, Jingfan Guo, Hanwang Zhang, and Tat-Seng Chua. Video visual relation detection. In ACM MM, 2017. 2, 3 +[15] Xindi Shang, Donglin Di, Junbin Xiao, Yu Cao, Xun Yang, and Tat-Seng Chua. Annotating objects and relations in user-generated videos. In ICMR, 2019. 2, 3 +[16] Matthew Fisher, Manolis Savva, and Pat Hanrahan. Characterizing structural relationships in scenes using graph kernels. In ACM SIGGRAPH 2011 papers, pages 1-12. 2011. 2, 3 + +[17] Robert F Tobler. Separating semantics from rendering: a scene graph based architecture for graphics applications. The Visual Computer, 27(6-8):687-695, 2011. 2, 3 +[18] Johanna Wald, Helisa Dhamo, Nassir Navab, and Federico Tombari. Learning 3d semantic scene graphs from 3d indoor reconstructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3961-3970, 2020. 2, 3, 4, 7, 8 +[19] Shoulong Zhang, Aimin Hao, Hong Qin, et al. Knowledge-inspired 3d scene graph prediction in point cloud. Advances in Neural Information Processing Systems, 34:18620-18632, 2021. 2, 3 +[20] Y.-T. Hu, J. Wang, R. A. Yeh, and A. G. Schwing. 
SAIL-VOS 3D: A Synthetic Dataset and Baselines for Object Detection and 3D Mesh Reconstruction from Video Data. In Proc. CVPR, 2021. 2, 5 +[21] Grand theft auto v, 2014. 2, 4 +[22] Yunze Liu, Yun Liu, Che Jiang, Kangbo Lyu, Weikang Wan, Hao Shen, Boqiang Liang, Zhoujie Fu, He Wang, and Li Yi. Hoi4d: A 4d egocentric dataset for category-level human-object interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 21013-21022, June 2022. 2, 5 +[23] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 2017. 3 +[24] Rongjie Li, Songyang Zhang, and Xuming He. Sgtr: End-to-end scene graph generation with transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19486-19496, 2022. 3 +[25] Yuren Cong, Michael Ying Yang, and Bodo Rosenhahn. Reltr: Relation transformer for scene graph generation. arXiv preprint arXiv:2201.11460, 2022. 3 +[26] Jingwei Ji, Ranjay Krishna, Li Fei-Fei, and Juan Carlos Niebles. Action genome: Actions as compositions of spatio-temporal scene graphs. In CVPR, 2020. 3 +[27] Jinghao Wang, Zhengyu Wen, Xiangtai Li, Zujin Guo, Jingkang Yang, and Ziwei Liu. Pair then relation: Pair-net for panoptic scene graph generation. arXiv preprint arXiv:2307.08699, 2023. 3 +[28] Chengyang Zhao, Yikang Shen, Zhenfang Chen, Mingyu Ding, and Chuang Gan. Textpsg: Panoptic scene graph generation from textual descriptions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2839-2850, 2023. 3 +[29] Zijian Zhou, Miaojing Shi, and Holger Caesar. Hilo: Exploiting high low frequency relations for unbiased panoptic scene graph generation. arXiv preprint arXiv:2303.15994, 2023. 
3 +[30] Julian Lorenz, Florian Barthel, Daniel Kienzle, and Rainer Lienhart. Haystack: A panoptic scene graph dataset to evaluate rare predicate classes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 62–70, 2023. 3 +[31] Jingkang Yang, Zheng Ma, Qixun Wang, Xiaofeng Guo, Haofan Wang, Ziwei Liu, Wayne Zhang, Xing Xu, and Hai Zhang. The psg challenge: towards comprehensive scene understanding. National Science Review, 10(6):nwad126, 2023. 3 +[32] Xiangtai Li, Wenwei Zhang, Jiangmiao Pang, Kai Chen, Guangliang Cheng, Yunhai Tong, and Chen Change Loy. Video k-net: A simple, strong, and unified baseline for video segmentation. In CVPR, 2022. 3 +[33] Jaewon Bae, Dongmin Shin, Kangbeen Ko, Juchan Lee, and Ue-Hwan Kim. A survey on 3d scene graphs: Definition, generation and application. In Robot Intelligence Technology and Applications 7: Results from the 10th International Conference on Robot Intelligence Technology and Applications, pages 136-147. Springer, 2023. 3 +[34] Ue-Hwan Kim, Jin-Man Park, Taek-Jin Song, and Jong-Hwan Kim. 3-d scene graph: A sparse and semantic representation of physical environments for intelligent agents. IEEE transactions on cybernetics, 50(12):4921-4933, 2019. 3 +[35] Iro Armeni, Zhi-Yang He, JunYoung Gwak, Amir R Zamir, Martin Fischer, Jitendra Malik, and Silvio Savarese. 3d scene graph: A structure for unified semantics, 3d space, and camera. In ICCV, 2019. 3 + +[36] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017. 3 +[37] Antoni Rosinol, Andrew Violette, Marcus Abate, Nathan Hughes, Yun Chang, Jingnan Shi, Arjun Gupta, and Luca Carlone. Kimera: From slam to spatial perception with 3d dynamic scene graphs. The International Journal of Robotics Research, 40(12-14):1510-1546, 2021. 3 +[38] Shun-Cheng Wu, Johanna Wald, Keisuke Tateno, Nassir Navab, and Federico Tombari. 
Scenegraphfusion: Incremental 3d scene graph prediction from rgb-d sequences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7515-7525, 2021. 3
[39] Hema Koppula and Ashutosh Saxena. Learning spatio-temporal structure from rgb-d videos for human activity detection and anticipation. In International conference on machine learning, pages 792-800. PMLR, 2013. 3
[40] Qian Xie, Oussama Remil, Yanwen Guo, Meng Wang, Mingqiang Wei, and Jun Wang. Object detection and tracking under occlusion for object-level rgb-d video segmentation. IEEE Transactions on Multimedia, 20(3):580-592, 2017. 3
[41] Guyue Zhang, Jun Liu, Hengduo Li, Yan Qiu Chen, and Larry S Davis. Joint human detection and head pose estimation via multistream networks for rgb-d videos. IEEE Signal Processing Letters, 24(11):1666-1670, 2017. 3
[42] David Weikersdorfer, Alexander Schick, and Daniel Cremers. Depth-adaptive supervoxels for rgb-d video segmentation. In 2013 IEEE International Conference on Image Processing, pages 2708-2712. IEEE, 2013. 3
[43] Huazhu Fu, Dong Xu, and Stephen Lin. Object-based multiple foreground segmentation in rgbd video. IEEE Transactions on Image Processing, 26(3):1418-1427, 2017. 3
[44] Steven Hickson, Stan Birchfield, Irfan Essa, and Henrik Christensen. Efficient hierarchical graph-based segmentation of rgbd videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 344-351, 2014. 3
[45] Numair Khan, Qian Zhang, Lucas Kasser, Henry Stone, Min H Kim, and James Tompkin. View-consistent 4d light field superpixel segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7811-7819, 2019. 3
[46] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4d spatio-temporal convnets: Minkowski convolutional neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3075-3084, 2019.
3 +[47] Wenwei Zhang, Hui Zhou, Shuyang Sun, Zhe Wang, Jianping Shi, and Chen Change Loy. Robust multimodality multi-object tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2365-2374, 2019. 3 +[48] Xinshuo Weng, Yongxin Wang, Yunze Man, and Kris M Kitani. Gnn3dmot: Graph neural network for 3d multi-object tracking with 2d-3d multi-feature learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6499–6508, 2020. 3 +[49] Xinshuo Weng, Jianren Wang, David Held, and Kris Kitani. 3d multi-object tracking: A baseline and new evaluation metrics. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 10359-10366. IEEE, 2020. 3 +[50] Ankit Goyal, Kaiyu Yang, Dawei Yang, and Jia Deng. Rel3d: A minimally contrastive benchmark for grounding spatial relations in 3d. Advances in Neural Information Processing Systems, 33:10514-10525, 2020. 4 +[51] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5828–5839, 2017. 4 +[52] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. arXiv preprint arXiv:1709.06158, 2017. 4 +[53] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11621-11631, 2020. 4 + +[54] Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. 
Scalability in perception for autonomous driving: Waymo open dataset. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2446-2454, 2020. 4 +[55] Yuan-Ting Hu, Jiahong Wang, Raymond A Yeh, and Alexander G Schwing. Sail-vos 3d: A synthetic dataset and baselines for object detection and 3d mesh reconstruction from video data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1418-1428, 2021. 4 +[56] Yunze Liu, Yun Liu, Che Jiang, Kangbo Lyu, Weikang Wan, Hao Shen, Boqiang Liang, Zhoujie Fu, He Wang, and Li Yi. Hoi4d: A 4d egocentric dataset for category-level human-object interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21013-21022, 2022. 4 +[57] Siwei Zhang, Qianli Ma, Yan Zhang, Zhiyin Qian, Taein Kwon, Marc Pollefeys, Federica Bogo, and Siyu Tang. Egobody: Human body shape and motion of interacting people from head-mounted devices. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part VI, pages 180-200. Springer, 2022. 4 +[58] Jingkang Yang, Yuhao Dong, Shuai Liu, Bo Li, Ziyue Wang, Chencheng Jiang, Haoran Tan, Jiamu Kang, Yuanhan Zhang, Kaiyang Zhou, et al. Octopus: Embodied vision-language programmer from environmental feedback. arXiv preprint arXiv:2310.08588, 2023. 4, 9 +[59] Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, and Niko Suenderhauf. Sayplan: Grounding large language models using 3d scene graphs for scalable task planning. arXiv preprint arXiv:2307.06135, 2023.4 +[60] Ege Özsoy, Evin Pinar Örnek, Ulrich Eck, Tobias Czempiel, Federico Tombari, and Nassir Navab. 4d-or: Semantic scene graphs for or domain modeling. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 475-485. Springer, 2022. 4 +[61] Saeid Amiri, Kishan Chandan, and Shiqi Zhang. 
Reasoning with scene graphs for robot planning under partial observability. IEEE Robotics and Automation Letters, 7(2):5560-5567, 2022. 4 +[62] Kiyotaka Otsuji and Yoshinobu Tonomura. Projection detecting filter for video cut detection. In Proceedings of the first ACM international conference on Multimedia, pages 251-257, 1993. 5 +[63] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023. 5 +[64] Zongxin Yang, Yunchao Wei, and Yi Yang. Associating objects with transformers for video object segmentation. In NeurIPS, 2021. 5 +[65] Xiaokang Chen, Kwan-Yee Lin, Jingbo Wang, Wayne Wu, Chen Qian, Hongsheng Li, and Gang Zeng. Bi-directional cross-modality feature propagation with separation-and-aggregation gate for rgb-d semantic segmentation. In European Conference on Computer Vision (ECCV), 2020. 6 +[66] Yizheng Wu, Min Shi, Shuaiyuan Du, Hao Lu, Zhiguo Cao, and Weicai Zhong. 3d instances as 1d kernels. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIX, pages 235-252. Springer, 2022. 6 +[67] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234-241. Springer, 2015. 6 +[68] Benjamin Graham, Martin Engelcke, and Laurens Van Der Maaten. 3d semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9224-9232, 2018. 6 +[69] Zhongdao Wang, Hengshuang Zhao, Ya-Li Li, Shengjin Wang, Philip HS Torr, and Luca Bertinetto. Do different tracking tasks require different appearance models? NeurIPS, 2021. 
7 +[70] Yuren Cong, Wentong Liao, Hanno Ackermann, Bodo Rosenhahn, and Michael Ying Yang. Spatial-temporal transformer for dynamic scene graph generation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 16372-16382, 2021. 7 +[71] OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023. 8 + +[72] Wenlong Huang, P. Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. ArXiv, abs/2201.07207, 2022. 8 +[73] Xiaqing Pan, Nicholas Charron, Yongqian Yang, Scott Peters, Thomas Whelan, Chen Kong, Omkar Parkhi, Richard Newcombe, and Yuheng Carl Ren. Aria digital twin: A new benchmark dataset for egocentric 3d machine perception. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20133-20143, 2023. 9 \ No newline at end of file diff --git a/4dpanopticscenegraphgeneration/images.zip b/4dpanopticscenegraphgeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1dda7a97e51e9b64147279a8492ade4c67551c3b --- /dev/null +++ b/4dpanopticscenegraphgeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a4deccd865490152f2e604aeeb9241368f17748320c0ef278311018ede6d929 +size 492238 diff --git a/4dpanopticscenegraphgeneration/layout.json b/4dpanopticscenegraphgeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..dfddaa11021c413af666f51723b9911b372566ee --- /dev/null +++ b/4dpanopticscenegraphgeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9cfe7ad2225663f05156da77745bf25fc40a33419a677c452f4eafbc95e1fcc3 +size 364142 diff --git a/4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_content_list.json b/4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_content_list.json new file mode 100644 index 
0000000000000000000000000000000000000000..0d4022d9cd9b0378eece6229f2c1140b58498d7e --- /dev/null +++ b/4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e46757ffada37b6d3381d065eed8c1a85d3dfa622ca015a1edb62eee7e308a6 +size 251997 diff --git a/4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_model.json b/4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_model.json new file mode 100644 index 0000000000000000000000000000000000000000..81d4442601a81dc59b61fd9ad3553d0f2a07fcdd --- /dev/null +++ b/4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eea6f181248927c9527bf7efa2ecbf77b325ce9cfa50b7d00d8d243ab3c4a2fe +size 308409 diff --git a/4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_origin.pdf b/4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7af8ad884603093f335ebc693c18129021c739a4 --- /dev/null +++ b/4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39d3a7cb8b29b3c4aa205356839faae277decbb24e4b218fe31b2f085b305ee6 +size 19632864 diff --git a/4mmassivelymultimodalmaskedmodeling/full.md b/4mmassivelymultimodalmaskedmodeling/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8e0cf2b8861ce1578afb5d3ae865659aeff026e0 --- /dev/null +++ b/4mmassivelymultimodalmaskedmodeling/full.md @@ -0,0 +1,920 @@ +# 4M: Massively Multimodal Masked Modeling + +David Mizrahi $^{1,2*}$ Roman Bachmann $^{1*}$ Oğuzhan Fatih Kar $^{1}$ + +Teresa Yeo $^{1}$ Mingfei Gao $^{2}$ Afshin Dehghan $^{2}$ Amir Zamir $^{1}$ + +$^{1}$ Swiss Federal Institute of Technology Lausanne (EPFL) 
$^{2}$ Apple + +https://4m.epfl.ch + +# Abstract + +Current machine learning models for vision are often highly specialized and limited to a single modality and task. In contrast, recent large language models exhibit a wide range of capabilities, hinting at a possibility for similarly versatile models in computer vision. In this paper, we take a step in this direction and propose a multimodal training scheme called 4M. It consists of training a single unified Transformer encoder-decoder using a masked modeling objective across a wide range of input/output modalities – including text, images, geometric, and semantic modalities, as well as neural network feature maps. 4M achieves scalability by unifying the representation space of all modalities through mapping them into discrete tokens and performing multimodal masked modeling on a small randomized subset of tokens. + +4M leads to models that exhibit several key capabilities: (1) they can perform a diverse set of vision tasks out of the box, (2) they excel when fine-tuned for unseen downstream tasks or new input modalities, and (3) they can function as a generative model that can be conditioned on arbitrary modalities, enabling a wide variety of expressive multimodal editing capabilities with remarkable flexibility. + +Through experimental analyses, we demonstrate the potential of 4M for training versatile and scalable foundation models for vision tasks, setting the stage for further exploration in multimodal learning for vision and other domains. + +# 1 Introduction + +In recent years, the field of natural language processing (NLP) has seen a shift toward training large language models (LLMs) that are inherently capable of performing a wide range of tasks without requiring extensive task-specific adaptations [12, 25]. While these models have demonstrated remarkable success in NLP, there remains a need to develop similarly versatile and scalable models for vision. 
A crucial aspect of scalability and versatility in vision is the ability to handle multiple (input) modalities and (output) tasks, as vision models must deal with a diverse range of sensory inputs, such as images, 3D, and text, and solve a wide range of tasks. Unlike NLP, where language modeling on raw text has led to multitask capabilities [84, 86], training on only RGB images with a single objective has not exhibited the same behavior for vision. Therefore, it is deemed important to incorporate multiple modalities and tasks in training. Indeed, psychophysical studies have suggested that multimodality is one key driver behind the development of biological intelligence [104].

To create a model that exhibits the desirable properties of foundation models in vision, it is important to consider three key aspects in terms of scalability: data, architecture, and training objective. For data, scalability means being able to benefit from more training samples toward improving performance.

![](images/f25775972d54899f281546acba32bb7f11a78a226df68b6dbcaaca37efd8e037.jpg)
Figure 1: 4M enables training a versatile multimodal and multitask model, capable of performing a diverse set of vision tasks out of the box, as well as being able to perform multimodal conditional generation. This, coupled with the model's ability to perform in-painting, enables powerful image editing capabilities. This generalist model transfers well to a broad range of downstream tasks or to novel modalities, and can be easily fine-tuned into more specialized variants of itself.

In terms of architecture, scalability implies increased performance with growing model size and remaining stable when trained at large sizes. Lastly, a scalable training objective should efficiently handle a growing number of modalities without incurring excessive computational costs. In our approach, we target scalability across these three aspects while maintaining compatibility with multiple modalities.
+ +We address these challenges by proposing a method consisting of training a single unified Transformer encoder-decoder using a multimodal masked modeling objective. We name this approach 4M (short for "Massively Multimodal Masked Modeling")\* to emphasize its ability to scale to many diverse modalities. Our method unifies the benefits of multimodal learning and masked modeling, such as (1) serving as an effective pre-training objective for learning rich representations [30, 48], (2) leading to strong cross-modal predictive coding abilities and shared scene representations [5, 44], (3) enabling models to be used for generative tasks through iterative sampling [17, 18]. Crucially, 4M combines these benefits while remaining efficient via a number of mechanisms. + +To enable training a single Transformer on modalities with different formats, like text, bounding boxes, images, or neural network features, we choose to unify their representational spaces by mapping them into sets or sequences of discrete tokens [21, 22, 74, 64] using modality-specific tokenizers [110]. This tokenization approach enhances compatibility, scalability, and sharing by removing the need for task-specific encoders and heads, allowing the Transformer to be compatible with all modalities and maintain full parameter-sharing. Also, although 4M operates on a large set of modalities, it can train in a highly efficient manner through the use of input and target masking. This involves randomly selecting a small subset of tokens from all modalities as inputs to the model, while another small subset of the remaining tokens is treated as targets. Decoupling the number of input and target tokens from the number of modalities prevents the computational cost from rapidly escalating with increasing modalities, allowing for a scalable training objective. 
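The input and target masking described above can be sketched as follows. This is a minimal illustration, not the 4M codebase: the function name and modality sizes are made up, per-modality budgets are drawn from a symmetric Dirichlet as Section 2.3 later specifies, and tokens are sampled uniformly (sequence modalities would use span masking instead).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_input_target_tokens(tokens_per_modality, n_input=128, n_target=128, alpha=0.2):
    """Pick small, disjoint input/target token subsets across modalities.

    The fixed budgets n_input and n_target decouple the training cost
    from the number of modalities.
    """
    modalities = list(tokens_per_modality)
    input_ids, target_ids = {}, {}
    for ids, budget in ((input_ids, n_input), (target_ids, n_target)):
        # A symmetric Dirichlet sample splits the fixed budget across modalities.
        props = rng.dirichlet(alpha * np.ones(len(modalities)))
        for mod, p in zip(modalities, props):
            pool = np.arange(tokens_per_modality[mod])
            if ids is target_ids:  # targets come from tokens not used as inputs
                pool = np.setdiff1d(pool, input_ids[mod])
            k = min(int(p * budget), len(pool))
            ids[mod] = rng.choice(pool, size=k, replace=False)
    return input_ids, target_ids

inputs, targets = sample_input_target_tokens({"rgb": 196, "depth": 196, "caption": 32})
```

Because only these sampled subsets are encoded and decoded, adding a modality changes which tokens get sampled, not how many tokens the model processes.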
+ +Leveraging the availability of single-modal or text-image pair datasets, such as CC12M [19], we employ strong pseudo labeling networks to generate aligned binding data across various modalities. This pseudo labeling approach enables training on diverse and large-scale datasets without demanding them to come with multimodal/multitask annotations. + +4M models are capable of performing many key vision tasks out of the box and can also be fine-tuned to achieve highly competitive performance on unseen downstream tasks and input modalities. In + +addition, training using a multimodal masked modeling objective leads to steerable generative models that can be conditioned on arbitrary modalities, enabling the user's intent to be expressed in a versatile manner (as depicted in Figure 4) as well as various multimodal editing tasks (see Figure 1). + +We further perform an extensive ablation analysis that studies the factors affecting 4M's performance. This thorough examination, along with the simplicity and generality of our approach, demonstrates the potential of 4M for a wide range of vision tasks and further expansions. + +Our main contributions and results can be summarized as follows: + +1. Method: we introduce 4M, a framework for training versatile and scalable foundation models for vision tasks using a multimodal masked modeling objective. Our approach results in models that learn rich representations and perform well on a wide range of tasks without requiring task-specific adaptations. +2. Performance: we demonstrate the efficacy of our approach through extensive experiments and benchmarks, showcasing the ability of these models to perform many key vision tasks out of the box, as well as achieving highly competitive performance when fine-tuned on unseen downstream tasks. +3. 
Generative Capabilities: we showcase the flexible and steerable generative capabilities of models trained using 4M, enabling a variety of multimodal editing tasks utilizing conditioning on arbitrary modalities. +4. Experimental Study: we conduct an extensive ablation analysis to study the factors affecting 4M's performance, providing important insights into these models' behavior and design. + +Code, models, and additional interactive visualizations are available at https://4m.epfl.ch. + +# 2 Method Description + +The 4M architecture and training objective (depicted in Figure 2) were designed with a focus on being as compatible and scalable as possible in terms of the number and type of modalities it accepts, while being conceptually simple and computationally efficient. We enable these through the conjunction of the following key aspects: + +1. Tokenizing modalities: We abstract away modality-specific intricacies by mapping all modalities into sequences or sets of discrete tokens, whether they are images, text, sparse data, or neural network feature maps. This allows every possible mapping between modalities to be seen as predicting one sequence or set of tokens from another. In Section 2.1, we discuss what types of modalities we train on, how we generate the training data, and how we enable training a model on different modalities through tokenization. +2. Training a single compatible network on all modalities: Different tasks in vision, NLP and other domains traditionally required vastly different modeling choices, architectures, and losses, making the joint training on multiple modalities challenging. Tokenizing all modalities into a unified representation space allows us to train a single Transformer encoder-decoder (see Figure 2) to map between different modalities through (parallel or serialized autoregressive) token prediction. In Section 2.2, we provide more details on the 4M architecture. +3. 
Multimodal masked pre-training objective: Transformers have demonstrated excellent scalability with data and model size across a diverse set of tasks [60, 2], particularly when paired with a scalable pre-training objective such as masked reconstruction [30, 48, 56]. In Section 2.3, we detail our approach to training 4M using a multimodal masked modeling objective on randomized token subsets to learn strong cross-modal predictive coding abilities.

# 2.1 Modalities & data

Pre-training modalities. We train 4M models on a diverse set of modalities, namely RGB, captions, depth, surface normals, semantic segmentation maps, bounding boxes, and tokenized CLIP feature maps [85, 82, 114]. These modalities were chosen to cover several key aspects: First, they contain a mix of semantic information (captions, semantic segmentation, bounding boxes, CLIP), geometric information (depth, surface normals), and RGB. When used as input modalities, these modalities can be used as informative priors about the scene geometry and its semantic content [98, 69], and when used as target tasks, they allow us to steer what kind of representations are learned [42, 5, 82, 114]. Second, these modalities are diverse in terms of the format they use to encode information. They consist of dense visual modalities (RGB, depth, surface normals, semantic segmentation), sparse and/or sequence-based modalities (captions, bounding boxes), as well as neural network feature maps (CLIP [85]). Finally, these modalities allow for diverse and rich interaction with the model for generative purposes. For example, captions, segmentation maps, and bounding boxes allow for semantically conditioned generation, while geometric modalities enable grounding the generation on 3D information. 4M's versatility in handling various modalities, its capacity to benefit from cross-training, and its ability to learn cross-modal predictive representations (as demonstrated in Sections 3 and 4) suggest its potential for extension to even more modalities.

![](images/da9f4c103a71585074090a07aa9c73c049055c3fd9aa8cf0c81e108218980a6d.jpg)
Figure 2: Method overview. (Left): 4M is a framework for training multimodal and multitask models that operate on tokenized versions of multiple image-like modalities (such as RGB, depth, etc.) and sequence modalities (such as captions and bounding boxes). (Right): The 4M pre-training objective consists of training a Transformer encoder-decoder to predict a randomly selected subset of tokens, which is sampled from all modalities, based on another random subset of tokens.

Pseudo labeled multimodal training dataset. Training 4M models requires a large-scale and aligned multimodal/multitask dataset that contains all the above modalities/tasks and is sufficiently diverse. Most multimodal datasets, however, either do not contain all our pre-training modalities [32], are too small [94], or are not diverse enough [126]. For those reasons, we resort to pseudo labeling [42, 5] the publicly available Conceptual Captions 12M (CC12M) [19] as a binding dataset using powerful off-the-shelf models. Because this approach only requires access to a dataset of RGB images, it may scale to even larger web-scale image datasets [99, 14, 39].

Tokenization. All modalities are mapped to sets or sequences of discrete tokens (indices of a vocabulary) through the use of modality-specific tokenizers. Captions and bounding boxes are both treated as text and encoded using WordPiece [30]. For modeling bounding boxes, we follow the approach of Pix2Seq [21], which turns the task of object detection into a sequence prediction problem. RGB, depth, normals, semantic segmentation maps, and CLIP feature maps are tokenized using learned vector quantized autoencoders (VQ-VAE) [110].
Unlike Unified-IO [74], which represents all image-like modalities using an RGB pre-trained VQ-GAN [35], we instead use modality-specific tokenizers. This allows us to incorporate neural network feature maps that would otherwise be difficult to represent with existing image tokenizers. While mapping modalities to tokens and vice versa incurs a small computational overhead during inference, we avoid this overhead during pre-training by pre-computing the tokens while assembling our multimodal dataset.

We provide a detailed overview of the multimodal dataset, pseudo labeling procedure, and tokenization in Appendix B.

# 2.2 Multimodal Transformer

We design the architecture of 4M with efficiency, scalability, and simplicity in mind. 4M's architecture closely resembles a standard Transformer [112] encoder-decoder but includes a few crucial modifications to enable joint modeling of multiple different image-like modalities, such as RGB or semantic segmentation, but also of sequence modalities, such as captions or bounding boxes.

Multimodal encoder. The encoder is a standard Transformer encoder but features modality-specific learnable input embedding layers to map token indices to vectors. To each token of a specific modality, we add a learnable modality embedding and either 1D (for sequences) or 2D (for dense modalities) sine-cosine positional embeddings. To facilitate transfer learning, the encoder is additionally designed to accept RGB pixels using a learnable patch-wise linear projection, enabling it to double as a Vision Transformer [31] backbone.
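As a concrete illustration of the encoder inputs just described, the sketch below builds 2D sine-cosine positional embeddings and adds them to stand-ins for the token and modality embeddings. All names and shapes here are illustrative assumptions, not taken from the released code.

```python
import numpy as np

def sincos_2d(h, w, dim):
    """2D sine-cosine positional embeddings; dim is split between y and x."""
    def sincos_1d(pos, d):
        omega = 1.0 / (10000 ** (np.arange(d // 2) / (d // 2)))
        out = pos[:, None] * omega[None, :]
        return np.concatenate([np.sin(out), np.cos(out)], axis=1)
    emb_y = sincos_1d(np.repeat(np.arange(h), w), dim // 2)
    emb_x = sincos_1d(np.tile(np.arange(w), h), dim // 2)
    return np.concatenate([emb_y, emb_x], axis=1)  # shape (h * w, dim)

rng = np.random.default_rng(0)
dim, vocab, n_modalities = 64, 1024, 7
token_emb = rng.normal(size=(vocab, dim))            # stand-in for a modality-specific input embedding
modality_emb = rng.normal(size=(n_modalities, dim))  # stand-in for the learnable modality embedding

# Encoder input for a 14x14 grid of (say) depth tokens with modality id 2:
tokens = rng.integers(0, vocab, size=14 * 14)
x = token_emb[tokens] + modality_emb[2] + sincos_2d(14, 14, dim)
```

Sequence modalities would use the analogous 1D embedding; the 2D case shown covers dense modalities such as depth or semantic segmentation.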
![](images/9ca202706dbc27ef1d5f2eb0f616e2771fa8aa1b58938507c6590299d379b018.jpg)
Figure 3: Chained multimodal generation. This simplified example illustrates the generation of a full RGB image from a partial RGB and bounding box input using the MaskGIT [17] decoding scheme, followed by autoregressive generation of a caption. Note that through chaining (i.e. using fully generated modalities as conditioning when generating subsequent modalities), we can predict multiple modalities in a self-consistent manner. This is in contrast to independently generating each modality from the original conditioning, where each generated output is consistent with the input but not necessarily with other outputs. See Figures 9 and 10 for visual examples of chained generation. Generated tokens can be turned back into images, text, and other modalities, using the tokenizers. (Panels: iterations 1-3 generate/in-paint RGB with MaskGIT; iterations 4-6 generate the caption autoregressively; a detokenization step turns the generated tokens back into images and text.)

Multimodal decoder. The decoder handles tokens from both dense image-like and sequence-like modalities, with each type requiring a different approach.
However, two aspects are common to all tokens: First, they can all freely attend to any encoder tokens in the cross-attention layers, ensuring full access to the encoded information. Second, we employ attention masks to separate decoder tokens of different modalities. This ensures that the decoder produces consistent outputs for each specific modality, irrespective of what other outputs are being generated simultaneously. For dense image-like modalities, the decoder input consists of mask tokens along with modality and positional information. The decoder's role is to predict this masked content. For sequence-like modalities, the input to the decoder comprises modality, positional, and content information. The decoder is tasked to predict the next token in the sequence. To ensure that each token is only influenced by preceding tokens (and not by any future tokens), we apply a causal mask to the self-attention, as is standard in autoregressive models. Since all target tasks consist of discrete tokens, we can use the cross-entropy loss for all of them, which we found removes the need for task-specific loss balancing and improves training stability. Further details on the architecture are provided in Appendix C. + +# 2.3 Multimodal masking strategy + +For multimodal pre-training, we use a pre-training strategy similar to MultiMAE [5], in that we sample and encode a small set of visible tokens/patches from all modalities, and train the model to perform cross-modal predictive coding. + +Input & target masking. Dropping masked-out tokens and only encoding the small set of visible ones when performing masked image modeling has been shown to yield significant increases in training efficiency [48], and is crucial when training on multiple modalities [5]. The imbalance of the usually low number of input tokens and the much higher number of target tokens can induce significant computational costs in the decoder, even if they are small. 
We propose to use target masking, meaning that we do not decode all masked-out tokens, but only a randomly sampled subset. By fixing the number of randomly sampled input and target tokens (see Figure 2), 4M enables pre-training on many modalities while keeping training costs low. Similar to MultiMAE [5], we sample the number of input tokens per modality using a symmetric Dirichlet distribution with concentration parameter $\alpha$. We follow the same approach to also sample the number of target tokens per modality. After sampling the per-modality number of input and target tokens, we sample tokens from dense modalities uniformly at random, and perform span masking [86] on sequence modalities. The experimental consequences of these design choices are studied in Section 5, and in more detail in Appendix E.4.

# 3 Transfer Experiments

To assess the effectiveness of 4M as a pre-training strategy, we train two models: a base version 4M-B with 86M encoder parameters and a large version 4M-L with 303M encoder parameters (for more details, see Appendix C). We then transfer these trained models to several common downstream tasks and compare their performance against relevant baselines. To better control for the dataset, augmentations, model architecture, and compute, which can significantly affect downstream performance, we additionally show self-baselines that are conceptually similar to MAE [48] (Masked RGB $\rightarrow$ RGB) and BEiT-v2 [82] (Masked RGB $\rightarrow$ CLIP).

Table 1: Transfer learning study: We transfer 4M models to semantic and geometric downstream tasks and compare them to several baselines. For transfers to ImageNet-1K [29, 96], we first perform intermediate fine-tuning on ImageNet-21K [29, 93]. 4M outperforms the baselines on all tasks except for ImageNet-1K, where it is surpassed by DeiT III, a specialized model. In contrast to 4M, all of the baselines employed data augmentations to achieve their results. Best results per category are bolded.

| Method | Pre-training data | Data aug. | Extra labels | ImageNet-1K Top-1 acc. ↑ | COCO AP$^\text{box}$ ↑ | COCO AP$^\text{mask}$ ↑ | ADE20K mIoU ↑ | NYU depth δ1 acc. ↑ |
|---|---|---|---|---|---|---|---|---|
| MAE B [48] | IN-1K | ✓ | × | 84.2 | 48.3 | 41.6 | 46.1 | 89.1 |
| DeiT III B [108] | IN-21K | ✓ | × | **85.4** | 46.1 | 38.5 | 49.0 | 87.4 |
| MultiMAE B [5] | IN-1K | ✓ | ✓ | 84.0 | 44.1 | 37.8 | 46.2 | 89.0 |
| 4M-B (RGB → RGB only) | CC12M | × | × | 82.8 | 42.3 | 36.6 | 38.3 | 80.4 |
| 4M-B (RGB → CLIP only) | CC12M | × | ✓ | 83.4 | 46.6 | 39.9 | 43.0 | 85.7 |
| 4M-B | CC12M | × | ✓ | 84.5 | **49.7** | **42.7** | **50.1** | **92.0** |
| MAE L [48] | IN-1K | ✓ | × | 86.8 | 52.8 | 45.3 | 51.8 | 93.6 |
| DeiT III L [108] | IN-21K | ✓ | × | **87.0** | 48.7 | 41.1 | 52.0 | 89.6 |
| 4M-L | CC12M | × | ✓ | 86.6 | **53.7** | **46.4** | **53.4** | **94.4** |

The transfer tasks include ImageNet-1K classification [29, 96], COCO detection and instance segmentation [67], ADE20K semantic segmentation [131], and NYUv2 depth estimation [102]. While some transfer tasks have similarities to our pseudo labeled tasks, they are different instantiations (e.g., ADE20K instead of COCO semantic classes, or absolute depth instead of relative depth).

To make 4M models comparable to other ViT backbones, we train all methods by attaching transfer task-specific heads (e.g. Cascade Mask R-CNN [15, 47]) to the encoder, and discard any decoders. Note that we can also choose to keep the decoder for transfer learning, which we explore in Section 5. For comparability, we perform the transfers of all 4M and baseline models in the same controlled manner, by closely following commonly used settings from other papers. Exact training details are provided in Appendix D.

Results in Table 1 show that 4M transfers exceptionally well to all downstream tasks, outperforming the baselines on detection, segmentation, and depth estimation. While 4M is outperformed on ImageNet-1K by more specialized models such as DeiT III, the results demonstrate 4M to be a versatile vision model that strongly benefits from being pre-trained on multiple (pseudo labeled) tasks. We note that preliminary experiments with an even larger 4M-XL model with 2.7B parameters showed overfitting to the pseudo labeled CC12M dataset, resulting in limited additional improvement on downstream tasks. While a larger dataset or added data augmentations are therefore necessary to fully benefit from larger 4M models, we still observe significant improvements in generation quality and use the 4M-XL model in the next section.

# 4 Generative Capabilities & Probing the Learned Representation

4M can directly be used for generation of all pre-training modalities through iteratively decoding tokens [17, 65, 18], as illustrated in Figure 3. In addition, we enable several other generative capabilities by utilizing two key aspects of 4M: The first is the fact that 4M is crucially able to generate any of the training modalities, either unconditionally or conditioned on any other set of modalities (see Figure 4 top). The second is the fact that 4M is trained using masking, which enables (conditional) in-painting and out-painting (see Figs. 1 and 4). Combining these two key aspects enables several multimodal editing tasks, such as semantic editing, geometrically grounded generation, or guiding the generation with multiple strong and weak conditions (via weighting).

To improve image generation fidelity, we trained a 4M-XL version, and all subsequent images were generated with it. While 4M can be directly used for generation, we apply certain common improvements to increase image fidelity in the results shown. These include specializing 4M-L into a super-resolution variant that maps tokens of low-resolution generations to a higher resolution [18, 123], and specializing 4M models by fine-tuning them to be more aligned with specific generative use-cases (e.g. for text-to-image, or in-painting) [95]. See Appendix A for more details on these specializations.

Probing learned representations through generation.
In addition to performing transfers, we can get a glimpse into what kind of (predictive) representations 4M learned by manipulating one part of

![](images/27852b6b4867719fb0aaa3f22fe316be7b0260ca5bf56e3495e1113c8c294307.jpg)
Figure 4: Multimodal generation and editing. 4M's in-painting and any-to-any prediction abilities unlock a suite of multimodal generation and editing capabilities, which allow for fine-grained creative control. We show several key capabilities such as grounding the generation in predicted geometry, performing semantic edits, and being able to control how much certain input modalities influence the generation via weighting. Panels (top to bottom): any-to-any generation; grounded generation of image variations; editing the segmentation label and masking RGB, where a chain of detailed multimodal edits allows the user to better generate what they had in mind; and weighting of different modalities ("classifier-free guidance") for added steering power, e.g. conditioning weakly on masked RGB and strongly on caption & bounding box, weakly on RGB and strongly on caption, or equally on caption and masked RGB.

![](images/8505549add8de440f9face0c18bc3c8171f41e16a3ea81210192319ca76a6d07.jpg)
Figure 5: Probing the learned representation through manipulation: (Left, "4M implicitly infers object context"): All of the conditioning bounding boxes are fixed except the marked "potted plant" ones that are changing. Depending on where the "potted plant" bounding boxes are placed, 4M infers how they can be part of the scene (e.g., as a painting or real plant) in a geometrically and physically plausible manner. (Right, "4M can perform semantic manipulations"): Changing a single semantic class accordingly affects how 4M predicts the overall image. See more examples in Appendix A.4 and interactive visualizations on our website.

the input and keeping the remainder fixed [5]. Figure 5 shows a series of manipulations in which 4M displays intriguing capabilities at predicting geometrically and physically plausible arrangements while taking into account the semantic context of the scene.

Multimodal editing. By combining the multimodal conditional generation and in-painting capabilities of 4M, we can perform various multimodal editing tasks, such as performing semantic edits or in-painting grounded by geometric conditioning (see Figure 4 top and middle). Drawing parallels to ControlNet [130], these allow for steering the generation using more than just text. However, 4M is able to perform these tasks with just a single network, and can condition on multiple (partial) modalities - individually or simultaneously. The conditions can either be hand-specified, or extracted from an image using 4M itself, thereby removing the need for specialist models to create the conditions.

Multimodal weighted guidance. Classifier-free guidance [50] has been shown to improve image fidelity in token-based generative models [40, 123, 18]. Inspired by Liu et al. [68], who perform compositional generation on multiple text conditions, we can guide our generation by weighting different (parts of) modalities by different continuous amounts - even negatively.
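Concretely, this guidance amounts to starting from the unconditional logits and adding each condition's logit difference scaled by its weight. A minimal sketch of that weighted combination, with toy logit values (the function name, vocabulary size, and numbers are illustrative assumptions):

```python
def guided_logits(logits_uncond, conditions):
    """Multimodal weighted guidance: start from the unconditional logits
    and add each condition's logit difference scaled by its weight w_i.
    Fractional weights give weak conditioning; negative weights steer
    the generation away from a concept."""
    out = list(logits_uncond)
    for w, logits_cond in conditions:
        out = [o + w * (c - u) for o, u, c in zip(out, logits_uncond, logits_cond)]
    return out

# Toy 5-token vocabulary: condition strongly (w=2.0) on a "caption" that
# favors token 0, and weakly (w=0.5) on a "bounding box" favoring token 1.
uncond  = [0.0, 0.0, 0.0, 0.0, 0.0]
caption = [2.0, 0.0, 0.0, 0.0, 0.0]
bbox    = [0.0, 1.0, 0.0, 0.0, 0.0]
guided = guided_logits(uncond, [(2.0, caption), (0.5, bbox)])
# guided == [4.0, 0.5, 0.0, 0.0, 0.0]
```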
This unlocks further multimodal editing capabilities (see Figure 4 bottom), such as being able to weakly condition on certain modalities, or using negative weighting to avoid a certain concept in the generation. Multimodal guidance can be achieved by computing a weighted sum of the logits of an unconditional and each conditional case: $\mathrm{logits}_{\mathrm{guided}} = \mathrm{logits}_{\mathrm{uncond}} + \sum_{i=1}^{n} w_i \left( \mathrm{logits}_{\mathrm{cond},i} - \mathrm{logits}_{\mathrm{uncond}} \right)$.

We provide additional visualizations covering 4M's wide range of generative capabilities on our website and in Appendix A.4.

# 5 Ablations

Pre-training on a large set of modalities using a masking objective creates a large design space that raises questions such as: which modalities should we pre-train on, what is the optimal masking ratio, and how do we select the number of tokens to mask from each modality? By performing a thorough ablation of these design parameters, we aim to find out which ones matter the most for multimodal pre-training and which ones do not. We used these findings to decide on the settings for the models shown in Sections 3 and 4.

We first choose a reference setting and measure how the model performance deviates as a result of changing one aspect under study while keeping the rest fixed. Performance is measured by transferring the models to a large set of downstream tasks and measuring the validation set performance. The aim of the benchmarks is to measure how good a certain instantiation of 4M is at transferring both to new target tasks and to unseen input modalities. For that, we include tasks that use RGB (pixels) as inputs and transfer to a new target, such as COCO [67] object detection, ADE20K [131] semantic segmentation, and ten transfers from RGB to dense tasks in Taskonomy [126] and Hypersim [94] (e.g. depth, curvature, segmentation).
Furthermore, we include eleven single-modal and twelve multimodal transfers from some Taskonomy or Hypersim modalities to others (e.g. curvature $\rightarrow$ occlusion edges, or RGB + depth $\rightarrow$ segmentation). + +For simplicity, and similar to pre-training, we model each of the benchmark tasks as predicting one set of tokens from another. This is achieved by training tokenizers on these modalities and tasks in the same way as we did for pre-training. All transfers are performed at $224 \times 224$ resolution. For comparability across modalities and tasks, we measure the validation set cross-entropy performance rather than task-specific metrics. Note that in the interest of space, we aggregate the results of the Taskonomy and Hypersim tasks, and denote them as $RGB \rightarrow X$ , $X \rightarrow Y$ and $X + Y \rightarrow Z$ . See Appendix E for further training details. + +# 5.1 Reference model + +Following common practice for the reference model size [30, 86], our model consists of 12 encoder and 12 decoder layers. We train it on CC12M [19] using all modalities as both inputs and targets, at resolution $224 \times 224$ pixels (corresponding to $14 \times 14$ tokens per dense modality), and using no augmentations such as cropping nor color augmentations. The total training length is fixed at 100B tokens, corresponding to roughly 400M masked samples. We set the number of randomly sampled input and target tokens to 12 each, and sample each using a symmetric Dirichlet distribution with parameter $\alpha = 0.2$ . To determine the significance of various modeling choices, we adopt the approach used by Raffel et al. [86] that calculates the standard deviation of the transfer results of ten independently trained reference models. + +# 5.2 Input modalities and target tasks + +Importance of target tasks for representation learning. 
4M is both a multimodal and a multitask training scheme and the choice of target task(s) is a powerful way of steering what representation the model learns [10, 109, 42, 107, 5]. To ablate this for 4M, we fix the input modalities to be either RGB or all the modalities (denoted as "All"), and vary the target tasks. The results in Table 2 mirror the findings of Sax et al. [98], that the optimal choice of pre-training setting depends highly on the type of transfer that is performed, and that there is no single pre-training task that performs best on all transfers. However, pre-training on all target modalities consistently outperforms the other single-task and multitask alternatives in terms of average loss, no matter what input modalities were used during pre-training. This makes it the preferred configuration for generalist models, especially when their future applications are unknown or varied. + +Importance of multimodal pre-training for transferring to new input modalities. Table 2 shows that multimodal pre-training can significantly help with transferring to new input modalities $(\mathrm{X} \rightarrow \mathrm{Y}$ and $\mathrm{X} + \mathrm{Y} \rightarrow \mathrm{Z}$ transfers), but comes at a performance loss at transfers that use RGB as the sole input modality. In Appendix E.4, we explore pre-training using mixtures of different masking strategies, which enables us to train models that perform well in both regimes. + +Table 2: Pre-training input and target modalities ablation: The choice of pre-training tasks and modalities influences what representations the model learns, and how well it can be transferred to novel tasks and modalities. Here, Geometric = RGB + Depth + Normals and Semantic = RGB + Segmentation + CLIP + Detection + Captions. We show the average losses (↓) for several task categories and compute the average rank and best losses for "RGB" and "All" inputs separately. 
The reference model setting is indicated by $\triangleright$ and results that lie within two reference model standard deviations of the best result are bolded. Performing 4M pre-training on all input and target modalities is the most versatile choice, if the optimal set of pre-training modalities for any given downstream task is unknown. + +
| Pre-training inputs | Pre-training targets | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RGB | RGB | 3.14 | 6.21 | 5.03 | 6.94 | 6.15 | 5.49 | 8.00 |
| RGB | Depth | 3.11 | 6.06 | 4.72 | 6.89 | 5.84 | 5.32 | 4.00 |
| RGB | Normals | 3.12 | 6.02 | 4.66 | 6.83 | 5.87 | 5.30 | 3.20 |
| RGB | Segmentation | 3.17 | 5.94 | 4.84 | 6.86 | 5.89 | 5.34 | 4.60 |
| RGB | CLIP | 3.07 | 6.11 | 4.83 | 6.85 | 5.94 | 5.36 | 4.80 |
| RGB | Detection | 2.78 | 6.11 | 5.03 | 7.07 | 6.24 | 5.45 | 7.20 |
| RGB | Captions | 3.45 | 6.55 | 5.92 | 7.35 | 6.86 | 6.03 | 10.00 |
| RGB | Geometric | 3.11 | 6.08 | 4.70 | 6.88 | 5.85 | 5.32 | 3.80 |
| RGB | Semantic | 2.88 | 5.99 | 4.86 | 6.97 | 6.06 | 5.35 | 5.40 |
| RGB | All | 2.90 | 5.99 | 4.74 | 6.91 | 5.93 | 5.29 | 4.00 |
| All | RGB | 3.21 | 6.20 | 5.07 | 6.75 | 5.85 | 5.42 | 3.80 |
| All | CLIP | 3.19 | 6.18 | 5.06 | 6.80 | 5.88 | 5.42 | 3.80 |
| All | Geometric | 3.20 | 6.13 | 4.98 | 6.72 | 5.76 | 5.36 | 1.80 |
| All | Semantic | 3.05 | 6.13 | 5.16 | 6.77 | 5.87 | 5.39 | 3.40 |
| $\triangleright$ All | All | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 2.20 |
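The per-modality token sampling ablated below draws modality proportions from a symmetric Dirichlet distribution. A minimal sketch of that sampling; constructing the Dirichlet draw from normalized Gamma samples is a standard technique, and the modality names and total budget here are illustrative assumptions:

```python
import random

random.seed(0)

def sample_token_budget(modalities, total_tokens, alpha=0.2):
    """Split a token budget across modalities with proportions drawn from
    a symmetric Dirichlet(alpha). Low alpha concentrates the budget on a
    few modalities; high alpha spreads it nearly evenly (cf. Figure 6a)."""
    # Dirichlet sample via normalized Gamma(alpha, 1) draws
    draws = [random.gammavariate(alpha, 1.0) for _ in modalities]
    total = sum(draws)
    props = [d / total for d in draws]
    # Round proportions to integer token counts, assigning the rounding
    # remainder to the largest share so counts always sum to total_tokens
    counts = [int(total_tokens * p) for p in props]
    counts[props.index(max(props))] += total_tokens - sum(counts)
    return dict(zip(modalities, counts))

budget = sample_token_budget(
    ["rgb", "depth", "normals", "segmentation", "clip", "caption", "bbox"],
    total_tokens=128,
)
assert sum(budget.values()) == 128
```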
+ +![](images/f807adbbf0aa09395686ec89d86efddd1ef839f6a6e4893061410ede0fc86503.jpg) +Figure 6: Ablations results: We ablate several key design choices of the multimodal masking objective (in blue), and study how well 4M scales (in green). We show the overall average losses $(\downarrow)$ and highlight the reference model setting in blue / green. A detailed breakdown of the task losses is provided in Appendix E.4 and Appendix E.5. + +# 5.3 Multimodal masking strategy + +Multimodal masking is at the core of 4M, so in this section, we ablate how modality tokens should be sampled, and how many tokens we should encode and decode. + +Modality proportions in masking. We ablate various choices of Dirichlet parameter $\alpha$ , both for the input and target sampling. If $\alpha$ is low, the sampling procedure will often select cases where most of the tokens are sampled from only one modality. If $\alpha$ is high, however, most samples will contain tokens from all modalities to equal proportions. Results in Figure 6 (a) show that uniformly sampling over the simplex performs best on average, but not by a large margin. + +Input masking budget. The difficulty of the multimodal masked modeling task is largely determined by the number of visible (non-masked) input tokens. Encoding only the small set of visible tokens can significantly improve training efficiency [48]. Figure 6 (b) shows that with a fixed training token budget, training with 128-256 input tokens performs well. + +Target masking budget. We can decide to decode merely a small random subset of all remaining masked-out tokens, which is especially important for enabling efficient multimodal training with large decoders. As Figure 6 (c) shows, decoding only a small random subset of all targets performs well (for a fixed number of total training tokens), while also reducing computational costs. + +# 5.4 How well does 4M scale? + +Scalability is a key property that models and training objectives should have. 
We therefore ablate the following three axes: To ablate the dataset size, we train 4M models on various subsets of CC12M, down to 1/64th of the full dataset. To ablate the training length, we vary the total number of tokens seen. We define tokens seen as the total number of both input tokens and target tokens the model was trained on. To ablate the model size, we train different sizes of 4M models, ranging from Tiny (4M-Ti) with 24M parameters to Large variants (4M-L) with 705M parameters. Exact model specifications are given in Appendix C.1. Figure 6 (d), (e), (f) show that 4M scales with dataset size, training length, and model size, respectively.

For additional ablations on the architectural and training design choices of 4M, see Appendix E.

# 6 Related Work

Large language models have been demonstrated to be capable of performing a diverse range of tasks out of the box [86, 12, 81, 25, 45] by training on large datasets with simple objectives [30, 84, 86, 106]. In vision, however, many scaling efforts have instead focused on training specialized models on a single task and modality, such as predicting masked RGB pixels [20, 31, 4, 48, 118, 34], discrete tokens [7, 132], or other (deep) features [117, 6, 82, 114, 36, 70] from RGB inputs. Training models on multiple tasks [16, 33, 63, 42, 92, 11] and modalities [85, 122, 76, 59, 133, 1, 3, 57, 43], or both [54, 103, 114, 5, 44], usually requires modality-specific modeling choices, making it difficult to extend these methods.

While some recent works aim to consolidate various modalities and tasks by representing them as images [77, 8, 115], these approaches have limitations when dealing with modalities that cannot be readily converted into images, such as text or neural network feature maps. Instead, 4M adopts the approach of Pix2Seq [21, 22] and Unified-IO [74], which addresses these issues by unifying the representation space on which models are trained through tokenization [110, 35, 30].
However, unlike methods like Unified-IO which operate on a single RGB image tokenizer, 4M's ability to work with multiple modality-specific tokenizers enables scaling to visual modalities beyond those that can be represented as images, such as neural network feature maps. 4M also builds upon the multimodal masking approach of MultiMAE [5] and extends it beyond image-like modalities.

Both token-based generative models [88, 123, 17, 18, 65] and diffusion models [89, 79, 95, 97] have mostly been limited to text-to-image generation. While there are works that enable a greater amount of control by conditioning on additional modalities, they are either very limited in the number of ways they can be conditioned on [58, 40, 113, 119, 62, 9, 128], or require training a separate model for each new modality [130]. 4M flexibly allows for conditioning on any subset of the training modalities, and can, likewise, generate all these modalities, unlocking powerful generative editing capabilities.

# 7 Conclusion and Limitations

4M is a generalist framework for training multimodal and multitask models that not only perform many key vision tasks out of the box, but also demonstrate strong transfer results to a wide range of downstream tasks. 4M's in-painting and any-to-any generation capabilities enable it to perform a wide range of multimodal generative and expressive editing tasks - all using a single model. In the following, we discuss some limitations of our approach and potential future work.

Additional modalities. While 4M already includes a number of modalities, from semantics-based and geometry-based to text-based, bringing in additional modalities could vastly improve its usability. For example, training on features extracted from a large language model has been shown to significantly boost text-to-image generation capabilities [97, 123, 18].
Introducing modalities like edges, sketches, or human poses has the potential to greatly improve the expressiveness of 4M [130], and 4M may also be extended to videos or multi-view imagery to unlock spatio-temporal generation and editing capabilities. We anticipate that 4M will extend conveniently to such new modalities.

Tokenizer quality. 4M can benefit from better tokenizers, both in terms of generation and transfer results, but there are limits to the amount of information that can be encoded in tokenized patches. Operating on tokens that cover a smaller image region, or operating on higher-resolution images, may improve image quality, but is expected to be computationally more expensive. Generally, improvements along this direction are expected to directly boost the performance of 4M.

Dataset size and quality. While our binding dataset choice of CC12M is standard, 4M can benefit from training on significantly larger datasets [99, 14, 39]. Web-scraped image-text datasets contain many low-quality images, as well as captions that are not related to the image content. Fine-tuning 4M on a more curated dataset like LAION-Aesthetics V2 [99], or tuning the model using reinforcement learning [83], could significantly improve generation quality and diversity. We leave this for future work.

Acknowledgements. We thank Hanlin Goh and Elmira Amirloo Abolfathi for their valuable feedback on earlier versions of this manuscript.

# References

[1] Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, and Luke Zettlemoyer. CM3: A causal masked multimodal model of the internet. arXiv:2201.07520, 2022. 10
[2] Armen Aghajanyan, L. Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, and Luke Zettlemoyer. Scaling laws for generative mixed-modal language models. In International Conference on Machine Learning, 2023.
3 +[3] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karen Simonyan. Flamingo: a visual language model for few-shot learning. In Advances in Neural Information Processing Systems, 2022. 10 +[4] Sara Atito, Muhammad Awais, and Josef Kittler. SiT: Self-supervised vision transformer. arXiv:2104.03602, 2021. 10 +[5] Roman Bachmann, David Mizrahi, Andrei Atanov, and Amir Zamir. MultiMAE: Multi-modal multi-task masked autoencoders. In European Conference on Computer Vision, 2022. 2, 3, 4, 5, 6, 7, 8, 10, 32, 35, 38, 39, 40 +[6] Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. Data2vec: A general framework for self-supervised learning in speech, vision and language. In International Conference on Machine Learning, 2022. 10 +[7] Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEiT: BERT pre-training of image transformers. In International Conference on Learning Representations, 2022. 10, 40 +[8] Amir Bar, Yossi Gandelsman, Trevor Darrell, Amir Globerson, and Alexei Efros. Visual prompting via image inpainting. In Advances in Neural Information Processing Systems, 2022. 10 +[9] Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. MultiDiffusion: Fusing diffusion paths for controlled image generation. In International Conference on Machine Learning, 2021. 10 +[10] Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 2000. 8, 38 +[11] Deblina Bhattacharjee, Tong Zhang, Sabine Süssstrunk, and Mathieu Salzmann. MuT: An end-to-end multitask learning transformer. In Conference on Computer Vision and Pattern Recognition, 2022. 
10 +[12] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. arXiv:2005.14165, 2020. 1, 10 +[13] Neil Burgess, Jelena Milanovic, Nigel Stephens, Konstantinos Monachopoulos, and David Mansell. Bfloat16 processing for neural networks. In Symposium on Computer Arithmetic, 2019. 32, 33 +[14] Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. COYO-700M: Image-text pair dataset. https://github.com/kakaobrain/coyo-dataset, 2022. 4, 10 +[15] Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: Delving into high quality object detection. In Conference on Computer Vision and Pattern Recognition, 2018. 6, 34 +[16] Rich Caruana. Multitask learning. Machine Learning, 1997. 10 +[17] Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T. Freeman. MaskGIT: Masked generative image transformer. In Conference on Computer Vision and Pattern Recognition, 2022. 2, 5, 6, 10, 21 + +[18] Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, José Lezama, Lu Jiang, Ming Yang, Kevin P. Murphy, William T. Freeman, Michael Rubinstein, Yuanshen Li, and Dilip Krishnan. Muse: Text-to-image generation via masked generative transformers. In International Conference on Machine Learning, 2023. 2, 6, 7, 10, 20, 21, 45 +[19] Soravit Changpinyo, Piyush Kumar Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Conference on Computer Vision and Pattern Recognition, 2021. 
2, 4, 8, 30, 37, 46 +[20] Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In International Conference on Machine Learning, 2020. 10 +[21] Ting Chen, Saurabh Saxena, Lala Li, David J. Fleet, and Geoffrey Hinton. Pix2seq: A language modeling framework for object detection. In International Conference on Learning Representations, 2022. 2, 4, 10, 30, 36 +[22] Ting Chen, Saurabh Saxena, Lala Li, Tsung-Yi Lin, David J Fleet, and Geoffrey E Hinton. A unified sequence interface for vision tasks. In Advances in Neural Information Processing Systems, 2022. 2, 10, 30 +[23] Weifeng Chen, Shengyi Qian, David Fan, Noriyuki Kojima, Max Hamilton, and Jia Deng. OASIS: A large-scale dataset for single image 3d in the wild. In Conference on Computer Vision and Pattern Recognition, 2020. 44 +[24] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Conference on Computer Vision and Pattern Recognition, 2022. 30 +[25] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C. Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. 
Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diazor, Orhan First, Michele Catasta, Jason Wei, Kathleen S. Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. PaLM: Scaling language modeling with pathways. arXiv:2204.02311, 2022. 1, 10, 42 +[26] Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations, 2020. 34, 35 +[27] Ekin Dogus Cubuk, Barret Zoph, Jon Shiens, and Quoc Le. Randaugment: Practical automated data augmentation with a reduced search space. In Advances in Neural Information Processing Systems, 2020. 34 +[28] Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme Ruiz, Matthias Minderer, Joan Puigcerver, Utku Evci, Manoj Kumar, Sjoerd Van Steenkiste, Gamaeldin Fathy Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmijn Bastings, Mark Collier, Alexey A. Gritsenko, Vighnesh Birodkar, Cristina Nader Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Pavetic, Dustin Tran, Thomas Kipf, Mario Lucic, Xiaohua Zhai, Daniel Keysers, Jeremiah J. Harmsen, and Neil Houlsby. Scaling vision transformers to 22 billion parameters. In International Conference on Machine Learning, 2023. 42 +[29] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, K. Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition, 2009. 6, 34 +[30] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 
BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics, 2019. 2, 3, 4, 8, 10, 30, 37, 40 +[31] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. 4, 10, 31, 33 + +[32] Ainaz Eftekhar, Alexander Sax, Roman Bachmann, Jitendra Malik, and Amir Roshan Zamir. Omnidata: A scalable pipeline for making multi-task mid-level vision datasets from 3d scans. In International Conference on Computer Vision, 2021. 4, 30 +[33] David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In International Conference on Computer Vision, 2015. 10 +[34] Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Herve Jegou, and Edouard Grave. Are large-scale datasets necessary for self-supervised pre-training? arXiv:2112.10740, 2021. 10 +[35] Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In Conference on Computer Vision and Pattern Recognition, 2021. 4, 10, 31 +[36] Yuxin Fang, Quan Sun, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. EVA-02: A visual representation for neon genesis. arXiv:2303.11331, 2023. 10, 30 +[37] Yuxin Fang, Wen Wang, Binhui Xie, Quan-Sen Sun, Ledell Yu Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. EVA: Exploring the limits of masked visual representation learning at scale. In Conference on Computer Vision and Pattern Recognition, 2023. 30 +[38] Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, and Kaiming He. Masked autoencoders as spatiotemporal learners. In Advances in Neural Information Processing Systems, 2022. 
33, 38, 43 +[39] Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander J. Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alexandros G. Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, and Ludwig Schmidt. DataComp: In search of the next generation of multimodal datasets. arXiv:2304.14108, 2023. 4, 10 +[40] Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman. Make-A-Scene: Scene-based text-to-image generation with human priors. In European Conference on Computer Vision, 2022. 7, 10, 21, 30 +[41] Golnaz Ghiasi, Yin Cui, Aravind Srinivas, Rui Qian, Tsung-Yi Lin, Ekin D Cubuk, Quoc V Le, and Barret Zoph. Simple copy-paste is a strong data augmentation method for instance segmentation. In Conference on Computer Vision and Pattern Recognition, 2021. 34, 35, 36 +[42] Golnaz Ghiasi, Barret Zoph, Ekin Dogus Cubuk, Quoc V. Le, and Tsung-Yi Lin. Multi-task self-training for learning general representations. In International Conference on Computer Vision, 2021. 3, 4, 8, 10, 38 +[43] Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. ImageBind: One embedding space to bind them all. In Conference on Computer Vision and Pattern Recognition, 2023. 10 +[44] Rohit Girdhar, Alaaeldin El-Nouby, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. OmniMAE: Single model masked pretraining on images and videos. In Conference on Computer Vision and Pattern Recognition, 2023. 2, 10 +[45] Google. PaLM 2 technical report. https://ai.google/static/documents/palm2techreport.pdf, 2023.10 +[46] Priya Goyal, Piotr Dollár, Ross B. 
Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv:1706.02677, 2017. 31, 33, 34, 36, 37, 38, 43
[47] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. Mask R-CNN. Transactions on Pattern Analysis and Machine Intelligence, 2017. 6, 34
[48] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross B. Girshick. Masked autoencoders are scalable vision learners. In Conference on Computer Vision and Pattern Recognition, 2022. 2, 3, 5, 6, 9, 10, 30, 32, 38, 40, 43
[49] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv:1606.08415, 2016. 33, 38, 42
[50] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv:2207.12598, 2022. 7, 21

[51] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, 2020. 32
[52] Jonathan Ho, Chitwan Saharia, William Chan, David J. Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded diffusion models for high fidelity image generation. The Journal of Machine Learning Research, 2022. 20
[53] Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, and Tim Salimans. Autoregressive diffusion models. In International Conference on Learning Representations, 2022. 21
[54] Ronghang Hu and Amanpreet Singh. UniT: Multimodal multitask learning with a unified transformer. In International Conference on Computer Vision, 2021. 10
[55] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In European Conference on Computer Vision, 2016. 34, 35
[56] Po-Yao Huang, Hu Xu, Juncheng Billy Li, Alexei Baevski, Michael Auli, Wojciech Galuba, Florian Metze, and Christoph Feichtenhofer. Masked autoencoders that listen. In Advances in Neural Information Processing Systems, 2022.
3
[57] Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, and Furu Wei. Language is not all you need: Aligning perception with language models. arXiv:2302.14045, 2023. 10
[58] Xun Huang, Arun Mallya, Ting-Chun Wang, and Ming-Yu Liu. Multimodal conditional image synthesis with product-of-experts GANs. In European Conference on Computer Vision, 2022. 10
[59] Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier J Henaff, Matthew Botvinick, Andrew Zisserman, Oriol Vinyals, and Joao Carreira. Perceiver IO: A general architecture for structured inputs & outputs. In International Conference on Learning Representations, 2022. 10
[60] Jared Kaplan, Sam McCandlish, T. J. Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, and Dario Amodei. Scaling laws for neural language models. arXiv:2001.08361, 2020. 3
[61] Oğuzhan Fatih Kar, Teresa Yeo, Andrei Atanov, and Amir Zamir. 3D common corruptions and data augmentation. In Conference on Computer Vision and Pattern Recognition, 2022. 30
[62] Yuval Kirstain, Omer Levy, and Adam Polyak. X&Fuse: Fusing visual information in text-to-image generation. arXiv:2303.01000, 2023. 10
[63] Iasonas Kokkinos. UberNet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. In Conference on Computer Vision and Pattern Recognition, 2017. 10
[64] Alexander Kolesnikov, André Susano Pinto, Lucas Beyer, Xiaohua Zhai, Jeremiah Harmsen, and Neil Houlsby. UViM: A unified modeling approach for vision with learned guiding codes. In Advances in Neural Information Processing Systems, 2022.
2
[65] Tianhong Li, Huiwen Chang, Shlok Mishra, Han Zhang, Dina Katabi, and Dilip Krishnan. MAGE: Masked generative encoder to unify representation learning and image synthesis. In Conference on Computer Vision and Pattern Recognition, 2023. 6, 10, 21
[66] Yanghao Li, Hanzi Mao, Ross Girshick, and Kaiming He. Exploring plain vision transformer backbones for object detection. In European Conference on Computer Vision, 2022. 30, 33, 34
[67] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, 2014. 6, 8, 30, 36
[68] Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B. Tenenbaum. Compositional visual generation with composable diffusion models. In European Conference on Computer Vision, 2022. 7, 21
[69] Shikun Liu, Linxi (Jim) Fan, Edward Johns, Zhiding Yu, Chaowei Xiao, and Anima Anandkumar. Prismer: A vision-language model with an ensemble of experts. arXiv:2303.02506, 2023. 3
[70] Xingbin Liu, Jinghao Zhou, Tao Kong, Xianming Lin, and Rongrong Ji. Exploring target representations for masked autoencoders. arXiv:2209.03917, 2022. 10
[71] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In International Conference on Computer Vision, 2021. 30
[72] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A ConvNet for the 2020s. In Conference on Computer Vision and Pattern Recognition, 2022. 35
[73] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. 31, 33, 34, 35, 36, 37, 38
[74] Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, and Aniruddha Kembhavi. Unified-IO: A unified model for vision, language, and multi-modal tasks.
In International Conference on Learning Representations, 2023. 2, 4, 10 +[75] Troy Luhman and Eric Luhman. Improving diffusion model efficiency through patching. arXiv:2207.04316, 2022. 32 +[76] Arsha Nagrani, Shan Yang, Anurag Arnab, Aren Jansen, Cordelia Schmid, and Chen Sun. Attention bottlenecks for multimodal fusion. In Advances in Neural Information Processing Systems, 2021. 10 +[77] Charlie Nash, Joao Carreira, Jacob Walker, Iain Barr, Andrew Jaegle, Mateusz Malinowski, and Peter Battaglia. Transframer: Arbitrary frame prediction with generative models. arXiv:2203.09494, 2022. 10 +[78] NegPrompt. Negative prompt. https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Negative-prompt, 2022.21 +[79] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In International Conference on Machine Learning, 2021. 10 +[80] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, 2021. 31 +[81] OpenAI. GPT-4 technical report. arXiv:2303.08774, 2023. 10 +[82] Zhiliang Peng, Li Dong, Hangbo Bao, Qixiang Ye, and Furu Wei. BEiT v2: Masked image modeling with vector-quantized visual tokenizers. arXiv:2208.06366, 2022. 3, 6, 10, 30, 31, 38, 43 +[83] André Susano Pinto, Alexander Kolesnikov, Yuge Shi, Lucas Beyer, and Xiaohua Zhai. Tuning computer vision models with task rewards. In International Conference on Machine Learning, 2023. 10 +[84] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. 1, 10 +[85] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 
Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 2021. 3, 10, 30
[86] Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 2020. 1, 5, 8, 10, 32, 33, 36, 37, 42
[87] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. ZeRO: Memory optimizations toward training trillion parameter models. In International Conference for High Performance Computing, Networking, Storage and Analysis, 2020. 33
[88] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In International Conference on Machine Learning, 2021. 10, 31
[89] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. arXiv:2204.06125, 2022. 10
[90] René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. Transactions on Pattern Analysis and Machine Intelligence, 2020. 44
[91] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In International Conference on Computer Vision, 2021. 30, 35
[92] Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Giménez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, Tom Eccles, Jake Bruce, Ali Razavi, Ashley Edwards, Nicolas Heess, Yutian Chen, Raia Hadsell, Oriol Vinyals, Mahyar Bordbar, and Nando de Freitas. A generalist agent. Transactions on Machine Learning Research, 2022. 10
[93] Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. ImageNet-21K pretraining for the masses.
arXiv:2104.10972, 2021. 6, 34
[94] Mike Roberts and Nathan Paczan. Hypersim: A photorealistic synthetic dataset for holistic indoor scene understanding. In International Conference on Computer Vision, 2020. 4, 8, 36
[95] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Conference on Computer Vision and Pattern Recognition, 2022. 6, 10, 45
[96] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 2014. 6
[97] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Raphael Gontijo-Lopes, Burcu Karagol Ayan, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding. In Advances in Neural Information Processing Systems, 2022. 10
[98] Alexander Sax, Bradley Emi, Amir Roshan Zamir, Leonidas J. Guibas, Silvio Savarese, and Jitendra Malik. Mid-level visual representations improve generalization and sample efficiency for learning visuomotor policies. arXiv:1812.11971, 2018. 3, 8, 38
[99] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. LAION-5B: An open large-scale dataset for training next generation image-text models. In Advances in Neural Information Processing Systems, 2022. 4, 10
[100] Noam Shazeer. GLU variants improve transformer. arXiv:2002.05202, 2020. 33, 42
[101] Jie Shi, Chenfei Wu, Jian Liang, Xiang Liu, and Nan Duan.
DiVAE: Photorealistic images synthesis with denoising diffusion decoder. arXiv:2206.00386, 2022. 31 +[102] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from RGBD images. In European Conference on Computer Vision, 2012. 6 +[103] Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. FLAVA: A foundational language and vision alignment model. In Conference on Computer Vision and Pattern Recognition, 2022. 10 +[104] Linda B. Smith and Michael Gasser. The development of embodied cognition: Six lessons from babies. Artificial Life, 2005. 1 +[105] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021. 32 +[106] Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung, Dara Bahri, Tal Schuster, Steven Zheng, Denny Zhou, Neil Houlsby, and Donald Metzler. UL2: Unifying language learning paradigms. In International Conference on Learning Representations, 2023. 10 +[107] Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, and Phillip Isola. Rethinking few-shot image classification: a good embedding is all you need? In European Conference on Computer Vision, 2020. 8, 38 +[108] Hugo Touvron, Matthieu Cord, and Hervé Jégou. DeiT III: Revenge of the vit. In European Conference on Computer Vision, 2022. 6 +[109] Nilesh Tripuraneni, Michael I. Jordan, and Chi Jin. On the theory of transfer learning: The importance of task diversity. In Advances in Neural Information Processing Systems, 2020. 8, 38 +[110] Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In Advances in Neural Information Processing Systems, 2017. 
2, 4, 10, 30
[111] Igor Vasiljevic, Nick Kolkin, Shanyi Zhang, Ruotian Luo, Haochen Wang, Falcon Z Dai, Andrea F Daniele, Mohammadreza Mostajabi, Steven Basart, Matthew R Walter, et al. DIODE: A dense indoor and outdoor depth dataset. arXiv:1908.00463, 2019. 44
[112] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017. 4
[113] Andrey Voynov, Kfir Aberman, and Daniel Cohen-Or. Sketch-guided text-to-image diffusion models. In ACM SIGGRAPH Conference, 2023. 10
[114] Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, and Furu Wei. Image as a foreign language: BEiT pretraining for all vision and vision-language tasks. arXiv:2208.10442, 2022. 3, 10, 34
[115] Xinlong Wang, Wen Wang, Yue Cao, Chunhua Shen, and Tiejun Huang. Images speak in images: A generalist painter for in-context visual learning. In Conference on Computer Vision and Pattern Recognition, 2023. 10
[116] WebDataset. WebDataset. https://github.com/webdataset/webdataset, 2022. 43
[117] Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Loddon Yuille, and Christoph Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. In Conference on Computer Vision and Pattern Recognition, 2022. 10
[118] Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. SimMIM: A simple framework for masked image modeling. In Conference on Computer Vision and Pattern Recognition, 2022. 10
[119] Zhengyuan Yang, Jianfeng Wang, Zhe Gan, Linjie Li, Kevin Lin, Chenfei Wu, Nan Duan, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. ReCo: Region-controlled text-to-image generation. In Conference on Computer Vision and Pattern Recognition, 2023. 10
[120] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G.
Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, 2019. 21 +[121] Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu. Vector-quantized image modeling with improved VQGAN. In International Conference on Learning Representations, 2022. 31 +[122] Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. CoCa: Contrastive captioners are image-text foundation models. Transactions on Machine Learning Research, 2022. 10 +[123] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, Ben Hutchinson, Wei Han, Zarana Parekh, Xin Li, Han Zhang, Jason Baldridge, and Yonghui Wu. Scaling autoregressive models for content-rich text-to-image generation. Transactions on Machine Learning Research, 2022. 6, 7, 10, 20, 21 +[124] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In International Conference on Computer Vision, 2019. 34 +[125] Amir R Zamir, Alexander Sax, Nikhil Cheerla, Rohan Suri, Zhangjie Cao, Jitendra Malik, and Leonidas J Guibas. Robust learning through cross-task consistency. In Conference on Computer Vision and Pattern Recognition, 2020. 30 +[126] Amir Roshan Zamir, Alexander Sax, Bokui (William) Shen, Leonidas J. Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In Conference on Computer Vision and Pattern Recognition, 2018. 4, 8, 30, 36 +[127] Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. SoundStream: An end-to-end neural audio codec. Transactions on Audio, Speech, and Language Processing, 2022. 
31, 32 +[128] Fangneng Zhan, Yingchen Yu, Rongliang Wu, Jiahui Zhang, and Shijian Lu. Multimodal image synthesis and editing: A survey. arXiv:2112.13592, 2021. 10 +[129] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations, 2018. 34 +[130] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In International Conference on Computer Vision, 2023. 7, 10, 24 + +[131] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ADE20K dataset. In Conference on Computer Vision and Pattern Recognition, 2017. 6, 8, 36 +[132] Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. iBoT: Image BERT pre-training with online tokenizer. In International Conference on Learning Representations, 2022. 10 +[133] Xizhou Zhu, Jinguo Zhu, Hao Li, Xiaoshi Wu, Xiaogang Wang, Hongsheng Li, Xiaohua Wang, and Jifeng Dai. Uni-Perceiver: Pre-training unified architecture for generic perception for zero-shot and few-shot tasks. In Conference on Computer Vision and Pattern Recognition, 2022. 
10

# Appendix

# A Generative Capabilities & Probing the Learned Representation

A.1 Token super-resolution
A.2 Generation-specific specializations of 4M
A.3 Generation procedure details
A.4 Additional visualizations

# B Multimodal Dataset & Tokenization

B.1 Pseudo labeled multimodal training dataset details
B.2 Tokenization of captions & bounding boxes
B.3 Tokenization of dense modalities

# C Method & Training Details

C.1 Additional 4M architecture details
C.2 Training details

# D Transfer Experiments Details

D.1 Architectural differences with the baselines
D.2 Image classification on ImageNet-1K
D.3 Object detection and instance segmentation on COCO
D.4 Semantic segmentation on ADE20K
D.5 Depth estimation on NYUv2

# E Ablation Details & Results

E.1 Benchmark tasks
E.2 Reference model
E.3 Input modalities and target tasks
E.4 Multimodal masking strategy
E.5 How well does 4M scale?
E.6 Architectural design choices
E.7 Training design choices
E.8 Self-baselines
E.9 Comparison to the final models

# F Additional Evaluations

F.1 Out of the box (zero-shot) performance
F.2 Text-to-image performance

# G Broader Impact

G.1 Computational costs
G.2 Social impact

# A Generative Capabilities & Probing the Learned Representation

# A.1 Token super-resolution

For generating high-resolution images, it is common to first generate images at a lower resolution, followed by a super-resolution step [52, 18, 123]. These two-stage approaches often use a large model to generate a low-resolution image that contains most of the semantically and geometrically important information, followed by one or more smaller super-resolution models that fill in higher-resolution details.
Directly generating high-resolution images with Transformers is not only computationally expensive due to the long sequence lengths, but has also been shown to perform worse than cascaded approaches at generating semantically coherent images [18]. 4M is trained at a base resolution of $224 \times 224$ pixels, which corresponds to $14 \times 14$ tokens. At this low image resolution, fine details are difficult to model. We therefore train a super-resolution model, 4M-SR, starting with 4M-L as the base and fine-tune it to map from low-resolution image tokens to higher-resolution tokens. All images shown are generated at the base resolution using 4M-XL, and subsequently up-sampled to double the resolution using 4M-SR.

Training details. For the high-resolution tokens, each $448 \times 448$ pixel image is represented by $28 \times 28$ tokens. The super-resolution training scheme is very similar to the base-resolution 4M pre-training scheme, but the inputs to the model consist of a random subset of low-resolution and high-resolution tokens of all modalities, while the target is a random subset of all high-resolution tokens. At inference time, one or multiple full low-resolution modalities (and sequence-like modalities) are used as conditioning to decode the high-resolution targets step-by-step.

To train 4M-SR, we start from the pre-trained 4M-L and continue training it for an additional 100B tokens (20% of the pre-training length). For each sample, we randomly sample the input budget uniformly between 64 and 1024 to better accommodate the varying number of tokens throughout the generation process, and keep the target budget fixed at 1024. As full captions are otherwise rarely seen during training, we train on a mixture of masking strategies where 1/4 of the input samples are heavily skewed towards fully unmasked captions ($\alpha = 5.0$ for captions with $p_{\mathrm{mask}} = 0$, and $\alpha = 0.05$ for all other modalities).
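The budget sampling above can be sketched as follows. This is an illustrative reconstruction, not the released 4M code: the helper names are hypothetical, and we assume (as the $\alpha$ values above suggest) that per-modality shares of the input budget are drawn from a Dirichlet distribution, where a large concentration for captions skews samples towards nearly unmasked captions and a small concentration concentrates the remaining budget on few modalities.

```python
import random

def sample_dirichlet(alphas, rng):
    # Dirichlet sample via normalized Gamma draws.
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

def allocate_input_budget(modalities, rng, caption_skew=False):
    # Draw the total input budget uniformly in [64, 1024], then split it
    # across modalities with Dirichlet-distributed shares. alpha = 5.0 for
    # captions skews towards (nearly) fully unmasked captions; alpha = 0.05
    # spreads the rest over few modalities at a time.
    budget = rng.randint(64, 1024)
    alphas = [5.0 if caption_skew and m == "caption" else 0.05
              for m in modalities]
    shares = sample_dirichlet(alphas, rng)
    return {m: round(s * budget) for m, s in zip(modalities, shares)}
```

In a training loop, each sample would then keep only the allocated number of tokens per modality as input, with the fixed target budget (1024 here) sampled analogously.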
![](images/afc286cf14aed3b81d09a11f6502cdadcdd18c8c871b8c167df126a1dc2e5ca3.jpg)
Figure 7: Token super-resolution: We specialize a version of 4M-L for performing super-resolution in the space of tokens ( $14^{2} \rightarrow 28^{2}$ tokens) to double the resolution ( $224^{2} \rightarrow 448^{2}$ pixels) and in the process add more details to the generated images. Just like the base model, it can generate any high-resolution modality conditioned on any low-resolution modality, but we use it to super-resolve modalities we generated at the base resolution. Best viewed when zoomed in.

# A.2 Generation-specific specializations of 4M

Adapting 4M to specific use cases, by fine-tuning it either on a subset of the pre-training modalities or with a different token sampling distribution, can improve performance on select downstream tasks. In a similar manner, to optimize a 4M model for text-to-image generation, we can fine-tune it by skewing the token sampling distribution to include more captions as input. To train this caption-skewed model, we start from a pre-trained 4M model and continue training it for an additional 50B tokens (10% of the pre-training length). For each sample in a batch, we randomly sample the input budget uniformly between 64 and 256 to better accommodate the varying number of tokens throughout the generation process, and keep the target budget fixed at 256. To skew the model towards full captions, we train on a mixture of masking strategies where 1/3 of the input samples are heavily skewed towards fully unmasked captions ( $\alpha = 5.0$ for captions with $p_{\mathrm{mask}} = 0$ , and $\alpha = 0.05$ for all other modalities).

All text-to-image results in this section were generated using a caption-skewed 4M specialization.

# A.3 Generation procedure details

Token-based masked image models that were trained with variable masking rates can be directly used for image generation [17, 65, 18].
These models can be seen as order-agnostic autoregressive models [53] in which generation is performed by iteratively decoding tokens in any order. Unlike traditional autoregressive models which need to generate tokens one-by-one in a predetermined order, these models are able to speed up inference by parallelizing the decoding process because the distribution over each masked token can be predicted at the same time. + +Generation schedule. The generation process is guided by a generation schedule that outlines a sequence of generation steps. Each step in the sequence determines the target modality, the decoding scheme (described below), the number of tokens to decode, the temperature, and the top-k and top-p sampling values. + +Decoding schemes. A crucial feature of our method is the flexibility to incorporate different decoding schemes, such as for different target modalities. We describe these decoding schemes below: + +- MaskGIT [17] (for image-like modalities). This parallel decoding scheme generates the entire image in a pre-determined number of steps (see Figure 3). At every prediction step, we encode all visible tokens and decode all masked out tokens. We then sample from their predicted distributions and choose the $n$ most confident tokens, where $n$ is determined by the generation schedule. Finally, we add the $n$ predicted tokens to the input and repeat the previous steps. +- Random order autoregressive (ROAR) (for image-like modalities). This scheme is conceptually similar to the decoding scheme proposed in XLNet [120]. Unlike MaskGIT, we do not decode all masked-out tokens at every step, but instead randomly select $n$ tokens to decode. +- Left-to-right autoregressive (for sequences). Besides images, we are able to generate sequence modalities such as captions and bounding boxes by autoregressively decoding them with the Transformer decoder of 4M (see Figure 3). + +Chained generation. 
Because any generated modality can also be used as conditioning, we can perform chained generation of several modalities, one after another, with each fully generated modality added to the conditioning of the next (see Figure 3). Performing generation in this chained manner results in each additional modality being generated consistently with the previous ones, as shown in Figures 9 and 10. In addition, we found that for certain generative tasks, such as caption-to-RGB, generating intermediate modalities such as CLIP tokens can further improve image fidelity (see Figure 9).

Classifier-free guidance. Classifier-free guidance is crucial for improving both image fidelity and how well the generation matches the conditioning. It is most commonly used in diffusion models [50], but can be applied in token-based models as well [40, 123, 18]. We perform classifier-free guidance by computing a weighted combination of the logits of a forward pass with the conditioning and one without the conditioning:

$$
\text{logits}_{\text{guided}} = \text{logits}_{\text{uncond}} + w \left(\text{logits}_{\text{cond}} - \text{logits}_{\text{uncond}}\right).
$$

Here, $w$ is the guidance scale. When performing chained generation, we add each fully generated modality to the set of guided modalities.

Multimodal guidance. While guidance has been shown to significantly improve image quality, it can still happen that generative models ignore parts of the input, unpredictably focus on some parts more than others, or generate undesired concepts. Negative prompting [78] is a popular way of keeping the model from generating undesired concepts. Liu et al.
[68] show that performing compositional guidance on multiple conditions can further improve text-image similarity. In a similar way, we can perform compositional generation by weighting different (parts of) modalities by different continuous amounts – even negatively. We can do this by computing a weighted sum of the logits of an unconditional case and the logits of each conditional case:

$$
\text{logits}_{\text{guided}} = \text{logits}_{\text{uncond}} + \sum_{i=1}^{n} w_{i} \left(\text{logits}_{\text{cond},i} - \text{logits}_{\text{uncond}}\right).
$$

Here, $w_{i}$ are the guidance scales for the different conditions. For example, this allows 4M to generate semantically or geometrically similar variants of images by weakly conditioning on their extracted segmentation, normal, or depth maps (see Figure 13). It can further be used for fine-grained steerability of multimodal edits (see Figures 14, 15, and 16), or to use negative weighting to avoid certain concepts from being generated (see Figure 17).

# A.4 Additional visualizations

$\mathbf{RGB} \rightarrow \mathbf{X}$ . The model can solve several common $\mathbf{RGB} \rightarrow \mathbf{X}$ vision tasks out of the box, such as predicting normals, depth, semantic segmentation maps, or performing object detection and captioning. In Figure 8, we show examples of this functionality.

![](images/c5e35e27de01d38eeac455e0782fdd64b6f172b715ede1234309980d4ec1de.jpg)
Figure 8: RGB $\rightarrow$ X: 4M can perform several common vision tasks, such as predicting surface normals, depth, semantic segmentation maps, or performing object detection and captioning.

Chained generation. For certain generation tasks, like text-to-image, we observed that generating intermediate modalities, such as CLIP, before generating the RGB image can improve image fidelity. This leads to a form of progressive self-conditioning, which our model particularly enables as it includes several modalities.
We demonstrate this phenomenon in Figure 9. + +![](images/aafaea779f877f58ca7e549719b0103863c5b37c665d54ab6779da188c9d41c4.jpg) +Figure 9: Chained generation: First, we generate intermediate modalities from the caption (CLIP, depth, etc.). Then we generate the RGB images by conditioning on both the caption and the intermediate modality, i.e., caption $\rightarrow$ intermediate modality $\rightarrow$ RGB. We compare this to directly predicting the RGB image from the caption, i.e., caption $\rightarrow$ RGB, in the 'none' column. We observe that certain intermediate modalities like CLIP can improve image fidelity. As 4M includes multiple modalities, it particularly enables this form of progressive self-conditioning and engineering a sequence of modalities for a better final generation. + +Any-to-any generative modeling. 4M can be used to generate any modality from any other (and partial) subset of modalities. Figure 10 displays examples where all modalities are generated from a single input, and Figure 11 shows several examples of generating different modalities using two full inputs. Subsequent figures will make use of this functionality and show masked predictions and examples with 1-3 different conditionings. + +![](images/fb2df7a759100acb101f4ad4f685c1b992bd210c96f7e98602ab81e77f6aedb8.jpg) +Figure 10: One-to-all generation: Starting from a single input modality, we use 4M to generate the corresponding outputs for all others. Each row shows the outputs from a different starting modality, all from the same original sample. Chained generation is used to ensure consistent outputs. + +![](images/462e80c3cd13be96611ead4d744199923e13cac637b0a419c28167927c900ee9.jpg) +Figure 11: Any-to-any generation: $4\mathrm{M}$ can perform any-to-any generation, meaning that any modality can be generated by conditioning on any subset of modalities. + +Conditional in-painting. 
By virtue of being trained on a masking objective, 4M can be used for conditional and unconditional in-painting (see Figure 12). In-painting coupled with 4M's ability to predict any modality from any other and flexible multimodal conditioning is the basis for several multimodal editing capabilities (see Figure 4 middle and bottom). + +![](images/d4fab01d6aae5e610c43c90f82450d2b1d8d238480517d4afcfa59e96166bb93.jpg) +Figure 12: Conditional in-painting: 4M can be used for conditional in-painting by masking out a selected image region and generating the remaining tokens using the masked image and a caption (or other modalities) as conditions. + +Grounded generation of image variations. 4M can be used to generate different versions of images by grounding the generation in some aspect extracted from a reference input image (see Figure 13). For example, we can take an image and generate variations that respect the geometry of the original image but may contain completely different colors by conditioning only on extracted surface normals. This is conceptually similar to ControlNet [130], but 4M is able to perform this using a single model trained in one single run, and 4M is able to flexibly use any combination of input modalities (see Figure 11). In addition, unlike ControlNet which requires external networks to extract the conditioning modalities, 4M models are able to generate all of them by themselves. 
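The MaskGIT-style parallel decoding described in Appendix A.3 can be sketched as follows. This is an illustrative toy, not 4M's implementation: `logits_fn` stands in for the Transformer and is assumed to return a probability distribution over the vocabulary for every position, given the partially decoded token sequence.

```python
import math
import random

def maskgit_decode(logits_fn, seq_len, vocab_size, steps, rng):
    # MaskGIT-style parallel decoding: at every step, sample all masked
    # positions, keep only the n most confident samples, and feed them
    # back as inputs for the next step.
    tokens = [None] * seq_len          # None marks a masked position
    per_step = math.ceil(seq_len / steps)
    while any(t is None for t in tokens):
        probs = logits_fn(tokens)      # per-position distributions
        candidates = []
        for i, t in enumerate(tokens):
            if t is None:
                dist = probs[i]
                tok = rng.choices(range(vocab_size), weights=dist)[0]
                # confidence = probability of the sampled token
                candidates.append((dist[tok], i, tok))
        candidates.sort(reverse=True)
        for _, i, tok in candidates[:per_step]:
            tokens[i] = tok
    return tokens
```

The random-order autoregressive (ROAR) variant differs only in that the `n` positions to commit are chosen at random rather than by confidence.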
+ +![](images/0e39e2e20189c65ba2f7e598b5aa5cdddcfa66f524037f286fd80a57e5102a50.jpg) + +![](images/06f366ff116a8a0c34d353bd6a12663237ef56dff252c92ddf6e5262b04c7cad.jpg) + +![](images/482049fa36d9524e41642266a2f15e6b44fd860020bd3e5364e091f65b0b501b.jpg) + +![](images/e10b5270713182885ee264dfd1559582b2cef18b61ae5b54eb9b5b72f87a73e6.jpg) + +![](images/4bb48396b334801143476f4f2b2a81d2e0fb37bae04af75297d772ecb5e65717.jpg) + +![](images/ca81061a62acda3b4444228dcf57f9cfda8a96ac4ce236e3e4c9c28a2d6db00b.jpg) + +![](images/dda8817cc7025531c4d2772f43a9fae23e29641340055f9bb9c8607282849f13.jpg) +Figure 13: Grounded generation of image variations: 4M can be used to generate variations of a reference image by conditioning on a modality extracted from the original image. This allows for generation of diverse images that are similar geometrically, semantically, or in another (user-specified) aspect. + +![](images/f9a17199a013a7dcb67c95fa927296eceb8dbd11d790b4d1f09e9f453baddf42.jpg) + +![](images/9ce25ce54c2ac5372400beafbc9103830a47af12796bed988c4aff39c3015470.jpg) + +Multimodal guidance. Multimodal guidance, as explained in Appendix A.3, can be used for various fine-grained editing tasks. We show here several demonstrations of how it can be used to (1) use weak/strong weighting to generate images that loosely/strongly match a certain input modality, and (2) avoid the generation of certain undesired concepts via negative weighting. + +![](images/da79351cc861f4156cf0a330bf45f1868f06dddfa7a9eae72fa9f6f766c0c0ed.jpg) +Figure 14: Multimodal guidance: By weakly conditioning on geometry (here surface normals) or semantics (here semantic segmentation maps), and strongly conditioning on a caption, we can generate image variations that are only loosely grounded in some aspect of the original image. 
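The weighted logit combination behind multimodal guidance, guided = uncond + Σᵢ wᵢ (condᵢ − uncond), reduces to a few lines. A minimal list-based sketch for clarity (real implementations operate on logit tensors; the function name is illustrative):

```python
def guided_logits(logits_uncond, conds):
    # conds is a list of (weight, logits) pairs; a negative weight steers
    # the generation away from that condition (negative prompting).
    guided = list(logits_uncond)
    for w, logits_cond in conds:
        for j, (c, u) in enumerate(zip(logits_cond, logits_uncond)):
            guided[j] += w * (c - u)
    return guided
```

With a single condition and weight $w$, this reduces to standard classifier-free guidance.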
+ +![](images/8b630ba31c48a9d292edeeb61b527fcdd94e80947e95fdcfed564b79f85943e8.jpg) +Figure 15: Multimodal guidance: Similar to Figure 14, we show here multimodal guidance with captions and either a normal or semantic segmentation map. We keep the weight of the caption fixed and vary the weight of the other modality. The examples demonstrate how a user can effectively control to which degree the generation should use a certain conditioning. + +![](images/35f5a77507cebe1c90ae56a9364b0a60e75987840340e0a3a87218dd2054c62a.jpg) +Figure 16: Multimodal guidance: In this example, we weakly condition on a masked RGB image, and strongly on a caption and bounding box. The generations follow the caption and bounding box precisely, while showing some degree of variation for the rest of the generated image. + +![](images/a66ce15ec4313e9176cae8d1d541f1e3b92cfe110da5b969137c1ed705069097.jpg) +Figure 17: Multimodal guidance: Multi captioning with negative weighting. (Top of each figure): Standard text-to-image generation can create pleasing results, but the user might want greater control over the generation to counteract dataset biases (e.g., most generated buses are yellow), or may have envisioned a different image. (Bottom of each figure): Enabled by the process described earlier, by adding an additional caption with a negative guidance weight, we can steer which concepts are generated, and which are not. + +Probing the learned representation. We can probe what representations 4M models learned either by performing transfers to various downstream tasks or manipulating parts of the input and observe how 4M predicts some readout modality. Concretely, we perform these representation probes by keeping the entire input fixed and manipulating only a single aspect of it. + +![](images/b9765129247334ae076a134cb7f98a94624cc5cade4fea7a3de4ba0363f94423.jpg) +Figure 18: Probing 4M representations: In this example, we condition the generation on several bounding boxes. 
For each of the generations inside the blue cells, we keep all bounding boxes fixed, except for one (marked in red shade), which we move around and resize. Depending on where the bounding box is placed, 4M implicitly infers the semantic and geometric properties of the object. For example, placing a bicycle bounding box in front of a bed bounding box generates a real bicycle, while placing it above generates pictures of bicycles. + +![](images/69a6b60664f5a13bd815ed9c857f335699531c016934f1fa59a9bfae2ffd8f0f.jpg) +Figure 19: Probing 4M representations: Here, we condition the generation on segmentation maps in which we manipulate a single segment and keep all others fixed. 4M creates plausible generations, no matter what class the segment was changed to. + +# B Multimodal Dataset & Tokenization + +We pseudo label a large aligned multimodal dataset using CC12M [19] and several pre-trained networks to map RGB images to pseudo labels. To enable training a single Transformer on a large number of vastly different modalities, we resort to tokenization. Concretely, we train modality-specific tokenizers, allowing us to map different modalities to sequences or sets of tokens. + +# B.1 Pseudo labeled multimodal training dataset details + +We trained all 4M models using an aligned multimodal dataset that we created by pseudo labeling CC12M. CC12M already contains aligned RGB images and captions, and we pseudo labeled the remaining tasks using powerful off-the-shelf models. + +Surface normals & depth. To introduce geometric priors and to enable generation and transfer learning conditioned on measured or estimated geometric modalities, we pseudo labeled both surface normals and scene depth. We used DPT-Hybrid [91] models trained on Omnidata [32] using cross-task consistency [125] and 3D data augmentations [61] to predict normals and depth from the CC12M RGB images. 
To encode images using DPT, we resized their heights and widths to the closest multiple of 32, while keeping the maximum side length below 768 to avoid prediction degradation from overly large image sizes. We found that even though the Omnidata DPT models were mostly trained on simulated and indoor scenes, they generalize well to a wide range of scenes.

Semantic segmentation. Semantic segmentation maps introduce dense semantic priors and allow for fine-grained creative control when conditioned on during generation [40]. We pseudo labeled semantic segmentation maps using a Mask2Former [24] with a SwinB [71] backbone trained on COCO [67] panoptic segmentation. We select the labels by taking the argmax over the predicted semantic classes.

Bounding boxes. To add object bounding boxes to the training data, we use a ViTDet ViT-H model [66] initialized from MAE weights [48] and fine-tuned on COCO [67]. We filter the detected bounding boxes by removing all instances with a confidence score below 0.6.

CLIP feature maps. Tokenized CLIP [85] feature maps have been demonstrated to be powerful targets for masked image models [82, 37, 36]. We use the ViT-B/16 visual backbone of a CLIP-B16 model to extract dense feature maps from its last Transformer layer. To visualize a $H \times W \times D$ dimensional CLIP feature map, we project all $H \times W$ entries onto the first three principal components computed from said feature map, interpreting the normalized values as R, G, and B colors [126].

# B.2 Tokenization of captions & bounding boxes

We follow Pix2Seq [21, 22] and treat detection as a sequence prediction problem. For example, a scene containing a bounding box around a cat and a plant could be parameterized as xmin=0.15 ymin=0.3 xmax=0.65 ymax=0.5 cat xmin=0.75 ymin=0.3 xmax=0.95 ymax=0.8 potted plant [EOS]. In practice, we model the corner coordinates (i.e.
the minimum and maximum $x$ and $y$ coordinates) using a resolution of 1000 special tokens per coordinate: we represent bounding boxes using xmin=0, ..., xmin=999, ymin=0, ..., ymin=999, xmax=0, ..., xmax=999, ymax=0, ..., ymax=999. In contrast to Pix2Seq, our method involves masking parts of the object sequence, which requires two modifications to the sequence construction pipeline to make this task more manageable: First, we order the objects in the sequence by their distance to the origin. Second, unlike in Pix2Seq, where corner coordinates share tokens, we assign separate tokens to each of the four corner coordinates, resulting in a vocabulary four times larger than Pix2Seq's. Assigning separate tokens to each corner coordinate ensures that the meaning of a token is not ambiguous when surrounding coordinates are masked out.

We jointly fit a WordPiece [30] text tokenizer on all captions and the 4000 special bounding box tokens. COCO class labels for the bounding boxes are treated as special tokens, meaning that if they appear in any caption, they get mapped to the same tokens. The joint text and bounding box vocabulary has a size of 30K.

# B.3 Tokenization of dense modalities

We tokenize all dense modalities using variants of vector-quantized autoencoders (VQ-VAEs) [110]. To avoid blurry and unrealistic reconstructions, we train the tokenizers for RGB, normals, and depth with diffusion decoders. All tokenizers are first trained for 100 epochs on an ImageNet-1K version of the pseudo labeled dataset (see Appendix B.1), after which they are trained for a further 15 epochs on the pseudo labeled CC12M dataset. We follow up the training at the base resolution of $224^2$ with a short multi-resolution fine-tuning step to adapt the tokenizers up to resolution $448^2$.

The base resolution training specifics of our tokenizers are shown in Table 3. We employ several common practices: Following Yu et al.
[121], we heavily reduce the dimensionality of the codebook and $l_{2}$-normalize the codes and encoded vectors to improve codebook utilization and reconstruction quality. In addition, to prevent the joint VQ-VAE and diffusion training from collapsing to a degenerate solution, we found it crucial to restart stale codebook entries, similar to Zeghidour et al. [127]. To that end, after every iteration we count the number of encoded vectors in a batch that map to each codebook entry, and replace (with vectors drawn randomly from the batch) any codes whose exponential moving average (EMA) count falls below a specified threshold $\mathrm{thresh}_{\mathrm{replace}}$. Because this value depends on the total batch size $B$, the number of tokens per image $N_{\mathrm{tokens}}$, and the codebook vocabulary size $N_{\mathrm{vocab}}$, we specify the EMA threshold via a coefficient $c_{\mathrm{replace}}$:

$$
\mathrm{thresh}_{\mathrm{replace}} = \frac{B \, N_{\mathrm{tokens}}}{c_{\mathrm{replace}} \, N_{\mathrm{vocab}}}
$$

Table 3: Tokenizer training settings. Base resolution $(224^{2})$ training configuration for the tokenizers used for training 4M. The base learning rate is specified for batch size 256.
| Configuration | RGB | Normals | Depth | Segmentation | CLIP |
|---|---|---|---|---|---|
| Codebook size | 16384 | 8192 | 8192 | 4096 | 8192 |
| Patch size | 16 × 16 | 16 × 16 | 16 × 16 | 16 × 16 | 16 × 16 |
| Latent dimension | 32 | 32 | 32 | 32 | 32 |
| EMA stale code coef. c_replace | 32 | 32 | 32 | 32 | 32 |
| Codebook EMA | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
| l2-normalized codes [121] | ✓ | ✓ | ✓ | ✓ | ✓ |
| Codebook weight | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| Commitment weight | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| Encoder architecture | ViT-B/16 | ViT-B/16 | ViT-B/16 | ViT-B/16 | ViT-B/16 |
| Decoder architecture | Patch-UNet | Patch-UNet | Patch-UNet | ViT-B/16 | ViT-B/16 |
| Diffusion decoder | ✓ | ✓ | ✓ | × | × |
| Loss function | MSE | MSE | Smooth L1 | Cross-entropy | Smooth L1 |
| Optimizer | AdamW [73] | AdamW [73] | AdamW [73] | AdamW [73] | AdamW [73] |
| Opt. momentum | β1, β2 = 0.9, 0.95 | β1, β2 = 0.9, 0.95 | β1, β2 = 0.9, 0.95 | β1, β2 = 0.9, 0.95 | β1, β2 = 0.9, 0.95 |
| Weight decay | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 |
| Base learning rate [46] | 5e-5 | 5e-5 | 5e-5 | 5e-5 | 5e-5 |
| Batch size | 640 | 640 | 640 | 640 | 1024 |
| Learning rate sched. | Cosine decay | Cosine decay | Cosine decay | Cosine decay | Cosine decay |
| Training epochs | 100 (IN1K) + 15 (CC12M) | 100 (IN1K) + 15 (CC12M) | 100 (IN1K) + 15 (CC12M) | 100 (IN1K) + 15 (CC12M) | 100 (IN1K) + 15 (CC12M) |
| Warmup epochs | 5 (IN1K) + 1 (CC12M) | 5 (IN1K) + 1 (CC12M) | 5 (IN1K) + 1 (CC12M) | 5 (IN1K) + 1 (CC12M) | 5 (IN1K) + 1 (CC12M) |
| Resolution | 224² | 224² | 224² | 224² | 224² |
| Random crop scale | (0.8, 1.0) | (0.8, 1.0) | (0.8, 1.0) | (0.8, 1.0) | (0.2, 1.0) |
| Random crop ratio | (0.75, 1.3333) | (0.75, 1.3333) | (0.75, 1.3333) | (0.75, 1.3333) | (0.75, 1.3333) |
| Horizontal flip | ✓ | ✓ | ✓ | ✓ | ✓ |
| Data type | float16 | float16 | float16 | bfloat16 | bfloat16 |
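The stale-entry restart rule described in Appendix B.3 can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name `stale_code_restart`, the nearest-neighbor assignment, and the choice to reset a restarted entry's EMA count to the threshold are all assumptions made for the sketch.

```python
import numpy as np

def stale_code_restart(codebook, encoded, ema_counts, *,
                       batch_size, n_tokens, c_replace=32, ema_decay=0.99):
    """Restart stale codebook entries based on an EMA of their usage counts.

    codebook:   (n_vocab, d) array of codebook entries
    encoded:    (batch_size * n_tokens, d) encoder outputs for one batch
    ema_counts: (n_vocab,) running EMA of per-entry usage counts
    """
    n_vocab = codebook.shape[0]

    # Assign each encoded vector to its nearest codebook entry and count usage.
    dists = ((encoded[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    counts = np.bincount(dists.argmin(axis=1), minlength=n_vocab)

    # Update the EMA of the usage counts.
    ema_counts = ema_decay * ema_counts + (1.0 - ema_decay) * counts

    # thresh_replace = B * N_tokens / (c_replace * N_vocab), as in Appendix B.3.
    thresh = batch_size * n_tokens / (c_replace * n_vocab)

    # Replace stale entries with encoder outputs drawn randomly from the batch.
    stale = np.flatnonzero(ema_counts < thresh)
    if stale.size:
        codebook[stale] = encoded[np.random.randint(0, len(encoded), stale.size)]
        ema_counts[stale] = thresh  # avoid restarting the same entry again right away
    return codebook, ema_counts
```

With the Table 3 settings for the RGB tokenizer (B = 640, 196 tokens per image, c_replace = 32, N_vocab = 16384), the threshold comes out to roughly 0.24 EMA counts per iteration.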
+ +Semantic segmentation & CLIP feature maps. We tokenize segmentation maps and CLIP feature maps using ViT-B [31] encoders and decoders (without the patch projection on the latter). The reconstruction loss for segmentation maps is cross-entropy, while for CLIP we choose the smooth L1 loss. While we do not use any data augmentations for training 4M, we decided to train the CLIP tokenizer using a minimum crop scale of 0.2 to keep open the option of performing fine-tuning with CLIP targets and strong augmentations [82]. For all other tokenizers, we use a more moderate minimum crop scale of 0.8, to avoid training them on very low-resolution but upscaled images that could affect image fidelity negatively. + +RGB, normals & depth. Training a VQ-VAE by simply minimizing a reconstruction loss can lead to blurry and unrealistic looking results [88], which is why VQ-GAN [35] proposes to use a discriminator to get more realistic results. We found VQ-GAN training to be unstable and decided to instead train a diffusion model as the decoder, similar to DiVAE [101]. Unlike DiVAE, which trains a diffusion model decoder using a frozen and pre-trained VQ-GAN encoder, we train both the encoder and diffusion decoder end-to-end and from scratch. As the decoder, we use a UNet [80] with four down and up layers, each consisting of three residual blocks and one up/down sampling block, with self-attention blocks at the two smallest down and up layers. + +For improved training and inference efficiency, we process $C$ -channel image patches of size $C \times 4 \times 4$ , rather than individual pixels, similar to Patched Diffusion Models [75]. We condition the diffusion decoder by concatenating the 32-dimensional codebook entries of the $14 \times 14$ encoded tokens with the noised input after patching, i.e. we concatenate the noised and patched image of shape $16C \times 56 \times 56$ with the upsampled latent tensor of shape $32 \times 56 \times 56$ . 
The output of the UNet gets reshaped back to $C \times 224 \times 224$ before computing the loss.

We found that predicting the noise leads to undesirable color shifts during inference, which is why we predict the clean image instead. In addition, we found that to prevent training from collapsing to a degenerate solution, it is crucial to restart dead/unused codebook entries [127]. We train the diffusion decoder using 1000 DDPM [51] steps and a linear noise schedule. At inference time, we sample from the decoder using 25 DDIM [105] steps.

Multi-resolution training. To train super-resolution specializations of 4M (see Appendix A.1), we need to be able to use the tokenizers both at the base resolution of $224 \times 224$ and at our chosen higher resolution of $448 \times 448$. Unlike convolutional VQ-VAEs, however, our ViT-based tokenizers do not perform well on resolutions different from the training resolution. For that reason, we perform multi-resolution training as a short fine-tuning step, initializing from the weights trained at the base resolution. During training, the resolution of every batch is sampled randomly between 224 and 448, in increments of 32. All tokenizers were fine-tuned for one CC12M epoch, except for RGB, which was fine-tuned for five epochs. We lowered the batch size per GPU to 16, except for semantic segmentation, for which we lowered it to 10. The remaining settings are the same as in Table 3.

# C Method & Training Details

# C.1 Additional 4M architecture details

For both the ablations and the longer training runs we train 4M models of different sizes. Table 4 lists their respective encoder and decoder depths, model dimension, number of attention heads, as well as the number of trainable parameters (excluding the embedding layers).

**Embedding details.** Modality embeddings are shared between the encoder and decoder.
Additionally, as is commonly done for autoregressive models, the parameters of the input and output embeddings (i.e., the final linear layer) of the decoder are shared for sequence modalities. Similar to MAE [48] and MultiMAE [5], the positional and modality embeddings are re-injected into the encoder output before it is passed on to the decoder.

**RGB pixels as input.** All 4M models are trained to accept both RGB pixels and tokenized RGB as input. This is implemented by treating RGB pixels and tokenized RGB as two entirely separate modalities, with RGB pixels being an input-only modality (as it is not tokenized). This simple approach makes it easy to incorporate additional non-tokenized modalities as input in the future.

**Span masking for sequences.** We perform span masking [86] as follows: Given the probability of masking a token ($p_{\mathrm{mask}}$), we randomly mask out tokens in the sequence and replace each consecutive span of masked-out tokens by a sentinel token (e.g., [S_1], [S_2], [S_3], ...). The target sequence then consists of the masked-out spans delimited by the sentinel tokens, followed by a final sentinel token to signal the end of the sequence.

Unlike for dense modalities, it is not possible to strictly respect the token budget when sampling from sequences, due to their variable length. Instead, we treat the token budget as a strict upper bound and mask sequences as follows: For the input, we sample a masking probability $p_{\mathrm{mask}}$ from a uniform distribution and use it for span masking. If the sequence length after masking is greater than the input budget, we progressively increase $p_{\mathrm{mask}}$ until the sequence fits within the assigned budget. For the target, if the sequence does not fit within the budget, we randomly truncate it while ensuring that the first token of the truncated sequence is a sentinel token.
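The span-masking construction above can be sketched in a few lines. This is an illustrative toy version operating on token lists; the sentinel naming `[S_k]` follows the example in the text, and the random per-token mask is an assumption (the actual sampling of $p_{\mathrm{mask}}$ and budget handling are described in the paragraph above).

```python
import random

def span_mask(tokens, p_mask, rng=random):
    """T5-style span masking: each consecutive run of masked-out tokens in the
    input is replaced by one sentinel; the target lists the masked-out spans
    delimited by sentinels, plus a final sentinel marking the end."""
    mask = [rng.random() < p_mask for _ in tokens]
    inp, tgt = [], []
    s = 1  # index of the next sentinel token
    i = 0
    while i < len(tokens):
        if mask[i]:
            inp.append(f"[S_{s}]")
            tgt.append(f"[S_{s}]")
            while i < len(tokens) and mask[i]:  # consume the whole masked span
                tgt.append(tokens[i])
                i += 1
            s += 1
        else:
            inp.append(tokens[i])
            i += 1
    tgt.append(f"[S_{s}]")  # final sentinel signals the end of the sequence
    return inp, tgt
```

For example, masking the middle of `["a", "b", "c", "d"]` yields an input like `["a", "[S_1]", "d"]` and a target like `["[S_1]", "b", "c", "[S_2]"]`.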
+ +# C.2 Training details + +The training details for the 4M models used for the transfer experiments (Section 3) and the generation results (Section 4) are shown in Table 5. These hyperparameters were chosen using insights from the ablation study (Section 5), more detailed ablation results are shown in Appendix E. All 4M models were trained using bfloat16 mixed-precision [13]. Training durations varied with model size; 4M-B was trained in 1.5 days on 64 A100 GPUs, 4M-L was trained in 3 days on 128 A100 GPUs, while training 4M-XL took 8 days on 128 A100 GPUs. For 4M-XL, we reduced GPU memory + +Table 4: Model size variants: The model sizes and naming scheme follow T5 [86]. We exclude the size of the embedding layers in the parameter count. + +
| Model | Encoder Blocks | Decoder Blocks | Model Dim | Num Heads | Total Params |
|---|---|---|---|---|---|
| 4M-Ti | 6 | 6 | 384 | 6 | 24M |
| 4M-S | 8 | 8 | 512 | 8 | 59M |
| 4M-B | 12 | 12 | 768 | 12 | 198M |
| 4M-L | 24 | 24 | 1024 | 16 | 705M |
| 4M-XL | 24 | 24 | 2048 | 32 | 2818M |
+ +consumption through the use of activation checkpointing along with optimizer state and gradient sharding (ZeRO-2 [87]), via PyTorch's Fully Sharded Data Parallel (FSDP). + +Table 5: Pre-training settings. Training configuration for the 4M models used in the transfer experiments and generation results. + +
| Configuration | 4M-B | 4M-L | 4M-XL |
|---|---|---|---|
| Training length (n tokens) | 500B | 500B | 500B |
| Warmup length (n tokens) | 10B | 10B | 10B |
| Optimizer | AdamW [73] | AdamW [73] | AdamW [73] |
| Opt. momentum | β1, β2 = 0.9, 0.95 | β1, β2 = 0.9, 0.95 | β1, β2 = 0.9, 0.95 |
| Base learning rate [46] | 1e-4 | 1e-4 | 2e-5 |
| Batch size | 8192 | 8192 | 8192 |
| Weight decay | 0.05 | 0.05 | 0.05 |
| Gradient clipping | × | × | 3.0 |
| Learning rate schedule | Cosine decay | Cosine decay | Cosine decay |
| Feedforward activation (Appendix E.6) | SwiGLU [100] | SwiGLU [100] | SwiGLU [100] |
| Input token budget (Appendix E.4) | 128 | 128 | 128 |
| Target token budget (Appendix E.4) | 128 | 128 | 128 |
| Input and target α (Appendix E.4) | 0.5, 0.5 | 0.5, 0.5 | 0.5, 0.5 |
| Masking strategy (Appendix E.4) | RGB → All & All → All | RGB → All & All → All | RGB → All & All → All |
| Image resolution | 224² | 224² | 224² |
| Augmentation | None (Center Crop) | None (Center Crop) | None (Center Crop) |
| Repeated sampling [38] (Appendix E.7) | 4 | 4 | 4 |
| Data type | bfloat16 [13] | bfloat16 [13] | bfloat16 [13] |
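The Table 5 schedule (warmup length and cosine decay, both measured in tokens seen rather than optimizer steps) can be sketched as below. Whether the decay ends exactly at zero is an assumption of this sketch; the function name `lr_at` is illustrative.

```python
import math

def lr_at(tokens_seen, *, base_lr=1e-4, warmup_tokens=10e9, total_tokens=500e9):
    """Linear warmup over the first `warmup_tokens`, then cosine decay over
    the remaining training length, with progress measured in tokens seen."""
    if tokens_seen < warmup_tokens:
        return base_lr * tokens_seen / warmup_tokens
    progress = (tokens_seen - warmup_tokens) / (total_tokens - warmup_tokens)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * min(progress, 1.0)))
```

For 4M-B with the Table 5 values, the learning rate peaks at 1e-4 after 10B tokens and decays to zero by 500B tokens.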
+ +# D Transfer Experiments Details + +In this section, we report the fine-tuning details for all transfers shown in Section 3. Whenever possible, we follow commonly used settings from other papers performing such transfers. However, to avoid excessive computational costs, we adjust some transfer settings such that no single transfer requires more than $20\%$ of the pre-training compute. These adjustments include reducing the training resolution when fine-tuning on COCO, and reducing the number of epochs for intermediate fine-tuning on ImageNet-21K. + +# D.1 Architectural differences with the baselines + +4M's encoder and all baselines can be viewed as ViT-Base or ViT-Large backbones [31], and are nearly identical in terms of parameter count and model FLOPS. Despite this, a few minor architectural differences could potentially affect their performance. These include: (1) 4M lacks bias terms and uses SwiGLU [100] in the feedforward, while the baselines include bias terms and use GELU [49]. (2) All baselines include an additional learnable [CLS] token as input, which is absent in 4M. (3) DeiT III uses learnable positional embeddings, whereas all other methods including 4M use fixed sine-cosine positional embeddings. (4) When fine-tuning on COCO, MAE uses both absolute and relative positional embeddings (following ViTDet's approach [66]), while all other methods only use absolute positional embeddings. + +# D.2 Image classification on ImageNet-1K + +We use settings inspired by [114] and first perform intermediate fine-tuning on ImageNet-21K [93], followed by full fine-tuning on ImageNet-1K [29]. Intermediate fine-tuning helps improve overall performance for models that were not originally trained on ImageNet-21K (all 4M models and baselines aside from DeiT III). Details are shown in Table 6. + +Table 6: Image classification settings. Configuration for intermediate fine-tuning on ImageNet-21K and fine-tuning on ImageNet-1K. + +
| Configuration | ImageNet-21K (Base) | ImageNet-21K (Large) | ImageNet-1K (Base) | ImageNet-1K (Large) |
|---|---|---|---|---|
| Fine-tuning epochs | 20 | 20 | 50 | 20 |
| Warmup epochs | 2 | 2 | 2 | 2 |
| Optimizer | AdamW [73] | AdamW [73] | AdamW [73] | AdamW [73] |
| Opt. momentum | β1, β2 = 0.9, 0.95 | β1, β2 = 0.9, 0.95 | β1, β2 = 0.9, 0.999 | β1, β2 = 0.9, 0.999 |
| Base learning rate [46] | 1e-4 | 1e-4 | 1e-4 | 1e-4 |
| Batch size | 4096 | 4096 | 4096 | 4096 |
| Weight decay | 0.05 | 0.05 | 0.05 | 0.05 |
| Learning rate schedule | Cosine decay | Cosine decay | Cosine decay | Cosine decay |
| Layer-wise lr decay [26] | 0.75 | 0.85 | 0.75 | 0.85 |
| Drop path [55] | 0.1 | 0.2 | 0.1 | 0.2 |
| Input resolution | 224² | 224² | 224² | 224² |
| Augmentation | RandAug(9, 0.5) [27] | RandAug(9, 0.5) [27] | RandAug(9, 0.5) [27] | RandAug(9, 0.5) [27] |
| Random resized crop | (0.5, 1) | (0.5, 1) | (0.08, 1) | (0.08, 1) |
| Label smoothing ε | 0.1 | 0.1 | 0.1 | 0.1 |
| Mixup [129] | 0.1 | 0.1 | 0.1 | 0.1 |
| Cutmix [124] | 1.0 | 1.0 | 1.0 | 1.0 |
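Layer-wise learning-rate decay, used in the fine-tuning tables above (0.75 for Base, 0.85 for Large), scales each layer's learning rate geometrically with its distance from the head. A minimal sketch, following the convention of [26] (the exact layer indexing used in the paper is not specified, so the scheme below is an assumption):

```python
def layerwise_lr_scale(layer_idx, num_layers, decay):
    """Per-layer lr multiplier for layer-wise lr decay.

    layer_idx 0 is the patch/modality embedding, layer_idx num_layers is the
    last Transformer block, and layer_idx num_layers + 1 is the task head,
    which receives the full learning rate (multiplier 1.0)."""
    return decay ** (num_layers + 1 - layer_idx)
```

In practice one builds one optimizer parameter group per layer, with `lr = base_lr * layerwise_lr_scale(i, num_layers, decay)`; with 12 blocks and decay 0.75, the embedding layer trains at 0.75¹³ ≈ 0.024 of the base learning rate.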
+ +# D.3 Object detection and instance segmentation on COCO + +We follow the settings from ViTDet [66] with a simplified Cascade Mask-RCNN head [47, 15] with two major changes: First, we reduce the image resolution from $1024 \times 1024$ to $512 \times 512$ , which reduces computational costs during training by over $4 \times$ but lowers the performance. Second, we do not use window attention, and instead keep the attention layers unchanged (i.e. global) for all models and rely on activation checkpointing to reduce the GPU memory usage. Details are shown in Table 7. + +Table 7: Object detection and instance segmentation settings. Configuration for object detection and instance segmentation fine-tuning on COCO, the settings follow ViTDet [66]. + +
| Configuration | COCO (Base) | COCO (Large) |
|---|---|---|
| Fine-tuning epochs | 100 | 100 |
| Optimizer | AdamW [73] | AdamW [73] |
| Opt. momentum | β1, β2 = 0.9, 0.999 | β1, β2 = 0.9, 0.999 |
| Weight decay | 0.1 | 0.1 |
| Learning rate | 0.0001 | 0.0001 |
| Learning rate schedule | Multi-step decay | Multi-step decay |
| Lr schedule milestones | [Epoch 89, Epoch 96] | [Epoch 89, Epoch 96] |
| Lr schedule decay values | [1.0, 0.1, 0.01] | [1.0, 0.1, 0.01] |
| Warmup epochs | 0.01 | 0.007 |
| Batch size | 512 | 320 |
| Layer-wise lr decay [26] | 0.7 | 0.8 |
| Drop path [55] | 0.1 | 0.4 |
| Input resolution | 512² | 512² |
| Augmentation | Large-scale jitter (LSJ) [41] | Large-scale jitter (LSJ) [41] |
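The multi-step schedule in Table 7 (milestones at epochs 89 and 96, decay values [1.0, 0.1, 0.01]) can be written as a simple lookup; the function name `multistep_lr` is illustrative, and warmup is omitted from the sketch.

```python
def multistep_lr(epoch, base_lr=1e-4, milestones=(89, 96),
                 decay_values=(1.0, 0.1, 0.01)):
    """Return base_lr scaled by decay_values[i], where i is the number of
    milestones already passed at the given epoch."""
    passed = sum(epoch >= m for m in milestones)
    return base_lr * decay_values[passed]
```

So the learning rate stays at 1e-4 until epoch 89, drops to 1e-5, and drops again to 1e-6 at epoch 96.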
+ +# D.4 Semantic segmentation on ADE20K + +We follow the settings from MultiMAE [5] and use a ConvNext [72] prediction head of depth 4 on top of the encoder to perform semantic segmentation. Details are shown in Table 8. + +Table 8: Semantic segmentation settings. Configuration for semantic segmentation fine-tuning on ADE20K, the settings follow MultiMAE [5]. + +
| Configuration | ADE20K (Base) | ADE20K (Large) |
|---|---|---|
| Fine-tuning epochs | 64 | 64 |
| Warmup epochs | 1 | 1 |
| Optimizer | AdamW [73] | AdamW [73] |
| Opt. momentum | β1, β2 = 0.9, 0.999 | β1, β2 = 0.9, 0.999 |
| Learning rate | 2e-4 | 2e-4 |
| Batch size | 64 | 64 |
| Weight decay | 0.05 | 0.05 |
| Learning rate schedule | Cosine decay | Cosine decay |
| Layer-wise lr decay [26] | 0.75 | 0.85 |
| Drop path [55] | 0.1 | 0.2 |
| Input resolution | 512² | 512² |
| Augmentation | Large-scale jitter (LSJ) [41] | Large-scale jitter (LSJ) [41] |
| Color jitter | ✓ | ✓ |
+ +# D.5 Depth estimation on NYUv2 + +We closely follow the settings from MultiMAE [5] but replace the DPT [91] prediction head by a ConvNeXt [72] prediction of depth 2. We find that this ConvNeXt head reaches comparable performance to DPT on the MultiMAE baseline, while also removing the need to extract and rescale features from intermediate layers of the network. Details are shown in Table 9. + +Table 9: Depth estimation settings. Configuration for depth estimation fine-tuning on NYUv2, the settings follow MultiMAE [5]. + +
| Configuration | NYUv2 (Base) | NYUv2 (Large) |
|---|---|---|
| Fine-tuning epochs | 1000 | 1000 |
| Warmup epochs | 100 | 100 |
| Optimizer | AdamW [73] | AdamW [73] |
| Opt. momentum | β1, β2 = 0.9, 0.999 | β1, β2 = 0.9, 0.999 |
| Learning rate | 1e-4 | 1e-4 |
| Batch size | 128 | 128 |
| Weight decay | 1e-4 | 1e-4 |
| Learning rate schedule | Cosine decay | Cosine decay |
| Layer-wise lr decay [26] | 0.75 | 0.85 |
| Drop path [55] | 0.1 | 0.2 |
| Input resolution | 256² | 256² |
| Random crop | ✓ | ✓ |
| Color jitter | ✓ | ✓ |
+ +# E Ablation Details & Results + +# E.1 Benchmark tasks + +All transfer tasks in the ablation are cast as token-to-token prediction problems, similar to the way 4M was pre-trained. We perform several transfers with RGB (pixels) as the input, test how well 4M variants perform at adapting to unseen input modalities, and ablate their ability to make use of multiple input modalities. + +All models are fine-tuned with a constant learning rate and evaluated using an exponential moving average of the weights. As different models may overfit to the transfer tasks at different speeds, we measure the validation loss 50 times per transfer and report the best loss, as in T5 [86]. + +For all tasks and modalities, we train new tokenizers on the respective training sets, which means that they are all either completely unseen, or a new instantiation of a training task. When transferring from an unseen modality (e.g., in $\mathrm{X} \to \mathrm{Y}$ and $\mathrm{X} + \mathrm{Y} \to \mathrm{Z}$ transfers), the input embeddings are not initialized, which can lead to instabilities and lower the overall performance. To address this problem, we initially freeze the Transformer encoder-decoder and only train the new input and output embedding layers. Afterwards, we unfreeze the Transformer and train the entire model. + +As described in Section 5, we report the validation set cross-entropy performance instead of task-specific metrics. The reasons for this are twofold: First, downstream performance hinges on A) how well the tokens are able to represent the downstream tasks (i.e. tokenizer reconstruction error), and B) how well 4M is able to predict these tokens (i.e. cross-entropy loss). Since the tokenizers are the same across all settings, we abstract away aspect A for this ablation and only report B. Second, reporting the cross-entropy loss makes comparisons more uniform and avoids having to scale and average wildly different task-specific metrics such as mAP, mIoU, MSE, etc. 
that are not comparable with each other. + +COCO transfers. We perform object detection on the COCO dataset [67] in the same manner as during pre-training (see Appendix B.2), i.e. by casting it as a sequence prediction problem [21]. The input modality for the transfer task is RGB (pixels). Exact training settings are provided in Table 10. + +ADE20K transfers. We perform semantic segmentation on ADE20K [131] by fine-tuning the 4M models to predict VQ-VAE tokens of the targets. Since our semantic segmentation tokenizer during pre-training was trained with COCO classes, we train a new tokenizer for ADE20K in the same manner (see Appendix B.2). The tokenizer is trained for 1500 epochs with a warmup period of 150 epochs. In order to perform random crop augmentations at transfer time, we also pre-trained the tokenizer with a crop scale of (0.2, 1.0). The input modality for the transfer task is RGB (pixels). Exact training settings are provided in Table 11. + +Table 10: COCO detection transfer settings. + +
| Configuration | COCO Det. |
|---|---|
| Fine-tuning epochs | 100 |
| Warmup epochs | 5 |
| Optimizer | AdamW [73] |
| Opt. momentum | β1, β2 = 0.9, 0.999 |
| Base learning rate [46] | 1e-4 |
| Batch size | 256 |
| Weight decay | 0.05 |
| Learning rate schedule | Constant |
| EMA decay | 0.998 |
| Eval. freq (epochs) | 2 |
| Input resolution | 224² |
| Augmentation | Large-scale jitter (LSJ) [41] |
| Color jitter | ✓ |
+ +Table 11: ADE20K semantic segmentation transfer settings. + +
| Configuration | ADE20K Seg. |
|---|---|
| Fine-tuning epochs | 200 |
| Warmup epochs | 20 |
| Optimizer | AdamW [73] |
| Opt. momentum | β1, β2 = 0.9, 0.999 |
| Base learning rate [46] | 1e-4 |
| Batch size | 256 |
| Weight decay | 0.5 |
| Learning rate schedule | Constant |
| EMA decay | 0.998 |
| Eval. freq (epochs) | 4 |
| Input resolution | 224² |
| Augmentation | Random crop |
| Crop scale | (0.2, 1.0) |
| Crop ratio | (0.75, 1.333) |
| Color jitter | × |
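The transfers above are evaluated with an exponential moving average of the weights (EMA decay 0.998 in Tables 10 and 11). A minimal sketch of such a weight EMA, shown here on a plain dict of scalar parameters (class name `WeightEMA` is illustrative; a real implementation would track tensors):

```python
class WeightEMA:
    """Maintain shadow parameters as an exponential moving average of the
    training parameters; the shadow copy is used for evaluation."""

    def __init__(self, params, decay=0.998):
        self.decay = decay
        self.shadow = {k: float(v) for k, v in params.items()}

    def update(self, params):
        # shadow <- decay * shadow + (1 - decay) * current, after each step.
        d = self.decay
        for k, v in params.items():
            self.shadow[k] = d * self.shadow[k] + (1.0 - d) * float(v)
```

With decay 0.998, the shadow weights average roughly the last few hundred optimizer steps, which smooths out noise when the transfer tasks are evaluated every few epochs.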
+ +Taskonomy-20K & Hypersim RGB $\rightarrow$ X transfers. We perform several transfers on Hypersim [94] and a subset of Taskonomy [126] Tiny, denoted here as Taskonomy-20K. The subset contains 20'000 training images and 2000 validation images. + +For Taskonomy-20K, we transfer to depth, principal curvature, reshading, occlusion edges, 2D edges, 2D keypoints, and 3D keypoints. For Hypersim, we transfer to surface normals, semantic segmentation, and 2D edges. The input modality for all transfer tasks is RGB (pixels). Exact training settings are provided in Table 12. + +Taskonomy-20K & Hypersim $\mathbf{X}\rightarrow \mathbf{Y}$ transfers. There are 72 possible transfers between the Taskonomy tasks, and 12 possible transfers between the Hypersim tasks. To reduce the computational cost of the ablations, we subsample these transfer tasks, trying to cast a diverse net. For Taskonomy-20K, we perform curvature $\rightarrow$ 2D keypoints, depth $\rightarrow$ normals, 2D edges $\rightarrow$ depth, surface normals $\rightarrow$ 2D edges, occlusion edges $\rightarrow$ principal curvature, and RGB (tokens) $\rightarrow$ surface normals. For Hypersim we perform RGB (tokens) $\rightarrow$ segmentation, 2D edges $\rightarrow$ segmentation, normals $\rightarrow$ segmentation, segmentation $\rightarrow$ 2D edges, and segmentation $\rightarrow$ normals. Exact training settings are provided in Table 13. + +Taskonomy-20K & Hypersim $\mathbf{X} + \mathbf{Y}\rightarrow \mathbf{Z}$ transfers. As before, we subsample all possible transfer tasks that use two modalities as the input, aiming for a diverse set of transfers. For Taskonomy-20K, we perform RGB (pixels) + normals → depth, RGB (pixels) + depth → reshading, reshading + 2D keypoints → normals, 2D edges + 3D keypoints → RGB (tokens), 2D edges + curvature → occlusion edges, and occlusion edges + 2D keypoints → curvature. 
For Hypersim we perform RGB (pixels) + depth → segmentation, RGB (pixels) + normals → segmentation, RGB (pixels) + segmentation → depth, depth + segmentation → RGB (tokens), 2D keypoints + normals → segmentation, and 2D edges + depth → segmentation. Exact training settings are provided in Table 13. + +Table 12: RGB $\rightarrow$ X transfer settings. + +
| Configuration | Taskonomy-20K | Hypersim |
|---|---|---|
| Fine-tuning epochs | 100 | 50 |
| Warmup epochs | 5 | 2 |
| Frozen Transformer epochs | 0 | 2 |
| Optimizer | AdamW [73] | AdamW [73] |
| Opt. momentum | β1, β2 = 0.9, 0.999 | β1, β2 = 0.9, 0.999 |
| Base learning rate [46] | 1e-4 | 1e-4 |
| Frozen Transformer base lr | 1e-4 | 1e-4 |
| Batch size | 256 | 256 |
| Weight decay | 0.05 | 0.05 |
| Learning rate schedule | Constant | Constant |
| EMA decay | 0.998 | 0.998 |
| Eval. freq (epochs) | 2 | 1 |
| Input resolution | 224² | 224² |
| Augmentation | Random crop | Random crop |
| Crop scale | (0.2, 1.0) | (0.2, 1.0) |
| Crop ratio | (0.75, 1.333) | (0.75, 1.333) |
| Color jitter | × | × |
+ +Table 13: $\mathbf{X} \rightarrow \mathbf{Y}$ and $\mathbf{X} + \mathbf{Y} \rightarrow \mathbf{Z}$ transfer settings. + +
| Configuration | Taskonomy-20K | Hypersim |
|---|---|---|
| Fine-tuning epochs | 100 | 50 |
| Warmup epochs | 5 | 2 |
| Frozen Transformer epochs | 25 | 20 |
| Optimizer | AdamW [73] | AdamW [73] |
| Opt. momentum | β1, β2 = 0.9, 0.999 | β1, β2 = 0.9, 0.999 |
| Base learning rate [46] | 1e-4 | 1e-4 |
| Frozen Transformer base lr | 2e-3 | 2e-3 |
| Batch size | 256 | 256 |
| Weight decay | 0.05 | 0.05 |
| Learning rate schedule | Constant | Constant |
| EMA decay | 0.998 | 0.998 |
| Eval. freq (epochs) | 2 | 1 |
| Input resolution | 224² | 224² |
| Augmentation | Random crop | Random crop |
| Crop scale | (0.2, 1.0) | (0.2, 1.0) |
| Crop ratio | (0.75, 1.333) | (0.75, 1.333) |
| Color jitter | Only if RGB is in the input | Only if RGB is in the input |
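The two-stage recipe behind the "Frozen Transformer epochs" rows (Appendix E.1: first train only the new input/output embeddings with the encoder-decoder frozen, then unfreeze everything) can be sketched as a simple per-epoch switch. The function name and return convention are illustrative assumptions, not the paper's code.

```python
def trainable_and_lr(epoch, *, frozen_epochs, base_lr, frozen_lr):
    """Return (set of trainable parameter groups, learning rate) for an epoch.

    For the first `frozen_epochs` epochs only the new embedding layers train,
    at the higher `frozen_lr` (e.g. 2e-3 in Table 13); afterwards the whole
    model trains at `base_lr` (e.g. 1e-4)."""
    if epoch < frozen_epochs:
        return {"embeddings"}, frozen_lr
    return {"embeddings", "transformer"}, base_lr
```

This avoids the instabilities mentioned in Appendix E.1 when transferring to an unseen input modality whose embeddings start from random initialization.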
+ +# E.2 Reference model + +In this section, we specify the reference model and its training settings. These choices are made by following common practices and performing educated guesses. The settings include model size, number of encoding and decoding layers, additional modifications on the architecture (e.g. no bias), choice of Dirichlet sampling parameter $\alpha$ , input and target modalities, and training length. Following common practice for the base model size [30, 86], our base model consists of 12 encoder and 12 decoder layers. We train it on CC12M [19] using all modalities as both inputs and targets, at resolution $224 \times 224$ pixels (corresponding to $14 \times 14$ tokens per dense modality), and using no augmentations such as cropping nor color augmentations. The total training length is fixed at 100B tokens seen, corresponding to roughly 400M masked samples seen for the base setting. We set the number of randomly sampled input and target tokens both to 128, and sample each using a symmetric Dirichlet distribution with parameter $\alpha = 0.2$ . Full details are shown in Table 14. + +To determine the significance of various modeling choices, we adopt the approach used by Raffel et al. [86] that calculates the standard deviation of the transfer results of ten distinct reference models, each trained with different seeds. In the subsequent ablation results, the average performance of the reference model setting is indicated by the symbol $\triangleright$ . Furthermore, we highlight transfer results falling within two standard deviations (shown in Table 15) of the lowest achieved losses, enabling us to estimate which settings are significant. Table 15 further contains from-scratch results (training a randomly initialized model) for all the transfer tasks. 
This serves as a lower bound for the performance that can be achieved on a given transfer task, and allows us to calibrate how difficult each transfer is and by how much pre-training with a given setting improves upon random initialization.

Table 14: Reference model pre-training settings. Training configuration for the reference 4M model used in the ablation. All other models in the ablation are obtained by changing just one of these hyperparameters.
| Configuration | Reference model |
|---|---|
| Model size | Base (see Table 4) |
| Training length (n tokens) | 100B |
| Warmup length (n tokens) | 10B |
| Optimizer | AdamW [73] |
| Opt. momentum | β1, β2 = 0.9, 0.95 |
| Base learning rate [46] | 1e-4 |
| Batch size | 4096 |
| Weight decay | 0.05 |
| Gradient clipping | × |
| Learning rate schedule | Cosine decay |
| Feedforward activation | GELU [49] |
| Input token budget | 128 |
| Target token budget | 128 |
| Input and target α | 0.2, 0.2 |
| Masking strategy | All → All |
| Image resolution | 224² |
| Augmentation | None (Center Crop) |
| Repeated sampling [38] | 4 |
+ +Table 15: Reference model results: We report the average and standard deviation of the transfer results achieved by our reference 4M model. In addition, we show performance achieved by a randomly initialized model. All results are reported on the validation sets of each dataset. + +
|  | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss |
|---|---|---|---|---|---|---|
| ▷ Reference model avg. | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 |
| No pre-training | 3.31 | 6.59 | 6.03 | 7.52 | 6.62 | 6.01 |
| Reference model std. dev. | 0.008 | 0.008 | 0.021 | 0.018 | 0.016 | 0.010 |
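The significance rule described above can be made concrete with a short sketch (our own illustration, not code from the 4M release): a setting's result is treated as on par with the best if its loss lies within two reference-model standard deviations of the lowest loss in that column. The example numbers below are hypothetical, paired with the COCO Det. seed standard deviation of 0.008 from Table 15.

```python
def significant_results(losses, ref_std):
    """Flag which ablation losses lie within two reference-model
    standard deviations of the lowest (best) loss."""
    best = min(losses)
    return [loss - best <= 2 * ref_std for loss in losses]

# Hypothetical COCO Det. column, using the seed std. dev. 0.008 (Table 15):
flags = significant_results([3.14, 3.11, 3.12, 3.17], ref_std=0.008)
assert flags == [False, True, True, False]  # 3.12 is within 2 * 0.008 of 3.11
```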
# E.3 Input modalities and target tasks

Importance of target tasks for representation learning. 4M is both a multimodal and a multitask pre-training scheme, and the choice of target task(s) is a powerful way of steering what representation the model learns [10, 109]. Furthermore, multitask pre-training has been shown to improve downstream task transfers [42, 107, 5]. In this ablation, we would like to learn how the choice of target tasks affects transfers to various downstream tasks. For that, we fix the input modalities to be either RGB or all the modalities (denoted as "All"), and vary just the target tasks. The results in Table 16 (or equivalently, Table 2) mirror the findings of Sax et al. [98], that the optimal choice of pre-training setting depends highly on the type of transfer that is performed, and that there is no single pre-training task that performs best on all transfers. That said, pre-training on all target modalities outperforms the other single-task and multitask alternatives in terms of average loss, no matter what input modalities were used during pre-training. Note that some of the settings in Table 16 are conceptually similar to well-known pre-training methods, and can serve as baselines for such methods, while abstracting away implementation details that would make them difficult to compare otherwise. E.g., $\mathrm{RGB} \rightarrow \mathrm{RGB}$ (first row) is similar to MAE [48], while $\mathrm{RGB} \rightarrow \mathrm{CLIP}$ (fifth row) is similar to BEiT-v2 [82].

Importance of multimodal pre-training for transferring to new modalities. Just as multitask pre-training improves transfers to new target tasks, we would hope that, likewise, multimodal pre-training improves transfers to new input modalities. Indeed, pre-training with multiple modalities that may be available at transfer time has been shown to be beneficial [5], but it is not as clear how well multimodal models transfer to completely new, unseen input modalities.
Table 16 shows that multimodal pre-training can significantly help with transferring to new input modalities (X→Y transfers), but comes at a performance cost on transfers that use RGB as the sole input modality. In Appendix E.4, we explore pre-training using mixtures of different masking strategies, which enables us to train models that perform well in both regimes.

Table 16: Pre-training input and target modalities ablation: The choice of pre-training tasks and modalities influences what representations the model learns, and how well it can be transferred to novel tasks and modalities. The average rank and best losses are shown for "RGB" and "All" inputs separately. Here, Geometric = RGB + Depth + Normals and Semantic = RGB + Segmentation + CLIP + Detection + Captions. Performing 4M pre-training on all input and target modalities is the most versatile choice if the optimal set of pre-training modalities for any given downstream task is unknown. Note that the results shown here are identical to those of Table 2, and are included for completeness.
| Pre-training inputs | Pre-training targets | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|---|
| RGB | RGB | 3.14 | 6.21 | 5.03 | 6.94 | 6.15 | 5.49 | 8.00 |
| RGB | Depth | 3.11 | 6.06 | 4.72 | 6.89 | 5.84 | 5.32 | 4.00 |
| RGB | Normals | 3.12 | 6.02 | 4.66 | 6.83 | 5.87 | 5.30 | 3.20 |
| RGB | Segmentation | 3.17 | 5.94 | 4.84 | 6.86 | 5.89 | 5.34 | 4.60 |
| RGB | CLIP | 3.07 | 6.11 | 4.83 | 6.85 | 5.94 | 5.36 | 4.80 |
| RGB | Detection | 2.78 | 6.11 | 5.03 | 7.07 | 6.24 | 5.45 | 7.20 |
| RGB | Captions | 3.45 | 6.55 | 5.92 | 7.35 | 6.86 | 6.03 | 10.00 |
| RGB | Geometric | 3.11 | 6.08 | 4.70 | 6.88 | 5.85 | 5.32 | 3.80 |
| RGB | Semantic | 2.88 | 5.99 | 4.86 | 6.97 | 6.06 | 5.35 | 5.40 |
| RGB | All | 2.90 | 5.99 | 4.74 | 6.91 | 5.93 | 5.29 | 4.00 |
| All | RGB | 3.21 | 6.20 | 5.07 | 6.75 | 5.85 | 5.42 | 3.80 |
| All | CLIP | 3.19 | 6.18 | 5.06 | 6.80 | 5.88 | 5.42 | 3.80 |
| All | Geometric | 3.20 | 6.13 | 4.98 | 6.72 | 5.76 | 5.36 | 1.80 |
| All | Semantic | 3.05 | 6.13 | 5.16 | 6.77 | 5.87 | 5.39 | 3.40 |
| All | All | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 2.20 |
+ +# E.4 Multimodal masking strategy + +Multimodal masking is at the core of 4M, so in this section, we ablate how modality tokens should be sampled and mixed, and how many tokens we should encode and decode. When performing multimodal training, it can be computationally challenging to deal with the large number of tokens from all modalities. For that reason, the choice of the number of input and target tokens has the potential to greatly reduce the computational burden, and setting them as low as possible is desirable. + +Mask sampling parameter. The number of tokens to sample from each modality is determined by a symmetric Dirichlet distribution with parameter $\alpha$ [5]. If $\alpha$ is low, the sampling procedure will often choose cases where most of the tokens are sampled from only one modality. If $\alpha$ is high, however, most samples will contain tokens from all modalities to equal proportions. We ablate here various choices of $\alpha$ values, both for the input and target sampling. Table 17 shows that models trained with higher $\alpha$ values generally transfer better to RGB tasks, but methods trained with lower input $\alpha$ values perform better at transferring to novel input modalities. + +Table 17: Mask sampling parameter ablation: The choice of Dirichlet sampling parameter $\alpha$ influences how many tokens are sampled from each modality. Low values lead to many samples consisting of entire modalities, while high values result in equal numbers of tokens to be sampled from each modality. We ablate various input and target $\alpha$ values. + +
| Input α | Target α | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|---|
| 0.1 | 0.1 | 3.11 | 6.14 | 5.18 | 6.81 | 5.89 | 5.43 | 4.80 |
| ▷ 0.2 | 0.2 | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 3.20 |
| 0.5 | 0.5 | 3.02 | 6.09 | 4.98 | 6.76 | 5.80 | 5.33 | 2.60 |
| 1.0 | 1.0 | 3.00 | 6.07 | 4.95 | 6.80 | 5.79 | 5.32 | 1.80 |
|  |  | 3.00 | 6.08 | 4.93 | 6.84 | 5.83 | 5.34 | 2.60 |
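To make the Dirichlet masking concrete, the following is a minimal sketch (ours, not the 4M implementation, which may differ) of how a token budget can be split across modalities: mixture weights are drawn from a symmetric Dirichlet($\alpha$) via Gamma variates, and the budget is allocated proportionally. Low $\alpha$ concentrates the budget on few modalities; high $\alpha$ spreads it nearly evenly.

```python
import random

def sample_token_counts(num_modalities, budget, alpha, rng=random.Random(0)):
    """Split `budget` tokens across modalities using symmetric
    Dirichlet(alpha) mixture weights (sampled via Gamma variates)."""
    gammas = [rng.gammavariate(alpha, 1.0) for _ in range(num_modalities)]
    total = sum(gammas) or 1.0
    counts = [int(budget * g / total) for g in gammas]
    counts[0] += budget - sum(counts)  # assign the rounding remainder
    return counts

# 7 modalities, 128-token input budget, reference alpha = 0.2:
counts = sample_token_counts(num_modalities=7, budget=128, alpha=0.2)
assert sum(counts) == 128
```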
+ +Input masking budget. The difficulty of the multimodal masked modeling task is largely determined by the number of visible (non-masked) input tokens, with fewer tokens used making the task more challenging. Indeed, since the modalities contain a lot of spatial information about each other, it is necessary to lower the number of visible tokens to keep the objective difficult enough. Furthermore, encoding only a small set of visible tokens (as opposed to encoding both visible and mask tokens [30, 7]) can significantly improve training efficiency [48]. This is especially important when training on multiple modalities [5], as the input length would otherwise grow too large. + +Table 18: Input masking budget ablation: The number of visible input tokens decides the difficulty of the masked modeling task (the fewer given, the harder the task), and influences the computational cost of the encoder (the fewer, the less expensive). We fix the target masking budget at 128 tokens and vary the number of input tokens. + +
| Input Tokens | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|
| 64 | 3.15 | 6.21 | 5.27 | 6.89 | 5.99 | 5.50 | 4.00 |
| ▷ 128 | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 2.00 |
| 256 | 3.03 | 6.08 | 5.04 | 6.71 | 5.75 | 5.32 | 1.00 |
| 512 | 3.07 | 6.12 | 5.11 | 6.78 | 5.83 | 5.38 | 3.00 |
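A back-of-the-envelope model illustrates why a small input budget matters computationally. Assuming a quadratic self-attention cost (and ignoring feed-forward layers and projections, so this is a rough sketch rather than an exact FLOP count), encoding only a 128-token input budget instead of every token of seven dense modalities at $14 \times 14$ tokens each cuts attention cost by over two orders of magnitude:

```python
def attention_cost(seq_len, dim):
    """Rough self-attention cost model: O(seq_len^2 * dim) multiply-adds,
    ignoring FFN layers and QKV/output projections."""
    return seq_len ** 2 * dim

# Encoding all tokens of 7 dense modalities (7 * 196 = 1372 tokens)...
full = attention_cost(7 * 196, 768)
# ...vs. encoding only the 128 visible tokens of the input budget.
masked = attention_cost(128, 768)
assert full / masked > 100  # over 100x fewer attention FLOPs
```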
Target masking budget. Just as we can improve training efficiency by encoding only a small set of visible tokens, we can also decide to decode merely a subset of all remaining masked-out tokens. Decoding all masked-out tokens, like MAE [48], or in the multimodal case MultiMAE [5], can quickly become infeasible as the number of modalities grows and the masking ratio is kept high. While those methods attempt to address this issue by decreasing the size of the decoder, such a strategy might ultimately hurt generation quality if the decoder is used for generative tasks. As Table 19 shows, decoding only a small random subset of all targets performs better (for a fixed training duration) than decoding a larger number of tokens, while also significantly reducing the computational costs.

Table 19: Target masking budget ablation: Similar to passing only visible tokens to the encoder, we can decide to only decode a subset of masked-out tokens, which improves computational efficiency considerably. We fix the input masking budget at 128 and vary the number of target tokens.
| Target Tokens | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|
| 64 | 3.06 | 6.11 | 5.09 | 6.73 | 5.79 | 5.36 | 1.40 |
| ▷ 128 | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 1.60 |
| 256 | 3.10 | 6.16 | 5.15 | 6.81 | 5.91 | 5.43 | 3.00 |
| 512 | 3.16 | 6.20 | 5.27 | 6.84 | 5.95 | 5.48 | 4.00 |
Mixture of masking strategies. As shown in Appendix E.3, pre-training using RGB as the sole input modality instead of training on all of them performs significantly better at transfers that, likewise, use RGB as input. On the flip side, pre-training with all modalities performs significantly better when transferring to unseen input modalities. This difference can be explained by the fact that these two mask sampling strategies are on opposite extremes: the number of RGB inputs in the multimodal pre-training case is relatively small, while the RGB-only case is never trained on any other modalities. We can find a compromise by sampling batch elements from these two strategies uniformly at random, creating a pre-training strategy where approximately half the time input tokens are RGB-only, and half the time they are sampled from all modalities. Table 20 shows that while the mixture approach does not perform better than either individual mask sampling strategy at any of the transfers, it is a good compromise if the use case of the model is open at the time of pre-training.

Table 20: Mixture of masking strategies ablation: Mixing different masking strategies by randomly sampling between the two provides a good compromise in downstream performance.
| Masking strategy | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|
| RGB → All | 2.90 | 5.99 | 4.74 | 6.91 | 5.93 | 5.29 | 1.80 |
| ▷ All → All | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 2.20 |
| RGB → All & All → All | 2.95 | 6.00 | 4.86 | 6.86 | 5.87 | 5.31 | 2.00 |
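The uniform mixing described above can be sketched as follows (a simplification of the actual sampling pipeline): each batch element independently picks one of the two strategies with probability 1/2, and the chosen strategy then determines which modalities its input tokens are drawn from.

```python
import random

def pick_masking_strategy(rng):
    """Choose, per batch element, between RGB-only inputs and inputs
    sampled from all modalities, uniformly at random."""
    return "rgb_only" if rng.random() < 0.5 else "all_modalities"

rng = random.Random(0)
batch_strategies = [pick_masking_strategy(rng) for _ in range(8)]
assert set(batch_strategies) <= {"rgb_only", "all_modalities"}
```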
# E.5 How well does 4M scale?

We argue that scalability is a key property that models and training objectives should have. To understand how well 4M scales, we ablate the following three axes: dataset size, training length, and model size.

Does 4M benefit from more data? We train the base configuration on various subsets of CC12M, down to 1/64th of the full dataset. The results in Table 21 show that 4M is able to scale with the pre-training dataset size.

Table 21: Dataset size ablation: To estimate how well 4M scales with pre-training dataset size, we train it on different subsets of CC12M ranging from $\frac{1}{64}$ of the dataset to the entire dataset.
| Dataset fraction | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|
| 1/64 | 3.22 | 6.23 | 5.31 | 6.86 | 5.94 | 5.51 | 4.00 |
| 1/16 | 3.15 | 6.17 | 5.23 | 6.80 | 5.87 | 5.44 | 3.00 |
| 1/4 | 3.08 | 6.13 | 5.11 | 6.79 | 5.84 | 5.39 | 2.00 |
| ▷ 1 | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 1.00 |
+ +Does 4M benefit from longer training? We train the base configuration for different amounts of total tokens seen. We define tokens seen as the total number of both input tokens and target tokens the model was trained on. Table 22 shows that 4M scales to longer training schedules. + +Table 22: Training duration ablation: To see whether 4M benefits from longer training schedules, we train it for up to 400B tokens seen. + +
| Train tokens [B] | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|
| 50 | 3.13 | 6.17 | 5.23 | 6.86 | 5.93 | 5.46 | 4.00 |
| ▷ 100 | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 3.00 |
| 200 | 3.00 | 6.05 | 4.93 | 6.71 | 5.74 | 5.29 | 2.00 |
| 400 | 2.96 | 6.02 | 4.84 | 6.70 | 5.71 | 5.25 | 1.00 |
Does 4M scale with model size? We train 4M models of different sizes, ranging from Tiny (4M-Ti) to Large variants (4M-L). Exact model specifications are given in Table 4. Table 23 shows that 4M scales with model size.

Table 23: Model size ablation: To see how 4M scales with model size, we train Tiny, Small, Base and Large versions.
| Model size | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|
| 4M-Ti | 3.28 | 6.38 | 5.57 | 6.93 | 6.15 | 5.66 | 4.00 |
| 4M-S | 3.16 | 6.25 | 5.34 | 6.84 | 5.97 | 5.51 | 3.00 |
| ▷ 4M-B | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 2.00 |
| 4M-L | 2.93 | 5.97 | 4.86 | 6.63 | 5.75 | 5.23 | 1.00 |
# E.6 Architectural design choices

In the following, we ablate several design choices related to the Transformer architecture.

Number of encoder & decoder layers. We ablate whether one should train 4M using a balanced allocation of encoder and decoder layers, or rather go with a deeper or shallower decoder. We fix the total number of Transformer layers to 24 and ablate three different configurations in Table 24. The results show that all configurations perform comparably, with the balanced and encoder-heavy settings having a slight edge over the decoder-heavy one.

Table 24: Encoder & decoder depth ablation: We ablate three different ways of allocating 24 Transformer layers between the encoder and decoder.
| Enc. depth | Dec. depth | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|---|
| 8 | 16 | 3.10 | 6.14 | 5.09 | 6.75 | 5.82 | 5.38 | 2.80 |
| ▷ 12 | 12 | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 1.40 |
| 16 | 8 | 3.05 | 6.11 | 5.11 | 6.75 | 5.78 | 5.36 | 1.80 |
+ +Per-token vs. per-modality loss. Since we predict multiple modalities at the same time, there are several ways we can compute their losses. We ablate the following two loss weighting strategies: (1) We treat every predicted token the same and average all their losses. We call this setting per-token loss. This setting is biased against modalities that don't contain a lot of tokens, such as captions. (2) We first average the loss for every target modality individually and then average those. We call this setting per-modality loss. Computing the loss per-modality noticeably outperforms the per-token loss, as shown in Table 25. + +Table 25: Multitask loss aggregation ablation: We ablate weighting the loss contribution of every modality equally vs. weighting every token equally. + +
| Loss type | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|
| ▷ Per-modality loss | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 1.00 |
| Per-token loss | 3.07 | 6.16 | 5.12 | 6.78 | 5.86 | 5.40 | 2.00 |
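The two aggregation schemes can be stated precisely in a few lines (our illustration, not 4M code). With a hypothetical batch of 100 RGB token losses and only 4 caption token losses, per-token averaging is dominated by RGB, while per-modality averaging weights both modalities equally:

```python
def per_token_loss(losses_by_modality):
    """Average over all tokens: token-rich modalities dominate."""
    flat = [l for ls in losses_by_modality.values() for l in ls]
    return sum(flat) / len(flat)

def per_modality_loss(losses_by_modality):
    """Average within each modality first, then across modalities,
    so few-token modalities (e.g. captions) carry equal weight."""
    means = [sum(ls) / len(ls) for ls in losses_by_modality.values()]
    return sum(means) / len(means)

# Hypothetical batch: 100 RGB token losses vs. only 4 caption token losses.
losses = {"rgb": [1.0] * 100, "caption": [3.0] * 4}
assert per_modality_loss(losses) == 2.0  # captions weighted equally
assert per_token_loss(losses) < 1.1      # captions barely register
```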
+ +Transformer architecture modifications. We ablate several modifications to the Transformer architecture, and show results in Table 26. (1) Following Shazeer [100], we ablate the use of the SwiGLU activation function in the feed-forward layers, and reduce the feed-forward dimension by a factor of $\frac{2}{3}$ (from $4d$ to $\frac{2}{3}4d$ ) to keep the parameter count and amount of computation constant. (2) Following T5 [86] and PaLM [25], we ablate removing all bias terms from the Transformer. (3) Following ViT-22B [28], we ablate the use of query/key normalization, which has been shown to help the stability of very large Vision Transformers trained to perform image classification. + +We find SwiGLU slightly outperforms GELU while also removing the need for bias terms, and therefore use it to train the final model. As we do not observe noticeable stability issues with our training objective with current model scales, we refrain from using query/key normalization as we find that it negatively impacts the transfer performance. + +Table 26: Architecture modifications ablation: We ablate the use of different activations in the feed-forward [49, 100], removing bias terms [86, 25], and performing query-key normalization [28]. + +
| FFN act. | Bias | QK Norm. | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|---|---|
| ▷ GELU | ✓ | ✗ | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 2.20 |
| GELU | ✗ | ✗ | 3.07 | 6.11 | 5.11 | 6.80 | 5.86 | 5.39 | 4.00 |
| GELU | ✗ | ✓ | 3.08 | 6.13 | 5.16 | 6.92 | 5.96 | 5.45 | 5.20 |
| SwiGLU | ✗ | ✓ | 3.08 | 6.13 | 5.16 | 6.94 | 6.26 | 5.51 | 5.80 |
| SwiGLU | ✗ | ✗ | 3.07 | 6.11 | 5.03 | 6.73 | 5.79 | 5.34 | 1.80 |
| SwiGLU | ✓ | ✗ | 3.06 | 6.09 | 5.03 | 6.79 | 5.80 | 5.35 | 2.00 |
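The SwiGLU variant and its 2/3 width scaling can be sketched as follows (scalar form for clarity; the actual layers operate on matrices). The 2/3 factor keeps the parameter count of the three SwiGLU projections equal to that of a two-projection GELU FFN of width $4d$:

```python
import math

def silu(x):
    """SiLU / Swish activation."""
    return x / (1.0 + math.exp(-x))

def swiglu(x_gate, x_val):
    """SwiGLU gates a linear projection with a SiLU-activated one."""
    return silu(x_gate) * x_val

def swiglu_hidden_dim(d_model, mult=4):
    """Hidden width scaled by 2/3: a GELU FFN has 2 * d * (4d) = 8d^2
    parameters; SwiGLU has 3 projections of d x (2/3 * 4d), also 8d^2."""
    return (2 * mult * d_model) // 3

assert swiglu_hidden_dim(768) == 2048  # Base model, d_model = 768
```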
+ +# E.7 Training design choices + +Base learning rate. We ablate several base learning rates [46] and show results in Table 27. + +Table 27: Base learning rate ablation. We ablate the choice of learning rate used during training. 4M is not too sensitive to the choice of learning rate, although we observe some performance degradation and instabilities when strongly increasing the learning rate. + +
| Base lr [46] | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|
| 5e-5 | 3.06 | 6.11 | 5.07 | 6.78 | 5.82 | 5.37 | 2.80 |
| 1e-4 | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 1.20 |
| 2e-4 | 3.05 | 6.11 | 5.10 | 6.75 | 5.86 | 5.37 | 2.20 |
| 3e-4 | 3.07 | 6.11 | 5.11 | 6.77 | 5.88 | 5.39 | 3.80 |
| 5e-4 | 3.10 | 6.14 | 5.17 | 6.81 | 5.89 | 5.42 | 5.00 |
+ +Training with repeated sampling. Data loading can be a significant bottleneck when training efficient masked models. For that reason, we use webdataset [116] to load tar files consisting of 1000 samples instead of loading individual samples. We keep a buffer of loaded images in RAM and randomly sample from it repeatedly – each time applying different random masking. Whenever an element in the buffer has been sampled more than the specified number of repeats, we replace it with a fresh sample. Using repeated sampling [38] has the potential to significantly improve training efficiency at no loss in performance (see Table 28). + +Table 28: Repeated sampling ablation: To improve training efficiency, we reuse samples from a buffer multiple times, at no significant loss in performance. + +
| Num. repeats | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|
| 1 | 3.06 | 6.11 | 5.06 | 6.73 | 5.80 | 5.35 | 1.40 |
| ▷ 4 | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 | 1.60 |
| 8 | 3.07 | 6.12 | 5.10 | 6.76 | 5.81 | 5.37 | 3.00 |
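A minimal sketch of this buffer logic (ours; the actual data loader builds on webdataset [116]): samples are drawn at random from an in-memory buffer, and each is replaced after the specified number of repeats, so no sample is used more than `repeats` times.

```python
import random
from collections import Counter

def repeated_sampling(stream, buffer_size, repeats, rng=random.Random(0)):
    """Yield samples from an in-RAM buffer, reusing each element (with
    fresh random masking applied downstream) up to `repeats` times
    before swapping in a fresh sample from the stream."""
    buf = [[next(stream), 0] for _ in range(buffer_size)]
    while True:
        i = rng.randrange(buffer_size)
        buf[i][1] += 1
        sample = buf[i][0]
        if buf[i][1] >= repeats:  # exhausted: replace with a fresh sample
            try:
                buf[i] = [next(stream), 0]
            except StopIteration:
                yield sample
                return
        yield sample

gen = repeated_sampling(iter(range(1000)), buffer_size=64, repeats=4)
draws = [next(gen) for _ in range(256)]
assert max(Counter(draws).values()) <= 4  # no sample reused more than 4 times
```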
+ +# E.8 Self-baselines + +When comparing 4M to other pre-training methods in the transfer learning study, we also report the performance of two self-baselines to better control for the dataset, augmentations, model architecture, compute, and use of tokenizers, all of which can significantly affect downstream performance. These two self-baselines were chosen to be conceptually similar to MAE [48] (Masked RGB $\rightarrow$ RGB) and BEiT-v2 [82] (Masked RGB $\rightarrow$ CLIP). Following these methods, we train these self-baselines with a relatively high masking ratio (in our case, $50\%$ ) and ensure that the target tokens do not spatially overlap with the inputs. In this section, we ablate the masking strategy for these self-baselines by also training a version with a lower masking ratio and with spatial overlap between the input and the targets (as in 4M), and a version without any masking and where all targets are predicted (standard RGB $\rightarrow$ X training). + +Results are shown in Table 29. We find that while the RGB $\rightarrow$ RGB setting does benefit from higher masking ratios and no spatial overlap (as shown in MAE), RGB $\rightarrow$ CLIP benefits from a lower masking ratio and from allowing targets to spatially overlap with the input. + +Table 29: Self-baseline ablation: The RGB $\rightarrow$ RGB self-baseline performs best with a higher masking ratio and no spatial overlap between inputs and targets, while RGB $\rightarrow$ CLIP benefits from a lower masking ratio and spatial overlap. + +
| Target mod. | Input tok. | Target tok. | Spatial overlap | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss | Avg. Rank |
|---|---|---|---|---|---|---|---|---|---|---|
| RGB | 98 | 98 | ✗ | 3.12 | 6.16 | 5.01 | 6.91 | 6.14 | 5.47 | 1.40 |
| RGB | 128 | 128 | ✓ | 3.14 | 6.21 | 5.03 | 6.94 | 6.15 | 5.49 | 2.40 |
| RGB | 196 | 196 | ✓ | 3.24 | 6.37 | 5.42 | 6.87 | 6.13 | 5.61 | 2.20 |
| CLIP | 98 | 98 | ✗ | 3.08 | 6.09 | 4.93 | 6.94 | 6.03 | 5.41 | 2.40 |
| CLIP | 128 | 128 | ✓ | 3.07 | 6.11 | 4.83 | 6.85 | 5.94 | 5.36 | 1.20 |
| CLIP | 196 | 196 | ✓ | 3.12 | 6.20 | 4.93 | 6.88 | 6.01 | 5.43 | 2.40 |
+ +# E.9 Comparison to the final models + +We gather insights from the ablation to train the "final" 4M models described in Appendix C. Here, we compare these models to the reference models used throughout the ablation. Unsurprisingly, we find that these models perform significantly better than the reference models on the benchmark tasks. The large performance gap for tasks that take as input RGB images can in large part be attributed to the mixture of masking strategies described in Appendix E.4. + +Table 30: Comparison to the final models: The final models obtained by combining insights from the ablations significantly outperform the reference models on all benchmark tasks. + +
| Size | Setting | COCO Det. | ADE20K Seg. | RGB→X | X→Y | X+Y→Z | Avg. Loss |
|---|---|---|---|---|---|---|---|
| ▷ Base | Reference (Table 14) | 3.06 | 6.11 | 5.07 | 6.75 | 5.80 | 5.36 |
| Base | Final (Table 5) | 2.85 | 5.91 | 4.69 | 6.74 | 5.76 | 5.19 |
| Large | Reference (Table 14) | 2.93 | 5.97 | 4.86 | 6.63 | 5.75 | 5.23 |
| Large | Final (Table 5) | 2.68 | 5.76 | 4.48 | 6.67 | 5.63 | 5.04 |
+ +# F Additional Evaluations + +# F.1 Out of the box (zero-shot) performance + +In this section, we evaluate the out of the box capabilities of 4M models in three RGB $\rightarrow$ X tasks: surface normals estimation, depth estimation, and semantic segmentation. The results are shown in Table 31. Our findings demonstrate that 4M performs competitively in these tasks even without fine-tuning, when compared to both task-specific baselines and the pseudo labelers used for 4M training. Additionally, the tokenization quality, as shown in Table 31 and Figure 20, does not seem to limit the overall performance of 4M in these zero-shot scenarios. We therefore anticipate that the zero-shot performance of 4M can be further improved if not limited by the current pseudo labels, either through the use of ground truth data or stronger pseudo labeling networks. + +Table 31: Out of the box (zero-shot) performance: We report the out of the box performance of 4M models and of baselines on several tasks, all at $224 \times 224$ resolution. We use the DIODE [111] validation set for normals and depth, and COCO validation set for semantic segmentation. [P] denotes the pseudo labeler used to train 4M. Best results are bolded, second best are underlined. 4M's zero-shot performance matches or outperforms strong baselines such as OASIS [23] and MiDaS [90], and is competitive with pseudo labelers. * To estimate the performance upper bound of 4M due to tokenization, we also report the tokenizer reconstruction quality on validation images from CC12M. + +
| Method | Surface Normals (mean angle error) ↓ | Depth (standardized L1 error) ↓ | Semantic Segmentation (mean IoU) ↑ |
|---|---|---|---|
| OASIS | 34.3 | - | - |
| MiDaS DPT Hybrid | - | 0.73 | - |
| Omnidata+3DCC [P] | 22.5 | 0.68 | - |
| Mask2Former Swin-S | - | - | 44.6 |
| Mask2Former Swin-B [P] | - | - | 45.7 |
| Mask2Former Swin-L | - | - | 48.0 |
| 4M-B | 21.9 | 0.71 | 41.3 |
| 4M-L | 21.4 | 0.69 | 45.8 |
| 4M-XL | 20.8 | 0.68 | 46.5 |
| Tokenizer reconstruction* | 4.0 | 0.06 | 90.5 |
+ +![](images/ae8a5ef62c858570d3929105d62fbee705e360716e93445b1118dc81c8a9adb0.jpg) +Figure 20: Tokenizer reconstructions: We show sample reconstructions of surface normals, depth, and semantic segmentation pseudo labels from the CC12M validation set at $224 \times 224$ resolution ( $14 \times 14$ tokens). Quantitative evaluations are provided in Table 31 (last row), confirming that the reconstruction quality does not bottleneck $4\mathrm{M}$ 's out of the box performance. + +# F.2 Text-to-image performance + +In this section, we evaluate 4M's text-to-image capabilities, with detailed quantitative results shown in Table 32. To perform a controlled comparison, we train a pure text-to-image variant of 4M-B, akin to Muse [18], for a total of 300B tokens on CC12M, and using the same RGB tokenizer as the 4M models. 4M trained on all modalities achieves comparable FID and CLIP scores to this specialist model, and at the same time can be conditioned on any pre-training modality and can solve several common vision tasks out of the box. We also compare against the $512 \times 512$ base model of Stable Diffusion 2.1 (SD-2.1) [95] and observe a considerable gap to SOTA generative models on OOD data. It is important to highlight that SD-2.1 was trained on datasets that were two orders of magnitude larger and with a compute budget one order of magnitude greater than 4M-XL. When taking into account the scaling behaviors of comparable token-based text-to-image models such as Muse or MAGE, we anticipate a substantial improvement in the generation quality of 4M models if they were trained under a similar data and computational regime, alongside the use of improved image tokenizers. + +Table 32: Quantitative evaluation of text-to-image generative capabilities: We evaluate the text-to-image capabilities of 4M and compare them with a controlled text-to-image version of 4M (similar to Muse), and Stable Diffusion 2.1-base. 
We compute FID and CLIP-L/14 scores on 30K images of the CC12M and COCO validation sets after resizing the generations to $256 \times 256$. All runs use guidance scale 3.0. In-domain, 4M can approach SD-2.1 and matches the Muse baseline in a controlled setting. On COCO, we observe a larger gap to SD-2.1, which was trained on orders of magnitude more data and compute.
| Model | Res. | Dataset | A100-hours | Text encoder | CC12M val (30K) FID ↓ | CC12M val (30K) CLIP score ↑ | COCO val (30K) FID ↓ | COCO val (30K) CLIP score ↑ |
|---|---|---|---|---|---|---|---|---|
| 4M-B | 224 | CC12M | 2.3k | - | 15.9 | 20.6 | 37.5 | 21.4 |
| 4M-L | 224 | CC12M | 9.2k | - | 11.9 | 18.9 | 30.1 | 23.1 |
| 4M-XL | 224 | CC12M | 24.5k | - | 10.7 | 22.1 | 27.0 | 23.7 |
| 4M-B "Muse" | 224 | CC12M | 1.6k | - | 18.3 | 18.8 | 39.8 | 20.3 |
| SD-2.1-base | 512 | Curated LAION-5B | >200k | CLIP ViT-L/14 | 9.1 | 25.5 | 10.1 | 25.5 |
# G Broader Impact

# G.1 Computational costs

The training time of the 4M models used in the transfer and generative results depends on their size: 4M-B was trained in 1.5 days on 64 A100 GPUs, 4M-L was trained in 3 days on 128 A100 GPUs, and training 4M-XL took 8 days on 128 A100 GPUs.

The reference model used in the ablations can be trained in 12 hours on 32 A100 GPUs, or 2 days on 8 A100 GPUs. Transferring a pre-trained model to the 35 benchmark tasks used for the ablations takes approximately 3 days on 8 A100 GPUs.

# G.2 Social impact

We developed a framework for training general-purpose foundation models that, as demonstrated, can be conveniently re-purposed for various tasks a practitioner may be interested in. We also committed to open-sourcing our code and models. These actions support the democratization of these tools and enable their transparent inspection and safeguarding. While our model is not particularly prone to negative use compared to the alternatives, it should be noted that powerful generative models are a general tool and have the potential to be used in ways the authors did not intend. In addition, the data they are trained on may incorporate various societal biases or contain samples gathered in different ways from the internet. We trained our models on CC12M [19], which is an open-sourced dataset and has been curated to some degree (e.g., people's names are redacted); yet, due to the imperfections in this process, we still advise caution when using the models for generative purposes.
\ No newline at end of file diff --git a/4mmassivelymultimodalmaskedmodeling/images.zip b/4mmassivelymultimodalmaskedmodeling/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..8cdfd3a4c49524cf4ed32754e6d65797d073a97f --- /dev/null +++ b/4mmassivelymultimodalmaskedmodeling/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c608a44a2b334d5be6d125019a4421abc9d6750f457b0be9f1abf8d66fd2b55 +size 3491603 diff --git a/4mmassivelymultimodalmaskedmodeling/layout.json b/4mmassivelymultimodalmaskedmodeling/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a60772fd766512d10f169ab8b7f41341817e9122 --- /dev/null +++ b/4mmassivelymultimodalmaskedmodeling/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea21b1f3d02deacf63bc725ab79dac3d755e64517675a3f1e8c60cbc6fe59152 +size 1116790 diff --git a/abatchtoonlinetransformationunderrandomordermodel/897ed33f-4704-4934-8396-fe8e18bc7cb7_content_list.json b/abatchtoonlinetransformationunderrandomordermodel/897ed33f-4704-4934-8396-fe8e18bc7cb7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..db45845b7b3e63abebb885bea046eecc0098fce7 --- /dev/null +++ b/abatchtoonlinetransformationunderrandomordermodel/897ed33f-4704-4934-8396-fe8e18bc7cb7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d9ef086fdef71b343d691b2472aa398dd58e72292b9ec946a0e992117934929 +size 119318 diff --git a/abatchtoonlinetransformationunderrandomordermodel/897ed33f-4704-4934-8396-fe8e18bc7cb7_model.json b/abatchtoonlinetransformationunderrandomordermodel/897ed33f-4704-4934-8396-fe8e18bc7cb7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c38867de2316bfb9cf820b0f41f125975d1cb424 --- /dev/null +++ b/abatchtoonlinetransformationunderrandomordermodel/897ed33f-4704-4934-8396-fe8e18bc7cb7_model.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:49a5d906c0c89575a01e7e376ea4abd23d3de041cda5a94c9179a88d523c2d9c +size 137817 diff --git a/abatchtoonlinetransformationunderrandomordermodel/897ed33f-4704-4934-8396-fe8e18bc7cb7_origin.pdf b/abatchtoonlinetransformationunderrandomordermodel/897ed33f-4704-4934-8396-fe8e18bc7cb7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b2088897a97a1775a20c7e7cde304ccb08c80f2a --- /dev/null +++ b/abatchtoonlinetransformationunderrandomordermodel/897ed33f-4704-4934-8396-fe8e18bc7cb7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c67653311cf8300eefb6c063446a31528ef3643b972c64720bce29c333abcd55 +size 562973 diff --git a/abatchtoonlinetransformationunderrandomordermodel/full.md b/abatchtoonlinetransformationunderrandomordermodel/full.md new file mode 100644 index 0000000000000000000000000000000000000000..72e6dca1dac47b7dd586d11c48f2c12871a6958c --- /dev/null +++ b/abatchtoonlinetransformationunderrandomordermodel/full.md @@ -0,0 +1,625 @@ +# A Batch-to-Online Transformation under Random-Order Model + +Jing Dong + +The Chinese University of Hong Kong, Shenzhen jingdong@link.cuhk.edu.cn + +Yuichi Yoshida + +National Institute of Informatics yyoshida@nii.ac.jp + +# Abstract + +We introduce a transformation framework that can be utilized to develop online algorithms with low $\epsilon$ -approximate regret in the random-order model from offline approximation algorithms. We first give a general reduction theorem that transforms an offline approximation algorithm with low average sensitivity to an online algorithm with low $\epsilon$ -approximate regret. We then demonstrate that offline approximation algorithms can be transformed into a low-sensitivity version using a coreset construction method. 
To showcase the versatility of our approach, we apply it to various problems, including online $(k,z)$-clustering, online matrix approximation, and online regression, and successfully achieve polylogarithmic $\epsilon$-approximate regret for each problem. Moreover, we show that in all three cases, our algorithm also enjoys low inconsistency, which may be desired in some online applications.

# 1 Introduction

In the online learning literature, stochastic and adversarial settings are two of the most well-studied cases. Although the stochastic setting is not often satisfied in real applications, the guarantees of online algorithms in the adversarial case are considerably weaker. This is particularly true for important online tasks such as $k$-means clustering, where online algorithms admit significantly worse guarantees than their offline or stochastic counterparts [Cohen-Addad et al., 2021]. As a result, their practical applicability is greatly limited.

Recently, the random-order model has been introduced as a means of modeling learning scenarios that fall between the stochastic and adversarial settings [Garber et al., 2020, Sherman et al., 2021]. In the random-order model, the adversary is permitted to choose the set of losses, with full knowledge of the learning algorithm, but has no influence over the order in which the losses are presented to the learner. Instead, the loss sequence is uniformly and randomly permuted. This effectively bridges the gap between the stochastic setting, where only the distribution of losses can be chosen, and the adversarial setting, where the adversary has complete control over the order of the losses presented to the learner.

In this work, we introduce a batch-to-online transformation framework designed specifically for the random-order model. Our framework facilitates the conversion of an offline approximation algorithm into an online learning algorithm with $\epsilon$-approximate regret guarantees.
Our primary technical tool is average sensitivity, which was initially proposed by Varma and Yoshida [2021] to describe an algorithm's average-case sensitivity against input perturbations. We demonstrate that any offline approximation algorithm with low average sensitivity will result in a transformed online counterpart that has low $\epsilon$-approximate regret. To achieve small average sensitivity for offline algorithms, we leverage the idea of a coreset [Agarwal et al., 2005, Har-Peled and Mazumdar, 2004], which is a small but representative subset of a larger dataset that preserves important properties of the original data. We present a coreset construction method that attains low average sensitivity and, when combined with the approximation algorithm, yields an overall algorithm with low average sensitivity.

To showcase the practicality and versatility of our framework, we apply it to three popular online learning problems: online $(k,z)$-clustering, online matrix approximation, and online regression. In all three cases, our approach yields a polylogarithmic $\epsilon$-approximate regret. Furthermore, because of the low average sensitivity of our algorithms, they also enjoy low inconsistency, which is the cumulative number of times the solution changes. This additional property may prove useful in certain online settings. We note that this inconsistency has also been investigated in the classic online learning and multi-armed bandits literature [Agrawal et al., 1988, Cesa-Bianchi et al., 2013].

# 2 Related Works

Average sensitivity Varma and Yoshida [2021] first introduced the notion of average sensitivity and proposed algorithms with low average sensitivity for graph problems such as the minimum spanning tree, minimum cut, and minimum vertex cover problems.
Various other problems have since been analyzed for their average sensitivity, including dynamic programming problems [Kumabe and Yoshida, 2022], spectral clustering [Peng and Yoshida, 2020], Euclidean $k$ -clustering [Yoshida and Ito, 2022], maximum matching problems [Yoshida and Zhou, 2021], and the decision tree learning problem [Hara and Yoshida, 2023].

Online (consistent) $(k,z)$ -clustering While $(k,z)$ -clustering, which includes $k$ -means $(z = 2)$ and $k$ -median $(z = 1)$ as its special cases, has been studied extensively from various perspectives such as combinatorial optimization and probabilistic modeling, it can be NP-hard to obtain the exact solution [Impagliazzo et al., 2001]. Thus, most theoretical work has focused on designing approximation algorithms. In the online setting, Li et al. [2018] proposed a Bayesian adaptive online clustering algorithm that enjoys a minimal sublinear regret. However, the algorithm is allowed to output more than $k$ clusters. Without such an assumption, Cohen-Addad et al. [2021] proposed the first algorithm that attains an $\epsilon$ -approximate regret of $O(k\sqrt{d^3n}\log (\epsilon^{-1}dkn))$ for $k$ -means clustering in the adversarial setting.

In a separate vein, Lattanzi and Vassilvitskii [2017] proposed an online consistent $(k,z)$ -clustering algorithm that produces a $2^{O(z)}$ -approximate solution for the data points obtained so far at each step while maintaining an inconsistency bound of $O(k^2\log^4 n)$ . This implies that their algorithm only updates the output $O(k^2\log^4 n)$ many times. Then, Yoshida and Ito [2022] gave an online algorithm with approximation ratio $(1 + \epsilon)$ and inconsistency $\mathrm{poly}(d,k,2^z,\epsilon^{-1})\cdot \log n$ in the random-order model. We remark that the way losses are computed in Lattanzi and Vassilvitskii [2017] and Yoshida and Ito [2022] differs from the online setting considered by Cohen-Addad et al. [2021] and by this paper.
Online convex optimization and online principal component analysis (PCA) under the random-order model Online optimization in the random-order model was introduced by Garber et al. [2020], who established a bound of $O(\log n)$ for smooth and strongly convex losses. This result was later improved by Sherman et al. [2021], still under the requirement of smooth and convex losses.

The techniques and results were then extended to online PCA in the random-order setting, for which a regret of $O\left(\zeta^{-1}\sqrt{kn}\right)$ was established, where $\zeta$ is an instance-dependent constant. This recovers the regret for online PCA in the stochastic setting [Warmuth and Kuzmin, 2008, Nie et al., 2016]. We remark that PCA can be viewed as a special case of matrix approximation, in which the matrix being approximated is the covariance matrix of the data, and we discuss the more general problem of matrix approximation in this paper.

# 3 Preliminaries

For a positive integer $n$ , let $[n]$ denote the set $\{1, 2, \dots, n\}$ . For real values $a, b \in \mathbb{R}$ , $a \in (1 \pm \epsilon)b$ is a shorthand for $(1 - \epsilon)b \leq a \leq (1 + \epsilon)b$ .

# 3.1 Offline Learning

We consider a general class of learning problems. Let $\mathcal{X}$ be the input space, $\Theta$ be the parameter space, and $\ell : \Theta \times \mathcal{X} \to \mathbb{R}_+$ be a loss function. For simplicity, we assume the loss is bounded, i.e., $\ell(\theta, x) \leq 1$ . Given a set of $n$ data points $X \in \mathcal{X}^n$ , we are asked to learn a parameter $\theta \in \Theta$ that minimizes the objective value $\ell(\theta, X) := \sum_{x \in X} \ell(\theta, x)$ . We call this problem the offline learning problem.

When the exact minimization of the loss function $\ell$ is NP-hard or computationally demanding, one may only hope to obtain an approximate solution efficiently.
Specifically, for $\alpha > 0$ , we say a solution $\theta \in \Theta$ is $\alpha$ -approximate for $X \in \mathcal{X}^n$ if $\ell(\theta, X) \leq \alpha \cdot \min_{\tilde{\theta} \in \Theta} \ell(\tilde{\theta}, X)$ . The value $\alpha$ is called the approximation ratio of the solution. We say a (possibly randomized) algorithm $\mathcal{A}$ is $\alpha$ -approximate if the expected approximation ratio of the output solution is at most $\alpha$ .

# 3.2 Online Learning with the Random-Order Model

In the online learning problem, instead of receiving all points at once, the data arrives sequentially throughout a time horizon $n$ . Specifically, the data points arrive one by one, with $x_{t}$ arriving at time $t \in [n]$ . At time $t$ , using the collected data points $X_{t-1} := (x_{1}, \ldots, x_{t-1})$ , we are asked to output a parameter $\theta_{t} \in \Theta$ . Then we receive the data point $x_{t}$ and incur a loss of $\ell(\theta_{t}, x_{t})$ . In this work, we consider the random-order model, in which the data points $x_{1}, \ldots, x_{n}$ may be chosen adversarially, but their ordering is randomly permuted before the algorithm runs.

To evaluate our performance, we use the notion of regret, which is the cumulative difference between our solution and the best solution in hindsight. In cases where obtaining the exact solution is hard, and one may only hope to obtain an approximate solution efficiently, we use the $\epsilon$ -approximate regret.

Definition 3.1 ( $\epsilon$ -approximate regret for the random-order model). Consider a (randomized) algorithm $\mathcal{A}$ that outputs a sequence of parameters $\theta_1, \ldots, \theta_n$ when given input $x_1, \ldots, x_n$ .
The $\epsilon$ -approximate regret of $\mathcal{A}$ for the random-order model is defined as

$$
\operatorname{Regret}_{\epsilon}(n) := \underset{\mathcal{A}, \{x_t\}}{\mathbb{E}} \left[ \sum_{t=1}^{n} \ell(\theta_t, x_t) - (1 + \epsilon) \cdot \min_{\tilde{\theta} \in \Theta} \sum_{t=1}^{n} \ell(\tilde{\theta}, x_t) \right],
$$

where the randomness is over the internal randomness of $\mathcal{A}$ and the ordering of data points. When $\epsilon = 0$ , we simply call it the regret.

In certain cases, online algorithms are required to maintain a good solution while minimizing inconsistency, which is quantified as the number of times the solution changes. This can be expressed formally as Inconsistency $(n) = \mathbb{E}_{\mathcal{A},\{x_t\}}[\sum_{t=1}^{n-1}\mathbb{I}\{\theta_t \neq \theta_{t+1}\}]$ , where $\mathbb{I}$ is the indicator function.

# 3.3 Average sensitivity

At a high level, the notion of average sensitivity describes how the output of a randomized algorithm changes, on average, when its input is perturbed. This change is captured by the total variation distance, which is defined below.

Definition 3.2. For a measurable space $(\Omega, \mathcal{F})$ and probability measures $P, Q$ defined on $(\Omega, \mathcal{F})$ , the total variation distance between $P$ and $Q$ is defined as $\mathrm{TV}(P, Q) \coloneqq \sup_{A \in \mathcal{F}} |P(A) - Q(A)|$ .

Equipped with this, the average sensitivity of a randomized algorithm is formally defined as the average total variation distance between the algorithm's outputs on two training data sets that differ by deleting one point randomly. For a dataset $X = (x_{1},\ldots ,x_{n})\in \mathcal{X}^{n}$ and $i\in [n]$ , let $X^{(i)}$ denote the set $(x_{1},\dots,x_{i - 1},x_{i + 1},\dots,x_{n})$ obtained by deleting the $i$ -th data point.
Then, the following definition gives a detailed description of the notion:

Definition 3.3 (Average Sensitivity [Varma and Yoshida, 2021, Yoshida and Ito, 2022]). Let $\mathcal{A}$ be a (randomized) algorithm that takes an input $X\in \mathcal{X}^n$ and outputs $\mathcal{A}(X)$ . For $\beta :\mathbb{Z}_{+}\rightarrow \mathbb{R}_{+}$ , we say that the average sensitivity of $\mathcal{A}$ is at most $\beta$ if

$$
\frac {1}{n} \sum_ {i = 1} ^ {n} \operatorname {T V} (\mathcal {A} (X), \mathcal {A} (X ^ {(i)})) \leq \beta (n),
$$

for any $X \in \mathcal{X}^n$ , where we identify $\mathcal{A}(X)$ and $\mathcal{A}(X^{(i)})$ with their distributions.

# 4 Batch-to-Online Transformation in the Random-Order Model

In this section, we describe a general framework that can transform any offline $(1 + \epsilon)$ -approximate algorithm into an online algorithm with low $\epsilon$ -approximate regret. Our goal is to show the following.

Theorem 4.1. Let $\mathcal{A}$ be a (randomized) $(1 + \epsilon)$ -approximate algorithm for the offline learning problem with average sensitivity $\beta : \mathbb{Z}_{+} \to \mathbb{R}_{+}$ . Then, there exists an online learning algorithm in the random-order model such that $\mathrm{Regret}_{\epsilon}(n) = O\left(\sum_{t=1}^{n} \beta(t) + 1\right)$ .

Our method is described in Algorithm 1. Let $\mathcal{A}$ be an approximation algorithm for the offline learning problem. Then, at each time step, based on the collected data $X_{t - 1}$ , we simply output $\theta_t = \mathcal{A}(X_{t - 1})$ .

Algorithm 1: General batch-to-online conversion
Input: Offline approximation algorithm $\mathcal{A}$ .
1 for $t = 1,\dots ,n$ do
2 Obtain $\theta_t$ by running $\mathcal{A}$ on $X_{t - 1}$ .
3 Receive $x_t$ and $\ell (\theta_t,x_t)$ .

To show that Algorithm 1 achieves a low approximate regret when $\mathcal{A}$ has a low average sensitivity, the following lemma is useful.

Lemma 4.2. Let $\mathcal{A}$ be a (randomized) algorithm for the offline learning problem with average sensitivity $\beta : \mathbb{Z}_{+} \to \mathbb{R}_{+}$ . Then for any input $X \in \mathcal{X}^{n}$ , we have

$$
\frac {1}{n} \sum_ {i = 1} ^ {n} \underset {\mathcal {A}} {\mathbb {E}} [ \ell (\mathcal {A} (X ^ {(i)}), x _ {i}) ] = \frac {1}{n} \sum_ {i = 1} ^ {n} \underset {\mathcal {A}} {\mathbb {E}} [ \ell (\mathcal {A} (X), x _ {i}) ] \pm \beta (n),
$$

where $x = a \pm b$ means $a - b \leq x \leq a + b$ .

Proof of Theorem 4.1. Consider Algorithm 1. For any $t \in [n]$ , we have

$$
\begin{array}{l} \underset {\mathcal {A}, \{x _ {i} \}} {\mathbb {E}} \left[ \ell \left(\theta_ {t + 1}, x _ {t + 1}\right) - \frac {1}{t} \ell \left(\theta_ {t + 1}, X _ {t}\right) \right] = \underset {\mathcal {A}, \{x _ {i} \}} {\mathbb {E}} \left[ \frac {1}{t} \sum_ {i = 1} ^ {t} \left(\ell \left(\theta_ {t + 1}, x _ {t + 1}\right) - \ell \left(\theta_ {t + 1}, x _ {i}\right)\right) \right] \\ = \underset {\mathcal {A}, \{x _ {i} \}} {\mathbb {E}} \left[ \frac {1}{t} \sum_ {i = 1} ^ {t} \left(\ell \left(\mathcal {A} \left(X _ {t}\right), x _ {t + 1}\right) - \ell \left(\mathcal {A} \left(X _ {t}\right), x _ {i}\right)\right) \right] \\ \leq \underset {\mathcal {A}, \left\{x _ {i} \right\}} {\mathbb {E}} \left[ \frac {1}{t} \sum_ {i = 1} ^ {t} \left(\ell \left(\mathcal {A} \left(X _ {t}\right), x _ {t + 1}\right) - \ell \left(\mathcal {A} \left(X _ {t} ^ {(i)}\right), x _ {i}\right)\right) \right] + \beta (t) \tag {By Lemma 4.2} \\ = \underset {\mathcal {A}, \{x _ {i} \}} {\mathbb {E}} \left[ \frac {1}{t} \sum_ {i = 1} ^ {t} \left(\ell (\mathcal {A} (X _ {t}), x _ {t + 1}) - \ell (\mathcal {A} (X _ {t} ^ {(i)}), x _ {t + 1})\right) \right] + \beta (t) \\ \leq \underset {\mathcal {A}, \{x _ {i} \}} {\mathbb {E}}
\left[ \frac {1}{t} \sum_ {i = 1} ^ {t} \mathrm {T V} (\mathcal {A} (X _ {t}), \mathcal {A} (X _ {t} ^ {(i)})) \right] + \beta (t) \leq 2 \beta (t), \\ \end{array}
$$

where the last equality follows by replacing $x_{i}$ with $x_{t + 1}$ in $\ell(\mathcal{A}(X_t^{(i)}), x_i)$ because they have the same distribution, the second-to-last inequality holds because the loss is bounded by $1$ , so a difference of expectations is bounded by the total variation distance, and the last inequality is by the average sensitivity of the algorithm.

Rearranging the terms, we have

$$
\underset {\mathcal {A}, \{x _ {i} \}} {\mathbb {E}} \left[ \ell \left(\theta_ {t + 1}, x _ {t + 1}\right) \right] \leq \underset {\mathcal {A}, \{x _ {i} \}} {\mathbb {E}} \left[ \frac {\ell \left(\theta_ {t + 1} , X _ {t}\right)}{t} \right] + 2 \beta (t) \leq \underset {\{x _ {i} \}} {\mathbb {E}} \left[ \frac {(1 + \epsilon) \mathrm {O P T} _ {t}}{t} \right] + 2 \beta (t),
$$

where $\mathrm{OPT}_t\coloneqq \min_\theta \ell (\theta ,X_t)$ is the optimal value with respect to $X_{t}$ , and the second inequality holds because the approximation ratio of $\theta_{t + 1}$ is $1 + \epsilon$ in expectation.

Summing over $t$ , we have

$$
\underset {\mathcal {A}, \{x _ {i} \}} {\mathbb {E}} \left[ \sum_ {t = 1} ^ {n} \ell (\theta_ {t}, x _ {t}) \right] = \underset {\mathcal {A}, \{x _ {i} \}} {\mathbb {E}} \left[ \ell (\theta_ {1}, x _ {1}) \right] + \underset {\mathcal {A}, \{x _ {i} \}} {\mathbb {E}} \left[ \sum_ {t = 1} ^ {n - 1} \ell (\theta_ {t + 1}, x _ {t + 1}) \right]
$$

$$
\leq 1 + \underset {\{x _ {i} \}} {\mathbb {E}} \left[ \sum_ {t = 1} ^ {n - 1} \frac {(1 + \epsilon) \mathrm {O P T} _ {t}}{t} \right] + 2 \sum_ {t = 1} ^ {n - 1} \beta (t).
$$

Fix the ordering $x_{1}, \ldots, x_{n}$ , and let $c_{i}$ ( $i \in [n]$ ) be the loss incurred by $x_{i}$ in $\mathrm{OPT}_n$ . In particular, we have $\mathrm{OPT}_n = \sum_{i=1}^{n} c_{i}$ . Note that the $c_{i}$ 's are random variables depending on the ordering of data points, but their sum, $\mathrm{OPT}_n$ , is deterministic.
Then, we have $\mathrm{OPT}_t \leq \sum_{i=1}^{t} c_{i}$ because $\mathrm{OPT}_t$ minimizes the loss up to time $t$ . Hence, we have

$$
\begin{array}{l} \underset {\{x _ {i} \}} {\mathbb {E}} \left[ \sum_ {t = 1} ^ {n} \frac {\mathrm {O P T} _ {t}}{t} \right] \leq \underset {\{x _ {i} \}} {\mathbb {E}} \left[ \sum_ {t = 1} ^ {n} \frac {\sum_ {i = 1} ^ {t} c _ {i}}{t} \right] = \underset {\{x _ {i} \}} {\mathbb {E}} \left[ \sum_ {i = 1} ^ {n} c _ {i} \sum_ {t = i} ^ {n} \frac {1}{t} \right] = \sum_ {i = 1} ^ {n} \underset {\{x _ {i} \}} {\mathbb {E}} [ c _ {i} ] \sum_ {t = i} ^ {n} \frac {1}{t} \\ = \frac {\mathrm {O P T} _ {n}}{n} \cdot \sum_ {i = 1} ^ {n} \sum_ {t = i} ^ {n} \frac {1}{t} = \frac {\mathrm {O P T} _ {n}}{n} \cdot n = \mathrm {O P T} _ {n}. \\ \end{array}
$$

Therefore, we have

$$
\mathbb {E} _ {\mathcal {A}, \{x _ {i} \}} \left[ \sum_ {t = 1} ^ {n} \ell (\theta_ {t}, x _ {t}) \right] - (1 + \epsilon) \mathrm {O P T} _ {n} = O \left(\sum_ {t = 1} ^ {n} \beta (t) + 1\right).
$$

# 5 Approximation Algorithm with Low Average Sensitivity via Coreset

To design approximation algorithms for the offline learning problem with low average sensitivity, we consider the following approach: We first construct, with small average sensitivity, a small subset of the input that preserves the objective function well, called a coreset, and then apply any known approximation algorithm on the coreset. A coreset is formally defined as follows:

Definition 5.1 (Har-Peled and Mazumdar [2004], Agarwal et al. [2005]). Let $\ell : \Theta \times \mathcal{X} \to \mathbb{R}_+$ be a loss function and let $X \in \mathcal{X}^n$ .
For $\epsilon > 0$ , we say that a weighted set $(Y, w)$ with $Y \subseteq X$ and $w: Y \to \mathbb{R}_+$ is an $\epsilon$ -coreset of $X$ with respect to $\ell$ if for any $\theta \in \Theta$ , we have $\sum_{y \in Y} w(y) \ell(\theta, y) \in (1 \pm \epsilon) \sum_{x \in X} \ell(\theta, x)$ .

Now, we consider a popular method for constructing coresets based on importance sampling and show that it enjoys a low average sensitivity. For a data point $x \in X$ , its sensitivity $\sigma_X(x)$ is its maximum contribution to the loss of the whole dataset, or more formally

$$
\sigma_ {X} (x) = \sup _ {\theta \in \Theta} \frac {\ell (\theta , x)}{\ell (\theta , X)}. \tag {1}
$$

Algorithm 2: Coreset Construction Based on Sensitivity Sampling
Input: Loss function $\ell :\Theta \times \mathcal{X}\to \mathbb{R}_{+}$ , dataset $X\in \mathcal{X}^n$ , $m\in \mathbb{N}$ , and $\epsilon >0$ .
1 For each $x\in X$ , compute $\sigma_X(x)$ and set $p(x) = \sigma_X(x) / \sum_{x'\in X}\sigma_X(x')$ .
2 Let $S$ be an empty set.
3 for $i = 1,\ldots ,m$ do
4 Sample $x$ with probability $p(x)$ .
5 Sample $\tilde{p}$ from $[p(x),(1 + \epsilon /2)p(x)]$ uniformly at random.
6 if $w(x)$ is undefined then
7 $S \gets S \cup \{x\}$ .
8 $w(x) \gets 1/\tilde{p}$ .
9 else
10 $w(x) \gets w(x) + 1/\tilde{p}$ .
11 return $(S,w)$ .

It is known that we can construct a coreset as follows: A data point $x \in X$ is sampled with probability $p(x) \coloneqq \sigma_X(x) / \sum_{x' \in X} \sigma_X(x')$ , and then its weight in the output coreset is increased by $1 / \tilde{p}$ , where $\tilde{p}$ is a slight perturbation of $p(x)$ . This process is repeated a fixed number of times, where the exact number depends on the desired approximation ratio of the coreset. See Algorithm 2 for details. We can bound its average sensitivity as follows:

Lemma 5.2. The average sensitivity of Algorithm 2 is $O\left(\epsilon^{-1}m / n\right)$ .
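As a concrete illustration, the sampling loop of Algorithm 2 can be sketched in Python as follows. The `sensitivity` oracle is an assumed input here (computing or upper-bounding $\sigma_X(x)$ is problem-specific), and the weight increment $1/\tilde{p}$ follows the listing above.

```python
import random


def sensitivity_coreset(X, sensitivity, m, eps, seed=0):
    """Sketch of Algorithm 2: coreset construction via sensitivity sampling.

    `sensitivity` maps a data point to (an upper bound on) its
    sensitivity sigma_X(x); how to compute it is problem-specific.
    """
    rng = random.Random(seed)
    total = sum(sensitivity(x) for x in X)
    p = {x: sensitivity(x) / total for x in X}  # sampling distribution
    S, w = [], {}
    for _ in range(m):
        x = rng.choices(X, weights=[p[y] for y in X])[0]
        # Perturbed probability p_tilde ~ Uniform[p(x), (1 + eps/2) p(x)];
        # this smoothing is what yields low average sensitivity (Lemma 5.2).
        p_tilde = rng.uniform(p[x], (1 + eps / 2) * p[x])
        if x not in w:
            S.append(x)
            w[x] = 1 / p_tilde
        else:
            w[x] += 1 / p_tilde
    return S, w
```

The returned pair $(S, w)$ plays the role of the weighted set in Definition 5.1; repeated draws of the same point accumulate weight rather than creating duplicates.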
A general bound on the number of times we need to repeat the process, i.e., $m$ in Algorithm 2, to obtain an $\epsilon$ -coreset is known (see, e.g., Theorem 5.5 of Braverman et al. [2016]). However, we do not discuss it here because better bounds are known for specific problems and we do not use the general bound in the subsequent sections.

# 6 Online $(k,z)$ -Clustering

In online applications, unlabelled data are abundant and their structure can be essential, and clustering serves as an important tool for analyzing them. In this section, as an application of our general batch-to-online transformation, we describe an online $(k,z)$ -clustering method that enjoys low regret.

# 6.1 Problem setup

The online $(k,z)$ -clustering problem [Cohen-Addad et al., 2021] is an instance of the general online learning problem described in Section 3. We describe the problem as follows: Let $k\geq 1$ be an integer and $z\geq 1$ be a real value. Over a time horizon $n$ , at each time step $t$ , a data point $x_{t}\in \mathbb{R}^{d}$ is given. Using the set of data points $X_{t - 1} = \{x_1,\ldots ,x_{t - 1}\}$ , we are asked to compute a set $Z_{t} = \{z_{1},\dots ,z_{k}\}$ of $k$ points in $\mathbb{R}^d$ that minimizes $\ell (Z_t,x_t)\coloneqq \min_{j = 1,\dots,k}\| x_t - z_j\| _2^z$ , which is the $z$ -th power of the Euclidean distance between $x_{t}$ and the closest point in $Z_{t}$ . Note that $Z_{t}$ plays the role of $\theta_{t}$ in the general online learning problem. The regret and $\epsilon$ -approximate regret are defined accordingly.

# 6.2 Method and results

One important ingredient of our method is the coreset construction method proposed by Huang and Vishnoi [2020]. The method provides a unified two-stage importance sampling framework, which allows for a coreset whose size is independent of the dimension.
Specifically, the method constructs an $\epsilon$ -coreset of size $\tilde{O}\left(\min \left\{\epsilon^{-2z - 2}k,2^{2z}\epsilon^{-4}k^2\right\}\right)$ in $\tilde{O}(ndk)$ time, where the $\tilde{O}$ hides polylogarithmic factors in $n$ and $k$ . We remark that the importance sampling steps in the framework are similar to the ones described in Section 5, which thus allows us to analyze its average sensitivity.

Algorithm 3 gives a brief description of our algorithm, while a detailed description is presented in the appendix. The algorithm adheres to the standard transformation approach, whereby an offline approximation algorithm is run on the coreset derived from the aggregated data.

Algorithm 3: Online consistent $(k,z)$ -clustering
Input: Offline algorithm $\mathcal{A}$ for $(k,z)$ -clustering, approximation ratio $1 + \epsilon$ , $\epsilon \in (0,1)$ .
1 $\epsilon' \gets \epsilon/3$ .
2 for $t = 1,\ldots,n$ do
3 Construct an $\epsilon'$ -coreset $C_{t-1} = (S_{t-1},\omega_{t-1})$ on $X_{t-1}$ .
4 Obtain a cluster set $Z_t$ by running a PTAS $\mathcal{A}$ with approximation ratio $(1 + \epsilon')$ on $C_{t-1}$ .
5 Receive $x_t \in \mathbb{R}^d$ and $\ell(Z_t,x_t) \in \mathbb{R}_+$ .

Theorem 6.1. For any $\epsilon \in (0,1)$ , Algorithm 3 gives a regret bound of

$$
\operatorname{Regret}_{\epsilon}(n) \leq O\left(\left((168z)^{10z}\epsilon^{-5z-15}k^{5}\log \frac{kn}{\epsilon} + \epsilon^{-2z-2}k\log k\log \frac{kn}{\epsilon}\right)\log n\right).
$$

Moreover, there exists an algorithm that enjoys the same regret bound and an inconsistency bound of Inconsistency $(n) = O\left(\left((168z)^{10z}\epsilon^{-5z - 15}k^5\log (\epsilon^{-1}kn) + \epsilon^{-2z - 2}k\log k\log (\epsilon^{-1}kn)\right)\log n\right)$ for $(k,z)$ -clustering.

Remark 6.1.
When $z = 2$ , previous results for the adversarial setting show an $\epsilon$ -approximate regret bound of $O(k\sqrt{d^3n}\log (\epsilon^{-1}dkn))$ [Cohen-Addad et al., 2021]. In comparison, although our regret is for the random-order model, our method and results accommodate a range of values for $z$ , and the regret bound is only polylogarithmically dependent on $n$ and is independent of the dimension $d$ .

# 7 Online Low-Rank Matrix Approximation

Low-rank matrix approximation serves as a fundamental tool in statistics and machine learning. The problem is to find a rank- $k$ matrix that approximates an input matrix $\mathbf{A} \in \mathbb{R}^{d \times n}$ as closely as possible. In this section, we apply our transformation framework to an offline approximation algorithm to obtain a low-regret online algorithm.

# 7.1 Problem setup

Low-rank matrix approximation By the singular value decomposition (SVD), a rank- $r$ matrix $\mathbf{A} \in \mathbb{R}^{d \times n}$ can be decomposed as $\mathbf{A} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{\top}$ , where $\mathbf{U} \in \mathbb{R}^{d \times r}$ and $\mathbf{V} \in \mathbb{R}^{n \times r}$ are orthonormal matrices, and $\boldsymbol{\Sigma} \in \mathbb{R}^{r \times r}$ is a diagonal matrix with $\mathbf{A}$ 's singular values on the diagonal. The best rank- $k$ approximation of $\mathbf{A}$ is given by

$$
\mathbf{A}_{k} = \mathbf{U}_{k}\boldsymbol{\Sigma}_{k}\mathbf{V}_{k}^{\top} = \operatorname *{argmin}_{\mathbf{B}\in \mathbb{R}^{d\times n}:\mathrm{rank}(\mathbf{B})\leq k}\| \mathbf{A} - \mathbf{B}\|_{F} ,
$$

where $\| \cdot \|_F$ denotes the Frobenius norm, $\Sigma_k \in \mathbb{R}^{k \times k}$ is a diagonal matrix with $\mathbf{A}$ 's top $k$ singular values on the diagonal, and $\mathbf{U}_k \in \mathbb{R}^{d \times k}$ and $\mathbf{V}_k \in \mathbb{R}^{n \times k}$ are orthonormal matrices obtained from $\mathbf{U}$ and $\mathbf{V}$ , respectively, by gathering corresponding columns.
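Concretely, $\mathbf{A}_k$ can be computed with a truncated SVD; a minimal numpy sketch (the matrix shape is illustrative):

```python
import numpy as np


def best_rank_k(A, k):
    """Best rank-k approximation of A in Frobenius norm, via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep the top-k singular triplets: U_k Sigma_k V_k^T.
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```

By the Eckart-Young theorem, the Frobenius error of this approximation equals the square root of the sum of the squared tail singular values of $\mathbf{A}$.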
The best rank- $k$ approximation can also be found by projecting $\mathbf{A}$ onto the span of its top $k$ singular vectors, that is, $\mathbf{A}_k = \mathbf{U}_k \mathbf{U}_k^\top \mathbf{A}$ . Then, we say that an orthonormal matrix $\mathbf{Z}$ is an $\epsilon$ -approximate solution if

$$
\left\| \mathbf {A} - \mathbf {Z Z} ^ {\top} \mathbf {A} \right\| _ {F} \leq (1 + \epsilon) \left\| \mathbf {A} - \mathbf {U} _ {k} \mathbf {U} _ {k} ^ {\top} \mathbf {A} \right\| _ {F}.
$$

The matrix approximation problem serves as an important tool in data analytics and is closely related to numerous machine learning methods such as principal component analysis and least squares analysis. When dealing with streaming data, the online version of the matrix approximation problem becomes a vital tool for designing online versions of the machine learning algorithms mentioned above.

Online matrix approximation Over a time horizon $n$ , we receive a column of $\mathbf{A}$ , $a_{t} \in \mathbb{R}^{d}$ , at each time step $t$ . We are then asked to compute $\mathbf{Z}_{t} \in \mathbb{R}^{d \times k}$ that minimizes

$$
\ell (\mathbf {Z} _ {t}, a _ {t}) = \left\| a _ {t} - \mathbf {Z} _ {t} \mathbf {Z} _ {t} ^ {\top} a _ {t} \right\| _ {F}.
$$

Without loss of generality, we will assume that the losses are bounded in $[0, 1]$ . We remark that similar assumptions are also made in Nie et al. [2016].

The online matrix approximation problem serves as a core component of online machine learning algorithms such as principal component analysis. These algorithms are important to a range of applications, such as online recommendation systems and online experimental design [Warmuth and Kuzmin, 2008, Nie et al., 2016].

# 7.2 Method and results

In the context of low-rank matrix approximation, the analogue of a coreset is called a projection-cost preserving sample, defined as follows:

Definition 7.1 (Rank- $k$ Projection-Cost Preserving Sample Cohen et al.
[2017]). For $n' < n$ , a subset of rescaled columns $\mathbf{C} \in \mathbb{R}^{d \times n'}$ of $\mathbf{A} \in \mathbb{R}^{d \times n}$ is a $(1 + \epsilon)$ projection-cost preserving sample if, for all rank- $k$ orthogonal projection matrices $\mathbf{X} \in \mathbb{R}^{d \times d}$ , $(1 - \epsilon)\| \mathbf{A} - \mathbf{X}\mathbf{A}\|_F^2 \leq \| \mathbf{C} - \mathbf{X}\mathbf{C}\|_F^2 \leq (1 + \epsilon)\| \mathbf{A} - \mathbf{X}\mathbf{A}\|_F^2$ .

Sketches satisfying Definition 7.1 can be constructed via importance sampling-based routines that rely on a modification of the "leverage scores". Specifically, for the $i$ -th column $a_{i}$ of the matrix $\mathbf{A}$ , the ridge leverage score is defined as $\tau_{i}(\mathbf{A}) = a_{i}^{\top}\left(\mathbf{A}\mathbf{A}^{\top} + \frac{\|\mathbf{A} - \mathbf{A}_{k}\|_{F}^{2}}{k}\mathbf{I}\right)^{\dagger}a_{i}$ , where $\dagger$ denotes the Moore-Penrose pseudoinverse of a matrix [Cohen et al., 2017].

Now, we introduce our online matrix approximation algorithm in Algorithm 4, which builds upon our transformation framework. It computes the approximation of the matrix from the sketch derived from the aggregated matrix using ridge leverage scores.

Algorithm 4: Online low-rank matrix approximation
Input: Approximation parameter $\epsilon \in (0,1)$ .
1 Set $\delta = O(\epsilon /n)$ and $m = O\left(\epsilon^{-2}k\log (\delta^{-1}k)\right)$ .
2 for $t = 1,\ldots ,n$ do
3 Construct $\mathbf{A}_{t - 1}\in \mathbb{R}^{d\times (t - 1)}$ by concatenating $a_1,\dots, a_{t - 1}$ .
4 Let $\mathbf{C}_{t - 1}\in \mathbb{R}^{d\times m}$ be the zero matrix.
5 for $j = 1,\ldots ,m$ do
6 Sample the $i$ -th column $a_i\in \mathbb{R}^d$ of $\mathbf{A}_{t - 1}$ with probability $p_i\coloneqq \frac{\tau_i(\mathbf{A}_{t - 1})}{\sum_{j = 1}^{t - 1}\tau_j(\mathbf{A}_{t - 1})}$ .
7 Sample $w\in \mathbb{R}$ uniformly from $[1 / \sqrt{mp_i},(1 + \epsilon) / \sqrt{mp_i} ]$ .
8 Replace the $j$ -th column of $\mathbf{C}_{t - 1}$ with $w\cdot a_{i}$ .
9 Set $\mathbf{Z}_t\in \mathbb{R}^{d\times k}$ to the top $k$ left singular vectors of $\mathbf{C}_{t - 1}$ .
10 Receive $a_{t}\in \mathbb{R}^{d}$ and $\ell (\mathbf{Z}_t,a_t)\in \mathbb{R}_+$ .

Theorem 7.2. For any $\epsilon \in (0,1)$ , Algorithm 4 has regret $\mathrm{Regret}_{\epsilon}(n) = O\left(\epsilon^{-2}k\log n\log (\epsilon^{-1}kn)\right)$ . Moreover, there exists an algorithm for online low-rank matrix approximation that enjoys the same regret bound and an inconsistency bound of Inconsistency $(n) = O\left(\epsilon^{-2}k\log n\log (\epsilon^{-1}kn)\right)$ .

Remark 7.1. Online matrix approximation in the random-order setting has previously been investigated in the context of principal component analysis by Garber et al. [2020]. They established a regret of $O\left(\zeta^{-1}\sqrt{kn}\right)$ , where $\zeta$ is the smallest difference between two eigenvalues of $\mathbf{A}_t\mathbf{A}_t^\top$ . In contrast, our result gives a polylogarithmic $\epsilon$ -approximate regret, which translates to an exact regret of $O\left(\epsilon \mathrm{OPT} + \epsilon^{-2}k\log n\log (\epsilon^{-1}kn)\right)$ , with OPT being the minimum possible cumulative loss attained by the best approximation in hindsight.

# 8 Online Regression

In the online regression problem, at each time step $t \in [n]$ , we are asked to output a vector $x_{t} \in \mathbb{R}^{d}$ ; then we receive a vector $a_{t} \in \mathbb{R}^{d}$ and a scalar $b_{t} \in \mathbb{R}$ , and incur the loss $\ell(x_{t}, a_{t}, b_{t}) = \|a_{t}^{\top} x_{t} - b_{t}\|_{2}$ .
Without loss of generality, we assume that the losses are bounded in $[0, 1]$ . We note that similar assumptions are also made in [Cesa-Bianchi et al., 1996, Ouhamma et al., 2021].

With our general reduction framework, we show an $\epsilon$ -regret upper bound as follows.

Theorem 8.1. For any $\epsilon \in (0,1)$ , Algorithm 5 has regret $\mathrm{Regret}_{\epsilon}(n) = O\left(\epsilon^{-2}d\log n\log (\epsilon^{-1}dn)\right)$ . Moreover, there exists an algorithm for online regression that enjoys the same regret bound and an inconsistency bound of Inconsistency $(n) = O\left(\epsilon^{-2}d\log n\log (\epsilon^{-1}dn)\right)$ .

Remark 8.1. In the stochastic setting, the online regression problem has been extensively investigated [Foster, 1991, Littlestone et al., 1995, Cesa-Bianchi et al., 1996, Ouhamma et al., 2021]. Using online ridge regression or forward algorithms, the regret is shown to be $O(d\log n)$ . In the random-order model, Garber et al. [2020] and Sherman et al. [2021] give $O(\sqrt{n})$ -type regret when the matrix $\mathbf{A}$ has a small condition number. In comparison, our result attains polylogarithmic $\epsilon$ -approximate regret without any requirement on the loss function or the condition number. Our result can be translated to an exact regret of $O\left(\epsilon \mathrm{OPT} + \epsilon^{-2}d\log n\log (\epsilon^{-1}dn)\right)$ , with OPT being the minimum possible cumulative loss attained by the best parameter in hindsight.

# 8.1 Method and results

Similar to the low-rank matrix approximation problem, we utilize the leverage score method to learn a subspace that preserves information regarding the regression. Specifically, we use the leverage score to learn an $\epsilon$ -subspace embedding, which is defined as follows.

Definition 8.2 ( $\epsilon$ -Subspace Embedding).
A matrix $\mathbf{S} \in \mathbb{R}^{m \times n}$ is said to be an $\epsilon$ -subspace embedding of $\mathbf{A} \in \mathbb{R}^{n \times d}$ if for any vector $x \in \mathbb{R}^d$ , we have $(1 - \epsilon)\| \mathbf{A}x \| \leq \| \mathbf{S}\mathbf{A}x \| \leq (1 + \epsilon)\| \mathbf{A}x \|$ .

The subspace embedding serves the same role as a coreset in online regression: it preserves the loss information while having a much smaller dimension. In the context of online regression, we define the leverage score as follows.

Definition 8.3 (Leverage Score). Let $\mathbf{A} = \mathbf{U}\pmb{\Sigma}\mathbf{V}^{\top}$ be the singular value decomposition of $\mathbf{A} \in \mathbb{R}^{n\times d}$ . For $i \in [n]$ , the $i$ -th leverage score of $\mathbf{A}$ is defined as $\tau_{i} = \| \mathbf{U}_{i,:}\|_{2}^{2}$ .

With the leverage score, we propose Algorithm 5. The algorithm follows the general transformation framework, where the regression problem is solved at every step with the sketch derived from the aggregated matrix using leverage scores. For notational convenience, we construct the sketch by appending rows instead of columns as we did in Section 7.

Algorithm 5: Online consistent regression
Input: Approximation parameter $\epsilon \in (0,1)$ .
1 Set $\delta = O(\epsilon /n)$ and $m = O\left(\epsilon^{-2}d\log (\delta^{-1}d)\right)$ .
2 for $t = 1,\dots ,n$ do
3 Construct $\mathbf{A}_{t - 1}\in \mathbb{R}^{(t - 1)\times d}$ by stacking $a_1^\top ,\ldots, a_{t - 1}^\top$ .
4 Construct $b\in \mathbb{R}^{t - 1}$ by stacking $b_{1},\ldots ,b_{t - 1}$ .
5 Let $\mathbf{S}^t\in \mathbb{R}^{m\times (t - 1)}$ be the zero matrix.
6 for $j = 1,\dots ,m$ do
7 Sample $i\in [t - 1]$ with probability $p_i\coloneqq \frac{\tau_i(\mathbf{A}_{t - 1})}{\sum_{j = 1}^{t - 1}\tau_j(\mathbf{A}_{t - 1})}$ .
8 Sample $w\in \mathbb{R}$ uniformly from $\left[\frac{1}{\sqrt{mp_i}},\frac{1 + \epsilon}{\sqrt{mp_i}}\right]$ .
9 Replace the $j$ -th row of $\mathbf{S}^t$ with $w\cdot e_i^\top$ , where $e_i\in \mathbb{R}^{t - 1}$ is a one-hot vector with 1 on the $i$ -th index.
10 Solve the regression problem $x_{t} = \operatorname{argmin}_{x}\| \mathbf{S}^{t}\mathbf{A}_{t - 1}x - \mathbf{S}^{t}b\|_{2}$ , e.g., by an iterative method such as Newton's method.
11 Receive $a_{t}\in \mathbb{R}^{d}$ , $b_{t}\in \mathbb{R}$ , and the loss $\ell (x_t,a_t,b_t)$ .

The subspace embedding result of Woodruff [2014] immediately shows the following:

Theorem 8.4. For any $\epsilon, \delta \in (0,1)$ , if $m = O\left(\epsilon^{-2}d\log (\delta^{-1}d)\right)$ , then with probability $\geq 1 - \delta$ , $\mathbf{S}^t$ is an $\epsilon$ -subspace embedding for $\mathbf{A}_{t-1}$ with $O\left(\epsilon^{-2}d\log (\delta^{-1}d)\right)$ rows.

To obtain Theorem 8.1, we first analyze the average sensitivity of leverage score sampling. Then, with Theorem 8.4 and the general reduction of Theorem 4.1, we obtain the regret bound.

# 9 Experiments

We provide a preliminary empirical evaluation of our framework in the context of online $k$ -means clustering and online linear regression, with the results shown in Figure 1. Our experiments are conducted with various approximation ratios and experimental setups ( $\epsilon = 0.1, 0.01, 0.001$ , with $k = 3$ or $k = 5$ clusters). We then compare the performance of the proposed algorithm to the hindsight optimal solution. For $k$ -means clustering, we obtain the hindsight optimal solution by applying $k$ -means++ to all the data. In the context of regression, we utilize the least squares formula to compute the hindsight optimal solution.
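For reference, the hindsight-optimal baseline for the regression experiment amounts to an ordinary least-squares solve over all the data; a minimal numpy sketch (data shapes are illustrative):

```python
import numpy as np


def hindsight_optimal(A, b):
    """Least-squares parameter minimizing ||A x - b||_2 over the full data."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

When the data are exactly consistent (i.e., $b = \mathbf{A}x^*$ for some $x^*$ and $\mathbf{A}$ has full column rank), this recovers $x^*$ exactly.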
Our experimental results demonstrate that the proposed algorithm is highly effective, and its performance aligns with our theoretical findings.

![](images/59c34ae98c34b818d1e5526f141f01f0c519cc07dcf0be006a4dfe8a51cf8c61.jpg)
(a) $k$-means, 3 clusters

![](images/d90bc8718ed75407d8a851221c0a7885a043aa6f4c4cf9cfdaa95568ed0ed64b.jpg)
(b) $k$-means, 5 clusters

![](images/dccabd59cf8aaf372ae948077d6432e8d3562351ddf37f37e5587fe19f3c3ad4.jpg)
(c) Regression

Figure 1: Experimental results for $k$-means clustering with 3 and 5 clusters, and for online regression. Each experiment is repeated with 5 different random seeds to ensure reproducible results. The shaded region indicates one standard deviation.

# 10 Conclusion

In this paper, we proposed a batch-to-online transformation framework that designs consistent online approximation algorithms from offline approximation algorithms. Our framework transforms an offline approximation algorithm with low average sensitivity into an online algorithm with low approximate regret. We then show a general method that can transform any offline approximation algorithm into one with low sensitivity by using a stable coreset. To demonstrate the generality of our framework, we applied it to online $(k,z)$-clustering, online matrix approximation, and online regression. Through the transformation result, we obtain polylogarithmic approximate regret for all of the problems mentioned.

# Acknowledgement

This work is supported by JSPS KAKENHI Grant Numbers 20H05965 and 22H05001.

# References

Pankaj K Agarwal, Sariel Har-Peled, Kasturi R Varadarajan, et al. Geometric approximation via coresets. Combinatorial and computational geometry, 52(1):1-30, 2005.
Rajeev Agrawal, MV Hegde, and Demosthenis Teneketzis. Asymptotically efficient adaptive allocation rules for the multiarmed bandit problem with switching cost. IEEE Transactions on Automatic Control, 33(10):899-906, 1988.
Vladimir Braverman, Dan Feldman, Harry Lang, Adiel Statman, and Samson Zhou. New frameworks for offline and streaming coreset constructions. arXiv preprint arXiv:1612.00889, 2016.
Nicolo Cesa-Bianchi, Philip M Long, and Manfred K Warmuth. Worst-case quadratic loss bounds for prediction using linear functions and gradient descent. IEEE Transactions on Neural Networks, 7(3):604-619, 1996.
Nicolo Cesa-Bianchi, Ofer Dekel, and Ohad Shamir. Online learning with switching costs and other adaptive adversaries. Advances in Neural Information Processing Systems, 2013.
Michael B Cohen, Cameron Musco, and Christopher Musco. Input sparsity time low-rank approximation via ridge leverage score sampling. In Proceedings of the 2017 Annual ACM-SIAM Symposium on Discrete Algorithms, 2017.
Vincent Cohen-Addad, Benjamin Guedj, Varun Kanade, and Guy Rom. Online $k$-means clustering. In International Conference on Artificial Intelligence and Statistics, 2021.
Dan Feldman and Michael Langberg. A unified framework for approximating and clustering data. In Proceedings of the 43rd Annual ACM SIGACT Symposium on Theory of Computing, 2011.

Dean P Foster. Prediction in the worst case. The Annals of Statistics, pages 1084-1090, 1991.
Dan Garber, Gal Korcia, and Kfir Levy. Online convex optimization in the random order model. In International Conference on Machine Learning, 2020.
Sariel Har-Peled and Soham Mazumdar. On coresets for $k$-means and $k$-median clustering. In Proceedings of the 36th Annual ACM SIGACT Symposium on Theory of Computing, 2004.
Satoshi Hara and Yuichi Yoshida. Average sensitivity of decision tree learning. In International Conference on Learning Representations, 2023.
Lingxiao Huang and Nisheeth K Vishnoi. Coresets for clustering in Euclidean spaces: importance sampling is nearly optimal. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, 2020.
Russell Impagliazzo, Ramamohan Paturi, and Francis Zane.
Which problems have strongly exponential complexity? Journal of Computer and System Sciences, 63(4):512-530, 2001.
Soh Kumabe and Yuichi Yoshida. Average sensitivity of dynamic programming. In Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms, 2022.
Silvio Lattanzi and Sergei Vassilvitskii. Consistent $k$-clustering. In International Conference on Machine Learning, 2017.
Le Li, Benjamin Guedj, and Sébastien Loustau. A quasi-Bayesian perspective to online clustering. Electronic Journal of Statistics, 12(2):3071-3113, 2018.
Nicholas Littlestone, Manfred K Warmuth, and Philip M Long. On-line learning of linear functions. Computational Complexity, 5:1-23, 1995.
Jiazhong Nie, Wojciech Kotlowski, and Manfred K Warmuth. Online PCA with optimal regret. The Journal of Machine Learning Research, 17(1):6022-6070, 2016.
Reda Ouhamma, Odalric-Ambrym Maillard, and Vianney Perchet. Stochastic online linear regression: the forward algorithm to replace ridge. Advances in Neural Information Processing Systems, 2021.
Pan Peng and Yuichi Yoshida. Average sensitivity of spectral clustering. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020.
Uri Sherman, Tomer Koren, and Yishay Mansour. Optimal rates for random order online optimization. Advances in Neural Information Processing Systems, 2021.
Nithin Varma and Yuichi Yoshida. Average sensitivity of graph algorithms. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms, 2021.
Manfred K Warmuth and Dima Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9(Oct):2287-2320, 2008.
David P Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends® in Theoretical Computer Science, 10(1-2):1-157, 2014.
Yuichi Yoshida and Shinji Ito. Average sensitivity of Euclidean $k$-clustering. In Advances in Neural Information Processing Systems, 2022.
Yuichi Yoshida and Samson Zhou. Sensitivity analysis of the maximum matching problem. In Innovations in Theoretical Computer Science Conference, pages 58:1-58:20, 2021.

# A Proofs for Section 4

Lemma 4.2. Let $\mathcal{A}$ be a (randomized) algorithm for the offline learning problem with average sensitivity $\beta : \mathbb{Z}_{+} \to \mathbb{R}_{+}$. Then for any input $X \in \mathcal{X}^{n}$, we have

$$
\frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\mathcal{A}}[\ell(\mathcal{A}(X^{(i)}), x_i)] = \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\mathcal{A}}[\ell(\mathcal{A}(X), x_i)] \pm \beta(n),
$$

where $x = a\pm b$ means $a - b\leq x\leq a + b$.

Proof. We have

$$
\begin{array}{l}
\frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\mathcal{A}}[\ell(\mathcal{A}(X^{(i)}), x_i)] \\
\leq \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\mathcal{A}}[\ell(\mathcal{A}(X), x_i)] + \frac{1}{n} \sum_{i=1}^{n} \left| \mathbb{E}_{\mathcal{A}}[\ell(\mathcal{A}(X), x_i)] - \mathbb{E}_{\mathcal{A}}[\ell(\mathcal{A}(X^{(i)}), x_i)] \right| \\
\leq \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\mathcal{A}}[\ell(\mathcal{A}(X), x_i)] + \frac{1}{n} \sum_{i=1}^{n} \mathrm{TV}(\mathcal{A}(X), \mathcal{A}(X^{(i)})) \\
\leq \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\mathcal{A}}[\ell(\mathcal{A}(X), x_i)] + \beta(n).
\end{array}
$$

The other direction can be shown analogously.

# B Proofs for Section 5

In this section, we prove Lemma 5.2.

Lemma B.1. For any $i \in [n]$ and $x \in X^{(i)}$, let $\theta^{(i)} = \operatorname{argmax}_{\theta} \frac{\ell(\theta, x)}{\sum_{x' \in X^{(i)}} \ell(\theta, x')}$.
Then, we have

$$
0 \leq \sigma_{X^{(i)}}(x) - \sigma_X(x) \leq \frac{\ell(\theta^{(i)}, x_i) \cdot \ell(\theta^{(i)}, x)}{\sum_{x' \in X^{(i)}} \ell(\theta^{(i)}, x') \cdot \sum_{x' \in X} \ell(\theta^{(i)}, x')}.
$$

Proof. Denote $\theta = \operatorname{argmax}_{\theta} \frac{\ell(\theta, x)}{\sum_{x' \in X} \ell(\theta, x')}$; then for the left-hand side of the inequality, we have

$$
\begin{array}{l}
\sigma_{X^{(i)}}(x) - \sigma_X(x) \geq \frac{\ell(\theta, x)}{\sum_{x' \in X^{(i)}} \ell(\theta, x')} - \frac{\ell(\theta, x)}{\sum_{x' \in X} \ell(\theta, x')} \\
\geq 0.
\end{array}
$$

For the second inequality, we have

$$
\begin{array}{l}
\sigma_{X^{(i)}}(x) - \sigma_X(x) \leq \frac{\ell(\theta^{(i)}, x)}{\sum_{x' \in X^{(i)}} \ell(\theta^{(i)}, x')} - \frac{\ell(\theta^{(i)}, x)}{\sum_{x' \in X} \ell(\theta^{(i)}, x')} \\
= \frac{\ell(\theta^{(i)}, x_i) \cdot \ell(\theta^{(i)}, x)}{\sum_{x' \in X^{(i)}} \ell(\theta^{(i)}, x') \cdot \sum_{x' \in X} \ell(\theta^{(i)}, x')}.
\end{array}
$$

Lemma B.2. We have

$$
\sum_{i=1}^{n} \left| \sum_{x \in X} \sigma_X(x) - \sum_{x' \in X^{(i)}} \sigma_{X^{(i)}}(x') \right| \leq \sum_{x \in X} \sigma_X(x).
$$

Proof.
By Lemma B.1, we have

$$
\sum_{x \in X} \sigma_X(x) - \sum_{x' \in X^{(i)}} \sigma_{X^{(i)}}(x') = \sigma_X(x_i) - \sum_{x \in X^{(i)}} \left(\sigma_{X^{(i)}}(x) - \sigma_X(x)\right) \leq \sigma_X(x_i),
$$

and we have

$$
\begin{array}{l}
\sum_{x \in X} \sigma_X(x) - \sum_{x' \in X^{(i)}} \sigma_{X^{(i)}}(x') = \sigma_X(x_i) - \sum_{x \in X^{(i)}} \left(\sigma_{X^{(i)}}(x) - \sigma_X(x)\right) \\
\geq \sigma_X(x_i) - \sum_{x \in X^{(i)}} \frac{\ell(\theta^{(i)}, x_i) \cdot \ell(\theta^{(i)}, x)}{\sum_{x' \in X^{(i)}} \ell(\theta^{(i)}, x') \cdot \sum_{x' \in X} \ell(\theta^{(i)}, x')} \\
= \sigma_X(x_i) - \frac{\ell(\theta^{(i)}, x_i)}{\sum_{x' \in X} \ell(\theta^{(i)}, x')} \geq 0.
\end{array}
$$

Then, we have

$$
\begin{array}{l}
\sum_{i=1}^{n} \left| \sum_{x \in X} \sigma_X(x) - \sum_{x' \in X^{(i)}} \sigma_{X^{(i)}}(x') \right| = \sum_{i=1}^{n} \left(\sum_{x \in X} \sigma_X(x) - \sum_{x' \in X^{(i)}} \sigma_{X^{(i)}}(x')\right) \\
\leq \sum_{i=1}^{n} \sigma_X(x_i) \leq \sum_{x \in X} \sigma_X(x).
\end{array}
$$

Lemma B.3. We have

$$
\sum_{i=1}^{n} \sum_{x \in X^{(i)}} \left| \frac{\sigma_X(x)}{\sum_{x \in X} \sigma_X(x)} - \frac{\sigma_{X^{(i)}}(x)}{\sum_{x' \in X^{(i)}} \sigma_{X^{(i)}}(x')} \right| \leq 2.
$$

Proof.
First, we have + +$$ +\begin{array}{l} \frac {\sigma_ {X} (x)}{\sum_ {x \in X} \sigma_ {X} (x)} - \frac {\sigma_ {X ^ {(i)}} (x)}{\sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} (x ^ {\prime})} \\ = \frac {\sigma_ {X} (x)}{\sum_ {x \in X} \sigma_ {X} (x)} - \frac {\sigma_ {X ^ {(i)}} (x)}{\sum_ {x \in X} \sigma_ {X} (x)} \left(1 - \frac {\sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} \left(x ^ {\prime}\right) - \sum_ {x \in X} \sigma_ {X} (x)}{\sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} \left(x ^ {\prime}\right)}\right) \\ = \sigma_ {X ^ {(i)}} (x) \frac {\sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} \left(x ^ {\prime}\right) - \sum_ {x \in X} \sigma_ {X} (x)}{\sum_ {x \in X} \sigma_ {X} (x) \sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} \left(x ^ {\prime}\right)} - \frac {1}{\sum_ {x \in X} \sigma_ {X} (x)} \left(\sigma_ {X ^ {(i)}} (x) - \sigma_ {X} (x)\right). \\ \end{array} +$$ + +We can bound this quantity from below and above by Lemma B.1, + +$$ +\begin{array}{l} \sigma_ {X ^ {(i)}} (x) \frac {\sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} (x ^ {\prime}) - \sum_ {x \in X} \sigma_ {X} (x)}{\sum_ {x \in X} \sigma_ {X} (x) \sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} (x ^ {\prime})} - \frac {1}{\sum_ {x \in X} \sigma_ {X} (x)} \frac {\ell (\theta^ {(i)} , x _ {i}) \ell (\theta^ {(i)} , x)}{\sum_ {x ^ {\prime} \in X} \ell (\theta^ {(i)} , x ^ {\prime}) \cdot \sum_ {x ^ {\prime} \in X ^ {(i)}} \ell (\theta^ {(i)} , x ^ {\prime})} \\ \leq \frac {\sigma_ {X} (x)}{\sum_ {x \in X} \sigma_ {X} (x)} - \frac {\sigma_ {X ^ {(i)}} (x)}{\sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} (x ^ {\prime})} \\ \leq \sigma_ {X ^ {(i)}} (x) \frac {\sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} \left(x ^ {\prime}\right) - \sum_ {x \in X} \sigma_ {X} (x)}{\sum_ {x \in X} \sigma_ {X} (x) \sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} \left(x ^ {\prime}\right)}. 
\\ \end{array} +$$ + +Then, we have + +$$ +\begin{array}{l} \left| \frac {\sigma_ {X} (x)}{\sum_ {x \in X} \sigma_ {X} (x)} - \frac {\sigma_ {X ^ {(i)}} (x)}{\sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} \left(x ^ {\prime}\right)} \right| \\ \leq \frac {\sigma_ {X ^ {(i)}} (x)}{\sum_ {x \in X} \sigma_ {X} (x) \sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} (x ^ {\prime})} \left| \sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} (x ^ {\prime}) - \sum_ {x \in X} \sigma_ {X} (x) \right| \\ + \frac {1}{\sum_ {x \in X} \sigma_ {X} (x)} \frac {\ell (\theta^ {(i)} , x _ {i}) \ell (\theta^ {(i)} , x)}{\sum_ {x ^ {\prime} \in X} \ell (\theta^ {(i)} , x ^ {\prime}) \cdot \sum_ {x ^ {\prime} \in X ^ {(i)}} \ell (\theta^ {(i)} , x ^ {\prime})}. \\ \end{array} +$$ + +It then follows, + +$$ +\begin{array}{l} \sum_ {i = 1} ^ {n} \sum_ {x \in X ^ {(i)}} \left| \frac {\sigma_ {X} (x)}{\sum_ {x \in X} \sigma_ {X} (x)} - \frac {\sigma_ {X ^ {(i)}} (x)}{\sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} (x ^ {\prime})} \right| \\ \leq \sum_ {i = 1} ^ {n} \sum_ {x \in X ^ {(i)}} \left(\frac {\sigma_ {X ^ {(i)}} (x)}{\sum_ {x \in X} \sigma_ {X} (x) \sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} \left(x ^ {\prime}\right)} \Bigg | \sum_ {x ^ {\prime} \in X ^ {(i)}} \sigma_ {X ^ {(i)}} \left(x ^ {\prime}\right) - \sum_ {x \in X} \sigma_ {X} (x) \right| \\ \left. 
+ \frac{1}{\sum_{x \in X} \sigma_X(x)} \frac{\ell(\theta^{(i)}, x_i) \ell(\theta^{(i)}, x)}{\sum_{x' \in X} \ell(\theta^{(i)}, x') \cdot \sum_{x' \in X^{(i)}} \ell(\theta^{(i)}, x')}\right) \\ = \sum_{i=1}^{n} \left(\frac{\left| \sum_{x' \in X^{(i)}} \sigma_{X^{(i)}}(x') - \sum_{x \in X} \sigma_X(x) \right|}{\sum_{x \in X} \sigma_X(x)} + \frac{1}{\sum_{x \in X} \sigma_X(x)} \frac{\ell(\theta^{(i)}, x_i)}{\sum_{x' \in X} \ell(\theta^{(i)}, x')}\right) \\ \leq 1 + \frac{1}{\sum_{x \in X} \sigma_X(x)} \\ \leq 2, \\ \end{array}
$$

where the second-to-last inequality is from Lemma B.2, and the last inequality is by $\sum_{x\in X}\sigma_X(x)\geq 1$.

Lemma B.4. For $\epsilon > 0$, let $X$ and $X'$ be sampled from the uniform distribution over $[B, (1 + \epsilon)B]$ and $[B', (1 + \epsilon)B']$, respectively. Then, we have

$$
\operatorname{TV}(X, X') \leq \frac{1 + \epsilon}{\epsilon} \left| 1 - \frac{B'}{B} \right|.
$$

Proof. The proof is implicit in Lemma 2.3 of Kumabe and Yoshida [2022].

Lemma 5.2. The average sensitivity of Algorithm 2 is $O\left(\epsilon^{-1}m / n\right)$.

Proof. The average sensitivity of importance sampling can be bounded by the sum of the average total variation distance between the selected elements and that between the assigned weights, conditioned on the selected elements being the same.
The former is bounded by

$$
\begin{array}{l}
\frac{1}{n} \sum_{i=1}^{n} |C| \cdot \left(\frac{\sigma_X(x_i)}{\sum_{x \in X} \sigma_X(x)} + \sum_{x \in X^{(i)}} \left| \frac{\sigma_X(x)}{\sum_{x \in X} \sigma_X(x)} - \frac{\sigma_{X^{(i)}}(x)}{\sum_{x' \in X^{(i)}} \sigma_{X^{(i)}}(x')} \right|\right) \\
= O\left(\frac{|C|}{n}\right) + O\left(\frac{|C|}{n}\right) \\
= O\left(\frac{|C|}{n}\right),
\end{array}
$$

where the first equality is by Lemma B.3. With Lemma B.4, we can bound the latter by

$$
\frac{1}{n} \sum_{i=1}^{n} |C| \cdot \left(\sum_{x \in X^{(i)}} \min\left\{\frac{\sigma_X(x)}{\sum_{x \in X} \sigma_X(x)}, \frac{\sigma_{X^{(i)}}(x)}{\sum_{x' \in X^{(i)}} \sigma_{X^{(i)}}(x')} \right\} \cdot \frac{1 + \epsilon}{\epsilon} \left| 1 - \frac{\sigma_{X^{(i)}}(x) / \left(\sum_{x' \in X^{(i)}} \sigma_{X^{(i)}}(x')\right)}{\sigma_X(x) / \sum_{x \in X} \sigma_X(x)} \right|\right)
$$

$$
\begin{array}{l}
\leq \frac{1}{n} \sum_{i=1}^{n} |C| \cdot \left(\sum_{x \in X^{(i)}} \frac{1 + \epsilon}{\epsilon} \left| \frac{\sigma_X(x)}{\sum_{x \in X} \sigma_X(x)} - \frac{\sigma_{X^{(i)}}(x)}{\sum_{x' \in X^{(i)}} \sigma_{X^{(i)}}(x')} \right|\right) \\
= O\left(\frac{|C|}{\epsilon n}\right).
\end{array}
$$

Combining the two terms, the average sensitivity of importance sampling is bounded as

$$
O\left(\frac{|C|}{n}\right) + O\left(\frac{|C|}{\epsilon n}\right) = O\left(\frac{|C|}{\epsilon n}\right).
$$

# C Proofs for Section 6

# C.1 Algorithm

We now introduce the detailed version of Algorithm 3. In Algorithm 7, we provide a detailed description of the coreset construction method for clustering, which is based on Huang and Vishnoi [2020].
To make the coreset construction enjoy small average sensitivity, we perturb the assigned weights (Line 6 and Line 11). We show that this preserves the approximation ratio while making the overall algorithm insensitive. With this, we obtain the online consistent clustering Algorithm 6, which clusters data from a coreset at each step.

Algorithm 6: Online consistent $(k,z)$-clustering
Input: PTAS algorithm $\mathcal{D}$ for $(k,z)$-clustering, approximation parameters $\epsilon, \delta \in (0,1)$.
for $t = 1,\dots,n$ do
 Construct an $\epsilon$-coreset $C_{t-1} = (S_{t-1},\omega_{t-1})$ by running Algorithm 7 on $X_{t-1}$.
 Obtain cluster set $Z_t$ by running a PTAS $\mathcal{D}$ with an approximation ratio of $(1+\epsilon)$ on $C_{t-1}$.
 Receive $x_t$ and $\ell(Z_t,x_t)$.

Algorithm 7: Coreset construction for clustering Huang and Vishnoi [2020]
Input: A set of points $X$, approximation parameters $\epsilon, \delta \in (0,1)$, integers $k, z$.
1 Set $\epsilon = \epsilon / c$ for some large constant $c > 0$.
2 Compute a $k$-center set $C^* \subseteq \mathbb{R}^d$ as an approximation of the $(k,z)$-clustering problem over $X$ with the $D^z$-sampling algorithm, which has an approximation ratio of $O(2^z \log k)$.
3 For each $x \in X$, compute the closest point to $x$ in $C^*$, $c^*(x)$, with ties broken arbitrarily. For each $c \in C^*$, denote by $X_c$ the set of points $x \in X$ with $c^*(x) = c$.
4 For each $x \in X$, let $\sigma_{1,X}(x) = 2^{2z + 2}\epsilon^2\left(\frac{\|x - c^*(x)\|_2^2}{\sum_{x \in X} \ell(C^*, x)} + \frac{1}{|X_{c^*(x)}|}\right)$.
5 Pick a non-uniform random sample $D^1$ of $N_1 = O\left((168z)^{10z}\epsilon^{-5z - 15}k^5 \log \frac{k}{\delta}\right)$ points from $X$, where each $x \in X$ is selected with probability $\frac{\sigma_{1,X}(x)}{\sum_{y \in X} \sigma_{1,X}(y)}$.
6 For each $x \in D^1$, sample $\tilde{u}_X(x)$ from $\left[\frac{\sum_{y \in X} \sigma_{1,X}(y)}{|D^1| \cdot \sigma_{1,X}(x)}, (1 + \epsilon)\frac{\sum_{y \in X} \sigma_{1,X}(y)}{|D^1| \cdot \sigma_{1,X}(x)}\right]$.
7 Set $u_X(x) = \tilde{u}_X(x)$.
8 For each $c \in C^*$, compute $D_c$ to be the set of points in $D^1$ whose closest point in $C^*$ is $c$, with ties broken arbitrarily.
9 For each $x \in D^1$, let $\sigma_{2,X}(x) = \frac{u_X(x) \cdot \ell(C^*, x)}{\sum_{y \in D^1} u_X(y) \cdot \ell(C^*, y)}$.
10 Pick a non-uniform random sample $D^2$ of $N_2 = O\left(\epsilon^{-2z - 2}k \log k \log \frac{k}{\epsilon\delta}\right)$ points from $D^1$, where each $x \in D^1$ is selected with probability $\frac{\sigma_{2,X}(x)}{\sum_{y \in D^1} \sigma_{2,X}(y)}$.
11 For each $x \in D^2$, sample $\tilde{w}_X(x)$ from $\left[\frac{\sum_{y \in D^1} \sigma_{2,X}(y)}{|D^2| \cdot \sigma_{2,X}(x)}, (1 + \epsilon)\frac{\sum_{y \in D^1} \sigma_{2,X}(y)}{|D^2| \cdot \sigma_{2,X}(x)}\right]$.
12 Set $w_X(x) = \tilde{w}_X(x)$.
13 For each $c \in C^*$, let $w_X(c) = (1 + 10\epsilon) \sum_{x \in D_c} u_X(x) - \sum_{x \in D^2 \cap D_c} w_X(x)$.
14 $S = D^2 \cup C^*$
15 $w(x) = w_X(x)$
16 Output $(S, w)$

# C.2 Analysis

Theorem C.1. Algorithm 7 outputs an $\epsilon$-coreset with probability at least $1 - \delta$ and has an average sensitivity of

$$
O\left(\frac{k + (168z)^{10z} \epsilon^{-5z-15} k^{6} \log \frac{k}{\delta} + \epsilon^{-2z-2} k \log k \log \frac{k}{\epsilon \delta} \left(1 + \epsilon^{-2z-2} k^{2} \log k \log \frac{k}{\epsilon \delta}\right)}{n}\right).
$$

Proof. We first show that our coreset construction method gives an $\epsilon$-coreset.

We remark that the way we assign the weights is a perturbed version of the method presented in Huang and Vishnoi [2020].
Then, to show that the perturbed weight assignment still preserves the coreset approximation ratio, we show that the perturbed version still satisfies Corollary 15.3 and Theorem 14.5 of Feldman and Langberg [2011]. Then, using the same argument as Theorem 15.4 of Feldman and Langberg [2011], our coreset construction method gives an $\epsilon$-coreset.

In the first importance sampling stage, we only perturb $\tilde{u}_X(x)$ by a ratio of $1 + \epsilon$, for each $x \in D^1$. The same applies in the second stage to $\tilde{w}_X(x)$, for all $x \in D^2$. For $c \in C^*$, we have

$$
(1 + 10\epsilon) \sum_{x \in D_c} u_X(x) - \sum_{x \in D^2 \cap D_c} w_X(x) \leq (1 + \epsilon) \left((1 + 10\epsilon) \sum_{x \in D_c} \tilde{u}_X(x) - \sum_{x \in D^2 \cap D_c} \tilde{w}_X(x)\right).
$$

In all cases, the result from [Feldman and Langberg, 2011] still holds, as the weights are scaled by at most $(1 + \epsilon)$. By the same argument that all perturbed weights are scaled by at most $(1 + \epsilon)$, Theorem 15.4 of Feldman and Langberg [2011] holds with a ratio of $\epsilon (1 + \epsilon)$. This results in an approximation of $1 + \epsilon (1 + \epsilon)$ by applying the argument of Theorem 15.4 of Feldman and Langberg [2011]. Rescaling $\epsilon$ to $\epsilon /c$ for some constant $c > 0$ gives an approximation ratio of $1 + \epsilon$ and completes the argument that Algorithm 7 produces an $\epsilon$-coreset with probability at least $1 - \delta$.

To show that our method enjoys low average sensitivity, we upper bound the average sensitivities of the two importance sampling stages of Algorithm 7 separately.

The average sensitivity of importance sampling in the first stage can be bounded by the average total variation distance between the selected elements and that between the assigned weights, conditioned on the selected elements being the same.
By Lemma 5.2, we can upper bound the average sensitivity of the first importance sampling stage by $O(N_1 / n)$, where $N_{1} = O\left((168z)^{10z}\epsilon^{-5z - 15}k^{5}\log (\delta^{-1}k)\right)$.

In the second stage, as points are selected with probability $\frac{\sigma_{2,X}(x)}{\sum_{y\in D^1}\sigma_{2,X}(y)}$, we bound the total variation distance between the selected points in a similar way as for the first stage. This gives a bound of $O(N_2 / n)$, where $N_{2} = O\left(\epsilon^{-2z - 2}k\log k\log \frac{k}{\epsilon\delta}\right)$. To bound the average total variation distance between the assigned weights, we again apply the same argument as above and obtain a bound of $O(N_2 / n)$.

Combining these with the fact that the average sensitivity of Line 2 of Algorithm 7 is $O(k / n)$ (by Lemma 2.2 of Yoshida and Ito [2022]), we obtain an average sensitivity of

$$
O\left(\frac{k + N_1 + N_2}{n}\right) = O\left(\frac{k + (168z)^{10z} \epsilon^{-5z-15} k^{5} \log \frac{k}{\delta} + \epsilon^{-2z-2} k \log k \log \frac{k}{\epsilon \delta}}{n}\right).
$$

Theorem 6.1. For any $\epsilon \in (0,1)$, Algorithm 3 gives a regret bound of

$$
\mathrm{Regret}_{\epsilon}(n) \leq O\left(\left((168z)^{10z} \epsilon^{-5z-15} k^{5} \log \frac{kn}{\epsilon} + \epsilon^{-2z-2} k \log k \log \frac{kn}{\epsilon}\right) \log n\right).
$$

Moreover, there exists an algorithm that enjoys the same regret bound and an inconsistency bound of Inconsistency $(n) = O\left(\left((168z)^{10z}\epsilon^{-5z - 15}k^5\log (\epsilon^{-1}kn) + \epsilon^{-2z - 2}k\log k\log (\epsilon^{-1}kn)\right)\log n\right)$ for $(k,z)$-clustering.

Proof.
First, we note that the approximation ratio of the algorithm with respect to the aggregated loss is at most

$$
(1 - \delta) \left(1 + \frac{\epsilon}{3}\right) \left(1 + \frac{\epsilon}{3}\right) + \delta n \leq 1 + \epsilon
$$

from the choice of $\delta$, i.e., $\delta = O(\epsilon / n)$.

Also, we note that the overall average sensitivity is

$$
\begin{array}{l}
\sum_{t=1}^{n} \beta(t) = \sum_{t=1}^{n} O\left(\frac{(168z)^{10z} \epsilon^{-5z-15} k^{5} \log \frac{k}{\delta} + \epsilon^{-2z-2} k \log k \log \frac{k}{\epsilon \delta}}{t}\right) \\
= O\left(\left((168z)^{10z} \epsilon^{-5z-15} k^{5} \log \frac{k}{\delta} + \epsilon^{-2z-2} k \log k \log \frac{k}{\epsilon \delta}\right) \log n\right).
\end{array}
$$

Then, substituting this into Theorem 4.1, for $z \geq 1$, we have a regret bound of

$$
\begin{array}{l}
\operatorname{Regret}_{\epsilon}(n) \leq O\left(\left((168z)^{10z} \epsilon^{-5z-15} k^{5} \log \frac{k}{\delta} + \epsilon^{-2z-2} k \log k \log \frac{k}{\epsilon \delta}\right) \log n\right) \\
= O\left(\left((168z)^{10z} \epsilon^{-5z-15} k^{5} \log \frac{kn}{\epsilon} + \epsilon^{-2z-2} k \log k \log \frac{kn}{\epsilon}\right) \log n\right).
\end{array}
$$

For the second claim about inconsistency, we first convert the average sensitivity bound to an inconsistency bound of the same order. This can be done by Lemma 4.2 of [Yoshida and Ito, 2022] and by arguing in a similar way as Lemma 4.5 of [Yoshida and Ito, 2022], showing that there exists a computable probability transportation for the output of Algorithm 3. This thus yields an inconsistency of $O\left(\left((168z)^{10z}\epsilon^{-5z - 15}k^5\log (\epsilon^{-1}kn) + \epsilon^{-2z - 2}k\log k\log (\epsilon^{-1}kn)\right)\log n\right)$.

# D Proofs for Section 7

Theorem 7.2.
For any $\epsilon \in (0,1)$, Algorithm 4 has regret $\mathrm{Regret}_{\epsilon}(n) = O\left(\epsilon^{-2}k\log n\log (\epsilon^{-1}kn)\right)$. Moreover, there exists an algorithm for online low-rank matrix approximation that enjoys the same regret bound and an inconsistency bound of Inconsistency $(n) = O\left(\epsilon^{-2}k\log n\log (\epsilon^{-1}kn)\right)$.

Proof. Let $\delta > 0$ be determined later. By Theorem 6 from Cohen et al. [2017], with probability $1 - \delta$, for any rank-$k$ orthogonal projection $\mathbf{X}$, by sampling $m = O\left(\epsilon^{-2} k \log (\delta^{-1} k)\right)$ columns in Line 4 of Algorithm 4, we have

$$
\left(1 - \frac{\epsilon}{2}\right) \| \mathbf{A}_t - \mathbf{X}\mathbf{A}_t \|_F^2 \leq \| \mathbf{C}_t - \mathbf{X}\mathbf{C}_t \|_F^2 \leq \left(1 + \frac{\epsilon}{2}\right) \| \mathbf{A}_t - \mathbf{X}\mathbf{A}_t \|_F^2.
$$

With this, we note that the algorithm has an approximation ratio of $1 + \epsilon$, as $1 + \epsilon /2 + \delta n\leq 1 + \epsilon$ by choosing the hidden constant in $\delta$ small enough.

Note that by applying Lemma 5.2, this routine has an average sensitivity of $O(m / t) = O\left(\epsilon^{-2}k\log (\delta^{-1}k) / t\right)$ at any step $t$. Then, setting $\mathbf{Z}_t$ to the top $k$ left singular vectors of $\mathbf{C}_t$, we have

$$
\left\| \mathbf{A}_t - \mathbf{Z}_t \mathbf{Z}_t^{\top} \mathbf{A}_t \right\|_F \leq (1 + \epsilon) \left\| \mathbf{A}_t - \mathbf{U}_k \mathbf{U}_k^{\top} \mathbf{A}_t \right\|_F.
$$

To obtain the regret, we calculate the overall average sensitivity as $\sum_{t=1}^{n} \beta(t) = \sum_{t=1}^{n} O\left(\epsilon^{-2} k \log (\delta^{-1} k) / t\right) = O\left(\epsilon^{-2} k \log n \log (\delta^{-1} k)\right)$.
Applying Theorem 4.1, and from the choice $\delta = O(\epsilon / n)$, we have $\operatorname{Regret}_{\epsilon}(n) = O\left(\epsilon^{-2} k \log n \log (\epsilon^{-1} k n)\right)$.

For the second claim about inconsistency, we prove it by converting the average sensitivity bound to an inconsistency bound of the same order. This can be done by Lemma 4.2 of [Yoshida and Ito, 2022] and by arguing in a similar way as Lemma 4.5 of [Yoshida and Ito, 2022], showing that there exists a computable probability transportation for the output of Algorithm 4.

# E Proofs for Section 8

Theorem 8.1. For any $\epsilon \in (0,1)$, Algorithm 5 has regret $\mathrm{Regret}_{\epsilon}(n) = O\left(\epsilon^{-2}d\log n\log (\epsilon^{-1}dn)\right)$. Moreover, there exists an algorithm for online regression that enjoys the same regret bound and an inconsistency bound of Inconsistency $(n) = O\left(\epsilon^{-2}d\log n\log (\epsilon^{-1}dn)\right)$.

Proof. By Theorem 8.4, the algorithm has an approximation ratio of $1 + \epsilon$, as $1 + \epsilon /2 + \delta n\leq 1 + \epsilon$ by choosing the hidden constant in $\delta$ small enough.

Similar to the low-rank case, the average sensitivity of leverage score sampling at step $t$ is $O\left(\frac{d\log(d / \delta)}{\epsilon^2t}\right)$. Summing the average sensitivity over all steps gives

$$
\sum_{t=1}^{n} O\left(\frac{d \log (d / \delta)}{\epsilon^{2} t}\right) = O\left(\frac{d \log n \log (d / \delta)}{\epsilon^{2}}\right).
$$

Substituting this into Theorem 4.1, with $\delta = O(\epsilon /n)$, we have an $\epsilon$-regret of $O\left(\epsilon^{-2}d\log n\log (dn / \epsilon)\right)$.

Similar to Theorem 6.1 and Theorem 7.2, we prove the second claim by converting the average sensitivity bound to an inconsistency bound of the same order.
This can be done by Lemma 4.2 of [Yoshida and Ito, 2022] and by arguing in a similar way as Lemma 4.5 of [Yoshida and Ito, 2022], showing that there exists a computable probability transportation for the output of Algorithm 5.
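As a concrete illustration of the quantities used throughout this proof, the leverage scores of Definition 8.3 and the sampling probabilities of Algorithm 5 can be computed directly from the thin SVD. A minimal sketch, assuming NumPy and synthetic data (all names are illustrative):

```python
import numpy as np

# Synthetic stand-in for the aggregated matrix A_{t-1}; purely illustrative.
rng = np.random.default_rng(1)
n, d = 200, 4
A = rng.normal(size=(n, d))

# Leverage scores via the thin SVD (Definition 8.3): tau_i = ||U_{i,:}||_2^2.
U, _, _ = np.linalg.svd(A, full_matrices=False)
tau = np.sum(U ** 2, axis=1)

# Sampling probabilities p_i = tau_i / sum_j tau_j used in Algorithm 5.
p = tau / tau.sum()
rows = rng.choice(n, size=50, p=p)  # row indices used to build the sketch S
```

For a full-rank $\mathbf{A}$, the scores sum to $d$, so $p$ is a valid distribution that concentrates on the most influential rows.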
a/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/ab149958-3eaa-4935-8743-677771663d12_model.json b/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/ab149958-3eaa-4935-8743-677771663d12_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2a8804a40ae44488668c3bc739ca1d4061575dc5 --- /dev/null +++ b/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/ab149958-3eaa-4935-8743-677771663d12_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b39ec18ff266edabd6e51829bb34902cc1fbe7a1d9b91849cf09c126e77a976c +size 143378 diff --git a/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/ab149958-3eaa-4935-8743-677771663d12_origin.pdf b/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/ab149958-3eaa-4935-8743-677771663d12_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..257579e3a5eca6fbd01fa137daff8e81c44e5fd9 --- /dev/null +++ b/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/ab149958-3eaa-4935-8743-677771663d12_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a4b7246f1b7ac6807fcae8694547a90e092c5c06680b221c1c8c1a191c039a8 +size 3040367 diff --git a/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/full.md b/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..6fccdf177514c8aa22400bd3e83c48af80ce7ef8 --- /dev/null +++ b/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/full.md @@ -0,0 +1,514 @@ +# A Bayesian Approach To Analysing Training Data Attribution In Deep Learning + +Elisa Nguyen + +Tübingen AI Center University of Tübingen + +Minjoon Seo + +KAIST AI + +Seong Joon Oh + +Tübingen AI Center University of Tübingen + +# Abstract + +Training data attribution (TDA) techniques find influential training data for the model's prediction on 
the test data of interest. They approximate the impact of down- or up-weighting a particular training sample. While conceptually useful, they are hardly applicable to deep models in practice, particularly because of their sensitivity to different model initialisation. In this paper, we introduce a Bayesian perspective on the TDA task, where the learned model is treated as a Bayesian posterior and the TDA estimates as random variables. From this novel viewpoint, we observe that the influence of an individual training sample is often overshadowed by the noise stemming from model initialisation and SGD batch composition. Based on this observation, we argue that TDA can only be reliably used for explaining deep model predictions that are consistently influenced by certain training data, independent of other noise factors. Our experiments demonstrate the rarity of such noise-independent training-test data pairs but confirm their existence. We recommend that future researchers and practitioners trust TDA estimates only in such cases. Further, we find a disagreement between ground truth and estimated TDA distributions and encourage future work to study this gap. Code is provided at https://github.com/ElisaNguyen/bayesian-tda. + +# 1 Introduction + +Understanding how machine learning models arrive at decisions is desirable for social, legal and ethical reasons, particularly for opaque deep learning models [1]. One approach to explanations is the data-centric approach of training data attribution (TDA). As the name suggests, TDA finds attributing training samples for a model decision, uncovering which part of the training data is relevant. 
The attribution $\tau$ of a training sample $z_{j}$ on another sample $z$ is usually defined as the change of model loss $\mathcal{L}$ on $z$ when the model is retrained without $z_{j}$ [2]:

$$
\tau(z_{j}, z) := \mathcal{L}(z; \theta_{\backslash j}) - \mathcal{L}(z; \theta) \tag{1}
$$

where $\theta$ is a model trained on the entire dataset $\mathcal{D}$ and $\theta_{\backslash j}$ is a model trained on the same set without $z_{j}$ . Since the direct computation of Equation 1 is expensive, various TDA techniques for approximating the quantity have been proposed, such as influence functions [3] or TracIn [4]. Their approximations are often based on some form of inner product between the parameter gradients $\nabla_{\theta}\mathcal{L}(z;\theta)$ and $\nabla_{\theta}\mathcal{L}(z_j;\theta)$ .

Knowing how training samples attribute to a model decision provides an actionable understanding of the training data distribution, especially in cases of model error. TDA methods can identify the training samples that are most relevant to an error and therefore enable users to understand why the error occurred (e.g. due to domain mismatch of test and training data or wrongly labelled training data) [3]. Additionally, TDA gives them the tool to address the errors by e.g. changing the model directly through the training data. Even in non-erroneous cases, understanding the attributing training data may enable users affected by model decisions to contest the decisions if the attributing training data is noisy or of low quality [5].

At the same time, TDA methods, especially influence functions [3], have been criticised for their fragility when applied to deep models [6, 7, 8]. The main reasons are model complexity and the stochasticity of deep model training. While the former poses a challenge specifically for influence functions as they rely on strong convexity assumptions, the latter is a more general challenge [6, 9].
The randomness inherent to the training process does not only lead to variation in the learned model parameters but also in TDA scores, which makes them untrustworthy. Hence, K & Søgaard [9] recommend using expected TDA scores for increased stability. + +We argue that solely considering the expectation is not sufficient to ensure the reliability of TDA but requires inspecting the variance, too. We introduce a Bayesian perspective on the TDA task, noticing that there is no deterministic mapping from a dataset $\mathcal{D}$ to the corresponding model $\theta$ for deep neural networks. The learned model depends on the initialisation and batch composition in the stochastic gradient descent (SGD) optimiser. We capture the resulting randomness via Bayesian model posterior $p(\theta|\mathcal{D})$ over the parameter space [10, 11, 12]. In turn, the TDA estimate (Equation 1) is a random variable that depends on two posteriors, $p(\theta|\mathcal{D})$ and $p(\theta_{\backslash j}|\mathcal{D}_{\backslash j})$ . + +This viewpoint leads to a few insights into the practical usage and evaluation of TDA techniques. We confirm quantitatively that the ground-truth influence $\tau(z_j, z)$ is often dominated by the noise: $\sqrt{\operatorname{Var}(\tau)} > \mathbb{E}|\tau|$ . We argue that it is practically difficult to apply any TDA technique on pairs $(z_j, z)$ whose ground-truth attributions $\tau(z_j, z)$ are noisy in the first place. Likewise, any evaluation of TDA methods on such high-variance pairs would not be reliable. + +Nonetheless, we are optimistic that TDA techniques are useful in practice, particularly for train-test pairs with high signal-to-noise ratios: $\sqrt{\mathrm{Var}(\tau)}\ll \mathbb{E}|\tau |$ . We observe that such pairs are rare but consistently present in multiple experiments. We recommend that researchers and practitioners confine their usage to scenarios where the signal-to-noise ratios are expected to be large enough. 
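The signal-to-noise criterion $\sqrt{\mathrm{Var}(\tau)} \ll \mathbb{E}|\tau|$ is directly checkable once samples of $\tau$ are in hand. A minimal sketch; the function name, threshold, and sample values are illustrative, not from the paper:

```python
import numpy as np

def snr_reliable(tau_samples, ratio=1.0):
    """Flag a train-test pair as usable when sqrt(Var(tau)) < ratio * E|tau|."""
    tau = np.asarray(tau_samples, dtype=float)
    noise = tau.std(ddof=1)      # sample estimate of sqrt(Var(tau))
    signal = np.abs(tau).mean()  # sample estimate of E|tau|
    return bool(noise < ratio * signal)

# A pair whose attribution is consistent across posterior samples ...
print(snr_reliable([0.50, 0.52, 0.48, 0.51]))  # True
# ... versus one whose sign flips with the initialisation.
print(snr_reliable([0.5, -0.6, 0.1, -0.2]))    # False
```

The `ratio` parameter encodes how strict "large enough" should be; a value well below 1 corresponds to the $\ll$ in the text.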
+ +Our contributions are as follows: (1) Bayesian formulation of the training data attribution (TDA) task. (2) Observation that the ground-truth TDA values are often unreliable and highly variable. (3) Recommendation for the community to use the TDA tools only when the expected noise level is low. (4) Experimental analysis of the contributing factors to the variance of ground-truth TDA values. (5) Observation that the TDA estimation methods capture local changes in the model with regard to the counterfactual question of "retraining without training sample $z_{j}$ ", while LOO retraining itself results in a more global change through the training procedure. + +# 2 Background + +We cover the background materials for the paper, including the concept, method, and evaluation of training data attribution (TDA) methods and Bayesian deep learning. + +# 2.1 Training data attribution (TDA) + +We introduce the TDA task, a few representative TDA methods, and existing evaluation strategies. + +TDA task. Given a deep model $f_{\theta}$ parametrised by $\theta$ , a training set $\mathcal{D} \coloneqq \{z_1, \dots, z_N\}$ , and a test sample $z$ , one is interested in the impact of a training sample $z_j$ on the model's behaviour on the test sample $z$ . In the TDA context, one is often interested in the counterfactual change in the loss value for $z$ after leave-one-out (LOO) training, when $z_j$ is excluded from the training set (Equation 1). TDA has been considered in different use cases, such as understanding the bias in word embeddings [13], fact tracing in language model outputs [14] and measuring the robustness of model predictions [5]. + +TDA methods. The conceptually most straightforward way to compute the difference due to LOO training (Equation 1) is to compute it directly. However, this is computationally expensive, as it involves the learning algorithm for obtaining $\theta_{\backslash j}$ for every $j$ . 
This gives rise to various TDA techniques that find approximate estimates $\tau'(z_j, z)$ of LOO. A prominent example of such approximation is the influence function (IF) method [3] based on [15]. Under strong smoothness assumptions, they + +approximate Equation 1 by: + +$$ +\tau^ {\prime} \left(z _ {j}, z\right) := - \nabla_ {\theta} \mathcal {L} (z; \theta) ^ {\top} H _ {\theta} ^ {- 1} \nabla_ {\theta} \mathcal {L} \left(z _ {j}; \theta\right) \tag {2} +$$ + +where $\nabla_{\theta}\mathcal{L}(z;\theta)$ and $\nabla_{\theta}\mathcal{L}(z_j;\theta)$ refer to the parameter gradients of $f_{\theta}$ for $z$ and $z_{j}$ respectively. Recognising the difficulty of scaling up the inverse Hessian computation $H_{\theta}^{-1}$ and the high dimensionality of operations in Equation 2, subsequent papers have proposed further approximations to speed up the computation [16, 17]. Charpiat et al. [18] have analysed the influence of $z_{j}$ on $z$ by dropping the need to compute the Hessian and formulating influence as the loss change when an additional training step (ATS) on $z_{j}$ is taken: + +$$ +\tau \left(z _ {j}, z\right) := \mathcal {L} \left(z; \theta_ {+ j}\right) - \mathcal {L} \left(z; \theta\right) \tag {3} +$$ + +where $\theta_{+j}$ is a learned model parameter with $\mathcal{D}$ and an additional step on $z_{j}$ . 
They propose two approximations:

$$
\text{Grad-Dot (GD)}: \quad \tau^{\prime}(z_{j}, z) := \nabla_{\theta}\mathcal{L}(z_{j}; \theta)^{\top}\nabla_{\theta}\mathcal{L}(z; \theta) \tag{4}
$$

$$
\text{Grad-Cos (GC)}: \quad \tau^{\prime}(z_{j}, z) := \frac{\nabla_{\theta}\mathcal{L}(z_{j}; \theta)^{\top}}{\|\nabla_{\theta}\mathcal{L}(z_{j}; \theta)\|}\frac{\nabla_{\theta}\mathcal{L}(z; \theta)}{\|\nabla_{\theta}\mathcal{L}(z; \theta)\|} \tag{5}
$$

These approximations are closely linked to TracIn [4], which computes the Grad-Dot not just at the end of training but averages the Grad-Dot similarities throughout the model training iterations. We note later in our analysis that within our Bayesian treatment of TDA, the TracIn method coincides conceptually with the Grad-Dot method. In our analysis, we study the sensitivity of LOO and the above TDA methods against noise.

TDA evaluation. The primary aim of TDA evaluation is to measure how well TDA methods approximate the ground-truth LOO values. This is often done by measuring the correlation between the estimates from each TDA method and the ground-truth LOO values (Equation 1) [3, 6, 7, 9]. Prior work uses either a linear (Pearson) correlation or a rank (Spearman) correlation over a small number of train-test sample pairs $(z_{j}, z)$ due to the computational burden of computing the actual LOO values, especially for larger models. Usually, a few samples $z$ are chosen for a comparison against LOO, e.g. Koh & Liang [3] report results for one $z$ and Guo et al. [16] for 10 samples $z$ . In some cases, the ground-truth LOO is obtained by computing the change in loss after training further from the learned model parameters, e.g. [3]. Some works have adopted indirect evaluation metrics such as the retrieval performance of mislabelled or poisoned training data based on the TDA estimates [3, 19, 16, 9, 17].
In this work, we adopt the Pearson and Spearman correlation metrics and discuss ways to extend them when the target (LOO from same initialisation) and estimates (TDA) are both random variables. + +# 2.2 Bayesian deep learning. + +Bayesian machine learning treats the learned model as a posterior distribution over the parameter space, rather than a single point: + +$$ +p (\theta | \mathcal {D}) = p (\mathcal {D} | \theta) p (\theta) / p (\mathcal {D}). \tag {6} +$$ + +Bayesian ML nicely captures the intuition that the mapping from a training set $\mathcal{D}$ to the learned model $p(\theta|\mathcal{D})$ is not a deterministic mapping, especially for non-convex models like deep neural networks (DNNs). Depending on the initialisation, among other factors, DNN training almost always learns vastly different parameters. + +The estimation of the true posterior is indeed difficult for complex models like DNNs. The field of Bayesian deep learning is dedicated to the interpretation of certain random elements in DNN training as sources of randomness for the approximated Bayesian posteriors. For example, if Dropout [20] is used for training a model, it may be used at test time to let users sample $\theta$ from the posterior distribution $p(\theta | \mathcal{D})$ [21]. More generally used components like stochastic gradient descent (SGD) have also been interpreted as sources of randomness. The random walk induced by SGD iterations in the parameter space can be viewed as a Markov Chain Monte-Carlo sampler from the posterior distribution, after a slight modification of the optimisation algorithm (Stochastic Gradient Langevin Dynamics [10]). Similarly, the last few iterations of the vanilla SGD iterations may also be treated as samples from the posterior, resulting in more widely applicable Bayesian methods like Stochastic Weight Averaging (SWA) [12, 22]. 
Finally, the random initialisation of DNNs has also been exploited for modelling posterior randomness; training multiple versions of the same model with different initial parameters may be interpreted as samples from the posterior [11]. We show in the next section how the Bayesian viewpoint will help us model the sources of stochasticity for TDA estimates. + +![](images/2e1452067fc189212215abe7b3922a58397e6cdbe76abb1ac2e055ccb35f9d0a.jpg) +Figure 1: A Bayesian interpretation of training data attribution (TDA). + +![](images/fa01338fdabbc5400c19d8309192d94db671decfa6ecd04f54d357fddba94195.jpg) +Difference in loss (LOO) is a 1-D random variable. + +![](images/fa35aec0ee2504c5f6cd2add9b39c87bb6bf762b8623f5a3650a99fa5fa9acbe.jpg) + +# 3 A Bayesian perspective on training data attribution + +Training data attribution (TDA) $\tau(z_j, z)$ is defined as the attribution of one training sample $z_j$ to another sample $z$ in terms of how a target metric like the loss of a sample $\mathcal{L}(z, \theta)$ changes when the model is trained without $z_j$ (Equation 1). We note here that according to the definition, we are interested in the impact of the change in the dataset from $\mathcal{D}$ to $\mathcal{D}_{\backslash j}$ , rather than the change in the model parameter. From a Bayesian perspective, a change in the training dataset leads to a shift in the posterior distribution, $p(\theta | \mathcal{D}) \to p(\theta_{\backslash j} | \mathcal{D}_{\backslash j})$ , leading to the definition of TDA as a random variable: + +$$ +\tau \left(z _ {j}, z | \mathcal {D}\right) := \mathcal {L} \left(z; \mathcal {D} _ {\backslash j}\right) - \mathcal {L} (z; \mathcal {D}) = \mathcal {L} \left(z; \theta_ {\backslash j} \mid \mathcal {D} _ {\backslash j}\right) - \mathcal {L} (z; \theta | \mathcal {D}) \tag {7} +$$ + +where $\theta \sim p(\theta|\mathcal{D})$ and $\theta_{\backslash j} \sim p(\theta_{\backslash j}|\mathcal{D}_{\backslash j})$ . 
This interpretation is more natural, given the non-uniqueness of the mapping from a training dataset $\mathcal{D}$ to the optimal model parameter $\theta$ for general, non-convex models like DNNs. Alternatively, one could treat the model built from $\mathcal{D}$ as a fixed variable rather than a posterior as TDA is applied to a specific model in practice. The change of dataset from $\mathcal{D}$ to $\mathcal{D}_{\backslash j}$ however still introduces ambiguity in $\theta_{\backslash j}$ , which is captured in the Bayesian posterior $p(\theta_{\backslash j}|\mathcal{D}_{\backslash j})$ . In this study, we use the probabilistic formulation of TDA in Equation 7. + +Sampling TDA values. One could plug in various Bayesian DL techniques (§2.2) to compute samples of $p(\theta | \mathcal{D})$ , which can be used to get the samples of $\tau(z_j, z)$ . In our work, we use the Stochastic Weight Averaging (SWA) [12, 22] and Deep Ensemble (DE) [11] which are applicable to a wide class of deep models. More specifically, we obtain $T$ samples $\theta^{(1)}, \dots, \theta^{(T)} \sim p(\theta | \mathcal{D})$ either by taking the last $T$ model checkpoints of the SGD iterations (SWA) or by taking the last model checkpoints from $T$ different model initialisations (DE). The same is done for the counterfactual posterior $\theta_{\backslash j}^{(1)}, \dots, \theta_{\backslash j}^{(T)} \sim p(\theta_{\backslash j} | \mathcal{D}_{\backslash j})$ . This results in a mixture-of-Gaussian posterior, where DE samples correspond to centroids of the distribution. Our sampling approach is thus a version of stratified sampling, where the number of samples $T$ from a centroid is fixed and sampled IID. + +Statistical analysis on TDA. 
The simplest statistics for the TDA $\tau(z_j, z)$ are the mean and variance:

$$
\mathbb{E}[\tau(z_{j}, z)] = \frac{1}{T}\sum_{t}\mathcal{L}(z; \theta_{\backslash j}^{(t)}) - \mathcal{L}(z; \theta^{(t)}) \tag{8}
$$

$$
\operatorname{Var}[\tau(z_{j}, z)] = \frac{1}{T^{2}}\sum_{t, t^{\prime}}\left(\mathcal{L}(z; \theta_{\backslash j}^{(t)}) - \mathcal{L}(z; \theta^{(t^{\prime})}) - \mathbb{E}[\tau(z_{j}, z)]\right)^{2} \tag{9}
$$

Our main interest lies in whether the influence of the training data $z_{j}$ on the test data $z$ is statistically significant and not dominated by the inherent noise of deep model training. For this purpose, we design a Student t-test [23] for quantifying the statistical significance. Our null and alternative hypotheses are:

$$
H_{0}: \mu = 0 \quad H_{1}: \mu \neq 0. \tag{10}
$$

We consider the test statistic based on sample mean and variance:

$$
t = \frac{\mathbb{E}[\tau(z_{j}, z)] - \mu}{\sqrt{\operatorname{Var}_{s}[\tau(z_{j}, z)] / T^{2}}}. \tag{11}
$$

$\operatorname{Var}_{s}$ refers to the sample variance, where the denominator in Equation 9 is $T^2 - 1$ instead. We report the significance of the absolute TDA $|\tau(z_j, z)|$ for every train-test pair $(z_j, z)$ by computing the p-value corresponding to the t-test statistic. The greater the p-value, the more the noise dominates the TDA estimate.

![](images/6443b3212c63ddd2d8ae381948fc1cee848bc4ebb9496b8905a2937b2025377e.jpg)
Figure 2: Sources of randomness for Bayesian posteriors. In each case, the training starts from initialisation $\theta_0$ . Depending on whether $z_{j}$ is included in the training data, one has either samples from the original posterior $p(\theta | \mathcal{D})$ or from the counterfactual posterior $p(\theta_{\setminus j} | \mathcal{D}_{\setminus j})$ .
For deep ensemble [11], the randomness stems either from random initialisation (DE-Init) or from SGD batch composition (DE-Batch). For stochastic weight averaging (SWA) [12, 22], last few checkpoints of the training are treated as posterior samples. + +![](images/cc29ccb4289432496ec9c243f668be149aff5d1931e1076e33f3916169249908.jpg) + +![](images/74ca8453450358e4d961ed79175f16b79925786576127f48a5e62af6e58cd874.jpg) + +TDA methods likewise estimate random quantities. Approximate TDA methods like influence functions (IF), Grad-Dot, and Grad-Cos (§2.1) also predict random quantities $\tau'(z_j, z)$ . For example, IF predicts $\tau(z_j, z) \approx \tau'(z_j, z) \coloneqq -\nabla_\theta \mathcal{L}(z_j; \theta)^\top H_\theta^{-1} \nabla_\theta \mathcal{L}(z; \theta)$ , where one may sample $\theta$ from the posterior $p(\theta | \mathcal{D})$ . We note that IF suffers theoretical issues in its application to deep models, as convexity assumptions are not met. In practice, estimation algorithms make use of a damping term to ensure the positive definiteness of the inverse Hessian. Through a Bayesian lens, the damping term could be seen as an isotropic Gaussian prior centred at the origin. Similar statistical analyses on the TDA estimations can be performed as above, including the mean and variance computations and statistical testing for the significance of influence. + +Evaluating TDA as a random variable. Previously, the LOO-based TDA values $\tau(z_j, z)$ and the estimates from various approximate TDA methods $\tau'(z_j, z)$ are compared via correlation measures like Pearson or Spearman. Our treatment of those quantities as 1-D random variables poses a novel challenge for evaluation because there exists no inborn notion of ordering among 1-D random variables. We address the challenge by examining the approximation ability of TDA methods for both the first and second moments of the true TDA values $\tau(z_j, z)$ . 
More specifically, we compute the Pearson and Spearman correlation for both the mean (Equation 8) and variance (Equation 9) between the ground-truth $\tau(z_j, z)$ and estimated TDA $\tau'(z_j, z)$ values across multiple train-test pairs $(z_j, z)$ .

# 4 Experiments

We introduce our experimental settings, present analyses on factors contributing to the reliability of TDA values, compare TDA methods, and draw suggestions on the evaluation practice of TDA.

# 4.1 Implementation details

We illustrate the specific details of our implementation. See the Appendix for further information.

TDA methods. We study different TDA methods from a Bayesian perspective. We test the methods introduced in §2.1 for estimating TDA: influence functions (IF) [3], Grad-Dot (GD) and Grad-Cos (GC) [18]. We use the PyTorch implementation of IF from Guo et al. [16] and modify it for our models. As the ground-truth target, we consider Leave-one-out training (LOO) [3]. For LOO, we remove $z_{j}$ from the training set $\mathcal{D}$ by zeroing out the weight of sample $z_{j}$ in the loss. Additionally, we include Charpiat et al.'s [18] notion of TDA that a training data point $z_{j}$ attributes more if an additional training step (ATS) on it changes the test loss more significantly.

Inducing randomness in posterior $p(\theta | \mathcal{D})$ . In §2.2, we have introduced the interpretation of various elements around model training as sources of randomness for the Bayesian posterior. We summarise our methods for inducing randomness in Figure 2. We use the notion of the Deep Ensemble (DE) [11] to sample from the posterior. In a variant of DE with the initialisation as the source of randomness (DE-Init), we train each of $T_{\mathrm{DE}}$ randomly initialised parameters $\theta_0^{(t)}$ on either $\mathcal{D}$ or $\mathcal{D}_{\backslash j}$ .
The resulting parameter sets, $\theta^{(t)}$ and $\theta_{\backslash j}^{(t)}$ , are treated as samples from respective + +Table 1: Stability of TDA estimates. We report p-values for the ground-truth TDA $\tau (z_j,z)$ (LOO) and the estimated TDA values $\tau^{\prime}(z_j,z)$ (rest 4 columns). The p-values are averaged across all train-test pairs $(z_{j},z)$ . We use the CNN model throughout. + +
| Data | Randomness | LOO | ATS | IF | GD | GC |
| --- | --- | --- | --- | --- | --- | --- |
| MNIST3 | SWA+DE-Init | 0.331 | 0.254 | 0.352 | 0.363 | 0.003 |
| MNIST3 | SWA+DE-Batch | 0.025 | 0.039 | 0.000 | 0.000 | 0.000 |
| CIFAR10 | SWA+DE-Init | 0.692 | 0.437 | 0.575 | 0.587 | 0.356 |
| CIFAR10 | SWA+DE-Batch | 0.487 | 0.296 | 0.484 | 0.517 | 0.236 |
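Each p-value in Table 1 comes from the per-pair test of §3: with $n$ sampled attribution differences (the paper uses the $T^2$ pairwise loss differences), one forms a one-sample t statistic and reads the p-value off the Student-t distribution with $n-1$ degrees of freedom. A stdlib-only sketch of the statistic; the sample values are illustrative:

```python
import math

def t_statistic(tau_samples):
    """One-sample t statistic for H0: E[tau] = 0 (cf. Equation 11)."""
    n = len(tau_samples)
    mean = sum(tau_samples) / n
    var_s = sum((x - mean) ** 2 for x in tau_samples) / (n - 1)  # sample variance
    return mean / math.sqrt(var_s / n)

# Five sampled attribution values for one (z_j, z) pair (illustrative numbers).
print(t_statistic([1.0, 2.0, 3.0, 4.0, 5.0]))  # 3 / sqrt(2.5 / 5) ≈ 4.243
```

The two-sided p-value then follows from the Student-t survival function with $n-1$ degrees of freedom, e.g. `2 * scipy.stats.t.sf(abs(t), n - 1)` when SciPy is available.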
+ +![](images/3dc0e98c498c298b7bef6563c40d8b417f5bef49da8a909205390dea53f6b684.jpg) +Figure 3: Stability of TDA estimates per train-test pair. Distribution of p-values for ground-truth TDA (LOO) for different experiments. + +![](images/a5925bc22c6817ba8af902e1f029c391229057fb547aee3ded0b92b4ac2763e6.jpg) + +![](images/07ce7a78efa03db38bfc3d1e5e47dc81ac69ab2b34891eed56c98ebe75b30e20.jpg) + +![](images/5d376d5f1cc3ef0c288a9a52db023f4442c2b7c31bde7aa4712831d0f0cf2d9d.jpg) + +posteriors. We also consider the batch composition in stochastic gradient descent (SGD) as the source of randomness (DE-Batch). In this case, we train from one initial parameter $\theta_0$ with $T_{\mathrm{DE}}$ different random shuffles $\pi^{(t)}$ of the training sets $\mathcal{D}$ and $\mathcal{D}_{\backslash j}$ . This results in two sets of samples from the original and counterfactual posteriors. We increase the number of samples by taking the last $T_{\mathrm{SWA}}$ checkpoints as the Stochastic Weight Averaging (SWA) samples [12, 22]. For Grad-Dot, this coincides with the definition of TracIn [4] as we average the dot products across checkpoints. In total, we take $T = T_{\mathrm{DE}} \times T_{\mathrm{SWA}} = 10 \times 5$ samples from $p(\theta|\mathcal{D})$ and $p(\theta_{\backslash j}|\mathcal{D}_{\backslash j})$ to estimate $\tau(z_j, z)$ . + +Datasets $\mathcal{D}$ . To enable an exhaustive analysis of every train-test pair $(z_{j},z)$ , we define smaller datasets. We use variants of MNIST [24] limited to three classes (MNIST3), and CIFAR10 [25]. For MNIST3, we sample a training set of size 150 and a test set of size 900, i.e. 135,000 train-test pairs. For CIFAR10, we define the training and test set at size 500, i.e. 250,000 train-test pairs. + +Models. We consider two types of image classifiers, visual transformers (ViT, [26]) and convolutional neural networks [27], where we primarily study a two-layer (CNN2-L). 
We also include a three-layer version (CNN3-L) to study the factor of model complexity. For ViT variants, instead of full finetuning, we use LoRA adapter layers [28] to minimise the number of parameters being tuned. The number of trainable parameters of ViT+LoRA (597,514) is comparable to CNN3-L (620,362). + +# 4.2 Reliability of TDA evaluation + +We assess the reliability of TDA evaluation by measuring the degrees of noise in both the ground-truth TDA (LOO) $\tau(z_j, z)$ and the estimated TDA $\tau'(z_j, z)$ . The noise level is measured with the p-value of the Student-t hypothesis testing to determine if the absolute TDA values are significantly greater than the sample noise ( $\S 3$ ). + +We report the results in Table 1. Generally, we observe many TDA measurements, ground-truth and estimations likewise, are unstable with non-significant p-values $(>0.05)$ . In particular, even the ground-truth LOO shows p-values of 0.331 on MNIST3 and 0.692 for CIFAR10 (SWA+DE-Init). In these cases, the noise effectively dominates the signal and any evaluation that does not consider the variance in the posterior $p(\theta|\mathcal{D})$ is likely to be misleading. This confirms the reports in [9] that TDA values are sensitive to model initialisation. + +TDA methods often show similar levels of instability. For example, the IF attains p-values 0.352 and 0.575 on MNIST3 and CIFAR10, respectively, roughly matching the LOO case. Grad-Cos is an exception: it attains lower p-values than the other TDA methods (0.003 and 0.356 for MNIST3 and CIFAR10, respectively). We interpret this as an overconfident TDA estimation. Practitioners shall be wary of using TDA methods that are unreasonably stable when the ground-truth TDA itself is not. + +![](images/4df3cb33ffac86f341e7048fa59aff6c6f44cc333c65fd99f202301f76caaa37.jpg) +CNN2-L, MNIST3 + +![](images/95f01f81bf0a1cd9f3fd4b073b21228d7630729353ed0aeb93bcfef89294e637.jpg) +CNN2-L, CIFAR10 +Figure 4: Impact of training data size. 
Mean p-values of TDA methods with randomness induced by SWA+DE-Init. + +Table 2: Impact of model complexity. Mean p-values of the ground-truth TDA $\tau(z_j, z)$ (LOO) and estimated TDA values $\tau'(z_j, z)$ (the other 4 columns) with randomness induced by SWA+DE-Batch for MNIST3 and DE-Batch for CIFAR10. + +
| Model | Data | LOO | ATS | IF | GD | GC |
| --- | --- | --- | --- | --- | --- | --- |
| CNN2-L | MNIST3 | 0.025 | 0.039 | 0.000 | 0.000 | 0.000 |
| CNN3-L | MNIST3 | 0.370 | 0.368 | 0.464 | 0.470 | 0.005 |
| ViT+LoRA | MNIST3 | 0.786 | 0.573 | 0.369 | 0.365 | 0.093 |
| CNN2-L | CIFAR10 | 0.623 | 0.374 | 0.535 | 0.534 | 0.314 |
| CNN3-L | CIFAR10 | 0.687 | 0.432 | 0.579 | 0.581 | 0.365 |
| ViT+LoRA | CIFAR10 | 0.777 | 0.766 | 0.686 | 0.686 | 0.522 |
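To make the sampling schemes behind these tables concrete, the toy sketch below mimics §4.1 on a one-parameter quadratic loss: DE-Init varies the initialisation, DE-Batch fixes it and varies the SGD shuffling, and the last $T_{\mathrm{SWA}}$ iterates of each run serve as SWA samples. The loss, seeds, and hyperparameters are illustrative stand-ins, not the paper's setup:

```python
import numpy as np

def sgd_checkpoints(theta0, data, shuffle_seed, lr=0.1, epochs=20, t_swa=5):
    """SGD on the toy loss sum_i (theta - x_i)^2 / 2; keep the last
    t_swa iterates as SWA-style posterior samples."""
    rng = np.random.default_rng(shuffle_seed)
    theta, trace = float(theta0), []
    for _ in range(epochs):
        for x in rng.permutation(data):  # fresh batch order each epoch
            theta -= lr * (theta - x)    # per-sample gradient step
            trace.append(theta)
    return trace[-t_swa:]

data = np.array([0.0, 1.0, 2.0, 3.0])

# DE-Init: vary the initialisation, fix the shuffling seed.
de_init = [sgd_checkpoints(np.random.default_rng(s).normal(), data, 0) for s in range(10)]
# DE-Batch: fix the initialisation, vary the shuffling seeds.
de_batch = [sgd_checkpoints(0.0, data, s) for s in range(10)]

samples = np.concatenate(de_init)  # T = T_DE * T_SWA = 10 * 5 = 50 samples
print(samples.shape)               # (50,)
```

The spread of `samples` around the loss minimiser plays the role of the posterior variance that the p-values above quantify.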
# 4.3 Factors influencing the variability of TDA

Based on the observation in §4.2 that TDA values are often dominated by noise, we delve into the factors that lead to the instability of data attributions. We inspect the contributions of model initialisation, training set size and model complexity.

Source of randomness. From a Bayesian ML perspective, the stochasticity of TDA stems from the inherent uncertainty of the learned model posterior $p(\theta | \mathcal{D})$ . We consider two sources of randomness, model initialisation (DE-Init) and SGD batch composition (DE-Batch). Results are reported in Table 1. For MNIST3 and CIFAR10, we observe that DE-Batch introduces lower levels of noise in the TDA estimates (lower p-values). Particularly on MNIST3, both LOO and the other TDA methods result in statistically significant p-values ( $< 0.05$ ). This implies that almost every training sample $z_{j}$ influences every test sample $z$ consistently across various batch compositions. We conclude that the greater source of variation for the TDA estimates is the model initialisation.

Training set size. We study how training set size is a source of noise (cf. Figure 4). We train the CNN2-L with different-size datasets of MNIST3 and CIFAR10, where we vary the number of samples per class. Batches are composed differently depending on the dataset size, meaning that parameter updates are made after processing different data. In addition, we train a CNN2-L on the complete MNIST dataset and use a subset of the training data for our experiments (cf. Appendix B.4). The results show that the variation in TDA scores first increases with dataset size, after which a decrease in variation is observed. The initial increase makes sense as the number of possible batch compositions grows with dataset size. As the batching is initialised randomly during training, batches are likely to be composed of different data for larger datasets.
This leads to variation in the learned model parameters, in turn affecting the reliability of TDA. At the point of decrease, the TDA scores are rather small for all train-test pairs. The attribution of individual training samples to a model prediction is overall small in larger datasets, which leads to a decrease in variance.

Model complexity. We study how model complexity is linked to the reliability of TDA estimates. See Table 2. We observe that, compared to the CNN models, a large ViT model trained with LoRA results in dramatically greater p-values. For example, for LOO on MNIST3, the p-value increases from 0.025 (CNN2-L) to 0.786. A similar trend is observed for other TDA methods. A less dramatic increase in p-values can also be observed with the addition of another layer to the CNN (i.e. from CNN2-L to CNN3-L). This implies that the reliability of TDA estimates decreases with increasing model complexity. While we use LoRA to keep the number of trainable parameters in our ViT comparable to CNN3-L, the p-values computed from its TDA estimates are significantly larger.

![](images/48da6018a2543a31c292840f674dc41acebf4581f3b43694782784271c7da301.jpg)

![](images/262f63919f633677498e1f2b50ceafabab84b8a97d8b5f4d8d6b0cea1df16066.jpg)

![](images/536bebb0a110fb72c3d3f1cf312637e5d3ce6fc9253ce108d49ad9607201194c.jpg)

![](images/741a2f62207e514d9e2ff3cc36ade225b87f1827bc20a834f9f57109828562b3.jpg)
Figure 5: Correlation of TDA methods. Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. All results are based on the setting: CNN2-L, MNIST3, SWA+DE-Init.

![](images/6205cf2e883fd5054fe1f1f01043849b453d8396f4036223ab27a151219bc18e.jpg)

![](images/f4198b0adf8dd940a43b47cf07d1ac36e0e4626aa5930380a0dc26abb0e9e5cc.jpg)

Larger models exhibit a larger parameter space, so noise stemming from model initialisation or batch composition is amplified. While we fix the model initialisation and dataset size, the batch composition still varies across the models $\theta$ sampled from the posterior $p(\theta|\mathcal{D})$ . As both the CNNs and the ViT are trained with the same sampled batch compositions, we attribute the strong increase in p-values to the model complexity.

# 4.4 (Dis)agreement between TDA methods

We test the reliability of different TDA methods. Ideally, all methods approximate the ground-truth TDA (LOO). Yet the results suggest that there are substantial differences among the methods. For example, Grad-Cos is much more stable than all others. Hence, we study TDA methods with respect to both their correlation with LOO and among each other, using the Pearson and Spearman correlation of the mean and variance of the TDA distributions, as proposed in §3.

Figure 5 shows the correlation matrices for one experiment (all experimental results are in the Appendix). The results show that viewing TDA scores as distributions gives insights into the reliability of TDA methods: none of the tested TDA methods' expected values $\hat{\mu}$ correlates with LOO. This implies that none of the TDA methods is a good approximation when the random factors are considered. The poor correlation with the p-values of LOO indicates a disagreement about which train-test pairs are considered low-noise. We conclude that none of the tested methods reliably captures ground-truth TDA distributions.

Interestingly, we notice a stronger correlation among the other methods, particularly when looking at the correlations of p-values. We identify two groups based on positive correlation, i.e. ATS with IF, and GD with GC. Between the two groups, there is a negative correlation, which indicates that the methods interpret the sign of the attribution differently. 
Between IF, GD and GC this makes sense, as there is a negative sign in the definition of IF (Equation 2) which is not present in GD and GC. Considering absolute correlation, IF and GD are strongly correlated, which shows that the dot product is a valid alternative for IF, as they produce similar score distributions. The correlation between GD and GC indicates that the normalisation of gradients does not have a strong impact on the estimated TDA.

IF, GD and GC correlate considerably with ATS, which measures how the loss of a model on $z$ changes after doing one additional training step on $z_{j}$ . Practically, ATS represents the gradient update after $z_{j}$ , which is the same as the gradient of $z_{j}$ itself. Therefore, it makes sense that the gradient-based approximation methods are close to ATS. We recognise a difference in the scope that LOO and ATS address. LOO looks at TDA globally and encapsulates the whole training, whereas ATS considers a local scope with a small model change. As IF, GD and GC correlate with ATS, we observe that they also correspond to a local change in the model, which underlines and extends the argument of [7]: there is a gap between LOO and IF, and more generally between the global and local view on TDA.

The TDA variance $\hat{\sigma}$ is noticeably well-correlated between the TDA estimators and LOO, except for GC. This implies the existence of a consistent ranking of train-test pairs with stable attribution relationships. In particular, stable train-test pairs predicted by LOO are also likely to be stable pairs for TDA methods like ATS, IF, and GD. This motivates our final analysis and recommendation for evaluation in §4.5.

# 4.5 Considerations on TDA evaluation from a Bayesian perspective

Our analysis shows that both TDA estimates and ground-truth TDA values are affected by the noise stemming from the stochastic nature of deep model training. 
Hence, the practice of comparing against such a ground truth is destined to result in fragile estimates. We propose to treat TDA estimates as random variables, which allows us to look at the evaluation from a Bayesian perspective: the comparison of TDA estimates against target TDA values is a comparison of two random variables. Since it is impossible to get rid of the noise, it is better to compare distributions rather than point estimates. This provides an understanding of how well methods approximate the ground-truth distribution.

We observe that p-values vary between individual train-test sample pairs $(z_{j}, z)$ ; not all TDA estimates are equally affected by stochasticity. Interestingly, the presence of low-noise pairs is consistent across the majority of our experiments (cf. Figure 3), with varying sizes of the low-noise fraction. We find that fixing the model initialisation and using a small dataset give rise to a larger number of low-noise pairs.

We propose to focus on such low-noise pairs in TDA evaluation, as their estimates are low in variance, leading to a more reliable evaluation. Identifying such pairs requires an analysis similar to this work: treating TDA values as distributions and sampling multiple times from the posterior to get an estimate of the noise. It is crucial to find low-noise pairs to base evaluations on and to understand when TDA is applicable. If no low-variance pairs exist, TDA cannot be used.

# 5 Related work

We study the reliability of TDA methods and add to the existing body of work on their fragility. Previous studies focused primarily on IF [29, 6, 8, 9, 7]. We extend the analysis by additionally studying other TDA methods. While IFs are theoretically grounded in robust statistics [15], they are based on two assumptions which are not always fulfilled in the context of deep learning: twice-differentiability and strict convexity of the loss [3]. Zhang & Zhang [29] and Basu et al. 
[6] point to the fragility of the influence scores due to the non-convexity of deep learning. In particular, increasing model size is connected to increased model curvature, which means that influence estimates are more fragile for larger models. They find that strong regularisation is needed to improve estimation quality. Our experiments verify the observation that fragility increases with model size, which we observe across methods. We add that sources of randomness in the training process contribute to the fragility of TDA methods with increasing model size. Furthermore, related work found that the size of the training set contributes to the fragility of influence estimates. The attribution of one sample in a large training set is marginal, so both influence estimates and ground-truth influence scores (i.e., from retraining the model) are noisy [6, 8, 9]. Through a Bayesian lens, we connect the increased fragility with increasing dataset size to batch composition as well. Not only is the attribution of a single sample in a large dataset marginal [6], but batches have vastly different compositions in larger datasets, introducing noise. A recent work [7] states that influence functions in deep learning do not correspond to LOO and quantifies gaps in the estimation stemming from model non-linearity. A different approach in TDA [30, 31] aims at predicting the expected model output given a set of data points, directly considering randomness stemming from model initialisation. K & Søgaard [9] recommend reporting expected TDA scores to increase estimation stability. This approach is closest to our work but misses the consideration of variance in TDA estimates, which we include by taking a Bayesian viewpoint.

In contrast to related work, we treat TDA values as distributions, which enables a novel perspective on the TDA task for deep models. We highlight the importance of considering the variance when studying reliability. 
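The distributional treatment advocated above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: we assume the TDA scores for each train-test pair are collected over $T$ posterior samples (e.g. 10 seeds $\times$ 5 SWA checkpoints) and tested against zero with a two-sided one-sample t-test; the scores below are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic TDA scores of shape (T, n_train, n_test): one slice per model
# theta_t sampled from the posterior p(theta | D).
T, n_train, n_test = 50, 8, 4
tda = rng.normal(loc=0.0, scale=1.0, size=(T, n_train, n_test))

mu = tda.mean(axis=0)            # per-pair mean attribution (mu-hat)
sigma = tda.std(axis=0, ddof=1)  # per-pair standard deviation (sigma-hat)
# H0: the attribution of pair (z_j, z) is zero across posterior samples.
_, p = stats.ttest_1samp(tda, popmean=0.0, axis=0)

low_noise = p < 0.05  # candidate pairs for a reliable evaluation
print("low-noise fraction:", low_noise.mean())
```

Comparing a TDA estimator against LOO then amounts to comparing two such collections of per-pair distributions rather than two point estimates.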

# 6 Conclusion

We adopt a Bayesian perspective on training data attribution (TDA) methods to study their reliability when applied to deep models, given the stochastic nature of deep model training. By modelling TDA scores as distributions, we find that randomness in the training process, particularly due to parameter initialisation and batch composition, translates into variation in ground-truth TDA. We empirically observe that current estimation methods, such as influence functions, model a local change in the model, whereas the ground-truth attribution considers a global model change. Therefore, TDA is subject to inherent variance, leading us to suggest to the community: (1) when proposing a novel TDA method, one should view TDA from a Bayesian perspective and study the TDA estimates as distributions; (2) when using TDA, one should consider the variance to understand when the estimate can be trusted.

Limitations. We perform an exhaustive analysis of the TDA values $\tau$ and the estimates $\tau'$ for all train-test pairs $(z_{j}, z)$ . Because of considerable computational costs, we have subsampled the datasets; in practice, datasets are considerably larger. Moreover, we choose simple tasks to eliminate the need for an additional hyperparameter search for model training, as the principal focus is on studying TDA methods. We choose gradient-based TDA methods but acknowledge that there exist many more that we do not address. Hence, we encourage further study of TDA methods to fill these gaps and recommend investigating TDA from a Bayesian perspective, particularly in the low-data regime.

Broader impact. This paper contributes to the field of data-driven XAI, which aims at helping humans understand the inner workings of opaque models through data-centric approaches. Our work contributes to understanding the reliability of TDA methods and rethinking their evaluation against a noisy ground truth, which could help assess when TDA is appropriate and reliable. 
+ +# Acknowledgments and Disclosure of Funding + +Kay Choi has helped in designing Figures 1 and 2. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Elisa Nguyen. This work was supported by the Tübingen AI Center. + +# References + +[1] Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. A survey of methods for explaining black box models. ACM Comput. Surv., 51(5), 2018. ISSN 0360-0300. doi: 10.1145/3236009. URL https://doi.org/10.1145/3236009. +[2] Zayd Hammoudeh and Daniel Lowd. Training data influence analysis and estimation: A survey. ArXiv, abs/2212.04612, 2022. +[3] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1885-1894. PMLR, 06-11 Aug 2017. URL https://proceedings.mlr.press/v70/koh17a.html. +[4] Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. Estimating training data influence by tracing gradient descent. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 19920-19930. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/e6385d39ec9394f2f3a354d9d2b88eec-Paper.pdf. +[5] Jinghan Yang, Sarthak Jain, and Byron Wallace. How many and which training points would need to be removed to flip this prediction? In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2563-2576, Dubrovnik, Croatia, May 2023. Association for Computational Linguistics. URL https://aclanthology.org/2023.eacl-main.188. + +[6] Samyadeep Basu, Phil Pope, and Soheil Feizi. Influence functions in deep learning are fragile. 
In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=xHKVVHGDOEk. +[7] Juhan Bae, Nathan Ng, Alston Lo, Marzyeh Ghassemi, and Roger B Grosse. If influence functions are the answer, then what is the question? In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 17953-17967. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/7234e0c36fdbcb23e7bd56b68838999b-Paper-Conference.pdf. +[8] Jacob Epifano, Ravichandran Ramachandran, Aaron J. Masino, and Ghulam Rasool. Revisiting the fragility of influence functions. Neural networks : the official journal of the International Neural Network Society, 162:581-588, 2023. +[9] Karthikeyan K and Anders Søgaard. Revisiting methods for finding influential examples. CoRR, abs/2111.04683, 2021. URL https://arxiv.org/abs/2111.04683. +[10] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 681–688, 2011. +[11] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems, 30, 2017. +[12] Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. In 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, pages 876-885. Association For Uncertainty in Artificial Intelligence (AUAI), 2018. +[13] Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, and Richard Zemel. Understanding the origins of bias in word embeddings. 
In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 803-811. PMLR, 09-15 Jun 2019. URL https://proceedings.mlr.press/v97/brunet19a.html.
[14] Ekin Akyurek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, and Kelvin Guu. Towards tracing knowledge in language models back to the training data. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2429-2446, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.findings-emnlp.180.
[15] Frank R. Hampel. The influence curve and its role in robust estimation. Journal of the American Statistical Association, 69(346):383-393, 1974. doi: 10.1080/01621459.1974.10482962.
[16] Han Guo, Nazneen Rajani, Peter Hase, Mohit Bansal, and Caiming Xiong. FastIF: Scalable influence functions for efficient model interpretation and debugging. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10333-10350, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.808. URL https://aclanthology.org/2021.emnlp-main.808.
[17] Andrea Schioppa, Polina Zablotskaia, David Vilar, and Artem Sokolov. Scaling up influence functions. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):8179-8186, Jun. 2022. doi: 10.1609/aaai.v36i8.20791. URL https://ojs.aaai.org/index.php/AAAI/article/view/20791.
[18] Guillaume Charpiat, Nicolas Girard, Loris Felardos, and Yuliya Tarabalka. Input similarity from the neural network perspective. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. 
URL https://proceedings.neurips.cc/paper_files/paper/2019/file/c61f571dbd2fb949d3fe5ae1608dd48b-Paper.pdf.

[19] Rajiv Khanna, Been Kim, Joydeep Ghosh, and Sanmi Koyejo. Interpreting black box predictions using Fisher kernels. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 3382-3390. PMLR, 2019.
[20] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.
[21] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050-1059. PMLR, 2016.
[22] Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson. A simple baseline for Bayesian uncertainty in deep learning. Advances in Neural Information Processing Systems, 32, 2019.
[23] Student. The probable error of a mean. Biometrika, pages 1-25, 1908.
[24] Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits. 2005.
[25] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
[26] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.
[27] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[28] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. 
In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=nZeVKeeFYf9.
[29] Rui Zhang and Shihua Zhang. Rethinking influence functions of neural networks in the overparameterized regime. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):9082-9090, Jun. 2022. doi: 10.1609/aaai.v36i8.20893. URL https://ojs.aaai.org/index.php/AAAI/article/view/20893.
[30] Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, and Aleksander Madry. Datamodels: Predicting predictions from training data. In Proceedings of the 39th International Conference on Machine Learning, 2022.
[31] Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, and Aleksander Madry. TRAK: Attributing model behavior at scale. In International Conference on Machine Learning (ICML), 2023.
[32] Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, and Sayak Paul. PEFT: State-of-the-art parameter-efficient fine-tuning methods. https://github.com/huggingface/peft, 2022.
[33] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6.
[34] Bichen Wu, Chenfeng Xu, Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Zhicheng Yan, Masayoshi Tomizuka, Joseph Gonzalez, Kurt Keutzer, and Peter Vajda. Visual transformers: Token-based image representation and processing for computer vision, 2020. 
+ +# A Model training details + +We provide the source code at https://github.com/ElisaNguyen/bayesian-tda. All experiments were run on a single Nvidia 2080ti GPU. + +# A.1 Data sampling + +We use subsampled versions of the openly available MNIST [24] and CIFAR10 [25] datasets. For this, we first add an index which we use for randomly sampling a fixed number of images from each class. Table 3 includes the dataset sizes of the different experiments. + +# A.2 Model training + +CNN2-L has two convolutional layers followed by two fully connected linear layers, with GeLU activation. We use the Adam optimizer with a learning rate of 0.001 and a weight decay of 0.005. We use the cross-entropy loss and train the model for 15 epochs on MNIST3 and for 30 epochs on CIFAR10 with a batch size of 32. CNN3-L has 3 convolutional layers followed by two fully connected linear layers with GeLU activation. The hyperparameters are the same as for CNN2-L. For training the ViT with LoRA, we use the peft [32] and HuggingFace transformers library [33]. We start from the pretrained model checkpoint of [34] and finetune the LoRA model with the same hyperparameters as the CNN. + +An overview of the predictive performance measured in accuracy on the subsampled training and test sets is provided in Table 3. + +Hint for reproducibility. In particular, we use CrossEntropyLoss (reduction = 'none') during model training and also update this in the ViT training script modeling_vit.py. This is important for the LOO experiments, where we exclude a sample $z_{j}$ from contributing to the training by zeroing out the loss. + +Table 3: Predictive performance at ${95}\%$ CI across 10 runs (computed as 1.96*SE) + +
| Model | Data | Randomness | \|D_train\| | \|D_test\| | Accuracy (train) | Accuracy (test) |
| --- | --- | --- | --- | --- | --- | --- |
| CNN2-L | MNIST3 | SWA+DE-Init | 30 | 900 | 0.987±0.010 | 0.953±0.003 |
| CNN2-L | MNIST3 | SWA+DE-Init | 60 | 900 | 0.985±0.007 | 0.970±0.004 |
| CNN2-L | MNIST3 | SWA+DE-Init | 90 | 900 | 0.995±0.003 | 0.939±0.005 |
| CNN2-L | MNIST3 | SWA+DE-Init | 120 | 900 | 0.999±0.001 | 0.941±0.008 |
| CNN2-L | MNIST3 | SWA+DE-Init | 150 | 900 | 0.998±0.004 | 0.970±0.003 |
| CNN2-L | MNIST3 | SWA+DE-Init | 180 | 900 | 1.000±0.000 | 0.985±0.001 |
| CNN2-L | CIFAR10 | SWA+DE-Init | 100 | 500 | 0.989±0.020 | 0.260±0.010 |
| CNN2-L | CIFAR10 | SWA+DE-Init | 200 | 500 | 1.000±0.000 | 0.328±0.007 |
| CNN2-L | CIFAR10 | SWA+DE-Init | 300 | 500 | 1.000±0.000 | 0.360±0.005 |
| CNN2-L | CIFAR10 | SWA+DE-Init | 400 | 500 | 0.983±0.020 | 0.362±0.011 |
| CNN2-L | CIFAR10 | SWA+DE-Init | 500 | 500 | 0.992±0.014 | 0.377±0.012 |
| CNN2-L | CIFAR10 | SWA+DE-Init | 600 | 500 | 0.994±0.005 | 0.394±0.011 |
| CNN2-L | MNIST3 | SWA+DE-Batch | 150 | 900 | 0.993±0.002 | 0.975±0.002 |
| CNN2-L | CIFAR10 | SWA+DE-Batch | 500 | 500 | 0.991±0.010 | 0.364±0.003 |
| CNN3-L | MNIST3 | SWA+DE-Init | 150 | 900 | 0.994±0.005 | 0.971±0.008 |
| CNN3-L | CIFAR10 | SWA+DE-Init | 500 | 500 | 0.989±0.008 | 0.363±0.011 |
| ViT+LoRA | MNIST3 | SWA+DE-Batch | 150 | 900 | 0.945±0.008 | 0.935±0.005 |
| ViT+LoRA | CIFAR10 | SWA+DE-Batch | 500 | 500 | 0.934±0.006 | 0.892±0.008 |
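The caption of Table 3 states that the reported intervals are computed as 1.96 times the standard error of the mean across 10 runs. A minimal sketch of that computation (the accuracy values below are made up for illustration, not taken from the table):

```python
import math

def mean_ci95(values):
    """Return (mean, half-width), where half-width = 1.96 * standard error."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                               # standard error
    return mean, 1.96 * se

accuracies = [0.95, 0.97, 0.96, 0.94, 0.96, 0.95, 0.97, 0.96, 0.95, 0.96]
m, hw = mean_ci95(accuracies)
print(f"{m:.3f}±{hw:.3f}")  # prints 0.957±0.006
```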

# B Additional experimental results

We study the reliability of TDA estimates and values through a hypothesis test, where we report the p-values as an indication of the statistical significance of the TDA estimate. In this appendix, we provide a complete overview of the analyses of p-values and correlations between different TDA methods.

![](images/cfbec1e072d49627d213186115c6d171a2f1dbb0d24023fad1ddcad36574f752.jpg)
Figure 6: P-values of ground-truth TDA (LOO) with increasing number of SWA samples (i.e., number of model checkpoints used as samples of $\theta$ ) for the CNN trained on MNIST3 and CIFAR10.

![](images/f83110fdfcb2f0c9a69bdc6a5bf3b4ddb4183eb6e9b404168f7b7ab957b5ebd4.jpg)

# B.1 DE vs. DE+SWA

In our work, we sample trained models $\theta$ from the posterior $p(\theta | \mathcal{D})$ . Concretely, we train the model across 10 different random seeds and record the checkpoints after each of the last five epochs of training. Each of these models represents a sample. We test how the stability of the TDA values $\tau(z_j, z)$ (LOO) behaves when we ensemble different numbers of checkpoints by investigating the mean p-values across all train-test pairs $(z_j, z)$ . Figure 6 shows that higher numbers of samples $\theta$ increase stability; therefore, we use all available samples in our subsequent analyses.

# B.2 Correlation analysis on low-noise pairs

![](images/66c3d88f6e0377365f09078b4cf710f5feaa576663edc64254692c1d24bd85da.jpg)
Figure 7: Correlation analysis of low-noise train-test pairs. Spearman rank correlation coefficients for TDA scores with $p < 0.05$ of LOO retraining of the 2-layer CNN models trained on the respective datasets.

We analyse the Spearman correlation between the TDA approximation methods and ground-truth TDA (LOO) for low-noise train-test pairs (i.e. where the p-value of the LOO scores is $\leq 0.05$ ). The correlation matrices are shown in Figure 7. 
Within the subset of low-noise train-test pairs, the Spearman correlation of the mean between LOO and the approximation methods is higher than when considering the whole dataset, for both MNIST3 and CIFAR10. The correlation of $\hat{\sigma}$ in the low-noise analysis is similar to the results found in the correlation analysis of the whole dataset (cf. Figures 12, 15). Therefore, when considering only the low-noise train-test pairs, we observe trends similar to the complete dataset: there is a weak correlation between the TDA approximation methods and LOO, while the TDA approximation methods correlate strongly among themselves. This reflects the difference between the global change of retraining and the local approximations.

# B.3 Mislabel identification

![](images/102a38de14a61fd0f337f38b86a2dc4821fac3b46f6f051072e8f5f5de3dae5c.jpg)

![](images/07f02c6083e4f60857284cbde732f61a7f2e6892420781c205381d8e98a52a88.jpg)

![](images/73c0e571b9edf27df3d8f2192b2db91d0ab97b2e49d3686829e249ca7bbe4b75.jpg)
Figure 8: Fractions of mislabeled samples discovered with the deterministic definition of TDA (left, Equation 1) vs. a probabilistic definition of TDA (right, Equation 7).

![](images/bc1ae553850d92dcad6582a3c06c2f1d72e5dd6f855f4e76f370556a7b39a747.jpg)

A common way of evaluating TDA methods is through the auxiliary task of mislabel identification. We perform this experiment with CNN2-L trained on MNIST3 $(|\mathcal{D}| = 150)$ and CIFAR10 $(|\mathcal{D}| = 500)$ . We follow the procedure from Koh & Liang [3]: first, a random $10\%$ of each dataset is mislabeled with the highest-scoring incorrect label. We train the model using these mislabeled datasets (sampling $T = 50$ models from the posterior). Then, we compute self-influence, which is the attribution of a sample to itself, with each TDA method. 
The mislabeled dataset is ranked according to self-influence, and the quantity of interest is the fraction of mislabeled samples found when inspecting the top $x\%$ of the ranked dataset. In the analysis, we inspect (1) the range of mislabeled fractions discovered if we treat TDA deterministically, i.e. we compute the discovered fraction per posterior sample $t \in \{1, \ldots, T\}$ and report the range (cf. Figure 8 left); (2) the fraction of mislabeled samples discovered when we use the mean over the TDA scores of our posterior samples (cf. Figure 8 right). We find that deterministic TDA results in a large range of possible outcomes for identifying mislabeled samples. This means that it is harder to reliably identify mislabels when TDA is treated as a point estimate.

# B.4 Experiments on MNIST (|D|=60,000)

Table 4: Mean p-values of ground-truth TDA (LOO) and estimated TDA values with randomness induced by SWA+DE-Init for CNN2-L trained on the complete MNIST dataset. 
| LOO | ATS | IF | GD | GC |
| --- | --- | --- | --- | --- |
| 0.761 | 0.362 | 0.464 | 0.475 | 0.247 |
+ +In addition to our main experiments, we conduct a statistical test on the TDA scores obtained from a CNN2-L model trained on the complete MNIST $(|\mathcal{D}| = 60,000)$ dataset (cf. Figure 4). We base the analysis on $T = 50$ samples from the model posterior (10 random seeds $\times 5$ checkpoints) and a small random subsample of data (100 training samples $\times 10$ test samples $= 1000$ train-test pairs) due to the high computational cost of retraining. We find high $(\mathrm{p} > 0.05)$ variance in ground-truth TDA as well as TDA estimates, in line with findings from §4.3 Training set size. + +The results show that TDA estimates vary strongly for the small subset of train-test pairs (high p-values). We identify two main reasons: (1) MNIST is larger in training set size than the initial sets we used. The attribution of one sample to model behaviour is likely to be marginal and unstable. (2) Low-noise samples exist, but they are in the minority. This experiment shows that LOO is affected by the stochasticity of deep model training and TDA approximation methods fail to capture this. + +# B.5 All results: Stability of TDA values and estimates + +Table 5 presents the complete table of p-values of all tested TDA methods across all experiments of our work except for the experiments detailed in Appendices B.3 and B.4. Below, we display the histograms of p-values per experiment corresponding to each line in the table captioned with the experiment ID. The histograms show that low-noise train-test pairs $(z_{j},z)$ are present in all experiments involving the CNN model (i.e., experiments 1-8), where the number of pairs varies. Generally, we observe that there is no connection between the size of the dataset and the distribution of p-values. Furthermore, we notice that fixing the model initialisation (i.e., randomness induced by SWA+DE-Batch) increases the number of stable train-test pairs (cf. experiment 5 to 13, 11 to 14). 
However, in the case of the ViT experiments, stable train-test pairs are practically non-existent which shows that model complexity affects the stability of TDA. + +Table 5: Complete list of mean p-values of TDA values for all experiments. + +
| ID | Model | Data | Randomness | \|D_train\| | LOO | ATS | IF | GD | GC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | CNN2-L | MNIST3 | SWA+DE-Init | 30 | 0.058 | 0.088 | 0.111 | 0.110 | 0.020 |
| 2 | CNN2-L | MNIST3 | SWA+DE-Init | 60 | 0.421 | 0.148 | 0.251 | 0.253 | 0.002 |
| 3 | CNN2-L | MNIST3 | SWA+DE-Init | 90 | 0.714 | 0.355 | 0.466 | 0.470 | 0.209 |
| 4 | CNN2-L | MNIST3 | SWA+DE-Init | 120 | 0.675 | 0.346 | 0.469 | 0.472 | 0.218 |
| 5 | CNN2-L | MNIST3 | SWA+DE-Init | 150 | 0.331 | 0.254 | 0.352 | 0.363 | 0.003 |
| 6 | CNN2-L | MNIST3 | SWA+DE-Init | 180 | 0.374 | 0.254 | 0.355 | 0.356 | 0.001 |
| 7 | CNN2-L | CIFAR10 | SWA+DE-Init | 100 | 0.665 | 0.424 | 0.607 | 0.608 | 0.352 |
| 8 | CNN2-L | CIFAR10 | SWA+DE-Init | 200 | 0.552 | 0.397 | 0.450 | 0.452 | 0.399 |
| 9 | CNN2-L | CIFAR10 | SWA+DE-Init | 300 | 0.543 | 0.389 | 0.456 | 0.456 | 0.313 |
| 10 | CNN2-L | CIFAR10 | SWA+DE-Init | 400 | 0.619 | 0.418 | 0.562 | 0.568 | 0.344 |
| 11 | CNN2-L | CIFAR10 | SWA+DE-Init | 500 | 0.692 | 0.438 | 0.575 | 0.587 | 0.356 |
| 12 | CNN2-L | CIFAR10 | SWA+DE-Init | 600 | 0.665 | 0.447 | 0.575 | 0.579 | 0.358 |
| 13 | CNN2-L | MNIST3 | SWA+DE-Batch | 150 | 0.025 | 0.039 | 0.000 | 0.000 | 0.000 |
| 14 | CNN2-L | CIFAR10 | SWA+DE-Batch | 500 | 0.623 | 0.374 | 0.535 | 0.534 | 0.314 |
| 15 | CNN3-L | MNIST3 | SWA+DE-Init | 150 | 0.370 | 0.368 | 0.464 | 0.479 | 0.005 |
| 16 | CNN3-L | CIFAR10 | SWA+DE-Init | 500 | 0.687 | 0.432 | 0.579 | 0.581 | 0.365 |
| 17 | ViT+LoRA | MNIST3 | SWA+DE-Batch | 150 | 0.786 | 0.573 | 0.369 | 0.365 | 0.093 |
| 18 | ViT+LoRA | CIFAR10 | SWA+DE-Batch | 500 | 0.777 | 0.766 | 0.686 | 0.686 | 0.522 |
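Each entry of Table 5 is the mean p-value over all train-test pairs for one method in one experiment. A small sketch of this aggregation, together with the fraction of low-noise ($p < 0.05$) pairs discussed in the text; the per-pair p-values below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
methods = ["LOO", "ATS", "IF", "GD", "GC"]
n_pairs = 1000  # e.g. 100 training samples x 10 test samples

# In the paper these would come from per-pair hypothesis tests over the
# posterior samples; here they are drawn uniformly for illustration.
p_per_pair = {m: rng.uniform(0.0, 1.0, size=n_pairs) for m in methods}

mean_p = {m: float(p_per_pair[m].mean()) for m in methods}           # one table row
stable = {m: float((p_per_pair[m] < 0.05).mean()) for m in methods}  # low-noise share

for m in methods:
    print(f"{m}: mean p = {mean_p[m]:.3f}, low-noise fraction = {stable[m]:.3f}")
```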

# B.6 All results: Correlation analysis

In the main body of this paper, we report the Pearson and Spearman correlation matrices for experiment 3 (CNN2-L trained on MNIST3 with 50 samples per class and randomness induced by SWA+DE-Init). This section presents the complete overview of correlations between TDA methods across all experiments in Figures 10 - 27. We note that the observations and analyses in the main paper hold across experiments.

![](images/04a8c6c516ebdeeceb5c045e9e3146cfb0deb1e02b3daa083ccf16e067e04344.jpg)

![](images/a57c163ad4b1225ec8b87b377b9700b31d831b50986180f8b624a0c9066cdac7.jpg)

![](images/c3ac026c5beefa70b17ab77fcc624bfec768bce724212356d2e27034c3ec2fae.jpg)

![](images/15f582a0101973cff56f8168678c76b8b4acf7650f9034556c53b3eebd958642.jpg)

![](images/dd29d3b6a96d90be507954c4520ff6bd8f2ebb0621427af64a2c7a760ec62538.jpg)

![](images/8bda045074aceffba544ebc02d28cd41683dfc49442abdccaa1f2b560dcad6f2.jpg)

![](images/ea82df181c717e4939b58f8faeac8d9dbd12e5daf4b1ecd2b22706d1a2483a58.jpg)

![](images/d0502f057b3173b80c64618eeaad04fe89a6f0d5135994eb3b6eec856030252e.jpg)

![](images/7aeb6682ba1ed66f56d36d6d7b13359fe3bfcd29f9541053aee5b129458ff822.jpg)

![](images/df4b77bdebc146a767fb9b0675b89a4ad28c77fe0e5a80043db06acf8bdb50bb.jpg)

![](images/81bed11eaefd51527962056df9770037dbb74d4cbeeddec05e14279a8c083da5.jpg)

![](images/f61d04b3347708e82fcb84b891998b5a4d690e978e308c1f5251e17ebde8eb2e.jpg)

![](images/19791453bec14d6668f673d2eccedafd6fb9c2585c40ac7d5ead07de142d7096.jpg)

![](images/4e210850d312f315adbb821286a4afb6081f6cf3659f02c900b9d58402db1814.jpg)

![](images/d4f7d5cd7c7faa461653f954b03e5533ecbce0d8684f0aaf95c8e8a1d78a0311.jpg)

![](images/2733031230116a6569d39e1dd76517a2da050a2af7134751f652093a3ef2da5c.jpg)

![](images/de04d2c7b526aeb8d161cbc90ef2416dc40f9c2b578d192f5161e0a82731356f.jpg)
Figure 9: Distribution of p-values for ground-truth TDA (LOO) 
for all experiments (IDs corresponding to IDs in Table 5).

![](images/3898b0af85311a39ac898d5efea9c69c9aa7b60b8cdbb7a2d5d8ed7cdd894172.jpg)

![](images/78c7f551c392c304cb83e17c14de70b7891709ccb62d48ddca07f0bbec3b4c67.jpg)
Figure 10: Experiment 1 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values.

![](images/c81a86b9a6bec82f166068d75b27a8f3dcfecd13b2548fead57ce8718be017c7.jpg)
Figure 11: Experiment 2 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values.

![](images/8a0fb2473ae769df1fc14d9166c231d37a347e132177ad2c99adf6c7c9ab3d5f.jpg)
Figure 12: Experiment 3 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values.

![](images/1b533e3fee4884f91c7b00a166bc0de39b4a8997e733ff55a91bb16beecb75a5.jpg)
Figure 13: Experiment 4 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values.
+ +![](images/753c1c1872ed121ab2916a6244cd9294a9c2c0759cf7549a992ee04884c61d31.jpg) + +![](images/f2d88862875408f22a37e77fb23b10463d7e3d6afb232a858c164dae3c80c9c1.jpg) + +![](images/d7a512915ea9ca4de8aaf3e8db3170f1e5b26a7bad2c35eed42419f246ef7816.jpg) + +![](images/e9de31d4f46ded79fef8531ad60b42957fa5c95f62d2105c2a16ec27ef13fe7f.jpg) +Mean $\hat{\mu}$ + +![](images/6542569649623dcacdb2fc6822103f19b16283cffb0e808f8b4ce2496d83d8a0.jpg) +Standard deviation $\hat{\sigma}$ + +![](images/05c1a0e4dde8431067dc525341de317d7cd1753d5888372507d8a48d177146fb.jpg) +p-value +Figure 14: Experiment 5 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. + +![](images/b3859ef0c575dc48504c56e3cf0e7283dba67fc76410d5ffa6425be4f8670631.jpg) + +![](images/9f1f6792afa5a1349fd368bdbfae8ea7e18d0c979e7c2aad5ac35eb92f1347e6.jpg) + +![](images/553a28da420f8e307874d5e9745c5d5ed9d6ebe3e499cc7ccc75177e77d70d74.jpg) + +![](images/57f85f9d229947a2304e8ea5a620a145787ef2c4880792adb17704e343b1872c.jpg) +Mean $\hat{\mu}$ +Figure 15: Experiment 6 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. + +![](images/f42a2d5e9385f8dd9d12e1a44ff44b052a3be2f82548d6914375867c9c9f0aa6.jpg) +Standard deviation $\hat{\sigma}$ + +![](images/f6d02fee041f8445f69905dc8cb3a53390217ca6552211fc3022e005d5bdb7db.jpg) +p-value + +![](images/d13119ce929129ad212f60d3538c600f41d923f742cc9714bce7029d477afff0.jpg) +Figure 16: Experiment 7 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. 
+ +![](images/92df3da20fe91570563be4366ae2dec18b723095f214d8fec83ea0e408897625.jpg) +Figure 17: Experiment 8 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. + +![](images/abe9b2875b78b45e77b03340550b5732de90408c87c97822ffa0135908d06275.jpg) +Figure 18: Experiment 9 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. + +![](images/a2ddc62a45df8d0ebdbadc5b8301f92b8f09297ebbb9fac02b660a0d300ebed4.jpg) +Figure 19: Experiment 10 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. + +![](images/49275b6e8a4572172e28ca6c3baf6cfa29a4cc55ba962358b8ae83e392965f25.jpg) +Figure 20: Experiment 11 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. + +![](images/e38be1db714ecfeec3282b9272d8b40ec0161820fa9013d1152a158cbbc78b01.jpg) +Figure 21: Experiment 12 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. + +![](images/c89deb5776df81060691b6e509b37cebe26ca041699a6c76e2a3271fc101119a.jpg) +Figure 22: Experiment 13 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. 
+ +![](images/d931d36fa6ac3dbbd3d3ddfdbfc9245c5853f5142fc1457dfd228b2472932a06.jpg) +Figure 23: Experiment 14 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. + +![](images/61f6069c20760681d4cd4a36ffc6b4f2d6619898dc0574bfa4444152471b5b80.jpg) +Figure 24: Experiment 15 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. + +![](images/8f85246cf3f6f17715a142e8ba3372f81cf9baa583b24488d9b20486fae68bc0.jpg) +Figure 25: Experiment 16 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. + +![](images/91e4127696e98602415e176e5047fb41019131bc125b708f13f31be652c79797.jpg) +Figure 26: Experiment 17 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. + +![](images/8391c3a309dde87ba08a9d8520e914b3517c44eca57cefb7cbbfcee38e361394.jpg) +Figure 27: Experiment 18 (cf. Table 5): Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean $\hat{\mu}$ , TDA standard deviation $\hat{\sigma}$ , and TDA p-values. 
\ No newline at end of file diff --git a/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/images.zip b/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..579a0b751085d6be0657ddf2776cdc52b8870b35 --- /dev/null +++ b/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:768883cce265c9f155d71b5f4ae6b38fec82169d23e014b66c27d35f22d29ee4 +size 2151972 diff --git a/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/layout.json b/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..856ff84f3bcf678061111009a5ea64875c3cc697 --- /dev/null +++ b/abayesianapproachtoanalysingtrainingdataattributionindeeplearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58f0ab44f8839c7d950bd16e61bf290bbb55fe93f9d51ca4a18710538ad15775 +size 704367 diff --git a/abayesiantakeongaussianprocessnetworks/48f2bc22-f790-41da-a0a5-80de7f26fa8f_content_list.json b/abayesiantakeongaussianprocessnetworks/48f2bc22-f790-41da-a0a5-80de7f26fa8f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ae6931b61aa5b18b55efa0de0be6393091d06892 --- /dev/null +++ b/abayesiantakeongaussianprocessnetworks/48f2bc22-f790-41da-a0a5-80de7f26fa8f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9841185cfbaef49777d783a4afc1a36a197ca28e1166f2c9d9652749276861d +size 84997 diff --git a/abayesiantakeongaussianprocessnetworks/48f2bc22-f790-41da-a0a5-80de7f26fa8f_model.json b/abayesiantakeongaussianprocessnetworks/48f2bc22-f790-41da-a0a5-80de7f26fa8f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..675e0b2646335c9b1b182fceb46b46cb5d2bc3b6 --- /dev/null +++ 
b/abayesiantakeongaussianprocessnetworks/48f2bc22-f790-41da-a0a5-80de7f26fa8f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5524609a42377bae2a07bfdc654c0545bdf19cb3353dcb73aacfd72eca03a53 +size 104509 diff --git a/abayesiantakeongaussianprocessnetworks/48f2bc22-f790-41da-a0a5-80de7f26fa8f_origin.pdf b/abayesiantakeongaussianprocessnetworks/48f2bc22-f790-41da-a0a5-80de7f26fa8f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..267040fddeec612c70a9be669497b24dd1148568 --- /dev/null +++ b/abayesiantakeongaussianprocessnetworks/48f2bc22-f790-41da-a0a5-80de7f26fa8f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e239a680e588886a091d3dea05c98f75513a5e689736404ab0cbc5f834bdb2b2 +size 903992 diff --git a/abayesiantakeongaussianprocessnetworks/full.md b/abayesiantakeongaussianprocessnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3bf203f91e5e06df0ab1e14195b53cdc46eb9002 --- /dev/null +++ b/abayesiantakeongaussianprocessnetworks/full.md @@ -0,0 +1,402 @@ +# A Bayesian Take on Gaussian Process Networks + +# Enrico Giudice + +Dep. of Mathematics and Computer Science + +University of Basel, Basel, Switzerland + +enrico.giudice@unibas.ch + +# Jack Kuipers + +Dep. of Biosystems Science and Engineering + +ETH Zurich, Basel, Switzerland + +jack.kuipers@bsse.ethz.ch + +# Giusi Moffa + +Dep. of Mathematics and Computer Science, University of Basel, Basel, Switzerland + +and Division of Psychiatry, University College London, London, UK + +giusi.moffa@unibas.ch + +# Abstract + +Gaussian Process Networks (GPNs) are a class of directed graphical models which employ Gaussian processes as priors for the conditional expectation of each variable given its parents in the network. The model allows the description of continuous joint distributions in a compact but flexible manner with minimal parametric assumptions on the dependencies between variables. 
Bayesian structure learning of GPNs requires computing the posterior over graphs of the network and is computationally infeasible even in low dimensions. This work implements Monte Carlo and Markov Chain Monte Carlo methods to sample from the posterior distribution of network structures. As such, the approach follows the Bayesian paradigm, comparing models via their marginal likelihood and computing the posterior probability of the GPN features. Simulation studies show that our method outperforms state-of-the-art algorithms in recovering the graphical structure of the network and provides an accurate approximation of its posterior distribution. + +# 1 Introduction + +Bayesian networks (BNs) are a powerful tool for compactly representing joint distributions and the underlying relationships among a large set of variables [33]. These relationships are described via a directed acyclic graph (DAG), with each node in the graph representing a random variable. The joint distribution of a set of random variables $\mathbf{X} = \{X_1, \dots, X_n\}$ factorizes into conditional distributions for each variable given its parents in the DAG: + +$$ +p (\mathbf {X}) = \prod_ {i = 1} ^ {n} p \left(X _ {i} \mid \operatorname {P a} _ {X _ {i}}\right). \tag {1} +$$ + +The DAG provides a visual description of the dependency structure among the variables, where missing edges encode conditional independence relations among pairs of variables. Thanks to their inherent flexibility and their ability to combine expert knowledge with data, Bayesian networks have been applied to a large range of domains [2]. + +In the absence of full prior knowledge of the underlying graph, inference on the structure is necessary. For this purpose, a plethora of structure learning algorithms have been proposed, which involve either learning the structure from the conditional independence relations in the data (i.e. 
constraint-based) or assigning a score to each DAG and searching the graph space for high-scoring networks. A recent overview of current algorithms appears in Kitson et al. [23], while a reproducible workflow for benchmark studies is available in Rios et al. [37]. Hybrid methods combining both constraint- and score-based aspects generally offer current state-of-the-art performance.

Typical scoring functions rely on measures of goodness of fit of the graphical model for the given data, such as the Bayesian Information Criterion (BIC) and the Mutual Information Test (MIT) [24, 3]. Another common choice is to use the graph marginal likelihood that we can derive by integrating over the parameter prior on a graphical model. The vast majority of score-based methods implement searches aiming to maximise the scoring function and return one single optimal structure. By complementing the marginal likelihood with a graph prior, one can score graphs according to their posterior distribution and sample from it. By fully characterizing the posterior distribution over graphs it is possible to go beyond optimal point estimates and enable Bayesian model averaging. In this context, approaches have emerged to capture both network and parameter uncertainties across a variety of research areas [27, 1, 32].

Likewise, we focus on Bayesian methods which sample DAGs according to their posterior distribution. Unlike approaches based on maximizing the score function, which return a single graph estimate, Bayesian approaches provide a full posterior distribution over DAGs, which we can use to perform inference on the network's features of interest. However, Bayesian structure inference typically requires a parametric model for the conditional distributions of each variable given its parents in the network to compute the posterior. Variations of the BDe score [21] are the usual choice for discrete-variable networks.
Other information theory-based score functions [41, 3] are not Bayesian since they do not estimate a marginal likelihood but rather provide a measure of fitness of a DAG to the data based on the minimum description length principle. + +In the case of continuous-variable networks, most of the current research has focused on the linear-Gaussian case, due to the availability of the closed-form BGe score [12, 26]. When relaxing the linear-Gaussian assumption, current approaches fall outside of the Bayesian framework since they do not target a posterior probability for DAGs but rather search the DAG space for a high-scoring network by minimizing a generic loss function. For example, Elidan [6] employs rank correlation as a goodness-of-fit measure between a graph and the observed data; Sharma and van Beek [40] propose modeling the relations among variables via regression splines and scoring the networks via a cross-validated score. Using the formulation by Zheng et al. [52] of the structure learning problem as a constrained continuous optimization, Yu et al. [51] model the conditional distributions via a graph neural network. + +In this work, we develop a sampling scheme to perform fully Bayesian structure inference on generic continuous-variable networks with potentially non-linear relations among variables. We follow the strategy of Friedman and Nachman [9] and model the functional dependencies between each variable and its parents via Gaussian process priors. We extend their approach to the Bayesian framework by making structural inference based on the posterior distribution, which also involves treating the hyperparameters of the model as random. After a short review in Section 2 of Bayesian structure inference, Gaussian processes and how they can be used to parameterize BNs, Section 3 proposes a sampling scheme to perform fully Bayesian inference on nonlinear, continuous networks. 
In Sections 4 and 5 we evaluate and compare our algorithm to existing approaches on simulated and real data. + +# 2 Background + +# 2.1 Bayesian Structure Inference + +When modeling a set of variables $\mathbf{X}$ with a Bayesian network, a typical objective is to estimate the probability of a generic feature of interest $\Psi$ . Examples of such a feature are the presence and direction of certain edges, the topological ordering of nodes or conditional independence relations among variables. Given a complete set $D$ of observations of $\mathbf{X}$ , we can obtain the posterior probability of $\Psi$ by integrating the probability or presence of the feature over the posterior distribution of graphs: + +$$ +p (\Psi \mid D) = \sum_ {\mathcal {G}} p (\Psi \mid \mathcal {G}) p (\mathcal {G} \mid D), \quad p (\mathcal {G} \mid D) \propto p (D \mid \mathcal {G}) p (\mathcal {G}), \tag {2} +$$ + +wherein a key component is the likelihood of the data integrated over the prior distribution of the parameters $\theta$ of the conditional distributions: + +$$ +p (D \mid \mathcal {G}) = \int p (D \mid \mathcal {G}, \theta) p (\theta \mid \mathcal {G}) d \theta . \tag {3} +$$ + +The marginal likelihood $p(D \mid \mathcal{G})$ for a given DAG $\mathcal{G}$ is available in closed form for specific combinations of priors on parameters and likelihood functions [11, 12]. + +The number of DAGs grows super-exponentially in the number of nodes [38], hence exact posterior inference is still exponentially complex, making it effectively intractable for large networks [47]. Approximate inference for $p(\Psi \mid D)$ is possible if one can obtain samples $\mathcal{G}_1, \ldots, \mathcal{G}_M$ from the posterior distribution over graphs $p(\mathcal{G} \mid D)$ . + +Markov Chain Monte Carlo (MCMC) methods have successfully tackled the problem of posterior inference in the large and complex space of DAGs. 
The rationale consists of defining a Markov chain whose stationary distribution is the posterior distribution of interest $p(\mathcal{G} \mid D)$ . Madigan et al. [30] suggested a Metropolis-Hastings sampler: at each step, the algorithm proposes a new DAG $\mathcal{G}'$ and it accepts or rejects it according to a probability $\alpha$ . The value of $\alpha$ depends on the relative posterior probability $p(\mathcal{G}' \mid D)$ of the proposal graph compared to that of the current DAG: + +$$ +\alpha = \min \left\{1, \frac {p \left(\mathcal {G} ^ {\prime} \mid D\right) Q \left(\mathcal {G} , \mathcal {G} ^ {\prime}\right)}{p \left(\mathcal {G} \mid D\right) Q \left(\mathcal {G} ^ {\prime} , \mathcal {G}\right)} \right\} \tag {4} +$$ + +where $Q(\cdot, \cdot)$ are the transition probabilities from one DAG to another. In its original formulation, the algorithm proposes a new graph by adding or deleting one edge from the current DAG, with $Q$ uniformly distributed in the set of neighbouring DAGs (including the current one). + +Several refinements to the original method have been made over the years, providing for example more efficient proposal distributions [14, 20, 44]. The order MCMC approach by Friedman and Koller [8] introduced an important variation by sampling in the space of node orderings, which is smaller and smoother than the space of DAGs. Because of these attractive properties of the sample space, the order MCMC algorithm achieves superior mixing and convergence results compared to regular structure MCMC. Sampling orders however introduces a bias in the resulting posterior since node orders do not induce a uniform coverage in the DAG space [7]. The partition MCMC algorithm [25] corrects this bias in order MCMC by sampling in the larger space of ordered partitions of nodes to achieve unbiased samples from the posterior. 
Current state-of-the-art methods rely on conditional independence tests to obtain a first coarse estimate of the structure to use as a starting point for a score-based method [48]. More sophisticated approaches involve iteratively expanding the search space to correct for errors in its initial estimate [28]. To achieve efficient posterior inference [49, 28] in the selected search space it helps to precompute all combinations of scores potentially needed in the chain so it can run at a very low computational cost.

Key to enabling efficient DAG sampling is decomposability, a property ensuring that we can express the posterior probability of a DAG $\mathcal{G}$ as a product of local scores that depend only on each variable and its parents in $\mathcal{G}$ :

$$
p (\mathcal {G} \mid D) \propto \prod_ {i = 1} ^ {n} S \left(X _ {i}, \mathrm {P a} _ {X _ {i}} ^ {\mathcal {G}} \mid D\right). \tag {5}
$$

Decomposability guarantees that we need to recompute only those scores whose parent sets have changed, or we can precompute all needed combinations of scores efficiently [28].

# 2.2 Gaussian Processes

Gaussian processes are a flexible tool used in machine learning for regression and classification tasks. Formally, a Gaussian Process (GP) is a distribution over functions such that every finite collection of its function values $\{f(x_1), f(x_2), \ldots, f(x_k)\}$ has a multivariate Gaussian distribution [36]. A GP is therefore fully specified by its mean function $m(x) = \mathbb{E}[f(x)]$ and covariance function $k(x, x') = \mathrm{Cov}[f(x), f(x')]$ . Due to their ability to model a large range of functional behaviours, GPs find common use as priors over regression functions $f(X) = \mathbb{E}(Y \mid X)$ .
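The finite-dimensional definition above can be illustrated directly: a GP sample path on a grid is just a draw from the implied multivariate Gaussian. The squared-exponential kernel, grid, and jitter term below are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(x, lengthscale=0.5):
    # Squared-exponential covariance k(x, x') evaluated on a 1-D grid.
    return np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale**2)

x = np.linspace(0.0, 1.0, 20)
K = rbf(x) + 1e-9 * np.eye(len(x))    # small jitter for numerical stability
# One draw of (f(x_1), ..., f(x_20)) ~ N(0, K): a GP sample path on the grid.
f = rng.multivariate_normal(np.zeros(len(x)), K)
```

Shorter lengthscales produce rougher draws; the kernel hyperparameters are exactly the quantities treated as random in the paper's fully Bayesian model.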
A common GP regression model assumes independent Gaussian additive noise:

$$
Y = f (X) + \varepsilon , \quad f (X) \sim \operatorname {G P} \left(0, k \left(x, x ^ {\prime}\right)\right), \quad \varepsilon \sim \mathcal {N} \left(\mu , \sigma^ {2}\right). \tag {6}
$$

Notably, GP models admit a closed-form expression of the marginal likelihood, in this case, the likelihood of the $N$ observations $y$ marginalised over the prior distribution of $f$ :

$$
p (y) = (2 \pi) ^ {- \frac {N}{2}} | K + \sigma^ {2} I | ^ {- \frac {1}{2}} \exp \left(- \frac {1}{2} (y - \mu) ^ {\top} (K + \sigma^ {2} I) ^ {- 1} (y - \mu)\right) \tag {7}
$$

where $K$ is the $N\times N$ Gram matrix $K_{ij} = k(x_i,x_j)$ .

# 2.3 Gaussian Process Networks

Gaussian process networks (GPNs) refer to Bayesian networks whose conditional distributions are modeled via Gaussian process priors [9]. The structural equation model defining the distribution of each variable $X_{i}$ given its parents in a GPN is

$$
X _ {i} = f _ {i} \left(\mathrm {P a} _ {X _ {i}}\right) + \varepsilon_ {i} \tag {8}
$$

where $\varepsilon_{i}$ has a normal distribution $\mathcal{N}(\mu, \sigma^2)$ independent of the data and a Gaussian process prior is placed on the function $f_{i}$ . Thanks to the nonparametric nature of GPs, the model in (8) can capture a wide range of functional dependencies between variables while maintaining the closed-form expression (7) for the marginal likelihood.
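The closed-form marginal likelihood (7) retained by the GPN model is evaluated with standard linear algebra; note that the determinant enters as $|K + \sigma^2 I|^{-1/2}$, so plausible data raise the likelihood through the quadratic form while overly flexible covariances pay a determinant penalty. A minimal log-space sketch, with an illustrative squared-exponential kernel and hyperparameter values:

```python
import numpy as np

def rbf_kernel(x, lengthscale=1.0, variance=1.0):
    # Squared-exponential Gram matrix K_ij = k(x_i, x_j) (illustrative choice).
    d2 = (x[:, None] - x[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_log_marginal(y, x, mu=0.0, sigma=0.1):
    # log of Eq. (7): -N/2 log(2 pi) - 1/2 log|C| - 1/2 (y-mu)^T C^{-1} (y-mu),
    # with C = K + sigma^2 I.
    n = len(y)
    C = rbf_kernel(x) + sigma**2 * np.eye(n)
    r = y - mu
    _, logdet = np.linalg.slogdet(C)
    return float(-0.5 * n * np.log(2 * np.pi) - 0.5 * logdet
                 - 0.5 * r @ np.linalg.solve(C, r))

x = np.linspace(0.0, 1.0, 5)
ll_smooth = gp_log_marginal(np.sin(x), x)                       # smooth data
ll_rough = gp_log_marginal(np.array([3.0, -3.0, 3.0, -3.0, 3.0]), x)  # jagged data
```

Smooth data plausible under the kernel score much higher than a jagged sequence, which is the mechanism by which the GPN score distinguishes candidate parent sets.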
Since the marginal likelihood and its derivatives are available in closed form, the optimization can be performed efficiently via gradient ascent [36]. The GPN model enables structure learning of networks on continuous variables without the need to make strict parametric assumptions on the distributions of the variables. In scenarios even with low sample sizes, Friedman and Nachman [9] have shown that searching for the highest-scoring structure can accurately reconstruct the underlying DAG under different functional relationships. + +When estimating the score of a GPN, the common approach to learning the hyperparameters by simple maximization can however lead to problematic estimates, since many local optima may exist [4]. In addition, the resulting plug-in estimate of the marginal likelihood would not correctly reflect the uncertainty in the data of the full posterior. Bayesian model averaging provides a natural solution by integrating over the prior distribution of the hyperparameters to obtain a true marginal likelihood $p(D \mid \mathcal{G})$ , which we can then use to perform posterior inference on the structure. + +# 3 Bayesian Structure Inference for GPNs + +In this section, we describe a method to sample from the posterior distribution $p(\Psi \mid D)$ of a generic feature $\Psi$ of a GPN for continuous, non-linear data. To implement a fully Bayesian approach, we place priors over the hyperparameters $\theta$ of the kernel function and the Gaussian noise. + +$$ +X = f \left(\operatorname {P a} _ {X}\right) + \varepsilon , \quad \varepsilon \sim \mathcal {N} (\mu , \sigma^ {2}) +$$ + +$$ +f \sim \operatorname {G P} (0, k _ {\boldsymbol {\theta}} (., .)) \tag {9} +$$ + +$$ +\boldsymbol {\theta} \sim \pi (\boldsymbol {\theta}), \quad \mu , \sigma \sim \pi (\mu , \sigma). +$$ + +The priors ensure that the uncertainty in the functional relationships between each variable and its parents is fully accounted for. 
On the other hand, a maximum likelihood approach to estimating the hyperparameters could yield overly confident score functions and in turn misrepresent the posterior $p(\mathcal{G} \mid D)$ . Under the GPN model, the score remains decomposable, a necessary condition for its efficient evaluation. As in Section 2.1, making inference about network features $\Psi$ of interest is possible by sampling graphs from the posterior:

$$
p (\Psi \mid D) \approx \frac {1}{M} \sum_ {j = 1} ^ {M} p (\Psi \mid \mathcal {G} _ {j}), \quad \mathcal {G} _ {j} \sim p (\mathcal {G} _ {j} \mid D). \tag {10}
$$

Sampling graphs hinges on computing the scores of all variables given different combinations of their possible parent sets (see equation 5).

Let $\Theta = \{\mu, \sigma, \theta\}$ be the $d$ -dimensional set of hyperparameters for a given node $X$ and its parents $\mathrm{Pa}_X$ . Unless stated otherwise, throughout the rest of the text we assume a uniform prior over all structures $p(\mathcal{G}) \propto 1$ . The score function is then the likelihood (7) of the observations $x$ marginalized with respect to the hyperparameter priors:

$$
S (X, \mathrm {P a} _ {X}) = \int p (x \mid \mathrm {P a} _ {X}, \Theta) \pi (\Theta \mid \mathrm {P a} _ {X}) \mathrm {d} \Theta . \tag {11}
$$

If a variable $X$ has no parents then the Gram matrix of the kernel is zero and the score function reduces to a Gaussian marginal likelihood. Since the above score function is generally intractable, one option is to use Monte Carlo (MC) approaches such as bridge sampling [31] to approximate it.

Algorithm 1 GP network sampling scheme
Input Data $D$ of $n$ variables, feature of interest $\Psi$
Output Posterior probability of the feature $p(\Psi \mid D)$
1: for $j\in \{1,\dots ,M\}$ do
2: Sample DAG $\mathcal{G}_j$ according to its Laplace approximate posterior $q(\mathcal{G}_j\mid D)$ . $\triangleright$ Eq. (13, 5)
3: for $i\in \{1,\ldots ,n\}$ do
4: Compute $S(X_{i},\mathrm{Pa}_{X_{i}}^{\mathcal{G}_{j}})$ via MC estimation. $\triangleright$ Eq. (12)
5: Compute posterior $p(\mathcal{G}_j\mid D)$ . $\triangleright$ Eq. (5)
6: Compute posterior probability of $\Psi$ via importance sampling. $\triangleright$ Eq. (14)

Bridge sampling employs a Gaussian proposal distribution $g$ and a bridge function $h$ chosen to minimize the MSE of the resulting estimator. The bridge sampling estimator of (11) is then defined as

$$
S (X, \mathrm {P a} _ {X}) \approx \frac {\frac {1}{N _ {1}} \sum_ {i = 1} ^ {N _ {1}} p \left(x \mid \mathrm {P a} _ {X} , \Theta_ {i}\right) \pi \left(\Theta_ {i} \mid \mathrm {P a} _ {X}\right) h \left(\Theta_ {i} \mid \mathrm {P a} _ {X}\right)}{\frac {1}{N _ {2}} \sum_ {j = 1} ^ {N _ {2}} g \left(\Theta_ {j} ^ {*} \mid \mathrm {P a} _ {X}\right) h \left(\Theta_ {j} ^ {*} \mid \mathrm {P a} _ {X}\right)}, \tag {12}
$$

$$
\Theta_ {i} \sim g (\Theta \mid \mathrm {P a} _ {X}), \quad \Theta_ {j} ^ {*} \sim p (\Theta \mid x, \mathrm {P a} _ {X}), \quad i = 1, \dots , N _ {1}, \quad j = 1, \dots , N _ {2}.
$$

The estimator is also a function of samples from the posterior of the hyperparameters $p(\Theta \mid x, \mathrm{Pa}_X)$ , which can easily be obtained via MCMC sampling. One can show that other approaches such as importance sampling or harmonic mean are special cases of bridge sampling [18]. MC methods based on bridge sampling provide consistent estimators for the marginal likelihood but may be biased for finite samples [50]. Nonetheless, such methods can become computationally expensive in high dimensions (i.e. for large parent sets).
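Importance sampling is the special case of (12) with $h = 1/g$: the denominator becomes one and the numerator averages $p(x \mid \Theta)\pi(\Theta)/g(\Theta)$ over proposal draws. The sketch below applies this to a toy one-dimensional normal-normal model (not the paper's GP setting) with a moment-matched Gaussian proposal, and compares the estimate to the exact marginal likelihood.

```python
import math
import random

random.seed(1)

def normal_pdf(z, mean, sd):
    return math.exp(-((z - mean) ** 2) / (2.0 * sd * sd)) / (sd * math.sqrt(2.0 * math.pi))

# Toy conjugate model: theta ~ N(0, 1), x | theta ~ N(theta, 1), one observation.
x_obs = 0.7
prior = lambda t: normal_pdf(t, 0.0, 1.0)
lik = lambda t: normal_pdf(x_obs, t, 1.0)

# Moment-matched Gaussian proposal g; for this conjugate toy model it coincides
# with the exact posterior N(x/2, 1/2), so the weights are constant.
g_mean, g_sd = x_obs / 2.0, math.sqrt(0.5)
draws = [random.gauss(g_mean, g_sd) for _ in range(2000)]
estimate = sum(lik(t) * prior(t) / normal_pdf(t, g_mean, g_sd) for t in draws) / len(draws)

exact = normal_pdf(x_obs, 0.0, math.sqrt(2.0))   # marginal of x is N(0, 2)
```

Because the proposal here happens to equal the exact posterior, the weights are constant and the estimate matches the exact marginal to floating-point precision; with an imperfect proposal the estimator is only consistent, and its variance grows with the mismatch, which is what motivates the bridge function $h$ in (12).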
+ +Since sampling DAGs via MCMC requires computing a large number of scores of potential parent sets, which may not all be represented in the final sample, we avoid computing these expendable scores by first running the MCMC algorithm using a Laplace approximation of the score (11) around the MAP value of $\Theta$ : + +$$ +S _ {\mathrm {L}} (X, \mathrm {P a} _ {X}) = p (x | \mathrm {P a} _ {X}, \tilde {\Theta}) \pi (\tilde {\Theta} | \mathrm {P a} _ {X}) \frac {(2 \pi) ^ {d / 2}}{| H | ^ {1 / 2}} \tag {13} +$$ + +with $\tilde{\Theta} = \operatorname*{argmax}_{\Theta}p(x|\mathrm{Pa}_X,\Theta)\pi (\Theta |\mathrm{Pa}_X)$ , and $H_{ij} = -\frac{\partial^2p(x|\mathrm{Pa}_X,\Theta)\pi(\Theta|\mathrm{Pa}_X)}{\partial\Theta_i\partial\Theta_j}\bigg|_{\Theta = \tilde{\Theta}}$ + +We denote the resulting posterior probability of a DAG $\mathcal{G}$ from this Laplace approximated score as $q(\mathcal{G} \mid D)$ to distinguish it from the true posterior $p(\mathcal{G} \mid D)$ . + +The Laplace approximate score provides an approximation of the posterior at a lower computational cost, speeding up considerably the running time of the MCMC algorithm used to sample graphs. After sampling $M$ graphs in the first step with the Laplace approximate score, we can make inference with respect to the true posterior by re-computing the scores and performing importance sampling. To estimate the posterior probability of a feature of interest $\Psi$ via importance sampling we evaluate + +$$ +p (\Psi \mid D) \approx \frac {\sum_ {j = 1} ^ {M} p (\Psi \mid \mathcal {G} _ {j}) w _ {j}}{\sum_ {j = 1} ^ {M} w _ {j}}, w _ {j} = \frac {p (\mathcal {G} _ {j} \mid D)}{q (\mathcal {G} _ {j} \mid D)} \tag {14} +$$ + +where $p(\mathcal{G}_j \mid D)$ and $q(\mathcal{G}_j \mid D)$ are the posterior probabilities of DAG $\mathcal{G}_j$ computed respectively with the bridge sampling MC estimate (12) and the Laplace approximation (13) for the score. 
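The re-weighting step (14) can be sketched on a toy discrete "graph space": graphs are drawn under an unnormalised surrogate score $q$ and the feature probability is corrected with self-normalised weights $w_j = p/q$. All scores and the 0/1 feature below are made-up illustrative values, not outputs of the paper's method.

```python
import random

random.seed(2)
graphs = ["G1", "G2", "G3"]
p_unnorm = {"G1": 4.0, "G2": 1.0, "G3": 1.0}   # stand-in for the MC scores (unnormalised)
q_unnorm = {"G1": 2.0, "G2": 2.0, "G3": 2.0}   # stand-in for the Laplace scores (unnormalised)
feature = {"G1": 1.0, "G2": 0.0, "G3": 1.0}    # p(Psi | G), e.g. presence of an edge

# Draw M graphs under q, then correct with self-normalised weights w = p/q;
# the unknown normalising constants of p and q cancel in the ratio.
sample = random.choices(graphs, weights=[q_unnorm[g] for g in graphs], k=20000)
num = sum(feature[g] * p_unnorm[g] / q_unnorm[g] for g in sample)
den = sum(p_unnorm[g] / q_unnorm[g] for g in sample)
estimate = num / den

# Ground truth under the normalised target: (4 + 1) / 6.
exact = sum(feature[g] * p_unnorm[g] for g in graphs) / sum(p_unnorm.values())
```

With enough draws the self-normalised estimate converges to the target feature probability even though neither score was ever normalised.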
The two posteriors do not need to be normalized for valid posterior inference on $\Psi$ . This is because the normalizing constants in the importance sampling weights (14) cancel out. The procedure is summarized as pseudo-code in Algorithm 1.

Re-computing the score functions in a second step and implementing importance sampling is computationally advantageous compared to running the MCMC algorithm directly with the MC estimates of the scores. The advantage is simply due to the number of unique scores in the final chain being much lower than those evaluated or needed during the chain itself (see figure A.5 in the supplementary material). Regardless of the MCMC algorithm used, we expect a substantial improvement in run-time compared to using the MC scores directly.

# 3.1 Implementation Details

Bayesian inference and optimization of the hyperparameters were performed via the Stan interface RStan [43]. The library offers a highly efficient C++ implementation of the No U-turn sampler [22], providing state-of-the-art posterior inference on the hyperparameters. We performed MC estimation of the marginal likelihood via the bridgesampling package [19], which can easily be combined with fitted Stan models.

In the implementation of Algorithm 1 we use order or partition MCMC to generate the network samples $q(\mathcal{G}_j \mid D)$ ; the procedure can however accommodate a variety of sampling methods, as long as they result in samples from the posterior. Given its good performance in benchmarking studies [37], we use the recent BiDAG [45] hybrid implementation for MCMC inference on the graph. The hybrid sampler requires an initial search space which is then improved upon to correct for possible estimation errors [28]. As initial search space we take the output of the dual PC algorithm [13].
For the bridge sampling estimator [12] of the marginal likelihood we used $N_1 = N_2 = 300$ particles from the proposal and posterior distribution over the hyperparameters. The proposal function $g$ was set to a normal distribution, with its first two moments chosen to match those of the posterior distribution. R code to implement Algorithm 1 and reproduce the results in Sections 4 and 5 is available at https://github.com/enricogiudice/LearningGPNs + +# 3.2 Score Equivalence + +Without any assumptions on the functional form of structural models, observational data cannot generally distinguish between any two DAGs within the same Markov equivalence class [34], since the joint distribution always factorizes according to either DAG. Scoring functions like the BGe that assign the same score to all DAGs in the same class satisfy score equivalence. Imposing parametric models on the local probability distributions of a Bayesian network may however break score equivalence. Specifically for GPNs, an alternative factorization may not admit a representation that follows the structural equation model in (8). Consequently, DAGs belonging to the same Markov equivalence class may display different scores and become identifiable beyond their equivalence class. For this reason, the GP score generally differentiates models according to the direction of any of their edges. + +Importantly, the identifiability of otherwise score equivalent DAGs is a direct consequence of the assumptions underpinning the GPN model. Because the functions $f_{i}$ are not generally invertible, the likelihood in (7) will assign higher values to functional dependencies that admit the GPN structural equation model. Further, Peters et al. [35] demonstrated that the asymmetry distinguishing between Markov equivalent factorizations holds beyond the case of non-invertible functions. 
Indeed, with the exception of the linear case, all structural models that take the form in (8) with additive Gaussian noise violate score equivalence and may identify a DAG beyond its Markov equivalence class. GPNs may encompass cases where equivalent DAGs are indistinguishable, such as when the functions $f_{i}$ are linear. In this case, the computed scores of equivalent DAGs will be similar, as long as the GP prior allows learning sufficiently linear relations. The numerical experiments in supplementary Section A.1 show that the GP-based score function displays near score equivalent behaviour when the data come from a joint Gaussian distribution, coherently with theoretical considerations [35]. + +# 4 Experimental Results + +We evaluate the Bayesian GP network inference scheme on data generated from known random networks with $n = 10$ nodes. The DAG structures are constructed from an Erdős-Rényi model, where each node has an independent probability of 0.2 to be connected with another with a higher topological ordering. For every node $X_{i}$ in each randomly generated network, we then sample 100 observations as a non-linear function of its parents. The nonlinear data are generated by transforming the parents' instances using a weighted combination of six Fourier components + +$$ +X _ {i} = \sum_ {j \mid X _ {j} \in \operatorname {P a} _ {X _ {i}}} \beta_ {i, j} \left\{w _ {i, j, 0} X _ {j} + \sum_ {k = 1} ^ {6} \left[ v _ {i, j, k} \sin (k X _ {j}) + w _ {i, j, k} \cos (k X _ {j}) \right] \right\} + \epsilon_ {i}. \tag {15} +$$ + +The weights $v_{i,j,k}$ and $w_{i,j,k}$ are sampled from a Dirichlet distribution with concentration parameters equal to seven positive exponentially decreasing values $\gamma_k = \frac{e^{-k / \lambda}}{\sum_k e^{-k / \lambda}}$ , for $k = \{0, \dots, 6\}$ . 
The parameter $\lambda$ controls the rate of exponential decay, with values of $\lambda$ close to zero providing mostly linear effects between variables and higher values resulting in increasingly non-linear relationships.

The edge coefficients $\beta_{i,j}$ determine the strength of the dependencies between $X_{i}$ and its parents and are sampled from a uniform distribution on $(-2, -\frac{1}{2}) \cup (\frac{1}{2}, 2)$ ; the noise variable $\epsilon_{i}$ has a standard normal distribution. Instances of root nodes (nodes without parents) are also sampled from a standard normal. The linear-Gaussian case corresponds asymptotically to $\lambda = 0$ ; in this case, we set the weights $v_{i,j,k}$ and $w_{i,j,k}$ to zero except for $w_{i,j,0}$ .

We compare all structure learning algorithms in terms of structural Hamming distance (SHD) [48], which compares estimated graphs $\mathcal{G}$ with the ground truth graph $\mathcal{G}^*$ . Following Lorch et al. [29], we compute the average SHD of the samples weighted by the posterior:

$$
\mathbb {E} \text{-} \operatorname {SHD} (p, \mathcal {G} ^ {*}) := \sum_ {\mathcal {G}} \operatorname {SHD} (\mathcal {G}, \mathcal {G} ^ {*})\, p (\mathcal {G} | D). \tag {16}
$$

The $\mathbb{E}$-SHD summarizes how close the estimated posterior is to the true DAG; lower values are therefore desirable. It is however not a direct measure of the similarity of the estimated posterior to the true posterior.

# 4.1 Choice of Priors

Different choices of kernels for the GP prior result in different behaviours of the conditional expectation $f$ in equation (9) of a variable $X$ given its parents $\mathrm{Pa}_X$ . A simple model employs an additive kernel

$$
k (.,.) = \sum_ {i = 1} ^ {| \mathrm {P a} _ {X} |} k _ {\theta_ {i}} (.,.) \tag {17}
$$

which corresponds to modeling each variable $X$ as a sum of the individual contributions of each of its parents.
The additive model serves to reduce the computational burden of calculating the scores by keeping the number of parameters as small as possible while preserving non-linearity in the model. The approach can however easily accommodate more complex relationships at a higher computational cost, such as an additive kernel with all first-order interactions [5]: + +$$ +k (.,.) = \tau_ {1} \sum_ {i = 1} ^ {| \mathrm {P a} _ {X} |} k _ {\theta_ {i}} (.,.) + \tau_ {2} \sum_ {i = 1} ^ {| \mathrm {P a} _ {X} |} \sum_ {j = i + 1} ^ {| \mathrm {P a} _ {X} |} k _ {\theta_ {i}} (.,.) k _ {\theta_ {j}} (.,.). \tag {18} +$$ + +For each parent $Z \equiv (\mathrm{Pa}_X)_i$ we used a squared exponential kernel function $k_{\theta_i}(z,z') = \exp \left(-\frac{\|z - z'\|^2}{2\theta_i^2}\right)$ , with each $\theta_i$ measuring the degree of non-linearity along the $Z$ -th dimension. The kernel function has unit variance since the data are always normalized in the structure learning process. We assign the following independent prior distributions to the hyperparameter set $\Theta = \{\mu, \sigma, \theta_i : i \in 1,\dots,|\mathrm{Pa}_X|\}$ of the GPN model (9): + +$$ +\theta_ {i} \sim \operatorname {I G} (2, 2), \quad \mu \sim \mathcal {N} (0, 1), \quad \sigma \sim \operatorname {I G} (1, 1). \tag {19} +$$ + +The inverse-gamma priors for each lengthscale $\theta_{i}$ and the noise variance suppress values near zero, which in either case would result in overfitting and degenerate behaviour of samples of $f$ [15]. The additional parameters $\tau_{1}$ and $\tau_{2}$ of the kernel function (18) were assigned independent IG(1, 1) priors. + +# 4.2 Results + +We compare two different versions of the GP-based structure learning scheme: the first one employs the order MCMC algorithm for sampling DAGs, and the second uses partition MCMC. We then compare both versions of the GPN sampling scheme with order and partition MCMC with the BGe score [28]. 
To reduce the computational burden of the simulations, we employ the simpler additive kernel (17) for the GP score.

![](images/9f1a90e3bf57000804af6912fe023c1dc678b2f3523cc11eeb556744bcdabdde.jpg)
Figure 1: Distribution of $\mathbb{E}$-SHD values for all the different algorithms (GP, partition; GP, order; BGe, partition; BGe, order; kPC-HSIC; kPC-DC; DiBS+). $\lambda = 0$ corresponds to linear-Gaussian data while higher values increase the degree of non-linearity of the relations among variables.

As an additional benchmark, we include the DiBS+ algorithm [29], which models the adjacency matrix probabilistically, using particle variational inference to approximate the posterior over structures. In the simulations, we parameterize DiBS+ by a neural network with one hidden layer of 5 nodes. We also consider methods which rely on the constraint-based PC algorithm [42], which learns a network by testing for conditional independence among the variables. Since these are not Bayesian methods, we bootstrap the data to obtain different graph estimates via the PC algorithm [10]. To account for the non-linear dependencies between the variables we apply the kernel PC (kPC) algorithm [17] to the resampled data using two different independence tests: HSIC [16] and distance correlation [46].
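Before turning to the results, the synthetic data-generating process in equation (15) can be sketched as follows. This is a minimal toy version for a single node given its parents, with $\lambda > 0$ fixed; function and variable names are ours, and the paper draws whole Erdős-Rényi DAGs rather than single nodes.

```python
# Toy sketch of eq. (15): each node is a Dirichlet-weighted combination of
# six Fourier components of each parent, plus standard normal noise.
import numpy as np

rng = np.random.default_rng(1)

def fourier_response(parent, lam=1.0):
    """Nonlinear effect of one parent; lam controls the decay of gamma_k."""
    k = np.arange(7)                     # components k = 0, ..., 6
    gamma = np.exp(-k / lam)
    gamma /= gamma.sum()                 # concentration parameters gamma_k
    w = rng.dirichlet(gamma)             # weights w_{i,j,k}
    v = rng.dirichlet(gamma)             # weights v_{i,j,k}
    out = w[0] * parent
    for kk in range(1, 7):
        out += v[kk] * np.sin(kk * parent) + w[kk] * np.cos(kk * parent)
    return out

def sample_node(parent_data, n=100):
    """parent_data: list of length-n arrays, one per parent of X_i."""
    x = rng.standard_normal(n)           # noise eps_i (and root-node values)
    for parent in parent_data:
        beta = rng.uniform(0.5, 2.0) * rng.choice([-1, 1])  # edge coefficient
        x += beta * fourier_response(parent)
    return x

z = sample_node([])      # a root node
x = sample_node([z])     # a child of z
```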
Figure 1 shows, for three values of $\lambda$ , the distribution of $\mathbb{E}$-SHD values for the different algorithms over the 100 generated structures. For non-linear data ( $\lambda = 1$ ), both versions of the GP samplers outperform existing methods: the median $\mathbb{E}$-SHD for either of our algorithms is equal to or lower than the bottom quartile of the BGe samplers. For linear ( $\lambda = 0$ ) and slightly non-linear data ( $\lambda = 0.5$ ), the GPN sampler performs competitively with the state-of-the-art BGe score-based sampling algorithms. All $\mathbb{E}$-SHD values are computed with respect to DAGs; the results comparing completed partially directed acyclic graphs (CPDAGs) are available in figure A.3 in the supplementary material.

Figure 2 displays the run-times for the same methods as in figure 1. The run-times are stable across the different levels of non-linearity parameterized by $\lambda$ , with the kPC algorithms being the most computationally expensive, followed by the GP score-based approach, DiBS+ and finally the BGe score-based MCMC methods, which are the fastest.

![](images/562d74d0897b9c53a8bc929f216e9bb13b68b870c5c259dcc78068b3d2f9b913.jpg)
Figure 2: Distribution of run-times for all the different algorithms (GP, partition; GP, order; BGe, partition; BGe, order; kPC-HSIC; kPC-DC; DiBS+). $\lambda = 0$ corresponds to linear-Gaussian data while higher values increase the degree of non-linearity of the relations among variables.
![](images/fe2b65ae77ca5004eff1d2e118a1152dc2fe0624116bdc5910ed346b5b20e9d1.jpg)
Figure 3: Left: reverse K-L divergence between the true posterior and the BGe posterior (green), the Laplace approximate posterior (blue) and the posterior obtained via importance sampling (red) as a function of the number of sampled DAGs. Right: the true posterior (gray) together with the BGe posterior (green), the Laplace approximate posterior (blue) and the posterior obtained via importance sampling (red). The majority of DAGs have a very low true posterior probability and are therefore never sampled by the MCMC algorithms (see inset).

Besides estimating the structure accurately, our approach can quantify the uncertainty in its estimates via the sampled graphs from the posterior distribution over GPNs. To evaluate the accuracy of the sampling approach in estimating the general posterior over structures, we compare its estimated posterior distribution over DAGs with the "true" posterior, obtained by enumerating every possible structure and computing its score directly with equation (12). Due to the exceedingly large number of DAGs, this approach is only feasible with small structures. The left panel of figure 3 shows the reverse Kullback-Leibler (K-L) divergence between the estimated and true posteriors for a given network with $n = 4$ nodes, as a function of the number of samples $M$ in the partition MCMC algorithm.
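The reverse K-L divergence used here can be sketched as follows, with made-up posterior values (ours). The sum runs only over DAGs with non-zero estimated mass, which keeps the divergence finite even though unsampled DAGs receive probability zero under the estimate.

```python
# Sketch of the reverse K-L divergence between a sampled approximation of the
# posterior and the enumerated "true" posterior over a common list of DAGs.
import math

def reverse_kl(estimated, true):
    """KL(estimated || true); terms with zero estimated mass contribute 0."""
    return sum(q * math.log(q / p)
               for q, p in zip(estimated, true) if q > 0.0)

true_post = [0.5, 0.3, 0.15, 0.05]
approx = [0.6, 0.4, 0.0, 0.0]   # two DAGs were never sampled
d = reverse_kl(approx, true_post)
```

The direction matters: the forward divergence KL(true || estimated) would be infinite whenever a DAG with positive true probability is missing from the sample.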
The estimated ("GP, weighted") posterior probability $p(\mathcal{G} \mid D)$ for a generic DAG $\mathcal{G}$ is obtained by setting $p(\Psi \mid \mathcal{G}_j) = \mathbb{1}_{(\mathcal{G}_j = \mathcal{G})}$ in equation (14). Reverse K-L divergence was chosen as a metric since the algorithms assign a probability of zero to DAGs that were not sampled. + +The plot includes the divergence between the Laplace approximate posterior $q(\mathcal{G} \mid D)$ in equation (14) and the true posterior, as well as between the posterior obtained with the BGe score and the true posterior. The divergence of the Laplace approximation is reduced by roughly one order of magnitude by weighting the samples via importance sampling. For reference, the DiBS+ algorithm yields a reverse K-L divergence of 27.8 with 1000 samples, i.e. two orders of magnitude higher than our approach, despite also being allocated longer run-times (see figure A.4 in the supplementary material). Sampling a higher number of graphs with DiBS+ quickly becomes infeasible since its run-time scales quadratically with the number of samples [29], while MCMC sampling scales linearly with $M$ . The right panel of figure 3 shows the posterior distributions over the 543 possible DAGs, for $M = 10^4$ , obtained after complete enumeration and by sampling with different methods. The plots confirm that as the number of sampled DAGs increases, our approach can accurately reflect the full posterior uncertainty. The results also underline the importance of sampling from the hyperparameters' priors to obtain an accurate representation of the posterior, as the Laplace approximation of the marginal likelihood results in a highly biased approximation even when sampling a large number of DAGs. + +# 5 Application on Protein Signaling Networks + +We also applied the GP score-based structure inference to the flow cytometry dataset of Sachs et al. [39] to learn protein signaling pathways. 
The authors provide an accepted consensus network, which we used as reference. We considered the first experimental condition, consisting of 853 single-cell observations of $n = 11$ phosphoproteins and phospholipids in human T cells. The first three columns of table 1 show the performance of all the different algorithms in reconstructing the consensus network. The results include the GP model using the additive kernel (18) with all first-order interactions, denoted as $\mathrm{GP}^2$ . We measure the different algorithms in terms of the $\mathbb{E}$-SHD,

Table 1: Performance of the different algorithms in reconstructing the consensus network from flow cytometry data. The last two columns show the posterior probabilities of the two features experimentally validated by Sachs et al. [39], where the edge on the left should be present and the one on the right absent; up/down arrows indicate higher/lower values are better.
| | $\mathbb{E}$-SHD ↓ | $\mathbb{E}$-TP ↑ | $\mathbb{E}$-FP ↓ | Erk → Akt ↑ | Erk ↛ PKA ↑ |
| --- | --- | --- | --- | --- | --- |
| GP, partition | 14.5 | 7.2 | 4.7 | 1 | 0.75 |
| GP, order | 14.6 | 6.8 | 4.4 | 1 | 0.42 |
| $\mathrm{GP}^2$, partition | 16.7 | 6.6 | 6.3 | 1 | 0 |
| $\mathrm{GP}^2$, order | 16.7 | 6.7 | 6.4 | 1 | 0.34 |
| BGe, partition | 15.9 | 4.3 | 3.2 | 0 | 1 |
| BGe, order | 15.3 | 4.4 | 2.7 | 0.17 | 0.98 |
| kPC-HSIC | 17 | 5.5 | 5.5 | 0.69 | 0.26 |
| kPC-DC | 16.9 | 6 | 5.8 | 0.72 | 0.28 |
| DiBS+ | 16.1 | 5 | 4.2 | 0.45 | 0.7 |
as well as the $\mathbb{E}$-TP and $\mathbb{E}$-FP, the absolute numbers of TP and FP edges in the DAGs weighted by the posterior, obtained by replacing the SHD in equation (16) by TP or FP, respectively.

One of the benefits of Bayesian structure inference is the possibility of deriving posterior probabilities of specific edges of interest in the network. In their work, Sachs et al. [39] experimentally tested two relationships between the proteins by intervening directly on the cells. By means of small interfering RNA inhibition, they concluded that the inhibition of Erk has a direct effect on Akt, while there was a weaker and non-significant effect on PKA. We therefore expect an edge $\mathrm{Erk} \rightarrow \mathrm{Akt}$ but no directed edge (or path) from Erk to PKA in the learned networks. The last two columns of table 1 display the posterior probabilities of these two features according to the different algorithms. GP- and BGe-based methods perform best, with the GP learner correctly assigning the highest probability to the edge $\mathrm{Erk} \rightarrow \mathrm{Akt}$ , while the lack of the edge $\mathrm{Erk}$ to PKA is predicted with a lower probability. The sparser BGe score-based methods assign a lower probability to the first edge, but correctly predict the absence of the second edge.

# 6 Conclusions

In this work, we have proposed a procedure that efficiently performs Bayesian structural inference on Gaussian process networks and estimates the posterior of generic features of interest. Building on the original GPN idea by Friedman and Nachman [9], we now embed it in a fully Bayesian framework. In particular, our approach involves placing priors on the hyperparameters of the score function and sampling DAGs via MCMC. Although more computationally expensive than a greedy search over the DAG space for a high-scoring network, the Bayesian approach allows one to accurately quantify the uncertainty in features of the network.
This is made feasible by minimizing the number of scores to compute in an MCMC chain via importance sampling, as well as harnessing the advances in MC and MCMC methods [19, 28, 43, 45] that have been made since the introduction of GP networks.

The flexible nature of GPs allows the corresponding BNs to model a large range of functional dependencies between continuous variables. In the case of linear dependencies, our method remains competitive with state-of-the-art methods and exhibits desirable properties such as (approximate) score equivalence. The versatility of the model is particularly useful in domains where the nature of the relations among variables is unknown and one wants to avoid strict parametric assumptions on the distributions of the variables. Based on the promising simulation results and the convenient properties of the proposed method, we believe that it holds potential for making accurate inference on the underlying structure of BNs in complex domains.

# Acknowledgements

The authors gratefully acknowledge partial funding support for this work from the two Cantons of Basel through project grant PMB-02-18 granted by the ETH Zurich.

# References

[1] Federico Castelletti and Guido Consonni. Discovering causal structures in Bayesian Gaussian directed acyclic graph models. Journal of the Royal Statistical Society: Series A (Statistics in Society), 183:1727-1745, 2020.
[2] Rónan Daly, Qiang Shen, and Stuart Aitken. Learning Bayesian networks: approaches and issues. The Knowledge Engineering Review, 26:99-157, 2011.
[3] Luis M. de Campos. A scoring function for learning Bayesian networks based on mutual information and conditional independence tests. Journal of Machine Learning Research, 7:2149-2187, 2006.
[4] David Duvenaud, James Lloyd, Roger Grosse, Joshua Tenenbaum, and Zoubin Ghahramani. Structure discovery in nonparametric regression through compositional kernel search.
In Proceedings of the 30th International Conference on Machine Learning, volume 28, pages 1166-1174, 2013.
[5] David K. Duvenaud, Hannes Nickisch, and Carl Rasmussen. Additive Gaussian processes. In Advances in Neural Information Processing Systems, volume 24, pages 226-234, 2011.
[6] Gal Elidan. Lightning-speed structure learning of nonlinear continuous networks. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, volume 22, pages 355-363, 2012.
[7] Byron Ellis and Wing Hung Wong. Learning causal Bayesian network structures from experimental data. Journal of the American Statistical Association, 103:778-789, 2008.
[8] Nir Friedman and Daphne Koller. Being Bayesian about network structure: a Bayesian approach to structure discovery in Bayesian networks. Machine Learning, 50:95-125, 2003.
[9] Nir Friedman and Iftach Nachman. Gaussian process networks. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 211-219, 2000.
[10] Nir Friedman, Moises Goldszmidt, and Abraham Wyner. Data analysis with Bayesian networks: a bootstrap approach. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 196-205, 2013.
[11] Dan Geiger and David Heckerman. Learning Gaussian networks. In Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pages 235-243, 1994.
[12] Dan Geiger and David Heckerman. Parameter priors for directed acyclic graphical models and the characterization of several probability distributions. The Annals of Statistics, 30:1412-1440, 2002.
[13] Enrico Giudice, Jack Kuipers, and Giusi Moffa. The dual PC algorithm and the role of Gaussianity for structure learning of Bayesian networks. International Journal of Approximate Reasoning, 161:108975, 2023.
[14] Paolo Giudici and Robert Castelo. Improving Markov chain Monte Carlo model search for data mining. Machine Learning, 50:127-158, 2003.
[15] Robert B. Gramacy and Herbert K. H. Lee. Bayesian treed Gaussian process models with an application to computer modeling. Journal of the American Statistical Association, 103:1119-1130, 2008.
[16] Arthur Gretton, Kenji Fukumizu, Choon Teo, Le Song, Bernhard Schölkopf, and Alex Smola. A kernel statistical test of independence. In Advances in Neural Information Processing Systems, volume 20, pages 585-592, 2007.
[17] Arthur Gretton, Peter Spirtes, and Robert Tillman. Nonlinear directed acyclic structure learning with weakly additive noise models. In Advances in Neural Information Processing Systems, volume 22, pages 1847-1855, 2009.
[18] Quentin F. Gronau, Alexandra Sarafoglou, Dora Matzke, Alexander Ly, Udo Boehm, Maarten Marsman, David S. Leslie, Jonathan J. Forster, Eric-Jan Wagenmakers, and Helen Steingroever. A tutorial on bridge sampling. Journal of Mathematical Psychology, 81:80-97, 2017.
[19] Quentin F. Gronau, Henrik Singmann, and Eric-Jan Wagenmakers. bridgesampling: An R package for estimating normalizing constants. Journal of Statistical Software, 92:1-29, 2020.
[20] Marco Grzegorczyk and Dirk Husmeier. Improving the structure MCMC sampler for Bayesian networks by introducing a new edge reversal move. Machine Learning, 71:265-305, 2008.
[21] David Heckerman and Dan Geiger. Learning Bayesian networks: a unification for discrete and Gaussian domains. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 274-284, 1995.
[22] Matthew D. Homan and Andrew Gelman. The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15:1593-1623, 2014.
[23] Neville Kitson, Anthony Constantinou, Guo Zhigao, Yang Liu, and Kiattikun Chobtham. A survey of Bayesian network structure learning. Artificial Intelligence Review, 56:1-94, 2023.
[24] Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques - adaptive computation and machine learning. The MIT Press, 2009.
[25] Jack Kuipers and Giusi Moffa. Partition MCMC for inference on acyclic digraphs. Journal of the American Statistical Association, 112:282-299, 2017.
[26] Jack Kuipers, Giusi Moffa, and David Heckerman. Addendum on the scoring of Gaussian directed acyclic graphical models. The Annals of Statistics, 42:1689-1691, 2014.
[27] Jack Kuipers, Thomas Thurnherr, Giusi Moffa, Polina Suter, Jonas Behr, Ryan Goosen, Gerhard Christofori, and Niko Beerenwinkel. Mutational interactions define novel cancer subgroups. Nature Communications, 9:4353, 2018.
[28] Jack Kuipers, Polina Suter, and Giusi Moffa. Efficient sampling and structure learning of Bayesian networks. Journal of Computational and Graphical Statistics, 31:639-650, 2022.
[29] Lars Lorch, Jonas Rothfuss, Bernhard Schölkopf, and Andreas Krause. DiBS: differentiable Bayesian structure learning. In Advances in Neural Information Processing Systems, volume 34, pages 24111-24123, 2021.
[30] David Madigan, Jeremy York, and Denis Allard. Bayesian graphical models for discrete data. International Statistical Review, 63:215-232, 1995.
[31] Xiao-Li Meng and Wing Hung Wong. Simulating ratios of normalizing constants via a simple identity: a theoretical exploration. Statistica Sinica, 6:831-860, 1996.
[32] Giusi Moffa, Jack Kuipers, Giuseppe Carrà, Cristina Crocamo, Elizabeth Kuipers, Matthias Angermeyer, Traolach Brugha, Mondher Toumi, and Paul Bebbington. Longitudinal symptomatic interactions in long-standing schizophrenia: a novel five-point analysis based on directed acyclic graphs. Psychological Medicine, pages 1-8, 2021.
[33] Judea Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann Publishers Inc., 1988.
[34] Judea Pearl. Causality: Models, reasoning, and inference. Cambridge University Press, 2000.
[35] Jonas Peters, Joris Mooij, Dominik Janzing, and Bernhard Schölkopf. Causal discovery with continuous additive noise models. Journal of Machine Learning Research, 15:2009-2053, 2013.
[36] Carl Edward Rasmussen. Gaussian processes in machine learning. Springer Berlin Heidelberg, 2004.
[37] Felix L. Rios, Giusi Moffa, and Jack Kuipers. Benchpress: a scalable and platform-independent workflow for benchmarking structure learning algorithms for graphical models. arXiv:2107.03863, 2021.
[38] Robert W. Robinson. Counting labeled acyclic digraphs. In New Directions in Graph Theory, pages 239-273. New York: Academic Press, 1973.
[39] Karen Sachs, Omar Perez, Dana Pe'er, Douglas A. Lauffenburger, and Garry P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308:523-529, 2005.
[40] Charupriya Sharma and Peter van Beek. Scalable Bayesian network structure learning with splines. In Proceedings of The 11th International Conference on Probabilistic Graphical Models, volume 186, pages 181-192, 2022.
[41] Tomi Silander, Teemu Roos, and Petri Myllymäki. Locally minimax optimal predictive modeling with Bayesian networks. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, volume 5, pages 504-511, 2009.
[42] Peter Spirtes, Clark Glymour, and Richard Scheines. Causation, prediction, and search. Springer New York, 1993.
[43] Stan Development Team. RStan: the R interface to Stan, R package version 2.21.8, 2023. URL https://mc-stan.org/.
[44] Chengwei Su and Mark E. Borsuk. Improving structure MCMC for Bayesian networks through Markov blanket resampling. Journal of Machine Learning Research, 17:1-20, 2016.
[45] Polina Suter, Jack Kuipers, Giusi Moffa, and Niko Beerenwinkel. Bayesian structure learning and sampling of Bayesian networks with the R package BiDAG. Journal of Statistical Software, 105:1-31, 2023.
[46] Gábor J. Székely, Maria L. Rizzo, and Nail K. Bakirov. Measuring and testing dependence by correlation of distances. The Annals of Statistics, 35:2769-2794, 2007.
[47] Topi Talvitie, Aleksis Vuoksenmaa, and Mikko Koivisto. Exact sampling of directed acyclic graphs from modular distributions. In Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, pages 965-974, 2020.
[48] Ioannis Tsamardinos, Laura Brown, and Constantin Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65:31-78, 2006.
[49] Jussi Viinikka, Antti Hyttinen, Johan Pensar, and Mikko Koivisto. Towards scalable Bayesian learning of causal DAGs. In Advances in Neural Information Processing Systems, volume 33, pages 6584-6594, 2020.
[50] Jackie S.T. Wong, Jonathan J. Forster, and Peter W.F. Smith. Properties of the bridge sampler with a focus on splitting the MCMC sample. Statistics and Computing, 30:799-816, 2020.
[51] Yue Yu, Jie Chen, Tian Gao, and Mo Yu. DAG-GNN: DAG structure learning with graph neural networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 7154-7163, 2019.
[52] Xun Zheng, Bryon Aragam, Pradeep K. Ravikumar, and Eric P. Xing. DAGs with NO TEARS: continuous optimization for structure learning. In Advances in Neural Information Processing Systems, volume 31, pages 9492-9503, 2018.
# A Bounded Ability Estimation for Computerized Adaptive Testing

Yan Zhuang $^{1,2}$ , Qi Liu $^{1,2}$ , GuanHao Zhao $^{1,2}$ , Zhenya Huang $^{1,2}$ , Weizhe Huang $^{1,2}$ , Zachary A.
Pardos $^{3}$ , Enhong Chen $^{1,2}$ , Jinze Wu $^{2,4}$ , Xin Li $^{2,4}$

1: Anhui Province Key Laboratory of Big Data Analysis and Application, University of Science and Technology of China

2: State Key Laboratory of Cognitive Intelligence

3: University of California, Berkeley

4: iFLYTEK Co., Ltd

{zykb, ghzhao0223, hwz871982879, hxwjz}@mail.ustc.edu.cn,

{qiliuql,huangzhy,cheneh,leexin}@ustc.edu.cn, pardos@berkeley.edu

# Abstract

Computerized adaptive testing (CAT), as a tool that can efficiently measure a student's ability, has been widely used in various standardized tests (e.g., GMAT and GRE). The adaptivity of CAT refers to the selection of the most informative questions for each student, reducing test length. Existing CAT methods do not explicitly target ability estimation accuracy, since the student's true ability is not available as ground truth; therefore, these methods cannot be guaranteed to make the estimate converge to the true ability with such limited responses. In this paper, we analyze the statistical properties of estimation and find a theoretical approximation of the true ability: the ability estimated from full responses to the question bank. Based on this, a Bounded Ability Estimation framework for CAT (BECAT) is proposed in a data-summary manner, which selects a question subset that closely matches the gradient of the full responses. To this end, we develop an expected gradient difference approximation to design a simple greedy selection algorithm, and establish rigorous theoretical guarantees and an error upper bound for its ability estimate. Experiments on both real-world and synthetic datasets show that it can reach the same estimation accuracy using $15\%$ fewer questions on average, significantly reducing test length.
+ +# 1 Introduction + +As the landscape of education is changing rapidly, especially after COVID-19, many schools and institutions move from in-class to online platforms, providing individualized education, such as educational measurement and recommendation. They are looking to "right-size" the learning experience of students according to their ability level [1, 2]. To this end, Computerized Adaptive Testing (CAT) [3] becomes an indispensable tool to efficiently measure student's ability in the areas of standardized testing, computer tutoring, and online courses, through automatically selecting best-suited questions for individual students. Compared with the time-consuming and burdensome paper-and-pencil tests, CAT has been proven to require fewer questions to reach the same measurement accuracy [4, 2]. + +A typical CAT system is shown in Figure 1: At test step $t$ , the Cognitive Diagnosis Model, e.g., Item Response Theory (IRT), as the user model based on psychology, first uses student's previous $t$ item answer responses to estimate his/her current ability $\theta^t$ . IRT family has been used for ability estimation in several state assessments, such as OECD/PISA Project [5, 6]. Next, the selection algorithm selects the next item from the entire question bank according to some criteria [7, 8, 9]. Most of them are + +![](images/1d8e234d2299cbc4979eaa58a7e9c6aa1f7b1b72f032f5885c9e6c36d4b1f2d8.jpg) +Figure 1: An illustration of the CAT system: At test step $t \in [1, \dots, T]$ , the selection algorithm uses the current ability estimate $\theta^t$ to select the next question $q_{t+1}$ from the question bank. When the test stops, the $\theta^T$ (i.e., the final estimate of his/her true ability $\theta_0$ ) will be output. + +informativeness metrics such as selecting the question with difficulty closest to his/her current ability estimate $\theta^t$ , i.e., the student's probability of answering it correctly is closest to $50\%$ [7]. 
Obviously, the selection algorithm is the core component to realize CAT's adaptivity, and it seeks to answer the following question about accuracy and efficiency: Can we estimate a student's true ability by asking him/her as few questions as possible, with negligible estimation error?

From the perspective of machine learning, CAT can be viewed as a parameter estimation problem with the least cost: it essentially selects the fewest data samples (questions to be answered) sequentially from the whole unlabeled data (question bank), so that after obtaining their labels (correct/wrong responses), the model's hidden parameter (the student's true ability $\theta_0$ ) can be accurately estimated. Unfortunately, a student's exact true ability is unknown even to the students themselves, so it is impossible to find such ground truth in datasets to design/train selection algorithms. As a result, most selection algorithms are not designed explicitly with the goal of accurate and efficient estimation. Existing approaches either select representative/diverse items solely from the question feature space [9] (deviating from the goal of ability estimation), or require additional training overhead (e.g., Reinforcement Learning-based methods [10, 11, 12, 13]). Although these implicit methods achieve good results in experiments, a theoretical guarantee on approximating the student's true ability is also critical for reliable CAT systems, especially in standardized tests.

Hence, the biggest challenge in designing reliable explicit methods is that the student's true ability $\theta_0$ is unknown. Therefore, in this work, we propose a general (upper-)Bounded Ability Estimation CAT framework (BECAT), which explicitly targets the accuracy and efficiency of ability estimation. Due to the unknown $\theta_0$ , we first find its theoretical approximation $\theta^*$ as the alternative: the ability estimated from his/her full responses on the entire question bank.
Hence, our key idea is to select questions such that the resulting estimate can best approximate the ability estimated from full responses. Specifically, we propose an expected gradient difference approximation method based on recent data efficiency/summary techniques [14, 15, 16], and design a practical greedy selection algorithm using a submodular function, which essentially finds representative items to approximate the gradient of the full responses. We further provide a theoretical analysis of the upper bound of its ability estimation error.

To validate BECAT's effectiveness, we conduct experiments on three real-world datasets from different educational platforms. Empirical results show that this simple greedy selection achieves state-of-the-art performance compared with other implicit methods. The main contributions are:

- To better estimate the unknown $\theta_0$ , we find its theoretical approximation as the new target for designing an explicit selection algorithm. Based on this, we formally redefine and transform CAT into an adaptive subset selection problem in a data-summary manner for the first time.
- An effective expected gradient-based selection algorithm is proposed to select appropriate items, which exactly minimizes the estimation error term, therefore admitting theoretical guarantees on ability estimation in CAT systems.
- We show the generality of BECAT — it can be applied to any gradient-based method, including IRT and neural network methods. We observe that BECAT outperforms existing CAT methods at reducing test length, requiring $10\% - 20\%$ fewer questions to reach the same estimation accuracy.

# 2 Problem Definitions of CAT

For accurate and efficient assessment, CAT needs to sequentially select best-fitting questions for each student from the question bank $Q$ , and then uses the corresponding responses for ability estimation.

When the test stops, the final estimate is output as the result/score of this test.
The goal of CAT is to accurately estimate the examinee's true ability $\theta_0$ while minimizing the number of questions asked [17].

# 2.1 Preliminaries

Specifically, at test step $t \in [1,2,\dots,T]$ , given the student's previous $t$ responses $S_{t} = \{(q_{1},y_{1}),\ldots ,(q_{t},y_{t})\}$ , where $\{q_i\}_{i = 1}^t \subseteq Q$ are selected sequentially by the selection algorithm and $y_i$ is the binary outcome (correct or incorrect), the student's current ability can be estimated by minimizing the empirical risk (e.g., binary cross-entropy) over the whole ability space $\Theta$ :

$$
\theta^{t} = \underset{\theta \in \Theta}{\arg\min} \sum_{i \in S_{t}} l_{i}(\theta) = \underset{\theta \in \Theta}{\arg\min} \sum_{i \in S_{t}} -\log p_{\theta}\left(q_{i}, y_{i}\right), \tag{1}
$$

where $p_{\theta}(q_i, y_i)$ represents the probability of the response $(q_i, y_i)$ for a student with ability $\theta$ , and the specific form of $p_{\theta}$ is determined by IRT. Since the size of $S_t$ is small, standard Gradient Descent [18, 19] is sufficient to minimize Eq.(1); it requires computing $\sum_{i \in S_t} \nabla l_i(\theta)$ — the sum of the gradients over the previous $t$ responses. It takes repeated steps in the opposite direction of the gradient, thus leading to a minimum of the empirical risk in Eq.(1).

Next, the selection algorithm selects the next question $q_{t + 1}$ from the bank $Q$ according to various criteria [7, 8, 12, 13]. The above process is repeated $T$ times², i.e., $|S| = T$ ( $T \leq 20$ in most tests [13]), so that the final estimate $\theta^T$ is close to the true $\theta_0$ , i.e.,

Definition 1 (Traditional Definition of CAT). At each step $t$ , it will select the most suitable/informative question according to the student's current ability $\theta^t$ .
When the test ends ( $t = T$ ), the final ability estimate $\theta^T = \arg \min_{\theta \in \Theta} \sum_{i \in S} l_i(\theta)$ should approximate the true ability:

$$
\min_{|S| = T} \left\| \theta^{T} - \theta_{0} \right\|. \tag{2}
$$

Unfortunately, directly solving the above optimization problem is infeasible, because the ground-truth ability $\theta_0$ cannot be obtained; even students themselves do not know its exact value. As a result, traditional informativeness-based methods [7, 8] use asymptotic statistical properties of Maximum Likelihood Estimation to reduce estimation uncertainty, e.g., selecting the question whose difficulty is closest to the student's current ability $\theta^t$ ; but they are all IRT-specific, i.e., they cannot be applied to recent neural network methods. Although recent active learning-based [9] and reinforcement learning-based [12, 13] methods achieve good experimental results, there is no evidence that they can theoretically guarantee that the estimate efficiently approaches $\theta_0$ , which is unacceptable for CAT systems applied in standardized tests. Testing reliability requires not only satisfactory experimental results, but also good theoretical guarantees [3].

# 2.2 New Definition of CAT

Given that there is no such ground truth $\theta_0$ in the dataset, we find its approximation as the new target for designing explicit selection algorithms.

Proposition 1. The ability estimate $\theta^{*}$ , obtained from a student's full responses to the entire question bank $Q$ , is an approximation of his/her true ability $\theta_{0}$ , that is,

$$
\theta^{*} \approx \theta_{0}. \tag{3}
$$

Proof. When we use consistent estimation approaches, such as Maximum Likelihood Estimation (cross-entropy loss) in Eq.(1), we have $\lim_{t\to \infty}p(|\theta^t -\theta_0|\geq \epsilon) = 0$ , where $t$ is the number of responses (steps) used for ability estimation.
The size of CAT's question bank is finite (i.e., $t\in [0,|Q|]$ ) and $\theta^{*} = \lim_{t\to |Q|}\theta^{t}$ ; thus $\theta^{*}\approx \lim_{t\to \infty}\theta^{t}\approx \theta_{0}$ , i.e., $\theta^{*}$ can be regarded as an approximation of $\theta_0$ .

Since this proposition exploits the estimator's asymptotic property, the approximation may not be perfect. For example, both the bank size $|Q|$ and various perturbations in the student's responses

![](images/d18cb28678f20ac7268b111b9dc5b9e8bf05119047c507c0924ef596ffc02ffc.jpg)
Figure 2: (a) Simulation experiments about Proposition 1 using MSE: $\mathbb{E}[\| \theta^{*} - \theta_{0}\|^{2}]$ . In addition to the normal situation (blue), we also show the MSE under different perturbations, for example: Slip $5\%$ means that the label has a $5\%$ probability of changing from 1 to 0; Guess $25\%$ means that the label changes from 0 to 1 with probability $25\%$ . (b) The illustration of the optimization problem: selecting subset $S$ to cover the whole response data on $Q$ . The rectangles represent a student's full responses to the bank $Q$ , and $w(i,j)$ measures the similarity of response pair $(i,j)$ .

![](images/40a3ce178809775d331ab97b2bfe92a21d6e00bc7efb0c593c98fa1ba2c9f372.jpg)
+ +In this way, the selection algorithm can aim to approach $\theta^{*}$ instead of the unknown $\theta_0$ . We can design explicit selection algorithms: Select a subset of questions $S$ from the bank $Q$ , so that the student's ability is estimated only on the subset $S$ while still (approximately) converging to the optimal solution $\theta^{*}$ (i.e., the estimate that would be obtained if optimizing on the full responses to $Q$ ). As mentioned in Preliminaries, the ability estimation usually adopts cross-entropy loss with gradient computations, and denote the full gradient of the loss w.r.t. ability parameters by $\sum_{i\in Q}\nabla l_i(\theta)$ — sum of the gradients over full responses. Thus, + +Definition 2 (New Definition of CAT). It will adaptively find a subset $S$ of size $T$ and the corresponding weight $\{\gamma\}_{j}$ that approximates the full gradient: minimizing their difference for all the possible ability values of the optimization parameter $\theta \in \Theta$ : + +$$ +\min _ {| S | = T} \| \theta^ {T} - \theta^ {*} \| \Rightarrow \min _ {| S | = T} \max _ {\theta \in \Theta} \| \sum_ {j \in S} \gamma_ {j} \nabla l _ {j} (\theta) - \sum_ {i \in Q} \nabla l _ {i} (\theta) \|. \tag {4} +$$ + +Since we know nothing about the range of student's ability when optimizing, we consider the worst-case approximation error $(\max_{\theta \in \Theta})$ instead of a particular $\theta$ . After finding such $S$ and the associated weights $\{\gamma\}_{j}$ , the gradient updates on $S$ will be similar to the gradient on $Q$ regardless of the value of $\theta$ , thus making the estimate close to the target $\theta^{*}$ . In this way, CAT can be regarded as a subset selection optimization problem in a data-efficiency manner. 
Also, we find that this is consistent with recent Coreset techniques [20, 15, 21], which approximate the gradients of the full data, so that the model is trained only on the subset while still (approximately) converging to the optimal solution (i.e., the model parameters that would be obtained by training on the full data).

However, compared with the traditional Coreset problem, the biggest technical challenge is that the gradients over the bank ( $\sum_{i\in Q}\nabla l_i(\theta)$ ) cannot be calculated without labels. Only the few questions that have been answered in previous steps (i.e., $S_{t}$ ) have the corresponding labels. Thus, to simplify the problem, we assume for the moment that the student's full responses are available. In Section 3.1, we will propose an expected approximation method to address this.

# 3 The BECAT framework

In this section, to solve the optimization problem in Eq.(4), we design a simple greedy algorithm based on submodular functions. More importantly, we provide an upper bound on the expected error of the ability estimate when using our method.

**Optimization.** The above subset selection problem is NP-hard; thus, we transform it based on a recent Coreset method, which shows that the error of estimating the full gradient with a subset $S$ is upper-bounded by a submodular facility location function that has been used in various summarization applications [22, 23].
Thus

$$
\min_{|S| = T} \max_{\theta \in \Theta} \Big\| \sum_{j \in S} \gamma_{j} \nabla l_{j}(\theta) - \sum_{i \in Q} \nabla l_{i}(\theta) \Big\| \Rightarrow \min_{|S| = T} \max_{\theta \in \Theta} \sum_{i \in Q} \min_{j \in S} \| \nabla l_{i}(\theta) - \nabla l_{j}(\theta) \| \Rightarrow \max_{|S| = T} \sum_{i \in Q} \max_{j \in S} w(i, j), \tag{5}
$$

where $w(i,j) \triangleq d - \max_{\theta \in \Theta} \| \nabla l_i(\theta) - \nabla l_j(\theta) \|$ is the gradient similarity between response pair $i = (q_i, y_i)$ and $j = (q_j, y_j)$ for this student, with $d$ a constant. The associated weight of response $j$ , $\gamma_j = \sum_{i \in Q} \mathbf{1}[j = \arg \max_{s \in S} w(i,s)]$ , is the number of responses in $Q$ that are most similar to $j \in S$ .

Given a subset $S$ , $\sum_{i \in Q} \max_{j \in S} w(i, j)$ in Eq.(5) quantifies the coverage of the whole response data on $Q$ by summing the similarities $w$ between every $i \in Q$ and its closest item $j \in S$ . The semantics of this optimization problem are shown in Figure 2(b). The larger the value of $w(i, j)$ , the smaller their gradient difference in ability estimation for all possible abilities $\theta \in \Theta$ , which means the two responses $i$ and $j$ have similar importance/influence on the student's ability estimation. Thus, the transformed problem in Eq.(5) is equivalent to selecting the most representative responses to form the subset $S$ , which shares the same idea (i.e., selecting "representative" items) with previous selection algorithms [12, 9], active learning methods [24, 25] and unsupervised learning [26, 27].

Define a monotone non-decreasing submodular function — the facility location function $F: 2^{Q} \to \mathbb{R}$ : $F(S) = \sum_{i \in Q} \max_{j \in S} w(i, j)$ .
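The greedy maximization of the facility location function $F$ can be sketched as follows; the similarity matrix `W` here is a random nonnegative stand-in for the gradient-based $w(i,j)$, not values computed from a real question bank:

```python
import numpy as np

def greedy_facility_location(W, T):
    """Greedily pick T items, maximizing F(S) = sum_i max_{j in S} W[i, j].
    Assumes nonnegative similarities, so the empty set has coverage 0."""
    n = W.shape[0]
    selected = []
    best_cover = np.zeros(n)                  # current max_{j in S} W[i, j] per row i
    for _ in range(T):
        # marginal gain of adding candidate j: sum_i max(0, W[i, j] - best_cover[i])
        gains = np.maximum(W - best_cover[:, None], 0.0).sum(axis=0)
        gains[selected] = -1.0                # never re-pick an item
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, W[:, j])
    return selected, best_cover.sum()         # the subset S and its coverage F(S)

rng = np.random.default_rng(1)
W = rng.random((50, 50))                      # W[i, j] plays the role of w(i, j)
S, F_S = greedy_facility_location(W, T=5)
```

Because $F$ is monotone submodular, marginal gains only shrink as $S$ grows, which is what lazy-evaluation speed-ups (used later in the paper) exploit.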
The submodular optimization provides a near-optimal solution with a $(1 - 1/e)$ -approximation bound [28], using a simple greedy algorithm for selecting the $t$ -th question:

$$
q_{t} = \arg \max_{(q, y) \in Q \backslash S_{t-1}} \Delta ((q, y) | S_{t-1}), \tag{6}
$$

where $\Delta((q, y) | S_{t-1}) = F(\{(q, y)\} \cup S_{t-1}) - F(S_{t-1})$ and $S_{t-1}$ is the set of the previous $t-1$ responses of this student in CAT.

# 3.1 Expected Gradient Difference Approximation

However, the above selection algorithm is impractical in CAT, because we cannot obtain the student's full responses to the bank $Q$ ; as a result, the gradient difference $\| \nabla l_i(\theta) - \nabla l_j(\theta) \|$ in $w(i,j)$ cannot be calculated without the corresponding answer correctness labels. In fact, at step $t$ , only the responses of the previous $t-1$ steps (i.e., $S_{t-1}$ ) are available. Therefore, we propose an expected gradient difference approximation method to replace the original similarity measure; the new similarity function $\widetilde{w}(i,j)$ is:

$$
\widetilde{w}(i, j) \triangleq d - \max_{\theta \in \Theta} \mathbb{E}_{y \sim p_{\theta^{t}}} \left[ \| \nabla l_{i}(\theta) - \nabla l_{j}(\theta) \| \right], \tag{7}
$$

where the normed gradient difference is calculated as an expectation $\mathbb{E}_{y\sim p_{\theta^{t}}}$ over the possible labelings, since the student's response labels $y$ to the candidate questions are unknown at selection time. Moreover, for a more accurate approximation and to make full use of the available previous $t - 1$ responses, the expectation in Eq.(7) is determined by the current estimate $\theta^t$ . This method can be regarded as a gradient difference approximation based on "soft pseudo-labels". Thus, the selection of the next question $q_{t}$ no longer requires the student's real answer correctness labels:

$$
q_{t} = \arg \max_{q \in Q \backslash S_{t-1}} \Delta (q | S_{t-1}).
\tag{8}
$$

where $\Delta(q|S_{t-1}) = \widetilde{F}(\{q\} \cup S_{t-1}) - \widetilde{F}(S_{t-1})$ , and $\widetilde{F}(S) = \sum_{i \in Q} \max_{j \in S} \widetilde{w}(i,j)$ . Also, we uncover some important conclusions about this simple expected approximation:

# Algorithm 1: The BECAT framework

Require: $Q$ - question bank, $f$ - IRT or neural network method.

Initialize: the response data $S_0 \gets \emptyset$ .

1. **for** $t = 1$ **to** $T$ **do**
2. Select question $q_{t}$ based on $\widetilde{w}(i,j)$ : $q_{t} \gets \arg \max_{q \in Q \setminus S_{t-1}} \Delta(q|S_{t-1})$ .
3. Get the student's answer correctness label $y_{t}$ , and set $S_t \gets S_{t-1} \cup \{(q_t, y_t)\}$ .
4. Update the weights $\{\gamma_j\}_{j=1}^t$ : $\gamma_{j} \gets \sum_{i\in Q}\mathbf{1}[j = \arg \max_{s\in S_{t}}\widetilde{w}(i,s)]$ .
5. Update the student's ability estimate: $\theta^t \gets \arg \min_{\theta \in \Theta}\sum_{i\in S_t}\gamma_i l_i(\theta)$ .

Output: the student's final ability estimate $\theta^T$ .

Lemma 1. When we replace the original gradient difference in $w(i,j)$ with $\widetilde{w}(i,j)$ , the corresponding selection algorithm using the submodular function $\widetilde{F}$ is actually approximately solving the following optimization problem:

$$
\min_{|S| = T} \max_{\theta \in \Theta} \mathbb{E}_{y} \left[ \Big\| \sum_{j \in S} \gamma_{j} \nabla l_{j}(\theta) - \sum_{i \in Q} \nabla l_{i}(\theta) \Big\| \right] \tag{9}
$$

Based on the conclusion in Lemma 1, we can assume that, after optimization, the preconditioned expected gradient can be approximated with an error of at most $\epsilon$ : $\mathbb{E}_y\left[\| \sum_{j\in S}\gamma_j\nabla l_j(\theta) - \sum_{i\in Q}\nabla l_i(\theta)\|\right]\leq \epsilon$ . Then we obtain the following theoretical guarantee for ability estimation when applying a gradient-based estimation method to the subset $S$ found by it:

Theorem 1 (Expected estimation error bound).
Assume that the loss function for ability estimation is $\alpha$ -strongly convex (e.g., IRT). Let $S$ be a weighted subset obtained by the proposed method. Then, with learning rate $\frac{1}{\alpha}$ , ability estimation by gradient descent applied to the subset has the following expected estimation error bound:

$$
\mathbb{E}\left[ \| \theta^{t+1} - \theta^{*} \|^{2} \right] \leq \frac{2\epsilon D\alpha + \sigma_{l}^{2} + 2\sigma_{f} D\alpha H_{p}\left(\theta^{t}, \theta^{*}\right)}{\alpha^{2}}, \tag{10}
$$

where

$$
H_{p}\left(\theta^{t}, \theta^{*}\right) = \mathbb{E}_{(q, y) \sim p_{\theta^{t}}}\left[ \frac{1}{p_{\theta^{*}}(q, y)} \right], \tag{11}
$$

$\theta^{*}$ is the optimal estimate using full responses, $\sigma_{l}$ and $\sigma_{f}$ are upper bounds on the norms of the gradients, and $D = \max_{\theta} \| \theta - \theta^{*} \|$ .

All the proofs can be found in the Appendix. The above theorem shows that, despite not being able to obtain the student's full responses, this simple expected gradient difference approximation keeps the estimation error upper-bounded at each step. The theorem holds for the case where the loss is strongly convex, such as the cross-entropy loss of the classic L2-regularized IRT [29]. We further verify the performance in other cases (e.g., neural network-based methods) in the experiments.

Theorem 1 also suggests that, to minimize the expected error bound, CAT systems should try to minimize $H_{p}(\theta^{t},\theta^{*})$ , which can be regarded as a type of statistical distance measuring how the probability distribution $p_{\theta^t}$ differs from $p_{\theta^*}$ . Moreover, we find that, with the help of consistent estimation (i.e., binary cross-entropy) at each step, $H_{p}(\theta^{t},\theta^{*})$ can reach its theoretical minimum:

Theorem 2.
Assume that $\theta^t$ , estimated by the cross-entropy loss in Eq.(1), minimizes the empirical risk, i.e., $\sum_{i\in S_t}l_i(\theta^t) = 0$ . Then $H_{p}(\theta ,\theta^{*})$ attains its minimum at $\theta = \theta^t$ , that is,

$$
H_{p}\left(\theta^{t}, \theta^{*}\right) \leq H_{p}\left(\theta, \theta^{*}\right), \quad \forall \theta \in \Theta. \tag{12}
$$

Therefore, the ability estimation methods commonly used in CAT can actually help minimize this upper bound. The proofs and related experiments can be found in Appendices C and E.4.

Complexity Analysis of BECAT. Algorithm 1 presents the pseudo-code of our BECAT framework. A naive implementation of our selection algorithm in Eq.(8) has a complexity of $O(|Q|^2 |\Theta|)$ , because at each step we have to (1) scan the question bank, which takes $O(|Q|)$ , and (2) for each candidate, compute the marginal gain $\Delta (q|S_{t - 1}) = \widetilde{F} (\{q\} \cup S_{t - 1}) - \widetilde{F} (S_{t - 1})$ , which costs $O(|Q||\Theta |)$ . To make BECAT faster and more scalable in both aspects, we adopt two speed-up tricks: lazy evaluations [30, 31] and multifaceted estimation [32] (see Appendix D for implementation details). We also compare the time (in seconds) spent on question selection by different methods in Appendix E.

# 4 Experiments

Evaluation Method. The goal of CAT is to estimate the student's ability accurately within the fewest steps. Therefore, two tasks are usually used to verify the performance of different CAT methods, following prior works [9, 12]: (1) Student Score Prediction: To evaluate the ability estimate output by CAT, the estimate is used to predict the student's binary responses (correct/wrong) on the questions he/she has answered in the held-out response data. Thus, Prediction Accuracy (ACC) and AUC are used for evaluation [33]; (2) Simulation of Ability Estimation: This is CAT's traditional evaluation method.
Since the ground truth of student ability $\theta_0$ is not available, we artificially generate $\theta_0$ and then simulate the student-question interaction process. Thus, we can use the Mean Square Error (MSE) metric. See Appendix E for the details of these two evaluation methods.

Datasets. We conduct experiments on three educational benchmark datasets, namely ASSIST, NIPS-EDU, and EXAM. ASSIST [34] is collected from the online educational system ASSISTments and consists of students' practice logs on mathematics. NIPS-EDU [35] refers to the large-scale dataset of the NeurIPS 2020 Education Challenge, which is collected from students' answers to questions from Eedi (an educational platform). The EXAM dataset was supplied by iFLYTEK Co., Ltd., which collected the records of junior high school students on mathematical exams. The statistics of the datasets are shown in the appendix. The code can be found on GitHub: https://github.com/bigdata-ustc/EduCAT.

Compared Approaches. To verify the generality of BECAT, in addition to the traditional IRT, we also compare against the neural network-based model NeuralCDM [36], which can cover many IRT and cognitive diagnosis models, such as MIRT [37] and MF [38, 39]. For the selection algorithm, we mainly use the following SOTA algorithms as baselines: Random: the random selection strategy, a benchmark to quantify the improvement of the other methods; FSI [7] and KLI [8] select the question with the maximum Fisher/Kullback-Leibler information, which measures the amount of information that a question carries about the unknown parameter $\theta$ ; they are specially designed for IRT. MAAT [9] utilizes Active Learning [40] to measure the uncertainty caused by each candidate question. BOBCAT [12] and NCAT [13] recast CAT as a bilevel optimization and a Reinforcement Learning problem, respectively, and train a data-driven selection algorithm from student response data.
# 4.1 Results and Discussion

In this section, we compare the performance on the two classic CAT tasks introduced above to evaluate the effectiveness and efficiency of our proposed BECAT framework. We also conduct a qualitative investigation of the characteristics of the selected questions to gain deeper insight into why BECAT leads to more accurate ability estimation.

Task 1: Student Score Prediction. Following prior work [13], we fix the max length $T = 20$ and calculate the ACC and AUC at steps 5, 10 and 20 on the three datasets for the Student Score Prediction task; the results are shown in Table 1. We find that:

(1) The explicit BECAT framework achieves the best overall performance on the three datasets. It performs significantly better than all the other methods, where the relative performance improvements are at least $1.5\%$ with respect to ACC@20 and $1.1\%$ with respect to AUC@20 on average on ASSIST. This result indicates that BECAT can provide accurate ability estimates at the end of the exam. It even surpasses the implicit selection algorithms based on deep learning, such as NCAT and BOBCAT. This phenomenon shows that, compared to modeling complex student-question interactions, directly targeting estimation accuracy achieves impressive results.

(2) BECAT's performance on large-scale datasets (e.g., NIPS-EDU) is better. From Table 1, on the NIPS-EDU dataset (bank size 27613), BECAT achieves a $2.48\%$ AUC gain (on average) over the well-known FSI baseline. On the other two datasets, ASSIST and EXAM, the average improvement is only

Table 1: The performance of different methods on Student Score Prediction with ACC and AUC metrics. "-" indicates that the information/uncertainty-based selection algorithms (e.g., FSI) cannot be applied to the deep learning method. Boldface indicates statistically significant improvements (p-value $< 0.01$ ) over the best baseline.

(a) Performances on ASSIST
| Method | IRT ACC/AUC@5 | IRT ACC/AUC@10 | IRT ACC/AUC@20 | NeuralCDM ACC/AUC@5 | NeuralCDM ACC/AUC@10 | NeuralCDM ACC/AUC@20 |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 71.01/70.68 | 72.20/71.91 | 73.07/72.61 | 71.52/71.19 | 72.66/72.06 | 72.67/72.83 |
| FSI | 71.77/71.33 | 72.94/72.48 | 73.24/73.54 | - | - | - |
| KLI | 71.93/71.38 | 72.73/72.52 | 73.17/73.57 | - | - | - |
| MAAT | 72.20/71.54 | 72.33/72.58 | 73.22/73.08 | 72.36/70.98 | 72.52/72.33 | 71.74/72.27 |
| BOBCAT | 72.31/71.68 | 72.36/72.28 | 73.70/73.39 | 72.69/71.45 | 72.89/72.84 | 73.87/72.84 |
| NCAT | 72.28/71.53 | 72.55/72.31 | 73.81/73.50 | 72.28/71.59 | 72.63/72.37 | 73.90/73.59 |
| BECAT | 71.92/71.44 | 73.01/72.73 | 73.96/73.61 | 72.30/71.60 | 73.11/72.97 | 74.13/73.70 |
+ +(b) Performances on NIPS-EDU + +
| Method | IRT ACC/AUC@5 | IRT ACC/AUC@10 | IRT ACC/AUC@20 | NeuralCDM ACC/AUC@5 | NeuralCDM ACC/AUC@10 | NeuralCDM ACC/AUC@20 |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 66.45/69.05 | 68.23/71.66 | 70.23/74.82 | 67.19/69.32 | 68.44/71.56 | 70.57/74.99 |
| FSI | 67.70/70.60 | 69.62/73.62 | 71.03/76.24 | - | - | - |
| KLI | 67.09/69.79 | 69.27/73.30 | 70.42/75.73 | - | - | - |
| MAAT | 66.70/70.32 | 69.13/72.41 | 69.07/74.46 | 67.86/70.12 | 70.07/72.58 | 70.66/75.83 |
| BOBCAT | 69.51/74.42 | 70.94/75.73 | 71.73/76.58 | 71.13/76.00 | 72.52/77.87 | 73.47/79.00 |
| NCAT | 67.30/72.11 | 70.68/75.80 | 71.91/76.66 | 70.47/74.10 | 72.81/77.99 | 73.47/79.12 |
| BECAT | 66.98/73.15 | 71.61/75.87 | 72.00/76.82 | 71.33/76.30 | 73.09/78.34 | 73.58/79.36 |
+ +(c) Performances on EXAM + +
| Method | IRT ACC/AUC@5 | IRT ACC/AUC@10 | IRT ACC/AUC@20 | NeuralCDM ACC/AUC@5 | NeuralCDM ACC/AUC@10 | NeuralCDM ACC/AUC@20 |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 77.58/70.34 | 78.59/71.91 | 80.40/74.22 | 79.80/72.58 | 79.80/74.81 | 79.80/78.40 |
| FSI | 77.37/70.57 | 78.79/72.21 | 81.01/74.89 | - | - | - |
| KLI | 77.37/70.57 | 78.79/72.21 | 81.01/74.70 | - | - | - |
| MAAT | 76.97/70.38 | 78.79/72.12 | 80.61/74.65 | 82.82/70.32 | 82.83/74.11 | 83.82/79.44 |
| BOBCAT | 80.81/68.17 | 83.84/72.04 | 83.43/72.88 | 78.18/78.24 | 78.19/81.47 | 78.18/79.49 |
| NCAT | 80.92/70.72 | 83.99/72.71 | 84.02/74.29 | 82.30/78.77 | 83.19/81.47 | 81.53/79.49 |
| BECAT | 80.99/70.74 | 83.85/72.88 | 84.29/75.00 | 82.84/78.75 | 83.22/81.49 | 84.77/79.70 |
0.32%. This finding suggests that BECAT is more adaptable to practical large-scale testing situations, and can retrieve the most suitable questions from a massive pool of candidates. However, it cannot be ignored that BECAT does not surpass all other methods at the beginning of the exam. For example, on the NIPS-EDU dataset, it is about 2.53% behind BOBCAT on ACC@5. This is because the student's response data available in the initial stage of the exam is limited, and the data-driven methods (e.g., BOBCAT and NCAT) can be pre-trained on large-scale student response datasets to learn the interaction patterns, thus addressing this cold-start problem [41]. Hence, adapting the proposed explicit algorithm to data-driven frameworks is a promising direction for future work.

Task 2: Simulation of Ability Estimation. The goal of a practical CAT system is to accurately estimate the student's ability. We conduct the Simulation of Ability Estimation experiment on the EXAM dataset using the mean square error $\mathbb{E}[\| \theta^t -\theta_0\| ^2 ]$ between the ability estimate $\theta^t$ and the true ability $\theta_0$ at each step. Figure 3(a) reports the results of different methods on IRT. As the number of selected questions increases, the BECAT method consistently achieves much lower estimation errors, especially in the middle stage. Some implicit methods that do not aim at estimation accuracy perform better in the initial stage (e.g., NCAT), but their final accuracy still lags behind the BECAT framework. Also, compared with the widely used FSI, the proposed BECAT can reach the same estimation error using up to $20\%$ fewer questions. On average, it reaches the same estimation accuracy using $15\%$ fewer questions, which demonstrates its efficiency in ability estimation, i.e., reducing test length.

The Characteristics of the Selected Questions.
To gain deeper insight into why BECAT leads to more accurate estimation, we take a closer look at the characteristics of the selected questions. First, for IRT, we plot the difficulty and discrimination parameters of the selected questions in the scatter chart of Figure 3(b). We find that BECAT tends to choose questions with high discrimination,

![](images/be3baaaa04e89e03b11562179096d026e415d60b76500889ac3dea7be1b1a718.jpg)
(a)

![](images/48d5552a0f014e6e45d006e6d7ed60a13eaf5e6946804d78ffaa162556692061.jpg)
(b)

![](images/8e576c46456e1904a2f4cd53207f3169f226405f68ed960e82a473e86b1c75d7.jpg)
(c)
Figure 3: (a) The error of ability estimation on the EXAM dataset. (b) The characteristics (i.e., discrimination and difficulty) of the questions selected for 10 students in IRT, where grey dots represent all the questions in the bank, and the “*” marks represent the ones selected by BECAT. (c) The Jaccard similarity coefficient of the selected questions.

and that their difficulty is scattered and roughly concentrated in the middle-difficulty area, which may be caused by the fact that most of the students are of middle ability [42]. Second, for NeuralCDM, to gain better insight into the knowledge concepts (e.g., Geometry in mathematics) covered by the selected questions and into the association between BECAT and other methods, Figure 3(c) shows the Jaccard similarity coefficient of the questions' concepts. Questions selected by the same type of method have a high overlap in knowledge concepts, e.g., FSI and KLI, or BOBCAT and NCAT. MAAT and FSI have the highest similarity scores with BECAT: 1) Although BECAT does not directly use concept features in the selection, it has a high similarity score with MAAT, which directly targets knowledge-concept coverage/diversity, thus making the measurement more comprehensive. 2) The high similarity with FSI shows that BECAT is not only general but also capable of selecting informative items.
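The Jaccard similarity behind Figure 3(c) is simply $|A \cap B| / |A \cup B|$ over the sets of knowledge concepts covered by the questions each method selects. A minimal sketch (the concept sets below are illustrative stand-ins, not taken from the datasets):

```python
def jaccard(a, b):
    """Jaccard similarity coefficient |A ∩ B| / |A ∪ B| between two concept sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty sets are conventionally identical
    return len(a & b) / len(a | b)

# Illustrative concept sets covered by the questions each method selected.
concepts = {
    "FSI":   {"algebra", "geometry", "functions"},
    "KLI":   {"algebra", "geometry", "probability"},
    "BECAT": {"algebra", "geometry", "functions", "statistics"},
}

for m in ("FSI", "KLI"):
    print(m, "vs BECAT:", round(jaccard(concepts[m], concepts["BECAT"]), 3))
```

A heat map of these pairwise scores over all methods is exactly what Figure 3(c) visualizes.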
# 5 Related Works

Computerized Adaptive Testing. Computerized Adaptive Testing (CAT) technology has been widely used in many standardized tests, such as the GMAT; the multistage testing in the GRE is also a special case of CAT [17]. It is an iterative procedure consisting mainly of an Item Response Theory model and a question selection algorithm. We review these two components separately:

(1) Item Response Theory (IRT). IRT is built on psychometric theory and has become popular in educational assessment for providing more individualized feedback about a student's latent ability [43, 44]. It assumes that the examinee's ability is unchanged throughout a test, so the ability can be estimated from his/her previous responses to questions via gradient-based optimization [32]. The classic form is the two-parameter logistic (2PL) model: $p$ (the response to question $j$ is correct) = sigmoid $(a_{j}(\theta - b_{j}))$ , where $a_{j}, b_{j} \in \mathbb{R}$ represent each question's discrimination and difficulty respectively, pre-calibrated before testing [29], and $\theta \in \mathbb{R}$ is the student's ability to be estimated. Recently, many studies [36, 45, 46] have combined cognitive diagnosis with neural networks to model such student-question interactions (e.g., NeuralCDM [36]).
(2) Selection Algorithms. The selection algorithm is the core component that realizes CAT's adaptivity: accurately estimating a student's ability with the fewest test steps. Traditional algorithms are based on uncertainty or information metrics, e.g., the well-known Fisher Information (FSI). Building on it, many methods [8, 47, 48, 49] have been proposed to introduce additional information into the selection. Since these are not general and not applicable to recent neural-network models, MAAT [9] uses active learning to select diverse and representative items in the question feature space.
Recently, BOBCAT [12] and NCAT [13] regard CAT as a Reinforcement Learning (RL) problem and train selection algorithms directly from large-scale student response data. Because $\theta_0$ is unknown, their goal is to minimize the student performance prediction loss of the estimate on held-out response data, which is also implicit and prone to biases in the training data. In this paper, BECAT is general and explicitly targets the accuracy and efficiency of ability estimation. Compared with previous implicit methods, we find that it exhibits superior performance both theoretically and experimentally. However, various biases, such as those introduced in test item design and respondent pool selection, can affect the validity of estimating a student's true ability [50]. While our approach seeks to improve the efficiency with which student ability is estimated, it does not diminish the need for test designers to mitigate sources of bias introduced outside of the model-fitting process.

Data Efficiency. Another closely related line of work is data efficiency (or data summarization) [51, 15, 52]. To alleviate various costs (computational [53, 54] or labeling [40]), data-efficiency techniques carefully select or generate a small set of samples from the dataset that is on par with the full data. Specific implementations include Coresets [15, 21, 16, 20], Active Learning [40, 55, 56], Data Distillation [57], etc. For example, recent Coreset approaches [15, 21, 20] try to find a subset that closely approximates the full gradient, i.e., the sum of the gradients over all training samples. In this paper, Coresets help us transform our optimization problem in Section 2.2, but the gradient calculation requires labels, which is clearly not applicable to the CAT scenario (a student's response labels cannot be obtained before question selection).
Therefore, we improve upon it and design an expected gradient difference approximation method, providing good upper-bound guarantees with respect to the optimal solution, which is one of the main contributions of this paper.

# 6 Conclusion

This paper focuses on an explicit approach for accurate and efficient estimation of a student's true ability $\theta_0$ . Given that the ground truth $\theta_0$ is unavailable, we find its theoretical approximation, the ability estimated from the full responses to the question bank, and use it as the optimization goal to design a Bounded Ability Estimation CAT framework (BECAT). For practical use in the CAT scenario, we propose a simple but effective expected gradient difference approximation in the greedy selection algorithm. We further analyze its theoretical properties and prove an error upper bound on the ability estimation from the questions found by BECAT. Through extensive experiments on three real-world education datasets, we demonstrate that BECAT achieves the best estimation accuracy and outperforms existing CAT methods at reducing test length.

# Acknowledgments and Disclosure of Funding

This research was partially supported by grants from the National Key Research and Development Program of China (No. 2021YFF0901003), the National Natural Science Foundation of China (Grants No. U20A20229, No. 62106244), and a UC Berkeley MicroGrant from the Vice Provost of Undergraduate Education.

# References

[1] Zachary A Pardos. Big data in education and the models that love them. Current opinion in behavioral sciences, 18:107-113, 2017.
[2] Jill-Jenn Vie, Fabrice Popineau, Éric Bruillard, and Yolaine Bourda. A review of recent advances in adaptive assessment. Learning analytics: Fundamentals, applications, and trends, pages 113-142, 2017.
[3] Wim J Van der Linden and Peter J Pashley. Item selection and ability estimation in adaptive testing. In Computerized adaptive testing: Theory and practice, pages 1-25. Springer, 2000.
+[4] Andrew S Lan, Andrew E Waters, Christoph Studer, and Richard G Baraniuk. Sparse factor analysis for learning and content analytics. Journal of Machine Learning Research (JMLR), 2014. +[5] Wynne Harlen. The Assessment of Scientific Literacy in the OECD/PISA Project, pages 49-60. Springer Netherlands, Dordrecht, 2001. +[6] Zheng Zhang, Qi Liu, Hao Jiang, Fei Wang, Yan Zhuang, Le Wu, Weibo Gao, and Enhong Chen. Fairlisa: Fair user modeling with limited sensitive attributes information. Advances in Neural Information Processing Systems, 2023. +[7] Frederic M Lord. Applications of item response theory to practical testing problems. Routledge, 2012. + +[8] Hua-Hua Chang and Zhiliang Ying. A global information approach to computerized adaptive testing. Applied Psychological Measurement, 20(3):213-229, 1996. +[9] Haoyang Bi, Haiping Ma, Zhenya Huang, Yu Yin, Qi Liu, Enhong Chen, Yu Su, and Shijin Wang. Quality meets diversity: A model-agnostic framework for computerized adaptive testing. In 2020 IEEE International Conference on Data Mining (ICDM), pages 42–51. IEEE, 2020. +[10] Darkhan Nurakhmetov. Reinforcement learning applied to adaptive classification testing. In Theoretical and Practical Advances in Computer-based Educational Measurement, pages 325-336. Springer, Cham, 2019. +[11] Xiao Li, Hanchen Xu, Jinming Zhang, and Hua-hua Chang. Deep reinforcement learning for adaptive learning systems. arXiv preprint arXiv:2004.08410, 2020. +[12] Aritra Ghosh and Andrew Lan. Bobcat: Bilevel optimization-based computerized adaptive testing. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 2410–2417. International Joint Conferences on Artificial Intelligence Organization, 8 2021. +[13] Yan Zhuang, Qi Liu, Zhenya Huang, Zhi Li, Shuanghong Shen, and Haiping Ma. Fully adaptive framework: Neural computerized adaptive testing for online education. 
Proceedings of the AAAI Conference on Artificial Intelligence, 36(4):4734-4742, Jun. 2022. +[14] Dan Feldman. Introduction to core-sets: an updated survey. CoRR, abs/2011.09384, 2020. +[15] Baharan Mirzasoleiman, Jeff Bilmes, and Jure Leskovec. Coresets for data-efficient training of machine learning models. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org, 2020. +[16] Zixiu Wang, Yiwen Guo, and Hu Ding. Robust and fully-dynamic coreset for continuous-and-bounded learning (with outliers) problems. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 14319-14331. Curran Associates, Inc., 2021. +[17] Hua-Hua Chang. Psychometrics behind computerized adaptive testing. Psychometrika, 80(1):1-20, 2015. +[18] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks, 5(2):157-166, 1994. +[19] Rainer Gemulla, Erik Nijkamp, Peter J. Haas, and Yannis Sismanis. Large-scale matrix factorization with distributed stochastic gradient descent. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '11, page 69-77, New York, NY, USA, 2011. Association for Computing Machinery. +[20] Omead Pooladzandi, David Davini, and Baharan Mirzasoleiman. Adaptive second order coresets for data-efficient machine learning. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 17848-17869. PMLR, 17-23 Jul 2022. +[21] Krishnateja Killamsetty, Durga S, Ganesh Ramakrishnan, Abir De, and Rishabh Iyer. Gradmatch: Gradient matching based data subset selection for efficient deep model training. 
In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 5464-5474. PMLR, 18-24 Jul 2021. +[22] H. Lin, J. Bilmes, and S. Xie. Graph-based submodular selection for extractive summarization. In IEEE Automatic Speech Recognition and Understanding Workshop, 2009. +[23] Ehsan Kazemi, Morteza Zadimoghaddam, and Amin Karbasi. Scalable deletion-robust submodular maximization: Data summarization with privacy and fairness constraints. In International conference on machine learning, pages 2544–2553. PMLR, 2018. + +[24] Sheng-jun Huang, Rong Jin, and Zhi-Hua Zhou. Active learning by querying informative and representative examples. In J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems, volume 23. Curran Associates, Inc., 2010. +[25] Zheng Wang and Jieping Ye. Querying discriminative and representative samples for batch mode active learning. ACM Trans. Knowl. Discov. Data, 9(3), feb 2015. +[26] Christos Boutsidis, Petros Drineas, and Michael W Mahoney. Unsupervised feature selection for the k-means clustering problem. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems, volume 22. Curran Associates, Inc., 2009. +[27] Qi Liu, Biao Xiang, Enhong Chen, Hui Xiong, Fangshuang Tang, and Jeffrey Xu Yu. Influence maximization over large-scale social networks: A bounded linear approach. In Proceedings of the 23rd ACM international conference on conference on information and knowledge management, pages 171–180, 2014. +[28] M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey. An analysis of approximations for maximizing submodular set functions. CORE Discussion Papers RP, 1978. +[29] Susan E Embretson and Steven P Reise. Item response theory. Psychology Press, 2013. +[30] Michel Minoux. 
Accelerated greedy algorithms for maximizing submodular set functions. In J. Stoer, editor, Optimization Techniques, pages 234-243, Berlin, Heidelberg, 1978. Springer Berlin Heidelberg. +[31] Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne VanBriesen, and Natalie Glance. Cost-effective outbreak detection in networks. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '07, page 420-429, New York, NY, USA, 2007. Association for Computing Machinery. +[32] Yan Zhuang, Qi Liu, Zhenya Huang, Zhi Li, Binbin Jin, Haoyang Bi, Enhong Chen, and Shijin Wang. A robust computerized adaptive testing approach in educational question retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, page 416-426, New York, NY, USA, 2022. Association for Computing Machinery. +[33] Qi Liu, Zhenya Huang, Yu Yin, Enhong Chen, Hui Xiong, Yu Su, and Guoping Hu. Ekt: Exercise-aware knowledge tracing for student performance prediction. IEEE Transactions on Knowledge and Data Engineering, 33(1):100-115, 2019. +[34] Zachary A Pardos, Ryan SJD Baker, Maria OCZ San Pedro, Sujith M Gowda, and Supreeth M Gowda. Affective states and state tests: Investigating how affect throughout the school year predicts end of year learning outcomes. In Proceedings of the third international conference on learning analytics and knowledge, pages 117-124, 2013. +[35] Zichao Wang, Angus Lamb, Evgeny Saveliev, Pashmina Cameron, Yordan Zaykov, José Miguel Hernández-Lobato, Richard E Turner, Richard G Baraniuk, Craig Barton, Simon Peyton Jones, Simon Woodhead, and Cheng Zhang. Diagnostic questions: The neurips 2020 education challenge. arXiv preprint arXiv:2007.12061, 2020. +[36] Fei Wang, Qi Liu, Enhong Chen, Zhenya Huang, Yuying Chen, Yu Yin, Zai Huang, and Shijin Wang. Neural cognitive diagnosis for intelligent education systems. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6153-6161, 2020. +[37] Mark D. Reckase. Multidimensional Item Response Theory Models, pages 79-112. Springer New York, New York, NY, 2009. +[38] Andreas Tscher. Collaborative filtering applied to educational data mining. Journal of Machine Learning Research, 2010. +[39] Michel C. Desmarais. Mapping question items to skills with non-negative matrix factorization. SIGKDD Explor. Newsl., 13(2):30-36, may 2012. + +[40] Anita Krishnakumar. Active learning literature survey. 07 2007. +[41] Maksims Volkovs, Guangwei Yu, and Tomi Poutanen. Dropoutnet: Addressing cold start in recommender systems. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. +[42] Ucik Fitri Handayani, Cholis Sa'dijah, Sisworo Sisworo, Mukhtamilatus Sa'diyah, and Lathiful Anwar. Mathematical creative thinking skill of middle-ability students in solving contextual problems. volume 2215, page 060007, 04 2020. +[43] Zhemin Zhu, David Arthur, and Hua-Hua Chang. A new person-fit method based on machine learning in cdm in education. British Journal of Mathematical and Statistical Psychology, 75(3):616-637, 2022. +[44] Duc Nguyen and Anderson Ye Zhang. A spectral approach to item response theory. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 38818-38830. Curran Associates, Inc., 2022. +[45] Xinping Wang, Caidie Huang, Jinfang Cai, and Liangyu Chen. Using knowledge concept aggregation towards accurate cognitive diagnosis. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 2010–2019, 2021. +[46] Weibo Gao, Qi Liu, Zhenya Huang, Yu Yin, Haoyang Bi, Mu-Chun Wang, Jianhui Ma, Shijin Wang, and Yu Su. 
Rcd: Relation map driven cognitive diagnosis for intelligent education systems. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 501-510, 2021. +[47] Lawrence M Rudner. An examination of decision-theory adaptive testing procedures. In annual meeting of the American Educational Research Association, 2002. +[48] Wim J van der Linden. Bayesian item selection criteria for adaptive testing. Psychometrika, 63(2):201-216, 1998. +[49] Wim JJ Veerkamp and Martijn PF Berger. Some new item selection criteria for adaptive testing. Journal of Educational and Behavioral Statistics, 22(2):203-226, 1997. +[50] Ronald L Flaughter. The many definitions of test bias. American Psychologist, 33(7):671, 1978. +[51] Amina Adadi. A survey on data-efficient algorithms in big data era. Journal of Big Data, 8(1):24, Jan 2021. +[52] Baharan Mirzasoleiman, Kaidi Cao, and Jure Leskovec. Coresets for robust training of deep neural networks against noisy labels. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 11465-11477. Curran Associates, Inc., 2020. +[53] Sariel Har-Peled and Soham Mazumdar. On coresets for k-means and k-median clustering. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, STOC '04, page 291-300, New York, NY, USA, 2004. Association for Computing Machinery. +[54] Hongbin Pei, Bo Yang, Jiming Liu, and Kevin Chen-Chuan Chang. Active surveillance via group sparse bayesian learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3):1133-1148, 2022. +[55] Yi-Fan Yan, Sheng-Jun Huang, Shaoyi Chen, Meng Liao, and Jin Xu. Active learning with query generation for cost-effective text classification. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):6583-6590, Apr. 2020. 
[56] Wenbin Cai, Yexun Zhang, Ya Zhang, Siyuan Zhou, Wenquan Wang, Zhuoxiang Chen, and Chris Ding. Active learning for classification with maximum model change. ACM Trans. Inf. Syst., 36(2), aug 2017.
[57] Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A. Efros. Dataset distillation, 2018.
[58] Angela Nedic and Dimitri Bertsekas. Convergence rate of incremental subgradient algorithms. In Stochastic optimization: algorithms and applications, pages 223-264. Springer, 2001.
[59] George B Arfken and Hans J Weber. Mathematical methods for physicists, 1999.
[60] Satoru Fujishige. Submodular systems and related topics. In Mathematical Programming at Oberwolfach II, pages 113-131. Springer, 1984.
[61] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR (Poster)*, 2015.
[62] Andrew P Bradley. The use of the area under the roc curve in the evaluation of machine learning algorithms. Pattern recognition, 30(7):1145-1159, 1997.
[63] Ying Cheng. When cognitive diagnosis meets computerized adaptive testing: Cd-cat. Psychometrika, 74(4):619-632, 2009.

# A Proofs of Lemma 1

We first prove the following lemma.

Lemma 1. When we replace the original gradient difference in $w(i,j)$ with $\widetilde{w}(i,j)$ , the corresponding selection algorithm using the submodular function $\widetilde{F}$ is in fact approximately solving the following optimization problem:

$$
\min _ {| S | = T} \max _ {\theta \in \Theta} \mathbb {E} _ {y} \left[ \| \sum_ {j \in S} \gamma_ {j} \nabla l _ {j} (\theta) - \sum_ {i \in Q} \nabla l _ {i} (\theta) \| \right] \tag {13}
$$

Proof. Following the theoretical analysis of the recent Coreset method CRAIG [15], we first define a mapping function $h$ from the set $Q$ to $S$ : $\forall i \in Q, h(i) \in S$ . It assigns every response data point $i \in Q$ to one of the elements $j$ in $S$ .
Then, for any arbitrary ability parameter $\theta \in \Theta$ we can write

$$
\begin{array}{l} \sum_ {i \in Q} \nabla l _ {i} (\theta) = \sum_ {i \in Q} \left[ \nabla l _ {i} (\theta) - \nabla l _ {h (i)} (\theta) + \nabla l _ {h (i)} (\theta) \right] \quad (14) \\ = \sum_ {i \in Q} \left[ \nabla l _ {i} (\theta) - \nabla l _ {h (i)} (\theta) \right] + \sum_ {j \in S} \gamma_ {j} \nabla l _ {j} (\theta) \quad (15) \end{array}
$$

Subtracting the two sides and taking the expected norm, we get an upper bound on the error. By the triangle inequality, we have

$$
\mathbb {E} \left[ \| \sum_ {i \in Q} \nabla l _ {i} (\theta) - \sum_ {j \in S} \gamma_ {j} \nabla l _ {j} (\theta) \| \right] \leq \sum_ {i \in Q} \mathbb {E} \left[ \| \nabla l _ {i} (\theta) - \nabla l _ {h (i)} (\theta) \| \right]. \tag {16}
$$

When the mapping function $h$ maps each element of $Q$ to the element of $S$ closest to it in expected gradient, i.e., $h(i) = \arg \min_{j\in S}\mathbb{E}\left[\| \nabla l_i(\theta) - \nabla l_j(\theta)\| \right]$ , the right-hand side of inequality (16) is minimized. Therefore, the upper bound on the expected gradient difference can be further constrained:

$$
\min _ {| S | = T} \mathbb {E} \left[ \| \sum_ {i \in Q} \nabla l _ {i} (\theta) - \sum_ {j \in S} \gamma_ {j} \nabla l _ {j} (\theta) \| \right] \leq \sum_ {i \in Q} \min _ {j \in S} \mathbb {E} \left[ \| \nabla l _ {i} (\theta) - \nabla l _ {j} (\theta) \| \right]. \tag {17}
$$

Next, define a similarity function $\widetilde{w}(i,j)$ which measures the expected gradient similarity between the response pair $i$ and $j$ : $\widetilde{w}(i,j) = d - \max_{\theta \in \Theta} \mathbb{E}\left[\|\nabla l_i(\theta) - \nabla l_j(\theta)\|\right]$ , where $d = \max_{i \in Q, j \in S} \max_{\theta \in \Theta} \|\nabla l_i(\theta) - \nabla l_j(\theta)\|$ is the maximum pairwise gradient distance.
Thus, the optimization problem (Eq. (13)) can also be transformed into:

$$
\max _ {| S | = T} \sum_ {i \in Q} \max _ {j \in S} \widetilde {w} (i, j). \tag {18}
$$

In the same way as for the original problem, the corresponding submodular objective is $\widetilde{F}(S) = \sum_{i \in Q} \max_{j \in S} \widetilde{w}(i,j)$ , which coincides with our proposed method. Thus, the designed selection algorithm is the greedy algorithm for the optimization problem (Eq. (13)).

# B Proofs of Theorem 1

Theorem 1 (Expected estimation error bound). Assume that the loss function for ability estimation is $\alpha$ -strongly convex (e.g., IRT). Let $S$ be a weighted subset obtained by the proposed method. Then, with learning rate $\frac{1}{\alpha}$ , gradient-descent ability estimation applied to the subset has the following expected estimation error bound:

$$
\mathbb {E} \left[ \| \theta^ {t + 1} - \theta^ {*} \| ^ {2} \right] \leq \frac {2 \epsilon D \alpha + \sigma_ {l} ^ {2} + 2 \sigma_ {f} D \alpha H _ {p} \left(\theta^ {t} , \theta^ {*}\right)}{\alpha^ {2}}, \tag {19}
$$

$$
\text {where } H _ {p} \left(\theta^ {t}, \theta^ {*}\right) = \mathbb {E} _ {(q, y) \sim p _ {\theta^ {t}}} \left[ \frac {1}{p _ {\theta^ {*}} (q , y)} \right], \tag {20}
$$

$\theta^{*}$ is the optimal estimate using the full responses, $\sigma_{l}$ and $\sigma_{f}$ are upper bounds on the norms of the gradients $\nabla l_{i}$ and $\nabla f_{i}$ respectively, and $D = \max_{\theta} \| \theta - \theta^{*} \|$ .

Proof. We provide the expected estimation error bound for strongly convex functions, building on the analysis of [15, 58]. Let $g^{t} = \frac{1}{|Q|}\sum_{i\in Q}\nabla l_{i}(\theta^{t})$ , $g_{S}^{t} = \sum_{i\in S}\gamma_{i}\nabla l_{i}(\theta^{t})$ , and normalize the subset weights at every iteration, i.e., $\sum_{j\in S}\gamma_{j} = 1$ .
Let $L(\theta) = \sum_{i\in S}\gamma_{i}l_{i}(\theta)$ be the weighted subset training loss parameterized by the ability parameter $\theta$ . We have:

$$
\begin{array}{l} \left\| \theta^ {t + 1} - \theta^ {*} \right\| ^ {2} = \left\| \theta^ {t} - \eta g _ {S} ^ {t} - \theta^ {*} \right\| ^ {2} \\ = \| \theta^ {t} - \theta^ {*} \| ^ {2} - 2 \eta \left(g _ {S} ^ {t}\right) ^ {\top} \left(\theta^ {t} - \theta^ {*}\right) + \eta^ {2} \| g _ {S} ^ {t} \| ^ {2} \\ \leq \left\| \theta^ {t} - \theta^ {*} \right\| ^ {2} - 2 \eta \left[ L \left(\theta^ {t}\right) - L \left(\theta^ {*}\right) \right] + \eta^ {2} \left\| g _ {S} ^ {t} \right\| ^ {2} \\ \leq \| \theta^ {t} - \theta^ {*} \| ^ {2} - 2 \eta \left[ \left(g _ {S} ^ {*}\right) ^ {\top} \left(\theta^ {t} - \theta^ {*}\right) + \frac {\alpha}{2} \| \theta^ {t} - \theta^ {*} \| ^ {2} \right] \\ \quad + \eta^ {2} \| g _ {S} ^ {t} \| ^ {2} \qquad (\alpha \text {-strong convexity}). \tag {21} \end{array}
$$

By the Cauchy-Schwarz inequality, we have

$$
\left| \left(g _ {S} ^ {*}\right) ^ {\top} \left(\theta^ {t} - \theta^ {*}\right) \right| \leq \left\| g _ {S} ^ {*} \right\| \left\| \theta^ {t} - \theta^ {*} \right\|. \tag {22}
$$

Thus

$$
\left\| \theta^ {t + 1} - \theta^ {*} \right\| ^ {2} \leq \left\| \theta^ {t} - \theta^ {*} \right\| ^ {2} + 2 \eta \| g _ {S} ^ {*} \| \| \theta^ {t} - \theta^ {*} \| - \eta \alpha \| \theta^ {t} - \theta^ {*} \| ^ {2} + \eta^ {2} \| g _ {S} ^ {t} \| ^ {2}. \tag {23}
$$

Taking expectation with respect to the randomness in the labels (i.e., the correctness of responses) governed by $\theta^t$ , we have

$$
\begin{array}{l} \mathbb {E} \left[ \| \theta^ {t + 1} - \theta^ {*} \| ^ {2} \right] \leq (1 - \eta \alpha) \mathbb {E} \left[ \| \theta^ {t} - \theta^ {*} \| ^ {2} \right] \\ \quad + 2 \eta \, \mathbb {E} \left[ \| g _ {S} ^ {*} \| \| \theta^ {t} - \theta^ {*} \| \right] + \eta^ {2} \, \mathbb {E} \left[ \| g _ {S} ^ {t} \| ^ {2} \right]. \tag {24} \end{array}
$$

Assume the gradients have bounded norms, $\| \nabla l_{j}(\theta)\| \leq \sigma_{l}$ and $\| \nabla f_j(\theta)\| \leq \sigma_f$ . Then, by the triangle inequality and $\sum_{j\in S}\gamma_{j} = 1$ , we can write

$$
\mathbb {E} \left[ \| g _ {S} ^ {t} \| ^ {2} \right] = \mathbb {E} \left[ \| \sum_ {j \in S} \gamma_ {j} \nabla l _ {j} \left(\theta^ {t}\right) \| ^ {2} \right] \leq \sigma_ {l} ^ {2}. \tag {25}
$$

From Lemma 1, we can assume that the subset $S$ and the corresponding per-element weights $\gamma_{j}$ approximate the full gradient with an expected error of at most $\epsilon >0$ , i.e., $\mathbb{E}_{y\sim p_{\theta^t}}\left[\| \sum_{i\in Q}\nabla l_i(\theta) - \sum_{j\in S}\gamma_j\nabla l_j(\theta)\|\right]\leq \epsilon$ . Thus, by the triangle inequality, $\mathbb{E}[\| g_S^*\| ]\leq \mathbb{E}[\| g^*\| ] + \epsilon$ , and $\mathbb{E}[\| g^{*}\| ]$ can be further bounded as follows:

$$
\begin{array}{l} \mathbb {E} [ \| g ^ {*} \| ] = \frac {1}{| Q |} \mathbb {E} \left[ \| \sum_ {i \in Q} \nabla l _ {i} (\theta^ {*}) \| \right] \leq \frac {1}{| Q |} \sum_ {i \in Q} \mathbb {E} \left[ \| \nabla l _ {i} (\theta^ {*}) \| \right] \\ = \frac {1}{| Q |} \sum_ {i \in Q} \mathbb {E} _ {y _ {i} \sim p _ {\theta^ {t}}} \left[ \| \nabla_ {\theta = \theta^ {*}} \left(- y _ {i} \ln f _ {i} (\theta) - (1 - y _ {i}) \ln (1 - f _ {i} (\theta))\right) \| \right] \\ = \frac {1}{| Q |} \sum_ {i \in Q} \mathbb {E} _ {y _ {i} \sim p _ {\theta^ {t}}} \left[ \| - \frac {y _ {i}}{f _ {i} \left(\theta^ {*}\right)} \nabla f _ {i} \left(\theta^ {*}\right) + \frac {1 - y _ {i}}{1 - f _ {i} \left(\theta^ {*}\right)} \nabla f _ {i} \left(\theta^ {*}\right) \| \right] \\ = \frac {1}{| Q |} \sum_ {i \in Q} \| \nabla f _ {i} \left(\theta^ {*}\right) \| \, \mathbb {E} _ {y _ {i}} \left| \frac {y _ {i}}{f _ {i} \left(\theta^ {*}\right)} - \frac {1 - y _ {i}}{1 - f _ {i} \left(\theta^ {*}\right)} \right| \\ = \frac {1}{| Q |} \sum_ {i \in Q} \| \nabla f _ {i} \left(\theta^ {*}\right) \| \sum_ {y \in \{0, 1 \}} \frac {p _ {\theta^ {t}} \left(q _ {i} , y _ {i} = y\right)}{p _ {\theta^ {*}} \left(q _ {i} , y _ {i} = y\right)} \\ = \frac {1}{| Q |} \sum_ {i \in Q} \| \nabla f _ {i} \left(\theta^ {*}\right) \| \, \mathbb {E} _ {\left(q _ {i}, y _ {i}\right) \sim p _ {\theta^ {t}}} \left[ \frac {1}{p _ {\theta^ {*}} \left(q _ {i} , y _ {i}\right)} \right] \\ \leq \sigma_ {f} \, \mathbb {E} _ {(q, y) \sim p _ {\theta^ {t}}} \left[ \frac {1}{p _ {\theta^ {*}} (q , y)} \right], \tag {26} \end{array}
$$

where $p_{\theta}(q,y)$ is the response distribution of a student with ability $\theta$ , and $f_{i}(\theta) = p_{\theta}(q_{i},y_{i} = 1)$ is the output of IRT. Also assuming that $\| \theta - \theta^{*} \| \leq D$ , we have

$$
\begin{array}{l} \mathbb {E} \left[ \| g _ {S} ^ {*} \| \| \theta^ {t} - \theta^ {*} \| \right] \leq D \mathbb {E} [ \| g ^ {*} \| ] + \epsilon D \\ \leq \sigma_ {f} D \, \mathbb {E} _ {(q, y) \sim p _ {\theta^ {t}}} \left[ \frac {1}{p _ {\theta^ {*}} (q , y)} \right] + \epsilon D. \tag {27} \end{array}
$$

Combining Equations (24), (25) and (27) and letting $\eta = \frac{1}{\alpha}$ , we have

$$
\begin{array}{l} \mathbb {E} \left[ \| \theta^ {t + 1} - \theta^ {*} \| ^ {2} \right] \leq \frac {2 \epsilon D \alpha + \sigma_ {l} ^ {2}}{\alpha^ {2}} + \frac {2 \sigma_ {f} D}{\alpha} \mathbb {E} _ {(q, y) \sim p _ {\theta^ {t}}} \left[ \frac {1}{p _ {\theta^ {*}} (q , y)} \right] \\ = \frac {2 \epsilon D \alpha + \sigma_ {l} ^ {2} + 2 \sigma_ {f} D \alpha H _ {p} \left(\theta^ {t} , \theta^ {*}\right)}{\alpha^ {2}}, \tag {28} \end{array}
$$

where $H_{p}(\theta^{t},\theta^{*}) = \mathbb{E}_{(q,y)\sim p_{\theta^{t}}}\left[\frac{1}{p_{\theta^{*}}(q,y)}\right]$ . This completes the proof.

# C Proofs of Theorem 2

Theorem 2.
Assume that $\theta^t$ , estimated via the cross-entropy loss, minimizes the empirical risk, i.e., $\sum_{i\in S_t}l_i(\theta^t) = 0$ . Then $H_{p}(\theta ,\theta^{*})$ attains its minimum at $\theta = \theta^t$ , that is,

$$
H _ {p} \left(\theta^ {t}, \theta^ {*}\right) \leq H _ {p} \left(\theta , \theta^ {*}\right), \quad \forall \theta \in \Theta \tag {29}
$$

Proof. When we use the binary cross-entropy (BCE) loss to estimate the ability $\theta$ and minimize the empirical risk at step $t$ , we have

$$
\begin{array}{l} \theta^ {t} = \arg \min _ {\theta} \sum_ {i \in S _ {t}} l _ {i} (\theta) \\ = \arg \max _ {\theta} \sum_ {i \in S _ {t}} y _ {i} \log f _ {i} (\theta) + (1 - y _ {i}) \log (1 - f _ {i} (\theta)) \\ = \arg \max _ {\theta} \sum_ {i \in S _ {t}} \log p _ {\theta} \left(q _ {i}, y _ {i}\right) \\ \approx \arg \max _ {\theta} \mathbb {E} _ {(q, y) \sim p _ {\theta_ {0}}} \log p _ {\theta} (q, y), \tag {30} \end{array}
$$

where $f_{i}(\theta) = p_{\theta}(q_{i},y_{i} = 1)$ is the output of the IRT model. We argue that the student's response to a question $q$ is determined by the true ability $\theta_0$ in the CAT process, i.e., $y_{i} = \arg \max_{y}p_{\theta_{0}}(q_{i},y)$ . Therefore, when the above empirical risk attains its minimum, i.e., $\sum_{i\in S_t}l_i(\theta^t) = 0$ , $\mathbb{E}_{(q,y)\sim p_{\theta_0}}\log p_\theta (q,y)$ reaches its maximum of 0, and the real and predicted responses coincide:

$$
y _ {\max } = \arg \max _ {y} p _ {\theta^ {t}} (q, y) = \arg \max _ {y} p _ {\theta_ {0}} (q, y), \quad \forall q, \tag {31}
$$

and the predicted probability approaches 1: $p_{\theta^t}(q, y_{\max}) \to 1$ .

Next, we discuss the minimization of the statistical distance term $H_{p}(\theta^{t},\theta^{*}) = \mathbb{E}_{(q,y)\sim p_{\theta^{t}}}[1 / p_{\theta^{*}}(q,y)]$ in the upper bound of Theorem 1.
We now make the simplifying assumption that the distribution $p_{\theta^*}(q,y)$ is smooth and, for each question $q$ , has a global maximum probability $p_{max}$ attained at the point $(q,y_{max})$ , so that $1 / p_{\theta^{*}}(q,y)$ attains its global minimum at the same $y_{max}$ . We can then choose

$$
p _ {\theta^ {t}} (q, y) = \delta \left(y - y _ {\max }\right), \tag {32}
$$

where $\delta$ is the delta function [59], which drives $\mathbb{E}_{(q,y)\sim p_{\theta^t}}[1 / p_{\theta^*}(q,y)]$ to its minimum. Based on the above findings, in the CAT setting (binary classification), the optimal distribution $p_{\theta^t}$ (minimizing $H_{p}$ in the upper bound) needs to: (i) produce the same classification result as $p_{\theta^*}$ , i.e., $y_{max} = \arg \max_y p_{\theta^t}(q,y) = \arg \max_y p_{\theta^*}(q,y) \approx \arg \max_y p_{\theta_0}(q,y)$ ; and (ii) assign it a probability that is as large as possible, i.e., $p_{\theta^t}(q,y_{max}) \to 1$ . In this case, $H_{p}(\theta^{t},\theta^{*})$ attains its minimum. All of these findings are consistent with the conclusions drawn from minimizing the BCE loss for ability estimation in Eqs. (30) and (31). This completes the proof.

![](images/dd0b4c854ec87341e277c1b5e0dfe670b1e5b5210257e4cb7e27fd1ea9cc331a.jpg)
Figure 4: The illustration of the distance measurements in the similarity function $\widetilde{w}(i,j)$ : replace the entire ability space $\Theta$ with the student's possible ability estimates $\Theta_{t} = \{\theta_{i}^{t}\}_{i=1}^{m}$ to reduce the search space and speed up the selection.

# D Implementation Details of BECAT

To improve its complexity, we provide two implementation tricks:

(1) Lazy Evaluations. By exploiting submodularity, we use the lazy-evaluation approach presented in [30, 31] to speed up the selection process and make the running time faster in practice. At step $t$ , the greedy selection algorithm must identify the question $q$ with maximum marginal gain $\Delta(q|S_{t-1})$ .
Instead, this lazy method uses a max-heap ($O(1)$ lookup and $O(\log n)$ insertion) to maintain an upper bound on the gain of each question, which follows from submodularity: the marginal benefit of any question $q \in Q$ is monotonically nonincreasing during the selection:

$$
\Delta (q \mid S _ {t - 1}) \geq \Delta (q \mid S _ {t}) \quad \forall q \in Q. \tag {33}
$$

Instead of recomputing $\Delta(q|S_{t-1})$ at each step for each element $q \in Q$ (requiring $O(|Q|)$ computations), the accelerated lazy algorithm maintains a list of upper bounds $\Delta'(q)$ (initialized to $\infty$ ) on the marginal gains, sorted in decreasing order (max-heap order). Specifically, at each step, the algorithm first selects the maximal element from this ordered list, i.e., the top of the heap. It then updates this bound, $\Delta'(q) \gets \Delta(q|S_{t-1})$ , in the heap. If the updated $\Delta'(q)$ remains at the top of the heap, then submodularity (Eq. (33)) guarantees that $\Delta(q|S_{t-1}) \geq \Delta(q'|S_{t-1})$ for all $q' \neq q$ , and therefore we do not need to evaluate any more items. Otherwise, we reinsert it with $\Delta'(q)$ as the new upper bound and repeat the above procedure until a qualifying question is selected. While the worst case is the same, in practice this method yields enormous speedups over the standard greedy algorithm [60].

(2) Reducing the Ability Space $\Theta$ . In our method, note that the similarity function $\widetilde{w}(i,j) = d - \max_{\theta \in \Theta} \mathbb{E}_{y \sim p_{\theta t}}[\|\nabla l_i(\theta) - \nabla l_j(\theta)\|]$ requires finding the worst case over the entire ability parameter space $\Theta$ , which is too expensive for CAT systems. In fact, it only needs to be computed over each student's own possible ability space. In other words, we can find an ability subspace $\Theta_t \subseteq \Theta$ specialized to each student.
We utilize a novel MLE-based estimation approach [32], which models the student's multifaceted nature and sequentially generates a set of possible abilities $\Theta_t = \{\theta_i^t\}_{i=1}^m$ at each step $t$ :

$$
\theta_ {i} ^ {t} = \arg \min _ {\theta_ {i}} \sum_ {j \in S _ {t}} l _ {j} \left(\theta_ {i}\right) - \frac {\lambda}{2} \left\| \theta_ {i} - \bar {\theta} _ {i} \right\| ^ {2} \quad \text {for } i = 1, \dots , m, \tag {34}
$$

where $\bar{\theta}_i$ is the average of the previous $i - 1$ estimates and the term $\left\| \theta_i - \bar{\theta}_i\right\|^2$ ensures the diversity of the abilities in $\{\theta_i^t\}_{i = 1}^m$ . We refer the reader to [32] for more details about this estimation method. As shown in Figure 4, we replace the entire parameter space in the optimization problem with the student's $m$ potential estimates $\Theta_t = \{\theta_i^t\}_{i = 1}^m$ . The complexity is thus reduced to $O(|Q|^2 m)$ , with $m \ll |\Theta|$ .

# E Details of Experiment

The impact of test length. We use simulation experiments to verify the choice of the max length $T$ : since the true ability $\theta_0$ is unknown, we artificially generate it and conduct the Simulation

![](images/135ba4d6a130d4646516ea66b235d5114eb56de101650f9095f35458185f6564.jpg)
Figure 5: Simulation experiments of ability estimation using MSE: $\mathbb{E}[\| \theta^t -\theta_0\| ^2 ]$

of Ability Estimation experiment on the EXAM dataset, using the mean square error $\mathbb{E}[\| \theta^t -\theta_0\| ^2 ]$ between the ability estimate $\theta^t$ at each step and the true ability $\theta_0$ (Figure 5). The classic Fisher method reduces the evaluation error quickly, and 20 is sufficient for the length of a typical adaptive test. Thus, we fix the max length $T = 20$ .

Experimental Implementation Details. As 20 is sufficient for the length of a typical test [13], we also fix the max length $T = 20$ . We implement all the methods with PyTorch.
We set the batch size to 64 and the learning rate to 0.001, and optimize all parameters using the Adam algorithm [61] on a Tesla V100-SXM2-32GB GPU.

# E.1 Statistics of the datasets

Table 2: Statistics of the datasets
| Dataset | ASSIST | NIPS-EDU | EXAM |
| --- | --- | --- | --- |
| #Students | 20,704 | 220,274 | 9,214 |
| #Questions | 15,071 | 27,613 | 1,650 |
| #Response logs | 1,768,253 | 19,181,192 | 133,398 |
| #Response logs per student | 85.41 | 87.08 | 14.48 |
| #Response logs per question | 117.33 | 694.64 | 80.85 |

# E.2 Detailed Evaluation Method

The goal of CAT is to estimate the student's ability accurately with the fewest steps. However, since the true ability cannot be obtained as ground truth, two tasks are usually used to verify the performance of different CAT methods, following previous works [9, 12]: 1) Student Score Prediction and 2) Simulation of Ability Estimation.

1) Student Score Prediction. To evaluate the ability estimate output by the CAT system, this estimate can be used to predict the student's scores (correct or wrong) on the questions he/she has answered in the held-out response data. This is an indirect evaluation method. Following the common strategy [12], we use $70\% - 20\% - 10\%$ of the students for training, validation, and testing respectively, and the students in the validation/testing sets do not appear in training. The training set is used for initializing some question parameters (e.g., the difficulty parameter in IRT) and the data-driven selection algorithm baselines (e.g., BOBCAT). In validation/testing, the responses of each student $i$ are further divided into the candidate $(Q_{i})$ and meta $(M_{i})$ question sets to simulate the CAT procedure, following [9, 12]. Specifically, at each step, (1) the different selection algorithms first select a question from $Q_{i}$ ; (2) IRT then updates the ability estimate with his/her response to it; (3) the accuracy of this estimate is evaluated by predicting the binary-valued responses (correct or wrong) on the held-out meta set $M_{i}$ . This task assumes that the more accurate the score prediction is, the more accurate the ability estimate is.

![](images/c9a39223b3d920ae820d0666478e251d160072b5e1faba65cf3393f10b20527e.jpg)
Figure 6: Comparison of the time consumed by different methods at the selection step. The implementation tricks provide $2 \times$ to $3 \times$ speedup.
Thus, from this binary classification perspective, we use Prediction Accuracy (ACC) [46] and Area Under the ROC Curve (AUC) [62] to evaluate the different selection algorithms.

2) Simulation of Ability Estimation. This is CAT's traditional evaluation method [2]. Since the ground truth of student ability $\theta_0$ is not available, we artificially generate $\theta_0$ and simulate the student-question interaction process within CAT systems. To make the generated $\theta_0$ realistic, we use all the students' responses in the dataset to estimate their abilities $\{\theta_0^1, \theta_0^2, \dots, \theta_0^N\}$ as the ground truth [9, 63]. The dataset is also used to learn the questions' parameters, which are then fixed. Different from the first task, this setting can simulate students with ability $\theta_0$ responding to any question in $Q$ , so the candidate set during selection is the entire bank $Q$ . Specifically, at each step, (1) the different selection algorithms first select a question from the entire bank $Q$ ; (2) IRT then updates the estimate with the response to it; (3) the accuracy of this estimate is evaluated by computing the difference between the estimated and the true ability. In this way, we can use the Mean Square Error (MSE) to evaluate the accuracy of the estimation.

# E.3 Implementation Tricks for Speedups

To solve the BECAT optimization problem, we need to calculate the gradients of all items in the question bank, leading to high computation requirements for large datasets/examinations. To this end, we consider two speed-up tricks: lazy evaluations and multifaceted estimation. Lazy evaluations take advantage of submodularity to avoid calculating the conditional gain $\Delta(q|S_{t-1})$ of all the candidate items. The multifaceted estimation method [32] effectively reduces the ability space when calculating the similarity between two items, thus reducing the time to calculate $\Delta(q|S_{t-1})$ for each item.
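To make the lazy-evaluation trick concrete, here is a minimal Python sketch of the lazy greedy loop. The `gain` callback is a hypothetical stand-in for the marginal gain $\Delta(q \mid S_{t-1})$ ; the cached upper bounds are sound only when the objective is submodular, i.e., when Eq. (33) holds.

```python
import heapq

# Minimal sketch of lazy greedy selection (trick (1)); `gain` is a hypothetical
# stand-in for the marginal gain Delta(q | S_{t-1}).

def lazy_greedy(questions, gain, k):
    selected = []
    # Python's heapq is a min-heap, so store negated bounds; initialize to +inf.
    heap = [(-float("inf"), q) for q in questions]
    heapq.heapify(heap)
    while len(selected) < k and heap:
        _, q = heapq.heappop(heap)
        fresh = gain(q, selected)  # recompute Delta(q | S_{t-1}) lazily
        if not heap or fresh >= -heap[0][0]:
            # q's fresh gain still dominates every cached upper bound,
            # so by Eq. (33) it is the true maximizer at this step.
            selected.append(q)
        else:
            heapq.heappush(heap, (-fresh, q))  # tighten the bound and retry
    return selected

# Toy modular objective: the gain of q never depends on the current selection.
print(lazy_greedy([1, 2, 3, 4], lambda q, S: q, k=2))  # [4, 3]
```

With a genuinely submodular gain, most heap pops terminate without recomputing the gains of the remaining candidates, which is where the practical speedup reported in Figure 6 comes from.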
In Figure 6, we compare the time (in seconds) spent on question selection by the different methods, and find that the proposed implementation tricks give an average speedup of $3\times$ , and up to $7\times$ over the recent MAAT. The selection time is almost the same as that of the traditional informativeness-based method KLI. This demonstrates that our greedy selection algorithm in BECAT is fast in practice.

# E.4 BECAT Analysis

In this section, we further analyze BECAT's selection time/latency, the effectiveness of Theorems 1 and 2, and the characteristics of the questions it selects.

Upper Bound Analysis of Estimation. In Theorem 1, the most important component is $H_{p}(\theta^{t},\theta^{*})$ , which determines the upper bound on the estimation error and the convergence behavior of the proposed BECAT method. Therefore, to verify the effectiveness and theoretical guarantees of our explicit selection algorithm, i.e., the estimation error bound in Theorem 1, we compute $\bar{H}_{p}$ at each step for a strongly convex loss function: the cross-entropy loss of the L2-regularized IRT. This experiment is still based on the simulation setting in Appendix E.2, and the results are shown in Figure 7(a).

![](images/10e19024c7cde6229d6f04e823560ceb96c8f383b5dd8b840fae798f4cd90102.jpg)
(a)

![](images/9b9128acc0baea26842fe5d666a8945bb6f5a3138dfc234b99725981f7012ec2.jpg)
(b)
Figure 7: (a) The value of $H_{p}(\theta^{t},\theta^{*})$ in the upper bound of estimation. (b) Normed difference between the full gradient on $|Q|$ and the gradient of the question subset found by different methods.

In the first few steps, $H_{p}(\theta^{t},\theta^{*})$ rapidly decreases to its minimum. As the test progresses, the ability estimate tends to become accurate, and the upper-bound term $H_{p}$ remains near the minimum, which reflects the good convergence behavior of BECAT.
This demonstrates that our expected gradient-difference approximation is reliable in practice.

Gradient Approximation. In Proposition 1 we show that $\theta^{*}$ is an approximation of the student's true ability. To approach the new target $\theta^{*}$ , an approximation to the full gradient over the bank $Q$ is required. Figure 7(b) shows the norm of the difference between the weighted gradient of the question subset found by BECAT and the full gradient. The figure also compares the normed gradient difference of the subsets found by other methods, where each response is weighted by $|Q| / |S|$ . The gradient difference is calculated by sampling the full gradient at various points in the parameter space. Note that the gradient difference obtained by BECAT decreases significantly as $t$ increases and is much smaller than that of the other methods, which shows that the expected gradient-difference approximation is accurate. Combining the experimental results on the two tasks in the Experiments section: the better the prediction/estimation performance of a method, the smaller its gradient difference. This demonstrates that closeness to $\theta^{*}$ reflects closeness to the true ability $\theta_{0}$ , which further supports the rationality of using $\theta^{*}$ as the new target.

Table 3: The variance results of different methods on the ACC and AUC metrics.
(a) Variance results on ASSIST
| Method | IRT ACC/AUC@5 | IRT ACC/AUC@10 | IRT ACC/AUC@20 | NeuralCDM ACC/AUC@5 | NeuralCDM ACC/AUC@10 | NeuralCDM ACC/AUC@20 |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 0.0378/0.0462 | 0.0090/0.0226 | 0.0052/0.0142 | 0.0064/0.0072 | 0.0013/0.0042 | 0.0012/0.0038 |
| FSI | 0.0446/0.0257 | 0.0147/0.0076 | 0.0067/0.0024 | - | - | - |
| KLI | 0.0163/0.0058 | 0.0042/0.0037 | 0.0023/0.0019 | - | - | - |
| MAAT | 0.0121/0.0202 | 0.0082/0.0150 | 0.0162/0.0286 | 0.0123/0.0073 | 0.0083/0.0276 | 0.0253/0.0242 |
| BOBCAT | 0.0120/0.0152 | 0.0052/0.0057 | 0.0165/0.0148 | 0.0054/0.0066 | 0.0047/0.0024 | 0.0126/0.0023 |
| NCAT | 0.0063/0.0041 | 0.0065/0.0055 | 0.0012/0.0007 | 0.0032/0.0031 | 0.0037/0.0029 | 0.0035/0.0020 |
| BECAT | 0.0100/0.0136 | 0.0062/0.0040 | 0.0055/0.0023 | 0.0022/0.0022 | 0.0019/0.0011 | 0.0012/0.0010 |
+ +(b) Variance results on NIPS-EDU + +
| Method | IRT ACC/AUC@5 | IRT ACC/AUC@10 | IRT ACC/AUC@20 | NeuralCDM ACC/AUC@5 | NeuralCDM ACC/AUC@10 | NeuralCDM ACC/AUC@20 |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 0.0185/0.0406 | 0.0183/0.0431 | 0.0182/0.0478 | 0.0185/0.0406 | 0.0183/0.0431 | 0.0182/0.0478 |
| FSI | 0.0270/0.0350 | 0.0269/0.0357 | 0.0274/0.0401 | - | - | - |
| KLI | 0.0231/0.0315 | 0.0218/0.0278 | 0.0196/0.0247 | - | - | - |
| MAAT | 0.0207/0.0347 | 0.0232/0.0377 | 0.0256/0.0412 | 0.0192/0.0298 | 0.0216/0.0306 | 0.0253/0.0363 |
| BOBCAT | 0.0203/0.0311 | 0.0185/0.0267 | 0.0169/0.0247 | 0.0197/0.0315 | 0.0190/0.0289 | 0.0200/0.0310 |
| NCAT | 0.0178/0.0246 | 0.0159/0.0214 | 0.0142/0.0198 | 0.0176/0.0258 | 0.0163/0.0232 | 0.0169/0.0246 |
| BECAT | 0.0216/0.0294 | 0.0185/0.0248 | 0.0169/0.0225 | 0.0204/0.0304 | 0.0167/0.0251 | 0.0165/0.0225 |
+ +(c) Variance results on EXAM + +
| Method | IRT ACC/AUC@5 | IRT ACC/AUC@10 | IRT ACC/AUC@20 | NeuralCDM ACC/AUC@5 | NeuralCDM ACC/AUC@10 | NeuralCDM ACC/AUC@20 |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 0.0171/0.0189 | 0.0114/0.0136 | 0.0084/0.0086 | 0.0025/0.0107 | 0.0071/0.0102 | 0.0070/0.0090 |
| FSI | 0.0195/0.0112 | 0.0097/0.0099 | 0.0093/0.0069 | - | - | - |
| KLI | 0.0195/0.0112 | 0.0097/0.0099 | 0.0076/0.0098 | - | - | - |
| MAAT | 0.0193/0.0115 | 0.0104/0.0094 | 0.0082/0.0065 | 0.0191/0.0112 | 0.0182/0.0099 | 0.0053/0.0062 |
| BOBCAT | 0.0036/0.0133 | 0.0070/0.0114 | 0.0086/0.0092 | 0.0142/0.0087 | 0.0095/0.0091 | 0.0012/0.0021 |
| NCAT | 0.0033/0.0146 | 0.0005/0.0004 | 0.0004/0.0004 | 0.0160/0.0120 | 0.0148/0.0090 | 0.0003/0.0003 |
| BECAT | 0.0196/0.0121 | 0.0022/0.0006 | 0.0020/0.0007 | 0.0172/0.0134 | 0.0063/0.0012 | 0.0090/0.0029 |
\ No newline at end of file diff --git a/aboundedabilityestimationforcomputerizedadaptivetesting/images.zip b/aboundedabilityestimationforcomputerizedadaptivetesting/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..88698a8c8efef80a95cf8cdaafa829e3e2497353 --- /dev/null +++ b/aboundedabilityestimationforcomputerizedadaptivetesting/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0408c2a0666a13bfa8465e685b6b03c46d128537a8db9eb24d0d0f341f9370c8 +size 882485 diff --git a/aboundedabilityestimationforcomputerizedadaptivetesting/layout.json b/aboundedabilityestimationforcomputerizedadaptivetesting/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..bc835d4d16723fc964edae6c139bac45cc1d9d6c --- /dev/null +++ b/aboundedabilityestimationforcomputerizedadaptivetesting/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df1e9e274e4fa3204a1570ae90343a54a62e9cc2b595d5ecb93b1cfbaba589cc +size 799982 diff --git a/acausalframeworkfordecomposingspuriousvariations/2f817ec8-0d08-4fec-a83f-e52138b5f28a_content_list.json b/acausalframeworkfordecomposingspuriousvariations/2f817ec8-0d08-4fec-a83f-e52138b5f28a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b877a8446b1311dd7fd78bdb9d755207ed18b99b --- /dev/null +++ b/acausalframeworkfordecomposingspuriousvariations/2f817ec8-0d08-4fec-a83f-e52138b5f28a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8dd2add61251473152c9fdf78a29392be219dc1d4b6570076ea1798760cd1601 +size 136302 diff --git a/acausalframeworkfordecomposingspuriousvariations/2f817ec8-0d08-4fec-a83f-e52138b5f28a_model.json b/acausalframeworkfordecomposingspuriousvariations/2f817ec8-0d08-4fec-a83f-e52138b5f28a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..cc372f7e0c49225bd02cfd72d8cf5154ba60ce2f --- /dev/null +++ 
b/acausalframeworkfordecomposingspuriousvariations/2f817ec8-0d08-4fec-a83f-e52138b5f28a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:56bd7e3a367a4d0efb482792880507132ecba896e5e618f2ad68be1bc6215e26 +size 160728 diff --git a/acausalframeworkfordecomposingspuriousvariations/2f817ec8-0d08-4fec-a83f-e52138b5f28a_origin.pdf b/acausalframeworkfordecomposingspuriousvariations/2f817ec8-0d08-4fec-a83f-e52138b5f28a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..31a23c7947aa9cf9709394c119b8c7b3d9c35d75 --- /dev/null +++ b/acausalframeworkfordecomposingspuriousvariations/2f817ec8-0d08-4fec-a83f-e52138b5f28a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c805857b8c2b9dcbc2a504cf553c94415f2cfd508bd563a448fa71508a2d5d54 +size 598482 diff --git a/acausalframeworkfordecomposingspuriousvariations/full.md b/acausalframeworkfordecomposingspuriousvariations/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bc8b900429536ee8a592e627236b4e3f5849ae33 --- /dev/null +++ b/acausalframeworkfordecomposingspuriousvariations/full.md @@ -0,0 +1,793 @@ +# A Causal Framework for Decomposing Spurious Variations + +Drago Plecko and Elias Bareinboim + +Department of Computer Science + +Columbia University + +dp3144@columbia.edu, eb@cs.columbia.edu + +# Abstract + +One of the fundamental challenges found throughout the data sciences is to explain why things happen in specific ways, or through which mechanisms a certain variable $X$ exerts influences over another variable $Y$ . In statistics and machine learning, significant efforts have been put into developing machinery to estimate correlations across variables efficiently. In causal inference, a large body of literature is concerned with the decomposition of causal effects under the rubric of mediation analysis. 
However, many variations are spurious in nature, including different phenomena throughout the applied sciences. Despite the statistical power to estimate correlations and the identification power to decompose causal effects, there is still little understanding of the properties of spurious associations and how they can be decomposed in terms of the underlying causal mechanisms. In this manuscript, we develop formal tools for decomposing spurious variations in both Markovian and Semi-Markovian models. We prove the first results that allow a non-parametric decomposition of spurious effects and provide sufficient conditions for the identification of such decompositions. The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine, and we empirically demonstrate its use. + +# 1 Introduction + +Understanding the relationships of cause and effect is one of the core tenets of scientific inquiry and the human ability to explain why events occurred in the way they did. Hypotheses on possible causal relations in the sciences are often generated based on observing correlations in the world, after which a rigorous process using either observational or experimental data is employed to ascertain whether the observed relationships are indeed causal. One common way of articulating questions of causation is through the average treatment effect (ATE), also known as the total effect (TE), given by + +$$ +\mathbb {E} [ y \mid d o (x _ {1}) ] - \mathbb {E} [ y \mid d o (x _ {0}) ], \tag {1} +$$ + +where $do(\cdot)$ symbolizes the do-operator [9], and $x_0, x_1$ are two distinct values attained by the variable $X$ . Instead of just quantifying the causal effect, researchers are more broadly interested in determining which causal mechanisms transmit the change from $X$ to $Y$ . Such questions have received much attention and have been investigated under the rubric of causal mediation analysis [3, 12, 10, 14]. 
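As a concrete illustration of Eq. 1, the following sketch estimates the ATE in a small hypothetical linear SCM by simulating the $do(\cdot)$ intervention, i.e., by fixing $X$ so that it ignores its structural parents. The structural equations and coefficients are invented for illustration only.

```python
import random

# Illustrative only: a hypothetical linear SCM with confounder Z -> X, Z -> Y
# and a causal edge X -> Y with coefficient 2.0 (all numbers made up).

def f_y(x, z, u_y):
    return 2.0 * x + 1.5 * z + u_y  # structural equation for Y

def estimate_ate(x1, x0, n=50_000, seed=0):
    """Monte-Carlo estimate of E[y | do(x1)] - E[y | do(x0)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)    # exogenous confounder
        u_y = rng.gauss(0.0, 1.0)  # exogenous noise of Y
        # Under do(X = x), X is fixed and its dependence on Z is cut,
        # so we evaluate both interventions on the same unit (z, u_y).
        total += f_y(x1, z, u_y) - f_y(x0, z, u_y)
    return total / n

print(round(estimate_ate(1, 0), 3))  # 2.0, the structural coefficient of X
```

Because both interventions are evaluated on the same units, the confounder and noise terms cancel and the estimate recovers the structural coefficient of $X$ , while a naive regression of $Y$ on observed $X$ would not.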

Often, however, the causal relationship may be entirely absent or account only for a part of the initially observed correlation. In these cases, the spurious (or confounded) variations between $X$ and $Y$ play a central role in explaining the phenomenon at hand. Interestingly, though, tools for decomposing spurious variations are almost entirely missing from the causal inference literature.

Phenomena in which spurious variations are of central importance are abundant throughout the sciences. For instance, in medicine, the phenomenon called the obesity paradox signifies the counterintuitive association of increased body fat with better survival chances in the intensive care unit (ICU) [6]. While the full explanation is still unclear, evidence in the literature suggests that the relationship is not causal [5], i.e., it is explained by spurious variations. Spurious variations also play a central role in many epidemiological investigations [13]. In occupational epidemiology, for example, the relationship of exposure to hazardous materials with cancer is confounded by other hazardous working conditions and lifestyle characteristics [4], and such spurious variations may themselves be the target of scientific inquiry. Quantities that measure such spurious variations (or a subset thereof) are called spurious effects in this paper.

Spurious variations are key in applications of fair and explainable AI as well. For instance, consider the widely recognized phenomenon in the literature known as redlining [15, 7], in which the location where loan applicants live may correlate with their race. Applications might be rejected based on the zip code, disproportionately affecting certain minority groups.
Furthermore, in the context of criminal justice [8], the association of race with increased probability of being classified as high-risk for recidivism may in part be explained by the spurious association of race with other demographic characteristics (we take a closer look at this issue in Sec. 5). Understanding which confounders affect the relationship, and how strongly, is an important step of explaining the phenomenon, and also determining whether the underlying classifier is deemed as unfair and discriminatory. + +These examples suggest that a principled approach for decomposing spurious variations may be a useful addition to the general toolkit of causal inference, and may find its applications in a wide range of settings from medicine and public health all the way to fair and explainable AI. For concreteness, in this paper we will consider the quantity + +$$ +P (y \mid x) - P (y \mid d o (x)), +$$ + +![](images/cdcf0d6f960ee27adad0409783a51d0cce621aa2fbff8289c677d69937cea580.jpg) +Figure 1: Exp-SE representation. + +which we will call the experimental spurious effect (Exp-SE, for short). This quantity, shown graphically in Fig. 1, captures the difference in variations when observing $X = x$ vs. intervening that $X = x$ , which can be seen as the spurious counterpart of the total effect. Interestingly, the Exp-SE quantity is sometimes evoked in the causal inference literature, i.e., + +$$ +P (y \mid x) - P (y \mid d o (x)) = 0 \tag {2} +$$ + +is known as the zero-bias condition [2, 9, Ch. 6]. This condition allows one to test for the existence of confounding between the variables $X$ and $Y$ . A crucial observation is that, in many cases, the quantity itself may be of interest (instead of only its null), as it underpins the spurious variations. + +Against this background, we note that tools that allow for decomposing the Exp-SE quantity currently do not exist in the literature. 
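To make the Exp-SE quantity concrete, the following sketch enumerates a small hypothetical binary SCM with a confounder $Z$ and no $X \to Y$ edge, so the entire $X$-$Y$ association is spurious. All probability tables are invented for illustration.

```python
# Hypothetical binary SCM: Z -> X, Z -> Y, and no X -> Y edge, so any observed
# X-Y association is purely spurious. We enumerate the joint exactly.

P_Z = {0: 0.5, 1: 0.5}
P_X_given_Z = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}  # P(X=x | Z=z)
P_Y_given_Z = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}  # P(Y=y | Z=z)

def p_y_given_x(y, x):
    # Observational: conditioning on X=x shifts the posterior over Z.
    num = sum(P_Z[z] * P_X_given_Z[z][x] * P_Y_given_Z[z][y] for z in (0, 1))
    den = sum(P_Z[z] * P_X_given_Z[z][x] for z in (0, 1))
    return num / den

def p_y_do_x(y, x):
    # Interventional: do(x) cuts the Z -> X edge, so Z keeps its prior
    # and, with no X -> Y edge, X drops out entirely.
    return sum(P_Z[z] * P_Y_given_Z[z][y] for z in (0, 1))

exp_se = p_y_given_x(1, 1) - p_y_do_x(1, 1)
print(round(exp_se, 3))  # 0.24: a nonzero Exp-SE reveals confounding
```

Here $P(y \mid x) - P(y \mid do(x)) = 0.74 - 0.5 = 0.24$ , a nonzero Exp-SE that certifies confounding via the zero-bias condition of Eq. 2 even though $X$ has no causal effect on $Y$ at all.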
Our goal in this manuscript is to fill in this gap, and provide a formalism that allows for non-parametric decompositions of spurious variations. Specifically, our contributions are the following: + +(i) We introduce the notion of a partially abducted submodel (Def. 1), which underpins the inference procedure called Partial Abduction and Prediction (Alg. 2) (akin to Balke & Pearl 3-step procedure [9, Ch. 7]). Building on this new primitive, we prove the first non-parametric decomposition result for spurious effects in Markovian models (Thm. 1), +(ii) Building on the insights coming from the new procedure, we prove the decomposition result for settings when unobserved confounding is present (Semi-Markovian models) (Thm. 3). +(iii) We develop sufficient conditions for identification of spurious decompositions (Thm 2, 4). + +# 2 Preliminaries + +We use the language of structural causal models (SCMs) as our basic semantical framework [9]. A structural causal model (SCM) is a tuple $\mathcal{M} \coloneqq \langle V, U, \mathcal{F}, P(u) \rangle$ , where $V, U$ are sets of endogenous (observables) and exogenous (latent) variables respectively, $\mathcal{F}$ is a set of functions $f_{V_i}$ , one for each $V_i \in V$ , where $V_i \gets f_{V_i}(\mathrm{pa}(V_i), U_{V_i})$ for some $\mathrm{pa}(V_i) \subseteq V$ and $U_{V_i} \subseteq U$ . $P(u)$ is a strictly positive probability measure over $U$ . Each SCM $\mathcal{M}$ is associated to a causal diagram $\mathcal{G}$ [9] over + +Figure 2: Graphical representations of the SCM in Ex. 1. +![](images/f28f760955cebc4f26209981a1433df6eb65d05c911bcee2695a7ee505825f1e.jpg) +(a) Causal diagram corresponding to the SCM in Ex. 1. + +![](images/5bc3099aab387631d68911864f9cc6a745138a8249ca7288e18bac66b8b95beb.jpg) +(b) Extended representation of Ex. 1, latent variables in red. + +![](images/c27c621b321c47234446a6fe68ee64f8808b9368583733f65c00b1d1de155e33.jpg) +(c) Diagram of Ex. 1 under $do(X = x)$ intervention. 
+ +the node set $V$ where $V_{i}\rightarrow V_{j}$ if $V_{i}$ is an argument of $f_{V_j}$ , and $V_{i}\leftrightarrow V_{j}$ if the corresponding $U_{V_i},U_{V_j}$ are not independent [2]. A model with no bidirected edges is called Markovian, while a model with bidirected edges is called Semi-Markovian. An instantiation of the exogenous variables $U = u$ is called a unit. By $Y_{x}(u)$ we denote the potential response of $Y$ when setting $X = x$ for the unit $u$ , which is the solution for $Y(u)$ to the set of equations obtained by evaluating the unit $u$ in the submodel $\mathcal{M}_x$ , in which all equations in $\mathcal{F}$ associated with $X$ are replaced by $X = x$ . In a slight abuse of notation, we also replace $Y = y$ with just $y$ whenever the former is clear from the context. We next introduce an important inferential procedure for solving different tasks in causal inference. + +# 2.1 Abduction, Action and Prediction + +The steps of the abduction-action-prediction method can be summarized as follows: + +Algorithm 1 (Abduction, Action and Prediction [9]). Given an SCM $\langle \mathcal{F}, P(u) \rangle$ , the conditional probability $P(Y_C \mid E = e)$ of a counterfactual sentence "if it were $C$ then $Y$ ", upon observing the evidence $E = e$ , can be evaluated using the following three steps: + +(i) Abduction - update $P(u)$ by the evidence $e$ to obtain $P(u \mid e)$ , +(ii) Action - modify $\mathcal{F}$ by the action $do(C)$ , where $C$ is an antecedent of $Y$ , to obtain $\mathcal{F}_C$ , +(iii) Prediction - use the model $\langle \mathcal{F}_C, P(u \mid e) \rangle$ to compute the probability of $Y_C$ . + +In the first step, the probabilities of the exogenous variables $U$ are updated according to the observed evidence $E = e$ . Next, the model $\mathcal{M}$ is modified to a submodel $\mathcal{M}_C$ . The action step allows one to consider queries related to interventions or imaginative, counterfactual operations. 
In the final step, the updated model $\langle \mathcal{F}_C, P(u \mid e) \rangle$ is used to compute the conditional probability $P(y_C \mid e)$ . There are two important special cases of the procedure. Whenever the action step is empty, the procedure handles queries in the first, associative layer of the Pearl's Causal Hierarchy (PCH, [2]). Whenever the abduction step is empty, but the action step is not, the procedure handles interventional queries in the second layer of the PCH. The combination of the two steps, more generally, allows one to consider queries in all layers of the PCH, including the third, counterfactual layer. In the following example, we look at the usage of the procedure on some queries. + +Example 1 (Abduction, Action, Prediction). Consider the following SCM: + +$$ +\mathcal {F}: \left\{ \begin{array}{l} X \leftarrow f _ {X} \left(U _ {X}, U _ {X Z}\right) \\ Z \leftarrow f _ {Z} \left(U _ {Z}, U _ {X Z}\right) \\ Y \leftarrow f _ {Y} \left(X, Z, U _ {Y}\right), \end{array} \right. \tag {3} +$$ + +with $P(U_{X}, U_{XZ}, U_{Z}, U_{Y})$ the distribution over the exogenous variables. The causal diagram of the model is shown in Fig. 2a, with an explicit representation of the exogenous variables in Fig. 2b. + +We are first interested in the query $P(y \mid x)$ in the given model. Based on the abduction-prediction procedure, we can simply compute that: + +$$ +P (y \mid x) = \sum_ {u} \mathbb {1} (Y (u) = y) P (u \mid x) = \sum_ {u} \mathbb {1} (Y (u) = y) P \left(u _ {z}, u _ {y}\right) P \left(u _ {x}, u _ {x z} \mid x\right). \tag {6} +$$ + +where the first step follows from the definition of the observational distribution, and the second step follows from noting the independence $U_Z, U_Y \perp U_X, U_{XZ}, X$ . In the abduction step, we can compute the probabilities $P(u_x, u_{xz} \mid x)$ . In the prediction step, query $P(y \mid x)$ is computed based on Eq. 6. + +Based on the procedure, we can also compute the query $P(y_{x})$ (see Fig. 
2c):

$$
P \left(y _ {x}\right) = \sum_ {u} \mathbb {1} \left(Y _ {x} (u) = y\right) P (u) = \sum_ {u} \mathbb {1} \left(Y \left(x, u _ {x z}, u _ {z}, u _ {y}\right) = y\right) P (u). \tag {7}
$$

where the first step follows from the definition of an interventional distribution, and the second step follows from noting that $Y_{x}$ does not depend on $u_{x}$ . In this case, the abduction step is void, since we are not considering any specific evidence $E = e$ . The value of $Y(x,u_{xz},u_z,u_y)$ can be computed from the submodel $\mathcal{M}_x$ . Finally, using Eq. 7 we can perform the prediction step. We remark that

$$
\mathbb {1} (Y (x, u _ {x z}, u _ {z}, u _ {y}) = y) = \sum_ {u _ {x}} \mathbb {1} (Y (u _ {x}, u _ {x z}, u _ {z}, u _ {y}) = y) P (u _ {x} \mid x, u _ {x z}, u _ {z}, u _ {y}), \tag {8}
$$

by the law of total probability and noting that $X$ is a deterministic function of $u_{x}, u_{xz}$ . Thus, $P(y_{x})$ also admits an alternative representation:

$$
\begin{array}{l} P \left(y _ {x}\right) = \sum_ {u} \mathbb {1} \left(Y \left(u _ {x}, u _ {x z}, u _ {z}, u _ {y}\right) = y\right) P \left(u _ {x} \mid x, u _ {x z}, u _ {z}, u _ {y}\right) P \left(u _ {x z}, u _ {z}, u _ {y}\right) \tag {9} \\ = \sum_ {u} \mathbb {1} (Y (u) = y) P \left(u _ {x} \mid x, u _ {x z}\right) P \left(u _ {x z}, u _ {z}, u _ {y}\right), \tag {10} \\ \end{array}
$$

where Eq. 10 follows from using the independencies among $U$ and $X$ in the graph in Fig. 2b. We revisit the representation in Eq. 10 in Ex. 2.

# 3 Foundations of Decomposing Spurious Variations

After getting familiar with the abduction-action-prediction procedure, our next task is to introduce a new procedure that allows us to decompose spurious effects. First, we define the concept of a partially abducted submodel:

Definition 1 (Partially Abducted Submodel). Let $U_{1}, U_{2} \subseteq U$ be a partition of the exogenous variables.
Let the partially abducted (PA, for short) submodel with respect to the exogenous variables $U_{1}$ and evidence $E = e$ be defined as:

$$
\mathcal{M}^{U_{1}, E = e} := \langle \mathcal{F}, P(u_{1}) P(u_{2} \mid u_{1}, E) \rangle. \tag{11}
$$

In words, in the PA submodel, the typically obtained posterior distribution $P(u \mid e)$ is replaced by the distribution $P(u_1)P(u_2 \mid u_1, e)$. Effectively, the exogenous variables $U_1$ are not updated according to the evidence. The main motivation for introducing the PA submodel is that spurious variations arise whenever we are comparing units of the population that are different, a realization dating back to Pearson in the 19th century [11]. To give a formal discussion of what became known as Pearson's shock, consider two sets of differing evidence $E = e$ and $E = e'$. After performing the abduction step, the variations between the posterior distributions $P(u \mid e)$ and $P(u \mid e')$ will be explained by all the exogenous variables that precede the evidence $E$. In a PA submodel, however, the posterior distribution $P(u_1)P(u_2 \mid u_1, e)$ will differ from $P(u_1)P(u_2 \mid u_1, e')$ only in the variables that are in $U_2$, while the variables in $U_1$ will induce no spurious variations. Note that if $U_1 = U$, then the PA submodel will introduce no spurious variations, a point to which we return in the sequel.

We now demonstrate how the definition of a PA submodel can be used to obtain partially abducted conditional probabilities:

Proposition 1 (PA Conditional Probabilities). Let $P(Y = y \mid E = e^{U_1})$ denote the conditional probability of the event $Y = y$ conditional on evidence $E = e$, defined as the probability of $Y = y$ in the PA submodel $\mathcal{M}^{U_1, E = e}$ (i.e., the exogenous variables $U_1$ are not updated according to the evidence).
Then, we have that:

$$
P(Y = y \mid E = e^{U_{1}}) = \sum_{u_{1}} P\left(U_{1} = u_{1}\right) P(Y = y \mid E = e, U_{1} = u_{1}). \tag{12}
$$

# 3.1 Partial Abduction and Prediction

Based on the notion of a PA submodel, we can introduce the partial abduction and prediction procedure:

Figure 3: Graphical representations of the SCM in Ex. 3.
![](images/c1a4b9f4bcf73b2fff1e0e3ce0ec4cbf6a11fe077241a9246f8e427d5f58cf74.jpg)
(a) Causal diagram corresponding to the SCM in Ex. 3.

![](images/6995bfea90821f33b31bf0d4ea0d9d75690224ab110ebd667f16c2ee5bc0c3ca.jpg)
(b) Extended graphical representation of the SCM in Ex. 3, latent variables in red.

Algorithm 2 (Partial Abduction and Prediction). Given an SCM $\langle \mathcal{F}, P(u) \rangle$, the conditional probability $P(Y = y \mid E = e^{U_1})$ of an event $Y = y$ upon observing the evidence $e$, in a world where variables $U_1$ are unresponsive to evidence, can be evaluated using the following two steps:

(i) Partial Abduction - update $P(u)$ by the evidence $e$ to obtain $P(u_{1})P(u_{2} \mid u_{1}, e)$, where $(u_{1}, u_{2})$ is a partition of the exogenous variables $u$,
(ii) Prediction - use the model $\langle \mathcal{F}, P(u_1)P(u_2 \mid u_1, e) \rangle$ to compute the probability of $Y = y$.

In the first step of the algorithm, we only perform partial abduction. The exogenous variables $U_{2}$ are updated according to the available evidence $E = e$, while the variables $U_{1}$ retain their original distribution $P(u_{1})$ and remain unresponsive to evidence. This procedure allows us to consider queries in which only a subset of the exogenous variables respond to the available evidence. We next explain what kind of queries fall within this scope, beginning with an example:

Example 2 (Partial Abduction and Prediction). Consider the model in Eq. 3-5.
We are interested in computing the query:

$$
\begin{array}{l} P(y \mid x^{U_{xz}, U_{z}}) = \sum_{u} \mathbb{1}(Y(u) = y) P\left(u_{xz}, u_{z}\right) P\left(u_{x}, u_{y} \mid u_{xz}, u_{z}, x\right) \quad (13) \\ = \sum_{u} \mathbb{1}(Y(u) = y) P\left(u_{xz}, u_{z}\right) P\left(u_{y}\right) P\left(u_{x} \mid u_{xz}, u_{z}, x\right) \quad (14) \\ = \sum_{u} \mathbb{1}(Y(u) = y) P\left(u_{xz}, u_{z}, u_{y}\right) P\left(u_{x} \mid u_{xz}, x\right), \quad (15) \end{array}
$$

where the first step follows from Prop. 1, and the remaining steps from conditional independencies between the $U$ variables and $X$. Crucially, the query yields the same expression as in Eq. 10 that we obtained for $P(y_{x})$ in Ex. 1. Therefore, the conditional probability $P(y \mid x^{U_{xz}, U_z})$ in a world where $U_{XZ}, U_Z$ are unresponsive to evidence is equal to the interventional probability $P(y_x)$.

As the example illustrates, we have managed to find another procedure that mimics the behavior of the interventional $(do(X = x))$ operator in the given example. Interestingly, however, in this procedure, we have not made use of the submodel $\mathcal{M}_x$ that was used in the abduction-action-prediction procedure. We next introduce an additional example that shows how the new procedure allows one to decompose spurious variations in causal models:

Example 3 (Spurious Decomposition). Consider an SCM compatible with the graphical representation in Fig. 3b (with exogenous variables $U$ shown explicitly in red), and the corresponding Semi-Markovian causal diagram in Fig. 3a.
We note that, based on the partial abduction-prediction procedure, the following two equalities hold:

$$
P(y \mid x) = P(y \mid x^{\emptyset}) \tag{16}
$$

$$
P\left(y_{x}\right) = P\left(y \mid x^{U_{xz_{1}}, U_{xz_{2}}}\right), \tag{17}
$$

which shows that

$$
\operatorname{Exp\text{-}SE}_{x}(y) = P\left(y \mid x^{\emptyset}\right) - P\left(y \mid x^{U_{xz_{1}}, U_{xz_{2}}}\right). \tag{18}
$$

The experimental spurious effect can be written as a difference of conditional probabilities $y \mid x$ in a world where all variables $U$ are responsive to evidence vs. a world in which $U_{XZ_1}, U_{XZ_2}$ are

![](images/744cf701d3db48e62eb090136448e9bfbd7240f0f4b277fd1a8c9aa768f013ab.jpg)
(a) $\operatorname{Exp\text{-}SE}_x^{\emptyset, U_{XZ_1}}(y)$

![](images/ef928e9f1103b39830b5738c3ea831a09124e299888b1dcf1d2074c4faeda602.jpg)
(b) $\operatorname{Exp\text{-}SE}_x^{U_{XZ_1}, \{U_{XZ_1}, U_{XZ_2}\}}(y)$.
Figure 4: Graphical representation of how the Exp-SE effect is decomposed in Ex. 3.
| Procedure | SCM | Queries |
| --- | --- | --- |
| Abduction-Prediction | $\langle \mathcal{F}, P(u \mid E) \rangle$ | Layer 1 |
| Action-Prediction | $\langle \mathcal{F}_x, P(u) \rangle$ | Layer 2 |
| Abduction-Action-Prediction | $\langle \mathcal{F}_x, P(u \mid E) \rangle$ | Layers 1, 2, 3 |
| Partial Abduction-Prediction | $\langle \mathcal{F}, P(u_1)P(u_2 \mid u_1, E) \rangle$ | Layers 1, 2, 3 |
Table 1: Summary of the different procedures and the corresponding probabilistic causal models.

unresponsive to evidence. Furthermore, we can also consider a refinement that decomposes the effect

$$
\operatorname{Exp\text{-}SE}_{x}(y) = \underbrace{P\left(y \mid x^{\emptyset}\right) - P\left(y \mid x^{U_{xz_{1}}}\right)}_{\text{variations of } U_{xz_{1}}} + \underbrace{P\left(y \mid x^{U_{xz_{1}}}\right) - P\left(y \mid x^{U_{xz_{1}}, U_{xz_{2}}}\right)}_{\text{variations of } U_{xz_{2}}}, \tag{19}
$$

allowing for an additive, non-parametric decomposition of the experimental spurious effect.

The first term in Eq. 19, shown in Fig. 4a, encompasses spurious variations explained by the variable $U_{XZ_1}$. The second term, in Fig. 4b, encompasses spurious variations explained by $U_{XZ_2}$.

For an overview, in Tab. 1 we summarize the different inferential procedures discussed so far, indicating the structural causal models associated with them.

# 4 Non-parametric Spurious Decompositions

We now move on to deriving general decomposition results for the spurious effects. Before doing so, we first derive a new decomposition result for the TV measure, not yet appearing in the literature (due to space constraints, all proofs are given in Appendix A):

Proposition 2. Define the total variation (TV) measure as $TV_{x_0,x_1}(y) = P(y \mid x_1) - P(y \mid x_0)$, and the total effect (TE) as $TE_{x_0,x_1}(y) = P(y_{x_1}) - P(y_{x_0})$. The total variation measure can be decomposed as:

$$
TV_{x_{0}, x_{1}}(y) = TE_{x_{0}, x_{1}}(y) + \left(\operatorname{Exp\text{-}SE}_{x_{1}}(y) - \operatorname{Exp\text{-}SE}_{x_{0}}(y)\right). \tag{20}
$$

The above result clearly separates out the causal variations (measured by the TE) and the spurious variations (measured by the Exp-SE terms) within the TV measure. The seminal result from [10] can be used to further decompose the TE measure.
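Since the identity in Prop. 2 is purely algebraic, it can be sanity-checked by direct enumeration. The sketch below does this on a small hypothetical binary SCM with a single confounder $U_{xz}$ of $X$ and $Y$; all mechanisms, names, and parameter values here are invented for illustration and are not from the paper:

```python
from itertools import product

# Hypothetical binary SCM with one confounder U_xz of X and Y (illustrative only).
P_U = {"u_xz": 0.6, "u_x": 0.25, "u_y": 0.2}

def f_x(u):
    return u["u_x"] ^ u["u_xz"]            # X <- U_x, U_xz

def f_y(x, u):
    return (x & u["u_xz"]) ^ u["u_y"]      # Y <- X, U_xz, U_y

def worlds():
    """Enumerate all exogenous configurations u together with P(u)."""
    for vals in product([0, 1], repeat=3):
        u = dict(zip(P_U, vals))
        p = 1.0
        for k, v in u.items():
            p *= P_U[k] if v else 1 - P_U[k]
        yield u, p

def P_y_given_x(x, y=1):                   # observational P(y | x)
    num = sum(p for u, p in worlds() if f_x(u) == x and f_y(x, u) == y)
    den = sum(p for u, p in worlds() if f_x(u) == x)
    return num / den

def P_y_do_x(x, y=1):                      # interventional P(y_x)
    return sum(p for u, p in worlds() if f_y(x, u) == y)

tv = P_y_given_x(1) - P_y_given_x(0)
te = P_y_do_x(1) - P_y_do_x(0)
exp_se = {x: P_y_given_x(x) - P_y_do_x(x) for x in (0, 1)}
assert abs(tv - (te + exp_se[1] - exp_se[0])) < 1e-12   # Eq. 20 holds exactly
```

For small discrete models such enumeration gives exact values, so the identity can be verified to machine precision rather than statistically.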
In the sequel, we show how the Exp-SE terms can be further decomposed, thereby reaching a full non-parametric decomposition of the TV measure.

# 4.1 Spurious Decompositions for the Markovian case

When using the definition of a PA submodel, the common variations between $X, Y$ can be attributed to (or explained by) the unobserved confounders $U_{1}, \ldots, U_{k}$. In order to do so, we first define the notion of an experimental spurious effect for a set of latent variables:

Definition 2 (Spurious effects for Markovian models). Let $\mathcal{M}$ be a Markovian model. Let $Z_{1}, \ldots, Z_{k}$ be the confounders between variables $X$ and $Y$, sorted in any valid topological order, and denote the corresponding exogenous variables as $U_{1}, \ldots, U_{k}$, respectively. Let $Z_{[i]} = \{Z_1, \dots, Z_i\}$ and $Z_{-[i]} = \{Z_{i + 1}, \dots, Z_k\}$; the sets $U_{[i]}$ and $U_{-[i]}$ are defined analogously. Define the experimental spurious effect associated with the variable $U_{i + 1}$ as

$$
\operatorname{Exp\text{-}SE}_{x}^{U_{[i]}, U_{[i + 1]}}(y) = P\left(y \mid x^{U_{[i]}}\right) - P\left(y \mid x^{U_{[i + 1]}}\right). \tag{21}
$$

The intuition behind the quantity $\operatorname{Exp\text{-}SE}_x^{U_{[i]}, U_{[i + 1]}}(y)$ can be explained as follows. The quantity $P(y \mid x^{U_{[i]}})$ captures all the variations in $Y$ induced by observing that $X = x$, apart from those explained by the latent variables $U_{1}, \ldots, U_{i}$, which are fixed a priori and not updated. Similarly, the quantity $P(y \mid x^{U_{[i + 1]}})$ captures the variations in $Y$ induced by observing that $X = x$, apart from those explained by $U_{1}, \ldots, U_{i}, U_{i + 1}$. Therefore, taking the difference of the two quantities measures the variation in $Y$ induced by observing that $X = x$ that is explained by the latent variable $U_{i + 1}$.
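The PA conditionals $P(y \mid x^{U_{[i]}})$ entering Def. 2 can also be approximated by rejection sampling, the same idea used for the ground-truth computation in Sec. 5. The sketch below is written against a hypothetical two-confounder model; the mechanisms, parameter values, and the helper `pa_conditional` are all invented for illustration. The noises in $U_{[i]}$ are frozen, and the remaining ones are re-drawn until the unit is compatible with the evidence $X = x$:

```python
import random

# Hypothetical two-confounder model (illustrative only): Z1 -> Z2, both affect X and Y.
P_U = {"u1": 0.3, "u2": 0.6, "u_x": 0.25, "u_y": 0.2}
rng = random.Random(0)

def mechanisms(u):
    z1 = u["u1"]
    z2 = z1 ^ u["u2"]
    x = u["u_x"] ^ (z1 & z2)
    y = (x & z2) ^ u["u_y"]
    return x, y

def pa_conditional(x_evidence, frozen, n=20_000):
    """Estimate P(Y = 1 | x^{frozen}): the noises in `frozen` are drawn once per
    unit, while the rest are re-drawn until the unit satisfies X = x (this
    assumes X = x stays attainable for every frozen draw)."""
    hits = 0
    for _ in range(n):
        u = {k: int(rng.random() < p) for k, p in P_U.items()}
        while mechanisms(u)[0] != x_evidence:
            for k, p in P_U.items():
                if k not in frozen:
                    u[k] = int(rng.random() < p)
        hits += mechanisms(u)[1]
    return hits / n

# Exp-SE contribution of U1 (Def. 2 with i = 0): a contrast of two PA conditionals.
effect_u1 = pa_conditional(1, frozen=set()) - pa_conditional(1, frozen={"u1"})
```

With `frozen=set()`, every noise responds to the evidence and the estimate recovers the observational $P(y \mid x)$; freezing `u1` removes exactly the spurious variations explained by $U_1$.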
Based on this definition, we can derive the first key non-parametric decomposition of the experimental spurious effect that allows the attribution of the spurious variations to the latent variables $U_{i}$:

Theorem 1 (Latent spurious decomposition for Markovian models). The experimental spurious effect $\operatorname{Exp\text{-}SE}_x(y)$ can be decomposed into latent variable-specific contributions as follows:

$$
\operatorname{Exp\text{-}SE}_{x}(y) = \sum_{i = 0}^{k - 1} \operatorname{Exp\text{-}SE}_{x}^{U_{[i]}, U_{[i + 1]}}(y) = \sum_{i = 0}^{k - 1} \left( P\left(y \mid x^{U_{[i]}}\right) - P\left(y \mid x^{U_{[i + 1]}}\right) \right). \tag{22}
$$

An illustrative example of applying the theorem is shown in Appendix B.1. Thm. 1 allows one to attribute spurious variations to latent variables influencing both $X$ and $Y$. The key question is when such an attribution, as shown in Eq. 22, can be computed from observational data in practice (known as an identifiability problem [9]). In fact, when variables are added to the PA submodel in topological order, the attribution of variations to the latents $U_{i}$ is identifiable, as we prove next:

Theorem 2 (Spurious decomposition identification in topological ordering). The quantity $P(y \mid x^{U_{[i]}})$ can be computed from observational data using the expression

$$
P(y \mid x^{U_{[i]}}) = \sum_{z} P(y \mid z, x) P\left(z_{-[i]} \mid z_{[i]}, x\right) P\left(z_{[i]}\right), \tag{23}
$$

rendering each term of the decomposition in Eq. 22 identifiable from the observational distribution $P(v)$.

We discuss in Appendix B.2 why a decomposition that does not follow a topological order of the variables $U_{i}$ is not identifiable.

# 4.2 Spurious Decompositions in Semi-Markovian Models

In the Markovian case, considered until now, there was a one-to-one correspondence between the observed confounders $Z_{i}$ and their latent variables $U_{i}$.
This, however, is no longer the case in Semi-Markovian models. In particular, it can happen that there exist exogenous variables $U_{j}$ that induce common variations between $X, Y$, but affect more than one confounder $Z_{i}$. We are interested in the $U_{j} \subseteq U$ that have causal (directed) paths to both $X, Y$, described by the following definition:

Definition 3 (Trek). Let $\mathcal{M}$ be an SCM corresponding to a Semi-Markovian model. Let $\mathcal{G}$ be the causal diagram of $\mathcal{M}$. A trek $\tau$ in $\mathcal{G}$ (from $X$ to $Y$) is an ordered pair of causal paths $(g_{l}, g_{r})$ with a common exogenous source $U_{i} \in U$. That is, $g_{l}$ is a causal path $U_{i} \to \dots \to X$ and $g_{r}$ is a causal path $U_{i} \to \dots \to Y$. The common source $U_{i}$ is called the top of the trek (ToT for short), denoted $\text{top}(g_{l}, g_{r})$. A trek is called spurious if $g_{r}$ is a causal path from $U_{i}$ to $Y$ that is not intercepted by $X$.

When decomposing spurious effects, we are in fact interested in all the exogenous variables $U_{i}$ that lie on top of a spurious trek between $X$ and $Y$. It is precisely these exogenous variables that induce common variations between $X$ and $Y$. Using any subset of the variables that are tops of spurious treks, we define a set-specific notion of a spurious effect:

Definition 4 (Exogenous set-specific spurious effect). Let $U_{sToT} \subseteq U$ be the subset of exogenous variables that lie on top of a spurious trek between $X$ and $Y$. Suppose $A, B \subseteq U_{sToT}$ are two nested subsets of $U_{sToT}$, that is, $A \subseteq B$. We then define the exogenous experimental spurious effect with respect to the sets $A, B$ as

$$
\operatorname{Exp\text{-}SE}_{x}^{A, B}(y) = P\left(y \mid x^{A}\right) - P\left(y \mid x^{B}\right). \tag{24}
$$

![](images/a54585c76b0211856a2f79e2920679adee86a608d5a0ea83c98a45aaffcb03.jpg)
Figure 5: Quantity $\operatorname{Exp\text{-}SE}_x^{A,B}(y)$ as a graphical contrast. Dots indicate arbitrary observed confounders along the indicated pathway.

The above definition is analogous to Def. 2, but we are now fixing different subsets of the tops of spurious treks. Def. 2 supports partial abduction of exogenous variables that are not on top of a spurious trek, but we are seldom interested in these, since they do not induce covariations of $X, Y$. The quantity $\operatorname{Exp\text{-}SE}_x^{A,B}(y)$ is presented as a graphical contrast in Fig. 5. In particular, the set of tops of spurious treks $U_{sToT}$ is partitioned into three parts $(U_A, U_{B\setminus A}, U_{B^c})$. The causal diagram in the figure is informal, and the dots $(\dots)$ represent arbitrary possible observed confounders that lie along the indicated pathways. On the l.h.s. of the figure, the set $U_A$ does not respond to the conditioning $X = x$, whereas $U_{B\setminus A}, U_{B^c}$ do. This is contrasted with the r.h.s., in which neither $U_A$ nor $U_{B\setminus A}$ responds to $X = x$, whereas $U_{B^c}$ still does respond to the $X = x$ conditioning. The described contrast thus captures the spurious effect explained by the tops of spurious treks in $U_{B\setminus A}$.

Analogous to Thm. 1, we next state a variable-specific decomposition of the spurious effect, which is now with respect to exogenous variables that are tops of spurious treks:

Theorem 3 (Semi-Markovian spurious decomposition). Let $U_{sToT} = \{U_1, \dots, U_m\} \subseteq U$ be the subset of exogenous variables that lie on top of a spurious trek between $X$ and $Y$. Let $U_{[i]}$ denote the variables $U_1, \dots, U_i$ ($U_{[0]}$ denotes the empty set $\emptyset$).
The experimental spurious effect $\operatorname{Exp\text{-}SE}_x(y)$ can be decomposed into variable-specific contributions as follows:

$$
\operatorname{Exp\text{-}SE}_{x}(y) = \sum_{i = 0}^{m - 1} \operatorname{Exp\text{-}SE}_{x}^{U_{[i]}, U_{[i + 1]}}(y) = \sum_{i = 0}^{m - 1} \left( P\left(y \mid x^{U_{[i]}}\right) - P\left(y \mid x^{U_{[i + 1]}}\right) \right). \tag{25}
$$

An example demonstrating the Semi-Markovian decomposition is given in Appendix B.3. We next discuss the question of identification. We begin by discussing how to annotate the exogenous variables given a Semi-Markovian causal diagram:

Definition 5 (Top of trek from the causal diagram). Let $\mathcal{M}$ be a Semi-Markovian model and let $\mathcal{G}$ be the associated causal diagram. A set of nodes fully connected with bidirected edges is called a clique. A maximal clique $C_i$ is one for which there is no clique $C_i'$ such that $C_i \subsetneq C_i'$. The set of variables $U_{sToT}$ can be constructed from the causal diagram in the following way:

(I) initialize $U_{sToT} = \emptyset$,
(II) for each maximal clique $C_i$, consider the associated exogenous variable $U_{C_i}$ pointing to each node in the clique; if there exists a spurious trek between $X$ and $Y$ with a top in $U_{C_i}$, add $U_{C_i}$ to $U_{sToT}$.

After defining the explicit construction of the set $U_{sToT}$, we define the notion of the anchor set:

Definition 6 (Anchor Set). Let $U_1, \ldots, U_l \subseteq U$ be a subset of the exogenous variables. We define the anchor set $AS(U_1, \ldots, U_l)$ of $(U_1, \ldots, U_l)$ as the subset of observables $V$ that are directly influenced by any of the $U_i$'s,

$$
AS\left(U_{1}, \dots, U_{l}\right) = \bigcup_{i = 1}^{l} \operatorname{ch}\left(U_{i}\right). \tag{26}
$$

Another important definition is that of anchor set exogenous ancestral closure:

# SCM $\mathcal{M}$

$$
\begin{array}{l} Z_{1} \leftarrow B(0.5) \\ Z_{2} \leftarrow B\left(0.
4 + 0.2 Z_{1}\right) \\ Z_{3} \leftarrow B\left(0.3 + 0.3 Z_{1} Z_{2}\right) \\ X \leftarrow B\left(0.2 + \lambda_{1} Z_{1} + \lambda_{2} Z_{2} + \lambda_{3} Z_{3}\right) \\ Y \leftarrow B\left(0.1 + 0.2 X + \lambda_{1} Z_{1} + \lambda_{2} Z_{2} + \lambda_{3} Z_{3}\right) \\ \end{array}
$$

# Causal Diagram $\mathcal{G}$

![](images/91604ce10423921b623dace2f38451e6115e70a304b641dba84a0b6b6e5b57fc.jpg)
Table 2: SCM and causal diagram for the Synthetic A example.

Definition 7 (Anchor Set Exogenous Ancestral Closure). Let $U_s \subseteq U$ be a subset of the exogenous variables. Let $AS(U_s)$ denote the anchor set of $U_s$, and let $\mathrm{an}^{\mathrm{ex}}(AS(U_s))$ denote all exogenous variables that have a causal path to any variable in $AS(U_s)$. $U_s$ is said to satisfy anchor set exogenous ancestral closure (ASEAC) if

$$
U_{s} = \operatorname{an}^{ex}\left(AS\left(U_{s}\right)\right). \tag{27}
$$

Based on the above, we provide a sufficient condition for identification in the Semi-Markovian case:

Theorem 4 (ID of variable spurious effects in Semi-Markovian models). Let $U_s \subseteq U_{sToT}$. The quantity $P(y \mid x^{U_s})$ is identifiable from observational data $P(V)$ if the following hold:

(i) $X \notin AS(U_s), Y \notin AS(U_s)$,
(ii) $U_{s}$ satisfies anchor set exogenous ancestral closure, $U_{s} = \mathrm{an}^{ex}(AS(U_{s}))$.

Some instructive examples grounding Defs. 5-7 and Thm. 4 can be found in Appendix B.4. In words, the conditional expectation of $Y$ given $X$ in the partially abducted submodel w.r.t. a set $U_{s}$ is identifiable whenever (i) neither $X$ nor $Y$ is an element of the anchor set of $U_{s}$, and (ii) the set $U_{s}$ satisfies the anchor set exogenous ancestral closure. Thm. 4 provides a sufficient, but not a necessary, condition for identification. An additional discussion of the conditions is given in Appendix C.
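Returning to the Markovian setting, the identification result of Thm. 2 is straightforward to check by enumeration on small discrete models. The sketch below compares the defining expression of $P(y \mid x^{U_{[1]}})$ from Prop. 1 against the formula in Eq. 23 (with $i = 1$) on a hypothetical model with a confounder chain $Z_1 \to Z_2$; all mechanisms and parameter values are invented for illustration:

```python
from itertools import product

# Hypothetical Markovian model (illustrative only): Z1 -> Z2, both confound X and Y.
P_U = {"u1": 0.3, "u2": 0.6, "u_x": 0.25, "u_y": 0.2}

def worlds():
    """Enumerate the joint distribution over (U1, Z1, Z2, X, Y)."""
    for vals in product([0, 1], repeat=4):
        u = dict(zip(P_U, vals))
        p = 1.0
        for k, v in u.items():
            p *= P_U[k] if v else 1 - P_U[k]
        z1 = u["u1"]
        z2 = z1 ^ u["u2"]
        x = u["u_x"] ^ (z1 & z2)
        y = (x & z2) ^ u["u_y"]
        yield {"u1": u["u1"], "z1": z1, "z2": z2, "x": x, "y": y, "p": p}

def prob(event, given=lambda w: True):
    den = sum(w["p"] for w in worlds() if given(w))
    num = sum(w["p"] for w in worlds() if given(w) and event(w))
    return num / den

x, y = 1, 1

# LHS: P(y | x^{U_[1]}) via Prop. 1 (U_[1] = {U1} is not updated by the evidence).
lhs = 0.0
for u1 in (0, 1):
    p_u1 = P_U["u1"] if u1 else 1 - P_U["u1"]
    lhs += p_u1 * prob(lambda w: w["y"] == y,
                       lambda w: w["x"] == x and w["u1"] == u1)

# RHS: Eq. 23 with i = 1, i.e. sum_z P(y | z, x) P(z2 | z1, x) P(z1).
rhs = 0.0
for z1, z2 in product((0, 1), repeat=2):
    rhs += (prob(lambda w: w["y"] == y,
                 lambda w: w["x"] == x and w["z1"] == z1 and w["z2"] == z2)
            * prob(lambda w: w["z2"] == z2,
                   lambda w: w["z1"] == z1 and w["x"] == x)
            * prob(lambda w: w["z1"] == z1))
assert abs(lhs - rhs) < 1e-9   # the PA conditional is identified by Eq. 23
```

The agreement is exact here because $Z_1$ is a deterministic function of $U_1$, which is precisely the property the proof of Thm. 2 relies on.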
In future work, we hope to develop an algorithmic approach for identifying spurious effects in full generality.

# 5 Experiments

We now apply our framework to a synthetic example (called Synthetic A) with a known ground truth, summarized in Tab. 2, where the SCM $\mathcal{M}$ and the causal diagram $\mathcal{G}$ are given. The source code for the experiment can be found in our repository. For this example, we set the parameters $\lambda_{1} = \lambda_{2} = \lambda_{3} = 0.2$. We then vary each parameter $\lambda_{i} \in [0, 0.2]$ (while keeping the other two parameters fixed), which changes the value of the effect associated with the latent variable $U_{i}$. The effects associated with each $U_{i}, i \in \{1, 2, 3\}$ are computed based on the decomposition in Thm. 1:

$$
\operatorname{Exp\text{-}SE}_{x}^{U_{1}}(y) := \operatorname{Exp\text{-}SE}_{x}^{\emptyset, U_{1}}(y) = P(y \mid x^{\emptyset}) - P(y \mid x^{U_{1}}) \tag{28}
$$

$$
\operatorname{Exp\text{-}SE}_{x}^{U_{2}}(y) := \operatorname{Exp\text{-}SE}_{x}^{U_{1}, \{U_{1}, U_{2}\}}(y) = P(y \mid x^{U_{1}}) - P(y \mid x^{U_{1}, U_{2}}) \tag{29}
$$

$$
\operatorname{Exp\text{-}SE}_{x}^{U_{3}}(y) := \operatorname{Exp\text{-}SE}_{x}^{\{U_{1}, U_{2}\}, \{U_{1}, U_{2}, U_{3}\}}(y) = P\left(y \mid x^{U_{1}, U_{2}}\right) - P\left(y \mid x^{U_{1}, U_{2}, U_{3}}\right). \tag{30}
$$

The key task is to compute the ground truth values of $P(y \mid x^{U_{[i]}})$ for different values of $i$. According to Def. 1, we want to obtain the conditional distribution of $Y$ given $X = x$, subject to not updating $U_{[i]}$ according to the evidence $X = x$. Based on the true SCM, this can be done efficiently using rejection sampling as follows:

(1) Take $N$ samples from the SCM $\mathcal{M}$ in Tab.
2,
(2) For all samples $k \in \{1, \dots, N\}$ with $u^{(k)}$ such that

$$
X\left(u^{(k)}\right) \neq x, \tag{31}
$$

re-sample the part of the unit $u^{(k)}$ that is not included in $U_{[i]}$ (e.g., if $U_{[i]} = \{U_1, U_2\}$, the latents $u_1^{(k)}, u_2^{(k)}$ are not re-sampled but $u_3^{(k)}$ is) and replace $u^{(k)}$ with this new sample,
(3) Evaluate the mechanisms $\mathcal{F}$ of $\mathcal{M}$ for all units $u^{(k)}$,
(4) If there exists a sample $k$ with $X(u^{(k)}) \neq x$, go back to Step (2),
(5) Return the mean of the $Y$ variables $\frac{1}{N}\sum_{k = 1}^{N}Y^{(k)}$.

![](images/4ee9f350c0b0dbd3ab93c198242019147d7e2eb35a770360d01f5368bf07d353.jpg)
Figure 6: Experimental results on the Synthetic A example. Lines indicate the estimated values, dots the ground truth obtained from the SCM using rejection sampling, and the $95\%$ confidence intervals are indicated with color. As expected, increasing the $\lambda_{i}$ coefficient increases the spurious effect associated with the latent variable $U_{i}$.

Notice that the described procedure gives us samples from the distribution $P(y \mid x^{U_{[i]}})$. The values of $U_{[i]}$ are sampled only once and are not updated after the initial sampling. Other values in $U$, however, are sampled anew until their values are such that they are compatible with the evidence $X = x$. Therefore, the procedure guarantees that $U_{[i]}$ do not respond to the evidence, whereas the complement of $U_{[i]}$ does, allowing us to compute $P(y \mid x^{U_{[i]}})$ and in turn the expressions in Eqs. 28-30. The effects are also estimated from observational data based on the identification expressions in Thm. 2. Fig. 6 demonstrates that the SCM-based ground truth matches the estimates based on Thm. 2.

# 6 Conclusions

In this paper, we introduced a general toolkit for decomposing spurious variations in causal models. In particular, we introduced a new primitive called the partially abducted submodel (Def.
1), together with the procedure of partial abduction and prediction (Alg. 2). This procedure allows for new machinery for decomposing spurious variations in Markovian (Thm. 1) and Semi-Markovian (Thm. 3) models. Finally, we also developed sufficient conditions for identification of such spurious decompositions (Thms. 2, 4), and demonstrated the approach empirically (Sec. 5).

The main limitation of our approach is the need for a fully specified causal diagram, which may be challenging in practice. However, from a fully specified graph and the data, our tools for decomposing spurious effects give a fine-grained quantification of what the main confounders are. As is common in causal inference, the granularity of the obtained knowledge needs to be matched with the strength of the causal assumptions (in this case, specifying the causal diagram). Conversely, in the absence of such assumptions, fine-grained quantitative knowledge about these effects cannot be obtained in general [2], and we hypothesize that a precise quantification of spurious effects is not attainable in the absence of a causal diagram.

Finally, we discuss another technical solution that may alleviate some of the difficulty of causal modeling. Recently, cluster diagrams have been proposed [1], in which one can consider groups of confounders (instead of considering each confounder separately), and thus the specification of causal assumptions becomes less demanding (due to clustering, the number of nodes in the graph is smaller). However, causal decompositions as described in this paper can still be applied to cluster diagrams. This offers a way to choose a different level of granularity for settings where domain knowledge may not be specific enough to elicit a full causal diagram.

# References

[1] T. V. Anand, A. H. Ribeiro, J. Tian, and E. Bareinboim. Causal effect identification in cluster DAGs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37(10), pages 12172-12179, 2023.
[2] E. Bareinboim, J. D. Correa, D. Ibeling, and T. Icard. On Pearl's hierarchy and the foundations of causal inference. In Probabilistic and Causal Inference: The Works of Judea Pearl, pages 507-556. Association for Computing Machinery, New York, NY, USA, 1st edition, 2022.
[3] R. M. Baron and D. A. Kenny. The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6):1173, 1986.
[4] H. Checkoway, N. Pearce, and D. Kriebel. Research Methods in Occupational Epidemiology, volume 34 of Monographs in Epidemiology and Biostatistics. Oxford University Press, 2004.
[5] A. Decruyenaere, J. Steen, K. Colpaert, D. D. Benoit, J. Decruyenaere, and S. Vansteelandt. The obesity paradox in critically ill patients: a causal learning approach to a casual finding. Critical Care, 24(1):1-11, 2020.
[6] V. Hainer and I. Aldhoon-Hainerova. Obesity paradox does exist. Diabetes Care, 36(Supplement 2):S276-S281, 2013.
[7] J. Hernandez. Redlining revisited: mortgage lending patterns in Sacramento 1930-2004. International Journal of Urban and Regional Research, 33(2):291-313, 2009.
[8] J. Larson, S. Mattu, L. Kirchner, and J. Angwin. How we analyzed the COMPAS recidivism algorithm. ProPublica (5 2016), 9, 2016.
[9] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, 2000; 2nd edition, 2009.
[10] J. Pearl. Direct and indirect effects. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 411-420, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc.
[11] K. Pearson. IV. Mathematical contributions to the theory of evolution.—V. On the reconstruction of the stature of prehistoric races. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 192:169-244, 1899.
[12] J. M. Robins and S. Greenland.
Identifiability and exchangeability for direct and indirect effects. Epidemiology, pages 143-155, 1992.
[13] K. J. Rothman, S. Greenland, T. L. Lash, et al. Modern Epidemiology, volume 3. Wolters Kluwer Health/Lippincott Williams & Wilkins, Philadelphia, 2008.
[14] T. VanderWeele. Explanation in Causal Inference: Methods for Mediation and Interaction. Oxford University Press, 2015.
[15] Y. Zenou and N. Boccard. Racial discrimination and redlining in cities. Journal of Urban Economics, 48(2):260-285, 2000.
[16] J. Zhang and E. Bareinboim. Non-parametric path analysis in structural causal models. In Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence, 2018.

**Acknowledgements.** This research was supported in part by the NSF, ONR, AFOSR, DoE, Amazon, JP Morgan, and The Alfred P. Sloan Foundation. We would like to thank Inwoo Hwang for providing very useful comments on the paper during the NeurIPS conference.

# Supplementary Material for A Causal Framework for Decomposing Spurious Variations

The source code for reproducing the experiments can be found in our code repository.

# A Theorem and Proposition Proofs

# A.1 Proof of Prop. 2

Proof. Note that TV and TE are defined as:

$$
\mathrm{TV}_{x_{0}, x_{1}}(y) = P(y \mid x_{1}) - P(y \mid x_{0}) \tag{32}
$$

$$
\mathrm{TE}_{x_{0}, x_{1}}(y) = P\left(y_{x_{1}}\right) - P\left(y_{x_{0}}\right).
\tag {33} +$$ + +We can expand the TV measure in the following way: + +$$ +\begin{array}{l} \mathrm {T V} _ {x _ {0}, x _ {1}} (y) = P \left(y \mid x _ {1}\right) - P \left(y \mid x _ {0}\right) (34) \\ = P \left(y \mid x _ {1}\right) - P \left(y _ {x _ {1}}\right) + P \left(y _ {x _ {1}}\right) - P \left(y \mid x _ {0}\right) (35) \\ = \underbrace {P \left(y \mid x _ {1}\right) - P \left(y _ {x _ {1}}\right)} _ {\operatorname {E x p} - \operatorname {S E} _ {x _ {1}} (y)} + \underbrace {P \left(y _ {x _ {1}}\right) - P \left(y _ {x _ {0}}\right)} _ {\operatorname {T E} _ {x _ {0}, x _ {1}} (y)} + \underbrace {P \left(y _ {x _ {0}}\right) - P \left(y \mid x _ {0}\right)} _ {- \operatorname {E x p} - \operatorname {S E} _ {x _ {0}} (y)} (36) \\ = \mathrm {T E} _ {x _ {0}, x _ {1}} (y) + \operatorname {E x p - S E} _ {x _ {1}} (y) - \operatorname {E x p - S E} _ {x _ {0}} (y), (37) \\ \end{array} +$$ + +showing the required result. + +# A.2 Proof of Thm. 1 + +Proof. Note that + +$$ +\sum_ {i = 0} ^ {k - 1} \operatorname {E x p} - \operatorname {S E} _ {x} ^ {U _ {[ i ]}, U _ {[ i + 1 ]}} (y) = \sum_ {i = 0} ^ {k - 1} P \left(y \mid x ^ {U _ {[ i ]}}\right) - P \left(y \mid x ^ {U _ {[ i + 1 ]}}\right) \tag {38} +$$ + +is a telescoping sum, and thus we have that + +$$ +\begin{array}{l} \sum_ {i = 0} ^ {k - 1} \operatorname {E x p} - \operatorname {S E} _ {x} ^ {U _ {[ i ]}, U _ {[ i + 1 ]}} (y) = \sum_ {i = 0} ^ {k - 1} P \left(y \mid x ^ {U _ {[ i ]}}\right) - P \left(y \mid x ^ {U _ {[ i + 1 ]}}\right) (39) \\ = P (y \mid x ^ {\emptyset}) - P (y \mid x ^ {U _ {[ k ]}}) (40) \\ = P (y \mid x) - P \left(y _ {x}\right) (41) \\ = \operatorname {E x p} - \operatorname {S E} _ {x} (y), (42) \\ \end{array} +$$ + +completing the proof of the theorem. + +# A.3 Proof of Thm. 2 + +Proof. 
Notice that fixing a specific value for the variables $(U_{1}, \ldots, U_{k}) = (u_{1}, \ldots, u_{k})$ also gives a unique value for the variables $(Z_{1}, \ldots, Z_{k}) = (z_{1}, \ldots, z_{k})$. Therefore, we can write

$$
\begin{array}{l} P\left(y \mid x^{U_{[i]}}\right) = \sum_{u_{[i]}} P\left(u_{[i]}\right) P\left(y \mid x, u_{[i]}\right) (43) \\ = \sum_{u_{[i]}} P\left(u_{[i]}\right) P\left(y \mid x, u_{[i]}, z_{[i]}\left(u_{[i]}\right)\right) (44) \\ = \sum_{z_{[i]}} \sum_{u_{[i]}} P\left(u_{[i]}\right) \mathbb{1}\left(Z_{[i]}\left(u_{[i]}\right) = z_{[i]}\right) P(y \mid x, z_{[i]}) (45) \\ = \sum_{z_{[i]}} P\left(z_{[i]}\right) P\left(y \mid x, z_{[i]}\right) (46) \\ = \sum_{z} P(y \mid x, z) P\left(z_{-[i]} \mid x, z_{[i]}\right) P\left(z_{[i]}\right). (47) \end{array}
$$

□

The above proof makes use of the fact that the exogenous variables $U_{i}$ are considered in the topological ordering in the decomposition in Eq. 22, since in this case a fixed value of $u_{[i]}$ implies a fixed value of $z_{[i]}$. However, when considering decompositions that do not follow a topological ordering, this is not the case, and we lose the identifiability property of the corresponding effects, as shown in the example in Appendix B.2.

# A.4 Proof of Thm. 3

Proof. The proof is analogous to the proof of Thm. 1, the only difference being that there is no longer a one-to-one correspondence of the latent variables $U_{i}$ with the observed confounders $Z_{i}$. Rather, each $U_{i}$ may correspond to one or more $Z_{i}$ variables.
However, we still have that

$$
\sum_ {i = 0} ^ {k - 1} \text {Exp-SE} _ {x} ^ {U _ {[ i ]}, U _ {[ i + 1 ]}} (y) = \sum_ {i = 0} ^ {k - 1} P \left(y \mid x ^ {U _ {[ i ]}}\right) - P \left(y \mid x ^ {U _ {[ i + 1 ]}}\right) \tag {48}
$$

is a telescoping sum, and thus we have that

$$
\begin{array}{l} \sum_ {i = 0} ^ {k - 1} \text {Exp-SE} _ {x} ^ {U _ {[ i ]}, U _ {[ i + 1 ]}} (y) = \sum_ {i = 0} ^ {k - 1} P \left(y \mid x ^ {U _ {[ i ]}}\right) - P \left(y \mid x ^ {U _ {[ i + 1 ]}}\right) (49) \\ = P \left(y \mid x ^ {\emptyset}\right) - P \left(y \mid x ^ {U _ {[ k ]}}\right) (50) \\ = P (y \mid x) - P \left(y _ {x}\right) (51) \\ = \text {Exp-SE} _ {x} (y), (52) \\ \end{array}
$$

completing the proof of the theorem.

# A.5 Proof of Thm. 4

Proof. Let $U_{PA}$ be the set of exogenous variables not updated according to evidence, and suppose that (i) $X, Y \notin \mathrm{AS}(U_{PA})$; (ii) $U_{PA} = \mathrm{an}^{\mathrm{ex}}(\mathrm{AS}(U_{PA}))$. Note that

$$
\begin{array}{l} P \left(y \mid x ^ {U _ {P A}}\right) \stackrel {\text {(def)}} {=} \sum_ {u _ {P A}} P \left(u _ {P A}\right) P \left(y \mid x, u _ {P A}\right) (53) \\ = \sum_ {u _ {P A}, z _ {A S}} P \left(u _ {P A}\right) P \left(y \mid x, u _ {P A}, z _ {A S}\right) P \left(z _ {A S} \mid x, u _ {P A}\right), (54) \\ \end{array}
$$

where $Z_{AS}$ is the anchor set of $U_{PA}$ and the second line follows from the law of total probability. Consider any exogenous ancestor of $Z_{AS}$, denoted by $U_{z}$. By condition (ii) of ancestral closure, $U_{z}$ must be in $U_{PA}$. Therefore, $U_{PA}$ contains all exogenous ancestors of $Z_{AS}$. Consequently, a fixed value of $u_{PA}$ also implies a value of $Z_{AS}$, labeled $z_{AS}$.
This means that

$$
P \left(z _ {A S} \mid x, u _ {P A}\right) = \mathbb {1} \left(Z _ {A S} \left(u _ {P A}\right) = z _ {A S}\right). \tag {55}
$$

Next, suppose there is an open path from $U_{PA}$ to $Y$ when conditioning on $X$, $Z_{AS}$, labeled $U_{PA,i} \rightarrow Z_{s} - Z_{s'} - \dots - Y$ (the orientation of the edges after $Z_{s}$ is not yet fixed). By definition of the anchor set, $Z_{AS}$ must contain the first variable on this path, $Z_s$, and $Z_s$ is different from $X, Y$. Consider first the case with the arrow from $Z_s$ outgoing, $U_{PA,i} \rightarrow Z_s \rightarrow \dots \rightarrow Y$. When conditioning on $Z_{AS}$, this path is closed since $Z_s \in Z_{AS}$, yielding a contradiction. Consider then the second case with the arrow incoming into $Z_s$, $U_{PA,i} \rightarrow Z_s \leftarrow Z_{s'} \rightarrow \dots \rightarrow Y$. Since $Z_{s'}$ points to $Z_s$, $Z_{s'}$ differs from $X, Y$. Furthermore, by anchor set exogenous ancestral closure, the exogenous variable of $Z_{s'}$, labeled $U_{s'}$, must also be in $U_{PA}$. Hence, $Z_{AS}$ contains $Z_{s'}$, and $Z_{s'}$ cannot be a collider on this path, so conditioning on $Z_{AS}$ blocks the path, again yielding a contradiction. We conclude that no open path between $U_{PA}$ and $Y$ exists when conditioning on $Z_{AS}, X$. Therefore, it holds that

$$
P (y \mid x, u _ {P A}, z _ {A S}) = P (y \mid x, z _ {A S}). \tag {56}
$$

Finally, by plugging in Eqs. 55-56 into Eq. 54 we obtain that

$$
P (y \mid x ^ {U _ {P A}}) = \sum_ {u _ {P A}, z _ {A S}} P (u _ {P A}) P (y \mid x, z _ {A S}) \mathbb {1} \left(Z _ {A S} \left(u _ {P A}\right) = z _ {A S}\right) \tag {57}
$$

![](images/b2d0826badf14764f898e7d710ddf1f3cd9ea8d2c595f9318b64d6c05a8b12d.jpg)
Figure 7: Markovian causal diagram used in Ex. 4 with explicitly drawn latent variables $U_{1}, U_{2}$.
$$
= \sum_ {z _ {A S}} P (y \mid x, z _ {A S}) \underbrace {\sum_ {u _ {P A}} P \left(u _ {P A}\right) \mathbb {1} \left(Z _ {A S} \left(u _ {P A}\right) = z _ {A S}\right)} _ {P \left(z _ {A S}\right) \text { by definition}} \tag {58}
$$

$$
= \sum_ {z _ {A S}} P (y \mid x, z _ {A S}) P (z _ {A S}), \tag {59}
$$

therefore witnessing identifiability of $P(y \mid x^{U_{PA}})$ and completing the proof.

# B Examples

# B.1 Markovian Decomposition Example

Example 4 (Latent variable attribution in a Markovian model). Consider the following SCM $\mathcal{M}^*$ :

$$
\mathcal {M} ^ {*}: \left\{ \begin{array}{l l} Z _ {1} \leftarrow B (0.5) & (60) \\ Z _ {2} \leftarrow B \left(0.4 + 0.2 Z _ {1}\right) & (61) \\ X \leftarrow B \left(0.3 + 0.2 Z _ {1} + 0.2 Z _ {2}\right) & (62) \\ Y \leftarrow X + Z _ {1} + Z _ {2}, & (63) \end{array} \right.
$$

and the causal diagram in Fig. 7. We wish to decompose the quantity $\text{Exp-SE}_x(y)$ into the variations attributed to the latent variables $U_1, U_2$ . Following the decomposition from Thm. 1 we can write

$$
\begin{array}{l} \text {Exp-SE} _ {x} (y \mid x _ {1}) = \underbrace {\mathbb {E} (y \mid x _ {1}) - \mathbb {E} (y \mid x _ {1} ^ {U _ {1}})} _ {U _ {1} \text { contribution}} \tag {64} \\ + \underbrace {\mathbb {E} (y \mid x _ {1} ^ {U _ {1}}) - \mathbb {E} (y \mid x _ {1} ^ {U _ {1} , U _ {2}})} _ {U _ {2} \text { contribution}}. \\ \end{array}
$$

We now need to compute the terms appearing in Eq. 64.
In particular, we know that + +$$ +\begin{array}{l} \mathbb {E} \left(y \mid x _ {1} ^ {U _ {1}, U _ {2}}\right) = \mathbb {E} \left(y \mid d o \left(x _ {1}\right)\right) (65) \\ = 1 + \mathbb {E} \left(Z _ {1} \mid d o \left(x _ {1}\right)\right) + \mathbb {E} \left(Z _ {2} \mid d o \left(x _ {1}\right)\right) (66) \\ = 1 + \mathbb {E} \left(Z _ {1}\right) + \mathbb {E} \left(Z _ {2}\right) = 1 + 0. 5 + 0. 5 = 2. (67) \\ \end{array} +$$ + +Similarly, we can also compute + +$$ +\mathbb {E} (y \mid x _ {1}) = 1 + P \left(Z _ {1} = 1 \mid x _ {1}\right) + P \left(Z _ {2} = 1 \mid x _ {1}\right), \tag {68} +$$ + +where $P(Z_{1} = 1 \mid x_{1})$ can be expanded as + +$$ +\begin{array}{l} P \left(Z _ {1} = 1 \mid x _ {1}\right) = \frac {P \left(Z _ {1} = 1 , X = 1\right)}{P (X = 1)} (69) \\ = \frac {P \left(Z _ {1} = 1 , X = 1 , Z _ {2} = 1\right) + P \left(Z _ {1} = 1 , X = 1 , Z _ {2} = 0\right)}{P (X = 1)} (70) \\ = \frac {0 . 5 * 0 . 6 * 0 . 7 + 0 . 5 * 0 . 4 * 0 . 5}{0 . 5} = 0. 6 2. (71) \\ \end{array} +$$ + +The value of $P(Z_{2} = 1 \mid x_{1})$ is computed analogously and also equals 0.62, implying that $\mathbb{E}(y \mid x_{1}) = 1 + 0.62 + 0.62 = 2.24$ . Finally, we want to compute $\mathbb{E}(y \mid x_{1}^{U_{1}})$ , which equals + +$$ +\mathbb {E} \left(y \mid x _ {1} ^ {U _ {1}}\right) = 1 + P \left(Z _ {1} = 1 \mid x _ {1} ^ {U _ {1}}\right) + P \left(Z _ {2} = 1 \mid x _ {1} ^ {U _ {1}}\right). \tag {72} +$$ + +![](images/14e3ab344f9c28533dbbf54e557dc7419c950abfa7510cd805b95aa7c11392b2.jpg) +(a) $\operatorname{Exp} - \operatorname{SE}_x^{\emptyset, U_1}(y)$ . + +![](images/a9a633b3c462069c40eefdcf1763f7b316619f2a9cd123fc7c4838dfb69d484e.jpg) +(b) $\operatorname{Exp - SE}_x^{U_1,\{U_1,U_2\}}(y)$ . +Figure 8: Graphical representation of Exp-SE effect decomposition in Ex. 4. + +By definition, $P(Z_{1} = 1 \mid x_{1}^{U_{1}}) = P(Z_{1} = 1) = 0.5$ . 
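Before completing the decomposition, the two anchor quantities computed above, $\mathbb{E}(y \mid x_1) = 2.24$ and $\mathbb{E}(y \mid do(x_1)) = 2$, can be cross-checked by exhaustively enumerating the SCM of Ex. 4 (a sketch; the helper names below are ours, not the paper's):

```python
from itertools import product

# Brute-force cross-check of Ex. 4.
# SCM: Z1 ~ B(0.5), Z2 ~ B(0.4 + 0.2*Z1), X ~ B(0.3 + 0.2*Z1 + 0.2*Z2), Y = X + Z1 + Z2.

def p_joint(z1, z2, x):
    """P(Z1=z1, Z2=z2, X=x) by the chain rule of the SCM."""
    p_z1 = 0.5
    q2 = 0.4 + 0.2 * z1
    p_z2 = q2 if z2 == 1 else 1 - q2
    qx = 0.3 + 0.2 * z1 + 0.2 * z2
    p_x = qx if x == 1 else 1 - qx
    return p_z1 * p_z2 * p_x

# Conditional expectation E(y | x1): weight Y = 1 + z1 + z2 by P(z1, z2 | X = 1).
num = sum(p_joint(z1, z2, 1) * (1 + z1 + z2) for z1, z2 in product((0, 1), repeat=2))
den = sum(p_joint(z1, z2, 1) for z1, z2 in product((0, 1), repeat=2))
cond = num / den

# Interventional expectation E(y | do(x1)): Z1, Z2 keep their observational law.
def p_z(z1, z2):
    q2 = 0.4 + 0.2 * z1
    return 0.5 * (q2 if z2 == 1 else 1 - q2)

interv = sum(p_z(z1, z2) * (1 + z1 + z2) for z1, z2 in product((0, 1), repeat=2))

print(round(cond, 2), round(interv, 2), round(cond - interv, 2))  # 2.24 2.0 0.24
```

The difference of the two quantities recovers the total spurious effect $\text{Exp-SE}_x(y \mid x_1) = 0.24$ that the remaining terms of Eq. 64 decompose.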
For $P(Z_{2} = 1 \mid x_{1}^{U_{1}})$ we write

$$
\begin{array}{l} P \left(Z _ {2} = 1 \mid x _ {1} ^ {U _ {1}}\right) = \sum_ {z _ {1}} P \left(Z _ {2} = 1 \mid x _ {1}, z _ {1}\right) P \left(z _ {1}\right) (73) \\ = \frac {1}{2} \left[ \frac {P \left(Z _ {2} = 1 , X = 1 , Z _ {1} = 1\right)}{P \left(X = 1 , Z _ {1} = 1\right)} + \frac {P \left(Z _ {2} = 1 , X = 1 , Z _ {1} = 0\right)}{P \left(X = 1 , Z _ {1} = 0\right)} \right] (74) \\ = \frac {1}{2} \left[ \frac {0.21}{0.31} + \frac {0.10}{0.19} \right] \approx 0.60, (75) \\ \end{array}
$$

implying that $\mathbb{E}(y\mid x_1^{U_1}) \approx 2.10$. Putting everything together, we found that

$$
\underbrace {\text {Exp-SE} _ {x} (y \mid x _ {1})} _ {= 0.24} = \underbrace {\text {Exp-SE} _ {x} ^ {\emptyset , U _ {1}} (y \mid x _ {1})} _ {\approx 0.14 \text { from } U _ {1}} + \underbrace {\text {Exp-SE} _ {x} ^ {U _ {1} , \{U _ {1} , U _ {2} \}} (y \mid x _ {1})} _ {\approx 0.10 \text { from } U _ {2}}. \tag {76}
$$

The terms appearing on the r.h.s. of Eq. 76 are shown as graphical contrasts in Fig. 8. On the left side of Fig. 8a, $U_{1}, U_{2}$ are responding to the conditioning $X = x$, compared against the right side where only $U_{2}$ is responding to the conditioning $X = x$. In the second term, in Fig. 8b, on the left only $U_{2}$ responds to $X = x$, compared against the right side in which neither $U_{1}$ nor $U_{2}$ respond to $X = x$ conditioning.

# B.2 Non-topological Counterexample

Example 5 (Non-identification of latent spurious decomposition). Consider two SCMs $\mathcal{M}_1, \mathcal{M}_2$. Both SCMs have the same set of assignment equations $\mathcal{F}$ given by

$$
\mathcal {F} := \left\{ \begin{array}{l} Z _ {1} \leftarrow U _ {1} \\ Z _ {2} \leftarrow \left\{ \begin{array}{l l} Z _ {1} & \text {if } U _ {2} = 1 \\ 1 - Z _ {1} & \text {if } U _ {2} = 2 \\ 1 & \text {if } U _ {2} = 3 \\ 0 & \text {if } U _ {2} = 4 \end{array} \right.
\\ X \leftarrow \left(Z _ {1} \wedge U _ {X 1}\right) \vee \left(Z _ {2} \wedge U _ {X 2}\right) \vee U _ {X} \\ Y \leftarrow X + Z _ {1} + Z _ {2}, \end{array} \right. \tag {78}
$$

and the causal diagram given in Fig. 7. The two SCMs differ in the distribution over the latent variables. In particular, for $\mathcal{M}_1$ we have

$$
P ^ {\mathcal {M} _ {1}} (U): \left\{ \begin{array}{c} U _ {1}, U _ {X 1}, U _ {X 2}, U _ {X} \sim \text {Bernoulli} (0.5) \\ U _ {2} \sim \operatorname {Multinom} \left(4, 1, \left(0, \frac {1}{4}, \frac {1}{2}, \frac {1}{4}\right)\right), \end{array} \right. \tag {81}
$$

and for $\mathcal{M}_2$

$$
P ^ {\mathcal {M} _ {2}} (U): \left\{ \begin{array}{c} U _ {1}, U _ {X 1}, U _ {X 2}, U _ {X} \sim \text {Bernoulli} (0.5) \\ U _ {2} \sim \operatorname {Multinom} \left(4, 1, \left(\frac {1}{4}, \frac {1}{2}, \frac {1}{4}, 0\right)\right). \end{array} \right. \tag {83}
$$

That is, the only difference between $P^{\mathcal{M}_1}(U)$ and $P^{\mathcal{M}_2}(U)$ is in how $U_2$ attains its value. In fact, one can check that the observational distributions $P^{\mathcal{M}_1}(V)$ and $P^{\mathcal{M}_2}(V)$ are the same. However, when computing $\mathbb{E}^{\mathcal{M}}(y\mid x_0^{U_2})$ we have that

$$
\mathbb {E} ^ {\mathcal {M} _ {1}} \left(y \mid x _ {0} ^ {U _ {2}}\right) = 1 \tag {85}
$$

$$
\mathbb {E} ^ {\mathcal {M} _ {2}} \left(y \mid x _ {0} ^ {U _ {2}}\right) = 0.93, \tag {86}
$$

showing that the quantity $\mathbb{E}^{\mathcal{M}}(y\mid x_0^{U_2})$ is non-identifiable.

![](images/4af1defee27ac3bedfd0c6c0479383efea0e3ee57125f80c0e2966725a56b485.jpg)
Figure 9: Causal diagram appearing in Exs. 6-7.
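The non-identifiability claim of Ex. 5 can also be checked numerically: enumerating all exogenous configurations under both latent distributions (a sketch; the helper names are ours, not the paper's) confirms that the two observational distributions coincide while $\mathbb{E}(y \mid x_0^{U_2})$ differs.

```python
from itertools import product
from fractions import Fraction as F

# Numerical sanity check of Ex. 5: same P(Z1, Z2, X), different E(y | x0^{U2}).
def mechanism(u1, u2, ux1, ux2, ux):
    z1 = u1
    z2 = {1: z1, 2: 1 - z1, 3: 1, 4: 0}[u2]
    x = (z1 & ux1) | (z2 & ux2) | ux
    return z1, z2, x

P1_u2 = {1: F(0), 2: F(1, 4), 3: F(1, 2), 4: F(1, 4)}   # U2 law in M1
P2_u2 = {1: F(1, 4), 2: F(1, 2), 3: F(1, 4), 4: F(0)}   # U2 law in M2

def obs_dist(p_u2):
    """Observational distribution P(Z1, Z2, X); the binary U's are fair coins."""
    d = {}
    for u1, ux1, ux2, ux in product((0, 1), repeat=4):
        for u2, p2 in p_u2.items():
            v = mechanism(u1, u2, ux1, ux2, ux)
            d[v] = d.get(v, F(0)) + F(1, 16) * p2
    return d

def e_y_x0_U2(p_u2):
    """E(y | x0^{U2}) = sum_{u2} P(u2) E(Y | X=0, u2), with Y = X + Z1 + Z2."""
    total = F(0)
    for u2, p2 in p_u2.items():
        num = den = F(0)
        for u1, ux1, ux2, ux in product((0, 1), repeat=4):
            z1, z2, x = mechanism(u1, u2, ux1, ux2, ux)
            if x == 0:
                num += F(1, 16) * (z1 + z2)
                den += F(1, 16)
        total += p2 * (num / den)
    return total

assert obs_dist(P1_u2) == obs_dist(P2_u2)            # same observational data
print(e_y_x0_U2(P1_u2), float(e_y_x0_U2(P2_u2)))     # 1 and ~0.9333 (≈ 0.93)
```

Exact rational arithmetic (`fractions.Fraction`) makes the equality check of the two observational distributions exact rather than approximate.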
The example illustrates that even in the Markovian case, when the variables are not considered in a topological order (in the example above, the variable $U_{2}$ was considered without the variable $U_{1}$ being added first), we might not be able to identify the decomposition of the spurious effects.

# B.3 Semi-Markovian Decomposition Example

Example 6 (Semi-Markovian spurious decomposition). Consider the following SCM $\mathcal{M}$ :

$$
\mathcal {F}, P (U): \left\{ \begin{array}{l} Z _ {1} \leftarrow U _ {1} \wedge U _ {1 X} \\ Z _ {2} \leftarrow U _ {2} \vee U _ {2 X} \\ X \leftarrow U _ {X} \wedge \left(U _ {1 X} \vee U _ {2 X}\right) \\ Y \leftarrow X + Z _ {1} + Z _ {2} \\ U _ {1}, U _ {2}, U _ {1 X}, U _ {2 X}, U _ {X} \stackrel {i.i.d.}{\sim} \text {Bernoulli} (0.5). \end{array} \right. \tag {87}
$$

The causal diagram $\mathcal{G}$ associated with $\mathcal{M}$ is given in Fig. 9. The exogenous variables that lie on top of a spurious trek are $U_{1X}, U_{2X}$ . Therefore, following the decomposition from Thm. 3, we can attribute spurious variations to these two variables:

$$
\begin{array}{l} \text {Exp-SE} _ {x} (y \mid x _ {1}) = \underbrace {\mathbb {E} (y \mid x _ {1}) - \mathbb {E} (y \mid x _ {1} ^ {U _ {1 X}})} _ {U _ {1 X} \text { contribution}} \tag {92} \\ + \underbrace {\mathbb {E} (y \mid x _ {1} ^ {U _ {1 X}}) - \mathbb {E} (y \mid x _ {1} ^ {U _ {1 X} , U _ {2 X}})} _ {U _ {2 X} \text { contribution}}. \\ \end{array}
$$

We now compute the terms appearing in Eq. 92. In particular, we know that

$$
\begin{array}{l} \mathbb {E} \left(y \mid x _ {1} ^ {U _ {1 X}, U _ {2 X}}\right) = \mathbb {E} \left(y \mid do \left(x _ {1}\right)\right) = 1 + \mathbb {E} \left(Z _ {1} \mid do \left(x _ {1}\right)\right) + \mathbb {E} \left(Z _ {2} \mid do \left(x _ {1}\right)\right) (93) \\ = 1 + \mathbb {E} \left(Z _ {1}\right) + \mathbb {E} \left(Z _ {2}\right) = 1 + 0.25 + 0.75 = 2.
(94) \\ \end{array}
$$

Similarly, we can also compute

$$
\mathbb {E} (y \mid x _ {1}) = 1 + P \left(Z _ {1} = 1 \mid x _ {1}\right) + P \left(Z _ {2} = 1 \mid x _ {1}\right). \tag {95}
$$

Now, $P(Z_{1} = 1 \mid x_{1}) = \frac{P(Z_{1} = 1, x_{1})}{P(x_{1})}$ , and we know that $X = 1$ if and only if $U_{X} = 1$ and $U_{1X} \vee U_{2X} = 1$ , which happen independently with probabilities $\frac{1}{2}$ and $\frac{3}{4}$ , respectively. Next, $Z_{1} = 1, X = 1$ happens if and only if $U_{X} = 1, U_{1X} = 1$ and $U_{1} = 1$ , which happens with probability $\frac{1}{8}$ . Therefore, we can compute

$$
P \left(Z _ {1} = 1 \mid x _ {1}\right) = \frac {\frac {1}{8}}{\frac {1}{2} * \frac {3}{4}} = \frac {1}{3}. \tag {96}
$$

Furthermore, we similarly compute that $Z_{2} = 1$ , $X = 1$ happens if either $U_{X} = 1$ , $U_{2X} = 1$ or $U_{X} = 1$ , $U_{2X} = 0$ , $U_{2} = 1$ , $U_{1X} = 1$ which happens disjointly with probabilities $\frac{1}{4}$ , $\frac{1}{16}$ , respectively. Therefore,

$$
P \left(Z _ {2} = 1 \mid x _ {1}\right) = \frac {\frac {1}{4} + \frac {1}{16}}{\frac {1}{2} * \frac {3}{4}} = \frac {5}{6}. \tag {97}
$$

Putting everything together we obtain that

$$
\mathbb {E} (y \mid x _ {1}) = 1 + \frac {1}{3} + \frac {5}{6} = \frac {13}{6}. \tag {98}
$$

Finally, we want to compute $\mathbb{E}(y\mid x_1^{U_{1X}})$ , which equals

$$
\mathbb {E} \left(y \mid x _ {1} ^ {U _ {1 X}}\right) = 1 + P \left(Z _ {1} = 1 \mid x _ {1} ^ {U _ {1 X}}\right) + P \left(Z _ {2} = 1 \mid x _ {1} ^ {U _ {1 X}}\right). \tag {99}
$$

Now, to evaluate these expressions, we distinguish two cases, namely (i) $U_{1X} = 1$ and (ii) $U_{1X} = 0$ . In the first case, $P(Z_1 = 1 \mid x_1) = \frac{1}{2}$ and $P(Z_2 = 1 \mid x_1) = \frac{3}{4}$ . In the second case, $P(Z_1 = 1 \mid x_1) = 0$ and $P(Z_2 = 1 \mid x_1) = 1$ .
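The conditional expectations derived in this example can be verified by enumerating the five Bernoulli exogenous variables directly (a brute-force sketch; the helper names are ours, not the paper's):

```python
from fractions import Fraction as F
from itertools import product

# Enumeration check for Ex. 6.
# Mechanisms: Z1 = U1 AND U1X, Z2 = U2 OR U2X, X = UX AND (U1X OR U2X), Y = X + Z1 + Z2.
def mech(u1, u2, u1x, u2x, ux):
    z1 = u1 & u1x
    z2 = u2 | u2x
    x = ux & (u1x | u2x)
    return z1, z2, x

configs = list(product((0, 1), repeat=5))  # all 32 equiprobable latent states

def e_y_given_x1(keep):
    """E(Y | X=1) restricted to latent states satisfying `keep`."""
    num = den = F(0)
    for u in configs:
        z1, z2, x = mech(*u)
        if x == 1 and keep(u):
            num += x + z1 + z2
            den += 1
    return num / den

assert e_y_given_x1(lambda u: True) == F(13, 6)          # E(y | x1)

# E(y | do(x1)) = 1 + E[Z1] + E[Z2] over all latent states
interv = F(sum(1 + (u[0] & u[2]) + (u[1] | u[3]) for u in configs), 32)
assert interv == 2

# Partial abduction over U1X: average the two conditional cases of the text
e_u1x1 = e_y_given_x1(lambda u: u[2] == 1)   # 9/4, i.e. 1 + 1/2 + 3/4
e_u1x0 = e_y_given_x1(lambda u: u[2] == 0)   # 2,   i.e. 1 + 0 + 1
print(F(1, 2) * (e_u1x1 + e_u1x0))           # 17/8
```

The two restricted expectations match the case analysis above, and their average gives $\mathbb{E}(y \mid x_1^{U_{1X}}) = \frac{17}{8}$, as computed next.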
Therefore, we can compute

$$
P \left(Z _ {1} = 1 \mid x _ {1} ^ {U _ {1 X}}\right) = \frac {1}{2} P _ {U _ {1 X} = 1} \left(Z _ {1} \mid x _ {1}\right) + \frac {1}{2} P _ {U _ {1 X} = 0} \left(Z _ {1} \mid x _ {1}\right) = \frac {1}{4} \tag {100}
$$

$$
P \left(Z _ {2} = 1 \mid x _ {1} ^ {U _ {1 X}}\right) = \frac {1}{2} P _ {U _ {1 X} = 1} \left(Z _ {2} \mid x _ {1}\right) + \frac {1}{2} P _ {U _ {1 X} = 0} \left(Z _ {2} \mid x _ {1}\right) = \frac {7}{8}, \tag {101}
$$

which implies that $\mathbb{E}(y\mid x_1^{U_{1X}}) = \frac{17}{8}$ . Finally, this implies that

$$
\underbrace {\text {Exp-SE} _ {x} \left(y \mid x _ {1}\right)} _ {= \frac {1}{6}} = \underbrace {\text {Exp-SE} _ {x} ^ {\emptyset , U _ {1 X}} \left(y \mid x _ {1}\right)} _ {= \frac {1}{24} \text { from } U _ {1 X}} + \underbrace {\text {Exp-SE} _ {x} ^ {U _ {1 X} , \left\{U _ {1 X} , U _ {2 X} \right\}} \left(y \mid x _ {1}\right)} _ {= \frac {1}{8} \text { from } U _ {2 X}}. \tag {102}
$$

The terms appearing on the r.h.s. of Eq. 102 are shown as graphical contrasts in Fig. 4. On the left side of Fig. 4a, $U_{1X}, U_{2X}$ are responding to the conditioning $X = x$, compared against the right side where only $U_{2X}$ is responding to the conditioning $X = x$. In the second term, in Fig. 4b, on the left only $U_{2X}$ responds to $X = x$, compared against the right side in which neither $U_{1X}$ nor $U_{2X}$ respond to $X = x$ conditioning.

# B.4 Semi-Markovian Identification Examples

Example 7 (Spurious Treks). Consider the causal diagram in Fig. 7. In the diagram, latent variables $U_{1}, U_{2}$ both lie on top of a spurious trek because:

$$
X \leftarrow Z _ {1} \leftarrow U _ {1} \rightarrow Z _ {1} \rightarrow Y \text { is a spurious trek with top } U _ {1}
$$

$$
X \leftarrow Z _ {2} \leftarrow U _ {2} \rightarrow Z _ {2} \rightarrow Y \text { is a spurious trek with top } U _ {2}.
$$

There are also other spurious treks with $U_{1}$ on top, such as $X \gets Z_{1} \gets U_{1} \to Z_{1} \to Z_{2} \to Y$.

Example 7 (continued - $U_{sToT}$ construction). We continue with Ex. 6 and the causal graph in Fig. 9 and perform the steps as follows:

(i) initialize $U_{sToT} = \emptyset$

(ii) note that $\{X,Z_1\}$ form a maximal clique, since:

(a) they are connected with a bidirected edge and thus form a clique,
(b) $\{X,Z_1,Z_2\}$ do not form a clique, due to the bidirected edge $Z_{1}\leftrightarrow Z_{2}$ not being present,
(c) $\{X,Z_1,Y\}$ , $\{X,Z_1,Z_2,Y\}$ do not form a clique, since $Y$ is not incident to any bidirected edges,
(d) thus, the clique $\{X,Z_1\}$ is also maximal.

Let the variable $U_{1X}$ be associated with this clique, pointing to $X$ , $Z_1$ , and note that $U_{1X}$ lies on top of a spurious trek between $X, Y$ ,

(iii) similarly, $\{X,Z_2\}$ also form a maximal clique, associated with the variable $U_{2X}$ , pointing to $X,Z_2$ , that lies on top of a spurious trek between $X,Y$ ,
(iv) the node $Y$ also forms a maximal clique and is associated with the variable $U_{Y}$ that does not lie on top of a spurious trek (it does not have a path to $X$ ).

Therefore, we have constructed the set $U_{sToT} = \{U_{1X}, U_{2X}\}$ .

![](images/20dce31e5c3791dd93f2866397ffdb7d4a4d7a898d4403ee3cb408748918cfb7.jpg)
(a) Causal diagram in Ex. 8.

![](images/bdbf6721063df8ff5d37364fdd4220d728428a59190b10f5a6cfeb70e0167fb5.jpg)
(b) Causal diagram in Ex. 9.
Figure 10: Causal diagrams in Exs. 8-9.

Example 7 (continued - anchor set). For the set $U_{sToT} = \{U_{1X}, U_{2X}\}$ associated with the causal diagram in Fig. 9, the anchor sets can be computed as follows:

$$
AS \left(U _ {1 X}\right) = \{X, Z _ {1} \}, \tag {103}
$$

$$
AS \left(U _ {2 X}\right) = \{X, Z _ {2} \}, \tag {104}
$$

$$
AS \left(U _ {1 X}, U _ {2 X}\right) = \{X, Z _ {1}, Z _ {2} \}.
\tag {105} +$$ + +Example 7 (continued - anchor set exogenous ancestral closure). Consider the causal diagram in Fig. 9. With respect to the diagram, we have that + +$$ +\left. \operatorname {a n} ^ {e x} \left(A S \left(U _ {1 X}\right)\right) = \operatorname {a n} ^ {e x} (X, Z _ {1}) = \left\{U _ {1 X}, U _ {2 X} \right\}, \right. \tag {106} +$$ + +$$ +\left. \operatorname {a n} ^ {e x} \left(A S \left(U _ {2 X}\right)\right) = \operatorname {a n} ^ {e x} (X, Z _ {2}) = \left\{U _ {1 X}, U _ {2 X} \right\}, \right. \tag {107} +$$ + +$$ +\operatorname {a n} ^ {e x} \left(A S \left(\left\{U _ {1 X}, U _ {2 X} \right\}\right)\right) = \operatorname {a n} ^ {e x} (X, Z _ {1}, Z _ {2}) = \left\{U _ {1 X}, U _ {2 X} \right\}. \tag {108} +$$ + +Therefore, $\{U_{1X}, U_{2X}\}$ satisfies anchor set exogenous ancestral closure, whereas $U_{1X}$ and $U_{2X}$ do not, since for instance $U_{1X}$ has $X$ in its anchor set, but $X$ has $U_{2X}$ as its ancestor. + +We now consider an example of effect identification based on Thm. 4: + +Example 8 (Thm. 4 Application). Consider the causal diagram in Fig. 10a. Consider the query $\mathbb{E}(y \mid x_1^{U_{12}})$ associated with a partially abducted submodel in which the noise variable $U_{12}$ determining the values of $Z_1, Z_2$ is not updated according to evidence. Based on Thm. 4, we verify that + +(i) $X,Y$ are not in the anchor set $AS(U_{12}) = \{Z_1,Z_2\}$ +(ii) $\mathrm{an}^{ex}(AS(U_{12})) = \mathrm{an}^{ex}(Z_1,Z_2) = U_{12}$ meaning that $U_{12}$ satisfies anchor set exogenous ancestral closure (ASEAC). + +Therefore, the query $\mathbb{E}(y\mid x_1^{U_{12}})$ is identifiable from observational data. 
To witness, we expand the query as:

$$
\begin{array}{l} \mathbb {E} \left(y \mid x _ {1} ^ {U _ {1 2}}\right) = \sum_ {u _ {1 2}} P \left(u _ {1 2}\right) \mathbb {E} \left(y \mid x _ {1}, u _ {1 2}\right) (109) \\ = \sum_ {u _ {1 2}, z _ {1}, z _ {2}} P (u _ {1 2}) \mathbb {E} (y \mid x _ {1}, u _ {1 2}, z _ {1}, z _ {2}) \mathbb {1} \left(Z _ {1} (u _ {1 2}) = z _ {1}, Z _ {2} (u _ {1 2}) = z _ {2}\right) (110) \\ = \sum_ {u _ {1 2}, z _ {1}, z _ {2}} P (u _ {1 2}) \mathbb {E} (y \mid x _ {1}, z _ {1}, z _ {2}) \mathbb {1} \left(Z _ {1} \left(u _ {1 2}\right) = z _ {1}, Z _ {2} \left(u _ {1 2}\right) = z _ {2}\right) (111) \\ = \sum_ {z _ {1}, z _ {2}} \mathbb {E} (y \mid x _ {1}, z _ {1}, z _ {2}) \sum_ {u _ {1 2}} P \left(u _ {1 2}\right) \mathbb {1} \left(Z _ {1} \left(u _ {1 2}\right) = z _ {1}, Z _ {2} \left(u _ {1 2}\right) = z _ {2}\right) (112) \\ = \sum_ {z _ {1}, z _ {2}} \mathbb {E} (y \mid x _ {1}, z _ {1}, z _ {2}) P (z _ {1}, z _ {2}) (113) \\ = \sum_ {z _ {1}, z _ {2}, z _ {3}} \mathbb {E} (y \mid x _ {1}, z _ {1}, z _ {2}, z _ {3}) P \left(z _ {1}, z _ {2}\right) P \left(z _ {3} \mid x _ {1}, z _ {1}, z _ {2}\right), (114) \\ \end{array}
$$

providing an identification expression from observational data.

# C Discussion of Thm. 4

Thm. 4 introduces a sufficient condition for the identification of quantities under partial abduction (Def. 2). Here, we discuss why some of the conditions in the theorem are necessary, and provide an example that further elucidates the theorem's scope.

Example 9 (Non-Identification in Semi-Markovian Models). Consider the causal diagram in Fig. 10b and consider two SCMs $\mathcal{M}_1, \mathcal{M}_2$ constructed as follows. Both SCMs have the same set of assignment equations $\mathcal{F}$ , given by

$$
\mathcal {F} := \left\{ \begin{array}{l l} Z _ {1} \leftarrow \left\{ \begin{array}{l l} 1 & \text {if } U _ {1 X} > 4 \\ 0 & \text {if } U _ {1 X} \leq 4 \end{array} \right.
\\ Z _ {2} \leftarrow U _ {2} \\ X \leftarrow \left\{ \begin{array}{l l} 1 & \text {if } U _ {1 X} \in \{1, 5 \} \\ Z _ {2} & \text {if } U _ {1 X} \in \{2, 6 \} \\ 1 - Z _ {2} & \text {if } U _ {1 X} \in \{3, 7 \} \\ 0 & \text {if } U _ {1 X} \in \{4, 8 \} \end{array} \right. \\ Y \leftarrow Z _ {1} \vee Z _ {2}. \end{array} \right. \tag {118}
$$

The two SCMs differ in the distribution over the latent variables. In particular, for $\mathcal{M}_1$ we have

$$
P ^ {\mathcal {M} _ {1}} (U): \left\{ \begin{array}{l} U _ {2} \sim \text {Bernoulli} (0.6) \\ U _ {1 X} \sim \operatorname {Multinom} \left(8, 1, \left(\frac {1}{8}, \frac {1}{8}, \frac {1}{8}, \frac {1}{8}, \frac {1}{8}, \frac {1}{8}, \frac {1}{8}, \frac {1}{8}\right)\right), \end{array} \right. \tag {119}
$$

and for $\mathcal{M}_2$

$$
P ^ {\mathcal {M} _ {2}} (U): \left\{ \begin{array}{l} U _ {2} \sim \text {Bernoulli} (0.6) \\ U _ {1 X} \sim \operatorname {Multinom} \left(8, 1, \left(0, \frac {1}{4}, \frac {1}{4}, 0, 0, \frac {1}{4}, \frac {1}{4}, 0\right)\right). \end{array} \right. \tag {122}
$$

That is, the only difference between $P^{\mathcal{M}_1}(U)$ and $P^{\mathcal{M}_2}(U)$ is in how $U_{1X}$ attains its value. Furthermore, we can verify that $\mathcal{M}_1$ and $\mathcal{M}_2$ generate the same observational distribution, given by the following distribution table
| $Z_1$ | $Z_2$ | $X$ | $P(Z_1, Z_2, X)$ |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 0.10 |
| 1 | 0 | 0 | 0.10 |
| 0 | 1 | 0 | 0.15 |
| 1 | 1 | 0 | 0.15 |
| 0 | 0 | 1 | 0.10 |
| 1 | 0 | 1 | 0.10 |
| 0 | 1 | 1 | 0.15 |
| 1 | 1 | 1 | 0.15 |
and $Y$ given simply as a deterministic function of $Z_{1}, Z_{2}$. Now, suppose we are interested in computing the conditional probability of $Y$ given $X = x_{1}$ in a partially abducted submodel where $U_{1X}$ does not respond to evidence. The quantity $\mathbb{E}(y \mid x_{1}^{U_{1X}})$ can be computed as

$$
\begin{array}{l} \mathbb {E} \left(y \mid x _ {1} ^ {U _ {1 X}}\right) = \sum_ {u _ {1 x} = 1} ^ {8} P \left(u _ {1 x}\right) \mathbb {E} \left(y \mid x _ {1}, u _ {1 x}\right) (123) \\ = \sum_ {u _ {1 x} = 1} ^ {8} \sum_ {z _ {2}} P \left(u _ {1 x}\right) \mathbb {E} \left(y \mid x _ {1}, u _ {1 x}, z _ {2}\right) P \left(z _ {2} \mid x _ {1}, u _ {1 x}\right) (124) \\ = \sum_ {u _ {1 x} = 1} ^ {8} \sum_ {z _ {2}} P \left(u _ {1 x}\right) P \left(z _ {2} \mid x _ {1}, u _ {1 x}\right) \left[ Z _ {1} \left(u _ {1 x}\right) \vee z _ {2} \right], (125) \\ \end{array}
$$

where $P(Z_{2} = 1 \mid x_{1}, u_{1x}) = 0.6$ for $u_{1x} \in \{1, 4, 5, 8\}$ , $P(Z_{2} = 1 \mid x_{1}, u_{1x}) = 0$ for $u_{1x} \in \{3, 7\}$ , and $P(Z_{2} = 1 \mid x_{1}, u_{1x}) = 1$ for $u_{1x} \in \{2, 6\}$ . Evaluating the expression in Eq. 125 for the two SCMs yields

$$
\mathbb {E} ^ {\mathcal {M} _ {1}} \left(y \mid x _ {1} ^ {U _ {1 X}}\right) = \frac {31}{40} \tag {126}
$$

$$
\mathbb {E} ^ {\mathcal {M} _ {2}} \left(y \mid x _ {1} ^ {U _ {1 X}}\right) = \frac {3}{4}, \tag {127}
$$

demonstrating that the quantity $\mathbb{E}(y\mid x_1^{U_{1X}})$ is not identifiable for the diagram in Fig. 10b.

The above example provides some intuition about why the variable $X$ cannot be in the anchor set of the variables $U_{PA}$ that are not updated according to evidence. Very similarly, an analogous example demonstrating that the variable $Y$ cannot be in the anchor set of $U_{PA}$ can also be constructed.
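The two values in Eqs. 126-127 can be reproduced by evaluating Eq. 125 directly (a sketch with our own helper names; for the degenerate states $u_{1x} \in \{4, 8\}$, under which $X = 1$ is impossible, the conditional $P(z_2 \mid x_1, u_{1x})$ is taken to be the prior $P(z_2)$, matching the values used in the text):

```python
from fractions import Fraction as F

# Cross-check of Ex. 9: evaluate Eq. 125 for both latent distributions over U1X.
def z1_of(u1x):
    return 1 if u1x > 4 else 0

def p_z2_given_x1(u1x, p_u2=F(3, 5)):       # P(U2 = 1) = 0.6
    if u1x in (1, 5):                        # X = 1 regardless of Z2
        return p_u2
    if u1x in (2, 6):                        # X = Z2, so Z2 = 1 given X = 1
        return F(1)
    if u1x in (3, 7):                        # X = 1 - Z2, so Z2 = 0 given X = 1
        return F(0)
    return p_u2                              # u1x in {4, 8}: degenerate case, prior

def e_y(p_u1x):
    total = F(0)
    for u1x, p in p_u1x.items():
        for z2, pz2 in ((1, p_z2_given_x1(u1x)), (0, 1 - p_z2_given_x1(u1x))):
            total += p * pz2 * (z1_of(u1x) | z2)   # y = Z1 OR Z2
    return total

m1 = {u: F(1, 8) for u in range(1, 9)}                 # uniform over {1, ..., 8}
m2 = {2: F(1, 4), 3: F(1, 4), 6: F(1, 4), 7: F(1, 4)}  # support of Eq. 122

print(e_y(m1), e_y(m2))   # 31/40 3/4
```

The two models agree on the observational table above yet disagree here, which is exactly the non-identifiability being demonstrated.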
\ No newline at end of file diff --git a/acausalframeworkfordecomposingspuriousvariations/images.zip b/acausalframeworkfordecomposingspuriousvariations/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a8d57abcc3f19d64e9693ae6f0f679aa8449b630 --- /dev/null +++ b/acausalframeworkfordecomposingspuriousvariations/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82d839f85cd6e2203f62406d3b444370c94d3abfd0b30220b98d7eeac0e8ee3a +size 989701 diff --git a/acausalframeworkfordecomposingspuriousvariations/layout.json b/acausalframeworkfordecomposingspuriousvariations/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f4d21db0ccbb66c5aba283274e68316cd4bf67a4 --- /dev/null +++ b/acausalframeworkfordecomposingspuriousvariations/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5607924e32c02a98b794205bf9132928d210ccba0df8d6360206a2517145a6aa +size 984115 diff --git a/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/4d1e235c-024e-4334-b8e4-8e4815528e0e_content_list.json b/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/4d1e235c-024e-4334-b8e4-8e4815528e0e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b0670891497d408ad5f876c6b59a7b7bbacf92e9 --- /dev/null +++ b/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/4d1e235c-024e-4334-b8e4-8e4815528e0e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7c446ef13f7440e9085d476161883723db407528fbe1b7b667157f345ed45fe +size 71824 diff --git a/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/4d1e235c-024e-4334-b8e4-8e4815528e0e_model.json b/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/4d1e235c-024e-4334-b8e4-8e4815528e0e_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..2e1187d8881fb2bd0606cb4d7fcfc6e6f08cf274 --- /dev/null +++ b/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/4d1e235c-024e-4334-b8e4-8e4815528e0e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:caa91454c7bdf7d23e37383e798b71cb53d2a7b1b45a8265fe2a4f20c6e1d466 +size 92669 diff --git a/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/4d1e235c-024e-4334-b8e4-8e4815528e0e_origin.pdf b/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/4d1e235c-024e-4334-b8e4-8e4815528e0e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7d93e1b9dd1fb4b1ce1b218ca7b7829527906e8c --- /dev/null +++ b/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/4d1e235c-024e-4334-b8e4-8e4815528e0e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49515297c5e2cdcc4a108ef1dd884c1e12e1bacb3e11093c57fca06447306af9 +size 1217932 diff --git a/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/full.md b/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a988d26b3bcfb66d1b3e9b040e27c912b9c94cab --- /dev/null +++ b/acloserlookattherobustnessofcontrastivelanguageimagepretrainingclip/full.md @@ -0,0 +1,234 @@ +# A Closer Look at the Robustness of Contrastive Language-Image Pre-Training (CLIP)

Weijie Tu$^{1}$ Weijian Deng Tom Gedeon$^{2,3}$

$^{1}$The Australian National University $^{2}$Curtin University $^{3}$Óbuda University

firstname_lastname@anu.edu.au firstname_lastname@curtin.edu.au

# Abstract

Contrastive Language-Image Pre-training (CLIP) models have demonstrated remarkable generalization capabilities across multiple challenging distribution shifts. However, there is still much to be explored in terms of their robustness to the variations of specific visual factors.
In real-world applications, reliable and safe systems must consider other safety objectives beyond classification accuracy, such as predictive uncertainty. Yet, the effectiveness of CLIP models on such safety-related features is less explored. Driven by the above, this work comprehensively investigates the safety objectives of CLIP models, specifically focusing on three key properties: resilience to visual factor variations, calibrated uncertainty estimations, and the ability to detect anomalous inputs. To this end, we study 83 CLIP models and 127 ImageNet classifiers. They are diverse in architecture, (pre)training distribution, and training strategies. We consider 10 visual factors (e.g., shape and pattern), 5 types of out-of-distribution data, and 8 natural and challenging test conditions with different shift types, such as texture, style, and perturbation shifts. Our study has unveiled several previously unknown insights into CLIP models. For instance, they are not consistently more calibrated than other ImageNet models, which contradicts existing findings. Additionally, our analysis underscores the significance of training source design by showcasing its profound influence on the three safety-related properties. We believe our comprehensive study can shed light on and help guide the development of more robust and reliable CLIP models.

# 1 Introduction

By leveraging natural language supervision, CLIP has made significant progress in enhancing the zero-shot capabilities of models, unleashing their potential for remarkable out-of-distribution generalization performance [1, 2]. For example, CLIP models perform zero-shot classification without being explicitly trained on the target dataset, and they exhibit strong robustness to challenging natural distributional shifts [3-7]. Understanding such behavior of CLIP models is critical for advancing the next generation of image-text foundation models.
Current research on this topic has explored various aspects of CLIP models, including dataset creation [8], reproducible scaling law [9], fine-tuning approaches [10], and training distribution [11].

In this work, we conduct an in-depth analysis of the safety-related properties of CLIP models, with a particular emphasis on their robustness and reliability across diverse testing environments. Specifically, we delve into the three critical safety-related properties, namely: 1) robustness to visual factors, to assess whether CLIP models can remain robust when encountering varying visual factors, such as pose, size, color, lighting, and occlusions; 2) out-of-distribution (OOD) detection, to evaluate the capability of CLIP models to detect instances with labels that are not part of the training distribution; and 3) predictive uncertainty, to investigate whether CLIP models can provide calibrated predictions that accurately reflect their uncertainty in different testing conditions.

Our research offers a comprehensive examination of the advantages and drawbacks of CLIP models across various critical facets. Building on prior research that has highlighted the robustness of CLIP models, we further study their performance across specific factors such as pose and lighting. Additionally, although CLIP models have demonstrated their efficacy in detecting OOD data [12], it is still uncertain whether this ability remains consistent when the training distribution, fine-tuning datasets, model size, and architecture are altered. Moreover, beyond classification accuracy, it is also important to evaluate whether CLIP models offer reliable uncertainty estimation across various distributions.

In light of the aforementioned questions, we evaluate 51 zero-shot CLIP models with varying visual encoder architectures, training sources, and dataset sizes, as well as 32 ImageNet fine-tuned CLIP models.
To establish a baseline, we compare these models against 127 ImageNet models without language-image pre-training. We examine 10 visual factor variations present in the ImageNet validation set [13], including object pose, lighting, and background, to assess models' visual factor-level robustness. For OOD detection, we employ ImageNet as the in-distribution (ID) set following [12] and test on 5 types of OOD scenarios. Furthermore, to investigate predictive uncertainty, we use a set of canonical ImageNet distribution shifts, such as texture, style, and perturbation shifts. Below we present key observations and insights obtained from our study:

- CLIP models are generally more robust than ImageNet classifiers on 6 visual factors. However, they can be less robust on factors like object pose. In addition, training distribution plays an important role in CLIP robustness against visual factors (Section 4.1).
- CLIP models are biased towards shape when making predictions. However, this bias diminishes after fine-tuning on ImageNet and becomes similar to that of other ImageNet models pre-trained on more data (Section 4.2).
- When trained on the same source, the classification accuracy of CLIP models correlates with their OOD detection performance (Section 5).
- CLIP models are not always more calibrated than other ImageNet models, which contradicts existing findings [14]. Our research highlights the impact of training data distribution and quantity on these observations (Section 6).
- Compared to other groups of models, CLIP maintains reasonably good uncertainty estimates under distribution shift after ID calibration with temperature scaling (Section 6).

# 2 Related Work

Robustness focuses on investigating the resilience of machine learning models to various forms of distribution shift at test time.
To this end, a commonly used approach is to introduce artificial transformations onto images, such as style transfer [15] and corruptions and perturbations [16, 17]. Moreover, many real-world datasets have been introduced to assess model robustness under natural distribution shift [3-7]. For instance, Idrissi et al. [13] propose ImageNet-X, which relabels the ImageNet validation set with detailed annotations for naturally occurring factors such as pose, background, and lighting, to identify models' underlying failure patterns.

OOD detection aims to identify test data that do not belong to any of the classes modeled in the training distribution [18-20]. A large number of methods have been proposed for deep learning models, including generative model-based methods [21-28] and discriminative model-based methods [29, 18, 30, 31, 20, 32]. For example, the maximum softmax probability [18] is used as a metric to detect OOD samples. The above approaches mainly study OOD detection for task-specific models using only visual information. In contrast, as CLIP models gain popularity, zero-shot OOD detection [12] has been proposed, where the objective is to filter out inputs from tasks of disinterest.

Predictive uncertainty aims to produce calibrated prediction probabilities that match the empirical frequency of correctness [33, 34]. Several works improve uncertainty estimates through post-hoc calibration on validation sets [33, 34]. Other works show that calibration can be improved by methods such as ensembling [35] and pre-training [36]. Ovadia et al. [37] point out that calibration methods become less effective under distribution shift. Minderer et al. [14] suggest that CLIP models are well calibrated given their accuracy. Building on these observations, this work comprehensively studies the quality of the predictive uncertainty given by CLIP.
# 3 Experimental Setup

# 3.1 Models of Interest

Contrastive language-image pre-training models: We use 51 zero-shot CLIP models (CLIP) and 32 ImageNet fine-tuned CLIP models (CLIP-FT). They have different visual encoders, including slightly modified ResNet [38], ConvNeXt [39], and ViT [40]. There are two training sources (LAION [41] and WIT [1]) and multiple training dataset sizes, from 80 million to 2 billion. For the CLIP-FT models, the vision tower of CLIP is fine-tuned on ImageNet-1K. We consider two fine-tuning procedures: one fine-tunes directly on ImageNet-1K [42], and the other first fine-tunes on ImageNet-12K, a subset of ImageNet-22K, before fine-tuning on ImageNet-1K. Unless specified otherwise, we use the default prompt templates of [1] for zero-shot CLIP models.

Compared models: We use 127 ImageNet models with various architectures, including convolutional neural networks (e.g., ResNet [38] and ConvNeXt [39]), vision transformers (e.g., ViT [40] and Swin [43]), and all-MLP architectures [44, 45] (e.g., MLP-Mixer [45]). Following [46], we divide them into three categories: (i) Standard models: models supervised on the ImageNet training set. (ii) Contrastive learning models: 8 models pre-trained by contrastive learning, covering 6 training algorithms, including InsDis [47], MoCo [48], and SimCLR [49]. (iii) Pre-trained on more data: models pre-trained on a significantly larger dataset (e.g., ImageNet-21K) than the ImageNet training set. All the above models, including CLIP, are publicly available in TIMM [50] and OpenCLIP [51].

# 3.2 Test Sets and Metrics

Robustness. We first pinpoint failure patterns of models by testing on ImageNet-X [13], a relabelling of the ImageNet validation set with 16 naturally occurring factors.
This work mainly considers the 10 factors labelled with a sufficient number of test samples: Pose, Background, Pattern, Color, Smaller, Shape, Partial View, Subcategory, Texture, and Larger. The metric is accuracy; higher is better. We additionally evaluate on cue-conflict stimuli and Stylized-ImageNet [15] to measure model bias towards shape.

OOD Detection. We use a large-scale OOD detection benchmark built on ImageNet: ImageNet as the in-distribution (ID) set vs. {iNaturalist [52], SUN [53], PLACES [54], TEXTURE [55], and ImageNet-O [7]} as OOD sets. The metrics are the area under the receiver operating characteristic curve (AUROC; higher is better) and the false positive rate at a true positive rate of $95\%$ (FPR@95; lower is better).

Calibration. We study ID and OOD datasets, where the ImageNet validation set is the ID dataset and the OOD datasets are: ImageNet-V2 [3], ImageNet-Rendition [5], ImageNet-Adversarial [7], ImageNet-Sketch [4], ObjectNet [6], and ImageNet-Vid-Robust [56]. The metrics are expected calibration error (ECE) [57] and negative log-likelihood (NLL). A lower ECE or NLL indicates better calibration.

# 3.3 Analytical Methodology

In our endeavor to understand the underlying factors that influence the performance of CLIP models, we delve into six primary aspects: 1) training distribution, evaluating the effect of the data source; 2) model architecture, looking into the potential effects of different structural choices on model performance; 3) dataset quantity, probing the interplay between the amount of data available for training and the model's efficacy; 4) contrastive loss, understanding its specific role in training dynamics; 5) fine-tuning; and 6) test-time prompts, assessing the impact of prompts during evaluation on model outputs. We follow the analytical methodology of the seminal work [46] and a series of follow-up works (e.g., [8, 11, 58]) to study influential factors.
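To make the evaluation metrics of Section 3.2 concrete, here is a minimal pure-Python sketch of AUROC, FPR@95, and ECE on synthetic scores; the helper names and the equal-width 15-bin choice for ECE are ours, not taken from the paper's evaluation code:

```python
# Toy implementations of the metrics from Section 3.2 (helper names are ours).
# AUROC and FPR@95 treat in-distribution (ID) samples as positives
# that should receive higher detection scores than OOD samples.

def auroc(id_scores, ood_scores):
    """Probability that a random ID score exceeds a random OOD score (ties count 0.5)."""
    pairs = [(i, o) for i in id_scores for o in ood_scores]
    wins = sum(1.0 if i > o else 0.5 if i == o else 0.0 for i, o in pairs)
    return wins / len(pairs)

def fpr_at_95_tpr(id_scores, ood_scores):
    """False-positive rate on OOD data at the threshold giving 95% TPR on ID data."""
    thr = sorted(id_scores)[int(0.05 * len(id_scores))]  # ~5th percentile of ID scores
    return sum(o >= thr for o in ood_scores) / len(ood_scores)

def ece(confidences, correct, n_bins=15):
    """Expected calibration error: |accuracy - confidence| averaged over bins."""
    total, err = len(confidences), 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if idx:
            acc = sum(correct[i] for i in idx) / len(idx)
            conf = sum(confidences[i] for i in idx) / len(idx)
            err += len(idx) / total * abs(acc - conf)
    return err
```

As a sanity check, a perfectly separable ID/OOD score split yields AUROC of 1.0 and FPR@95 of 0, while a model that is always correct with confidence 0.9 has an ECE of 0.1.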
Specifically, within the performance trends observed across all models, any factor causing a deviation from these trends is recognized as influential. Notably, in our research, we mainly emphasize and discuss such influential factors within each facet of our investigation.

# 4 Visual Factor-Level Robustness

The unprecedented robustness of CLIP models has spurred intense research efforts to identify the underlying factors responsible for their performance under distribution shifts. Recent studies provide valuable insights into the design of the training source [11, 8]. Our research builds upon previous findings on the robustness of CLIP models and focuses on the potential failure types of the model. Instead of solely measuring overall accuracy across distributions, we investigate the behavior of CLIP models when faced with varying visual factors such as Pose and Background.

![](images/2bde949bd0840ef4f2a7fdd4f22f6e1f2d9f446f8b0c544313d2af03cfb3cc5d.jpg)
Figure 1: Model performance on the subset of ImageNet-X annotated with a given visual factor (y-axis) versus overall accuracy on the whole ImageNet-X (x-axis). Each point represents a model. The x-axis and y-axis are probit-transformed following [46]. The black dashed line represents ideal robust models whose performance on each visual factor equals their overall performance. The blue straight lines are fit with robust linear regression [59]. We include models supervised on ImageNet-1K, models pre-trained on more data, contrastive learning models, CLIP models trained on two data distributions, and their fine-tuned counterparts. We find that CLIP models are generally more robust on six out of ten factors, but are less robust against Pose than other groups of models.

# 4.1 CLIP Models Generally Exhibit Better Factor-Level Robustness Than Other Models

Factor-level effective robustness. In our study, we extend the concept of overall effective robustness [46] to the visual factor level.
Specifically, it measures a model's ability to achieve higher accuracy on the subset annotated with a specific visual factor than expected based on its overall accuracy on ImageNet-X. Figure 1 displays the accuracy on the subset annotated with a specific visual factor relative to the overall accuracy on ImageNet-X.

CLIP models are generally more robust than other ImageNet models on 6 out of 10 visual factors. Figure 1 highlights several insights into the factor-level robustness of CLIP models. First, we find that CLIP models are more robust than other models on six out of ten visual factors: Subcategory, Smaller, Color, Shape, Texture, and Larger. Specifically, CLIP models exhibit higher factor-level effective robustness than other models on each of these factors. Second, we observe that CLIP models are less robust than other models on the Pose and Partial View factors. Third, CLIP models show a similar trend to other models on the Background factor. In addition, Idrissi et al. [13] observe that data augmentations can improve robustness to related factors, but with spill-over effects on unrelated factors. We speculate that the data augmentations used for training CLIP models may introduce similar effects.

Training distributions lead to different trends in CLIP models. The choice of training distribution impacts the factor-level robustness of CLIP models. Specifically, we find that training on different datasets (i.e., LAION and WIT) forms distinct trends on each visual factor for CLIP, and no single training source always leads to higher factor-level robustness than another. For instance,

![](images/f2e4016e692d8b8ad37049cc4d4ec2e00396058195152178e930c3827eb0e49a.jpg)
Figure 2: Shape bias analysis of CLIP, fine-tuned CLIP (CLIP-FT), models pre-trained on more data (Pretrain), and standard models. Larger points denote larger models within each group. We observe that CLIP models are more shape-biased.
| Source | Backbone | Shape bias | IN-Val | SIN |
| --- | --- | --- | --- | --- |
| LAION | ViT/H-14 (336/224) | 0.42 / 0.51 | 0.89 / 0.88 | 0.28 / 0.32 |
| LAION | ViT/L-14 (336/224) | 0.41 / 0.47 | 0.88 / 0.88 | 0.27 / 0.31 |
| LAION | ViT/B-16 (384/224) | 0.35 / 0.43 | 0.87 / 0.86 | 0.23 / 0.25 |
| LAION | ViT/B-32 (384/224) | 0.33 / 0.45 | 0.85 / 0.83 | 0.21 / 0.22 |
| LAION | ConvNeXt-B (384/224) | 0.31 / 0.38 | 0.87 / 0.86 | 0.17 / 0.21 |
| WIT | ViT/L-14 (336/224) | 0.39 / 0.45 | 0.88 / 0.88 | 0.24 / 0.30 |
| WIT | ViT/B-16 (384/224) | 0.35 / 0.41 | 0.87 / 0.86 | 0.22 / 0.23 |
Table 1: The influence of input resolution when fine-tuning CLIP on shape bias, ImageNet-Val(idation) accuracy, and Stylized-ImageNet (SIN) accuracy. The higher value in each model pair is in bold. With the same backbone, the model fine-tuned with a larger input resolution is more accurate on IN-Val but less shape-biased and less accurate on SIN.

we observe that CLIP models trained on LAION demonstrate higher robustness on the Shape factor than those trained on WIT, while the reverse holds for the Background and Pose factors. The results on the Larger factor are mixed. Furthermore, CLIP models trained on different subsets of LAION (i.e., LAION-80M, LAION-400M, and LAION-2B) follow the same trend. The above observations highlight the importance of the choice of training source in determining not only the overall accuracy but also the factor-level behaviors of CLIP models. This suggests that visual factor-level robustness should be considered when designing the training source for CLIP models.

CLIP fine-tuned models perform slightly better than models pre-trained on more data. We compare CLIP fine-tuned models (CLIP-FT) with other models pre-trained on more data and find that CLIP-FT shows improvements in overall accuracy and in robustness on the visual factors of Subcategory, Shape, and Pattern. However, no additional robustness gain is observed on other factors. Additionally, CLIP-FT models outperform zero-shot CLIP on variations such as Pattern and Partial View, indicating their superiority in handling these visual factors. It would be intriguing to explore fine-tuning techniques that maintain or improve the factor-level robustness of zero-shot CLIP.

# 4.2 Texture Bias vs. Shape Bias

CLIP exhibits a shape bias. We conduct experiments using the cue-conflict stimuli dataset [15] to investigate the presence of shape bias in the model's predictions.
Shape bias, in this context, refers to the proportion of accurate predictions made based on object shapes. Figure 2 presents a visualization of the shape bias exhibited by the models, grouped according to their training method (zero-shot, CLIP fine-tuning, pre-training on more data, and standard training) and architecture (transformer versus CNN). Our findings indicate that, among the four training methods, CLIP models are more likely to make predictions based on shape than the other three groups. Furthermore, while transformers are reported to have a stronger shape bias than CNNs [60, 61], we observe that CLIP models using a CNN as the vision encoder also exhibit a strong shape bias.

Model size alone does not explain the shape bias of CLIP. We further observe that larger CLIP models do not necessarily have a higher shape bias than smaller ones. For example, with both trained on LAION-80M, CLIP-ViT/L-14 has a shape bias of 0.54, which is 0.09 lower than that of CLIP-ViT/B-32. This implies that the shape bias of CLIP models cannot be attributed solely to model size. Based on the above observations, we speculate that the shape bias of CLIP may be attributed to its objective, which trains the model to associate text and image pairs.

CLIP models tend towards texture bias after fine-tuning. Our study reveals that the shape bias of CLIP weakens after fine-tuning on ImageNet. Moreover, the fine-tuned CLIP models exhibit a shape bias comparable to that of models pre-trained on larger datasets. This finding is consistent for both transformer and CNN visual encoders. These results illustrate that fine-tuning discards the shape-biased property of zero-shot CLIP, which may affect model robustness [62, 15].

Larger input image resolution during fine-tuning of CLIP results in a stronger bias towards texture.
In Table 1, we observe that input resolution during fine-tuning is an important factor for shape bias: increasing the input resolution during fine-tuning leads to better performance on ImageNet validation but also results in more texture-biased models with lower accuracy on Stylized-ImageNet. We observe this pattern consistently across seven pairs of experiments and two training sources. Given that input resolution is a crucial model dimension [63-65], it would be interesting to study its effects on shape bias beyond classification accuracy when devising scaling strategies.

![](images/e5f2610d040bd42ddd7e44a923f02579e177bd3e06cd678459dea002685c1627.jpg)
Figure 3: OOD sample identification capability of models vs. ID dataset classification accuracy. The OOD detection ability is measured by AUROC $(\uparrow)$ and FPR@95 $(\downarrow)$ . Each point represents a model. We plot the results on iNaturalist, SUN, PLACES, TEXTURE, and ImageNet-O. The blue straight lines are fit with robust linear regression [59]. We observe that training distribution has a greater impact than training dataset quantity on the OOD detection performance of CLIP. Moreover, after additional fine-tuning on ImageNet-12K, CLIP models are generally better at detecting OOD samples than those directly fine-tuned on ImageNet-1K.

# 5 Out-of-Distribution Detection

Zero-shot CLIP allows for a flexible definition of in-distribution (ID) classes without re-training the model; that is, CLIP models can conduct zero-shot OOD detection. Current findings suggest that zero-shot CLIP models are competitive with other state-of-the-art models [12, 66]. Based on this finding, we conduct an extensive analysis to determine whether the purported benefits persist across various training sources, subsets, and network architectures. In the experiments, for zero-shot CLIP models, we utilize maximum concept matching [12] to detect OOD data.
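As a rough sketch (not the authors' implementation), the maximum-concept-matching score from [12] softmaxes the image-to-concept cosine similarities and keeps the maximum probability, while the maximum-softmax-probability baseline does the same on classifier logits; the temperature value below is illustrative:

```python
import math

def _softmax_max(scores):
    """Numerically stable softmax; return the largest resulting probability."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    return max(exps) / sum(exps)

def mcm_score(image_emb, text_embs, tau=0.01):
    """Maximum concept matching for zero-shot CLIP: higher score => more ID-like.
    Assumes L2-normalized embeddings, so the dot product is cosine similarity."""
    sims = [sum(a * b for a, b in zip(image_emb, t)) / tau for t in text_embs]
    return _softmax_max(sims)

def msp_score(logits):
    """Maximum softmax probability, the baseline for classifiers fine-tuned on ID labels."""
    return _softmax_max(logits)
```

In use, a sample is flagged as OOD when its score falls below a threshold chosen on ID data (e.g., the threshold achieving 95% TPR for the FPR@95 metric).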
For models that are trained or fine-tuned on ImageNet-1K, we employ the maximum softmax score [18] for OOD detection.

For CLIP models from the same source, ID accuracy correlates with OOD detection performance. Our study includes CLIP models trained on two sources (WIT and LAION). Given the same training source, our study, conducted across five challenging OOD scenarios, reveals a strong correlation between the ID accuracy of zero-shot CLIP models and their OOD detection performance (measured by AUROC and FPR@95). This observation suggests that the zero-shot classification accuracy of CLIP models on ID data can serve as a reliable indicator of their OOD detection performance. In contrast, this trend is not as strong for standard models and models pre-trained on more data. Additionally, CLIP-FT models fine-tuned on ImageNet-1K do not exhibit such a clear correlation.

Training source influences the trend of CLIP. Upon closer examination of the training distribution, we observe that the correlation trend between ID accuracy and OOD detection performance is largely dependent on the training source. As illustrated in Figure 3, our research shows two distinct trends between CLIP models trained on WIT and those trained on LAION. Moreover, at the same ID accuracy, CLIP models trained on WIT exhibit superior OOD detection performance compared to their counterparts trained on LAION in three OOD scenarios. This further indicates the importance of training source selection for CLIP. When developing dataset curation methods, it is valuable to investigate the influence of training sources on OOD detection performance.

Fine-tuning procedure influences the OOD detection ability of CLIP. First, we point out that fine-tuning enhances the classification performance of CLIP, but this improvement does not necessarily translate to better OOD detection accuracy. Some CLIP-FT models even achieve worse OOD detection performance than zero-shot CLIP models.
Our analysis of CLIP-FT reveals a distinction between two groups of CLIP-FT models, based on their fine-tuning procedures: the first group is fine-tuned solely on ImageNet-1K, while the second group undergoes additional fine-tuning on ImageNet-12K. We observe that this additional fine-tuning has a substantial impact on the model's ability to detect OOD examples. As depicted in Figure 3, despite not improving classification accuracy, CLIP-FT models with additional fine-tuning on ImageNet-12K show better OOD detection performance across all OOD scenarios. As future work, it is valuable to investigate this observation further and explore alternative fine-tuning procedures that yield improved OOD detection performance. Moreover, exploring the impact of fine-tuning datasets other than ImageNet-1K and ImageNet-12K would be another interesting direction.

# 6 Prediction Uncertainty

To better understand the good calibration of zero-shot CLIP models reported by Minderer et al. [14], our research systematically analyzes the calibration behavior of CLIP models under various training conditions. Specifically, we examine the calibration performance of CLIP models trained on different training distributions, with varied training set sizes, and with different architectures. Furthermore, we also investigate the calibration performance of CLIP models after fine-tuning to gain a better understanding of their overall performance.

# 6.1 Zero-Shot CLIP Models Are Not Consistently More Calibrated Than Other Models

Both training data distribution and quantity impact CLIP's calibration. Figure 4 presents the calibration of CLIP models in relation to classification accuracy under distribution shifts. We observe that CLIP models trained on different distributions or quantities are not consistently grouped together. For example, CLIP models trained on WIT and LAION tend to cluster in separate regions.
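For context, the temperature-scaling recalibration behind the Temp-scaled results in Figure 4 (described in Section 6.2) fits a single scalar $T$ on held-out ID logits by minimizing NLL; a toy grid-search sketch, with function names and the grid being our own choices:

```python
import math

def nll(logits_list, labels, T=1.0):
    """Average negative log-likelihood of the true labels under temperature T."""
    total = 0.0
    for logits, y in zip(logits_list, labels):
        scaled = [l / T for l in logits]
        m = max(scaled)
        log_z = m + math.log(sum(math.exp(s - m) for s in scaled))  # log-sum-exp
        total += log_z - scaled[y]
    return total / len(labels)

def fit_temperature(logits_list, labels):
    """Pick T minimizing NLL on a held-out ID calibration split (coarse grid search)."""
    grid = [0.05 * k for k in range(1, 101)]  # T in (0, 5]
    return min(grid, key=lambda T: nll(logits_list, labels, T))
```

An overconfident model (sharp logits but frequent mistakes on the calibration split) receives a fitted $T > 1$, which softens its probabilities; in practice $T$ is usually fit with a continuous optimizer such as L-BFGS, and the same $T$ is then reused on the OOD test sets.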
Moreover, when training CLIP models on different subsets of the LAION dataset, models with similar classification accuracy can exhibit varying levels of calibration performance. It would be interesting to further examine the impact of data curation techniques on CLIP calibration performance.

While CLIP models are generally reported to have superior calibration compared to other models [14], our observations reveal that this finding does not always hold. In particular, we notice that CLIP models trained on the LAION-80M dataset exhibit much lower calibration performance than standard models. The observation of [14] was primarily made on CLIP models trained on WIT. However, when we broaden our perspective to include the alternative training distribution provided by LAION and its various subsets, our observations become mixed. This emphasizes the significance of careful training source design for CLIP. Furthermore, it suggests that when evaluating dataset curation, it is crucial to consider its impact on the calibration performance of CLIP models.

CLIP fine-tuned models exhibit a trade-off between calibration and classification. On each test set in Figure 4, we consistently observe that after fine-tuning, CLIP models tend to have higher classification accuracy but higher calibration error. It is worth noting that additional fine-tuning on ImageNet-12K does not alter this phenomenon, in contrast to its impact on OOD detection. Moreover, other model groups, including those pre-trained on more data, do not exhibit such a trade-off between calibration and classification. We also observe that some fine-tuned CLIP models achieve better calibration than their zero-shot counterparts before post-hoc calibration.

![](images/4e5b8a04a0bf2379b03001b684a02cb5c3507bc6bceac159cf4848808310ba2a.jpg)
Figure 4: Model calibration performance with respect to classification accuracy. We report results on the in-distribution test set, ImageNet-V2, ImageNet-R, and ImageNet-A.
Two metrics are considered: ECE $(\downarrow)$ and NLL $(\downarrow)$ ; we also include calibration performance after calibration with temperature scaling. Each point represents a model. We use colors to represent model groups. For zero-shot CLIP, we additionally use shapes to indicate training distribution and quantity. We observe that CLIP models can be less calibrated than standard models. Training distribution and quantity are the key factors influencing the calibration performance of CLIP models. Temperature scaling reveals a consistent trend among CLIP models, and they tend to lie on a trend distinct from other models.

# 6.2 Temperature Scaling Reveals Well-Calibrated Properties of Zero-Shot CLIP Models

Post-hoc calibration can be adopted to remedy over- or under-confidence. Here, we use temperature scaling [33] to calibrate model predictions. Following the protocol in [67], we divide the validation set of ImageNet into two halves: one for temperature scaling (the ID calibration set), and the other for ID testing. We report results on both ID and OOD test sets.

Calibration performance of CLIP models after temperature scaling (Temp-scaled) correlates with their classification accuracy. In Figure 4, we explore how temperature scaling affects different groups of CLIP models, categorized by the amount and source of their training data. After applying temperature scaling and evaluating with the NLL metric, we observe a consistent pattern among these CLIP groups. Intriguingly, after temperature scaling, for models with similar image classification accuracy, zero-shot CLIP models achieve better calibration performance than other models, including their fine-tuned counterparts. This trend persists across various testing scenarios, encompassing both ID and OOD test sets and both NLL and ECE metrics.

ID calibration of CLIP models transfers to OOD test sets.
While it has been reported by Ovadia et al. [37] that ID calibration often struggles to generalize under distribution shifts, our study reveals a promising phenomenon for CLIP models. Specifically, after calibrating CLIP models on the ID calibration set, we observe improved calibration results on OOD test sets. For example, on ImageNet-A, the calibration errors of CLIP models decrease after temperature scaling, unlike other models, which do not clearly exhibit such improvement. This indicates that CLIP models are relatively easy to calibrate across diverse distributions, highlighting their potential for robust and reliable applications.

![](images/41b15226e6f7e63018c589bafe87e29911c6d30e705c1f9003c80844e8c4aaea.jpg)
Figure 5: Influence of test-time prompts on CLIP robustness to visual factors, OOD detection, and predictive uncertainty. We include five CLIP models trained on WIT. We use different colors to denote model architectures and various shapes to represent the deployed prompt sets. The dashed grey line is fit with robust linear regression [59] to the original CLIP-WIT models using 80 prompts. We see that prompt sets of sizes 1, 5, and 30 decrease the classification performance of CLIP, but may not change the visual factor robustness of CLIP.

# 7 Discussion on Influence of Test-Time Prompts

So far in the experiments, we have utilized the prompts provided by [1]. In this section, we investigate the effect of test-time prompts on the three safety objectives. We additionally examine another three sets of prompts: one of size 1 ("a photo of a {label}"), a set of 5 prompts used by [12], and a set of 30 prompts. We conduct the experiment on five CLIP models trained on WIT: RN50, RN50×64, ViT-B/16, ViT-B/32, and ViT-L/14-336px. Figure 5 shows the performance of CLIP models using different sets of prompts on the three safety objectives.
We show that using fewer prompts (e.g., a single prompt) generally decreases overall classification performance. However, the impact on the three safety-related properties is mixed. First, when evaluating factor-level robustness on Pattern, we find that adopting different prompts does not alter the robustness: models still adhere to the linear trend established by CLIP trained on WIT using 80 prompts. Second, for OOD detection, using 5 prompts yields higher OOD detection performance than 80 prompts on SUN. Also, for calibration, using fewer prompts (e.g., only one prompt) achieves lower calibration error than using all 80 prompts. This raises an important question: how can prompts be tuned to achieve better classification accuracy as well as better calibration and OOD detection performance? It would be interesting to study this when conducting prompt learning [68-70].

# 8 Conclusion and Discussion

Our research contributes to the ongoing discourse surrounding the effectiveness of CLIP models in the context of robustness to visual factors, OOD detection, and the reliability of uncertainty estimation. In pursuit of these objectives, we conduct extensive experiments encompassing three critical tasks and perform comparative analyses between CLIP models and various groups of models. Our observations provide valuable insights into the advantages of CLIP models. First, CLIP models demonstrate superior visual factor-level robustness compared to other ImageNet models. Second, while maintaining comparable accuracy on the in-distribution dataset, CLIP models also exhibit competitive performance in OOD detection across commonly used benchmarks such as iNaturalist and ImageNet-O. Third, CLIP models are relatively easy to calibrate across diverse distributions. Finally, our study highlights the significance of training source design, as it profoundly influences the behavior of CLIP models across all three objectives.
This work leaves open many interesting directions for future research, and we discuss a few. First, this work primarily studies CLIP and its fine-tuned models due to their simplicity and effectiveness. We regard this work as an anchor point and hope the framework of this analysis can generalize to other image-text foundation models, such as ALIGN [2] and BASIC [71]. Second, our study includes two academic training sources, namely WIT and LAION, for CLIP. It is valuable to investigate whether our observations generalize to other training sources. We believe that our study can shed light on, and build up an understanding towards, the design of multi-modal datasets. Lastly, in addition to the three safety-critical tasks, there are other important areas to analyze, such as misclassified sample detection [18]. By providing a more detailed and nuanced understanding of the performance of CLIP models, we hope our observations and insights can inform future developments in the field and help drive progress towards more robust and effective vision-language models.

Broader impacts. While many research endeavors that aim to enhance the performance of machine learning models could be leveraged for negative societal applications, we believe that this paper points towards a positive direction for the development of machine learning methods in broader society. In particular, our study examines the influence of the training set on CLIP performance in robustness, out-of-distribution detection, and predictive uncertainty. A better understanding is beneficial for establishing trustworthy machine learning systems and high-quality multi-modal datasets for training.

Acknowledgement. We thank all anonymous reviewers and ACs for their constructive comments and valuable suggestions in improving this paper.
# References

[1] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021.
[2] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR, 2021.
[3] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet classifiers generalize to ImageNet? In International Conference on Machine Learning, pages 5389-5400. PMLR, 2019.
[4] Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing. Learning robust global representations by penalizing local predictive power. In Advances in Neural Information Processing Systems, 2019.
[5] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8340-8349, 2021.
[6] Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. In Advances in Neural Information Processing Systems, 2019.
[7] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15262-15271, 2021.
[8] Thao Nguyen, Gabriel Ilharco, Mitchell Wortsman, Sewoong Oh, and Ludwig Schmidt.
Quality not quantity: On the interaction between dataset design and robustness of CLIP. In Advances in Neural Information Processing Systems, 2022.
[9] Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023.
[10] Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7959-7971, 2022.
[11] Alex Fang, Gabriel Ilharco, Mitchell Wortsman, Yuhao Wan, Vaishaal Shankar, Achal Dave, and Ludwig Schmidt. Data determines distributional robustness in contrastive language image pre-training (CLIP). In International Conference on Machine Learning, pages 6216-6234. PMLR, 2022.
[12] Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, and Yixuan Li. Delving into out-of-distribution detection with vision-language representations. In Advances in Neural Information Processing Systems, 2022.
[13] Badr Youbi Idrissi, Diane Bouchacourt, Randall Balestriero, Ivan Evtimov, Caner Hazirbas, Nicolas Ballas, Pascal Vincent, Michal Drozdzal, David Lopez-Paz, and Mark Ibrahim. Imagenet-x: Understanding model mistakes with factor of variation annotations. In International Conference on Learning Representations, 2022.
[14] Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. Revisiting the calibration of modern neural networks. In Advances in Neural Information Processing Systems, pages 15682-15694, 2021.
[15] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel.
Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations, 2018. +[16] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations, 2019. +[17] Eric Mintun, Alexander Kirillov, and Saining Xie. On interaction between augmentations and corruptions in natural corruption robustness. In Advances in Neural Information Processing Systems, pages 3571-3583, 2021. +[18] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations, 2016. +[19] Jingkang Yang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. Generalized out-of-distribution detection: A survey. arXiv preprint arXiv:2110.11334, 2021. +[20] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. In Advances in Neural Information Processing Systems, pages 21464-21475, 2020. +[21] Mu Cai and Yixuan Li. Out-of-distribution detection via frequency-regularized generative models. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5521–5530, 2023. +[22] Polina Kirichenko, Pavel Izmailov, and Andrew G Wilson. Why normalizing flows fail to detect out-of-distribution data. In Advances in Neural Information Processing Systems, pages 20578-20589, 2020. +[23] Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don't know? In International Conference on Machine Learning, 2018. +[24] Lawrence Neal, Matthew Olson, Xiaoli Fern, Weng-Keen Wong, and Fuxin Li. Open set learning with counterfactual images. In European Conference on Computer Vision, pages 613–628, 2018. +[25] Poojan Oza and Vishal M Patel. 
C2ae: Class conditioned auto-encoder for open-set recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2307–2316, 2019. +[26] Jie Ren, Peter J Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark Depristo, Joshua Dillon, and Balaji Lakshminarayanan. Likelihood ratios for out-of-distribution detection. In Advances in Neural Information Processing Systems, 2019. +[27] Joan Serrà, David Álvarez, Vicenç Gómez, Olga Slizovskaia, José F Núñez, and Jordi Luque. Input complexity and out-of-distribution detection with likelihood-based generative models. In International Conference on Learning Representations, 2019. +[28] Zhisheng Xiao, Qing Yan, and Yali Amit. Likelihood regret: An out-of-distribution detection score for variational auto-encoder. In Advances in Neural Information Processing Systems, pages 20685–20696, 2020. +[29] Abhijit Bendale and Terrance E Boult. Towards open set deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1563-1572, 2016. + +[30] Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10951-10960, 2020. +[31] Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations, 2017. +[32] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Advances in Neural Information Processing Systems, 2018. +[33] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning, pages 1321-1330. PMLR, 2017. +[34] Khanh Nguyen and Brendan O'Connor. 
Posterior calibration and exploratory analysis for natural language processing models. In Conference on Empirical Methods in Natural Language Processing, 2015. +[35] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, 2017. +[36] Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In International Conference on Machine Learning, pages 2712-2721. PMLR, 2019. +[37] Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems, 2019. +[38] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. +[39] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11976-11986, 2022. +[40] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2020. +[41] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade W Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa R Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. 
LAION-5b: An open large-scale dataset for training next generation image-text models. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022. URL https://openreview.net/forum?id=M3Y74vmSMcY. +[42] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009. +[43] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012-10022, 2021. +[44] Xiaohan Ding, Chunlong Xia, Xiangyu Zhang, Xiaojie Chu, Jungong Han, and Guiguang Ding. Repmlp: Re-parameterizing convolutions into fully-connected layers for image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2022. +[45] Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. Mlp-mixer: An all-mlp architecture for vision. In Advances in Neural Information Processing Systems, pages 24261-24272, 2021. +[46] Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring robustness to natural distribution shifts in image classification. In Advances in Neural Information Processing Systems, pages 18583-18599, 2020. +[47] Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via nonparametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3733-3742, 2018. + +[48] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738, 2020.
[49] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR, 2020.
[50] Ross Wightman. Pytorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
[51] Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. OpenCLIP, July 2021. URL https://doi.org/10.5281/zenodo.5143773.
[52] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classification and detection dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8769-8778, 2018.
[53] Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3485-3492, 2010.
[54] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1452-1464, 2017.
[55] Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3606-3613, 2014.
[56] Vaishaal Shankar, Achal Dave, Rebecca Roelofs, Deva Ramanan, Benjamin Recht, and Ludwig Schmidt. Do image classifiers generalize across time?
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9661-9669, 2021. +[57] Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2015. +[58] John P Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, and Ludwig Schmidt. Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization. In International Conference on Machine Learning, pages 7721-7735, 2021. +[59] Peter J Huber. Robust statistics. In International Encyclopedia of Statistical Science, pages 1248-1251. Springer, 2011. +[60] Muhammad Muzammal Naseer, Kanchana Ranasinghe, Salman H Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Intriguing properties of vision transformers. In Advances in Neural Information Processing Systems, pages 23296-23308, 2021. +[61] Chongzhi Zhang, Mingyuan Zhang, Shanghang Zhang, Daisheng Jin, Qiang Zhou, Zhongang Cai, Haiyu Zhao, Xianglong Liu, and Ziwei Liu. Delving deep into the generalization of vision transformers under distribution shifts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7277-7286, 2022. +[62] Robert Geirhos, Patricia Rubisch, Jonas Rauber, Carlos R Medina Temme, Claudio Michaelis, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Inducing a human-like shape bias leads to emergent human-level distortion robustness in cnns. Journal of Vision, 19(10):209c-209c, 2019. +[63] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pages 6105-6114. PMLR, 2019. +[64] Irwan Bello, William Fedus, Xianzhi Du, Ekin Dogus Cubuk, Aravind Srinivas, Tsung-Yi Lin, Jonathon Shlens, and Barret Zoph. 
Revisiting resnets: Improved training and scaling strategies. In Advances in Neural Information Processing Systems, pages 22614-22627, 2021. +[65] Mingxing Tan and Quoc Le. Efficientnetv2: Smaller models and faster training. In International Conference on Machine Learning, pages 10096-10106. PMLR, 2021. + +[66] Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. Exploring the limits of out-of-distribution detection. In Advances in Neural Information Processing Systems, pages 7068-7081, 2021. +[67] Kartik Gupta, Amir Rahimi, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, and Richard Hartley. Calibration of neural networks using splines. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=eQe8DEWNN2W. +[68] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16816-16825, 2022. +[69] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9):2337-2348, 2022. +[70] Aleksandar Shtedritski, Christian Rupprecht, and Andrea Vedaldi. What does clip know about a red circle? visual prompt engineering for vlms. arXiv preprint arXiv:2304.06712, 2023. +[71] Hieu Pham, Zihang Dai, Golnaz Ghiasi, Hanxiao Liu, Adams Wei Yu, Minh-Thang Luong, Mingxing Tan, and Quoc V Le. Combined scaling for zero-shot transfer learning. CoRR, abs/2111.10050, 2021. 
# A Combinatorial Algorithm for Approximating the Optimal Transport in the Parallel and MPC Settings

Nathaniel Lahn

Radford University

nlahn@radford.edu

Sharath Raghvendra

Virginia Tech

sharathr@vt.edu

Kaiyi Zhang*

Virginia Tech

kaiyiz@vt.edu

# Abstract

Optimal Transport is a
popular distance metric for measuring similarity between distributions. Exact and approximate combinatorial algorithms for computing the optimal transport distance are hard to parallelize. This has motivated the development of numerical solvers (e.g. the Sinkhorn method) that can exploit GPU parallelism and produce approximate solutions.

We introduce the first parallel combinatorial algorithm to find an additive $\varepsilon$-approximation of the OT distance. The parallel complexity of our algorithm is $O(\log(n) / \varepsilon^2)$, where $n$ is the total support size for the input distributions. In Massively Parallel Computation (MPC) frameworks such as Hadoop and MapReduce, our algorithm computes an $\varepsilon$-approximate transport plan in $O(\log(\log(n / \varepsilon)) / \varepsilon^2)$ rounds with $O(n / \varepsilon)$ space per machine; all prior algorithms in the MPC framework take $\Omega(\log n)$ rounds. We also provide a GPU-friendly matrix-based interpretation of our algorithm in which each step is a row or column manipulation of the matrix. Experiments suggest that our combinatorial algorithm is faster than the state-of-the-art approximate solvers on the GPU, especially for higher values of $n$.

# 1 Introduction

Optimal transport (OT) is a useful metric for measuring similarity between distributions and has numerous applications [2, 4, 5, 9, 12, 29], including image retrieval [28], GAN training [23], and interpolation between distributions [7]. Given two distributions $\mu$ and $\nu$, this metric captures the minimum-cost plan for transporting mass from $\mu$ to $\nu$.

More formally, in the optimal transport problem, we are given two discrete distributions $\mu$ and $\nu$ whose supports are the point sets $A$ and $B$, respectively. With each point $a \in A$ (resp. $b \in B$), we associate a probability $\mu_{a}$ (resp. $\nu_{b}$) such that $\sum_{a \in A} \mu_{a} = \sum_{b \in B} \nu_{b} = 1$.
We refer to each point of $A$ as a demand point and each point in $B$ as a supply point. For any edge $(a, b) \in A \times B$, we are given a cost $c(a, b)$; we assume that the costs are scaled so that the largest edge cost is 1. Let $\beta c(a, b)$ be the cost of transporting a supply amount of $\beta$ from $b$ to $a$. A transport plan is a function $\sigma: A \times B \to \mathbb{R}_{\geq 0}$ that assigns a non-negative value to each edge $(a, b) \in A \times B$, indicating the amount of supply transported along that edge. The transport plan $\sigma$ is such that the total supply transported into (resp. out of) any demand (resp. supply) node $a \in A$ (resp. $b \in B$) is bounded by the demand (resp. supply) at $a$ (resp. $b$). The cost of the transport plan, denoted by $c(\sigma)$, is given by $\sum_{(a, b) \in A \times B} \sigma(a, b)c(a, b)$. In the optimal transport problem, we are interested in finding a minimum-cost transport plan, denoted by $\sigma^{*}$, that transports all of the supply. We also define an $\varepsilon$-approximate transport plan to be any transport plan $\sigma$ that transports all of the supply and has cost $c(\sigma) \leq c(\sigma^{*}) + \varepsilon$.

*Following the convention from Theoretical Computer Science, all authors are ordered alphabetically. The code used for the experiments reported in this paper is available at: https://github.com/kaiyiz/Combinatorial-Parallel-OT

The special case where $A$ and $B$ each contain $n$ points and where every point in $A$ (resp. $B$) has a demand of $1/n$ (resp. supply of $1/n$) is called the assignment problem. In this special case, there is an optimal transport plan with a special structure: it consists of $n$ vertex-disjoint edges $(a, b)$, each with $\sigma(a, b) = 1/n$, and these edges form a perfect matching. Let the cost of any matching $M$, denoted by $c(M)$, be the total cost of all of its edges, i.e.,

$$
c(M) = \sum_{(a, b) \in M} c(a, b).
$$

Given a perfect matching $M$, the cost of the corresponding transport plan is simply $(1/n)\sum_{(a,b)\in M}c(a,b) = (1/n)c(M)$. For simplicity of exposition, in the context of the assignment problem, we uniformly scale all demands and supplies from $1/n$ to 1. This does not change the optimal transport plan; it does, however, increase the cost of the optimal transport plan to $c(M)$, an increase by a factor of $n$. Thus, for the assignment problem, finding the optimal transport plan is equivalent to finding a minimum-cost perfect matching $M^{*}$.

Similarly, for an $\varepsilon > 0$, after scaling the demands and supplies from $1/n$ to $1$, an $\varepsilon$-approximate transport plan corresponds to a perfect matching $M$ with cost $c(M) \leq c(M^{*}) + \varepsilon n$. We refer to such a perfect matching as an $\varepsilon$-approximate matching. Thus, for the assignment problem, finding an $\varepsilon$-approximate transport plan corresponds to finding an $\varepsilon$-approximate matching.

Related Work: For discrete distributions, the optimal transport problem can be formulated as a minimum-cost flow problem and solved using any LP solver. The best-known exact and approximate solvers for optimal transport, as well as for the assignment problem, are graph-based combinatorial algorithms [25, 19, 17, 27]. These solvers, however, are known to be difficult to parallelize. For instance, the best $\varepsilon$-approximate OT solver, in terms of sequential running time, was given by Lahn et al. [19]. This combinatorial algorithm runs in $O(n^{2} / \varepsilon + n / \varepsilon^{2})$ time and is a non-trivial adaptation of the classical combinatorial exact algorithm by Gabow and Tarjan (the GT-algorithm) for the transportation problem [13]. The Lahn et al. algorithm runs for no more than $\lfloor 2 / \varepsilon \rfloor + 1$ iterations, where each iteration executes a Dijkstra shortest-path search to find and augment along a set of "augmenting paths".
This algorithm is the state-of-the-art in the sequential setting; see [22] for a discussion of the various algorithms. Unfortunately, however, the flow augmentations have to be done in a sequential manner, making this algorithm hard to parallelize.

Motivated by the need for efficient, scalable solutions, machine learning researchers have designed highly parallelizable approximate solvers that generate an $\varepsilon$-approximate transport plan in $\tilde{O}(n^2/\varepsilon^{O(1)})$ sequential time and $\tilde{O}(1/\varepsilon^{O(1)})$ parallel time. Perhaps the most successful among these is an entropy-regularized version of optimal transport, which can be solved using the Sinkhorn-Knopp method [11, 8] and produces an $\varepsilon$-approximation to the optimal transport in $\tilde{O}(n^2/\varepsilon^2)$ sequential time and $\tilde{O}(1/\varepsilon^2)$ parallel time [10]. The simplicity of this algorithm has led to efficient GPU implementations. From a theoretical standpoint, Jambulapati et al. [16] designed a dual extrapolation algorithm using an area-convex mapping [31] to achieve an improved parallel complexity of $O((\log(n)\log\log n)/\varepsilon)$. However, as noted by Lin et al. [22], despite its sound theoretical guarantees, the lack of simplicity and the difficulty of implementation make the algorithm of Jambulapati et al. less competitive, and the Sinkhorn algorithm remains the state-of-the-art for approximating optimal transport on GPUs.

Despite being the state-of-the-art in sequential settings, combinatorial algorithms have remained difficult to parallelize, and all known exact or approximate combinatorial algorithms run in only slightly sub-linear parallel time [14]. In this paper, we design the first parallel combinatorial algorithm that takes only $O(\log (n) / \varepsilon^2)$ parallel time and finds an $\varepsilon$-approximate transport plan.
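To make the Sinkhorn-Knopp iteration mentioned above concrete, here is a minimal NumPy sketch of its matrix-scaling structure (the function name, parameter defaults, and stopping rule are our own illustrative choices, not taken from [11] or [10]):

```python
import numpy as np

def sinkhorn(mu, nu, C, reg=0.1, n_iters=500):
    """Entropy-regularized OT via Sinkhorn-Knopp matrix scaling.

    mu, nu: marginal distributions (each sums to 1); C: cost matrix.
    Each iteration is one column rescaling and one row rescaling,
    which is why the method maps so naturally onto GPU parallelism.
    """
    K = np.exp(-C / reg)               # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)             # fix column marginals
        u = mu / (K @ v)               # fix row marginals
    P = u[:, None] * K * v[None, :]    # approximate transport plan
    return P, float((P * C).sum())
```

As the regularization `reg` tends to 0 the plan approaches an optimal one, at the price of slower convergence and numerical underflow in `K`; practical implementations typically stabilize the iteration in the log domain.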
Our algorithm also improves upon existing algorithms in massively parallel computation frameworks such as Hadoop, MapReduce, Dryad, and Spark. In the Massively Parallel Computation (MPC) model, we are given a set of machines, where each machine has a bounded amount of memory. For this model, the base assumption is that communication between machines is the performance bottleneck, and the goal is to minimize the number of synchronized communication rounds, where each round consists of a period of local computation on each machine, followed by a period of message communication between machines. It is well known that any standard parallel algorithm that takes $O(f(n))$ time can be directly translated to an algorithm under the MPC model that runs in $O(f(n))$ communication rounds. However, algorithms that are more specialized for the MPC model can achieve drastically faster computation times, often requiring a sub-logarithmic number of rounds. For example, it has long been known how to compute a maximal matching in $O(\log n)$ parallel time [15], but only recently was a breakthrough made that shows how to compute a maximal matching in $O(\log \log n)$ rounds under the MPC model [3]. Maximal matching is a substantially simpler problem than both the assignment and OT problems. For these two problems, known parallel $\varepsilon$-approximation algorithms immediately yield an $\tilde{O} (\log (n) / \varepsilon)$ round algorithm for the MPC model [16]. To our knowledge, no specialized MPC algorithms are known for either problem. Thus, we provide the first sub-logarithmic round $\varepsilon$-approximation algorithm for both the assignment and OT problems. We obtain this bound by leveraging the recent breakthrough MPC algorithm for maximal matching of [3].
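For reference, the maximal-matching primitive discussed above is trivial to compute sequentially by a single greedy scan; the point of [15, 3] is achieving the same guarantee in few parallel rounds. A sketch of the sequential version (illustrative only, not the parallel or MPC algorithm):

```python
def greedy_maximal_matching(edges):
    """Greedy maximal matching on a bipartite edge list.

    Scans the edges once and keeps any edge whose two endpoints are
    both still free. The result is maximal (every remaining edge
    touches a matched vertex) but generally not maximum.
    """
    matched_a, matched_b, matching = set(), set(), []
    for a, b in edges:
        if a not in matched_a and b not in matched_b:
            matching.append((a, b))
            matched_a.add(a)
            matched_b.add(b)
    return matching
```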
Our Results: In this paper, we present a very simple combinatorial algorithm to compute an $\varepsilon$-approximate transport plan in $O(n^{2} / \varepsilon^{2})$ sequential time and $O(\log (n) / \varepsilon^{2})$ parallel time. For the special case of the assignment problem, the sequential execution time of our algorithm improves to $O(n^{2} / \varepsilon)$. We also provide a GPU implementation of our algorithm that outperforms the implementation of the Sinkhorn algorithm provided by the Python Optimal Transport library [11].

Our algorithm also extends to the well-known Massively Parallel Computation (MPC) frameworks such as MapReduce, Hadoop, Dryad, and Spark. In the MPC model, our algorithm computes an $\varepsilon$-approximate transport plan in $O(\log (\log n) / \varepsilon^2)$ rounds with $O(n)$ memory per machine. Our algorithm is based on the popular push-relabel framework [14] for computing minimum-cost flows.

Theorem 1.1. Given an $\varepsilon > 0$, there is an algorithm that computes an $\varepsilon$-approximate matching in $O(n^{2} / \varepsilon)$ time. Furthermore, one can execute this algorithm in expected $O(\log (n) / \varepsilon^2)$ parallel time or in expected $O(\log (\log n) / \varepsilon^2)$ rounds, with $O(n)$ memory per machine, in the MPC model.

Extension to the Optimal Transport problem: For any $\varepsilon > 0$, Lahn et al. [19] showed that computing an $\varepsilon$-approximation of the optimal transport between two discrete distributions containing $n$ points in their support reduces to an instance of an unbalanced assignment problem with $n / \varepsilon$ points. We apply this reduction and slightly adapt our algorithm from Theorem 1.1 to obtain our result for the optimal transport problem (Theorem 1.2). In this paper, we present the details of our algorithm for the assignment problem.
Details of the adaptation of our algorithm to the optimal transport problem via the reduction of [19] are presented in Section B of the appendix.

Theorem 1.2. Given an $\varepsilon > 0$, there is an algorithm that computes an $\varepsilon$-approximate transport plan in $O(n^{2} / \varepsilon^{2})$ time. Furthermore, one can execute this algorithm in expected $O(\log (n) / \varepsilon^2)$ parallel time or in $O(\log (\log (n / \varepsilon)) / \varepsilon^2)$ rounds with $O(n / \varepsilon)$ memory per machine in the MPC model.

From a theoretical standpoint, we provide the first parallel combinatorial algorithm for approximating the optimal transport, with an expected parallel execution time of $O(\log (n) / \varepsilon^2)$. The sequential execution time of $O(n^{2} / \varepsilon)$ of our algorithm for the assignment problem matches the current state-of-the-art for the problem [16, 19]. We also provide the first sub-logarithmic round algorithm that approximates the optimal transport plan in the MPC model.

From a practical standpoint, for both the assignment problem and the OT problem, we provide an implementation that exploits GPU parallelism. Experiments suggest that both of our GPU implementations outperform the GPU implementation of the state-of-the-art Sinkhorn algorithm provided by the Python Optimal Transport library [11] in terms of running time, while achieving the same level of accuracy.

Our Approach: Our algorithmic approach is based on the popular push-relabel framework for computing network flows. For the assignment problem, our algorithm maintains a matching $M$ and a set of dual weights $y(\cdot)$ on the vertices of $A \cup B$. The algorithm runs for $O(1 / \varepsilon^2)$ iterations, and each iteration executes three steps. First, it greedily computes a maximal matching $M'$. In the second step, it uses $M'$ to update the matching $M$ (the push step). Finally, it updates the dual weights (the relabel step).
Our proof of correctness is based on the standard dual feasibility conditions used to compute minimum-cost maximum-cardinality matchings, with some modifications made to better accommodate our additive-approximate setting. Our main technical difference from standard push-relabel techniques is the novel running time analysis for the additive-approximate setting. In particular, we show that the number of iterations required by our algorithm is just $O(1 / \varepsilon^2)$ . Within each iteration, the push and relabel steps take only $O(n)$ sequential time and $O(1)$ parallel time. The only non-trivial step, therefore, is the computation of a maximal matching, which can be done in $O(n^{2})$ sequential time and $O(\log n)$ parallel time [15]. Maximal matchings can also be computed in $O(\log \log n)$ rounds in the massively parallel computation (MPC) model [3]. As a result, our algorithm can also be executed in $O(\log (\log n) / \varepsilon^2)$ rounds in the MPC model for the assignment problem. We extend our algorithm to also approximate the optimal transport plan by using the reduction of Lahn et al. [19] (see Section B of the appendix for details).

Organization: In Section 2.1 we present the definitions required to describe our algorithm. In Section 2.2 we present our algorithm for the assignment problem. For simplicity of exposition, we present an algorithm that computes a $3\varepsilon$ -approximation of the optimal solution to the assignment problem. To obtain an $\varepsilon$ -approximation, one can simply choose the error factor in the algorithm to be $\varepsilon/3$ . In Section 3 we prove the sequential complexity of our algorithm for the assignment problem. In Section 4 we analyze the complexity of our algorithm in the parallel and MPC settings and also describe a GPU-friendly implementation of our matching algorithm. Finally, we present the experimental results in Section 5.
In Section B of the appendix, we extend our algorithm to the optimal transport problem. All missing proofs can be found in Section A of the appendix.

# 2 Algorithm

In this section, given an input to the assignment problem and a value $0 < \varepsilon < 1$ , we present an algorithm that computes a $3\varepsilon$ -approximate matching.

# 2.1 Preliminaries

We begin by introducing the terminology required to understand our algorithm for the assignment problem. For any matching $M$ , we say that a vertex $v \in A \cup B$ is free if $v$ is not matched in $M$ and matched otherwise. Our algorithm critically uses the notion of a maximal matching, which we introduce next. For any bipartite graph that is not necessarily complete, a matching $M$ is maximal if and only if at least one endpoint of every edge in the graph is matched in $M$ . Thus, if a matching is not maximal, there is at least one edge between two free vertices. One can, therefore, compute a maximal matching in a greedy fashion by iteratively picking such an edge and adding it to the matching.

For every edge $(u,v)\in A\times B$ , we transform its cost so that it becomes an integer multiple of $\varepsilon$ as follows:

$$
\bar {c} (u, v) = \varepsilon \left\lfloor c (u, v) / \varepsilon \right\rfloor. \tag {1}
$$

The rounding of edge costs may introduce an error that is bounded by $\varepsilon$ for each edge and by at most $\varepsilon n$ for any matching. Our algorithm assigns a dual weight $y(v)$ to every $v\in A\cup B$ such that a set of relaxed dual feasibility conditions is satisfied.
A matching $M$ along with dual weights $y(\cdot)$ is $\varepsilon$ -feasible if, for every edge $(a,b)\in A\times B$ ,

$$
y (a) + y (b) \leq \bar {c} (a, b) + \varepsilon \quad \text {if } (a, b) \notin M, \tag {2}
$$

$$
y (a) + y (b) = \bar {c} (a, b) \quad \text {if } (a, b) \in M. \tag {3}
$$

In Lemma 3.1, we show that any $\varepsilon$ -feasible matching produced by our algorithm has a cost within an additive error of $\varepsilon n$ of the optimal solution with respect to the costs $\overline{c}(\cdot, \cdot)$ . For any edge $(u, v)$ , we define its slack $s(u, v)$ to be 0 if $(u, v) \in M$ . Otherwise, if $(u, v) \notin M$ , we set its slack to be $s(u, v) = \overline{c}(u, v) + \varepsilon - y(u) - y(v)$ . We say that $(u, v)$ is admissible if the slack on the edge is 0.

We observe that any matching $M$ whose cardinality is at least $(1 - \varepsilon)n$ can be converted into a perfect matching simply by arbitrarily matching the remaining $\varepsilon n$ free vertices. The cost of any edge is at most 1, and so this increases the cost of the matching $M$ by at most $\varepsilon n$ . In addition to this, the rounding of costs from $c(\cdot, \cdot)$ to $\bar{c}(\cdot, \cdot)$ also introduces an increase in cost of $\varepsilon n$ . Finally, the $\varepsilon$ -feasibility conditions introduce an additional additive error of $\varepsilon n$ , for a total error of $3\varepsilon n$ , as desired. Thus, in the rest of this section, we present an algorithm that computes an $\varepsilon$ -feasible matching of cardinality at least $(1 - \varepsilon)n$ , which has a cost no more than $\varepsilon n$ above the optimal matching's cost with respect to $\bar{c}(\cdot, \cdot)$ .

# 2.2 Algorithm Details

Initially, we set the dual weight of every vertex $b \in B$ to $\varepsilon$ and that of every vertex $a \in A$ to 0. We initialize $M$ to $\emptyset$ . Our initial choice of $M$ and the dual weights satisfies (2) and (3).
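As a quick sanity check on these definitions, the rounding rule (1) and the feasibility conditions (2) and (3) can be verified mechanically. The sketch below is our own illustration (NumPy, with rows indexing $A$ and columns indexing $B$ ); the function names are assumptions for this example, not part of the paper's implementation.

```python
import numpy as np

def round_costs(C, eps):
    # Rule (1): round each edge cost down to an integer multiple of eps.
    return eps * np.floor(C / eps)

def is_eps_feasible(C_bar, M, y_a, y_b, eps, tol=1e-9):
    """Check conditions (2) and (3) for every edge (a, b).

    C_bar: n x n rounded cost matrix; M: n x n 0/1 matching matrix;
    y_a, y_b: dual weights of the vertices of A (rows) and B (columns).
    """
    lhs = y_a[:, None] + y_b[None, :]                # y(a) + y(b) for every edge
    cond2 = (M == 1) | (lhs <= C_bar + eps + tol)    # non-matching edges, (2)
    cond3 = (M == 0) | (np.abs(lhs - C_bar) <= tol)  # matching edges, (3)
    return bool(np.all(cond2) and np.all(cond3))
```

With $M = \emptyset$ , $y(b) = \varepsilon$ for all $b \in B$ , and $y(a) = 0$ for all $a \in A$ , such a checker confirms that the initialization described above is $\varepsilon$ -feasible.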
Our algorithm executes iterations, which we will call phases. Within each phase, the algorithm constructs the set $B'$ , which consists of all free vertices of $B$ . If $|B'| \leq \varepsilon n$ , then $M$ is an $\varepsilon$ -feasible matching of cardinality at least $(1 - \varepsilon)n$ , and the algorithm will arbitrarily match the remaining free vertices and return the resulting matching. Otherwise, the algorithm computes the subset $E' \subseteq E$ of admissible edges with at least one end point in $B'$ . Let $A' = \{a \mid a \in A$ and $(a,b) \in E'\}$ , i.e., the set of points of $A$ that participate in at least one edge in $E'$ . For each phase, the algorithm executes the following steps: + +(I) Greedy step: Computing a maximal (i.e., greedy) matching $M'$ in the graph $G'(A' \cup B', E')$ . +(II) Matching Update: Let $A''$ be the set of points of $A'$ that are matched in both $M$ and $M'$ and let $M''$ be the edges of $M$ that are incident on some vertex of $A''$ . The algorithm adds the edges of $M'$ to $M$ and deletes the edges of $M''$ from $M$ . + +(III) Dual Update: + +a. For every edge $(a,b)\in M^{\prime}$ , the algorithm sets $y(a)\gets y(a) - \varepsilon$ , and +b. For every vertex $b \in B'$ that is free with respect to $M'$ , the algorithm sets $y(b) \gets y(b) + \varepsilon$ . + +In each phase, the matching update step will add edges of $M'$ to $M$ and remove edges of $M''$ from $M$ . By construction, the updated set $M$ is a matching. Furthermore, every vertex of $A$ that was matched prior to the update continues to be matched after the update. + +Lemma 2.1. The new set $M$ of edges obtained after Step (II) is a matching. Furthermore, any vertex of $A$ that was matched prior to Step (II) will continue to be matched after the execution of Step (II). + +The dual update step increases or reduces dual weights by $\varepsilon$ . Therefore, the dual weights always remain an integer multiple of $\varepsilon$ . 
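To make the phase structure concrete, here is a small sequential sketch of one phase (Steps (I)-(III)) in Python. The data layout (a dict mapping each matched $b$ to its partner $a$ , lists of dual weights) and all names are our own illustrative choices, not the paper's implementation; the greedy step is a plain sequential maximal matching on the admissible edges.

```python
def run_phase(C_bar, M, y_a, y_b, eps):
    """One phase. C_bar: n x n rounded costs (list of lists); M: dict b -> a;
    y_a, y_b: dual weights of A and B. Mutates M, y_a, y_b in place."""
    n = len(C_bar)
    B_free = [b for b in range(n) if b not in M]      # the set B'
    # Step (I): greedy maximal matching M' on admissible (zero-slack) edges
    M_prime, matched_A = {}, set()
    for b in B_free:
        for a in range(n):
            admissible = abs(C_bar[a][b] + eps - y_a[a] - y_b[b]) < 1e-12
            if a not in matched_A and admissible:
                M_prime[b] = a
                matched_A.add(a)
                break
    # Step (II): add M' to M, dropping old edges of M whose A-vertex is rematched
    for b, a in M_prime.items():
        for b_old in [x for x, v in M.items() if v == a]:
            del M[b_old]
        M[b] = a
        y_a[a] -= eps                                 # Step (III)a
    for b in B_free:
        if b not in M_prime:
            y_b[b] += eps                             # Step (III)b
    return M, y_a, y_b
```

On a toy instance with all costs 0 and initial duals $y(b) = \varepsilon$ , $y(a) = 0$ , every edge is admissible, and a single phase matches every vertex while keeping condition (3) satisfied.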
The algorithm maintains the following invariants:

(I1) The dual weight of every vertex in $B$ (resp. $A$ ) is non-negative (resp. non-positive). Furthermore, every free vertex of $A$ has a dual weight of 0.
(I2) The matching $M$ and the set of dual weights $y(\cdot)$ are $\varepsilon$ -feasible.

Proofs of these invariants can be found in Section A.1 of the appendix.

# 3 Analysis

Next, in Section 3.1, we use invariants (I1) and (I2) to show that the algorithm produces a matching with the desired accuracy. In Section 3.2, we use the invariants to bound the sequential and parallel execution times of our algorithm.

# 3.1 Accuracy

As stated in Section 2.1, the rounding of costs from $c(\cdot, \cdot)$ to $\overline{c}(\cdot, \cdot)$ introduces an error of $\varepsilon n$ . Furthermore, after obtaining a matching of size at least $(1 - \varepsilon)n$ , the cost of arbitrarily matching the last $\varepsilon n$ vertices is no more than $\varepsilon n$ . From the following lemma, we can conclude that the total error in the matching computed by our algorithm is no more than $3\varepsilon n$ . The proof of this lemma can be found in Section A.2 of the appendix.

Lemma 3.1. The $\varepsilon$ -feasible matching of size at least $(1 - \varepsilon)n$ that is produced by the main routine of our algorithm is within an additive error of $\varepsilon n$ of the optimal matching with respect to the rounded costs $\overline{c}(\cdot, \cdot)$ .

# 3.2 Efficiency

Suppose there are $t$ phases executed by the algorithm. We use $n_i$ to denote the size of $B'$ in phase $i$ . By the termination condition, each phase is executed only if $B'$ has more than $\varepsilon n$ vertices, i.e., $n_i > \varepsilon n$ . First, in Lemma 3.2 (Appendix A.3), we show that the magnitude of the dual weight of any vertex cannot exceed $(1 + 2\varepsilon)$ . This means the total dual weight magnitude over all vertices is upper bounded by $n(1 + 2\varepsilon)$ .
Furthermore, in Lemma 3.3 (Appendix A.4) we show that, during phase $i$ , the total dual weight magnitude increases by at least $\varepsilon n_i$ . From this, we can conclude that

$$
\sum_ {i = 1} ^ {t} n _ {i} \leq n (1 + 2 \varepsilon) / \varepsilon = O (n / \varepsilon). \tag {4}
$$

Note that, since each $n_i > \varepsilon n$ , we immediately get $t \varepsilon n \leq n(1 + 2\varepsilon) / \varepsilon$ , or $t \leq (1 + 2\varepsilon) / \varepsilon^2 = O(1 / \varepsilon^2)$ . To bound the total sequential execution time, we show, in Lemma 3.4 (Appendix A.5), that each phase can be executed in $O(n \times n_i)$ time. Combining this with equation (4) gives an overall sequential execution time of $O(n(\sum_{i=1}^{t} n_i)) = O(n^2 / \varepsilon)$ .

Lemma 3.2. For any vertex $v \in A \cup B$ , the magnitude of its dual weight cannot exceed $1 + 2\varepsilon$ , i.e., $|y(v)| \leq (1 + 2\varepsilon)$ .

Lemma 3.3. The sum of the magnitudes of the dual weights increases by at least $\varepsilon n_{i}$ in each iteration.

Lemma 3.4. The execution time of each phase is $O(n \times n_i)$ .

# 3.3 Analysis for the Unbalanced Case

In this section, we describe how the analysis of our matching algorithm can be extended to work for the unbalanced case, where $|A| \neq |B|$ . This analysis is critical for proving the correctness of the optimal transport version of our algorithm. Without loss of generality, assume $|B| \leq |A| = n$ . The overall description of the algorithm remains the same, except that the main routine of our algorithm produces an $\varepsilon$ -feasible matching of size at least $(1 - \varepsilon)|B|$ . The asymptotic running time of both the parallel and sequential algorithms remains unchanged. In the following lemma, we bound the additive error of our algorithm for the unbalanced case; the argument is very similar to that of Lemma 3.1.

Lemma 3.5.
Given an unbalanced input to the assignment problem with $|B| \leq |A|$ , the $\varepsilon$ -feasible matching of cardinality at least $(1 - \varepsilon)|B|$ that is returned by our algorithm is within an additive error of $\varepsilon |B|$ of the optimal matching with respect to the cost function $\bar{c}(\cdot, \cdot)$ .

# 4 Parallel Algorithm

In this section, we describe how to parallelize the matching algorithm of Section 2.2, leading to the result of Theorem 1.1. Recall that each phase of this algorithm has three steps: (I) Greedy step, (II) Matching update, and (III) Dual update. Steps (II) and (III) are easily parallelizable in $O(1)$ time. However, step (I) is nontrivial to parallelize. Fortunately, Israeli and Itai gave an $O(\log n)$ randomized parallel algorithm for computing a maximal matching on an arbitrary graph [15]. Therefore, we can complete step (I) by applying their algorithm as a black box. However, their algorithm is very generic, applying even to non-bipartite graphs. In Section 4.1, we use the Israeli-Itai algorithm to bound the parallel complexity of our algorithm. In Section 4.2, we further parallelize the phases of our algorithm, leading to a simplified variation of the Israeli-Itai algorithm that is better suited for a practical implementation. Finally, in Section 4.3, we provide a matrix-based interpretation of our simplified algorithm, allowing the algorithm to be easily implemented on GPUs.

# 4.1 Analysis of our Algorithm for Parallel and MPC Models

The Israeli-Itai algorithm is designed to work on an arbitrary graph $G(V, E)$ , which may not be bipartite. It computes a maximal matching on $G$ in $O(\log n)$ iterations. By directly using their $O(\log n)$ algorithm for step (I) of our algorithm, we obtain an $O(\log n)$ parallel running time for each phase. Since our algorithm executes $O(1 / \varepsilon^2)$ phases, we obtain a worst-case theoretical bound of $O(\log n / \varepsilon^2)$ for our algorithm.
We would also like to note that specialized algorithms for maximal matching exist for the MPC model as well. For example, it is possible to compute a maximal matching in the MPC model using just $O(\log (\log \Delta))$ rounds and $O(n)$ space per machine, where $\Delta$ is the maximum degree of the graph [3]. As a result, we are able to achieve an algorithm in the MPC model that requires only $O(\log (\log n) / \varepsilon^2)$ rounds with $O(n)$ space per machine.

# 4.2 Simplifying the Parallel Implementation of our Algorithm

Instead of using the Israeli-Itai algorithm (which works for any arbitrary graph) as a black box, we use a simpler adaptation of their algorithm for bipartite graphs. The Israeli-Itai algorithm executes $O(\log n)$ iterations, where each iteration executes the following steps, using $O(1)$ parallel time:

(i) Each vertex $u \in V$ selects an incident edge $(u, v)$ at random, and directs it from $u$ to $v$ , yielding a directed subgraph $R \subseteq E$ .
(ii) Each vertex $u \in V$ selects, at random, one incoming edge from $R$ . Let $S$ be the set of edges chosen by this step, with all directions removed. The graph $S$ has a maximum vertex degree of 2.
(iii) Each vertex selects an edge of $S$ at random. Any edge that is selected by both of its endpoints is added to $M$ , and any vertex matched by this step is removed for the next phase.

The Israeli-Itai algorithm is designed to work for any graph, which is not necessarily bipartite. In that setting, there is no way to partition the vertices into sets $A$ and $B$ , and so it is necessary to consider edges as having two directions; a vertex $u$ could 'propose' an outgoing edge $(u, v)$ (see step (i)) and also 'receive' multiple proposals as incoming edges (see step (ii)). In the non-bipartite case, it is necessary for every vertex to both send and receive proposals; all vertices must be handled using a symmetric process.
This results in a subgraph $S$ in which each vertex could have a degree of 2, and an additional step is required to eliminate some of these edges in order to form a matching (see step (iii)).

However, in our situation, we are working solely with bipartite graphs. Here, the vertices are divided into two sets $A$ and $B$ , and, as a result, we do not need to process each vertex in a symmetric fashion. Instead, we can allow one side to make proposals and the other side to receive them. As a result, for each iteration of the maximal matching algorithm, we can execute the following steps, which correspond to steps (i) and (ii) of the Israeli-Itai algorithm.

(a) Each vertex of $B$ selects, at random, an incident edge, yielding a subgraph $S$ .
(b) Each vertex of $A$ with degree at least one in $S$ arbitrarily selects an edge from $S$ and adds it to the matching.

Note that, after step (b) in our approach, each vertex has at most one incident edge selected. This alleviates the need for step (iii), since steps (a) and (b) alone immediately result in a matching.

In addition to our simplified approach to each iteration of the Israeli-Itai algorithm, we make a second optimization: instead of waiting for the Israeli-Itai algorithm to complete, which could require $O(\log n)$ iterations, our implementation immediately updates the matching and dual weights after each iteration before moving to the next iteration of the Israeli-Itai algorithm. While this increases the number of phases, each phase becomes very simple, taking only $O(1)$ time, resulting in practical improvements. We believe that this additional source of parallelization could lead to asymptotic improvements in the parallel complexity of our algorithm. However, the proofs of Israeli and Itai do not readily extend to this modified algorithm. Obtaining a tight bound on the parallel complexity of our modified algorithm is an important open question.
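A single proposal round of this bipartite simplification (steps (a) and (b)) can be sketched in a few lines. This is our own illustrative sketch, not the paper's code: `adj_B` maps each vertex of $B$ to its list of neighbours in the admissible graph, and the dictionary overwrite plays the role of step (b), keeping one proposal per vertex of $A$ .

```python
import random

def proposal_round(adj_B):
    """One round: (a) each b proposes a random incident edge; (b) each
    proposed-to a keeps exactly one proposal. Returns a matching as a
    list of (a, b) pairs."""
    accepted = {}                      # a -> the b whose proposal a accepts
    for b, nbrs in adj_B.items():
        if nbrs:                       # step (a): random proposal from b
            accepted[random.choice(nbrs)] = b
    # step (b): the dict keeps one b per a, so the result is a matching
    return [(a, b) for a, b in accepted.items()]
```

Repeating such rounds on the surviving free vertices (or, as described above, interleaving them with the matching and dual updates) replaces the three symmetric steps of the general algorithm.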
# 4.3 A GPU-Friendly Implementation

Thus far, we have described our algorithm using graph-theoretic notation. In practice, however, it is important for our algorithm to have an efficient GPU-based implementation. In this section, we provide a matrix-based implementation of our algorithm, using the simplified maximal matching approach described in Section 4.2.

In Algorithm 1, we provide pseudocode for our simplified algorithm. The algorithm resembles the one described in Section 2.2, except for the differences described in Section 4.2. It assumes that the costs have already been rounded so that each is an integer multiple of $\varepsilon$ . The matching returned by the algorithm has cardinality at least $(1 - \varepsilon)n$ and a cost at most $\varepsilon n$ above the optimal cost. The algorithm, as written, does not maintain dual weights explicitly, but if dual weights are required as part of the output, then, as discussed below, the algorithm can be modified to keep track of them. Note that all operations in this pseudocode are based on relatively simple matrix operations, and can be implemented easily on a GPU.
Algorithm 1 Approximate Bipartite Matching
1: Input: $W \in \mathbb{R}_{\geq 0}^{n \times n}, \varepsilon \in \mathbb{R}_{>0}$
2: $M \gets \mathbf{0}_{n \times n}, S \gets W$
3: while $M$ has more than $\varepsilon n$ columns with all 0's do
4: $P \gets \mathbf{0}_{n \times n}$
5: for all columns $b$ with all zero entries in $M$ do
6: Randomly select a row $a$ such that $S_{a,b} = 0$
7: $P_{a,b} \gets 1$
8: for all $a, b \in [1..n]$ do
9: $M_{a,b} \gets 1$ if $M_{a,b} = 1$ and row $a$ of $P$ has all 0's
10: otherwise $M_{a,b} \gets P_{a,b}$
11: for all rows $a$ in $P$ with at least one 1 do
12: Add $\varepsilon$ to every entry in row $a$ of $S$
13: for all columns $b$ in $M$ with all 0 entries do
14: $\rho \gets$ the minimum entry in column $b$ of $S$
15: Decrease every entry in column $b$ of $S$ by $\rho$
16: return $M$

The algorithm takes as input an $n \times n$ cost matrix $W$ and a value for the error parameter $\varepsilon$ , and returns an $n \times n$ bit matrix that describes the matching, where an edge $(i,j)$ is in the matching if and only if the value at row $i$ and column $j$ is a 1. We use $\mathbf{1}_n$ to represent a row vector containing all 1's, and $\mathbf{1}_{m \times n}$ to represent an $m$ by $n$ matrix of all 1's. We use similar notation for vectors and matrices of all 0's. When considering an $n \times n$ matrix, we follow the convention that each row corresponds to a vertex of $A$ , and each column corresponds to a vertex of $B$ .

Next, we explain each part of the algorithm in more detail. Line 2 initializes the slack matrix $S$ to reflect the initial slacks of all edges, which are initially non-matching. Throughout the algorithm, the matrix $S$ will reflect the slack of each edge 'as if it were a non-matching edge', i.e., $S_{a,b} = W_{a,b} + \varepsilon - y(a) - y(b)$ , regardless of the matching status of the edge. This makes the slacks easier to track without the need to explicitly maintain dual weights.
The main loop of the algorithm, beginning at line 3, specifies the stopping condition of the algorithm: the algorithm terminates once at most $\varepsilon n$ free vertices of $B$ remain. Each iteration of this main loop is a phase. Lines 4-7 compute a matrix $P$ , which represents a set of edges that will be added to $M$ during the current phase. This edge set will be a matching on the admissible edges that are incident on free vertices of $B$ . This corresponds to Step (I) of the algorithm from Section 2.2, except that $P$ is not necessarily maximal. Lines 8-10 update the matching $M$ by adding the edges of $P$ to $M$ , and removing any preexisting edges of $M$ whose endpoints are rematched in $P$ . This corresponds to Step (II) of the algorithm from Section 2.2. Finally, lines 11-15 update the slacks to reflect the dual weight adjustments, corresponding to Step (III) of the algorithm from Section 2.2. However, instead of tracking the dual weights explicitly, we simply update the slacks directly. Note that, when updating the slacks on edges incident on free vertices of $B$ , we include a slight change to Step (III): instead of increasing the dual weight of free vertices of $B$ by exactly $\varepsilon$ , we increase it as much as possible. For some free vertices of $B$ , the increase will be 0 (since $P$ is not necessarily maximal), but it is also possible, in practice, for the increase to be larger than $\varepsilon$ .
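For reference, the structure of Algorithm 1 can be rendered in a few lines of NumPy (a PyTorch version would replace `np` with `torch` for GPU execution). This is our own sketch, not the paper's implementation: it assumes the entries of `W` are already integer multiples of `eps`, and it resolves collisions (two free columns proposing the same row in one phase) by letting each row keep at most one proposal, in the spirit of step (b) of Section 4.2.

```python
import numpy as np

def approx_matching(W, eps, rng=None):
    """Sketch of Algorithm 1. Rows index A, columns index B; returns a
    boolean n x n matrix M describing the matching."""
    rng = rng or np.random.default_rng()
    n = W.shape[0]
    M = np.zeros((n, n), dtype=bool)
    S = W.astype(float)    # slack "as if non-matching": W + eps - y(a) - y(b)
    while (~M.any(axis=0)).sum() > eps * n:
        P = np.zeros((n, n), dtype=bool)
        for b in np.flatnonzero(~M.any(axis=0)):      # free columns of B
            # admissible rows not yet proposed-to in this phase
            rows = np.flatnonzero(np.isclose(S[:, b], 0.0) & ~P.any(axis=1))
            if rows.size:
                P[rng.choice(rows), b] = True
        M[P.any(axis=1), :] = False                   # drop rematched rows' old edges
        M |= P
        S[P.any(axis=1), :] += eps                    # dual update for rematched rows
        free_b = np.flatnonzero(~M.any(axis=0))       # raise duals of free columns
        S[:, free_b] -= S[:, free_b].min(axis=0, keepdims=True)
    return M
```

A production version would vectorize the proposal loop as well; the per-phase slack updates are already pure matrix operations.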
Additionally, we compare our algorithm to a CUDA-based GPU implementation and present the results of this comparison in Section C.3 of the appendix.

Our experiments were run with an Intel Xeon E5-2680v4 2.4GHz chip and an Nvidia Tesla V100 GPU. We ran our algorithms using both real and synthetic data, covering four different settings: (1) an assignment problem between randomly generated synthetic point sets, (2) an OT problem between randomly generated point sets, each having randomly assigned demands and supplies, (3) an assignment problem formed from two sets of MNIST images, and (4) an OT problem between two text word embeddings. For each setup, we generated input data and computed the assignment or OT cost using our algorithm with different values of $\varepsilon$ . Then, we determined the appropriate regularization parameter of the Sinkhorn algorithm, ensuring that the Sinkhorn distance is close to, but no lower than, the cost of the solution generated by our algorithm. We recorded the running time and the number of parallel rounds for both Sinkhorn and our algorithm. We also repeated each experiment with the process reversed, by fixing the regularization parameter of Sinkhorn and searching for the $\varepsilon$ value for our algorithm that guarantees that our cost is similar to, but no more than, the Sinkhorn distance; the results for these reversed experimental setups can be found in the technical appendix. Note that we also recorded the execution time of solving for the exact solution using POT's EMD function, which runs on the CPU. We only present results for values of $\varepsilon$ that are large enough that either our algorithm or the Sinkhorn algorithm runs faster than the exact algorithm. We also record the additive error, relative to the optimal solution, of both Sinkhorn and our algorithm in Section C.1 of the appendix. Additionally, we conducted experiments comparing the sequential performance of our algorithm and Sinkhorn on the CPU, which can be found in Section C.4 of the appendix.
Synthetic Data: For synthetic data generation, for both the assignment and OT experiments, we randomly sampled the locations of two groups of $n = 10,000$ vertices, $A$ and $B$ , in a 2-dimensional unit square. For the assignment problem, the demand or supply of every vertex is $1/n$ . For the OT problem, the capacity of each vertex is initially chosen uniformly at random from the interval [0, 1]. Then, the capacities are normalized such that the total supply and the total demand are both equal to 1. For any pair of points $(a,b) \in A \times B$ , the cost $c(a,b)$ was set as the squared Euclidean distance between them. For each value of $\varepsilon$ , we executed 10 runs. For each combination of $\varepsilon$ and algorithm choice, we averaged both the running times and the number of parallel rounds over all 10 runs and recorded the results. The running times and parallel rounds can be seen in Figure 1(a)(b) and Figure 2(a)(b), respectively.

![](images/0f39c98ef525afa0f8a5316570c446f1fda5ecf11394391cc200d9e5c2db08f5.jpg)

![](images/631b7eafa8e564d9b57875f59cd15a7e29130d2a10643c01b3fcbe1d4c08463f.jpg)

![](images/14c788f932af1ee2aa6bacd0ed3655220c724646f937039d9bb99cafa93fe33c.jpg)

![](images/a247b2f2ebefc609b9cdc37a33ad5f8bc6118f0bb80f3e847b36cac5e6f796bf.jpg)

![](images/1d973245bcb077840adf014699191daaf9b8916f8161e828384d73959b1d7da9.jpg)

![](images/654a73ea8ac09ce2f84962207ef7c5d619cb0fa911e5d0b21386be2f972ccd36.jpg)
Figure 1: Plots of running times on GPU for the synthetic inputs (a)(b) and the real data inputs (c)(d)(e)(f).

MNIST: Next, we ran a similar experiment using real-world image data. We generated our inputs using the MNIST dataset of hand-written digit images [21]. Each image is a $28 \times 28$ pixel gray-scale image. The sets $A$ and $B$ each consist of $n = 10,000$ images from the MNIST dataset, selected at random.
The cost $c(a,b)$ between two images $a \in A$ and $b \in B$ is computed as follows: let $a(i,j)$ (resp. $b(i,j)$ ) be the value of the pixel in row $i$ and column $j$ of image $a$ (resp. $b$ ). First, the two images are normalized so that the sum of all pixel values is equal to 1 for each image, i.e., $\sum_{i,j \in [1,28]} a(i,j) = 1$ and $\sum_{i,j \in [1,28]} b(i,j) = 1$ . Then, the cost $c(a,b)$ is given by the $L_1$ distance between the resulting normalized images: $c(a,b) = \sum_{i,j \in [1,28]} |a(i,j) - b(i,j)|$ . Note that an upper bound on the largest cost is 2. For each algorithm and for each value of $\varepsilon$ , we averaged both the running times and the number of parallel rounds over 10 runs. The results for these experiments can be found in Figure 1(c) and Figure 2(c).

![](images/9a62b399302f2bcd02a0005fa3271c0c7d4187e8ead58b93bf38324669310af0.jpg)

![](images/2754eb04ba9de450438ce71c1d2825171b1fa62c21986ab1ae29bc9468ab3c61.jpg)

![](images/d94b423bca5e90ae96c3f7254158778ad1951c2a9dbf47cdd5b2ce63f248d345.jpg)

![](images/7a41883028196acc099d65c9f6dda7a8b7f71276f32eee0298905486ba50673a.jpg)

![](images/696b7a944708bd06039ede95d74b01a780d9d58dc95c47ba407fe25381239c24.jpg)

![](images/8b16be03838fe1c553d2295008df8454e2780789f04f94c7cf16fc2cd6957a12.jpg)
Figure 2: Plots of parallel rounds on GPU for the synthetic inputs (a)(b) and the real data inputs (c)(d)(e)(f).

NLP: In our final experiment, we considered OT with natural language data. We calculate document distances based on OT, following the procedure of previous work [18]. Each OT instance was generated by selecting two discrete sections of text with fixed lengths. Each unique word in the first (resp. second) section of text corresponds to a vertex of $A$ (resp. $B$ ) in the OT problem. To obtain the supply and demand for each vertex, we tokenized each text section with NLTK [6] and counted the number of appearances of each unique token.
Then the counts were normalized so that the total supply and the total demand were each equal to 1. Next, to generate the costs between vertices, we represent each unique token using a 100-dimensional GloVe word embedding [26]. The cost of any edge $(a, b) \in A \times B$ is then given by the Euclidean distance between the corresponding points in this embedding.

In the last three plots of Figure 1 and Figure 2, we show the results of applying this experimental setup, using sections of text from The Count of Monte Cristo, 20News, and IMDB. For each dataset, five different OT instances are created using different sections of the text, and the results are averaged over all 5 runs. These experiments can be found in Figure 1(d)(e)(f) and Figure 2(d)(e)(f).

In all our experiments, our new parallel combinatorial algorithm almost always runs significantly faster than the Sinkhorn algorithm for the OT problem, often with significantly fewer parallel rounds. Unlike POT's highly optimized implementation of the Sinkhorn method, the implementation of our algorithm is new and may benefit significantly from further optimizations.

# 6 Conclusion

In this work, we provided a fast, highly parallelizable combinatorial algorithm for computing an $\varepsilon$ -approximate solution to the assignment problem and the OT problem. We also provided a practical implementation of a slight variation of our algorithm, which outperforms the Sinkhorn algorithm, in terms of running time, in our experimental comparison. In light of this work, we would like to propose the following open question: is it possible to improve the $O(\log (n) / \varepsilon^2)$ parallel running time of our algorithm, possibly by introducing an appropriate modification? In particular, our simplified practical algorithm presented in Section 4.2 seems to execute fewer parallel rounds than our worst-case theoretical analysis might suggest.
Can the simplifications used in our practical implementation be used to improve our worst-case theoretical running times? + +# Acknowledgement + +We would like to acknowledge Advanced Research Computing (ARC) at Virginia Tech, which provided us with the computational resources used to run the experiments. Research presented in this paper was funded by NSF CCF-1909171 and NSF CCF-2223871. We would like to thank the anonymous reviewers for their useful feedback. + +# References + +[1] Jason Altschuler, Jonathan Weed, and Philippe Rigollet. Near-linear time approximation algorithms for optimal transport via sinkhorn iteration. In NIPS, pages 1961-1971, 2017. +[2] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv:1701.07875v3 [stat.ML], 2017. +[3] Soheil Behnezhad, Mohammad Taghi Hajiaghayi, and David G Harris. Exponentially faster massively parallel maximal matching. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 1637-1649. IEEE, 2019. +[4] Jean-David Benamou, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyre. Iterative bregman projections for regularized transportation problems. SIAM Journal on Scientific Computing, 37(2):A1111-A1138, 2015. +[5] Jérémie Bigot, Raul Gouet, Thierry Klein, Alfredo López, et al. Geodesic PCA in the wasserstein space by convex PCA. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, volume 53, pages 1-26. Institut Henri Poincaré, 2017. +[6] Steven Bird, Ewan Klein, and Edward Loper. Natural language processing with Python: analyzing text with the natural language toolkit. "O'Reilly Media, Inc.", 2009. +[7] Nicolas Bonneel, Michiel Van De Panne, Sylvain Paris, and Wolfgang Heidrich. Displacement interpolation using lagrangian mass transport. In Proceedings of the 2011 SIGGRAPH Asia Conference, pages 1-12, 2011. +[8] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In NIPS, pages 2292-2300, 2013. 
+[9] Marco Cuturi and Arnaud Doucet. Fast computation of wasserstein barycenters. In International Conference on Machine Learning, pages 685-693, 2014. +[10] Pavel Dvurechensky, Alexander Gasnikov, and Alexey Kroshnin. Computational optimal transport: Complexity by accelerated gradient descent is better than by sinkhorn's algorithm. In International Conference on Machine Learning, pages 1367-1376. PMLR, 2018. +[11] Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, and Titouan Vayer. Pot: Python optimal transport. Journal of Machine Learning Research, 22(78):1-8, 2021. +[12] Rémi Flamary, Marco Cuturi, Nicolas Courty, and Alain Rakotomamonjy. Wasserstein discriminant analysis. Machine Learning, 107(12):1923-1945, 2018. +[13] Harold N Gabow and Robert E Tarjan. Faster scaling algorithms for network problems. SIAM Journal on Computing, 18(5):1013-1036, 1989. +[14] Andrew V Goldberg, Serge A Plotkin, and Pravin M Vaidya. Sublinear-time parallel algorithms for matching and related problems. Journal of Algorithms, 14(2):180-213, 1993. +[15] Amos Israeli and Alon Itai. A fast and simple randomized parallel algorithm for maximal matching. Information Processing Letters, 22(2):77-80, 1986. + +[16] Arun Jambulapati, Aaron Sidford, and Kevin Tian. A direct tilde $\tilde{O}(1 / \varepsilon)$ iteration parallel algorithm for optimal transport. Advances in Neural Information Processing Systems, 32, 2019. +[17] Harold Kuhn. Variants of the hungarian method for assignment problems. Naval Research Logistics, 3(4):253-258, 1956. +[18] Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. From word embeddings to document distances. 
In International conference on machine learning, pages 957-966. PMLR, 2015. +[19] Nathaniel Lahn, Deepika Mulchandani, and Sharath Raghvendra. A graph theoretic additive approximation of optimal transport. In Advances in Neural Information Processing Systems 32, pages 13813-13823, 2019. +[20] Nathaniel Lahn and Sharath Raghvendra. A weighted approach to the maximum cardinality bipartite matching problem with applications in geometric settings. Journal of Computational Geometry, 11(2), 2021. Special Issue of Selected Papers from SoCG 2019. +[21] Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998. License: Creative Commons Attribution-Share Alike 3.0. +[22] Tianyi Lin, Nhat Ho, and Michael I Jordan. On the efficiency of entropic regularized algorithms for optimal transport. Journal of Machine Learning Research, 23(137):1-42, 2022. +[23] Huidong Liu, GU Xianfeng, and Dimitris Samaras. A two-step computation of the exact gan wasserstein distance. In International Conference on Machine Learning, pages 3159-3168. PMLR, 2018. +[24] Vien V Mai, Jacob Lindback, and Mikael Johansson. A fast and accurate splitting method for optimal transport: Analysis and implementation. arXiv preprint arXiv:2110.11738, 2021. +[25] James B Orlin. A polynomial time primal network simplex algorithm for minimum cost flows. Mathematical Programming, 78(2):109-129, 1997. +[26] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, 2014. +[27] Abhijeet Phatak, Sharath Raghvendra, Chittaranjan Tripathy, and Kaiyi Zhang. Computing all optimal partial transports. In The Eleventh International Conference on Learning Representations, 2022. +[28] Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. The earth mover's distance as a metric for image retrieval. International journal of computer vision, 40(2):99-121, 2000. 
+[29] Roman Sandler and Michael Lindenbaum. Nonnegative matrix factorization with earth mover's distance metric for image analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8):1590-1602, 2011.
+[30] R. Sharathkumar and P. K. Agarwal. Algorithms for transportation problem in geometric settings. In Proceedings of the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms, pages 306-317, 2012.
+[31] Jonah Sherman. Area-convexity, $\ell_\infty$ regularization, and undirected multicommodity flow. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 452-460, 2017.

# A Competitive
Algorithm for Agnostic Active Learning

# Eric Price

Department of Computer Science

University of Texas at Austin

ecprice@cs.utexas.edu

# Yihan Zhou

Department of Computer Science

University of Texas at Austin

joeyzhou@cs.utexas.edu

# Abstract

For some hypothesis classes and input distributions, active agnostic learning needs exponentially fewer samples than passive learning; for other classes and distributions, it offers little to no improvement. The most popular algorithms for agnostic active learning express their performance in terms of a parameter called the disagreement coefficient, but it is known that these algorithms are inefficient on some inputs.

We take a different approach to agnostic active learning, getting an algorithm that is competitive with the optimal algorithm for any binary hypothesis class $H$ and distribution $\mathcal{D}_X$ over $\mathcal{X}$ . In particular, if any algorithm can use $m^*$ queries to get $O(\eta)$ error, then our algorithm uses $O(m^* \log |H|)$ queries to get $O(\eta)$ error. Our algorithm lies in the vein of the splitting-based approach of Dasgupta [2004], which gets a similar result for the realizable $(\eta = 0)$ setting.

We also show that it is NP-hard to do better than our algorithm's $O(\log |H|)$ overhead in general.

# 1 Introduction

Active learning is motivated by settings where unlabeled data is cheap but labeling it is expensive. By carefully choosing which points to label, one can often achieve significant reductions in label complexity [Cohn et al., 1994]. A canonical example with exponential improvement is one-dimensional threshold functions $h_{\tau}(x) \coloneqq 1_{x \geq \tau}$ : in the noiseless setting, an active learner can use binary search to find an $\varepsilon$ -approximate solution in $O\left(\log \frac{1}{\varepsilon}\right)$ queries, while a passive learner needs $\Theta\left(\frac{1}{\varepsilon}\right)$ samples [Cohn et al., 1994, Dasgupta, 2005, Nowak, 2011].
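The exponential gap for 1d thresholds can be illustrated with a minimal sketch (the function names and the noiseless labeling oracle are our illustration, not the paper's): binary search over $[0, 1]$ locates the threshold to within $\varepsilon$ using $O(\log \frac{1}{\varepsilon})$ label queries.

```python
import math

def active_learn_threshold(label, eps):
    """Noiseless binary search for a threshold tau in [0, 1].

    `label(x)` returns 1 if x >= tau and 0 otherwise.  Returns an
    estimate within eps of tau using O(log(1/eps)) queries.
    """
    lo, hi = 0.0, 1.0
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if label(mid):      # mid is at or above the threshold
            hi = mid
        else:               # mid is strictly below the threshold
            lo = mid
        queries += 1
    return (lo + hi) / 2, queries

tau = 0.3141
estimate, used = active_learn_threshold(lambda x: 1 if x >= tau else 0, 1e-4)
assert abs(estimate - tau) <= 1e-4
assert used <= math.ceil(math.log2(1e4))   # about 14 queries
```

A passive learner would need on the order of $1/\varepsilon$ random labeled samples to pin the threshold down to the same accuracy.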
+ +In this paper we are concerned with agnostic binary classification. We are given a hypothesis class $H$ of binary hypotheses $h: \mathcal{X} \to \{0,1\}$ such that some $h^* \in H$ has $\mathrm{err}(h^*) \leq \eta$ , where the error + +$$ +\operatorname {e r r} (h) := \Pr_ {(x, y) \sim \mathcal {D}} [ h (x) \neq y ] +$$ + +is measured with respect to an unknown distribution $\mathcal{D}$ over $\mathcal{X} \times \{0,1\}$ . In our active setting, we also know the marginal distribution $\mathcal{D}_X$ of $x$ , and can query any point $x$ of our choosing to receive a sample $y \sim (Y \mid X = x)$ for $(X,Y) \sim \mathcal{D}$ . The goal is to output some $\widehat{h}$ with $\mathrm{err}(\widehat{h}) \leq \eta + \varepsilon$ , using as few queries as possible. + +The first interesting results for agnostic active learning were shown by Balcan et al. [2006], who gave an algorithm called Agnostic Active $(\mathrm{A}^2)$ that gets logarithmic dependence on $\varepsilon$ in some natural settings: it needs $\widetilde{O}\left(\log \frac{1}{\varepsilon}\right)$ samples for the 1d linear threshold setting (binary search), as long as $\varepsilon > 16\eta$ , and $\widetilde{O}\left(d^{2}\log \frac{1}{\varepsilon}\right)$ samples for $d$ -dimensional linear thresholds when $\mathcal{D}_X$ is the uniform sphere and $\varepsilon > \sqrt{d}\eta$ . This stands in contrast to the polynomial dependence on $\varepsilon$ necessary in the passive setting. The bound's requirement that $\varepsilon \gtrsim \eta$ is quite natural given a lower bound of $\Omega\left(d\frac{\eta^2}{\varepsilon^2}\right)$ + +due to [Käärääinen, 2006, Beygelzimer et al., 2009], where $d$ is the VC dimension. Subsequent works have given new algorithms [Dasgupta et al., 2007, Beygelzimer et al., 2010] and new analyses [Hanneke, 2007a] to get bounds for more general problems, parameterized by the "disagreement coefficient" of the problem. 
But while these can give better bounds in specific cases, they do not give a good competitive ratio to the optimum algorithm: see (Hanneke [2014], Section 8.2.5) for a realizable example where $O\left(\log \frac{1}{\varepsilon}\right)$ queries are possible, but disagreement-coefficient based bounds lead to $\Omega \left(\frac{1}{\varepsilon}\right)$ queries. + +By contrast, in the realizable, identifiable setting $(\eta = \varepsilon = 0)$ , a simple greedy algorithm is competitive with the optimal algorithm. In particular, Dasgupta [2004] shows that if any algorithm can identify the true hypothesis in $m$ queries, then the greedy algorithm that repeatedly queries the point that splits the most hypotheses will identify the true hypothesis in $O(m\log |H|)$ queries. This extra factor of $\log |H|$ is computationally necessary: as we will show in Theorem 1.2, avoiding it is NP-hard in general. This approach can extend [Dasgupta, 2005] to the PAC setting (so $\varepsilon > 0$ , but still $\eta = 0$ ), showing that if any algorithm gets error $\varepsilon$ in $m^*$ queries, then this algorithm gets error $8\varepsilon$ in roughly $\widetilde{O}(m^* \cdot \log |H|)$ queries (but see the discussion after Theorem 8.2 of Hanneke [2014], which points out that one of the logarithmic factors is in an uncontrolled parameter $\tau$ , and states that "Resolving the issue of this extra factor of $\log \frac{1}{\tau}$ remains an important open problem in the theory of active learning"). + +The natural question is: can we find an agnostic active learning algorithm that is competitive with the optimal one in the agnostic setting? + +Our Results. Our main result is just such a competitive bound. 
We say an active agnostic learning algorithm $\mathcal{A}$ solves an instance $(H,\mathcal{D}_X,\eta ,\varepsilon ,\delta)$ with $m$ queries if, for every distribution $\mathcal{D}$ with marginal $\mathcal{D}_X$ for which some $h^*\in H$ has $\mathrm{err}(h^{*})\leq \eta$ , with probability $1 - \delta$ , $\mathcal{A}$ uses at most $m$ queries and outputs $\widehat{h}\in H$ with $\mathrm{err}\left(\widehat{h}\right)\leq \eta +\varepsilon$ . Let $m^{*}(H,\mathcal{D}_{X},\eta ,\varepsilon ,\delta)$ be the optimal number of queries for this problem, i.e., the smallest $m$ for which some algorithm $\mathcal{A}$ solves $(H,\mathcal{D}_X,\eta ,\varepsilon ,\delta)$ .

Define $N(H, \mathcal{D}_X, \alpha)$ to be the size of the smallest $\alpha$ -cover of $H$ , i.e., the smallest set $S \subseteq H$ such that for every $h \in H$ there exists $h' \in S$ with $\operatorname{Pr}_{x \sim \mathcal{D}_X}[h(x) \neq h'(x)] \leq \alpha$ . When the context is clear, we drop the parameters and simply write $N$ . Of course, $N$ is at most $|H|$ .

Theorem 1.1 (Competitive Bound). There exist constants $c_{1}$ , $c_{2}$ , and $c_{3}$ such that for any instance $(H, \mathcal{D}_X, \eta, \varepsilon, \delta)$ with $\varepsilon \geq c_1\eta$ , Algorithm 1 solves the instance in polynomial time with sample complexity

$$
m (H, \mathcal{D}_{X}, \eta , \varepsilon , \delta) \lesssim \left(m^{*} \left(H, \mathcal{D}_{X}, c_{2} \eta , c_{3} \varepsilon , \frac{99}{100}\right) + \log \frac{1}{\delta}\right) \cdot \log \frac{N(H, \mathcal{D}_{X}, \eta)}{\delta}.
$$

Even the case of $\eta = 0$ is interesting, given the discussion in [Hanneke, 2014] of the gap in [Dasgupta, 2005]'s bound, but the main contribution is the ability to handle the agnostic setting of $\eta >0$ . The requirement that $\varepsilon \geq O(\eta)$ is in line with prior work [Balcan et al., 2006, Dasgupta, 2005].
Up to constants in $\eta$ and $\varepsilon$ , Theorem 1.1 shows that our algorithm is within a $\log N\leq \log |H|$ factor of the optimal query complexity.

We show that it is NP-hard to avoid this $\log N$ factor, even in the realizable $(\eta = \varepsilon = \delta = 0)$ case:

Theorem 1.2 (Lower Bound). It is NP-hard to find, for every agnostic active learning instance, a query strategy within a $c \log |H|$ factor of the optimal sample complexity, for some constant $c > 0$ .

This is a relatively simple reduction from the hardness of approximating SETCOVER [Dinur and Steurer, 2014]. The lower bound instance has $\eta = \varepsilon = \delta = 0$ , although these can be relaxed to being small polynomials (e.g., $\varepsilon = \eta = \frac{1}{3|X|}$ and $\delta = \frac{1}{3|H|}$ ).

Extension. We give an improved bound for our algorithm in the case of noisy binary search (i.e., $H$ consists of 1d threshold functions). When $\eta = \Theta(\varepsilon)$ , we have $N(H, \mathcal{D}_X, \varepsilon) = \Theta\left(\frac{1}{\varepsilon}\right)$ and $m^{*}(\eta, \varepsilon, 0.99) = O\left(\log \frac{1}{\varepsilon}\right)$ . Thus Theorem 1.1 immediately gives a bound of $O\left(\log^2 \frac{1}{\varepsilon \delta}\right)$ , which is nontrivial but not ideal. (For $\eta \ll \varepsilon$ , the same bound holds since the problem is strictly easier when $\eta$ is smaller.) However, the bound in Theorem 1.1 is quite loose in this setting, and we can instead give a bound of

$$
O \left(\log \frac{1}{\varepsilon \delta} \log \frac{\log \frac{1}{\varepsilon}}{\delta}\right)
$$

for the same algorithm, Algorithm 1. This matches the bound given by disagreement-coefficient based algorithms for constant $\delta$ . The proof of this improved dependence comes from bounding a new parameter measuring the complexity of an $H, \mathcal{D}_x$ pair; this parameter is always at least $\Omega \left( \frac{1}{m^*} \right)$ but may be much larger (and is constant for 1d threshold functions).
See Theorem 2.3 for details.

# 1.1 Related Work

Active learning is a widely studied topic, taking many forms beyond the directly related work on agnostic active learning discussed above [Settles, 2009]. Our algorithm can be viewed as similar to "uncertainty sampling" [Lewis, 1995, Lewis and Catlett, 1994], a popular empirical approach to active learning, though we need some modifications to tolerate adversarial noise.

One problem related to the one studied in this paper is noisy binary search, which corresponds to active learning of 1d thresholds. This has been extensively studied in the setting of i.i.d. noise [Burnashev and Zigangirov, 1974, Ben-Or and Hassidim, 2008, Dereniowski et al., 2021] as well as monotonic queries [Karp and Kleinberg, 2007]. Some work in this vein has extended beyond binary search to (essentially) active binary classification [Nowak, 2008, 2011]. These algorithms are all fairly similar to ours, in that they do multiplicative weights/Bayesian updates, but they query the single maximally informative point. This is fine in the i.i.d. noise setting, but in an agnostic setting the adversary can corrupt that query. For this reason, our algorithm needs to find a set of high-information points to query.

Another related problem is decision tree learning. The realizable, noiseless case $\eta = \varepsilon = 0$ of our problem can be reduced to learning a binary decision tree of minimal depth. Hegedús [1995] studied this problem and gave essentially the same upper and lower bounds as Dasgupta [2005]. Kosaraju et al. [2002] studied a split tree problem, which is a generalization of binary decision tree learning, and also gave similar bounds. Azad et al. [2022] is a monograph focusing on decision tree learning, in which many variations are studied, including learning with noise. However, this line of work usually allows different forms of queries, so its results are not directly comparable with results in the active learning literature.
+ +For much more work on the agnostic active binary classification problem, see Hanneke [2014] and references therein. Many of these papers give bounds in terms of the disagreement coefficient, but sometimes in terms of other parameters. For example, Katz-Samuels et al. [2021] has a query bound that is always competitive with the disagreement coefficient-based methods, and sometimes much better; still, it is not competitive with the optimum in all cases. + +In terms of the lower bound, it is shown in Laurent and Rivest [1976] that the problem is NP-complete, in the realizable and noiseless setting. To the best of our knowledge, our Theorem 1.2 showing hardness of approximation to within a $O(\log |H|)$ factor is new. + +Minimax sample complexity bounds. Hanneke and Yang [2015] and Hanneke [2007b] have also given "minimax" sample complexity bounds for their algorithms, also getting a sample complexity within $O(\log |H|)$ of optimal. However, these results are optimal with respect to the sample complexity for the worst-case distribution over $y$ and $x$ . But the unlabeled data $x$ is given as input. So one should hope for a bound with respect to optimal for the actual $x$ and only worst-case over $y$ ; this is our bound. + +We give the following example to illustrate that our bound, and indeed our algorithm, can be much better. + +Example 1.3. Define a hypothesis class of $N$ hypotheses $h_1, \dots, h_N$ , and $\log N + N$ data points $x_1, \dots, x_{\log N + N}$ . For each hypothesis $h_j$ , the labels of the first $N$ points express $j$ in unary and the labels of the last $\log N$ points express $j$ in binary. We set $\eta = \varepsilon = 0$ and consider the realizable case. + +In the above example, the binary region is far more informative than the unary region, but disagreement coefficient-based algorithms just note that every point has disagreement. Our algorithm will query the binary encoding region and take $O(\log N)$ queries. 
Disagreement coefficient based algorithms, including those in Hanneke and Yang [2015] and Hanneke [2007b], will rely on essentially uniform sampling for the first $\Omega(N / \log N)$ queries. These algorithms are "minimax" over $x$ , in the sense that if you didn't see any $x$ from the binary region, you would need almost as many samples as they use. But you do see $x$ from the binary region, so the algorithm should make use of it to get exponential improvement. + +Future Work. Our upper bound assumes full knowledge of $\mathcal{D}_X$ and the ability to query arbitrary points $x$ . Often in active learning, the algorithm receives a large but not infinite set of unlabeled sample points $x$ , and can only query the labels of those points. How well our results adapt to this setting we leave as an open question. + +Similarly, our bound is polynomial in the number of hypotheses and the domain size. This is hard to avoid in full generality—if you don't evaluate most hypotheses on most data points, you might be missing the most informative points—but perhaps it can be avoided in structured examples. + +# 2 Algorithm Overview + +Our algorithm is based on a Bayesian/multiplicative weights type approach to the problem, and is along the lines of the splitting-based approach of Dasgupta [2004]. + +We maintain a set of weights $w(h)$ for each $h \in H$ , starting at 1; these induce a distribution $\lambda(h) := \frac{w(h)}{\sum_{h} w(h)}$ which we can think of as our posterior over the "true" $h^*$ . + +Realizable setting. As initial intuition, consider the realizable case of $\eta = \varepsilon = 0$ where we want to find the true $h^*$ . If $h^*$ really were drawn from our prior $\lambda$ , and we query a point $x$ , we will see a 1 with probability $\mathbb{E}_{h \sim \lambda} h(x)$ . 
Then the most informative point to query is the one we are least confident in, i.e., the point $x^*$ maximizing + +$$ +r (x) := \min \left\{\underset {h \sim \lambda} {\mathbb {E}} [ h (x) ], 1 - \underset {h \sim \lambda} {\mathbb {E}} [ h (x) ] \right\}. +$$ + +Suppose an algorithm queries $x_{1},\ldots ,x_{m}$ and receives the majority label under $h\sim \lambda$ each time. Then the fraction of $h\sim \lambda$ that agree with all the queries is at least $1 - \sum_{i = 1}^{m}r(x_{i})\geq 1 - mr(x^{*})$ . This suggests that, if $r(x^{*})\ll \frac{1}{m}$ , it will be hard to uniquely identify $h^*$ . It is not hard to formalize this, showing that: if no single hypothesis has $75\%$ probability under $\lambda$ , and any algorithm exists with sample complexity $m$ and $90\%$ success probability at finding $h^*$ , we must have $r(x^{*})\geq \frac{1}{10m}$ . + +This immediately gives an algorithm for the $\eta = \varepsilon = 0$ setting: query the point $x$ maximizing $r(x)$ , set $w(h) = 0$ for all hypotheses $h$ that disagree, and repeat. As long as at least two hypotheses remain, the maximum probability will be $50\% < 90\%$ and each iteration will remove an $\Omega \left(\frac{1}{m}\right)$ fraction of the remaining hypotheses; thus after $O(m \log H)$ rounds, only $h^*$ will remain. This is the basis for Dasgupta [2004]. + +Handling noise: initial attempt. There are two obvious problems with the above algorithm in the agnostic setting, where a (possibly adversarial) $\eta$ fraction of locations $x$ will not match $h^*$ . First, a single error will cause the algorithm to forever reject the true hypothesis; and second, the algorithm makes deterministic queries, which means adversarial noise could be placed precisely on the locations queried to make the algorithm learn nothing. 
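For concreteness, the noiseless greedy loop described above (query a maximizer of $r(x)$ under the uniform posterior over surviving hypotheses, then discard every hypothesis that disagrees) can be sketched as follows; the finite-domain representation of hypotheses as label tuples and the function names are our illustration:

```python
def split_score(hypotheses, x):
    """r(x) under the uniform posterior over the surviving hypotheses."""
    p = sum(h[x] for h in hypotheses) / len(hypotheses)
    return min(p, 1 - p)

def greedy_identify(hypotheses, domain, oracle):
    """Repeatedly query the point that splits the surviving set most evenly,
    then discard every hypothesis that disagrees with the observed label."""
    alive = list(hypotheses)
    while len(alive) > 1:
        x = max(domain, key=lambda x: split_score(alive, x))
        if split_score(alive, x) == 0:   # no query distinguishes the survivors
            break
        y = oracle(x)
        alive = [h for h in alive if h[x] == y]
    return alive[0]

# Hypotheses: 1d thresholds on a domain of 8 points.
domain = range(8)
H = [tuple(1 if x >= t else 0 for x in domain) for t in range(9)]
h_star = H[5]
assert greedy_identify(H, domain, lambda x: h_star[x]) == h_star
```

On threshold classes this loop reproduces binary search; in general it identifies $h^*$ in $O(m \log |H|)$ queries whenever some algorithm can do so in $m$.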
To fix the first problem, we can adjust the algorithm to perform multiplicative weights: if in round $i$ we query a point $x_{i}$ and see $y_{i}$ , we set

$$
w_{i + 1}(h) = \left\{ \begin{array}{ll} w_{i}(h) & \text{if } h(x_{i}) = y_{i} \\ e^{-\alpha} w_{i}(h) & \text{if } h(x_{i}) \neq y_{i} \end{array} \right.
$$

for a small constant $\alpha = \frac{1}{5}$ . To fix the second problem, we don't query the single $x^{*}$ of maximum $r(x^{*})$ , but instead choose $x$ according to a distribution $q$ supported on many points $x$ with large $r(x)$ .

To understand this algorithm, consider how $\log \lambda_i(h^*)$ evolves in expectation in each step. This increases if the query is correct, and decreases if it has an error. A correct query increases $\lambda_i(h^*)$ in proportion to the fraction of $\lambda$ placed on hypotheses that get the query wrong, which is at least $r(x)$ ; and the probability of an error is at most $\eta \max_x \frac{q(x)}{\mathcal{D}_x(x)}$ . If at iteration $i$ the algorithm uses query distribution $q$ , some calculation gives that

$$
\underset{q}{\mathbb{E}} \left[ \log \lambda_{i + 1} \left(h^{*}\right) - \log \lambda_{i} \left(h^{*}\right) \right] \geq 0.9 \alpha \left(\underset{x \sim q}{\mathbb{E}} [ r(x) ] - 2.3 \eta \max_{x} \frac{q(x)}{\mathcal{D}_{x}(x)}\right). \tag{1}
$$
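The multiplicative-weights update above can be sketched directly ($\alpha = 1/5$ as in the text; the list representation and names are ours):

```python
import math

ALPHA = 0.2  # the small constant alpha = 1/5

def mw_update(weights, hypotheses, x, y):
    """Down-weight every hypothesis that disagrees with the observed label y at x."""
    return [w if h[x] == y else w * math.exp(-ALPHA)
            for w, h in zip(weights, hypotheses)]

def posterior(weights):
    """The induced distribution lambda(h) = w(h) / sum_h w(h)."""
    total = sum(weights)
    return [w / total for w in weights]

H = [(0, 0), (0, 1), (1, 1)]
w = [1.0, 1.0, 1.0]
w = mw_update(w, H, x=0, y=0)        # penalizes (1, 1)
w = mw_update(w, H, x=1, y=1)        # penalizes (0, 0)
lam = posterior(w)
assert lam[1] == max(lam)            # (0, 1) matches both queries
```

Unlike the hard elimination of the realizable case, a single noisy label only dents a hypothesis's weight, so the true $h^*$ can recover from an $\eta$ fraction of errors.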
| | $\lambda(h)$ | Values $h(x)$ |
| --- | --- | --- |
| $h_1$ | $0.9$ | 1111 1111 |
| $h_2$ | $0.1 - 10^{-6}$ | 1111 0000 |
| $h_3$ | $10^{-6}$ | 0000 1110 |
| $y$ | | 0000 1111 |
+ +Figure 1: An example demonstrating that the weight of the true hypothesis can decrease if $\lambda$ is concentrated on the wrong ball. In this example, the true labels $y$ are closest to $h_3$ . But if the prior $\lambda$ on hypotheses puts far more weight on $h_1$ and $h_2$ , the algorithm will query uniformly over where $h_1$ and $h_2$ disagree: the second half of points. Over this query distribution, $h_1$ is more correct than $h_3$ , so the weight of $h_3$ can actually decrease if $\lambda(h_1)$ is very large. + +The algorithm can choose $q$ to maximize this bound on the potential gain. There's a tradeoff between concentrating the samples over the $x$ of largest $r(x)$ , and spreading out the samples so the adversary can't raise the error probability too high. We show that if learning is possible by any algorithm (for a constant factor larger $\eta$ ), then there exists a $q$ for which this potential gain is significant. + +Lemma 2.1 (Connection to OPT). Define $\| h - h' \| = \operatorname{Pr}_{x \sim \mathcal{D}_x}[h(x) \neq h'(x)]$ . Let $\lambda$ be a distribution over $H$ such that no radius- $(2\eta + \varepsilon)$ ball $B$ centered on $h \in H$ has probability at least $80\%$ . Let $m^* = m^* \left( H, \mathcal{D}_X, \eta, \varepsilon, \frac{99}{100} \right)$ . Then there exists a query distribution $q$ over $\mathcal{X}$ with + +$$ +\underset {x \sim q} {\mathbb {E}} [ r (x) ] - \frac {1}{1 0} \eta \max _ {x} \frac {q (x)}{\mathcal {D} _ {X} (x)} \geq \frac {9}{1 0 0 m ^ {*}}. +$$ + +At a very high level, the proof is: imagine $h^* \sim \lambda$ . If the algorithm only sees the majority label $y$ on every query it performs, then its output $\widehat{h}$ is independent of $h^*$ and cannot be valid for more than 80% of inputs by the ball assumption; hence a 99% successful algorithm must have a 19% chance of seeing a minority label. 
But for $m^*$ queries $x$ drawn with marginal distribution $q$ , without noise the expected number of minority labels seen is $m^* \mathbb{E}[r(x)]$ , so $\mathbb{E}[r(x)] \gtrsim 1 / m^*$ . With noise, the adversary can corrupt the minority labels in $h^*$ back toward the majority, leading to the given bound. + +The query distribution optimizing (1) has a simple structure: take a threshold $\tau$ for $r(x)$ , sample from $\mathcal{D}_x$ conditioned on $r(x) > \tau$ , and possibly sample $x$ with $r(x) = \tau$ at a lower rate. This means the algorithm can efficiently find the optimal $q$ . + +Except for the caveat about $\lambda$ not already concentrating in a small ball, applying Lemma 2.1 combined with (1) shows that $\log \lambda(h^{*})$ grows by $\Omega\left(\frac{1}{m^{*}}\right)$ in expectation for each query. It starts out at $\log \lambda(h^{*}) = -\log H$ , so after $O(m^{*}\log H)$ queries we would have $\lambda(h^{*})$ being a large constant in expectation (and with high probability, by Freedman's inequality for concentration of martingales). Of course $\lambda(h^{*})$ can't grow past 1, which features in this argument in that once $\lambda(h^{*}) > 80\%$ , a small ball will have large probability and Lemma 2.1 no longer applies, but at that point we can just output any hypothesis in the heavy ball. + +Handling noise: the challenge. There is one omission in the above argument that is surprisingly challenging to fix, and ends up requiring significant changes to the algorithm: if at an intermediate step $\lambda_{i}$ concentrates in the wrong small ball, the algorithm will not necessarily make progress. It is entirely possible that $\lambda_{i}$ concentrates in a small ball, even in the first iteration—perhaps $99\%$ of the hypotheses in $H$ are close to each other. And if that happens, then we will have $r(x) \leq 0.01$ for most $x$ , which could make the RHS of (1) negative for all $q$ . 
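The threshold structure of the optimizing $q$ described above can be sketched as follows; this simplified version puts all of $q$'s mass, proportionally to $\mathcal{D}_x$, on points with $r(x)$ strictly above the cutoff, and omits the fractional mass the optimal $q$ may place on the boundary $r(x) = \tau$ (names and dictionary representation are ours):

```python
def thresholded_query_distribution(r, d_x, tau):
    """q(x) proportional to D_x(x) on {x : r(x) > tau}, zero elsewhere.

    Simplified sketch: the optimal q may also place partial mass on the
    boundary r(x) == tau, which we omit here.
    """
    mass = sum(p for x, p in d_x.items() if r[x] > tau)
    if mass == 0:
        raise ValueError("no point exceeds the threshold")
    return {x: (p / mass if r[x] > tau else 0.0) for x, p in d_x.items()}

d_x = {"a": 0.5, "b": 0.3, "c": 0.2}   # marginal distribution over points
r = {"a": 0.05, "b": 0.4, "c": 0.3}    # uncertainty score of each point
q = thresholded_query_distribution(r, d_x, tau=0.1)
assert q["a"] == 0.0
assert abs(q["b"] - 0.3 / 0.5) < 1e-12
assert abs(sum(q.values()) - 1.0) < 1e-12
```

Raising $\tau$ concentrates queries on high-$r$ points; lowering it spreads them out, capping $\max_x q(x)/\mathcal{D}_x(x)$ and hence the damage adversarial noise can do.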
+ +In fact, it seems like a reasonable Bayesian-inspired algorithm really must allow $\lambda(h^{*})$ to decrease in some situations. Consider the setting of Figure 1. We have three hypotheses, $h_1, h_2$ , and $h_3$ , and a prior $\lambda = (0.9, 0.099999, 10^{-6})$ . Because $\lambda(h_3)$ is so tiny, the algorithm presumably should ignore $h_3$ and query essentially uniformly from the locations where $h_1$ and $h_2$ disagree. In this example, $h_3$ agrees with $h_1$ on all but an $\eta$ mass in those locations, so even if $h^{*} = h_{3}$ , the query distribution can match $h_1$ perfectly and not $h_3$ . Then $w(h_1)$ stays constant while $w(h_3)$ shrinks. $w(h_2)$ shrinks much faster, of course, but since the denominator is dominated by $w(h_1)$ , $\lambda(h_3)$ will still shrink. However, despite $\lambda(h_3)$ shrinking, the algorithm is still making progress in this example: $\lambda(h_2)$ is shrinking fast, and once it becomes small relative to $\lambda(h_3)$ then the algorithm will start querying points to distinguish $h_3$ from $h_1$ , at which point $\lambda(h_3)$ will start an inexorable rise. + +Our solution is to "cap" the large density balls in $\lambda$ , dividing their probability by two, when applying Lemma 2.1. Our algorithm maintains a set $S \subseteq H$ of the "high-density region," such that the capped + +distribution: + +$$ +\overline {{\lambda}} (h) := \left\{ \begin{array}{l l} \frac {1}{2} \lambda (h) & h \in S \\ \lambda (h) \cdot \frac {1 - \frac {1}{2} \Pr [ h \in S ]}{1 - \Pr [ h \in S ]} & h \notin S \end{array} \right. +$$ + +has no large ball. Then Lemma 2.1 applies to $\overline{\lambda}$ , giving the existence of a query distribution $q$ so that the corresponding $\overline{r}(x)$ is large. 
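A minimal sketch of this capping operation (dictionary representation ours; `S` is the high-density set, assumed to have $\lambda$-mass strictly below 1):

```python
def cap(lam, S):
    """Halve the probability of hypotheses in S and scale up the rest
    so that the capped distribution still sums to 1."""
    p_S = sum(p for h, p in lam.items() if h in S)
    boost = (1 - p_S / 2) / (1 - p_S)   # scale factor for h outside S
    return {h: (p / 2 if h in S else p * boost) for h, p in lam.items()}

# The scenario of Figure 1: lambda concentrates on h1.
lam = {"h1": 0.9, "h2": 0.099999, "h3": 1e-6}
capped = cap(lam, S={"h1"})
assert abs(sum(capped.values()) - 1.0) < 1e-12
assert abs(capped["h1"] - 0.45) < 1e-12
assert capped["h3"] > lam["h3"]       # mass outside S is boosted
```

Halving the heavy region's mass guarantees no ball retains more than roughly half its probability, which is what lets Lemma 2.1 be applied to $\overline{\lambda}$.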
We then define the potential function

$$
\phi_i(h^*) := \log \lambda_i(h^*) + \log \frac{\lambda_i(h^*)}{\sum_{h \notin S_i} \lambda_i(h)} \tag{2}
$$

for $h^* \notin S_i$, and $\phi_i = 0$ for $h^* \in S_i$. We show that $\phi_i$ grows by $\Omega\left(\frac{1}{m^{*}}\right)$ in expectation in each iteration. Thus, as in the example of Figure 1, either $\lambda(h^*)$ grows as a fraction of the whole distribution, or as a fraction of the "low-density" region.

If at any iteration we find that $\overline{\lambda}$ has some heavy ball $B(\mu, 2\eta + \varepsilon)$, so that Lemma 2.1 would not apply, we add $B(\mu', 6\eta + 3\varepsilon)$ to $S$, where $B(\mu', 2\eta + \varepsilon)$ is the heaviest ball before capping. We show that this ensures that no small heavy ball exists in the capped distribution $\overline{\lambda}$. Expanding $S$ only increases the potential function, and then the lack of a heavy ball implies that the potential will continue to grow.

Thus the potential (2) starts at $-2\log |H|$, and grows by $\Omega\left(\frac{1}{m^{*}}\right)$ in each iteration. After $O(m^{*}\log |H|)$ iterations, we will have $\phi_{i} \geq 0$ in expectation (and with high probability, by Freedman's inequality). This is only possible if $h^{\ast} \in S$, which means that one of the centers $\mu$ of the balls added to $S$ is a valid answer.

In fact, with some careful analysis we can show that with probability $1 - \delta$, one of the first $O(\log \frac{|H|}{\delta})$ balls added to $S$ is a valid answer. The algorithm can then check all the centers of these balls, using the following active agnostic learning algorithm:

Theorem 2.2. Active agnostic learning can be solved for $\varepsilon = 3\eta$ with $O\left(|H| \log \frac{|H|}{\delta}\right)$ samples.

Proof. The algorithm is the following. Take any pair $h, h'$ with $\| h - h' \| \geq 3\eta$.
Sample $O\left(\log \frac{|H|}{\delta}\right)$ observations randomly from $(x \sim \mathcal{D}_x \mid h(x) \neq h'(x))$. One of $h, h'$ is wrong on at least half of these queries; remove it from $H$ and repeat. At the end, return any remaining $h$.

To analyze this, let $h^* \in H$ be the hypothesis with error $\eta$. If $h^*$ is chosen in a round, the other hypothesis $h'$ must have error at least $2\eta$. Conditioned on the disagreement region, $h^*$ is wrong with probability at most $\frac{\eta}{3\eta} = \frac{1}{3}$ per query, so by a Chernoff bound the chance we remove $h^*$ in the round is at most $\delta / |H|$. In each round we remove a hypothesis, so there are at most $|H|$ rounds and at most $\delta$ probability of ever crossing off $h^*$. If we never cross off $h^*$, at the end we output some $h$ with $\| h - h^* \| \leq 3\eta$, which gives $\varepsilon = 3\eta$.

The linear dependence on $|H|$ makes the Theorem 2.2 algorithm quite bad in most circumstances, but the dependence only on $|H|$ makes it perfect for our second stage (where we have reduced to $O(\log |H|)$ candidate hypotheses).

Overall, this argument gives an $O\left(m^{*}\log \frac{|H|}{\delta} + \log \frac{|H|}{\delta}\log \frac{\log|H|}{\delta}\right)$ sample algorithm for agnostic active learning. One can simplify this bound by observing that the set of centers $C$ added by our algorithm forms a packing, and its elements must therefore all be distinguishable by the optimal algorithm, so $m^{*} \geq \log |C|$. This gives a bound of

$$
O\left(\left(m^{*} + \log \frac{1}{\delta}\right) \log \frac{|H|}{\delta}\right).
$$

By starting with an $\eta$-net of size $N$, we can reduce $|H|$ to $N$ with a constant-factor increase in $\eta$.

With some properly chosen constants $c_4$ and $c_5$, the entire algorithm is formally described in Algorithm 1.

Remark 1: As stated, the algorithm requires knowing $m^*$ to set the target sample complexity / number of rounds $k$. This restriction could be removed with the following idea.
$m^*$ only enters the analysis through the fact that $\Omega\left(\frac{1}{m^*}\right)$ is a lower bound on the expected increase of the potential function in each iteration. However, the algorithm knows a bound on its expected increase in each round $i$; it is the value

$$
\tau_i = \max_q \underset{x \sim q}{\mathbb{E}}[\bar{r}_{i,S_i}(x)] - \frac{c_4}{20}\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)}
$$

optimized in the algorithm. Therefore, we could use an adaptive termination criterion that stops at iteration $k$ if $\sum_{i=1}^{k} \tau_i \geq O(\log \frac{|H|}{\delta})$. This guarantees that upon termination the potential is above $0$ with high probability, so our analysis holds.

Remark 2: The algorithm's running time is polynomial in $|H|$. This is in general not avoidable, since the input is a truth table for $H$. The bottleneck of the computation is the step where the algorithm checks whether the heaviest ball has mass greater than $80\%$. This step could be accelerated by randomly sampling hypotheses and points to estimate and find heavy balls; this would improve the dependence to nearly linear in $|H|$. If the hypothesis class has some structure, as in the binary search example, the algorithm can be implemented more efficiently.
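At the heart of the algorithm is the per-query multiplicative-weights update analyzed in Lemma A.1 in the appendix: hypotheses that disagree with the observed label are downweighted by a factor $e^{-\alpha}$. A minimal sketch (the dict-of-label-vectors representation of $H$ is illustrative):

```python
import math

def update_weights(w, H, x, y, alpha=0.2):
    """One update step: each hypothesis h with h(x) != y is downweighted
    by e^{-alpha}; hypotheses agreeing with the observed label keep their
    weight.  w maps hypothesis names to weights; H maps names to 0/1
    label vectors indexed by the query point x."""
    return {h: wh * (math.exp(-alpha) if H[h][x] != y else 1.0)
            for h, wh in w.items()}

def posterior(w):
    """lambda_i(h): the weights normalized into a distribution."""
    total = sum(w.values())
    return {h: wh / total for h, wh in w.items()}
```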
Algorithm 1 Competitive Algorithm for Active Agnostic Learning

Compute a \(2\eta\) maximal packing \(H'\) of \(H\)
Let \(w_{0}(h) = 1\) for every \(h \in H'\)
\(S_0 \gets \emptyset\), \(C \gets \emptyset\)
for \(i = 1, \ldots, k = O\left(m^{*}\log \frac{|H'|}{\delta}\right)\) do
Compute \(\lambda_i(h) = \frac{w_{i-1}(h)}{\sum_{h' \in H'} w_{i-1}(h')}\) for every \(h \in H'\)
if there exists a radius-\(c_4\eta + c_5\varepsilon\) ball with probability \(> 80\%\) under \(\overline{\lambda}_{i,S_{i-1}}\) then
\(S_i \gets S_{i-1} \cup B(\mu', 3c_4\eta + 3c_5\varepsilon)\), where \(B(\mu', c_4\eta + c_5\varepsilon)\) is the heaviest radius-\(c_4\eta + c_5\varepsilon\) ball under \(\lambda_i\); \(C \gets C \cup \{\mu'\}\); \(w_i \gets w_{i-1}\)
else
\(S_i \gets S_{i-1}\); choose a query distribution \(q\) maximizing \(\mathbb{E}_{x \sim q}[\bar{r}_{i,S_i}(x)] - \frac{c_4}{20}\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)}\); sample \(x \sim q\) and query it, receiving a label \(y\); set \(w_i(h) = w_{i-1}(h)\, e^{-\alpha}\) for every \(h \in H'\) with \(h(x) \neq y\), and \(w_i(h) = w_{i-1}(h)\) otherwise
end for
Run the Theorem 2.2 algorithm on the centers in \(C\) and return its answer

Generalization for Better Bounds. To get a better dependence for 1d threshold functions, we separate out the Lemma 2.1 bound on (1) from the analysis of the algorithm given a bound on (1). Then for particular instances like 1d threshold functions, we get a better bound for the algorithm by proving a larger bound on (1).

Theorem 2.3.
Suppose that $\mathcal{D}_x$ and $H$ are such that, for any distribution $\lambda$ over $H$ such that no radius-$(c_4\eta + c_5\varepsilon)$ ball has probability more than $80\%$, there exists a distribution $q$ over $X$ such that

$$
\underset{x \sim q}{\mathbb{E}}[r(x)] - \frac{c_4}{20}\eta \max_x \frac{q(x)}{\mathcal{D}_x(x)} \geq \beta
$$

for some $\beta > 0$. Then for $\varepsilon \geq c_1 \eta$, $c_4 \geq 300$, $c_5 = \frac{1}{10}$ and $c_1 \geq 90 c_4$, let $N = N(H, \mathcal{D}_x, \eta)$ be the size of an $\eta$-cover of $H$. Algorithm 1 solves $(\eta, \varepsilon, \delta)$ active agnostic learning with $O\left(\frac{1}{\beta} \log \frac{N}{\delta} + \log \frac{N}{\delta} \log \frac{\log N}{\delta}\right)$ samples.

Corollary 2.4. There exists a constant $c_{1} > 1$ such that, for 1d threshold functions and $\varepsilon > c_{1}\eta$, Algorithm 1 solves $(\eta, \varepsilon, \delta)$ active agnostic learning with $O\left(\log \frac{1}{\varepsilon\delta}\log \frac{\log\frac{1}{\varepsilon}}{\delta}\right)$ samples.

Proof. Because the problem is only harder if $\eta$ is larger, we can raise $\eta$ to $\eta = \varepsilon / C$, where $C > 1$ is a sufficiently large constant that Theorem 2.3 applies. Then 1d threshold functions have an $\eta$-cover of size $N = O(1 / \varepsilon)$. To get the result from Theorem 2.3, it suffices to show $\beta = \Theta(1)$.

Each hypothesis is of the form $h(x) = 1_{x \geq \tau}$, and corresponds to a threshold $\tau$. So we can consider $\lambda$ to be a distribution over $\tau$.

Let $\lambda$ be any distribution for which no radius-$R$ ball with probability greater than $80\%$ exists, where $R = c_4\eta + c_5\varepsilon$. For any percent $p$ between 0 and 100, let $\tau_{p}$ denote the $p$th percentile of $\tau$ under $\lambda$ (i.e., the smallest $t$ such that $\operatorname*{Pr}[\tau \leq t] \geq p / 100$).
By the ball assumption, $\tau_{10}$ and $\tau_{90}$ do not lie in the same radius-$R$ ball. Hence $\| h_{\tau_{10}} - h_{\tau_{90}}\| > R$, or

$$
\operatorname*{Pr}_x[\tau_{10} \leq x < \tau_{90}] > R.
$$

We let $q$ denote $(\mathcal{D}_x \mid \tau_{10} \leq x < \tau_{90})$. Then for all $x \in \operatorname{supp}(q)$ we have $r(x) \geq 0.1$ and

$$
\frac{q(x)}{D_x(x)} = \frac{1}{\operatorname*{Pr}_{x \sim D_x}[x \in \operatorname{supp}(q)]} < \frac{1}{R}.
$$

Therefore we can set

$$
\beta = \mathop{\mathbb{E}}_{x\sim q}\left[r(x)\right] - \frac{c_4}{20}\eta \max_x \frac{q(x)}{D_x(x)} \geq 0.1 - \frac{c_4\eta}{20(c_4\eta + c_5\varepsilon)} \gtrsim 1,
$$

as needed.

# 3 Proof of Lemma 2.1

Lemma 2.1 (Connection to OPT). Define $\| h - h' \| = \operatorname{Pr}_{x \sim \mathcal{D}_x}[h(x) \neq h'(x)]$. Let $\lambda$ be a distribution over $H$ such that no radius-$(2\eta + \varepsilon)$ ball $B$ centered on $h \in H$ has probability at least $80\%$. Let $m^* = m^* \left( H, \mathcal{D}_X, \eta, \varepsilon, \frac{99}{100} \right)$. Then there exists a query distribution $q$ over $\mathcal{X}$ with

$$
\underset{x \sim q}{\mathbb{E}}[r(x)] - \frac{1}{10}\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \geq \frac{9}{100 m^*}.
$$

Proof. WLOG, we assume that $\operatorname{Pr}_{h\sim \lambda}[h(x) = 0] \geq \operatorname{Pr}_{h\sim \lambda}[h(x) = 1]$ for every $x \in \mathcal{X}$. This means $r(x) = \mathbb{E}_{h\sim \lambda}[h(x)]$. This can be achieved by flipping all $h(x)$ and observations $y$ for all $x$ not satisfying this property; such an operation doesn't affect the lemma statement.

We will consider an adversary defined by a function $g: X \to [0,1]$.
The adversary takes a hypothesis $h \in H$ and outputs a distribution over $y \in \{0,1\}^X$ such that $0 \leq y(x) \leq h(x)$ always, and $\operatorname{err}(h) = \mathbb{E}_{x,y}[h(x) - y(x)] \leq \eta$ always. For a hypothesis $h$, the adversary sets $y(x) = 0$ for all $x$ with $h(x) = 0$, and sets $y(x) = 0$ independently with probability $g(x)$ when $h(x) = 1$, unless $\mathbb{E}_x[h(x)g(x)] > \eta$, in which case the adversary instead simply outputs $y = h$ to ensure the expected error is at most $\eta$ always.

We consider the agnostic learning instance where $x \sim \mathcal{D}_x$, $h \sim \lambda$, and $y$ is given by this adversary. Let $\mathcal{A}$ be an $(\eta, \varepsilon)$ algorithm which uses $m$ measurements and succeeds with $99\%$ probability. Then it must also succeed with $99\%$ probability over this distribution. For the algorithm to succeed on a sample $h$, its output $\widehat{h}$ must have $\| h - \widehat{h} \| \leq 2\eta + \varepsilon$. By the bounded ball assumption, for any choice of adversary, no fixed output succeeds with more than $80\%$ probability over $h \sim \lambda$.

Now, let $\mathcal{A}_0$ be the behavior of $\mathcal{A}$ if it observes $y = 0$ for all its queries, rather than the truth; $\mathcal{A}_0$ is independent of the input. $\mathcal{A}_0$ has some distribution over $m$ queries and outputs some distribution over answers $\widehat{h}$. Let $q(x) = \frac{1}{m} \operatorname*{Pr}[\mathcal{A}_0 \text{ queries } x]$, so $q$ is a distribution over $\mathcal{X}$. Since $\mathcal{A}_0$ outputs a fixed distribution, by the bounded ball assumption, for $h \sim \lambda$ and an arbitrary adversary function $g$,

$$
\Pr_{h\sim \lambda}[\mathcal{A}_{0}\text{ succeeds}] \leq 80\%.
$$

But $\mathcal{A}$ behaves identically to $\mathcal{A}_0$ until it sees its first nonzero $y$.
Thus,

$$
99\% \leq \Pr[\mathcal{A}\text{ succeeds}] \leq \Pr[\mathcal{A}_0\text{ succeeds}] + \Pr[\mathcal{A}\text{ sees a non-zero } y]
$$

and so

$$
\Pr[\mathcal{A}\text{ sees a non-zero } y] \geq 19\%.
$$

Since $\mathcal{A}$ behaves like $\mathcal{A}_0$ until the first nonzero, we have

$$
\begin{array}{l}
19\% \leq \Pr[\mathcal{A}\text{ sees a non-zero } y] \\
= \Pr[\mathcal{A}_0\text{ makes a query } x \text{ with } y(x) = 1] \\
\leq \mathbb{E}[\text{number of queries } x \text{ by } \mathcal{A}_0 \text{ with } y(x) = 1] \\
= m \underset{h \sim \lambda}{\mathbb{E}}\, \underset{y}{\mathbb{E}}\, \underset{x \sim q}{\mathbb{E}}[y(x)].
\end{array}
$$

As an initial note, observe that $\mathbb{E}_{h,y}[y(x)] \leq \mathbb{E}_h[h(x)] = r(x)$, so

$$
\underset{x \sim q}{\mathbb{E}}[r(x)] \geq \frac{0.19}{m}.
$$

Thus the lemma statement holds for $\eta = 0$.

Handling $\eta > 0$. Consider the behavior when the adversary's function $g: X \to [0,1]$ satisfies $\mathbb{E}_{x \sim \mathcal{D}_x}[g(x)r(x)] \leq \eta / 10$. We denote the class of all adversaries satisfying this condition by $G$. We have that

$$
\underset{h \sim \lambda}{\mathbb{E}}\left[\underset{x \sim \mathcal{D}_x}{\mathbb{E}}[g(x)h(x)]\right] = \underset{x \sim \mathcal{D}_x}{\mathbb{E}}[g(x)r(x)] \leq \eta / 10.
$$

Let $E_{h}$ denote the event that $\mathbb{E}_{x\sim \mathcal{D}_x}[g(x)h(x)]\leq \eta$, so by Markov's inequality $\operatorname*{Pr}[\overline{E}_h]\leq 10\%$. Furthermore, the adversary is designed such that under $E_{h}$, $\mathbb{E}_y[y(x)] = h(x)(1 - g(x))$ for every $x$. Therefore:

$$
\begin{array}{l}
0.19 \leq \Pr[\mathcal{A}_0\text{ makes a query } x \text{ with } y(x) = 1] \\
\leq \Pr[\overline{E}_h] + \Pr[\mathcal{A}_0\text{ makes a query } x \text{ with } y(x) = 1 \cap E_h] \\
\leq 0.1 + \mathbb{E}[\text{number of queries } x \text{ by } \mathcal{A}_0 \text{ with } y(x) = 1 \text{ and } E_h] \\
= 0.1 + m \underset{h}{\mathbb{E}}\left[\mathbb{1}_{E_h} \underset{x \sim q}{\mathbb{E}}\left[\mathbb{E}_y\, y(x)\right]\right] \\
= 0.1 + m \underset{h}{\mathbb{E}}\left[\mathbb{1}_{E_h} \underset{x \sim q}{\mathbb{E}}[h(x)(1 - g(x))]\right] \\
\leq 0.1 + m \underset{x \sim q}{\mathbb{E}}[\mathbb{E}_h[h(x)](1 - g(x))] \\
= 0.1 + m \underset{x \sim q}{\mathbb{E}}[r(x)(1 - g(x))].
\end{array}
$$

Thus

$$
\max_q \min_{g \in G} \mathbb{E}_{x \sim q}[r(x)(1 - g(x))] \geq \frac{9}{100m} \tag{4}
$$

where the maximum is over all distributions $q$ and the minimum over all functions $g: X \to [0,1]$ satisfying $\mathbb{E}_{x\sim \mathcal{D}_x}[g(x)r(x)]\leq \eta /10$. We now try to understand the structure of the $q, g$ optimizing the LHS of (4).

Let $g^{*}$ denote an optimizer of the objective. First, we show that the constraint is tight, i.e., $\mathbb{E}_{x\sim \mathcal{D}_x}[g^*(x)r(x)] = \eta /10$. Since increasing $g$ improves the adversary's objective, the only way this could not happen is if the maximum possible function, $g(x) = 1$ for all $x$, lies in $G$. But for this function, the LHS of (4) would be $0$, which is a contradiction; hence increasing $g$ to improve the objective at some point hits the constraint, and so $\mathbb{E}_{x\sim \mathcal{D}_x}[g^*(x)r(x)] = \eta /10$.

For any $q$, define $\tau_q \geq 0$ to be the minimum threshold such that

$$
\underset{x \sim \mathcal{D}_x}{\mathbb{E}}\left[r(x) \cdot 1_{\frac{q(x)}{\mathcal{D}_X(x)} > \tau_q}\right] < \eta / 10.
$$

and define $g_{q}$ by

$$
g_q(x) := \left\{ \begin{array}{ll} 1 & \frac{q(x)}{\mathcal{D}_X(x)} > \tau_q \\ \alpha & \frac{q(x)}{\mathcal{D}_X(x)} = \tau_q \\ 0 & \frac{q(x)}{\mathcal{D}_X(x)} < \tau_q \end{array} \right.
$$

where $\alpha \in [0,1]$ is chosen such that $\mathbb{E}_{x\sim \mathcal{D}_x}[r(x)g_q(x)] = \eta /10$; such an $\alpha$ always exists by the choice of $\tau_{q}$.

For any $q$, we claim that the optimal $g^{*}$ in the LHS of (4) is $g_{q}$. The minimizing $g$ needs to maximize

$$
\underset{x \sim \mathcal{D}_X}{\mathbb{E}}\left[\frac{q(x)}{\mathcal{D}_X(x)} r(x) g(x)\right]
$$

subject to a constraint on $\mathbb{E}_{x\sim \mathcal{D}_X}[r(x)g(x)]$; therefore moving mass to points of larger $\frac{q(x)}{\mathcal{D}_X(x)}$ is always an improvement, and $g_{q}$ is optimal.

We now claim that the $q$ maximizing (4) has $\max_x \frac{q(x)}{\mathcal{D}_X(x)} = \tau_q$. If not, some $x'$ has $\frac{q(x')}{\mathcal{D}_X(x')} > \tau_q$. Then $g_q(x') = 1$, so the $x'$ entry contributes nothing to $\mathbb{E}_{x \sim q}[r(x)(1 - g_q(x))]$; thus decreasing $q(x')$ halfway towards $\tau_q \mathcal{D}_X(x')$ (which wouldn't change $g_q$), and adding the savings uniformly across all $q(x)$ (which also doesn't change $g_q$), would increase the objective.

So there exists a $q$ satisfying (4) for which $\operatorname*{Pr}\left[\frac{q(x)}{\mathcal{D}_X(x)} > \tau_q\right] = 0$, and therefore the set $T = \left\{x \,\middle|\, \frac{q(x)}{\mathcal{D}_X(x)} = \tau_q\right\}$ satisfies $\mathbb{E}_{\mathcal{D}_X}[r(x)\mathbb{1}_{x\in T}]\geq \eta /10$, and a $g_{q}$ minimizing (4) is

$$
g_q(x) = \frac{\eta}{10} \frac{\mathbb{1}_{x \in T}}{\mathbb{E}_{\mathcal{D}_X}[r(x) \mathbb{1}_{x \in T}]}.
$$

Therefore

$$
\underset{x \sim q}{\mathbb{E}}\left[r(x) g_q(x)\right] = \underset{x \sim \mathcal{D}_X}{\mathbb{E}}\left[\frac{q(x)}{\mathcal{D}_X(x)} r(x) \frac{\eta}{10} \frac{\mathbb{1}_{x \in T}}{\mathbb{E}_{\mathcal{D}_X}[r(x)\mathbb{1}_{x \in T}]}\right] \leq \frac{\eta}{10} \max_x \frac{q(x)}{\mathcal{D}_X(x)},
$$

and in fact equality holds, since $\frac{q(x)}{\mathcal{D}_X(x)} = \tau_q = \max_x \frac{q(x)}{\mathcal{D}_X(x)}$ for every $x \in T$. So by (4),

$$
\underset{x \sim q}{\mathbb{E}}[r(x)] - \frac{\eta}{10} \max_x \frac{q(x)}{\mathcal{D}_X(x)} \geq \frac{9}{100m}
$$

as desired.

# 4 Conclusion

We have given an algorithm that solves agnostic active learning with (for constant $\delta$) at most an $O(\log |H|)$ factor more queries than the optimal algorithm. It is NP-hard to improve upon this $O(\log |H|)$ factor in general, but for specific cases it can be avoided. We have shown that 1d threshold functions, i.e., binary search with adversarial noise, is one such example: our algorithm matches the performance of disagreement-coefficient-based methods and is within a $\log\log \frac{1}{\varepsilon}$ factor of optimal.

# 5 Acknowledgments

Yihan Zhou and Eric Price were supported by NSF awards CCF-2008868, CCF-1751040 (CAREER), and the NSF AI Institute for Foundations of Machine Learning (IFML).

# References

Mohammad Azad, Igor Chikalov, Shahid Hussain, Mikhail Moshkov, and Beata Zielosko. Decision Trees with Hypotheses. Springer International Publishing, 2022. doi: 10.1007/978-3-031-08585-7. URL https://doi.org/10.1007/978-3-031-08585-7.
Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. In Proceedings of the 23rd international conference on Machine learning, pages 65-72, 2006.
Michael Ben-Or and Avinatan Hassidim.
The Bayesian learner is optimal for noisy binary search (and pretty good for quantum as well). In 2008 49th Annual IEEE Symposium on Foundations of Computer Science, pages 221-230, 2008. doi: 10.1109/FOCS.2008.58.

Alina Beygelzimer, Sanjoy Dasgupta, and John Langford. Importance weighted active learning. In Proceedings of the 26th annual international conference on machine learning, pages 49-56, 2009.
Alina Beygelzimer, Daniel J Hsu, John Langford, and Tong Zhang. Agnostic active learning without constraints. Advances in neural information processing systems, 23, 2010.
Marat Valievich Burnashev and Kamil' Shamil'evich Zigangirov. An interval estimation problem for controlled observations. Problemy Peredachi Informatsii, 10(3):51-61, 1974.
David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning. Machine learning, 15:201-221, 1994.
Sanjoy Dasgupta. Analysis of a greedy active learning strategy. Advances in neural information processing systems, 17, 2004.
Sanjoy Dasgupta. Coarse sample complexity bounds for active learning. Advances in neural information processing systems, 18, 2005.
Sanjoy Dasgupta, Daniel J Hsu, and Claire Monteleoni. A general agnostic active learning algorithm. Advances in neural information processing systems, 20, 2007.
Dariusz Dereniowski, Aleksander Lukasiewicz, and Przemyslaw Uznanski. Noisy searching: simple, fast and correct. CoRR, abs/2107.05753, 2021. URL https://arxiv.org/abs/2107.05753.
Irit Dinur and David Steurer. Analytical approach to parallel repetition. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pages 624-633, 2014.
Steve Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th international conference on Machine learning, pages 353-360, 2007a.
Steve Hanneke. Teaching dimension and the complexity of active learning. In International conference on computational learning theory, pages 66-81. Springer, 2007b.
Steve Hanneke. Theory of disagreement-based active learning. Foundations and Trends® in Machine Learning, 7(2-3):131-309, 2014.
Steve Hanneke and Liu Yang. Minimax analysis of active learning. J. Mach. Learn. Res., 16(1):3487-3602, 2015.
Tibor Hegedus. Generalized teaching dimensions and the query complexity of learning. In Proceedings of the eighth annual conference on Computational learning theory, pages 108-117, 1995.
Matti Kääriäinen. Active learning in the non-realizable case. In Algorithmic Learning Theory: 17th International Conference, ALT 2006, Barcelona, Spain, October 7-10, 2006. Proceedings 17, pages 63-77. Springer, 2006.
Richard M. Karp and Robert Kleinberg. Noisy binary search and its applications. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '07, page 881-890, USA, 2007. Society for Industrial and Applied Mathematics. ISBN 9780898716245.
Julian Katz-Samuels, Jifan Zhang, Lalit Jain, and Kevin Jamieson. Improved algorithms for agnostic pool-based active classification. In International Conference on Machine Learning, pages 5334-5344. PMLR, 2021.
S Rao Kosaraju, Teresa M Przytycka, and Ryan Borgstrom. On an optimal split tree problem. In Algorithms and Data Structures: 6th International Workshop, WADS'99 Vancouver, Canada, August 11-14, 1999 Proceedings, pages 157-168. Springer, 2002.
Laurent Hyafil and Ronald L Rivest. Constructing optimal binary decision trees is NP-complete. Information Processing Letters, 5(1):15-17, 1976.
David D Lewis. A sequential algorithm for training text classifiers: Corrigendum and additional data. In Acm Sigir Forum, volume 29, pages 13-19. ACM New York, NY, USA, 1995.

David D Lewis and Jason Catlett. Heterogeneous uncertainty sampling for supervised learning. In Machine learning proceedings 1994, pages 148-156. Elsevier, 1994.
Robert Nowak. Generalized binary search. In 2008 46th Annual Allerton Conference on Communication, Control, and Computing, pages 568-574.
IEEE, 2008.
Robert D Nowak. The geometry of generalized binary search. IEEE Transactions on Information Theory, 57(12):7893-7906, 2011.
Burr Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009.
Joel Tropp. Freedman's inequality for matrix martingales. 2011.
Roman Vershynin. High-dimensional probability: An introduction with applications in data science, volume 47. Cambridge university press, 2018.

# A Query Complexity Upper Bound

In this section we present the full proof of the query complexity upper bound for Algorithm 1, as stated in Theorem 1.1.

# A.1 Notation

We first recall some definitions. Recall that $w_{i}(h)$ denotes the weight of hypothesis $h$ in iteration $i$, and that $\lambda_{i,S}(h) = \frac{w_i(h)}{\sum_{h' \in S} w_i(h')}$ for $S \subseteq H$ denotes the proportion of $h$ in $S$. We view $\lambda_{i,S}$ as a distribution over the hypotheses in $S$, so $\lambda_{i,S}(h) = 0$ for $h \notin S$. For a set $S \subseteq H$ of hypotheses, we define $w_{i}(S) := \sum_{h \in S} w_i(h)$ and $\lambda_{i}(h) := \lambda_{i,H}(h)$.

Define $r_{\lambda,h^*}(x) := \operatorname{Pr}_{h \sim \lambda}[h(x) \neq h^*(x)]$ and $r_\lambda(x) := \min_{y \in \{0,1\}} \operatorname{Pr}_{h \sim \lambda}[h(x) \neq y]$, so that $r_\lambda(x) = \min(r_{\lambda,h^*}(x), 1 - r_{\lambda,h^*}(x))$.

Define

$$
\bar{\lambda}_{i,S}(h) := \frac{1}{2}\lambda_i(h) + \frac{1}{2}\lambda_{i,H\setminus S}(h) = \left\{ \begin{array}{ll} \frac{1}{2}\lambda_i(h) & h \in S \\ \lambda_i(h) \cdot \frac{1 - \frac{1}{2}\Pr_{h' \sim \lambda_i}[h' \in S]}{1 - \Pr_{h' \sim \lambda_i}[h' \in S]} & h \notin S \end{array} \right. \tag{5}
$$

as the "capped" distribution in iteration $i$.
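For a finite class these disagreement quantities are simple matrix operations; a sketch (the 0/1 matrix encoding of $H$ is an illustrative representation):

```python
import numpy as np

def disagreement_masses(lam, H, h_star):
    """Given a distribution lam over the rows of a (hypotheses x points)
    0/1 matrix H, return, for every point x,
      r_{lam,h*}(x) = Pr_{h~lam}[h(x) != h*(x)]   and
      r_lam(x)      = min_y Pr_{h~lam}[h(x) != y]."""
    H = np.asarray(H)
    lam = np.asarray(lam, dtype=float)
    r_h = lam @ (H != H[h_star])   # disagreement mass with h* at each x
    maj = lam @ H                  # Pr_{h~lam}[h(x) = 1] at each x
    r = np.minimum(maj, 1 - maj)   # minority-label mass at each x
    return r_h, r
```

The identity $r_\lambda(x) = \min(r_{\lambda,h^*}(x), 1 - r_{\lambda,h^*}(x))$ can be checked directly on the output.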
Finally, for notational convenience define $r_{i,S} := r_{\lambda_{i,S}}$, $r_{i,S,h} := r_{\lambda_{i,S},h}$ and $\bar{r}_{i,S} := r_{\overline{\lambda}_{i,S}}$.

The main focus of our proof will be analyzing the potential function

$$
\phi_i(h^*) = \left\{ \begin{array}{ll} \log \lambda_i(h^*) + \log \lambda_{i,H\setminus S_i}(h^*) & h^* \notin S_i \\ 0 & h^* \in S_i, \end{array} \right.
$$

where $h^*$ is the best hypothesis in $H$. We would like to show that $\phi_{i+1}(h^*) - \phi_i(h^*)$ grows at a suitable rate in each iteration. We pick the $S_i$ to be an increasing sequence of sets, i.e., $S_i \subseteq S_{i+1}$ for every $i \geq 1$. However, the changes in the capped set $S_i$ make this difficult to analyze directly. Therefore, we instead analyze the quantity

$$
\Delta_i(h^*) := \left\{ \begin{array}{ll} \log \frac{\lambda_{i+1}(h^*)}{\lambda_i(h^*)} + \log \frac{\lambda_{i+1,H\setminus S_i}(h^*)}{\lambda_{i,H\setminus S_i}(h^*)} & h^* \notin S_i \\ 0 & h^* \in S_i, \end{array} \right.
$$

so that $\phi_{i+1}(h^*) - \phi_i(h^*) = \Delta_i(h^*) + \log \frac{\lambda_{i+1,H\setminus S_{i+1}}(h^*)}{\lambda_{i+1,H\setminus S_i}(h^*)}$ if $h^* \notin S_{i+1}$. Further, we define $\psi_k(h^*) := \sum_{i<k} \Delta_i(h^*)$, so by definition $\phi_k(h^*) = \phi_0(h^*) + \psi_k(h^*) + \sum_{i<k} \log \frac{\lambda_{i+1,H\setminus S_{i+1}}(h^*)}{\lambda_{i+1,H\setminus S_i}(h^*)}$ if $h^* \notin S_k$. In the following text, we will drop the parameter $h^*$ when the context is clear and just write $\phi_i, \Delta_i$ and $\psi_i$.

# A.2 Potential Growth

We lower bound the conditional per-iteration potential increase by first introducing a lemma that relates the potential change to the optimization problem (3).

Lemma A.1.
Assume that $\mathrm{err}(h^{*}) \leq \eta$. Then for any set $S$ of hypotheses containing $h^*$ and any query distribution $q$, we have

$$
\mathbb{E}\left[\log \frac{\lambda_{i+1,S}(h^*)}{\lambda_{i,S}(h^*)} \,\Bigg|\, \mathcal{F}_i\right] \geq 0.9\alpha \left(\underset{x \sim q}{\mathbb{E}}[r_{i,S,h^*}(x)] - 2.3\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)}\right)
$$

for $\alpha \leq 0.2$. Moreover,

$$
\mathbb{E}\left[\max\left\{0, \log \frac{\lambda_{i+1,S}(h^*)}{\lambda_{i,S}(h^*)}\right\} \,\Bigg|\, \mathcal{F}_i\right] \leq \alpha \underset{x \sim q}{\mathbb{E}}[r_{i,S,h^*}(x)].
$$

Proof. For notational convenience, define $\widetilde{r}(x) := r_{i,S,h^*}(x)$.

Observe that

$$
\frac{\lambda_{i,S}(h^*)}{\lambda_{i+1,S}(h^*)} = \frac{w_i(h^*)}{w_{i+1}(h^*)} \frac{\sum_{h \in S} w_{i+1}(h)}{\sum_{h \in S} w_i(h)} = \frac{w_i(h^*)}{w_{i+1}(h^*)} \underset{h \sim \lambda_{i,S}}{\mathbb{E}}\left[\frac{w_{i+1}(h)}{w_i(h)}\right].
$$

Let $p(x) = \operatorname*{Pr}_{y\sim (Y|X)}[y\neq h^{*}(x)]$ denote the probability of error if we query $x$, so

$$
\underset{x \sim \mathcal{D}_X}{\mathbb{E}}[p(x)] \leq \eta.
$$

Suppose we query a point $x$ and do not get an error. Then the hypotheses that disagree with $h^*$ are downweighted by an $e^{-\alpha}$ factor, so

$$
\frac{\lambda_{i,S}(h^*)}{\lambda_{i+1,S}(h^*)} = \underset{h \sim \lambda_{i,S}}{\mathbb{E}}[1 + (e^{-\alpha} - 1) 1_{h(x) \neq h^*(x)}] = 1 - (1 - e^{-\alpha})\widetilde{r}(x).
$$

On the other hand, if we do get an error then the disagreeing hypotheses are effectively upweighted by $e^{\alpha}$:

$$
\frac{\lambda_{i,S}(h^*)}{\lambda_{i+1,S}(h^*)} = 1 + (e^{\alpha} - 1)\widetilde{r}(x).
$$

Therefore

$$
\begin{array}{l}
\underset{y|x}{\mathbb{E}}\left[\log \frac{\lambda_{i+1,S}(h^*)}{\lambda_{i,S}(h^*)} \,\Big|\, \mathcal{F}_i\right] \\
= -(1 - p(x))\log\left(1 - (1 - e^{-\alpha})\widetilde{r}(x)\right) - p(x)\log\left(1 + (e^{\alpha} - 1)\widetilde{r}(x)\right) \tag{6} \\
\geq (1 - p(x))(1 - e^{-\alpha})\widetilde{r}(x) - p(x)(e^{\alpha} - 1)\widetilde{r}(x) \\
= (1 - e^{-\alpha})\widetilde{r}(x) - p(x)\widetilde{r}(x)(e^{\alpha} - e^{-\alpha}).
\end{array}
$$

Using that $\widetilde{r}(x) \leq 1$, we have

$$
\begin{array}{l}
\mathbb{E}\left[\log \frac{\lambda_{i+1,S}(h^*)}{\lambda_{i,S}(h^*)} \,\Big|\, \mathcal{F}_i\right] \geq (1 - e^{-\alpha})\underset{x \sim q}{\mathbb{E}}[\widetilde{r}(x)] - (e^{\alpha} - e^{-\alpha})\underset{x \sim q}{\mathbb{E}}[p(x)] \\
\geq 0.9\alpha \underset{x \sim q}{\mathbb{E}}[\widetilde{r}(x) - 2.3 p(x)],
\end{array}
$$

where the last step uses $\alpha \leq 0.2$. Finally,

$$
\underset{x \sim q}{\mathbb{E}}[p(x)] = \underset{x \sim \mathcal{D}_X}{\mathbb{E}}\left[p(x)\frac{q(x)}{\mathcal{D}_X(x)}\right] \leq \eta \max_x \frac{q(x)}{\mathcal{D}_X(x)}.
$$

This proves the first desired result.
For the second, note that if we query $x$, then conditioned on $\mathcal{F}_i$

$$
\max\left\{0, \log \frac{\lambda_{i+1,S}(h^*)}{\lambda_{i,S}(h^*)}\right\} = \left\{ \begin{array}{ll} 0 & \text{with probability } p(x), \\ -\log(1 - (1 - e^{-\alpha})\widetilde{r}(x)) & \text{otherwise.} \end{array} \right.
$$

Since $-\log(1 - (1 - e^{-\alpha})\widetilde{r}(x)) \leq \alpha \widetilde{r}(x)$ for $\widetilde{r}(x) \in [0,1]$ (the left-hand side is convex in $\widetilde{r}(x)$, equal to $0$ at $\widetilde{r}(x) = 0$ and to $\alpha$ at $\widetilde{r}(x) = 1$, so it lies below the chord $\alpha \widetilde{r}(x)$), taking the expectation over $x$ gives the result.

The above lemma, combined with Lemma 2.1, shows that the potential grows at the desired rate in each iteration. But remember that Lemma 2.1 requires that no ball have probability greater than $80\%$, so we need to check that this condition is satisfied. The following lemma shows that if we cap the set $S_i$, then the probability is not concentrated on any small ball.

Lemma A.2. In Algorithm 1, for every iteration $i$, $S_i$ is such that no radius-$(c_4\eta + c_5\varepsilon)$ ball has more than $80\%$ probability under $\overline{\lambda}_{i,S_i}$.

Proof. If $S_i = S_{i-1}$, then by the construction of $S_i$, no radius-$(c_4\eta + c_5\varepsilon)$ ball has probability greater than $80\%$ under $\overline{\lambda}_{i,S_{i-1}} = \overline{\lambda}_{i,S_i}$. Otherwise, $S_{i-1} \neq S_i$ and a ball $B(\mu, 3c_4\eta + 3c_5\varepsilon)$ is added to $S_i$ in this iteration. We first prove a useful claim.

Claim A.3. If a ball $B' = B(\mu, 3c_4\eta + 3c_5\varepsilon)$ is added to $S_i$ at some iteration $i$, then $\lambda_i(B(\mu, c_4\eta + c_5\varepsilon)) \geq 0.6$.

Proof. If $B'$ is added to $S_i$ at iteration $i$, then there exists some ball $D$ with radius $c_4\eta + c_5\varepsilon$ such that $\bar{\lambda}_{i,S_{i-1}}(D) \geq 0.8$.
If a set of hypotheses gains probability after capping, the gained probability comes from the reduced probability of the hypotheses outside this set. Therefore, the probability gained by any set is upper bounded by half of the probability of the complement of that set before capping. This means $\lambda_i(D) \geq 0.6$, because otherwise after capping $\bar{\lambda}_{i,S_{i-1}}(D) < 0.8$, which is a contradiction. As a result, $\lambda_i(B(\mu, c_4\eta + c_5\varepsilon)) \geq \lambda_i(D) \geq 0.6$.

By Claim A.3, the probability of $B(\mu, c_4\eta + c_5\varepsilon)$ is at least 0.6 under the uncapped distribution $\lambda_i$. So any ball not intersecting $B(\mu, c_4\eta + c_5\varepsilon)$ has probability at most 0.4 before capping. After capping, these balls have probability no more than 0.7. At the same time, any ball intersecting $B(\mu, c_4\eta + c_5\varepsilon)$ is completely contained in $B(\mu, 3c_4\eta + 3c_5\varepsilon)$, so its probability is at most 0.5 after capping.

Now we are ready to apply Lemma A.1 and Lemma 2.1, with one caveat. Remember that at the beginning of the algorithm, we compute a $2\eta$-packing $H' \subseteq H$ of the instance. From the well-known relationship between packing and covering (for example, see Vershynin [2018, Lemma 4.2.8]), we have $|H'| \leq N(H, \eta)$. Every hypothesis in $H$ is within $2\eta$ of some hypothesis in $H'$, so there exists a hypothesis in $H'$ with error at most $3\eta$. This means that the best hypothesis $h^* \in H'$ has error at most $3\eta$ instead of $\eta$. The following lemma serves as the cornerstone of the proof of the query complexity upper bound; it states that the potential grows at rate $\Omega\left(\frac{1}{m^*}\right)$ in each iteration.

Lemma A.4.
Given $c_4 \geq 300$ and $\mathrm{err}(h^*) \leq 3\eta$, there exists a sampling distribution $q$ such that

$$
\mathbb{E}[\Delta_i \mid \mathcal{F}_i] \geq \mathbb{E}[\Delta_i \mid \mathcal{F}_i] - 2\alpha\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \gtrsim \frac{\alpha}{m^*\left(H, \mathcal{D}_X, c_4\eta, c_5\varepsilon - 2\eta, \frac{99}{100}\right)} \quad \text{if } h^* \notin S_i,
$$

as well as $|\Delta_i| \leq \alpha$ always and $\operatorname{Var}[\Delta_i \mid \mathcal{F}_i] \leq \alpha \mathbb{E}[|\Delta_i| \mid \mathcal{F}_i] \lesssim \alpha \mathbb{E}[\Delta_i \mid \mathcal{F}_i]$.

Proof. For the sake of bookkeeping, we let $m^* = m^*\left(H, \mathcal{D}_X, c_4\eta, c_5\varepsilon - 2\eta, \frac{99}{100}\right)$ in this proof and the following text. We first bound the expectation. By Lemma A.1 applied to $S \in \{H, H\setminus S_i\}$ with $3\eta$, we have

$$
\begin{array}{l} \mathbb{E}[\Delta_i \mid \mathcal{F}_i] - 2\alpha\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \geq 0.9\alpha \left( \mathbb{E}_{x\sim q}\left[ r_{i,H,h^*}(x) + r_{i,H\setminus S_i,h^*}(x) \right] - 13.8\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \right) \\ \quad - 2\alpha\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)}, \\ \end{array}
$$

where $q$ is the query distribution of the algorithm at iteration $i$. Now, by the definition of

$$
\overline{\lambda}_{i,S} = \frac{1}{2}\lambda_i + \frac{1}{2}\lambda_{i,H\setminus S},
$$

we have for any $x$ that

$$
\bar{r}_{i,S_i,h^*}(x) = \frac{1}{2}\left( r_{i,H,h^*}(x) + r_{i,H\setminus S_i,h^*}(x) \right)
$$

and thus

$$
\begin{array}{l} \mathbb{E}[\Delta_i \mid \mathcal{F}_i] - 2\alpha\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \\ \geq 1.8\alpha \left( \underset{x\sim q}{\mathbb{E}}\left[ \bar{r}_{i,S_i,h^*}(x) \right] - 6.9\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \right) - 2\alpha\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \tag{7} \\ \geq 1.8\alpha \left( \underset{x\sim q}{\mathbb{E}}\left[ \overline{r}_{i,S_i}(x) \right] - 8.1\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \right). \\ \end{array}
$$

Algorithm 1 chooses the sampling distribution $q$ to maximize $\mathbb{E}_{x\sim q}[\overline{r}_{i,S_i}(x)] - \frac{c_4}{20}\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)}$, which is at most $\mathbb{E}_{x\sim q}[\overline{r}_{i,S_i}(x)] - 15\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)}$ because $c_4 \geq 300$. By Lemma A.2, $\overline{\lambda}_{i,S_i}$ over $H'$ has no radius-$(c_4\eta + c_5\varepsilon)$ ball with probability larger than $80\%$, so by Lemma 2.1 $q$ satisfies

$$
\underset{x\sim q}{\mathbb{E}}[\overline{r}_{i,S_i}(x)] - 15\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \geq \underset{x\sim q}{\mathbb{E}}[\overline{r}_{i,S_i}(x)] - \frac{c_4}{20}\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \gtrsim \frac{1}{m^*\left(H', \mathcal{D}_X, c_4\eta, c_5\varepsilon, \frac{99}{100}\right)}.
$$

Because $H' \subseteq H$ is a maximal $2\eta$-packing, every hypothesis in $H$ is within $2\eta$ of some hypothesis in $H'$. The problem $\left(H, \mathcal{D}_X, c_4\eta, c_5\varepsilon - 2\eta, \frac{99}{100}\right)$ is harder than the problem $\left(H', \mathcal{D}_X, c_4\eta, c_5\varepsilon, \frac{99}{100}\right)$: we can reduce the latter to the former by simply adding more hypotheses, solving, and then mapping the solution back by returning the closest hypothesis in $H'$. Hence, $m^* \geq m^*\left(H', \mathcal{D}_X, c_4\eta, c_5\varepsilon, \frac{99}{100}\right)$.
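The constant bookkeeping in the displays above is mechanical; a quick check (illustrative only) of the absorbed terms:

```python
# Check of the constants used above (illustrative): absorbing the extra
# 2*alpha*eta*max term into the bracket turns the 6.9 into at most 8.1,
# since 1.8*6.9 + 2 <= 1.8*8.1; and c4 >= 300 gives c4/20 >= 15.
assert 1.8 * 6.9 + 2.0 <= 1.8 * 8.1   # 14.42 <= 14.58
assert 300 / 20 >= 15
```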
Therefore,

$$
\mathbb{E}[\Delta_i \mid \mathcal{F}_i] - 2\alpha\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \geq 1.8\alpha \left( \mathbb{E}_{x\sim q}[\bar{r}_{i,S_i}(x)] - 8.1\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \right) \gtrsim \frac{\alpha}{m^*}.
$$

We now bound the variance. The increment $\Delta_i$ may be positive or negative, but it always satisfies $|\Delta_i| \leq \alpha$. Thus

$$
\operatorname{Var}[\Delta_i \mid \mathcal{F}_i] \leq \mathbb{E}[\Delta_i^2 \mid \mathcal{F}_i] \leq \alpha \mathbb{E}[|\Delta_i| \mid \mathcal{F}_i].
$$

By Lemma A.1 and (7) we have

$$
\begin{array}{l} \mathbb{E}[|\Delta_i| \mid \mathcal{F}_i] = \mathbb{E}[2\max\{\Delta_i, 0\} - \Delta_i \mid \mathcal{F}_i] \\ \leq 4\alpha \underset{x\sim q}{\mathbb{E}}[\bar{r}_{i,S_i}(x)] - 1.8\alpha \left( \underset{x\sim q}{\mathbb{E}}[\bar{r}_{i,S_i}(x)] - 8.1\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \right) \\ \leq 2.2\alpha \left( \underset{x\sim q}{\mathbb{E}}[\bar{r}_{i,S_i}(x)] + 6.7\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \right) \\ \leq \frac{2.2\alpha}{1.8\alpha} \mathbb{E}[\Delta_i \mid \mathcal{F}_i] + 2.2\alpha \cdot 6.9\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} + 2.2\alpha \cdot 6.7\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \\ \leq 1.3\,\mathbb{E}[\Delta_i \mid \mathcal{F}_i] + 30\alpha\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)}. \\ \end{array}
$$

Since $\mathbb{E}[\Delta_i \mid \mathcal{F}_i] - 2\alpha\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \gtrsim \frac{1}{m^*} \geq 0$, we have

$$
\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \leq \frac{1}{2\alpha} \mathbb{E}[\Delta_i \mid \mathcal{F}_i],
$$

and thus

$$
\operatorname{Var}[\Delta_i \mid \mathcal{F}_i] \leq \alpha \mathbb{E}[|\Delta_i| \mid \mathcal{F}_i] \lesssim \alpha \mathbb{E}[\Delta_i \mid \mathcal{F}_i].
$$

# A.3 Concentration of potential

We have shown that the potential grows at rate $\Omega\left(\frac{1}{m^*}\right)$ per iteration, but only in expectation, while our goal is a high-probability bound. Let $\mu_k \coloneqq \sum_{i<k} \mathbb{E}[\Delta_i \mid \mathcal{F}_{i-1}] \gtrsim k/m^*$; then

$$
\begin{array}{l} \mathbb{E}\left[ (\psi_k - \mu_k) - (\psi_{k-1} - \mu_{k-1}) \mid \mathcal{F}_{k-1} \right] = \mathbb{E}[\psi_k - \psi_{k-1} \mid \mathcal{F}_{k-1}] - (\mu_k - \mu_{k-1}) \\ = \mathbb{E}[\Delta_{k-1} \mid \mathcal{F}_{k-1}] - \mathbb{E}[\Delta_{k-1} \mid \mathcal{F}_{k-1}] \geq 0. \\ \end{array}
$$

The increments of $\psi_k - \mu_k$ are bounded and have nonnegative conditional expectation, so $\psi_k - \mu_k$ is a submartingale. To obtain a high-probability bound, we will use Freedman's inequality. A version is stated in Tropp [2011]; we slightly modify it so that it can be applied in our setting, as follows.

Theorem A.5 (Freedman's Inequality).
Consider a real-valued submartingale $\{Y_k : k = 0, 1, 2, \dots\}$ that is adapted to the filtration $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \dots \subseteq \mathcal{F}$ with difference sequence $\{X_k : k = 1, 2, 3, \dots\}$. Assume that the difference sequence is uniformly bounded:

$$
|X_k| \leq R \quad \text{almost surely for } k = 1, 2, 3, \dots
$$

Define the predictable quadratic variation process:

$$
W_k := \sum_{j=1}^{k} \mathbb{E}\left[ X_j^2 \mid \mathcal{F}_{j-1} \right] \quad \text{for } k = 1, 2, 3, \dots
$$

Then, for all $t \geq 0$ and $\sigma^2 > 0$,

$$
\Pr\left( \exists k \geq 0 : Y_k \leq -t \text{ and } W_k \leq \sigma^2 \right) \leq \exp\left( -\frac{t^2/2}{\sigma^2 + Rt/3} \right).
$$

Then we can prove a high-probability bound as follows.

Lemma A.6. With probability $1 - \delta$, $\phi_i = 0$ for some $i = O\left(m^* \log\frac{|H|}{\delta}\right)$, so $h^* \in S_i$.

Proof. Remember we have that

$$
\phi_k = \phi_0 + \psi_k + \sum_{i<k} \log \frac{\lambda_{i,H\setminus S_{i+1}}(h^*)}{\lambda_{i,H\setminus S_i}(h^*)}.
$$

Since $S_{i+1} \supseteq S_i$ for all $i$, we have $\lambda_{i,H\setminus S_{i+1}}(h^*) \geq \lambda_{i,H\setminus S_i}(h^*)$ whenever $h^* \notin S_{i+1}$, and therefore

$$
\phi_k \geq \phi_0 + \psi_k \quad \text{if } h^* \notin S_k.
$$

Let $K = O\left(m^* \log\frac{|H|}{\delta}\right)$. Assume for contradiction that $\phi_K < 0$; then $h^* \notin S_i$ for all $i \leq K$. We know by Lemma A.4 that

$$
\mu_k := \sum_{i<k} \mathbb{E}\left[ \Delta_i \mid \mathcal{F}_{i-1} \right] \gtrsim \frac{k}{m^*}
$$

and that $\sum_{i<k} \operatorname{Var}[\Delta_i] \leq \frac{1}{4}\mu_k$ by picking $\alpha$ small enough. Moreover, $|\Delta_i| \leq \alpha$ always.
To use Freedman's inequality, we require the right-hand side to satisfy

$$
\exp\left( -\frac{t^2/2}{\sigma^2 + Rt/3} \right) \leq \delta.
$$

Solving this quadratic inequality in $t$, it suffices to take $t \geq \frac{R}{3}\log\frac{1}{\delta} + \sqrt{\frac{R^2}{9}\log^2\frac{1}{\delta} + 2\sigma^2\log\frac{1}{\delta}}$. Substituting $R = \alpha$ and $\sigma^2 = \sum_{i<k} \operatorname{Var}_{i-1}(\Delta_i)$, with probability $1 - \delta$ we have for any $k \geq \Omega\left(m^* \log\frac{1}{\delta}\right)$ that

$$
\begin{array}{l} \psi_k \geq \mu_k - \sqrt{\frac{\alpha^2}{9}\log^2\frac{1}{\delta} + 2\sum_{i<k}\operatorname{Var}(\Delta_i)\log\frac{1}{\delta}} - \frac{\alpha}{3}\log\frac{1}{\delta} \\ \geq \mu_k - \sqrt{\frac{\alpha^2}{9}\log^2\frac{1}{\delta} + \frac{1}{2}\mu_k\log\frac{1}{\delta}} - \frac{\alpha}{3}\log\frac{1}{\delta} \\ \geq \mu_k - \max\left\{ \frac{\sqrt{2}\alpha}{3}\log\frac{1}{\delta}, \sqrt{\mu_k\log\frac{1}{\delta}} \right\} - \frac{\alpha}{3}\log\frac{1}{\delta} \\ \geq \frac{1}{2}\mu_k \\ \gtrsim \frac{k}{m^*}. \\ \end{array}
$$

The second-to-last inequality holds because, for $k$ in this range, the term $\mu_k$ dominates all of the subtracted terms. Since $K = O\left(m^* \log\frac{|H|}{\delta}\right)$, we have

$$
\psi_K \geq 2\log|H|
$$

with probability $1 - \delta$. Then $\phi_K \geq \phi_0 + \psi_K \geq 0$ because $\phi_0 \geq \log\frac{1}{2|H|} \geq -2\log|H|$, which contradicts $\phi_K < 0$. Therefore, with probability at least $1 - \delta$, $h^* \in S_K$ and, by definition, $\phi_i = 0$ for some $i \leq K$ as desired.

# A.4 Bounding the Size of $|C|$

So far we've shown that after $O\left(m^* \log\frac{|H|}{\delta}\right)$ iterations, $h^*$ will be included in the set $S_i$.
The last thing we need to prove Theorem 1.1 is that, with high probability, $C$ is small, which is equivalent to showing that not many balls are added to $S_i$ over the $O\left(m^* \log\frac{|H|}{\delta}\right)$ iterations. To show this, we first need to relate the number of balls added to $S_i$ to $\psi_i$. Let $\mathcal{E}_i$ denote the number of errors $h^*$ made up to iteration $i$ (and set $\mathcal{E}_i = \mathcal{E}_{i-1}$ if $h^* \in S_i$) and $\mathcal{N}_i$ denote the number of balls added to $S_i$ up to iteration $i$ (again set $\mathcal{N}_i = \mathcal{N}_{i-1}$ if $h^* \in S_i$).

Lemma A.7. The following inequality holds for every $i$:

$$
\mathcal{N}_i \leq 5(\psi_i + 2\alpha\mathcal{E}_i) + 1.
$$

Proof. We divide the $i$ iterations into phases. A new phase begins (and the old phase ends) at every iteration in which a new ball is added to the set $S_i$. We use $p_1, \dots, p_k$ for $k \leq i$ to denote the phases and $i_1, \dots, i_k$ to denote their starting iterations. We analyse how the potential changes from phase $p_j$ to phase $p_{j+1}$. Say the ball $B_2 = B(\mu_2, 3c_4\eta + 3c_5\varepsilon)$ is added at the beginning of $p_{j+1}$ and $B_1 = B(\mu_1, 3c_4\eta + 3c_5\varepsilon)$ is the ball added at the beginning of $p_j$. Then the balls $B_2' = B(\mu_2, c_4\eta + c_5\varepsilon)$ and $B_1' = B(\mu_1, c_4\eta + c_5\varepsilon)$ are disjoint: otherwise, $B_2' \subseteq B_1$, so $B_2$ would not have been added by the algorithm. At the beginning of $p_j$, $B_1'$ has probability no less than 0.6 by Claim A.3. Therefore, $B_2'$ has probability no more than 0.4. Similarly, at the beginning of $p_{j+1}$, $B_2'$ has probability at least 0.6 by Claim A.3. Since during one iteration the probability of a hypothesis cannot change by much, at iteration $i_{j+1} - 1$, $B_2'$ has probability at least 0.5 if $\alpha$ is picked small enough.
Therefore, we have $\log \lambda_{i_{j+1}-1}(B_2') - \log \lambda_{i_j}(B_2') \geq \log\frac{0.5}{0.4} \geq \frac{1}{5}$. Moreover, note that $S_i$ does not change from iteration $i_j$ to iteration $i_{j+1} - 1$ by the definition of phases. Now we compute

$$
\begin{array}{l} \sum_{l=i_j}^{i_{j+1}-1} \Delta_l = \log \frac{\lambda_{i_{j+1}-1}(h^*)}{\lambda_{i_j}(h^*)} + \log \frac{\lambda_{i_{j+1}-1, H\setminus S_{i_j}}(h^*)}{\lambda_{i_j, H\setminus S_{i_j}}(h^*)}, \\ = \log \frac{w_{i_{j+1}-1}(h^*)}{w_{i_j}(h^*)} \frac{\sum_{h\in H} w_{i_j}(h)}{\sum_{h\in H} w_{i_{j+1}-1}(h)} + \log \frac{w_{i_{j+1}-1}(h^*)}{w_{i_j}(h^*)} \frac{w_{i_j}(H\setminus S_{i_j})}{w_{i_{j+1}-1}(H\setminus S_{i_j})}. \\ \end{array}
$$

The change of the weight of $h^*$ is

$$
\frac{w_{i_{j+1}}(h^*)}{w_{i_j}(h^*)} = e^{-\alpha\mathcal{E}_{p_j}},
$$

where $\mathcal{E}_{p_j}$ is the number of errors $h^*$ made in $p_j$. Consequently,

$$
\begin{array}{l} \sum_{l=i_j}^{i_{j+1}-1} \Delta_l = -2\alpha\mathcal{E}_{p_j} + \log \frac{\sum_{h\in H} w_{i_j}(h)}{\sum_{h\in H} w_{i_{j+1}-1}(h)} + \log \frac{w_{i_j}(H\setminus S_{i_j})}{w_{i_{j+1}-1}(H\setminus S_{i_j})} \\ \geq -2\alpha\mathcal{E}_{p_j} + \frac{1}{5}.
\\ \end{array}
$$

The last step above comes from

$$
\log \frac{\sum_{h\in H} w_{i_j}(h)}{\sum_{h\in H} w_{i_{j+1}-1}(h)} \geq \log \frac{\sum_{h\in B_2'} w_{i_{j+1}-1}(h)}{\sum_{h\in B_2'} w_{i_j}(h)} \frac{\sum_{h\in H} w_{i_j}(h)}{\sum_{h\in H} w_{i_{j+1}-1}(h)} = \log \frac{\lambda_{i_{j+1}-1}(B_2')}{\lambda_{i_j}(B_2')} \geq \frac{1}{5},
$$

and

$$
\log \frac{w_{i_j}(H\setminus S_{i_j})}{w_{i_{j+1}-1}(H\setminus S_{i_j})} \geq 0
$$

because the weight $w(h)$ only decreases. Summing over all phases $j$, we get

$$
\psi_i \geq -2\alpha\mathcal{E}_i + \frac{1}{5}\left(\mathcal{N}_i - 1\right).
$$

Since $i$ may not be exactly the end of a phase, the last phase may end early, so we have $\mathcal{N}_i - 1$ instead of $\mathcal{N}_i$. Rearranging finishes the proof.

We have already bounded $\psi_i$, so to bound $\mathcal{N}_i$ it remains to bound $\mathcal{E}_i$, which we do in the following lemma.

Lemma A.8. For every $k$, with probability at least $1 - \delta$,

$$
\mathcal{E}_k \leq \frac{1}{\alpha}\left( \psi_k + \sqrt{2}\log\frac{1}{\delta} \right).
$$

Proof. Let $q$ be the query distribution at iteration $i - 1$ and $p(x)$ be the probability that $x$ is corrupted by the adversary. Then the conditional expectation of $\mathcal{E}_i - \mathcal{E}_{i-1}$ is

$$
\mathbb{E}\left[ \mathcal{E}_i - \mathcal{E}_{i-1} \mid \mathcal{F}_i \right] = \Pr_{x\sim q}\left[ h^*(x) \text{ is wrong} \right] = \mathbb{E}_{x\sim q}[p(x)] = \mathbb{E}_{x\sim\mathcal{D}_X}\left[ p(x)\frac{q(x)}{\mathcal{D}_X(x)} \right] \leq \eta \max_x \frac{q(x)}{\mathcal{D}_X(x)}.
$$

Then if $h^* \notin S$, from Lemma A.4

$$
\mathbb{E}\left[ \Delta_i - 2\alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \mid \mathcal{F}_i \right] \geq \mathbb{E}[\Delta_i \mid \mathcal{F}_i] - 2\alpha\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \gtrsim \frac{1}{m^*}.
$$

Therefore, $\mathbb{E}[\alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \mid \mathcal{F}_i] \leq \frac{1}{2}\mathbb{E}[\Delta_i \mid \mathcal{F}_i]$ and $\mathbb{E}[\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \mid \mathcal{F}_i] \geq \frac{1}{2}\mathbb{E}[\Delta_i \mid \mathcal{F}_i]$. This means that $\psi_k - \alpha\mathcal{E}_k - \frac{1}{2}\mu_k$ is a submartingale. We then bound $\operatorname{Var}[\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \mid \mathcal{F}_i]$. Note that $|\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1})| \leq 2\alpha$, so

$$
\operatorname{Var}\left[ \Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \mid \mathcal{F}_i \right] \leq \mathbb{E}\left[ \left( \Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \right)^2 \mid \mathcal{F}_i \right] \leq 2\alpha \mathbb{E}\left[ \left| \Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \right| \mid \mathcal{F}_i \right].
$$

Furthermore,

$$
\mathbb{E}\left[ |\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1})| \mid \mathcal{F}_i \right] \leq \mathbb{E}\left[ |\Delta_i| \mid \mathcal{F}_i \right] + \alpha\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)}.
$$

As a result,

$$
\begin{array}{l} \operatorname{Var}\left[ \Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \mid \mathcal{F}_i \right] \leq 2\alpha\left( \mathbb{E}[|\Delta_i| \mid \mathcal{F}_i] + \alpha\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \right) \\ \leq 2\alpha\left( \mathbb{E}[|\Delta_i| \mid \mathcal{F}_i] + \frac{1}{2}\mathbb{E}[\Delta_i \mid \mathcal{F}_i] \right) \\ \leq 3\alpha \mathbb{E}[|\Delta_i| \mid \mathcal{F}_i] \\ \lesssim \alpha \mathbb{E}[\Delta_i \mid \mathcal{F}_i]. \\ \end{array}
$$

By picking $\alpha$ small enough, $\sum_{i<k} \operatorname{Var}\left[ \Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1}) \mid \mathcal{F}_i \right] \leq \frac{1}{8}\mu_k$. Moreover, $|\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1})| \leq 2\alpha$ always. Therefore, by Freedman's inequality, with probability $1 - \delta$ we have for any $k$ that

$$
\begin{array}{l} \psi_k - \alpha\mathcal{E}_k \geq \mu_k - \sqrt{\frac{4\alpha^2}{9}\log^2\frac{1}{\delta} + 2\sum_{i<k}\operatorname{Var}_{i-1}[\Delta_i - \alpha(\mathcal{E}_i - \mathcal{E}_{i-1})]\log\frac{1}{\delta}} - \frac{2\alpha}{3}\log\frac{1}{\delta} \\ \geq \mu_k - \sqrt{\frac{4\alpha^2}{9}\log^2\frac{1}{\delta} + \frac{1}{4}\mu_k\log\frac{1}{\delta}} - \frac{2\alpha}{3}\log\frac{1}{\delta} \\ \geq \mu_k - \max\left\{ \frac{2\sqrt{2}\alpha}{3}\log\frac{1}{\delta}, \frac{\sqrt{2}}{2}\sqrt{\mu_k\log\frac{1}{\delta}} \right\} - \frac{2\alpha}{3}\log\frac{1}{\delta} \\ \geq \mu_k - \max\left\{ \frac{\sqrt{2}}{2}\log\frac{1}{\delta}, \frac{\sqrt{2}}{2}\mu_k \right\} - \frac{2\alpha}{3}\log\frac{1}{\delta} \\ \geq \left( 1 - \frac{\sqrt{2}}{2} \right)\mu_k - \sqrt{2}\log\frac{1}{\delta} \\ \geq
- \sqrt{2}\log\frac{1}{\delta}. \\ \end{array}
$$

Rearranging proves the lemma.

Combining Lemma A.7 and Lemma A.8, we can show that $C$ is small with high probability, as the following lemma states.

Lemma A.9. For $k = O\left(m^* \log\frac{|H|}{\delta}\right)$, with probability at least $1 - 2\delta$, $h^* \in S_k$ and $|C| \leq O\left(\log\frac{|H|}{\delta}\right)$ at iteration $k$.

Proof. By a union bound, with probability at least $1 - 2\delta$, Lemmas A.6 and A.8 hold simultaneously. This means $h^*$ is added to $S_k$. By definition, $0 \geq \phi_k \geq \phi_0 + \psi_k$, so $\psi_k \leq 2\log|H|$. Therefore, by Lemmas A.7 and A.8, the number of balls added $|C|$ is $O\left(\log|H| + \log\frac{1}{\delta}\right) = O\left(\log\frac{|H|}{\delta}\right)$.

# A.5 Putting Everything Together

We proved that after $O\left(m^* \log\frac{|H|}{\delta}\right)$ iterations, $h^* \in S_i$ and $C$ is small with high probability. Hence, running the stage two algorithm to return a desired hypothesis will not take many more queries. We are ready to put everything together and finally prove Theorem 1.1.

Theorem 1.1 (Competitive Bound). There exist constants $c_1, c_2$ and $c_3$ such that for any instance $(H, \mathcal{D}_X, \eta, \varepsilon, \delta)$ with $\varepsilon \geq c_1\eta$, Algorithm 1 solves the instance with sample complexity

$$
m(H, \mathcal{D}_X, \eta, \varepsilon, \delta) \lesssim \left( m^*\left(H, \mathcal{D}_X, c_2\eta, c_3\varepsilon, \frac{99}{100}\right) + \log\frac{1}{\delta} \right) \cdot \log\frac{N(H, \mathcal{D}_X, \eta)}{\delta}
$$

and in polynomial time.

Proof. Let's pick $c_1, c_4, c_5$ as in Theorem 2.3 and pick the confidence parameter to be $\frac{\delta}{3}$.
Then by Lemma A.9, with probability $1 - \frac{2\delta}{3}$, one of the first $O\left(\log\frac{|H|}{\delta}\right)$ balls added to $S_i$ will contain $h^*$. Since each ball added to $C$ has radius $3c_4\eta + 3c_5\varepsilon$, the best hypothesis in $C$ has error at most $(3 + 3c_4)\eta + 3c_5\varepsilon$. By Theorem 2.2, with probability $1 - \frac{\delta}{3}$, the algorithm will return a hypothesis with error at most $(9 + 9c_4)\eta + 9c_5\varepsilon \leq \eta + \varepsilon$. Therefore, by a union bound, the algorithm returns a desired hypothesis with probability $1 - \delta$. This proves the correctness of the algorithm.

The stage one algorithm makes

$$
O\left( m^*\left(H, \mathcal{D}_X, c_4\eta, c_5\varepsilon - 2\eta, \frac{99}{100}\right) \log\frac{|H|}{\delta} \right) \leq O\left( m^*\left(H, \mathcal{D}_X, c_4\eta, \frac{c_5}{2}\varepsilon, \frac{99}{100}\right) \log\frac{|H|}{\delta} \right)
$$

queries. The stage two algorithm makes $O\left(|C|\log\frac{|C|}{\delta}\right)$ queries by Theorem 2.2. Note that $C$ is a $(c_4\eta + c_5\varepsilon)$-packing because the centers of the added balls are at least $c_4\eta + c_5\varepsilon$ apart, so $m^*\left(H, \mathcal{D}_X, \frac{c_4}{2}\eta, \frac{c_5}{2}\varepsilon, \frac{99}{100}\right) \geq \log|C|$. Since $|C| \leq O\left(\log\frac{|H|}{\delta}\right)$ by Lemma A.9, the stage two algorithm takes $O\left(\left( m^*\left(H, \mathcal{D}_X, \frac{c_4}{2}\eta, \frac{c_5}{2}\varepsilon, \frac{99}{100}\right) + \log\frac{1}{\delta} \right)\log\frac{|H|}{\delta}\right)$ queries. Picking $c_2 = c_4$, $c_3 = \frac{c_5}{2}$, we get the desired sample complexity bound.

To compute the packing at the beginning of the algorithm, we need to compute the distance between every pair of hypotheses, which takes $O(|H|^2|\mathcal{X}|)$ time.
Computing $r$ in each round takes $O(|H||\mathcal{X}|)$ time and solving the optimization problem takes $O(|\mathcal{X}|)$ time. Therefore, the remaining steps in stage one take $O\left(m^*|H||\mathcal{X}|\log\frac{|H|}{\delta}\right)$ time. Stage two takes $O\left(\log\frac{|H|}{\delta}\log\frac{\log\frac{|H|}{\delta}}{\delta}\right)$ time. Therefore, the overall running time is polynomial in the size of the problem.

Similarly, we can prove Theorem 2.3, which is a stronger and more specific version of Theorem 1.1.

Theorem 2.3. Suppose that $\mathcal{D}_X$ and $H$ are such that, for any distribution $\lambda$ over $H$ such that no radius-$(c_4\eta + c_5\varepsilon)$ ball has probability more than $80\%$, there exists a distribution $q$ over $X$ such that

$$
\underset{x\sim q}{\mathbb{E}}[r(x)] - \frac{c_4}{20}\eta \max_x \frac{q(x)}{\mathcal{D}_X(x)} \geq \beta
$$

for some $\beta > 0$. Then for $\varepsilon \geq c_1\eta$, $c_4 \geq 300$, $c_5 = \frac{1}{10}$ and $c_1 \geq 90c_4$, let $N = N(H, \mathcal{D}_X, \eta)$ be the size of an $\eta$-cover of $H$. Algorithm 1 solves $(\eta, \varepsilon, \delta)$ active agnostic learning with $O\left(\frac{1}{\beta}\log\frac{N}{\delta} + \log\frac{N}{\delta}\log\frac{\log N}{\delta}\right)$ samples.

Proof. By Lemma A.9 (with $m^*$ replaced by $\frac{1}{\beta}$ and the confidence parameter set to $\frac{\delta}{3}$), after $O\left(\frac{1}{\beta}\log\frac{N}{\delta}\right)$ queries, with probability at least $1 - \frac{2\delta}{3}$, a hypothesis in $C$ will be within $c_4\eta + c_5\varepsilon$ of $h^*$ and $|C| = O\left(\log\frac{N}{\delta}\right)$. From Theorem 2.2, with probability at least $1 - \frac{\delta}{3}$, the stage two algorithm then outputs a hypothesis $\hat{h}$ that is within $9c_4\eta + 9c_5\varepsilon$ of $h^*$, so $\mathrm{err}(\hat{h}) \leq 9c_4\eta + 9c_5\varepsilon \leq \eta + \varepsilon$ by the choice of the constants.
The stage two algorithm makes $O\left(\log\frac{N}{\delta}\log\frac{\log\frac{N}{\delta}}{\delta}\right)$ queries. Overall, the algorithm makes $O\left(\frac{1}{\beta}\log\frac{N}{\delta} + \log\frac{N}{\delta}\log\frac{\log\frac{N}{\delta}}{\delta}\right)$ queries and succeeds with probability at least $1 - \delta$.

# B Query Complexity Lower Bound

In this section we derive a lower bound for the agnostic binary classification problem, which we denote by AGNOSTICLEARNING. The lower bound is obtained from a reduction from minimum set cover, which we denote by SETCOVER. The problem SETCOVER consists of a pair $(U, S)$, where $U$ is a ground set and $S$ is a collection of subsets of $U$. The goal is to find a set cover $C \subseteq S$, i.e., a subcollection such that $\bigcup_{s\in C} s = U$, of minimum size $|C|$. We use $K$ to denote the cardinality of the minimum set cover.

Lemma B.1 (Dinur and Steurer [2014], Corollary 1.5). There exist hard instances SETCOVERHARD with the property $K \geq \log|U|$ such that for every $\gamma > 0$, it is NP-hard to approximate SETCOVERHARD to within $(1 - \gamma)\ln|U|$.

Proof. This lemma directly follows from Dinur and Steurer [2014, Corollary 1.5]. In their proof, they constructed a hard instance of SETCOVER from LABELCOVER. The minimum cover has size $K \geq |V_1| = Dn_1$, and $\log|U| = (D + 1)\ln n_1 \leq K$. So the instance in their proof satisfies the desired property.

Then we prove the following lemma by giving a ratio-preserving reduction from SETCOVER to AGNOSTICLEARNING.

Lemma B.2. If there exists a deterministic $\alpha$-approximation algorithm for AGNOSTICLEARNING $\left(H, \mathcal{D}_X, \frac{1}{3|\mathcal{X}|}, \frac{1}{3|\mathcal{X}|}, \frac{1}{4|H|}\right)$, then there exists a deterministic $2\alpha$-approximation algorithm for SETCOVERHARD.

Proof.
Given an instance of SETCOVERHARD, for each $s \in S$, number the elements $u \in s$ in an arbitrary order; let $f(s, u)$ denote the index of $u$ in $s$'s list (padding its binary representation with zeros on the left as needed). We construct an instance of AGNOSTICLEARNING as follows:

1. Let the domain $\mathcal{X}$ have three pieces: $U$, $V \coloneqq \{(s,j) \mid s \in S, j \in [1 + \log|s|]\}$, and $D = \{1, \dots, \log|U|\}$, an extra set of $\log|U|$ more coordinates.
2. On this domain, we define the following set of hypotheses:

(a) For $u \in U$, define $h_u$, which evaluates to 1 only on $u \in U$ and on those $(s, j) \in V$ with $u \in s$ and the $j$'th bit of $2f(s, u) + 1$ equal to 1.
(b) For $d \in D$, define $h_d$, which evaluates to 1 only on $d$.
(c) Define $h_0$, which evaluates to 0 everywhere.

3. Let $\mathcal{D}_X$ be the uniform distribution over $\mathcal{X}$ and set $\eta = \frac{1}{3|\mathcal{X}|}$ and $\varepsilon = \frac{1}{3|\mathcal{X}|}$. Set $\delta = \frac{1}{4|H|}$.

Any two hypotheses satisfy $\|h_1 - h_2\| \geq \frac{1}{|\mathcal{X}|} > \varepsilon = \eta$, so $\mathrm{err}(h^*) = 0$. First we show that $m^*\left(H, \mathcal{D}_X, \frac{1}{3|\mathcal{X}|}, \frac{1}{3|\mathcal{X}|}, \frac{1}{4|H|}\right) \leq K + \log|U|$. Indeed, there exists a deterministic algorithm using $K + \log|U|$ queries to identify any hypothesis with probability 1. Given a smallest set cover $C$, the algorithm first queries $(s, 0) \in V$ for every $s \in C$. If $h^* = h_u$ for some $u$, then for an $s \in C$ that covers $u$, $(s, 0)$ will evaluate to true. The identity of $u$ can then be read out by querying $(s, j)$ for all $j$. The other possibilities ($h_d$ for some $d$, or $h_0$) can be identified by evaluating on all of $D$ with $\log|U|$ queries. The total number of queries is then at most $K + \log|U|$ in all cases, so $m^* \leq K + \log|U| \leq 2K$.
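The deterministic $K + \log|U|$-query strategy described above can be sketched on a toy instance. All names below, and the zero-indexed bit convention, are illustrative assumptions rather than details fixed by the text:

```python
import math

# Toy SETCOVERHARD-style instance (hypothetical): ground set U, collection S.
U = {0, 1, 2, 3}
S = {"s0": {0, 1}, "s1": {2, 3}, "s2": {1, 2}}
cover = ["s0", "s1"]                      # a minimum set cover, K = 2

def nbits(s):
    # number of bit positions per set: 1 + log2|s|, indexed j = 0, 1, ...
    return 1 + math.ceil(math.log2(len(S[s])))

def bit(s, u, j):
    # j'th bit of 2*f(s, u) + 1, where f(s, u) is u's index in s's list;
    # bit 0 is always 1, so (s, 0) acts as a membership indicator for s.
    f = sorted(S[s]).index(u)
    return (2 * f + 1) >> j & 1

D = ["d%d" % i for i in range(math.ceil(math.log2(len(U))))]   # log|U| extras

def h_u(u):   # 1 on u, and on (s, j) when u is in s and the j'th bit is set
    return lambda x: x == u or (isinstance(x, tuple)
                                and u in S[x[0]] and bit(x[0], u, x[1]) == 1)

def h_d(d):   # 1 only on the extra coordinate d
    return lambda x: x == d

h0 = lambda x: False

def identify(h):
    # The strategy from the proof: query (s, 0) for s in the cover, then read
    # off u's index bit by bit; otherwise scan D to distinguish h_d from h_0.
    queries = 0
    for s in cover:
        queries += 1
        if h((s, 0)):
            f = 0
            for j in range(1, nbits(s)):
                queries += 1
                f |= h((s, j)) << (j - 1)
            return sorted(S[s])[f], queries
    for d in D:
        queries += 1
        if h(d):
            return d, queries
    return "h0", queries
```

On this toy instance, `identify` recovers every hypothesis with at most $K + \log|U| = 4$ queries, matching the bound $m^* \leq K + \log|U|$.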
We now show how to reconstruct a good approximation to set cover from a good approximate query algorithm. We feed the query algorithm $y = 0$ on every query it makes, and let $C$ be the set of all $s$ for which it queries $(s, j)$ for some $j$. Also, every time the algorithm queries some $u \in U$, we add an arbitrary set containing $u$ to $C$. Then the size of $C$ is at most the number of queries. We claim that $C$ is a set cover: if $C$ does not cover some element $u$, then $h_u$ is zero on all queries made by the algorithm, so $h_u$ is indistinguishable from $h_0$ and the algorithm would fail on either input $h_0$ or $h_u$. Thus if $A$ is a deterministic $\alpha$-approximation algorithm for AGNOSTICLEARNING, we will recover a set cover of size at most $\alpha m^* \leq \alpha(K + \log|U|) \leq 2\alpha K$, so this gives a deterministic $2\alpha$-approximation algorithm for SETCOVERHARD.

Similar results also hold for randomized algorithms; we just need to be slightly careful about probabilities.

Lemma B.3. If there exists a randomized $\alpha$-approximation algorithm for AGNOSTICLEARNING $\left(H, \mathcal{D}_X, \frac{1}{3|\mathcal{X}|}, \frac{1}{3|\mathcal{X}|}, \frac{1}{4|H|}\right)$, then there exists a randomized $2\alpha$-approximation algorithm for SETCOVERHARD with success probability at least $\frac{2}{3}$.

Proof. We use the same reduction as in Lemma B.2. Let $A$ be an algorithm that solves AGNOSTICLEARNING $\left(H, \mathcal{D}_X, \frac{1}{3|\mathcal{X}|}, \frac{1}{3|\mathcal{X}|}, \frac{1}{4|H|}\right)$. To obtain a set cover using $A$, we keep giving $A$ the label 0 and construct the set $C$ as before. Let $q_C$ be the distribution of the reconstructed set $C$. Assume for contradiction that, with probability at least $\frac{1}{3}$, $C$ is not a set cover. Then, with probability at least $\frac{1}{3}$, there is some element $v$ such that both $h_v$ and $h_0$ are consistent with all queries the algorithm made; call such a query set "ambiguous".
Then what is the probability that the agnostic learning algorithm fails on the input distribution that chooses $h^*$ uniformly from $H$? Any given ambiguous query set is equally likely to come from any of the consistent hypotheses, so the algorithm's success probability on ambiguous query sets is at most $1/2$. The chance that the query set is ambiguous is at least $\frac{2}{3|H|}$: there is a $\frac{1}{3|H|}$ chance that the true $h^*$ is $h_0$ and the query set is ambiguous, and at least as much again from the other hypotheses making it ambiguous. Thus the algorithm fails to learn the true hypothesis with probability at least $\frac{1}{3|H|}$, contradicting the assumed $\frac{1}{4|H|}$ failure probability.

Therefore, a set cover of size at most $2\alpha K$ can be recovered with probability at least $\frac{2}{3}$ using the agnostic learning approximation algorithm.

The following theorem then follows.

Theorem 1.2 (Lower Bound). It is NP-hard to find, for every agnostic active learning instance, a query strategy within a $c \log |H|$ factor of the optimal sample complexity, for some constant $c > 0$.

Proof. Consider the instance of set cover constructed in Lemma B.2. Let $c = 0.1$ and note that $0.1\log |H|\leq 0.49\log \frac{|H|}{2}$. If there exists a polynomial-time $0.49\log \frac{|H|}{2}$-approximation algorithm for the instance, then there exists a polynomial-time $0.98\log \frac{|H|}{2}\leq 0.98\log |U|$ approximation algorithm for SETCOVERHARD, which contradicts Lemma B.1.
# A Comprehensive Benchmark for Neural Human Radiance Fields

Kenkun Liu $^{1,2*}$ , Derong Jin $^{2}$ , Ailing Zeng $^{1\dagger}$ , Xiaoguang Han $^{2}$ , Lei Zhang $^{1}$

$^{1}$ International Digital Economy Academy (IDEA)

$^{2}$ The Chinese University of Hong Kong, Shenzhen

https://kenkunliu.github.io/HMNeRFBench/

# Abstract

The past two years have witnessed a significant increase in interest concerning NeRF-based human body rendering. While this surge has propelled considerable advancements, it has also led to an influx of methods and datasets. This explosion complicates experimental settings and makes fair comparisons challenging.
In this work, we design and execute thorough studies into unified evaluation settings and metrics to establish a fair and reasonable benchmark for human NeRF models. To reveal the capabilities of existing models, we benchmark them on diverse and hard scenes. Additionally, we construct a cross-subject benchmark pre-trained on large-scale datasets to assess generalizable methods. Finally, we analyze the essential components for animatability and generalizability, and make HumanNeRF, trained on monocular videos, generalizable, as the first such baseline. We hope these benchmarks and analyses could serve the community.

# 1 Introduction

Free-view human body rendering and animation have gained significant attention due to their broad applications in industries such as film-making, video games, the metaverse, and AR/VR. Recently, the neural radiance field (NeRF) [33] provided a new neural implicit representation that employs a multi-layer perceptron (MLP) to encode object density and view-dependent color for each point in a 3D scene, which has been widely adopted for human body rendering. The vanilla NeRF requires a batch of multi-view images for training to represent a single static scene, which is far from real applications. Thus, some works [35, 23, 42, 58, 53] attempt to reduce the number of views, while other works [46, 7, 30, 22, 9] try to adapt the per-scene training setting to one-shot training. Furthermore, there are also some works [39, 15, 50, 5, 44, 27] trying to model dynamic scenes.
One of the representative works is NeuralBody [38], which requires four views and 100-300 frames for training. Another work, HumanNeRF [49], takes a step further with only monocular video frames for training. It exploits the prior of human body shape and deformation, making the single-view frames competitive with multi-view images. Recently, generalizable methods have tried to train a universal model that can be directly used to synthesize a novel-view human image. For example, NHP [24] needs only three views as input to render the human's image from any given view. Additionally, to achieve novel pose rendering, MPSNeRF [16] first warps observed appearance information to canonical space and then to the target pose space.

Table 1: Comparisons of recent NeRF-based human rendering methods on different aspects. In the column Dataset, ZM, PS, GB, HM, H36M, RP are the ZJU-MoCap [38], People-Snapshot [1], GeneBody [11], HuMMan [4], Human3.6M [19], and RenderPeople datasets, respectively. Estimated means using estimated masks and SMPL parameters instead of the ground truth. Views: train views for scene-specific methods and input views (*) for generalizable methods. Frames: train frames for scene-specific methods and input frames (*) for generalizable methods.
| Method | Dataset | Views | Frames | Estimated | Generalizable | Animatable | Unified Evaluation |
| --- | --- | --- | --- | --- | --- | --- | --- |
| NeuralBody [38] | ZM, PS | 4 | 100-300 |  |  |  |  |
| AniNeRF [37] | ZM, H36M | 4 | 100-300 |  |  |  |  |
| Arah [47] | ZM, H36M | 4 | 300-400 |  |  |  |  |
| HumanNeRF [49] | ZM | 1 | 500-600 |  |  |  |  |
| UV-Volume [10] | ZM, H36M | 18 | 100 |  |  |  |  |
| MonoHuman [54] | ZM | 1 | 500-600 |  |  |  |  |
| NHP [24] | ZM, AIST++ | 3* | 1* or 3* |  |  |  |  |
| MPSNerf [16] | ZM, H36M, THuman | 3* | 1* |  |  |  |  |
| GP-NeRF [8] | ZM | 3* | 1* |  |  |  |  |
| KeypointNeRF [32] | ZM | 3* | 1* |  |  |  |  |
| GNR [11] | ZM, GB, RP | 4* | 1* |  |  |  |  |
| Ours | ZM, GB, HM | 1 or 4, 1* | 1, 60, 100, 300, 500, 10* |  |  |  |  |
While these methods have progressed greatly, some emerging problems should be taken seriously. First, existing methods are evaluated on different datasets, metrics, and settings (e.g., used views, frames, ground-truth masks, and SMPLs [31]), making systematic comparison hard. For instance, the generalizable methods NHP and GP-NeRF have different train and test splits. Detailed comparisons are listed in Tab. 1. There is also a lack of comprehensive ablation studies on how the number of train views and frames influences the results. Second, the commonly used dataset (e.g., ZJU-MoCap [38]), which contains a small number of actors performing easy actions in simple close-fitting clothes, can hardly reflect the effectiveness of these methods in real scenarios. Although other datasets (including Human3.6M [19], People-Snapshot [1], AIST++ [26], THuman [57] and RenderPeople) are also used in some works, they are still not complex, diverse, and large-scale enough. Third, for generalizable methods, the train data scale is too small (e.g., on ZJU-MoCap). Fourth, given monocular videos, there is currently no exploration of achieving both generalizability and animatability simultaneously.
Instead of averaging the quantitative results on all subjects, we classify the selected dataset into several categories to separately evaluate the existing methods' performance on different typical cases; 3) we build a benchmark trained on large-scale datasets for generalizable models to boost their capabilities and conduct cross-subject validation; 4) we analyze the key elements of either animatability or generalizability and propose the first baseline for animatable and generalizable human body rendering from monocular videos, i.e., a generalizable HumanNeRF. We hope our studies and benchmarks could benefit future work.

# 2 Related Work

The neural radiance field, i.e., NeRF [33], is a powerful implicit representation of 3D scenes. There is a series of variants that improve the vanilla NeRF in several aspects, including improving rendering quality [2, 3, 43], reducing train views [35, 23, 42, 58, 53], acceleration [52, 34, 40, 41, 14, 6], one-shot training [46, 7, 30, 22, 9], mesh reconstruction [51, 17, 13, 48, 56, 45, 36] and so on. Among these variants, NeRFs for human body rendering [38, 49, 12, 54, 10, 37, 32, 8, 24] have attracted a lot of attention due to their broad applications. By exploiting the human body prior, they achieve impressive quality for synthesizing high-fidelity human images given sparse view video sequences.

# 2.1 Scene-specific NeRFs for Human

Scene-specific methods [38, 37, 47, 49, 54, 10, 20] for human body rendering require only sparse view videos for training, as different video frames can be treated as equivalent to dense-view images by exploiting the human body prior. The number of train views has been reduced from 100 for

![](images/c4a306f72ef178cfea815646badcf04a517ebfafc8d137a76bceea6aa5094226.jpg)
Figure 1: Example Images of the used datasets ZJU-MoCap, GeneBody, and HuMMan.

vanilla NeRF to 4 for NeuralBody [38] and even 1 for HumanNeRF [49].
The most commonly used human body prior is the SMPL model [31], which parameterizes the human body shape and pose and provides optimized skinning weights to describe the deformation of the human body. To strengthen the performance of novel pose rendering, some works [37, 54] introduce additional constraints to learn more reasonable skinning weights. UV-Volume [10] proposes a new representation of UV volume, which is used to regress uv coordinates to retrieve color from texture stacks, reduce training time, and support texture editing.

# 2.2 Generalizable NeRFs for Human

Unlike scene-specific methods that take a long time for training until convergence, generalizable NeRFs [24, 32, 8, 11, 12, 18, 21] for human body rendering train a model in a one-shot manner. Once a generalizable model completes the training process, it can be directly used to render an unseen human given input images of the human in a feed-forward way. Thanks to SMPL vertices that approximate the human body surface, these generalizable methods can project SMPL vertices to image planes to retrieve image features for each vertex. The positions of these vertices provide a strong prior for geometry inference, making generalization possible. NHP [24] projects vertices not only to other image views but also to other nearby frames to get spatial-temporal features. GP-NeRF [8] learns embeddings anchored on vertices to guide the feature fusion from different views. MPSNeRF [16] additionally defines a canonical space and first warps retrieved image features to that space and then to the target pose space so as to achieve animatability. These methods require multi-view images as input, while some works [12, 18] have also explored building a generalizable NeRF model that takes a single image as input. However, there is still no generalizable and animatable method that takes monocular video frames as input. More importantly, existing methods have various settings, making fair comparisons hard.
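Since the SMPL body prior recurs throughout the methods above, it may help to recall that the rigid part of the deformation they rely on is linear blend skinning: each posed vertex is a weighted blend of per-joint rigid transforms applied to the rest-pose vertex. A minimal NumPy sketch of this (our own illustration, not any cited method's code):

```python
import numpy as np

def linear_blend_skinning(verts, weights, transforms):
    """verts: (N, 3) rest-pose vertices; weights: (N, J) skinning weights
    (rows sum to 1); transforms: (J, 4, 4) per-joint rigid transforms."""
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (N, 4)
    # Blend the joint transforms per vertex, then apply to each vertex.
    blended = np.einsum("nj,jab->nab", weights, transforms)           # (N, 4, 4)
    posed = np.einsum("nab,nb->na", blended, homo)
    return posed[:, :3]
```

SMPL additionally adds learned shape/pose blend shapes before skinning, and methods such as HumanNeRF add a learned non-rigid residual on top of this rigid part.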
# 3 Benchmarking Neural Human Radiance Fields

# 3.1 Unifying Evaluation Metrics

The commonly used metrics to measure the difference between a rendered image and a GT image are peak signal-to-noise ratio (PSNR $\uparrow$ ), structural similarity index (SSIM $\uparrow$ ), and learned perceptual image patch similarity (LPIPS $\downarrow$ ) [55]. PSNR measures pixel-wise similarity between the rendered image and the GT image. In contrast, SSIM and LPIPS estimate the patch-wise error between two images. Recent research found that LPIPS is more consistent with human perception. However, some existing works did not report this metric. We add it to all experiments to make the quantitative results more credible.

![](images/bd60f5e6241c2b5adb9c75b585ab9f55c7bc2bbff0fbee8b4bb4ea7518493be8.jpg)
Figure 2: The PSNR distribution among different test views. We use the images from Camera B1 for training, and the remaining cameras are located uniformly on a circle from B1 to B23.

# 3.2 Evaluating on Challenging Datasets

The most commonly used dataset for neural human body rendering is ZJU-MoCap [38], which consists of 10 multi-view video sequences captured by 24 synchronized cameras arranged in a circle. However, most of the subjects perform simple actions and wear close-fitting clothes, and the lighting is also biased toward a dark, blueish appearance, as shown in Fig. 1. Moreover, the number of human actors is too small to train a generalizable method. To build more challenging and representative benchmarks, we select the GeneBody [11] and HuMMan [4] datasets. GeneBody contains human actors doing hard poses and wearing complicated clothes (e.g., long dresses), which is suitable for evaluating the real performance of scene-specific methods. HuMMan is a large-scale (about 153 released subjects) dataset with multi-view human video sequences that will benefit generalizable models.
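As a concrete reference for the pixel-wise metric of Sec. 3.1, PSNR follows directly from the mean squared error. The sketch below is the standard definition, not the benchmark's actual evaluation code; SSIM and LPIPS are patch-wise and need dedicated implementations (e.g., scikit-image and the `lpips` package):

```python
import numpy as np

def psnr(rendered, gt, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((rendered.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A smaller pixel-wise error yields a higher PSNR, and identical images give infinity, which is why the metric is reported with an upward arrow.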
+ +Table 2: Quantitative comparison of representative scene-specific methods with different training view settings on ZJU-MoCap. In this experiment, the number of train frames is fixed to be 300. + +
| Methods | Single view (PSNR ↑ / SSIM ↑ / LPIPS ↓) | Multiple views (PSNR ↑ / SSIM ↑ / LPIPS ↓) |
| --- | --- | --- |
| NeuralBody [38] | 22.65 / 0.8667 / 0.1707 | 27.82 / 0.9396 / 0.1048 |
| AniNeRF [37] | 21.48 / 0.8477 / 0.2229 | 24.13 / 0.8822 / 0.1867 |
| HumanNeRF [49] | 23.08 / 0.8763 / 0.1326 | 25.86 / 0.9245 / 0.0728 |
# 3.3 Unifying Evaluation Settings

Given the various settings shown in Tab. 1, we unify them for clear comparisons.

Normalized metrics via human cropping. Some methods evaluate the three metrics mentioned above on the entire image [49], while others calculate them only on the local human area via human cropping [38]. To avoid small-sized humans in an image and make the comparison unified, we use the cropping setting as normalized metrics in all experiments.

Train & test view splits. The default number of train views for scene-specific methods differs due to their different targets (e.g., monocular videos). To study the effect of the number of train views, we set it to one and four to represent the single-view and multi-view settings, respectively. In Tab. 2, we compare the results of novel view rendering on the ZJU-MoCap dataset. In the single-view training setting, HumanNeRF obtains the best results in all three metrics, especially in the LPIPS metric. When multiple views (i.e., four views) are available for training, NeuralBody performs best on PSNR $(\uparrow)$ and SSIM $(\uparrow)$ while HumanNeRF still has the lowest LPIPS $(\downarrow)$ . Since monocular videos are easier to obtain, the single-view setting is friendlier to real applications. However, training on only one view inevitably leads to bad performance in test views that have a large position and angle deviation from the train view. We compute PSNR for each test view in Fig. 2 and find that the PSNR of the rendered images worsens when the test view is far from the trained views. Thus, how to handle spatial-temporal video information to obtain better quality is key for future work.

Train & test frame splits. For scene-specific methods, multi-view video frames are divided into two parts: training and novel-pose-rendering testing. For instance, a multi-view human video sequence may contain 500 frames captured by 24 cameras.
The first 300 frames captured by four cameras are + +Table 3: Quantitative comparison of representative scene-specific methods with different numbers of train frames on ZJU-MoCap. In this experiment, the number of train views is fixed to be 4. + +
| Methods | 1 frame (PSNR ↑ / SSIM ↑ / LPIPS ↓) | 60 frames | 100 frames | 300 frames | 500 frames |
| --- | --- | --- | --- | --- | --- |
| NeuralBody [38] | 23.83 / 0.8881 / 0.1412 | 26.29 / 0.9248 / 0.1087 | 27.35 / 0.9364 / 0.0974 | 27.82 / 0.9396 / 0.1048 | 27.77 / 0.9380 / 0.1137 |
| AniNeRF [37] | 24.56 / 0.8987 / 0.1298 | 24.45 / 0.8899 / 0.1652 | 24.78 / 0.8959 / 0.1558 | 24.13 / 0.8822 / 0.1867 | 24.39 / 0.8865 / 0.1882 |
| HumanNeRF [49] | 25.98 / 0.9031 / 0.1015 | 26.49 / 0.9225 / 0.0800 | 26.40 / 0.9253 / 0.0730 | 25.86 / 0.9246 / 0.0728 | 25.52 / 0.9218 / 0.0754 |
used to train the model. In comparison, the first 300 frames captured by the remaining 20 cameras are used for testing novel view rendering, and the remaining 200 frames captured by all 24 cameras are used for testing novel pose rendering. With different numbers of train frames, the performance of scene-specific methods is shown in Tab. 3. Given any number of frames, HumanNeRF obtains the lowest LPIPS with visually better renderings. For AniNeRF, there is no clear trend that more frames yield better performance. Notably, the quantitative results of these methods vary in a non-negligible range. Thus, unifying the train and test frames is essential.

After unifying the human cropping, train & test view split (4 vs 20), and frame split (300 train frames vs 200 novel pose frames), we compare the quantitative results of recent human body rendering methods (some methods like [21, 29] requiring additional processed data are not included in this table) reported in their original papers with the results we retrained in unified settings on ZJU-MoCap in Tab. 4. Besides the evaluation settings, we also unify the train and test subject split for generalizable methods, and the quantitative results are obtained on the test subject split (subjects 387, 393, and 394 of the ZJU-MoCap dataset).

Table 4: Unified evaluation of free-view rendering for existing methods on the ZJU-MoCap dataset. We compare the quantitative results reported in their original papers and the results after we unify the human cropping, train & test view split (4 vs 20), and frame split (300 train frames vs 200 novel pose frames). For all methods, we additionally calculate the LPIPS metric, which is more consistent with human perception.
| Category | Methods | Reported in paper (PSNR ↑ / SSIM ↑ / LPIPS ↓) | Unified evaluation (PSNR ↑ / SSIM ↑ / LPIPS ↓) |
| --- | --- | --- | --- |
| Scene-specific | NeuralBody [38] | 28.10 / 0.9440 / - | 27.80 / 0.9400 / 0.1050 |
| Scene-specific | AniNeRF [37] | 27.10 / 0.9490 / - | 24.10 / 0.8820 / 0.1870 |
| Scene-specific | HumanNeRF [49] | 30.20 / 0.9680 / 0.0317 | 25.86 / 0.9245 / 0.0728 |
| Scene-specific | UV-Volume [10] | 27.95 / 0.9346 / 0.0720 | 26.45 / 0.9282 / 0.0726 |
| Generalizable | NHP [24] | 24.80 / 0.9050 / - | 24.68 / 0.9034 / 0.1706 |
| Generalizable | GP-NeRF [8] | 26.00 / 0.9210 / - | 26.66 / 0.9263 / 0.1256 |
| Generalizable | KeypointNeRF [32] | 25.03 / 0.8969 / - | 26.01 / 0.9159 / 0.1041 |
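The human-cropping normalization of Sec. 3.3, under which the unified numbers above are computed, can be sketched as restricting both images to the bounding box of the human mask before evaluating any metric. This is our own illustration of the convention, not the benchmark's code; `pad` is a hypothetical margin parameter:

```python
import numpy as np

def crop_to_human(img, mask, pad=0):
    """Crop img (H, W, ...) to the bounding box of a binary human mask (H, W)."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(int(ys.min()) - pad, 0), min(int(ys.max()) + 1 + pad, img.shape[0])
    x0, x1 = max(int(xs.min()) - pad, 0), min(int(xs.max()) + 1 + pad, img.shape[1])
    return img[y0:y1, x0:x1]
```

Both the rendered image and the GT image would be cropped with the same mask, and PSNR/SSIM/LPIPS would then be computed on the cropped pair, so a small human in a large frame does not inflate the scores.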
# 3.4 Analyzing the Effects of Estimated Mask & SMPL

Most existing human body rendering methods depend on ground-truth human masks and SMPL parameters provided by datasets. For example, NeuralBody exploits SMPL vertices to define latent codes and performs sparse convolution with the coordinates of these vertices as input. GP-NeRF [8] projects posed SMPL vertices on image planes to retrieve image features. Human masks for each image are used to eliminate the impact of background pixels. In these methods, the masks and SMPL parameters are assumed to be given and precise. In practical scenarios, we must estimate masks and SMPL parameters via existing state-of-the-art models. Nevertheless, none of the methods quantitatively investigated the effect of inaccurate masks and SMPL parameters.

To study this issue, we adopt the recent state-of-the-art segmentation method RobustVideoMatting [28] to acquire human masks and the human shape estimation method HybrIK [25] to obtain SMPL parameters. To simplify the cases, we conduct experiments on the subjects of ZJU-MoCap. Tab. 5 compares the performance between GT masks and estimated masks. All methods have a slight performance drop with estimated human masks, indicating existing methods have a good tolerance for estimated masks. Interestingly, the degradation is smaller for multi-view models than single-view models, since these methods can correct human masks automatically with aggregated multi-view information.

In comparison, methods suffer from a significant performance drop with estimated SMPL parameters as inputs, as shown in Tab. 6, especially NeuralBody and GP-NeRF. HumanNeRF is more

![](images/97756137f0bfffe51a26ae3d089e21932d6efa4c07bdc9a7c912fe48e455a897.jpg)
Figure 3: Qualitative comparison of NeuralBody, HumanNeRF, and NHP with accurate (pseudo-GT SMPL) and inaccurate (estimated by HybrIK) SMPL parameters.
NeuralBody defines latent codes on each SMPL vertex, which implicitly stores the appearance information of the human actor. So, inaccurate SMPL will result in heavy changes in learning latent codes during training. In contrast, HumanNeRF stores appearance information of a human implicitly in canonical space, which is irrelevant to SMPL parameters, and its body deformation mechanism is modeled by two parts: rigid transformation, which relies on SMPL parameters, and non-rigid transformation, which is learned implicitly. Therefore, for HumanNeRF, inaccurate SMPL parameters only increase the difficulty of learning the non-rigid transformation but will not severely hurt the human appearance modeling. We also present the qualitative comparisons of NueralBody and HumanNeRF with accurate and inaccurate SMPL parameters in Fig. 3. The texture and pose quality will degrade significantly for NeuralBody and NHP, while the pose quality mainly influences HumanNeRF if the input SMPL parameters suffer high errors. + +Table 5: Quantitative comparison of methods using GT masks and using estimated masks to test the tolerance of representative methods to inaccurate masks. We conduct experiments on three subjects of ZJU-MoCap and use RobustVideoMatting as human segmentation. + +
| Category | Methods | GT mask (PSNR ↑ / SSIM ↑ / LPIPS ↓) | Estimated by RobustVideoMatting (PSNR ↑ / SSIM ↑ / LPIPS ↓) |
| --- | --- | --- | --- |
| Scene-specific | NeuralBody [38] | 27.24 / 0.9320 / 0.1205 | 26.93 / 0.9268 / 0.1272 |
| Scene-specific | HumanNeRF [49] | 22.96 / 0.8793 / 0.1131 | 22.35 / 0.8724 / 0.1257 |
| Generalizable | NHP [24] | 24.68 / 0.9034 / 0.1706 | 24.55 / 0.9005 / 0.1744 |
| Generalizable | GP-NeRF [8] | 26.66 / 0.9263 / 0.1256 | 26.33 / 0.9241 / 0.1266 |
+ +Table 6: Quantitative comparison of methods using Pseudo-GT SMPL parameters and using estimated SMPL parameters to test the tolerance of representative methods to inaccurate SMPL parameters. We conduct experiments on three subjects of ZJU-MoCap and use Hybrik as SMPL parameter estimator. + +
| Category | Methods | Pseudo-GT SMPL, provided by dataset (PSNR ↑ / SSIM ↑ / LPIPS ↓) | Estimated by HybrIK (PSNR ↑ / SSIM ↑ / LPIPS ↓) |
| --- | --- | --- | --- |
| Scene-specific | NeuralBody [38] | 22.41 / 0.8688 / 0.1583 | 16.65 / 0.7645 / 0.3338 |
| Scene-specific | HumanNeRF [49] | 22.96 / 0.8793 / 0.1131 | 21.87 / 0.8697 / 0.1272 |
| Generalizable | NHP [24] | 24.68 / 0.9034 / 0.1706 | 19.47 / 0.7914 / 0.3170 |
| Generalizable | GP-NeRF [8] | 26.66 / 0.9263 / 0.1256 | 19.02 / 0.8045 / 0.3093 |
![](images/ac6c2faf4c54e4b777120a80117a5108bd4319e08c92d17dcbc45649da1490d7.jpg)
Figure 4: Qualitative comparisons on the widely used ZJU-MoCap (a) and challenging GeneBody datasets (b,c,d) on normal, hard-clothes, and hard-pose scenes.

![](images/fb502f51a427f4bff5c74d5e15e978c503010d72a1eddea6c27033aa544d9b6d.jpg)

# 4 Analyses of Generalization and Animatability

This section empirically investigates the generalization ability and animatability of existing human body rendering methods. For scene-specific methods, we select NeuralBody and HumanNeRF as representative methods; for generalizable methods, we select GP-NeRF as the representative method.

# 4.1 Benchmarking Scene-specific Methods on GeneBody

For scene-specific methods of human body rendering, the meaning of generalization is twofold: novel view rendering and novel pose rendering. Novel view rendering renders novel views of a human in a pose seen during training. In comparison, novel pose rendering needs to render a human in a given unseen pose from novel views. In general, novel pose rendering is more challenging than novel view rendering because rendering a human in a novel pose requires a method to model the human body deformation well. Unfortunately, few scene-specific methods have sufficient experiments and discussions for the two aspects separately. Moreover, human actors in ZJU-MoCap only wear easy close-fitting clothes and perform simple actions with small-scale subjects, so the dataset can hardly reflect real generalization ability. Therefore, as discussed above, we adopt the GeneBody dataset and conduct extensive experiments to explore the effectiveness of representative scene-specific methods. We evaluate the performance of novel view and pose rendering separately in Tab. 7. Following the official splits, we also split the dataset into normal, hard cloth, and hard pose.
Meanwhile, we unify the number of train views to be four and train frames to be 100, while the remaining views and frames are used for evaluation. There are some observations from the results:

1. The tolerance to SMPL inaccuracy varies among different methods. In the GeneBody dataset, the provided masks and SMPL parameters are not as accurate as in ZJU-MoCap. So, we can see from the GeneBody Normal columns that even though the human actors do not wear complex clothes or perform hard poses, NeuralBody still performs poorly due to inaccurate SMPL parameters. The reason behind this has been discussed in Sec. 3.4.
2. Existing scene-specific methods still perform poorly in challenging cases. As demonstrated in the GeneBody Hard Cloth and GeneBody Hard Pose columns, both NeuralBody and HumanNeRF have a severe performance drop on all metrics compared to the results on ZJU-MoCap. The challenges brought by hard clothes are twofold. The first is that the motion of clothes (especially loose clothes) cannot be depicted by the SMPL model. The other is that complex clothes have more texture details and geometry variations. These lead to increased difficulties for both appearance and body deformation modeling in the sense of more data inconsistency. As for hard poses, the challenge is the larger variations of the body and the corresponding cloth deformation, which increase the difficulty of body deformation modeling.
3. Compared to hard poses, hard clothes are more challenging for existing models. Comparing the GeneBody Hard Cloth and GeneBody Hard Pose columns, it can be observed that hard clothes have a more severe impact on the overall performance of all representative methods than hard poses. This is because hard clothes affect both appearance and body deformation modeling, while hard poses mainly affect body deformation modeling. The qualitative results are shown in Fig. 4.
4. Existing scene-specific methods have a consistent performance drop for novel pose rendering.
Comparing the results in the top rows (novel view rendering) and bottom rows (novel pose rendering), we find that, with the same settings and training data, the selected representative methods have

inferior performance for novel pose rendering compared to novel view rendering. This indicates that the learned body deformation cannot generalize well enough to unseen poses for existing scene-specific methods, with a particularly significant increase in LPIPS.

Table 7: Extensive quantitative comparison of representative scene-specific methods on ZJU-MoCap and different partitions of GeneBody.
Novel view rendering:

| Methods | ZJU-MoCap (PSNR ↑ / SSIM ↑ / LPIPS ↓) | GeneBody normal | GeneBody hard cloth | GeneBody hard pose |
| --- | --- | --- | --- | --- |
| NeuralBody [38] | 27.82 / 0.9396 / 0.1048 | 19.31 / 0.8499 / 0.3197 | 16.24 / 0.7748 / 0.3093 | 19.99 / 0.8412 / 0.2467 |
| HumanNeRF [49] | 25.86 / 0.9250 / 0.0728 | 24.63 / 0.8865 / 0.2319 | 17.36 / 0.7729 / 0.2155 | 21.56 / 0.8388 / 0.1715 |

Novel pose rendering:

| Methods | ZJU-MoCap (PSNR ↑ / SSIM ↑ / LPIPS ↓) | GeneBody normal | GeneBody hard cloth | GeneBody hard pose |
| --- | --- | --- | --- | --- |
| NeuralBody [38] | 23.73 / 0.8871 / 0.1525 | 18.79 / 0.8325 / 0.3402 | 14.47 / 0.7039 / 0.3657 | 17.21 / 0.7801 / 0.2985 |
| HumanNeRF [49] | 23.64 / 0.8907 / 0.1027 | 24.61 / 0.8861 / 0.2254 | 15.60 / 0.7331 / 0.2610 | 19.07 / 0.7946 / 0.2122 |
# 4.2 Benchmarking Generalizable Methods on HuMMan

For generalizable methods, generalization mainly refers to novel view rendering for unseen human actors after pre-training on the multi-view images of a certain number of human actors. The scale and diversity of training data are of great importance for achieving good generalization. To study the potential of this stream of methods, we evaluate the in-domain (i.e., cross-subject) and out-of-domain (i.e., cross-dataset) generalization ability of the selected representative generalizable method, GP-NeRF.

From Tab. 8, the model trained in-domain consistently performs better than the model trained on a different dataset. Even when the large-scale HuMMan dataset is used for training and the model is tested cross-dataset, the performance is still worse than that of the model trained on ZJU-MoCap. This indicates that existing generalizable methods still cannot achieve satisfying performance when rendering novel views for an unseen human in real scenarios. The state-of-the-art generalizable

method GP-NeRF tends to overfit small-scale data, given its higher performance with in-domain ZJU-MoCap training and testing.

Table 8: Quantitative comparison of cross-subject generalization of GP-NeRF. ZJU-MoCap 7 and ZJU-MoCap 3 denote the train and test splits of ZJU-MoCap, respectively.
| Train set | Test set | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
| --- | --- | --- | --- | --- |
| ZJU-MoCap 7 | ZJU-MoCap 3 | 26.66 | 0.9263 | 0.1256 |
| HuMMan | ZJU-MoCap 3 | 23.82 | 0.8810 | 0.1850 |
| ZJU-MoCap 7 | HuMMan Eval | 17.89 | 0.8833 | 0.2318 |
| HuMMan | HuMMan Eval | 21.68 | 0.9234 | 0.1511 |
Upper-bound performance of finetuning on a single subject. For generalizable methods, one effective way to further improve novel view rendering for an unseen human is to finetune the trained model on that human's multi-view sequences. To test the upper-bound performance of finetuning, we take the pretrained GP-NeRF model, finetune it on three subjects of ZJU-MoCap separately for a sufficiently long time, and record the quantitative performance after 15 minutes, 1 hour, and 20 hours, respectively. As shown in Fig. 5 and Tab. 9, benefiting from pretraining, GP-NeRF has a higher PSNR than HumanNeRF from the very start of finetuning, but its PSNR increases slowly even after a long training time. In contrast, the scene-specific method HumanNeRF is trained from scratch under the same settings: it performs poorly at first, but its PSNR grows faster than GP-NeRF's. After enough finetuning (training) time, HumanNeRF attains significantly lower LPIPS, so its rendered images look realistic and close in quality to the GT images.

![](images/855f11d759da592c27dc26a741a6ccfc80bd798f52bbeb7018ca6dfca62c3cb8.jpg)
Figure 5: Rendering quality improves as finetuning time increases. The generalizable method GP-NeRF initially looks better than the scene-specific method HumanNeRF, but HumanNeRF produces visually better results after several hours.

Table 9: Quantitative comparison of the upper-bound performance of finetuning on a single subject using a pretrained model for the generalizable method GP-NeRF. For reference, we also train HumanNeRF from scratch given the same train views, frames, and time.
| Method | Finetuning time | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
| --- | --- | --- | --- | --- |
| GP-NeRF [8] | 15 mins | 27.46 | 0.9275 | 0.1464 |
| GP-NeRF [8] | 1 hour | 27.94 | 0.9330 | 0.1390 |
| GP-NeRF [8] | 20 hours | 28.80 | 0.9408 | 0.1290 |
| HumanNeRF [49] | 15 mins | 22.22 | 0.8549 | 0.1772 |
| HumanNeRF [49] | 1 hour | 23.26 | 0.8779 | 0.1285 |
| HumanNeRF [49] | 20 hours | 25.36 | 0.9156 | 0.0807 |
# 4.3 Exploring the Key to Animatability

Scene-specific methods. For scene-specific methods, animatability is equivalent to novel pose rendering, i.e., rendering a human in a given pose. Thus, as discussed in Sec. 4.1, existing scene-specific methods are all animatable, but the rendering quality is worse than for novel view rendering. The key element of animatability for scene-specific methods is a module that models body deformation, which allows them to use images of a human performing different actions for training. For example, NeuralBody encodes the appearance information of a human into SMPL vertices, so it can directly use the body deformation module of SMPL to animate a human body into any given pose. In contrast, HumanNeRF encodes the appearance information of a human in T-pose in canonical space and decouples body deformation into skeletal motion, modeled by an implicitly learned LBS (linear blend skinning) weight field, and non-rigid motion, modeled by an implicitly learned translation field. Most existing scene-specific methods follow one of these two paradigms. From our experiments in Sec. 3.4, the latter is more tolerant to inaccurate SMPL parameters, while the former is easier to implement.

GeneHumanNeRF: making HumanNeRF generalizable. Existing generalizable methods are not animatable, except for MPSNeRF [16], because they require a certain number of views as input images to render novel views. For generalizable methods, the appearance information of the human to be rendered is acquired from the input images. Thus, these methods cannot work by merely specifying a pose without reference to multi-view input images.

Table 10: The performance of our simple baseline GeneHumanNeRF for animatable and generalizable human body rendering from a monocular video.
| Train set | Test set | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
| --- | --- | --- | --- | --- |
| ZJU-MoCap 7 | ZJU-MoCap 3 | 22.87 | 0.8647 | 0.2110 |
| HuMMan | ZJU-MoCap 3 | 22.73 | 0.8573 | 0.2290 |
| ZJU-MoCap 7 | HuMMan Eval | 18.09 | 0.8715 | 0.2159 |
| HuMMan | HuMMan Eval | 21.06 | 0.8882 | 0.2010 |
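The LBS warping that Sec. 4.3 refers to blends per-joint rigid transforms with per-vertex skinning weights. A minimal NumPy sketch (array shapes are illustrative; actual methods learn the weight field implicitly rather than taking it as an explicit input):

```python
import numpy as np

def lbs(vertices, weights, rotations, translations):
    """Linear blend skinning: v'_i = sum_j w_ij * (R_j @ v_i + t_j).

    vertices:     (V, 3) rest-pose vertices
    weights:      (V, J) skinning weights, each row sums to 1
    rotations:    (J, 3, 3) per-joint rotation matrices
    translations: (J, 3) per-joint translations
    """
    # Apply every joint's rigid transform to every vertex: (J, V, 3).
    per_joint = np.einsum("jab,vb->jva", rotations, vertices) + translations[:, None, :]
    # Blend the per-joint results with the per-vertex weights: (V, 3).
    return np.einsum("vj,jva->va", weights, per_joint)

# Identity rotations and zero translations leave the rest pose unchanged.
rng = np.random.default_rng(0)
verts = rng.random((5, 3))
w = rng.random((5, 3))
w /= w.sum(axis=1, keepdims=True)
print(np.allclose(lbs(verts, w, np.stack([np.eye(3)] * 3), np.zeros((3, 3))), verts))  # True
```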
MPSNeRF achieves animatability by applying LBS warping twice, so that the appearance information of the posed human in the input multi-view images can be warped onto the rendered human in given poses. The key element is a canonical pose space, to which the appearance information from the input images is first warped and from which it is then warped to the target pose space. We find that this element is not limited to settings that require multi-view images as input. Therefore, borrowing this effective module, we build a new generalizable and animatable method that takes a few monocular video frames as input (e.g., [16]), which can be seen as an extension of HumanNeRF from the per-scene-trained setting to a one-shot-trained one. We name this new baseline GeneHumanNeRF. The motivation for this baseline is that monocular videos are easier to acquire, making it more suitable for practical applications. More technical details of this baseline can be found in the supplementary material. We build the benchmark on both the ZJU-MoCap and HuMMan datasets and evaluate cross-subject and cross-dataset generalization ability in Tab. 10. The proposed simple baseline achieves competitive performance on both datasets, which can serve as a good reference for future works.

# 5 Inspirations for Future Works

# 5.1 Unifying Settings

As we can see from the tables presented in Sec. 3, chaotic train & evaluation settings make it hard for the community to distinguish the advantages and drawbacks of existing methods from their paper-reported results, and our ablation studies indicate that those settings can make a difference. Therefore, for future works, it is highly recommended to conduct experiments in settings consistent with those used in this benchmark and to report results for more settings (more datasets, different frame & view settings, and so on).
In addition, most works assume that accurate SMPL parameters are available by default, but this is often not the case, especially in real applications. Thus, how to reduce the adverse impact of inaccurate SMPL parameters is also a problem worth studying.

# 5.2 Scene-specific Methods

For scene-specific methods, future works should focus more on scenes in which people wear complex clothes, and on improving pose generalization. To evaluate the real performance on these aspects, more datasets should be included in the experiments. Besides, a proper sampling strategy for train frames and views can also lead to a significant performance increase. As most existing methods rely on SMPL to capture human motion, it is worth asking whether there is an alternative model that depicts human geometry more precisely and whose body parts move more naturally.

# 5.3 Generalizable Methods

For generalizable methods, future works should focus on improving cross-dataset generalization. Currently, existing methods train on small-scale datasets and evaluate on in-domain test sets. Although the quantitative results look good, these methods perform poorly when tested on other datasets with more diverse cases. Due to the high cost of acquiring multi-view video sequences, data scarcity may also be an obstacle to training a generalizable model that performs well in real scenarios. On the other hand, animatability is not compatible with the common settings of existing generalizable methods, yet it is expected in many applications. Therefore, building a high-performance animatable and generalizable model that takes a monocular video as input can lead to interesting applications. We have proposed a simple baseline for this setting as a reference for future works.

# 6 Conclusion

In this work, we build the first comprehensive benchmark for neural human radiance fields. Specifically, we unify the evaluation settings and metrics.
We introduce more challenging datasets and establish the benchmark on them. To explore the capability of existing generalizable models, we train them on large-scale datasets and conduct cross-subject validation. Lastly, after analyzing the key components of animatability and generalizability, we design a baseline that achieves both attributes given monocular videos. We sincerely hope these efforts can benefit this field of study.

Acknowledgement The work was supported in part by NSFC with Grant No. 62293482, the Basic Research Project No. HZQB-KCZYZ-2021067 of the Hetao Shenzhen-HK S&T Cooperation Zone, the National Key R&D Program of China with grant No. 2018YFB1800800, by Shenzhen Outstanding Talents Training Fund 202002, by Guangdong Research Projects No. 2017ZT07X152 and No. 2019CX01X104, by the Guangdong Provincial Key Laboratory of Future Networks of Intelligence (Grant No. 2022B1212010001), and by the Shenzhen Key Laboratory of Big Data and Artificial Intelligence (Grant No. ZDSYS201707251409055). It was also partially supported by NSFC 62172348, the Outstanding Young Fund of Guangdong Province with No. 2023B1515020055, and the Shenzhen General Project with No. JCYJ20220530143604010. In addition, we sincerely thank Yuqi Hu from HKUST (GZ) for helping with code cleaning and building the project webpage.

# References

[1] Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, and Gerard Pons-Moll. Video based reconstruction of 3d people models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8387-8397, Jun 2018. CVPR Spotlight Paper.
[2] Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5855-5864, 2021.
[3] Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman.
Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5470-5479, 2022.
[4] Zhongang Cai, Daxuan Ren, Ailing Zeng, Zhengyu Lin, Tao Yu, Wenjia Wang, Xiangyu Fan, Yang Gao, Yifan Yu, Liang Pan, Fangzhou Hong, Mingyuan Zhang, Chen Change Loy, Lei Yang, and Ziwei Liu. HuMMan: Multi-modal 4d human dataset for versatile sensing and modeling. In 17th European Conference on Computer Vision, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part VII, pages 557-577. Springer, 2022.
[5] Ang Cao and Justin Johnson. Hexplane: A fast representation for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 130-141, 2023.
[6] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXII, pages 333-350. Springer, 2022.
[7] Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, and Hao Su. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14124-14133, 2021.
[8] Mingfei Chen, Jianfeng Zhang, Xiangyu Xu, Lijuan Liu, Yujun Cai, Jiashi Feng, and Shuicheng Yan. Geometry-guided progressive nerf for generalizable and efficient neural human rendering. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIII, pages 222-239. Springer, 2022.
[9] Yu Chen and Gim Hee Lee. Dbarf: Deep bundle-adjusting generalizable neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24-34, 2023.
[10] Yue Chen, Xuan Wang, Xingyu Chen, Qi Zhang, Xiaoyu Li, Yu Guo, Jue Wang, and Fei Wang.
Uv volumes for real-time rendering of editable free-view human performance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16621-16631, 2023. +[11] Wei Cheng, Su Xu, Jingtan Piao, Chen Qian, Wayne Wu, Kwan-Yee Lin, and Hongsheng Li. Generalizable neural performer: Learning robust radiance fields for human novel view synthesis. arXiv preprint arXiv:2204.11798, 2022. +[12] Hongsuk Choi, Gyeongsik Moon, Matthieu Armando, Vincent Leroy, Kyoung Mu Lee, and Grégory Rogez. Mononhr: Monocular neural human renderer. In 2022 International Conference on 3D Vision (3DV), pages 242-251. IEEE, 2022. +[13] François Darmon, Bénédicte Bascle, Jean-Clément Devaux, Pascal Monasse, and Mathieu Aubry. Improving neural implicit surfaces geometry with patch warping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6260–6269, 2022. +[14] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5501–5510, 2022. + +[15] Chen Gao, Ayush Saraf, Johannes Kopf, and Jia-Bin Huang. Dynamic view synthesis from dynamic monocular video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5712-5721, 2021. +[16] Xiangjun Gao, Jiaolong Yang, Jongyoo Kim, Sida Peng, Zicheng Liu, and Xin Tong. Mps-nerf: Generalizable 3d human rendering from multiview images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. +[17] Haoyu Guo, Sida Peng, Haotong Lin, Qianqian Wang, Guofeng Zhang, Hujun Bao, and Xiaowei Zhou. Neural 3d scene reconstruction with the manhattan-world assumption. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5511-5520, 2022. +[18] Shoukang Hu, Fangzhou Hong, Liang Pan, Haiyi Mei, Lei Yang, and Ziwei Liu. 
Sherf: Generalizable human nerf from a single image.
[19] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE transactions on pattern analysis and machine intelligence, 36(7):1325-1339, 2013.
[20] Boyi Jiang, Yang Hong, Hujun Bao, and Juyong Zhang. Selfrecon: Self reconstruction your digital avatar from monocular video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5605-5615, 2022.
[21] Wei Jiang, Kwang Moo Yi, Golnoosh Samei, Oncel Tuzel, and Anurag Ranjan. Neuman: Neural human radiance field from a single video. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXII, pages 402-418. Springer, 2022.
[22] Mohammad Mahdi Johari, Yann Lepoittevin, and François Fleuret. Geonerf: Generalizing nerf with geometry priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18365-18375, 2022.
[23] Mijeong Kim, Seonguk Seo, and Bohyung Han. Infonerf: Ray entropy minimization for few-shot neural volume rendering. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2022.
[24] Youngjoon Kwon, Dahun Kim, Duygu Ceylan, and Henry Fuchs. Neural human performer: Learning generalizable radiance fields for human performance rendering. In Neural Information Processing Systems, 2021.
[25] Jiefeng Li, Chao Xu, Zhicun Chen, Siyuan Bian, Lixin Yang, and Cewu Lu. Hybrik: A hybrid analytical-neural inverse kinematics solution for 3d human pose and shape estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3383-3393, 2021.
[26] Ruilong Li, Shan Yang, David A. Ross, and Angjoo Kanazawa. Learn to dance with aist++: Music conditioned 3d dance generation, 2021.
+[27] Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, and Noah Snavely. Dynibar: Neural dynamic image-based rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4273-4284, 2023. +[28] Shanchuan Lin, Linjie Yang, Imran Saleemi, and Soumyadip Sengupta. Robust high-resolution video matting with temporal guidance. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 238-247, 2022. +[29] Lingjie Liu, Marc Habermann, Viktor Rudnev, Kripasindhu Sarkar, Jiatao Gu, and Christian Theobalt. Neural actor: Neural free-view synthesis of human actors with pose control. ACM transactions on graphics (TOG), 40(6):1-16, 2021. +[30] Yuan Liu, Sida Peng, Lingjie Liu, Qianqian Wang, Peng Wang, Christian Theobalt, Xiaowei Zhou, and Wenping Wang. Neural rays for occlusion-aware image-based rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7824-7833, 2022. + +[31] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multi-person linear model. ACM Trans. Graphics (Proc. SIGGRAPH Asia), 34(6):248:1-248:16, October 2015. +[32] Marko Mihajlovic, Aayush Bansal, Michael Zollhoefer, Siyu Tang, and Shunsuke Saito. Keypointnerf: Generalizing image-based volumetric avatars using relative spatial encoding of keypoints. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XV, pages 179-197. Springer, 2022. +[33] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. +[34] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (ToG), 41(4):1-15, 2022. 
[35] Michael Niemeyer, Jonathan T Barron, Ben Mildenhall, Mehdi SM Sajjadi, Andreas Geiger, and Noha Radwan. Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5480-5490, 2022.
[36] Michael Oechsle, Songyou Peng, and Andreas Geiger. Unisurf: Unifying neural implicit surfaces and radiance fields for multi-view reconstruction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5589-5599, 2021.
[37] Sida Peng, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Xiaowei Zhou, and Hujun Bao. Animatable neural radiance fields for modeling dynamic human bodies. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14314-14323, 2021.
[38] Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, and Xiaowei Zhou. Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9050-9059, 2021.
[39] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-nerf: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10318-10327, 2021.
[40] Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger. Kilonerf: Speeding up neural radiance fields with thousands of tiny MLPs. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14335-14345, 2021.
[41] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5459-5469, 2022.
[42] Prune Truong, Marie-Julie Rakotosaona, Fabian Manhardt, and Federico Tombari.
Sparf: Neural radiance fields from sparse and noisy poses. Nov 2022. +[43] Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T Barron, and Pratul P Srinivasan. Ref-nerf: Structured view-dependent appearance for neural radiance fields. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5481-5490. IEEE, 2022. +[44] Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Jingyi Yu, and Lan Xu. Fourier plenoctrees for dynamic radiance field rendering in real-time. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13524-13534, 2022. +[45] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. arXiv preprint arXiv:2106.10689, 2021. + +[46] Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul P Srinivasan, Howard Zhou, Jonathan T Barron, Ricardo Martin-Brualla, Noah Snavely, and Thomas Funkhouser. Ibrnet: Learning multi-view image-based rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4690-4699, 2021. +[47] Shaofei Wang, Katja Schwarz, Andreas Geiger, and Siyu Tang. Arah: Animatable volume rendering of articulated human sdfs. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXII, pages 1-19. Springer, 2022. +[48] Yiqun Wang, Ivan Skorokhodov, and Peter Wonka. Hf-neus: Improved surface reconstruction using high-frequency details. Advances in Neural Information Processing Systems, 35:1966-1978, 2022. +[49] Chung-Yi Weng, Brian Curless, Pratul P Srinivasan, Jonathan T Barron, and Ira Kemelmacher-Shlizerman. Humannerf: Free-viewpoint rendering of moving people from monocular video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16210-16220, 2022. 
[50] Zhiwen Yan, Chen Li, and Gim Hee Lee. Nerf-ds: Neural radiance fields for dynamic specular objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8285-8295, 2023.
[51] Lior Yariv, Jiatao Gu, Yoni Kasten, and Yaron Lipman. Volume rendering of neural implicit surfaces. Advances in Neural Information Processing Systems, 34:4805-4815, 2021.
[52] Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa. Plenoctrees for real-time rendering of neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5752-5761, 2021.
[53] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2021.
[54] Zhengming Yu, Wei Cheng, Xian Liu, Wayne Wu, and Kwan-Yee Lin. Monohuman: Animatable human neural field from monocular video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16943-16953, 2023.
[55] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586-595, 2018.
[56] Yongqiang Zhang, Zhipeng Hu, Haoqian Wu, Minda Zhao, Lincheng Li, Zhengxia Zou, and Changjie Fan. Towards unbiased volume rendering of neural implicit surfaces with geometry priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4359-4368, 2023.
[57] Zerong Zheng, Tao Yu, Yixuan Wei, Qionghai Dai, and Yebin Liu. Deephuman: 3d human reconstruction from a single image. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
[58] Zhizhuo Zhou and Shubham Tulsiani. Sparsefusion: Distilling view-conditioned diffusion for 3d reconstruction.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12588-12597, 2023. \ No newline at end of file diff --git a/acomprehensivebenchmarkforneuralhumanradiancefields/images.zip b/acomprehensivebenchmarkforneuralhumanradiancefields/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..48a3e68be74402c9de7aa45758e80deff8170863 --- /dev/null +++ b/acomprehensivebenchmarkforneuralhumanradiancefields/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8c7e0cdc3a0132e5ef757a6172312e698e597b685993048a59612684d582e746 +size 541095 diff --git a/acomprehensivebenchmarkforneuralhumanradiancefields/layout.json b/acomprehensivebenchmarkforneuralhumanradiancefields/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..cc2a384ac44c2b868272601a2b3b6ad801a8c070 --- /dev/null +++ b/acomprehensivebenchmarkforneuralhumanradiancefields/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc3e1f43a5aba3622012e92e071f6ecd03da0fb754c4b37245b69d264266460f +size 298158 diff --git a/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/f8ffc72d-4f0b-4279-9794-8cb7811debc3_content_list.json b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/f8ffc72d-4f0b-4279-9794-8cb7811debc3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6f0788c5dd4fe8dde2b8817c87199c6157012f72 --- /dev/null +++ b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/f8ffc72d-4f0b-4279-9794-8cb7811debc3_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e35e32d92ce8a3155abcebd1f2ecc3a01a65d74f7dd8e4b727ecaaa5fadbcb1d +size 191406 diff --git a/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/f8ffc72d-4f0b-4279-9794-8cb7811debc3_model.json 
b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/f8ffc72d-4f0b-4279-9794-8cb7811debc3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d1335bbdaff77d7099f8e9721d97d07bac1a8d13 --- /dev/null +++ b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/f8ffc72d-4f0b-4279-9794-8cb7811debc3_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00ec903be68dda39e9c7419cec49721a282525708b19d77538ec095e1f27a991 +size 219028 diff --git a/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/f8ffc72d-4f0b-4279-9794-8cb7811debc3_origin.pdf b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/f8ffc72d-4f0b-4279-9794-8cb7811debc3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4ab60e63edbff89f3303d048d37bb8cc3fc351c7 --- /dev/null +++ b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/f8ffc72d-4f0b-4279-9794-8cb7811debc3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:04871d45e85dbabe003aa0e80c513ea73923021df66ce76d69ea7e9ae954e69e +size 883437 diff --git a/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/full.md b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9e52acc795746da7e64be9a775f2553cfb9ce6c4 --- /dev/null +++ b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/full.md @@ -0,0 +1,504 @@ +# A Comprehensive Study on Text-attributed Graphs: Benchmarking and Rethinking + +Hao Yan $^{1*}$ , Chaozhuo Li $^{2*}$ , Ruosong Long $^{3}$ , Chao Yan $^{4}$ , Jianan Zhao $^{6}$ , Wenwen Zhuang $^{1}$ , Jun Yin $^{1}$ , Peiyan Zhang $^{5}$ , Weihao Han $^{7}$ , Hao Sun $^{7}$ , Weiwei Deng $^{7}$ , Qi Zhang $^{7}$ , Lichao Sun $^{8}$ , Xing Xie $^{2}$ , Senzhang Wang $^{1\dagger}$ + +1Central South University, 2Microsoft Research Asia, 3University of 
Birmingham + +$^{4}$ Peking University, $^{5}$ Hong Kong University of Science and Technology + +6Université de Montréal, 7Microsoft, 8Lehigh University + +# Abstract + +Text-attributed graphs (TAGs) are prevalent in various real-world scenarios, where each node is associated with a text description. The cornerstone of representation learning on TAGs lies in the seamless integration of textual semantics within individual nodes and the topological connections across nodes. Recent advancements in pre-trained language models (PLMs) and graph neural networks (GNNs) have facilitated effective learning on TAGs, garnering increased research interest. However, the absence of meaningful benchmark datasets and standardized evaluation procedures for TAGs has impeded progress in this field. In this paper, we propose CS-TAG, a comprehensive and diverse collection of challenging benchmark datasets for TAGs. The CS-TAG datasets are notably large in scale and encompass a wide range of domains, spanning from citation networks to purchase graphs. In addition to building the datasets, we conduct extensive benchmark experiments over CS-TAG with various learning paradigms, including PLMs, GNNs, PLM-GNN co-training methods, and the proposed novel topological pre-training of language models. In a nutshell, we provide an overview of the CS-TAG datasets, standardized evaluation procedures, and present baseline experiments. The entire CS-TAG project is publicly accessible at https://github.com/sktsherlock/TAG-Benchmark. + +# 1 Introduction + +Graphs are ubiquitous in modeling the relational and structural aspects of real-world objects across various domains, such as social networks, transportation system networks, and biological protein-protein networks [1, 2, 3, 4]. In many real-world graphs, nodes are often associated with text attributes, giving rise to the text-attributed graphs (TAGs) [5, 6]. 
TAGs are prevalent in various scenarios, such as social graphs, where each user is accompanied by a textual description, and paper citation graphs, where textual content is linked to each respective paper [7, 8]. The exploration of learning methodologies applied to TAGs has emerged as a prominent research area within multiple fields, including graph learning, information retrieval, and natural language processing [9].

The nucleus of learning on TAGs lies in the effective integration of both the node attributes (textual semantics) and the graph topology (structural connections) to facilitate the learning of node representations. The textual information associated with each node offers a wealth of semantic content, enabling the characterization of individual node properties, which can be captured by pre-trained language models (PLMs) [10, 11, 12, 13, 14]. Meanwhile, the structural information encoded within the graph topology presents the inherent proximity relationships between nodes. Graph neural networks (GNNs) have proven to be effective in capturing such structural relations based on the message-passing mechanism [7, 15, 16, 17, 18, 19, 20, 21, 22].

![](images/f79bdbffd2c9e0db066ab45cedc0dcf0b4dd10107c34e5c195ae409b2865c7e8.jpg)
Figure 1: The traditional text-attributed graph representation learning pipeline.

PLM-based and GNN-based methods are two prevalent learning paradigms on TAGs, as illustrated in Figure 1. PLM-based methods generally feed the textual content of the target node into a pre-trained language model. However, the knowledge of topology resulting from the high non-linearity of the graph structure within TAGs is largely discarded by PLM-based methods [9]. Conversely, GNN-based methods are capable of preserving the intricate graph topology information with greater fidelity. Nevertheless, an inherent limitation plaguing GNN-based methods lies in the disconnected modeling of node attributes and graph topology.
Specifically, most GNNs pre-model node attributes as static representations, treating them as fixed and unlearnable parameters during the message passing process. Consequently, the gradients stemming from the learning objective of GNNs cannot be effectively back-propagated into the attribute modeling. This discrepancy in the training procedure hinders the attainment of an optimal solution, as it fails to guarantee end-to-end training, thereby impeding the overall effectiveness of the approach. + +To simultaneously enjoy the merits of GNNs and LMs, several recent endeavors shed light on the co-training paradigm as shown in Figure 1. LMs and GNNs are combined in a cascaded [23, 24, 25] or nested [5] manner, establishing a unified end-to-end training paradigm to model the node attributes and graph topology jointly. Despite its theoretical appeal, the co-training method suffers from severe scalability issues as its memory complexity is proportional to the graph size as neighborhood texts are also encoded [9]. Motivated by the recent advancements in pre-training techniques, a novel inquiry emerges: Can we pre-train the language models to understand the graph topology? If we can effectively encode topological information into LMs through appropriate pre-training tasks, LMs could serve as the foundational model for learning on TAGs. Topological pre-trained LMs eliminate the explicit GNN aggregations, thereby circumventing the efficiency challenges encountered in the co-training paradigm. However, the design of suitable and effective pre-training tasks to encode valuable knowledge derived from intricate graph topology into LMs remains an open question. + +In order to delve deeply into the intricate interplay between textual semantics and graph topology within TAGs, we embark on an unprecedented exploration to investigate the optimal training paradigm for various TAGs. 
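The disconnect described above can be made concrete with a toy pipeline (all names are hypothetical; a real system would replace the bag-of-words encoder with a frozen PLM such as BERT):

```python
import numpy as np

# Node texts are encoded ONCE into static features; the GNN then aggregates
# them. Gradients from the GNN's loss never reach the text encoder, which is
# exactly the disconnected training procedure discussed in the text.
def encode_texts(texts, dim=8):
    """Stand-in for a frozen encoder: hash tokens into a bag-of-words vector."""
    feats = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            feats[i, hash(tok) % dim] += 1.0
    return feats  # treated as fixed, unlearnable input to the GNN

def mean_aggregate(adj, feats):
    """One message-passing step: average each node's neighbours (incl. self)."""
    a_hat = adj + np.eye(adj.shape[0])  # add self-loops
    return a_hat @ feats / a_hat.sum(axis=1, keepdims=True)

texts = ["graph neural network", "language model", "graph benchmark"]
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
h = mean_aggregate(adj, encode_texts(texts))
print(h.shape)  # (3, 8)
```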
Existing text-attributed graph datasets (e.g., Cora [8], WikiCS [26], Amazon Photo [27]) cannot meet our requirements, as they solely offer node attribute embeddings, devoid of the original textual sentences. To overcome this limitation, we meticulously curate a novel and comprehensive dataset, dubbed CS-TAG, comprising eight distinct TAGs sourced from diverse domains. This carefully crafted dataset serves as a solid foundation for future research endeavors, facilitating in-depth investigations in this burgeoning field. Moreover, extensive experiments are conducted on the CS-TAG dataset to provide comprehensive and reliable benchmarks. All aforementioned learning paradigms are thoroughly evaluated and analyzed. Experimental results and detailed discussions further reveal the underlying correlations between graph topology and textual attributes, drawing deep insight into the inherent characteristics of the TAGs. Our contributions are summarized as follows: + +1. To the best of our knowledge, CS-TAG is the first open dataset specifically designed for text-attributed graphs. TAGs from a variety of fields are collected, cleaned, and organized as the final structured dataset. We provide researchers with original links and data cleaning codes to facilitate their access and reprocessing of these datasets in accordance with their research interests and requirements. The entire CS-TAG project is publicly accessible as an open source repository on Github, accessible at https://github.com/sktsherlock/TAG-Benchmark. +2. In contrast to previous topology-driven graph learning models, our work underscores the vital significance of deep node attribute modeling. This novel perspective sheds light on the design of next-generation GNNs by emphasizing the incorporation of deep node attribute understanding. + +3. We investigate the novel problem of topological pre-training of language models, aiming at teaching LMs to understand topological structures. 
This innovative training paradigm exhibits remarkable performance on the CS-TAG dataset in terms of effectiveness and efficiency, which contributes to broadening the scope of language model pre-training. +4. Extensive experiments are conducted across eight diverse datasets, focusing on two downstream tasks: node classification and link prediction. Such experiments serve as a rigorous evaluation of various learning paradigms, providing precise and dependable benchmarks for future endeavors. + +# 2 Related Work + +In this section, we first briefly introduce three popular learning paradigms for TAGs. After that, the comparisons between the existing graph learning benchmarks and the proposed CS-TAG are also discussed. Refer to Appendix A for more detailed reviews of the related models. We have implemented most of the algorithms discussed in this section in the repository. + +PLM-based methods. The PLMs refer to universal language models that possess enhanced semantic understanding due to their pre-training on a vast corpus [28]. The early works on modeling textual attributes were based on shallow networks, e.g., Skip-Gram[29] and GloVe[30]. In recent years, the backbone networks dominated by the pre-training-fine-tuning paradigm are rapidly scaling up: from ELMo[31], GPT[32], to BERT [10], RoBERTa [12], DeBERTa [13]. The large-scale models, which get fully trained with massive data, demonstrate superior performances on general NLP tasks. One of the most critical usages of PLMs is text representation, where the underlying semantics of texts are captured by low-dimensional embeddings. On the TAGs, the PLMs use the local textual information of each node to learn a good representation for the downstream task [9]. + +GNN-based methods. As graph representation learning enjoys explosive growth in machine learning, numerous research works have been proposed for various tasks including node classification [15], link prediction [21], and so on. 
Graph neural networks are recognized as powerful tools for modeling graph data. Such methods (e.g., GCN [15], GAT [16], GraphSAGE [7], GIN [17], RevGAT [33]) learn effective message-passing mechanisms such that information between the nodes can be aggregated into expressive graph representations. For textual graph representation, GNNs generally adopt the cascaded architecture popularized by GraphSAGE: node features are encoded independently using text modeling tools (e.g., PLMs) and subsequently aggregated by GNNs to produce the final representation.

Co-training methods. The aforementioned two types of paradigms primarily focus on modeling partial information, which limits their ability to learn comprehensive features. Several recent endeavors propose to co-train GNNs and LMs to enjoy the merits of both sides. Specifically, LMs and GNNs are combined in a cascaded [24] or nested [5] manner. The outputs generated by LMs serve as inputs for GNNs, and vice versa. The parameters of both the LM and the GNN are updated through back-propagation of gradients from downstream tasks. However, this co-training paradigm suffers from serious scalability problems, as all neighbors need to be encoded from scratch by the LMs, incurring significant additional computational costs [9].

Benchmarks for graph representation learning. Several established graph benchmarks have been developed and widely adopted [34, 35, 27, 36]. However, when it comes to learning on TAGs, these benchmarks exhibit notable deficiencies. Firstly, a majority of these datasets lack raw textual information, limiting the investigation of the effectiveness of attribute modeling. Secondly, these datasets often neglect to explore the impact of text attribute modeling on GNNs. Thirdly, these datasets are predominantly small in scale. Thus, there is a compelling necessity to construct a comprehensive large-scale dataset for TAGs.

# 3 CS-TAG: A Comprehensive Dataset and Benchmark for TAGs

In this section, we commence by providing a concise summary of the constructed CS-TAG benchmark in Section 3.1. Subsequently, we present the details of the construction of CS-TAG in Section 3.2, including data collection, cleaning, and labeling. Moreover, we elucidate the details of the GNN-based, PLM-based, and Co-training learning paradigms in Section 3.3. Finally, the proposed topological pre-training of LMs is presented in Section 3.4.

# 3.1 Overview of CS-TAG

In order to address the limitations inherent in prior research, we propose the establishment of the text-attributed graph benchmark, dubbed CS-TAG, which serves as a standardized evaluation framework for assessing the efficacy of representation learning techniques on TAGs. To ensure scalability, CS-TAG includes datasets of varying sizes and incorporates scalable baselines consisting of PLMs, GNNs, and co-training methods. This enables researchers to evaluate the performance of their models across a broad range of dataset scales. To enhance usability, we provide a modular pipeline that simplifies the implementation of different models within CS-TAG. Such a modular architecture enables researchers to easily integrate their novel methods and compare them with existing approaches. In addition, we are committed to maintaining a public leaderboard for TAGs, serving as a repository for the latest advancements in the field. This platform will continuously update text-attributed graph datasets that possess practical and research value, fostering ongoing progress and collaboration within the community. Overall, CS-TAG serves as a scalable, unified, modular, and consistently updated evaluation framework for assessing the performance of representation learning methods on text-attributed graphs.

# 3.2 Dataset Construction

In order to thoroughly investigate the performance of different learning paradigms on TAGs, we conduct an extensive survey of various text-attributed graph datasets that have been previously utilized in the literature. Our observations reveal that many commonly employed node-level datasets are essentially text-attributed graphs. For instance, well-known citation graphs such as Cora, PubMed, CiteSeer [8], and ogbn-arxiv [34] are all TAGs. These datasets derive node attributes from textual information, such as the titles and abstracts of papers. Additionally, academic collaboration networks such as Coauthor CS/Physics [27] derive node attributes from keywords defined in the papers.

However, while these datasets are frequently employed by GNNs, they possess obvious inadequacies when exploring representation learning on TAGs.

![](images/2942cbbb9d4c36e94626c7a64461036c9ff1f7806fa90877d8e14fb4d12123c4.jpg)
Figure 2: The differences between the TAG datasets in CS-TAG (used for node classification) and the previous datasets.

Firstly, most of these datasets lack raw textual information, making it difficult to investigate the effectiveness of attribute modeling on them. Secondly, these datasets generally overlook the impact of text attribute modeling on GNNs: a majority of them employ simplistic bag-of-words models or traditional text encoding techniques like GloVe or Skip-Gram to represent text attributes, which are outdated by modern standards. Lastly, these datasets are predominantly small in scale, leading to a lack of differentiation between learning models across numerous datasets.

To address these limitations, we have taken proactive steps to collect and construct novel TAG datasets. Figure 2 illustrates the number of nodes/edges in the previous datasets and the proposed CS-TAG.
One can clearly see that the TAGs within CS-TAG are considerably larger than their counterparts. Here, we present the details of the shopping graphs as an example. We extract datasets from the Amazon dataset [37], including Books-Children/History, Ele-Computers/Photo, and Sports-Fitness. Nodes represent different types of items, while edges indicate items that are frequently purchased or browsed together. Node labels are assigned based on the product category. To explore the influence of attributes in text-attributed graphs, distinct text attributes have been provided for each of these datasets. For example, in the Books-Children/History dataset, node attributes are derived from the title and description of the respective books, such as "Description: Collection of Poetry; Title: The golden treasury of poetry". The Sports-Fitness dataset only incorporates node attributes from the titles of the sports items, such as "Girls ballet Tutu Neon Orange". In the Ele-Computers/Photo dataset, node attributes are obtained from high-rated reviews and product summaries, for instance,

Table 1: Statistics of the text-attributed graph datasets used in CS-TAG.
| | Dataset | Nodes | Edges | Classes | Domain | Modeling | Scale | Tasks | Raw Text |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Previous | WikiCS | 11,701 | 216,123 | 10 | Wikipedia | GloVe | Medium | Node classification | ✗ |
| | Cora | 2,708 | 5,429 | 7 | Academic | Bag of words | Small | Node classification | ✗ |
| | Citeseer | 3,327 | 4,732 | 6 | Academic | Bag of words | Small | Node classification | ✗ |
| | Pubmed | 19,717 | 44,338 | 3 | Academic | Bag of words | Medium | Node classification | ✗ |
| | ogbn-arxiv | 169,343 | 1,166,243 | 40 | Academic | Skip-Gram | Large | Node classification | ✗ |
| | Coauthor CS | 18,333 | 81,894 | 15 | Academic | Bag of words | Medium | Node classification | ✗ |
| | Coauthor Physics | 34,493 | 247,962 | 5 | Academic | Bag of words | Medium | Node classification | ✗ |
| | Amazon Photo | 7,487 | 119,043 | 10 | E-commerce | Bag of words | Small | Node classification | ✗ |
| | Amazon Computers | 13,381 | 245,778 | 8 | E-commerce | Bag of words | Medium | Node classification | ✗ |
| Ours | ogbn-arxiv-TA | 169,343 | 1,166,243 | 40 | Academic | PLMs | Large | Node classification | ✓ |
| | Books-Children | 76,875 | 1,554,578 | 24 | E-commerce | PLMs | Large | Node classification | ✓ |
| | Books-History | 41,551 | 358,574 | 12 | E-commerce | PLMs | Large | Node classification | ✓ |
| | Ele-Computers | 87,229 | 721,081 | 10 | E-commerce | PLMs | Large | Node classification | ✓ |
| | Ele-Photo | 48,362 | 500,928 | 12 | E-commerce | PLMs | Large | Node classification | ✓ |
| | Sports-Fitness | 173,055 | 1,773,500 | 13 | E-commerce | PLMs | Large | Node classification | ✓ |
| | CitationV8 | 1,106,759 | 6,120,897 | – | Academic | PLMs | Large | Link prediction | ✓ |
| | GoodReads | 676,084 | 8,582,324 | – | E-commerce | PLMs | Large | Link prediction | ✓ |
+ +"Great camera for the price! This camera takes crystal clear photos and is cheap too!". Further details on the dataset construction process can be found in Appendix B. + +Additionally, we construct two other datasets, CitationV8 and GoodReads, for the link prediction task. The CitationV8 dataset represents a citation network extracted from DBLP [38]. Node attributes in CitationV8 are derived from the titles and abstracts of research papers. Each edge signifies a citation relationship between two papers. The GoodReads dataset, on the other hand, originates from a prominent book review website. This dataset captures the "similar item" linking relationship between books and provides valuable information about the attributes of each book, such as the title and description. Therefore, we leverage the GoodReads dataset to construct link prediction tasks, which involve predicting relationships between similar books. Detailed descriptions of all the aforementioned datasets can be found in Appendix B. + +# 3.3 Conventional Learning Paradigms on TAGs + +Existing learning paradigms on TAGs can be broadly classified into three distinct categories: 1) GNN-based methods: These methods primarily leverage GNNs as the foundational model for capturing the underlying graph topology structures through message-passing mechanisms. 2) PLM-based methods: These approaches rely on prevalent pre-trained language models to capture the semantics from the textual node attributes, which excel in their ability to comprehend text semantics and exhibit strong transferability. 3) Co-training methods: This paradigm involves the joint learning of GNNs and LMs under a unified framework [23] to enjoy the merits from both sides. Next, we will give the formulaic definitions of these three paradigms. + +GNN-based Paradigm. 
GNNs are employed to propagate information across the graph nodes, allowing for the extraction of meaningful representations via message passing, which is formally defined as follows:

$$
\boldsymbol{h}_{u}^{(k+1)} = \operatorname{UPDATE}_{\omega}^{(k)}\left(\boldsymbol{h}_{u}^{(k)}, \operatorname{AGGREGATE}_{\omega}^{(k)}\left(\left\{\boldsymbol{h}_{v}^{(k)}, v \in \mathcal{N}(u)\right\}\right)\right) \tag{1}
$$

where $k$ indexes the GNN layer, $\mathcal{N}(u)$ denotes the set of neighbors of the target node $u$, and $\omega$ denotes the learnable parameters of the GNN. Note that the initial node feature vector $h_u^{(0)}$ is pre-learned by PLMs or another shallow text encoder (e.g., Skip-Gram). This attribute modeling phase is performed independently of the subsequent training of the GNN: gradients from the GNN training objective cannot be back-propagated into the PLMs to update their parameters, and this decoupling of PLMs and GNNs impedes the overall effectiveness.

PLM-based Paradigm. PLM-based methods leverage the effectiveness of pre-training techniques to enhance the modeling of the text within each node. The formulation of these methods is as follows:

$$
\boldsymbol{h}_{u}^{(k+1)} = \operatorname{UPDATE}_{\boldsymbol{\psi}}^{(k)}\left(\boldsymbol{h}_{u}^{(k)}\right) \tag{2}
$$

where $\psi$ denotes the learnable parameters of the PLM. PLMs advance the modeling of node text attributes. However, incorporating crucial topological context into PLM-based paradigms remains a challenge, particularly when the available textual data is limited.

![](images/95bf651f7d15fa0e1ff4c14bdaf23f34d8d63f198455a96184893a5523d177af.jpg)
Figure 3: Illustrations of different topological pre-training methods.
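To make Eq. (1) concrete, here is a minimal NumPy sketch of one message-passing layer, instantiating AGGREGATE as a neighbor mean and UPDATE as a ReLU of two linear maps. This is an illustrative simplification in the spirit of GraphSAGE, not the exact operator of any specific baseline; note that dropping the aggregation term recovers the node-wise update of Eq. (2):

```python
import numpy as np


def message_passing_layer(H, adj, W_self, W_neigh):
    """One step of Eq. (1): h_u^{(k+1)} = UPDATE(h_u^{(k)}, AGGREGATE({h_v^{(k)} : v in N(u)})).

    Illustrative choices: AGGREGATE = mean over neighbors,
    UPDATE = ReLU(W_self h_u + W_neigh m_u).
    """
    n = H.shape[0]
    M = np.zeros_like(H)
    for u in range(n):
        neigh = [v for v in range(n) if adj[u][v]]  # N(u) from the adjacency matrix
        if neigh:
            M[u] = H[neigh].mean(axis=0)            # AGGREGATE over N(u)
    return np.maximum(0.0, H @ W_self.T + M @ W_neigh.T)  # UPDATE


# Tiny example: 3 nodes on a path graph 0-1-2, 2-dim features, identity weights.
H0 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
H1 = message_passing_layer(H0, adj, np.eye(2), np.eye(2))
# H1 == [[1.0, 1.0], [1.0, 1.5], [1.0, 2.0]]
```

With identity weights the layer simply adds each node's mean neighbor feature to its own, which makes the role of the two operators in Eq. (1) easy to trace by hand.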
+ +![](images/79ef47df88efd1d063fd2fa2857180fbe494e12919105b0d6df97df505f8fdb8.jpg) + +![](images/cd2a24ce0ea6e7e9fdbc50d11e5a06876ab072387d29ad816e0f6cfa4fb48882.jpg) + +Co-training Paradigm. GNNs and LMs are jointly trained under a unified training framework: + +$$ +f _ {\Theta} (A, T) = \operatorname {G N N} _ {\omega} (A, \operatorname {P L M} _ {\psi} (T)), \Theta = \{\omega , \psi \} \tag {3} +$$ + +where $f$ denotes the learning function and $\Theta$ denotes the entire learnable parameters, which are derived from both the GNN and PLM modules. The outputs generated by the LM serve as input to the GNN. Notably, the gradients obtained from the GNN can be back-propagated to the LM, enabling the update of its parameters. However, the co-training method faces significant scalability challenges. This is primarily due to the memory complexity associated with encoding neighborhood texts, resulting in a memory requirement that scales linearly with the size of the graph [9]. + +# 3.4 Topological Pre-training of Language Models + +The incorporation of explicit GNN aggregations in the co-training paradigm introduces inherent challenges in terms of training complexity and resource requirements. This is primarily due to the simultaneous modeling of texts from both the center node and its neighbors. Therefore, this brings us to a question: is there a training paradigm to enjoy the merits of graph topology while avoiding the explicit GNN operations? Inspired by the recent advancements in pre-training techniques [39, 40], our motivation lies in teaching language models to understand the topological structures. Three topological pre-training tasks are proposed to impart graph structures into the LMs, enabling them to better comprehend and capture the underlying topology. + +Topological Masked Language Model (TMLM). 
Inspired by the task of masked language modeling, we propose a novel topological masked language model (TMLM) to capture first-order connections at the token level. Given a center node $c$ and one of its neighbors $n$, their corresponding texts are formally defined as $T^{(c)} = \{t_1^{(c)}, t_2^{(c)}, \ldots, t_k^{(c)}\}$ and $T^{(n)} = \{t_1^{(n)}, t_2^{(n)}, \ldots, t_u^{(n)}\}$, respectively. We randomly replace a subset of tokens in $T^{(c)}$ and $T^{(n)}$ with a special token [MASK]. The objective of TMLM is to predict the masked tokens. Let $\Phi^{(s)} = \{\phi_1^{(c)}, \phi_2^{(c)}, \ldots, \phi_{m-1}^{(n)}, \phi_m^{(n)}\}$ denote the indices of the $m$ masked tokens in $T^{(c)}$ and $T^{(n)}$. Let $T_{\Phi}^{(s)}$ denote the set of masked tokens in $T^{(c)}$ and $T^{(n)}$, and $T_{-\Phi}^{(s)}$ denote the set of observed (unmasked) tokens. The objective of TMLM is:

$$
\mathcal{L}_{\mathrm{tmlm}}\left(\boldsymbol{T}_{\Phi}^{(s)} \mid \boldsymbol{T}_{-\Phi}^{(s)}\right) = \frac{1}{m} \sum_{i=1}^{m} \log p\left(t_{\phi_{i}} \mid \boldsymbol{T}_{-\Phi}^{(s)}; \theta\right), \tag{4}
$$

where $\theta$ denotes the learnable parameters.

Topological Contrastive Learning (TCL). Inspired by contrastive learning [41, 42, 43, 44], we propose a novel topological contrastive learning (TCL) task to capture first-order topological information at the node level. Given a center node $c$ and one of its neighbors $n$, their node-level (sentence/document-level) embeddings are derived from the [CLS] token representations of their texts and are formally denoted $h^c$ and $h^n$. The objective of TCL is to bring the center node $h^c$ closer to its neighbor $h^n$ while pushing it farther away from other nodes. Denote the cosine similarity function as $\mathrm{sim}(h^c, h^n) = h^{c^\top}h^n / \| h^c\| \| h^n\|$.
The objective of TCL is:

$$
\mathcal{L}_{\mathrm{tcl}} = - \log \frac{\exp\left(\operatorname{sim}\left(\boldsymbol{h}^{c}, \boldsymbol{h}^{n}\right) / \tau\right)}{\sum_{n^{\prime}=1, n^{\prime} \neq n}^{N} \exp\left(\operatorname{sim}\left(\boldsymbol{h}^{c}, \boldsymbol{h}^{n^{\prime}}\right) / \tau\right)}, \tag{5}
$$

where $\tau$ denotes the temperature parameter and $N$ denotes the batch size.

Topological Deepwalk Learning (TDK). TMLM and TCL mainly capture low-order structural information, while the higher-order structural information still needs to be captured by designing

Table 2: Accuracy comparison among GNNs on ogbn-arxiv-TA with different PLMs' node features. "Scale" means the different versions of PLMs (number of parameters). "Diff" denotes the performance gap between the best and worst performers. We mark the best performer in each row with blue bold font and the best performer in each column with black bold font.
| Scale | PLMs | GCN | GAT | SAGE | RevGAT | NFormer | GIN | JKNet | APPNP | MoNet | MLP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Small | BERT-Tiny | 72.03 | 72.25 | 72.35 | 72.52 | 71.91 | 68.42 | 69.50 | 71.63 | 45.13 | 57.22 |
| | ELECTRA | 68.45 | 70.97 | 69.63 | 71.12 | 69.45 | 58.09 | 62.87 | 59.55 | 36.65 | 36.58 |
| | DistilBERT | 73.39 | 73.48 | 74.48 | 74.68 | 73.56 | 72.30 | 71.44 | 74.01 | 50.51 | 68.11 |
| Base | ELECTRA | 70.81 | 71.67 | 70.82 | 71.96 | 70.43 | 64.88 | 63.41 | 65.62 | 38.91 | 48.56 |
| | BERT | 73.30 | 73.40 | 74.14 | 74.59 | 72.80 | 71.94 | 70.08 | 73.90 | 46.90 | 67.35 |
| | RoBERTa | 73.56 | 73.38 | 74.52 | 74.82 | 73.12 | 72.63 | 69.40 | 74.01 | 44.53 | 69.31 |
| | DeBERTa | 68.15 | 66.56 | 67.58 | 68.26 | 67.11 | 62.05 | 44.16 | 52.37 | 29.67 | 47.07 |
| Large | ELECTRA | 70.44 | 71.01 | 70.72 | 72.56 | 70.04 | 64.47 | 58.34 | 64.52 | 37.26 | 47.72 |
| | BERT | 73.25 | 73.37 | 74.15 | 74.68 | 73.12 | 71.88 | 68.70 | 73.53 | 43.31 | 66.85 |
| | RoBERTa | 73.95 | 73.72 | 74.64 | 74.99 | 73.12 | 73.10 | 68.10 | 74.17 | 44.01 | 69.51 |
| | DeBERTa | 72.57 | 71.50 | 73.22 | 73.59 | 71.88 | 71.25 | 54.41 | 69.28 | 33.53 | 66.28 |
| Diff | | 5.80 | 7.16 | 7.06 | 7.54 | 6.45 | 15.01 | 27.28 | 21.80 | 20.84 | 32.93 |

suitable tasks. Considering that algorithms like Deepwalk [45] can capture higher-order structural information in the graph, we use the node representations learned by Deepwalk to augment the representations learned by the LM. We first feed the whole graph structure into Deepwalk to obtain the corresponding representation $\pmb{k}^c$ of each node $c$. The objective of TDK is to bring the center node $\pmb{h}^c$ closer to its Deepwalk representation $\pmb{k}^c$:

$$
\mathcal{L}_{\mathrm{tdk}} = - \log \frac{\exp\left(\operatorname{sim}\left(\boldsymbol{h}^{c}, \boldsymbol{k}^{c}\right) / \tau\right)}{\sum_{c^{\prime}=1, c^{\prime} \neq c}^{N} \exp\left(\operatorname{sim}\left(\boldsymbol{h}^{c}, \boldsymbol{k}^{c^{\prime}}\right) / \tau\right)}, \tag{6}
$$

where $\tau$ and $N$ are defined as in Eq. (5).

# 4 Experiments

Baselines. (1) For GNN-based methods, we select 9 popular GNN models: GCN [15], GAT [16], GraphSAGE [7], RevGAT [33], NodeFormer [46], GIN [17], JKNet [18], MoNet [47], and APPNP [48]. (2) For PLM-based methods, we select 5 PLM families at different parameter scales: a) small-scale models, including BERT-Tiny [10], ELECTRA-Small [11], and DistilBERT [14]; b) base-scale models, including BERT-Base [10], ELECTRA-Base [11], RoBERTa-Base [12], and DeBERTa-Base [13]; c) large-scale models, including BERT-Large [10], ELECTRA-Large [11], RoBERTa-Large [12], and DeBERTa-Large [13]. (3) For Co-training methods, due to scalability constraints, we only explore the effectiveness of this pipeline on combinations of BERT-Tiny with GCN and GraphSAGE. (4) For the topological pre-training of LMs, we conduct experiments on various datasets and different PLM base models. In addition, we iteratively train the proposed three pre-training tasks at the batch level (named TMDC) in a multi-task learning framework. Please refer to Appendix A.2 for more details.

Implementation details.
GNNs are mainly implemented based on the DGL toolkit [49]. PLMs are obtained from Huggingface [50] and trained under a unified framework. Considering the recent rise of parameter-efficient fine-tuning, we only fine-tune the last four encoder layers of the large-scale language models. Implementation details and hyperparameter selections are provided in Appendix C.

**Evaluation metrics.** We investigate the performance of the different baselines on two tasks: node classification and link prediction. For the node classification task, we use Accuracy and F1-Score to evaluate model performance. For the link prediction task, we use MRR, Hits@10, Hits@50, and Hits@100 as metrics. Due to space limitations, we present part of the node classification results in the main paper; the remaining node-level experiments and the link prediction results are presented in Appendices D.1 and D.2. In addition to the aforementioned datasets, we also conduct experiments on other types of datasets, as described in Appendix D.6 (the large-scale ogbn-papers100M dataset) and Appendix D.7 (two social network datasets), where readers can find detailed information on these datasets and the corresponding experimental results.

Table 3: Node classification results of the three learning paradigms on six datasets. Results on Sports are F1 scores; all other datasets report Accuracy. We bold the best results for each dataset.
| Dataset | PLM: Tiny | PLM: Base | GNN: T-GCN | GNN: B-GCN | GNN: T-SAGE | GNN: B-SAGE | Co-train: GCN(T) | Co-train: SAGE(T) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Arxiv | 70.83 | 72.96 | 72.03 | 73.30 | 72.35 | 74.14 | 69.22↓ | 73.57↑ |
| Children | 49.85 | 59.91 | 57.07 | 58.11 | 57.57 | 58.74 | 54.75↓ | 59.70↑ |
| History | 83.06 | 86.09 | 84.52 | 85.04 | 84.79 | 85.12 | 83.52↓ | 85.09↑ |
| Photo | 73.75 | 77.53 | 82.42 | 82.70 | 83.25 | 83.27 | 83.32↑ | 86.64↑ |
| Computers | 58.32 | 60.40 | 87.43 | 87.86 | 87.90 | 88.30 | 83.93↓ | 86.04↓ |
| Sports | 81.47 | 86.02 | 84.93 | 86.16 | 87.06 | 87.34 | 85.06↑ | 85.87↓ |
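The GCN(T)/SAGE(T) columns above rely on the end-to-end composition of Eq. (3), where the PLM output feeds the GNN and gradients flow into both parameter sets. The pipeline can be sketched with toy stand-ins for both modules; the functions below are illustrative, not the actual BERT-Tiny/GCN implementation (a real system would backpropagate through a Transformer encoder):

```python
import numpy as np

rng = np.random.default_rng(0)


def plm(texts, psi):
    """Toy stand-in for PLM_psi: hand-crafted text features times a learned matrix."""
    feats = np.array([[len(t), t.count(" ") + 1] for t in texts], dtype=float)
    return feats @ psi  # (n_nodes, d)


def gnn(A, X, omega):
    """Toy stand-in for GNN_omega: one mean-aggregation layer plus a linear head."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return ((A @ X) / deg) @ omega  # (n_nodes, n_classes)


def f(A, texts, theta):
    # Eq. (3): f_Theta(A, T) = GNN_omega(A, PLM_psi(T)), Theta = {omega, psi}
    return gnn(A, plm(texts, theta["psi"]), theta["omega"])


texts = ["a children's poetry book", "nursery rhymes", "history of rome"]
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
theta = {"psi": rng.normal(size=(2, 4)), "omega": rng.normal(size=(4, 2))}
logits = f(A, texts, theta)
print(logits.shape)  # → (3, 2)
```

Note that every forward pass re-encodes the texts of all aggregated neighbors, which is exactly the memory and time cost that makes co-training hard to scale, as analyzed in this subsection.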

# 4.1 Impact of Static Modeling of Attributes on GNNs

In this subsection, we analyze the impact of different node attribute modeling methods on downstream GNNs. Table 2 reports node classification results on ogbn-arxiv-TA for different GNNs given different PLMs' features. The results on other datasets can be found in Appendix D.1. From Table 2, we find that RevGAT performs the best across all initial node features, while GAT and SAGE exhibit the second-best performance. These three are relatively less affected by the choice of node features, with best-worst gaps of $7.54\%$, $7.16\%$, and $7.06\%$, respectively. JKNet, APPNP, MoNet, and MLP are more strongly influenced by the initial node features, with gaps all exceeding 20 points. On the other hand, the node features encoded by RoBERTa, BERT, and DistilBERT generally perform better across all types of baselines. DeBERTa, which performs better on many downstream NLP-related tasks, is less effective here. This may be because DeBERTa saw a smaller corpus during pre-training, limiting its ability to capture the semantics when modeling the text directly on downstream tasks. Furthermore, we compare traditional shallow text encoders (e.g., Skip-Gram) with PLMs in Appendix D.4 for a more comprehensive analysis of the impact of text modeling on downstream GNNs.

Recently, LLMs (Large Language Models) have continued to energize areas such as knowledge graphs [51] and recommender systems [52]. It remains an open question how to successfully apply LLMs to text-attributed graph learning. We have conducted a preliminary exploration of how to use LLMs to advance representation learning on TAGs. Please refer to Appendix D.8 for the detailed results and discussions.

# 4.2 Pitfalls of Co-Training Paradigm

In this subsection, we compare the Co-training paradigm with the PLM-based and GNN-based paradigms on node classification tasks. Tiny and Base in the PLM columns represent BERT-Tiny and BERT-Base, respectively.
T-GCN, T-SAGE and B-GCN, B-SAGE denote the node features of BERT-Tiny and BERT-Base fed to a downstream GCN and GraphSAGE, respectively. GCN(T) and SAGE(T) denote co-training BERT-Tiny with GCN and SAGE, respectively. We compare GCN(T) and SAGE(T) with the corresponding T-GCN and T-SAGE. As shown in Table 3, SAGE(T) improves over T-SAGE on four of the six datasets, with a maximum improvement of $3.39\%$ on the Photo dataset. However, GCN(T) performs worse than T-GCN on most datasets and on ogbn-arxiv-TA is even $1.61\%$ weaker than BERT-Tiny alone. The Co-training framework requires simultaneous training of PLMs and GNNs, which significantly increases the memory requirement and time cost. To make the co-training of PLMs and GNNs feasible, we reduce either the batch size or the number of sampled neighbors. This limited scalability leads to a significant reduction in the number of neighbors available for GNN aggregation, which may compromise the effectiveness of message passing. To analyze the impact of scalability on the Co-training method, we study the effect of the number of neighbors sampled per GNN layer in Figure 4. As can be seen there, model performance mainly tends to increase as the number of neighbors increases. Detailed discussions on efficiency and scalability can be found in Appendix D.3.

# 4.3 Comparing PLM-based Methods with GNN-based Methods

In this subsection, we compare the PLM-based methods and the GNN-based methods on different datasets. As shown in Table 4, the GNNs column indicates the best result over all GNNs for a given PLM node feature. On the Children and History datasets, the PLM-based methods work better than

Table 4: Node classification accuracy comparison among PLM-based, GNN-based, and topological pre-training based methods on four datasets. The best method for each PLM on each dataset is shown in bold.
**Arxiv**

| Scale | Model | PLM | GNNs | TMLM | TDK | TCL | TMDC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Small | BERT-Tiny | 70.83 | 72.52 | 70.83 | 71.50 | 71.55 | 71.17 |
| | ELECTRA | 71.26 | 71.12 | 72.65 | 72.83 | 73.06 | 73.71 |
| | DistilBERT | 72.50 | 74.68 | 73.53 | 74.38 | 74.89 | 75.50 |
| Base | ELECTRA | 72.67 | 71.96 | 73.51 | 74.33 | 74.26 | 75.56 |
| | BERT | 72.96 | 74.59 | 73.97 | 74.23 | 74.87 | 76.11 |
| | RoBERTa | 73.10 | 74.82 | 74.25 | 74.57 | 75.37 | 75.97 |
| | DeBERTa | 73.82 | 68.26 | 74.26 | 75.01 | 75.15 | 75.99 |
| Large | ELECTRA | 72.42 | 72.56 | 74.76 | 73.82 | 74.17 | 75.58 |
| | BERT | 73.24 | 74.68 | 75.01 | 74.31 | 75.15 | 75.75 |
| | RoBERTa | 73.83 | 74.99 | 75.18 | 74.58 | 75.48 | 75.73 |
| | DeBERTa | 74.57 | 73.59 | 75.92 | 75.20 | 75.58 | 76.20 |

**History**

| Scale | Model | PLM | GNNs | TMLM | TDK | TCL | TMDC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Small | BERT-Tiny | 83.06 | 85.03 | 85.76 | 85.79 | 86.06 | 86.88 |
| | ELECTRA | 84.18 | 83.11 | 84.54 | 84.42 | 84.57 | 85.18 |
| | DistilBERT | 85.81 | 85.67 | 85.76 | 86.29 | 86.28 | 86.88 |
| Base | ELECTRA | 85.64 | 83.79 | 85.77 | 85.88 | 86.62 | 86.41 |
| | BERT | 86.09 | 85.28 | 86.24 | 86.46 | 86.80 | 86.82 |
| | RoBERTa | 85.85 | 85.69 | 86.19 | 86.32 | 86.95 | 86.96 |
| | DeBERTa | 86.16 | 82.31 | 86.00 | 86.46 | 87.01 | 86.94 |
| Large | ELECTRA | 86.13 | 83.56 | 86.39 | 86.49 | 86.82 | 86.28 |
| | BERT | 86.24 | 85.15 | 86.47 | 86.73 | 86.93 | 86.94 |
| | RoBERTa | 86.41 | 85.23 | 86.72 | 86.75 | 87.11 | 87.22 |
| | DeBERTa | 87.00 | 84.89 | 87.11 | 87.26 | 87.30 | 87.32 |

**Children**

| Scale | Model | PLM | GNNs | TMLM | TDK | TCL | TMDC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Small | BERT-Tiny | 49.85 | 57.86 | 54.27 | 53.43 | 54.11 | 54.66 |
| | ELECTRA | 57.03 | 56.42 | 57.35 | 56.92 | 56.88 | 58.55 |
| | DistilBERT | 59.90 | 59.33 | 60.03 | 60.23 | 60.60 | 61.38 |
| Base | ELECTRA | 59.09 | 56.42 | 59.93 | 60.27 | 60.21 | 60.83 |
| | BERT | 59.91 | 58.74 | 60.34 | 60.43 | 60.73 | 61.43 |
| | RoBERTa | 59.80 | 59.01 | 60.19 | 60.71 | 61.47 | 61.83 |
| | DeBERTa | 60.26 | 50.72 | 60.73 | 61.39 | 61.92 | 62.20 |
| Large | ELECTRA | 58.28 | 56.59 | 60.51 | 59.31 | 59.29 | 61.31 |
| | BERT | 60.65 | 58.90 | 60.84 | 61.15 | 61.50 | 62.06 |
| | RoBERTa | 60.93 | 59.26 | 62.11 | 61.95 | 62.06 | 63.24 |
| | DeBERTa | 61.61 | 56.34 | 61.91 | 62.51 | 62.37 | 62.46 |

**Photo**

| Scale | Model | PLM | GNNs | TMLM | TDK | TCL | TMDC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Small | BERT-Tiny | 73.75 | 84.12 | 74.30 | 73.99 | 73.86 | 74.92 |
| | ELECTRA | 76.58 | 83.12 | 76.09 | 76.89 | 77.74 | 77.83 |
| | DistilBERT | 77.51 | 84.34 | 77.81 | 79.69 | 81.85 | 82.52 |
| Base | ELECTRA | 77.84 | 82.98 | 78.27 | 80.18 | 81.47 | 82.82 |
| | BERT | 77.53 | 84.46 | 78.54 | 81.04 | 82.85 | 84.09 |
| | RoBERTa | 78.11 | 84.59 | 78.33 | 81.26 | 82.47 | 83.04 |
| | DeBERTa | 78.37 | 81.44 | 79.27 | 81.34 | 83.07 | 83.80 |
| Large | ELECTRA | 77.25 | 83.00 | 79.21 | 78.44 | 79.56 | 81.32 |
| | BERT | 77.72 | 84.21 | 78.95 | 79.26 | 80.74 | 81.14 |
| | RoBERTa | 79.60 | 85.12 | 80.32 | 80.82 | 81.47 | 82.55 |
| | DeBERTa | 79.63 | 82.55 | 80.45 | 81.33 | 82.33 | 82.70 |

![](images/408493d26b2216d84a2d6186c6a11386c80e051b47dc40c303b6bb59578dce64.jpg)
Figure 4: Node classification on four datasets, analyzing the sensitivity of the two Co-training models, GCN(T) and SAGE(T), to the number of neighbors sampled per GNN layer. "Fanout" denotes the number of neighboring nodes to which the center node is directly connected.

![](images/3b18c2c45267a000c8116511e095d639767933565fe789b89ab6844bce2aa5e9.jpg)

![](images/3897db693207e567e440fa4d64b6f4e7930d238743cb6b93d94578afaeaae8fe.jpg)

![](images/e40376b9cc90a9fdac5ecbf4126789566922e7aa8bd68e82c4383fa45d1153fd.jpg)

the GNN-based ones. This is probably because their text attributes are richly informative, so the text attributes largely reflect the linking relationships between the nodes. PLM-based methods, which model text attributes more effectively, are therefore at an advantage in this case. On the Photo dataset, by contrast, the GNN-based methods outperform the PLM-based methods across the board. This may be because the text attributes of the Photo dataset are composed of user reviews of the products; some lower-quality reviews introduce noise into the text attributes, which reduces the effectiveness of the PLM-based methods. To analyze the importance of node text attribute selection, we conduct further experiments in Appendix D.9.

# 4.4 Validity of Topological Pre-training

In this subsection, we compare the three different pre-training methods with the PLM-based and GNN-based methods. We observe that on almost all PLMs and all datasets, different degrees of improvement can be achieved with the proposed pre-training methods. Among the three individual pre-training tasks, we find that TCL leads to greater improvement in most cases.
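For reference, the TCL objective of Eq. (5) is an InfoNCE-style loss. An in-batch NumPy sketch follows; it is illustrative only, and unlike Eq. (5) it keeps the positive pair in the denominator, as is common in practice:

```python
import numpy as np


def tcl_loss(h_center, h_neighbor, tau=0.1):
    """In-batch sketch of Eq. (5): row i of h_neighbor is a graph neighbor of
    row i of h_center; the other rows in the batch act as negatives."""
    hc = h_center / np.linalg.norm(h_center, axis=1, keepdims=True)
    hn = h_neighbor / np.linalg.norm(h_neighbor, axis=1, keepdims=True)
    sim = hc @ hn.T / tau                       # cosine similarities / temperature
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # positives sit on the diagonal


rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = tcl_loss(z, z)                         # matched (center, neighbor) pairs
mismatched = tcl_loss(z, rng.normal(size=(8, 16)))
print(aligned < mismatched)  # → True
```

The TDK loss of Eq. (6) has the same shape, with the Deepwalk embeddings $k^c$ taking the place of the neighbor embeddings.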
The difference in performance between TMLM and TCL, both of which capture low-order topological structure information, indicates that learning topological structure knowledge from a node-level perspective may work better than from a token-level one. TDK, on the other hand, performs second only to TCL in most cases, which reflects to some extent that PLMs can benefit from knowledge of the complex topology. Further, we try combining these three pre-training tasks to optimize the model jointly (named TMDC). We first perform the token-level TMLM task on the PLMs. The enhanced PLMs are then jointly optimized using both the TCL and TDK tasks (detailed implementation can be found in Appendix A.2). From Table 4, we observe that TMDC further improves performance in most cases. This indicates that different pre-training strategies can teach the PLM different topological knowledge from different perspectives, and this leads us to explore more pre-training tasks on TAGs in the future. To further analyze the effectiveness of these topological pre-training methods, we also test such topologically pre-trained models in other scenarios (e.g., semi-supervised learning and few-shot learning). Please refer to Appendix D.5 for detailed experimental results and discussions.
To achieve the optimal item representation, a prerequisite is to capture the inherent characteristics of a given item by modeling its metadata, such as its title and description. Simultaneously, it is imperative to incorporate valuable and unique signals derived from the graph's topological connections into the representation learning process. Given that real-world graph topology is usually shaped by human behavior, the topology carries unique human perceptions and knowledge beyond pure semantics (e.g., "diaper" and "beer" are semantically different but are connected in the co-purchase graph). Consequently, it is essential to delve into the effective and efficient fusion of the intrinsic semantics within individual nodes and the topological connections among different nodes on text-attributed graphs. Moreover, the scope of our research also covers domains such as user behavior-enhanced sponsored search, including AdsGNN [24], HBGLR [53], and PASS [54]. + +# 6 Conclusion + +We establish the first comprehensive benchmark, CS-TAG, specifically designed to explore representation learning on TAGs. We collect and provide eight text-attributed graph datasets to encourage the NLP and GNN communities to investigate the data together. Our benchmark provides a comprehensive evaluation of different learning paradigms, validating their effectiveness and limitations. We will also continue to mine and construct more research-worthy TAGs to advance the continued healthy development of the field. + +# Acknowledgement + +This research was funded by the National Science Foundation of China (No. 62172443), the Open Project of Xiangjiang Laboratory (22XJ03010, 22XJ03005), the Science and Technology Major Project of Changsha (No. kh2202004), the Hunan Provincial Natural Science Foundation of China (No. 2022JJ30053), and the High Performance Computing Center of Central South University.
+ +# References + +[1] Yanqiao Zhu, Yuanqi Du, Yinkai Wang, Yichen Xu, Jieyu Zhang, Qiang Liu, and Shu Wu. A survey on deep graph generation: Methods and applications. arXiv preprint arXiv:2203.06714, 2022. +[2] Hao Miao, Jiaxing Shen, Jiannong Cao, Jiangnan Xia, and Senzhang Wang. MBA-stnet: Bayes-enhanced discriminative multi-task learning for flow prediction. IEEE Transactions on Knowledge and Data Engineering, 2022. +[3] Zijian Zhang, Xiangyu Zhao, Hao Miao, Chunxu Zhang, Hongwei Zhao, and Junbo Zhang. Autostl: Automated spatio-temporal multi-task learning. arXiv preprint arXiv:2304.09174, 2023. +[4] Peiyan Zhang, Yuchen Yan, Chaozhuo Li, Senzhang Wang, Xing Xie, Guojie Song, and Sunghun Kim. Continual learning on dynamic graphs via parameter isolation. In Proceedings of SIGIR, 2023. +[5] Junhan Yang, Zheng Liu, Shitao Xiao, Chaozhuo Li, Defu Lian, Sanjay Agrawal, Amit Singh, Guangzhong Sun, and Xing Xie. Graphformers: Gnn-nested transformers for representation learning on textual graph. Advances in Neural Information Processing Systems, 2021. +[6] Xiaoxin He, Xavier Bresson, Thomas Laurent, and Bryan Hooi. Explanations as features: Llm-based features for text-attributed graphs. arXiv preprint arXiv:2305.19523, 2023. +[7] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of NeurIPS, 2017. +[8] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 2008. +[9] Jianan Zhao, Meng Qu, Chaozhuo Li, Hao Yan, Qian Liu, Rui Li, Xing Xie, and Jian Tang. Learning on large-scale text-attributed graphs via variational inference. arXiv preprint arXiv:2210.14709, 2022. +[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, 2018. +[11] Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 
Electra: Pre-training text encoders as discriminators rather than generators. In Proceedings of ICLR, 2020. +[12] Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale. In Proceedings of ACL, 2020. +[13] Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: Decoding-enhanced bert with disentangled attention. In Proceedings of ICLR, 2021. +[14] Victor Sanh, Lysandre Debut, Julien Chaumont, and Thomas Wolf. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. ArXiv, 2019. +[15] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In Proceedings of ICLR, 2017. +[16] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio, et al. Graph attention networks. In Proceedings of ICLR, 2018. +[17] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In Proceedings of ICLR, 2019. +[18] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In Proceedings of ICML, 2018. + +[19] Jinlong Du, Senzhang Wang, Hao Miao, and Jiaqiang Zhang. Multi-channel pooling graph neural networks. In *IJCAI*, 2021. +[20] Zhongyu Huang, Yingheng Wang, Chaozhuo Li, and Huiguang He. Going deeper into permutation-sensitive graph neural networks. In Proceedings of ICML, pages 9377-9409, 2022. +[21] Rui Li, Jianan Zhao, Chaozhuo Li, Di He, Yiqi Wang, Yuming Liu, Hao Sun, Senzhang Wang, Weiwei Deng, Yanming Shen, et al. House: Knowledge graph embedding with householder parameterization. In Proceedings of ICML, 2022. +[22] Yi Zhao, Chaozhuo Li, Jiquan Peng, Xiaohan Fang, Feiran Huang, Senzhang Wang, Xing Xie, and Jibing Gong. 
Beyond the overlapping users: Cross-domain recommendation via adaptive anchor link learning. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1488-1497, 2023. +[23] Jason Zhu, Yanling Cui, Yuming Liu, Hao Sun, Xue Li, Markus Pelger, Tianqi Yang, Liangjie Zhang, Ruofei Zhang, and Huasha Zhao. Textgnn: Improving text encoder via graph neural network in sponsored search. In Proceedings of WebConf, 2021. +[24] Chaozhuo Li, Bochen Pang, Yuming Liu, Hao Sun, Zheng Liu, Xing Xie, Tianqi Yang, Yanling Cui, Liangjie Zhang, and Qi Zhang. Adsgnn: Behavior-graph augmented relevance modeling in sponsored search. In Proceedings of SIGIR, 2021. +[25] Shuxian Bi, Chaozhuo Li, Xiao Han, Zheng Liu, Xing Xie, Haizhen Huang, and Zengxuan Wen. Leveraging bidding graphs for advertiser-aware relevance modeling in sponsored search. In Proceedings of EMNLP, 2021. +[26] Péter Mernyei and Cătălina Cangea. Wiki-cs: A wikipedia-based benchmark for graph neural networks. arXiv preprint arXiv:2007.02901, 2020. +[27] Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Gunnemann. Pitfalls of graph neural network evaluation. arXiv preprint arXiv:1811.05868, 2018. +[28] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 2020. +[29] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. arXiv preprint arXiv:1310.4546, 2013. +[30] Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In Proceedings of EMNLP, 2014. +[31] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 
Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018. +[32] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018. +[33] Guohao Li, Matthias Müller, Bernard Ghanem, and Vladlen Koltun. Training graph neural networks with 1000 layers. In Proceedings of ICML, 2021. +[34] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 2020. +[35] Scott Freitas, Yuxiao Dong, Joshua Neil, and Duen Horng Chau. A large-scale database for graph representation learning. arXiv preprint arXiv:2011.07682, 2020. +[36] Qinkai Zheng, Xu Zou, Yuxiao Dong, Yukuo Cen, Da Yin, Jiarong Xu, Yang Yang, and Jie Tang. Graph robustness benchmark: Benchmarking the adversarial robustness of graph machine learning. arXiv preprint arXiv:2111.04314, 2021. + +[37] Jianmo Ni, Jiacheng Li, and Julian McAuley. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of EMNLP-IJCNLP, 2019. +[38] Jie Tang, Jing Zhang, Limin Yao, Juanzi Li, Li Zhang, and Zhong Su. Arnetminer: extraction and mining of academic social networks. In Proceedings of KDD, 2008. +[39] Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964, 2020. +[40] Mingda Chen, Jingfei Du, Ramakanth Pasunuru, Todor Mihaylov, Srini Iyer, Veselin Stoyanov, and Zornitsa Kozareva. Improving in-context few-shot learning via self-supervised training. arXiv preprint arXiv:2205.01703, 2022. +[41] Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. Graph contrastive learning with augmentations. In Proceedings of NeurIPS, 2020. 
+[42] Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. Deep graph contrastive representation learning. arXiv preprint arXiv:2006.04131, 2020. +[43] Hao Yan, Senzhang Wang, Jun Yin, Chaozhuo Li, Junxing Zhu, and Jianxin Wang. Hierarchical graph contrastive learning. In Proceedings of ECML PKDD, 2023. +[44] Senzhang Wang, Hao Yan, Jinlong Du, Jun Yin, Junxing Zhu, Chaozhuo Li, and Jianxin Wang. Adversarial hard negative generation for complementary graph contrastive learning. In Proceedings of SDM, 2023. +[45] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of KDD, 2014. +[46] Qitian Wu, Wentao Zhao, Zenan Li, David P Wipf, and Junchi Yan. Nodeformer: A scalable graph structure learning transformer for node classification. Proceedings of NeurIPS, 2022. +[47] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of CVPR, 2017. +[48] Johannes Gasteiger, Aleksandar Bojchevski, and Stephan Gunnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In Proceedings of ICLR, 2019. +[49] Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. Deep graph library: A graph-centric, highly-performant package for graph neural networks. arXiv preprint arXiv:1909.01315, 2019. +[50] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of EMNLP, 2020. 
+[51] Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, and Xindong Wu. Unifying large language models and knowledge graphs: A roadmap. arXiv preprint arXiv:2306.08302, 2023. +[52] Wenjie Wang, Xinyu Lin, Fuli Feng, Xiangnan He, and Tat-Seng Chua. Generative recommendation: Towards next-generation recommender paradigm. arXiv preprint arXiv:2304.03516, 2023. +[53] Bochen Pang, Chaozhuo Li, Yuming Liu, Jianxun Lian, Jianan Zhao, Hao Sun, Weiwei Deng, Xing Xie, and Qi Zhang. Improving relevance modeling via heterogeneous behavior graph learning in bing ads. In Proceedings of KDD, 2022. + +[54] Zhoujin Tian, Chaozhuo Li, Zhiqiang Zuo, Zengxuan Wen, Lichao Sun, Xinyue Hu, Wen Zhang, Haizhen Huang, Senzhang Wang, Weiwei Deng, et al. Pass: Personalized advertiser-aware sponsored search. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023. +[55] Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In Proceedings of ICML, 2019. +[56] Wentao Zhang, Ziqi Yin, Zeang Sheng, Yang Li, Wen Ouyang, Xiaosen Li, Yangyu Tao, Zhi Yang, and Bin Cui. Graph attention multi-layer perceptron. In Proceedings of KDD, 2022. +[57] Fabrizio Frasca, Emanuele Rossi, Davide Eynard, Ben Chamberlain, Michael Bronstein, and Federico Monti. Sign: Scalable inception graph neural networks. arXiv preprint arXiv:2004.11198, 2020. +[58] Stefano Cresci, Roberto Di Pietro, Marinella Petrocchi, Angelo Spognardi, and Maurizio Tesconi. Fame for sale: Efficient detection of fake twitter followers. Decision Support Systems, 2015. +[59] Shangbin Feng, Herun Wan, Ningnan Wang, Jundong Li, and Minnan Luo. Twibot-20: A comprehensive twitter bot detection benchmark. In Proceedings of CIKM, 2021. +[60] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 
Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 2020. +[61] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239, 2022. +[62] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. + +# A Baselines and Model Implementation Details + +# A.1 Baselines + +We provide detailed descriptions of the GNN and PLM baselines used in the main experiments as follows. + +- GCN. [15] Graph Convolutional Network (GCN) is a classical model that works by performing a linear approximation to spectral graph convolutions. +- GraphSAGE. [7] GraphSAGE is a GNN model that focuses on inductive node classification, but can also be applied in transductive settings. +- GAT. [16] Graph Attention Network (GAT) introduces the attention mechanism to capture the importance of neighboring nodes when aggregating information from the graph. +- RevGAT. [33] RevGAT combines reversible connectivity with a deep network architecture to form a deep and efficient GNN. +- NodeFormer. [46] NodeFormer is a scalable graph Transformer for large-scale graphs, which achieves all-pair message passing with linear complexity. In the table, we denote NodeFormer by NFormer. +- GIN. [17] Graph Isomorphism Network (GIN) overcomes the drawbacks of previous MPNN-based methods, which struggle to differentiate various graph structures based on the generated graph embeddings. +- JKNet. [18] Jumping Knowledge Network (JKNet) adaptively varies the neighborhood ranges for individual nodes, enabling enhanced structure-aware node representations. +- MoNet.
[47] Mixture Model Network (MoNet) captures and represents the structural properties of graphs by incorporating multiple localized perspectives of each node's neighborhood. +- APPNP. [48] Approximate Personalized Propagation of Neural Predictions (APPNP) is specifically developed for semi-supervised learning tasks on graph-structured data. It utilizes personalized propagation to iteratively enhance node predictions by incorporating comprehensive information from both local and global contexts. +- DistilBERT. [14] DistilBERT is a distilled version of BERT. The student model DistilBERT shares a similar overall architecture with the teacher model BERT. DistilBERT differs from BERT by removing the token-type embeddings and the pooler, and by reducing the number of layers. +- ELECTRA. [11] ELECTRA is a pre-training model for NLP tasks that introduces discriminative pre-training: instead of generating masked tokens, the model is trained to detect replaced tokens, which makes pre-training more efficient and effective. +- BERT. [10] Bidirectional encoder representations from Transformers (BERT) utilizes a transformer architecture that employs self-attention mechanisms to capture word relationships within sentences. This enables the model to effectively consider both the preceding and succeeding contexts of a word, facilitating bidirectional language comprehension. BERT undergoes unsupervised pre-training on an extensive text corpus, where it predicts masked words within sentences and acquires the ability to encode contextual information. +- RoBERTa. [12] Robustly Optimized BERT (RoBERTa) is a variant of BERT. RoBERTa incorporates additional modifications during pre-training to optimize its performance. In the pre-training phase, it trains on a vast corpus of unlabeled text data by employing masked language modeling (MLM) while excluding the next sentence prediction (NSP) task.
RoBERTa significantly expands the amount of training data used, enabling the acquisition of more comprehensive and robust language representations. +- DeBERTa. [13] Decoding-enhanced BERT with Disentangled Attention (DeBERTa) brings forth two significant enhancements: disentangled attention and an enhanced mask decoder. Disentangled attention represents each word with separate content and position vectors and computes attention weights from both, empowering the model to better capture dependencies among words. These enhancements greatly improve the model's ability to capture long-range dependencies. + +In our work, we categorize three types of models, namely BERT-Tiny (4.4M), ELECTRA-Small (13.5M), and DistilBERT (66.4M), as small scale PLMs. ELECTRA-Base (109M), BERT-Base (110M), RoBERTa-Base (125M), and DeBERTa-Base (139M) are classified as base scale PLMs. ELECTRA-Large (334M), BERT-Large (340M), RoBERTa-Large (355M), and DeBERTa-Large (405M) are categorized as large scale PLMs. + +# A.2 Topological Pre-training Implementation + +We introduce the implementation details of the four topological pre-training tasks, TMLM, TCL, TDK, and TMDC, on text-attributed graphs. + +TMLM. We adopt the Masked Language Model (MLM) training implementation in HuggingFace Transformers [50] for the TMLM task. For a given TAG, we first pre-sample five neighbors for each node, and subsequently concatenate the text of the original node with the text of the sampled neighboring nodes. Five corresponding center-neighbor pairs are obtained for each node. We then shuffle the dataset to form a new topologically augmented dataset. The new dataset is passed into the MLM training code to pre-train the language model. The learning rate of TMLM is set to 5e-05 on all datasets and language models. For detailed code and implementation, please refer to the project repository. + +TCL.
Traditional Contrastive Learning (CL) focuses on the central node itself (using the augmented nodes to construct the positive pairs) [42]. For our Topological Contrastive Learning (TCL) task, we instead consider the central node and its neighbors to form the positive pairs. In particular, when loading the data in batches, we sample one neighbor node for each central node through the adjacency matrix of the TAG. For the central node and its sampled neighbor, we compute the contrastive loss by mapping the textual representations into the contrastive space through a projection head with shared parameters, following [41]. Therefore the $\mathcal{L}_{\mathrm{tcl}}$ in the main text can be rewritten as + +$$ +\mathcal{L}_{\mathrm{tcl}} = -\log \frac{\exp\left(\operatorname{sim}\left(z^{c}, z^{n}\right) / \tau\right)}{\sum_{n^{\prime}=1, n^{\prime} \neq n}^{N} \exp\left(\operatorname{sim}\left(z^{c}, z^{n^{\prime}}\right) / \tau\right)}, \tag{7} +$$ + +where $z^{c}$ is the representation of node $c$ in the contrastive space, $z^{n}$ is that of its sampled neighbor $n$, and $\operatorname{sim}(\cdot, \cdot)$ denotes cosine similarity. The main learning parameters are the learning rate $lr$ and the number of epochs $e$. For all datasets and all language models we set $e = 5$ and $lr$ = 5e-05. The projection head is a two-layer MLP with the hidden layer set to 128. The temperature $\tau$ in the contrastive loss is set to 0.2. Detailed pre-training commands for each dataset can be found in the CS-TAG GitHub repository. + +TDK. For the TDK task, we first run the Deepwalk [45] algorithm over the TAG to obtain its topological-level representation (only the topology information of the TAG is fed into Deepwalk). Then we pull the representation of the center node close to the representation learned from Deepwalk. In particular, when loading the data in batches, we load the representation learned from Deepwalk at the same time. Subsequently, we pair the textual representation of the center node with its corresponding topological structure representation to form the positive pairs in contrastive learning.
We follow the same contrastive learning process as for TCL presented above. Therefore the $\mathcal{L}_{\mathrm{tdk}}$ in the main text can be rewritten as + +$$ +\mathcal{L}_{\mathrm{tdk}} = -\log \frac{\exp\left(\operatorname{sim}\left(\boldsymbol{z}^{c}, \boldsymbol{k}^{c}\right) / \tau\right)}{\sum_{c^{\prime}=1, c^{\prime} \neq c}^{N} \exp\left(\operatorname{sim}\left(\boldsymbol{z}^{c}, \boldsymbol{k}^{c^{\prime}}\right) / \tau\right)}, \tag{8} +$$ + +where $\boldsymbol{k}^{c}$ is the topological (Deepwalk) representation of node $c$. For all datasets in CS-TAG, we use a uniform code to obtain the corresponding topological-level representation. The relevant parameters are consistent with the parameter settings in TCL. Detailed pre-training commands for each dataset can be found in the CS-TAG GitHub repository. + +TMDC. TMDC attempts to pre-train the language models jointly by combining the three pre-training tasks proposed above. As the form of the TMLM task differs substantially from those of the TCL and TDK tasks, we use an iterative training scheme to perform the multi-task learning. In particular, we first obtain a token-level topologically augmented language model $PLM_{tmlm}$ via TMLM. Subsequently, we combine TCL and TDK to jointly optimize $PLM_{tmlm}$. The loss function of the joint optimization is shown as follows + +$$ +\mathcal{L}_{\mathrm{tcl+tdk}} = -\left(\log \frac{\exp\left(\operatorname{sim}\left(\boldsymbol{z}^{c}, \boldsymbol{k}^{c}\right) / \tau\right)}{\sum_{c^{\prime}=1, c^{\prime} \neq c}^{N} \exp\left(\operatorname{sim}\left(\boldsymbol{z}^{c}, \boldsymbol{k}^{c^{\prime}}\right) / \tau\right)} + \log \frac{\exp\left(\operatorname{sim}\left(\boldsymbol{z}^{c}, \boldsymbol{z}^{n}\right) / \tau\right)}{\sum_{n^{\prime}=1, n^{\prime} \neq n}^{N} \exp\left(\operatorname{sim}\left(\boldsymbol{z}^{c}, \boldsymbol{z}^{n^{\prime}}\right) / \tau\right)}\right). \tag{9} +$$ + +# A.3 Code License + +The code of CS-TAG uses the MIT license. Please refer to the GitHub repository for license details.
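To make the contrastive objectives of Appendix A.2 concrete, here is a minimal, framework-agnostic Python sketch of the TCL loss of Eq. (7). The function names and plain-list vector format are illustrative, and, unlike Eq. (7), the positive pair is included in the denominator, as is standard for InfoNCE-style losses; this is a sketch, not the authors' released code.

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two vectors (the sim(., .) in Eq. (7))."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def tcl_loss(z_center, z_neighbor, z_negatives, tau=0.2):
    """InfoNCE-style contrastive loss: the sampled neighbor is the positive
    for the center node, and the other in-batch representations act as
    negatives. tau matches the temperature 0.2 used above."""
    pos = math.exp(cos_sim(z_center, z_neighbor) / tau)
    denom = pos + sum(math.exp(cos_sim(z_center, z) / tau) for z in z_negatives)
    return -math.log(pos / denom)
```

A well-aligned center/neighbor pair yields a small loss, while a mismatched pair yields a large one; the TDK loss of Eq. (8) has the same shape with the Deepwalk representation of the center node as the positive.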
+ +# B Datasets + +# B.1 Dataset Format + +For each dataset in CS-TAG, we provide three different files. We store the graph data, loadable by DGL, in PyTorch's .pt format. For the node classification datasets, two types of information are stored: the adjacency matrix and the node labels. For the Ele-Computers and Ele-Photo datasets, we also store the year features of the nodes (the year of the comments posted by the user). The link prediction datasets contain only the adjacency matrix. We use a .txt file to store the text attributes of each dataset. The .csv file stores node-id, node-label, category, and text to provide a clearer picture of the dataset for subsequent researchers. + +# B.2 Dataset Construction + +The construction of the text-attributed graph datasets includes the following three steps. First, preprocessing the text attributes in the original dataset, including removing missing values, removing non-English statements, removing abnormal symbols, length truncation, etc. Second, building the graph. The linking relationships between nodes are provided in the original data of the datasets constructed in CS-TAG, such as the "also view" and "also buy" information for products in the Amazon dataset (indicating pairs of product ids that are jointly purchased or viewed), and the citation relationships between papers in DBLP, etc. Note that when obtaining the final graph data, self-edges and isolated nodes need to be removed. Third, refining the constructed graph. For the node classification datasets, the nodes in the graph need corresponding numerical node labels. We convert the categories of nodes in the original data to numerical node labels in the graph. For some datasets that are divided in a specific form (e.g., by the year of publication of a paper), we also need to store additional information about the nodes. + +# B.3 Dataset Details + +CS-TAG includes 8 datasets, whose details are described as follows.
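The second construction step in B.2 (building the graph, then removing self-edges and isolated nodes) can be sketched as follows; the edge-list representation and function name are illustrative, not the benchmark's actual pipeline code.

```python
def build_clean_graph(edges):
    """Drop self-edges, deduplicate undirected edges, and relabel the
    surviving nodes so that isolated nodes are removed (step 2 of B.2)."""
    # Remove self-edges and treat (u, v) and (v, u) as the same edge.
    undirected = {tuple(sorted(e)) for e in edges if e[0] != e[1]}
    # Nodes that keep at least one edge; everything else is isolated.
    kept = sorted({node for edge in undirected for node in edge})
    remap = {old: new for new, old in enumerate(kept)}
    new_edges = sorted((remap[u], remap[v]) for u, v in undirected)
    return kept, new_edges
```

For example, `build_clean_graph([(0, 0), (0, 1), (1, 0), (3, 3)])` keeps only nodes 0 and 1 and the single edge between them.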
The statistics of the datasets are shown in Table 5. + +ogbn-arxiv-TA dataset is derived from ogbn-arxiv. The corresponding task is to predict the categories of the papers, which is formulated as a 40-class classification problem. The text attributes of each paper node are extracted from its title and abstract in ogbn-arxiv. Note that ogbn-arxiv solely provides node embeddings acquired through shallow text encoders like Skip-Gram. In contrast, we use various PLMs to model the node attributes to obtain the initial node features and delve into the performance of employing multiple PLM features in downstream GNNs. + +Books-Children/History datasets are extracted from the Amazon-Books dataset. Books-Children consists of items with the second-level label "Children", while Books-History consists of items with the second-level label "History". The nodes in the dataset are books, and an edge means that two books are frequently co-purchased or co-viewed. The label of each dataset is the three-level label of the book. We choose the title and description of the book itself as the text attributes of the node. The task is to classify books into 24 and 12 categories, respectively. + +Ele-Computers/Photo datasets are extracted from the Amazon-Electronics dataset. Ele-Computers consists of items with the second-level label "Computers", while Ele-Photo consists of items with the second-level label "Photo". The two datasets are extracted from the updated 2018 Amazon Computer and Amazon Photo datasets [27]. The nodes in the dataset are electronics-related products, and an edge between two products means that they are frequently co-purchased or co-viewed. The label of each dataset is the three-level label of the electronics products. We adopt user reviews on an item as its text attribute. Since an item may have multiple reviews, we mainly adopt the review with the highest number of votes.
For items lacking highly voted reviews, we randomly adopt a user review as the text attribute. The task on the two datasets is to classify electronics products into 10 and 12 categories, respectively. + +Sports-Fitness dataset is extracted from the Amazon-Sports dataset. It consists of items with the second-level label "Fitness". The nodes in the dataset are fitness-related items, and an edge between two items means that they are frequently co-purchased or co-viewed. The label of the dataset is the three-level label of the items. The task on this dataset is to classify items into 13 categories. + +Table 5: Statistics of the text-attributed graph datasets used in CS-TAG.
| Dataset | Nodes | Edges | Class | Split Scheme | Split Ratio | Task Type | Metric | Max length |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ogbn-arxiv-TA | 169,343 | 1,166,243 | 40 | Time | 54/18/28 | Node Class. | Acc, F1 | 512 |
| Books-Children | 76,875 | 1,554,578 | 24 | Random | 60/20/20 | Node Class. | Acc, F1 | 256 |
| Books-History | 41,551 | 358,574 | 12 | Random | 60/20/20 | Node Class. | Acc | 256 |
| Ele-Computers | 87,229 | 721,081 | 10 | Time | 72/17/11 | Node Class. | Acc, F1 | 256 |
| Ele-Photo | 48,362 | 500,928 | 12 | Time | 60/20/20 | Node Class. | Acc, F1 | 512 |
| Sports-Fitness | 173,055 | 1,773,500 | 13 | Random | 20/10/70 | Node Class. | F1 | 64 |
| CitationV8 | 1,106,759 | 6,120,897 | - | Time | 99/1/1 | Link Prediction | MRR | 256 |
| GoodReads | 676,084 | 8,582,306 | - | Random | 90/2/8 | Link Prediction | Hits@K | 24 |
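The two link-prediction metrics in Table 5 are computed by ranking each true edge's score against the scores of its sampled negatives. A minimal sketch of the usual definitions (not the benchmark's evaluation code; function names are illustrative) is:

```python
def rank_of_positive(pos_score, neg_scores):
    """1-based rank of the true candidate among the negatives
    (ties broken in favor of the positive)."""
    return 1 + sum(s > pos_score for s in neg_scores)

def mrr(pos_score, neg_scores):
    """Reciprocal rank of the true candidate; averaged over all queries,
    this gives the Mean Reciprocal Rank reported for CitationV8."""
    return 1.0 / rank_of_positive(pos_score, neg_scores)

def hits_at_k(pos_score, neg_scores, k):
    """1.0 if the true candidate ranks within the top k, else 0.0;
    averaged over all queries, this gives the Hits@K reported for GoodReads."""
    return 1.0 if rank_of_positive(pos_score, neg_scores) <= k else 0.0
```

For instance, if exactly one negative is scored above the true edge, its rank is 2, so its reciprocal rank is 0.5 and it counts as a hit for any k >= 2.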
+ +CitationV8 is a directed graph dataset, representing the citation relationships among a subset of papers extracted from DBLP [38]. It is constructed following the form of ogbl-citation2 [34]. The corresponding task is to predict missing citations given some existing citations. Specifically, for each source paper, two of its references are randomly dropped, and the prediction model tries to rank the two missing references higher than 2,000 negative reference candidates. The negative references are randomly sampled from all the previously published papers that are not referenced by the source paper. We adopt the title and abstract of each paper as its node text attributes. + +GoodReads dataset is extracted from the world's largest book review site, Goodreads. Its nodes are books, and its edges are derived from the similar-book relationships provided on the website. The corresponding task is to predict the similarity relationships between books. We expect the model to rank the true correlations over the false ones. Specifically, we rank each true correlation among a set of 5,000 randomly sampled negative correlations. + +# B.4 Datasets License + +The datasets follow the MIT license. Please refer to the CS-TAG GitHub repository for license details. + +# C Experiment Settings + +# C.1 Experimental Settings of GNNs + +We conduct experiments on the 9 GNN models described in A.1 on 6 node classification datasets. We use the aforementioned 5 PLMs with different parameter scales to model the node attributes and form the initial node features of the graph data. Each experiment is repeated three times, and the evaluation metrics are accuracy and F1-score. The parameters shared by all GNN models include epochs, model layers, hidden units, learning rate, and dropout ratio, and their values are set to 1000, {2,3}, {64,128,256}, {1e-04 ~ 1e-02}, 0.2, respectively.
Besides these hyperparameters, for GAT we fix the number of heads to 3 and set the attention-dropout ratio to 0 by default. For GraphSAGE, we use mean pooling to aggregate neighbor information, and for JKNet we use concatenation to aggregate the features. For APPNP, we set the teleport probability to 0.1 by default and the number of propagation steps to 2. For MoNet, we set the pseudo-coordinate dimensions in GMMConv to 2 and 3, and the number of kernels in the GMMConv layer to 2; since it mostly does not converge by epoch 1000, we set its maximum number of epochs to 2000. For GIN, we set the number of MLP layers to 2. The eval patience of all models is set to 1. We use cross-entropy loss with the AdamW optimizer to train and optimize all the above models. The GNNs are mainly derived from the implementations in the DGL library.

# C.2 Experimental Settings of PLMs

We conduct experiments on the 5 PLMs with different parameter scales described in A.1 on 6 node classification datasets. Considering the efficiency of the language models, we run each experiment on each dataset only once. The parameters shared by all the PLMs in A.1 include epochs $e$, label smoothing factor $lsf$, learning rate $lr$, warm-up epochs $w$, batch size, and eval patience. The label smoothing factor is used to calculate the cross-entropy loss and is set in the GNNs as well. Here $w$ denotes the duration of the warm-up phase; $w = 1$ means the warm-up lasts one epoch. For all datasets we set $e$, $lsf$, $w$, $lr$ to 4, 0.1, 1, {5e-06 ~ 5e-04}, respectively. Because model sizes and dataset sizes differ, we list the eval patience and batch size of the different models on each dataset in Table 6; PLMs at the same scale use the same batch size.

Table 6: Batch size and eval patience of the PLMs at different scales.

| Datasets | Small: Batch Size | Small: Eval Patience | Base: Batch Size | Base: Eval Patience | Large: Batch Size | Large: Eval Patience |
|---|---|---|---|---|---|---|
| ogbn-arxiv-TA | 100 | 50000 | 60 | 50000 | 60 | 50000 |
| Books-Children | 240 | 15000 | 90 | 15000 | 90 | 15000 |
| Books-History | 240 | 8000 | 90 | 8000 | 90 | 8000 |
| Ele-Computers | 300 | 20000 | 180 | 20000 | 180 | 20000 |
| Ele-Photo | 100 | 5000 | 60 | 5000 | 60 | 5000 |
| Sports-Fitness | 800 | 10000 | 400 | 10000 | 400 | 10000 |

For large-scale models, since full-parameter fine-tuning is costly, we only fine-tune the last four encoder layers; its effect is sometimes better than full-parameter tuning. The experimental setup of the topological pre-training methods has been described in the previous section, and for the topologically pre-trained language models we follow the same tuning strategy as for the PLM-based methods.

# C.3 Reproducibility

For all experiments, we select the best checkpoints according to the validation sets and report the corresponding results. All the datasets and code to reproduce the results in this paper are available at https://github.com/sktsherlock/TAG-Benchmark.

# D Additional Experiment Results

# D.1 Experimental Results for Node Classification

GNN-based methods. Tables 3-5 list all the experimental results for the node classification task. We first analyze the performance of the different GNN-based methods on node classification. In terms of accuracy, RevGAT, GraphSAGE, and GAT perform best on these datasets and are less affected by the node features learned by different PLMs, while the performance of the other GNN models is affected by the PLMs to varying extents. GIN, which is commonly used for graph-level tasks, does not achieve better performance on node-level tasks, and MoNet's performance on the Books-Children and Books-History datasets is even lower than MLP in most cases.

Investigating the impact of node features encoded by different PLMs on the downstream models, one can see that node features encoded by ELECTRA typically produce a larger gap between GNN and MLP. The failure of the ELECTRA model to generate higher-quality node features may be due to its discriminative pre-training objective. In contrast, the RoBERTa-Base model seems to have a better semantic understanding, with a difference between GNN and MLP of only 1.91 and 4.44 on the Books-History and Books-Children datasets.
It is worth noting that the node features encoded by DeBERTa also perform poorly, even though DeBERTa performs well on NLP tasks. This indicates that the quality of the node features produced by a language model cannot be judged from its performance on downstream NLP tasks alone. The ineffectiveness of DeBERTa at producing better node features may be mainly due to its reduced corpus during pre-training.

Similar conclusions can be drawn from the F1-score results shown in Table 4. Note that the F1 score of the downstream models is more significantly affected by the choice of PLM: for example, on Books-Children, the score of JKNet ranges from 19.79 to 30.94 under different PLMs.

PLM-based methods and topological pre-training. In Tables 3 to 5, PLM denotes the PLM-based methods, i.e., fine-tuning directly on the dataset without considering the topology, while TMLM, TDK, TCL, and TMDC denote fine-tuning models pre-trained with these four topological pre-training tasks. Across these datasets, the PLM-based method performs the worst on all models, which indicates that it is not sensible to ignore topology and use only text attributes for representation learning on TAGs. In contrast, TMDC, a multi-task form of topological pre-training, performs best, achieving the top results on both metrics in the majority of experiments. This indicates that combining different topological pre-training methods can teach language models structural knowledge from different perspectives.

Table 7: Node classification accuracy on ogbn-arxiv-TA, Books-Children, Books-History and Ele-Photo. We bold the best results for each row.
(S), (B), and (L) mark the Small-, Base-, and Large-Scale PLM encoders; "OOM" denotes out of memory.

**ogbn-arxiv-TA**

| Methods | | BERT-Tiny (S) | ELECTRA (S) | DistilBERT (S) | ELECTRA (B) | BERT (B) | RoBERTa (B) | DeBERTa (B) | ELECTRA (L) | BERT (L) | RoBERTa (L) | DeBERTa (L) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LMs | PLM | 70.83 | 71.26 | 72.50 | 72.67 | 72.96 | 73.10 | 73.82 | 72.42 | 73.24 | 73.83 | 74.57 |
| GNNs | GCN | 72.03 | 68.45 | 73.39 | 70.81 | 73.30 | 73.56 | 68.15 | 70.44 | 73.25 | 73.95 | 72.57 |
| | GAT | 72.25 | 70.97 | 73.48 | 71.67 | 73.40 | 73.38 | 66.56 | 71.01 | 73.37 | 73.72 | 71.50 |
| | SAGE | 72.35 | 69.63 | 74.48 | 70.82 | 74.14 | 74.52 | 67.58 | 70.72 | 74.15 | 74.64 | 73.22 |
| | RevGAT | 72.52 | 71.12 | 74.68 | 71.96 | 74.59 | 74.82 | 68.26 | 72.56 | 74.68 | 74.99 | 73.59 |
| | NFormer | 71.91 | 69.45 | 73.56 | 70.43 | 72.80 | 73.12 | 67.11 | 70.04 | 73.12 | 73.12 | 71.88 |
| | GIN | 68.42 | 58.09 | 72.30 | 64.88 | 71.94 | 72.63 | 62.05 | 64.47 | 71.88 | 73.10 | 71.25 |
| | JKNet | 69.50 | 62.87 | 71.44 | 63.41 | 70.08 | 69.40 | 44.16 | 58.34 | 68.70 | 68.10 | 54.41 |
| | APPNP | 71.63 | 59.55 | 74.01 | 65.62 | 73.90 | 74.01 | 52.37 | 64.52 | 73.53 | 74.17 | 69.28 |
| | MoNet | 45.13 | 36.65 | 50.51 | 38.91 | 46.90 | 44.53 | 29.67 | 37.26 | 43.31 | 44.01 | 33.53 |
| | MLP | 57.22 | 36.58 | 68.11 | 48.56 | 67.35 | 69.31 | 47.07 | 47.72 | 66.85 | 69.51 | 66.28 |
| Co-Training | GCN | 69.22 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| | SAGE | 73.57 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| TPT | TMLM | 70.83 | 72.65 | 73.53 | 73.51 | 73.97 | 74.25 | 74.26 | 74.76 | 75.01 | 75.18 | 75.92 |
| | TDK | 71.50 | 72.83 | 74.38 | 74.33 | 74.23 | 74.57 | 75.01 | 73.82 | 74.31 | 74.58 | 75.20 |
| | TCL | 71.55 | 73.06 | 74.89 | 74.26 | 74.87 | 75.37 | 75.15 | 74.17 | 75.15 | 75.48 | 75.58 |
| | TMDC | 71.17 | 73.71 | 75.50 | 75.56 | 76.11 | 75.97 | 75.99 | 75.58 | 75.75 | 75.73 | 76.20 |

**Books-Children**

| Methods | | BERT-Tiny (S) | ELECTRA (S) | DistilBERT (S) | ELECTRA (B) | BERT (B) | RoBERTa (B) | DeBERTa (B) | ELECTRA (L) | BERT (L) | RoBERTa (L) | DeBERTa (L) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LMs | PLM | 49.85 | 57.03 | 59.90 | 59.09 | 59.91 | 59.80 | 60.26 | 58.28 | 60.65 | 60.93 | 61.61 |
| GNNs | GCN | 57.07 | 54.35 | 58.19 | 55.31 | 58.11 | 58.62 | 50.72 | 54.66 | 57.70 | 57.11 | 54.89 |
| | GAT | 57.22 | 56.18 | 57.91 | 55.89 | 57.70 | 57.83 | 47.63 | 55.72 | 57.50 | 57.35 | 55.45 |
| | SAGE | 57.57 | 55.32 | 59.33 | 55.84 | 58.74 | 58.97 | 49.61 | 55.52 | 58.40 | 58.21 | 56.29 |
| | RevGAT | 57.86 | 56.42 | 59.28 | 56.42 | 58.67 | 59.01 | 49.63 | 56.59 | 58.90 | 59.26 | 56.34 |
| | NFormer | 56.89 | 55.12 | 58.03 | 55.12 | 57.42 | 57.26 | 48.89 | 54.59 | 57.10 | 56.43 | 54.48 |
| | GIN | 53.12 | 47.26 | 55.86 | 50.45 | 55.62 | 55.62 | 47.08 | 49.85 | 55.22 | 55.37 | 51.90 |
| | JKNet | 53.48 | 48.36 | 51.25 | 45.90 | 52.33 | 49.12 | 34.19 | 42.56 | 51.18 | 44.47 | 36.89 |
| | APPNP | 56.19 | 49.63 | 57.83 | 52.42 | 57.73 | 57.49 | 41.13 | 50.24 | 57.76 | 54.73 | 46.51 |
| | MoNet | 36.81 | 35.18 | 37.57 | 34.87 | 36.43 | 36.02 | 32.29 | 34.22 | 35.60 | 34.70 | 33.16 |
| | MLP | 48.34 | 40.33 | 53.18 | 43.14 | 52.55 | 54.57 | 43.55 | 43.11 | 52.43 | 52.61 | 48.55 |
| Co-Training | GCN | 54.75 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| | SAGE | 59.70 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| TPT | TMLM | 54.27 | 57.35 | 60.03 | 59.93 | 60.34 | 60.19 | 60.73 | 60.51 | 60.84 | 62.11 | 61.91 |
| | TDK | 53.43 | 56.92 | 60.23 | 60.27 | 60.43 | 60.71 | 61.39 | 59.31 | 61.15 | 61.95 | 62.51 |
| | TCL | 54.11 | 56.88 | 60.60 | 60.21 | 60.73 | 61.47 | 61.92 | 59.29 | 61.50 | 62.06 | 62.37 |
| | TMDC | 54.66 | 58.55 | 61.38 | 60.83 | 61.43 | 61.83 | 62.20 | 61.31 | 62.06 | 63.24 | 62.46 |

**Books-History**

| Methods | | BERT-Tiny (S) | ELECTRA (S) | DistilBERT (S) | ELECTRA (B) | BERT (B) | RoBERTa (B) | DeBERTa (B) | ELECTRA (L) | BERT (L) | RoBERTa (L) | DeBERTa (L) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LMs | PLM | 83.06 | 84.18 | 85.81 | 85.64 | 86.09 | 85.85 | 86.16 | 86.13 | 86.24 | 86.41 | 87.00 |
| GNNs | GCN | 84.52 | 82.08 | 85.14 | 82.46 | 85.04 | 85.15 | 82.31 | 82.54 | 84.95 | 84.51 | 84.22 |
| | GAT | 84.21 | 82.85 | 84.68 | 82.97 | 84.49 | 84.76 | 80.71 | 83.10 | 84.36 | 84.47 | 83.78 |
| | SAGE | 84.79 | 82.12 | 85.56 | 82.53 | 85.12 | 85.47 | 82.00 | 82.45 | 85.08 | 84.92 | 84.51 |
| | RevGAT | 85.03 | 83.11 | 85.67 | 83.79 | 85.26 | 85.69 | 81.98 | 83.56 | 85.15 | 85.23 | 84.89 |
| | NFormer | 83.59 | 80.96 | 84.49 | 81.16 | 84.59 | 84.46 | 81.46 | 80.15 | 84.29 | 84.23 | 82.99 |
| | GIN | 82.62 | 73.69 | 83.60 | 76.29 | 83.19 | 84.01 | 79.81 | 76.47 | 83.29 | 83.34 | 82.89 |
| | JKNet | 82.97 | 80.25 | 84.01 | 79.88 | 83.53 | 83.31 | 69.26 | 76.77 | 83.45 | 82.17 | 77.36 |
| | APPNP | 84.31 | 78.65 | 85.49 | 79.91 | 85.28 | 85.35 | 78.16 | 79.18 | 84.97 | 84.86 | 82.68 |
| | MoNet | 71.24 | 69.67 | 70.41 | 66.17 | 71.28 | 72.72 | 59.17 | 60.48 | 70.66 | 66.26 | 60.79 |
| | MLP | 79.86 | 64.36 | 83.00 | 68.09 | 82.84 | 83.78 | 74.60 | 68.88 | 83.18 | 82.73 | 80.46 |
| Co-Training | GCN | 83.52 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| | SAGE | 85.09 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| TPT | TMLM | 85.76 | 84.54 | 85.76 | 85.77 | 86.24 | 86.19 | 86.00 | 86.39 | 86.47 | 86.72 | 87.11 |
| | TDK | 85.79 | 84.42 | 86.29 | 85.88 | 86.46 | 86.32 | 86.46 | 86.49 | 86.73 | 86.75 | 87.26 |
| | TCL | 86.06 | 84.57 | 86.28 | 86.62 | 86.80 | 86.95 | 87.01 | 86.82 | 86.93 | 87.11 | 87.30 |
| | TMDC | 86.88 | 85.18 | 86.88 | 86.41 | 86.82 | 86.96 | 86.94 | 86.28 | 86.94 | 87.22 | 87.32 |

**Ele-Photo**

| Methods | | BERT-Tiny (S) | ELECTRA (S) | DistilBERT (S) | ELECTRA (B) | BERT (B) | RoBERTa (B) | DeBERTa (B) | ELECTRA (L) | BERT (L) | RoBERTa (L) | DeBERTa (L) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LMs | PLM | 73.75 | 76.58 | 77.51 | 77.84 | 77.53 | 78.11 | 78.37 | 77.25 | 77.72 | 79.60 | 79.63 |
| GNNs | GCN | 82.42 | 78.86 | 82.91 | 79.99 | 82.70 | 82.99 | 80.07 | 79.20 | 82.01 | 83.82 | 80.76 |
| | GAT | 83.82 | 82.83 | 83.75 | 82.82 | 83.74 | 83.99 | 79.47 | 83.00 | 83.48 | 83.97 | 82.55 |
| | SAGE | 83.25 | 80.90 | 83.50 | 81.79 | 83.27 | 83.81 | 81.44 | 81.05 | 82.77 | 84.15 | 81.88 |
| | RevGAT | 84.12 | 83.12 | 84.34 | 82.98 | 84.46 | 84.59 | 80.98 | 82.59 | 84.21 | 85.12 | 81.12 |
| | NFormer | 79.98 | 80.45 | 82.69 | 80.02 | 81.79 | 82.44 | 79.66 | 79.96 | 81.23 | 82.96 | 80.62 |
| | GIN | 76.09 | 64.89 | 77.22 | 68.56 | 76.55 | 77.76 | 69.91 | 66.98 | 75.32 | 79.37 | 70.93 |
| | JKNet | 79.68 | 75.29 | 80.41 | 76.53 | 79.72 | 79.13 | 60.13 | 74.35 | 79.18 | 78.61 | 69.57 |
| | APPNP | 79.24 | 70.77 | 81.45 | 73.56 | 80.68 | 81.82 | 67.52 | 72.89 | 79.21 | 82.02 | 76.86 |
| | MoNet | 76.24 | 66.55 | 72.84 | 68.89 | 73.17 | 73.67 | 57.48 | 66.24 | 71.08 | 73.57 | 61.87 |
| | MLP | 58.43 | 47.69 | 64.47 | 51.24 | 62.51 | 65.64 | 54.12 | 49.98 | 60.88 | 66.26 | 58.76 |
| Co-Training | GCN | 83.32 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| | SAGE | 86.64 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| TPT | TMLM | 74.30 | 76.09 | 77.81 | 78.27 | 78.54 | 78.33 | 79.27 | 79.21 | 78.95 | 80.32 | 80.45 |
| | TDK | 73.99 | 76.89 | 79.69 | 80.18 | 81.04 | 81.26 | 81.34 | 78.44 | 79.26 | 80.82 | 81.33 |
| | TCL | 73.86 | 77.74 | 81.85 | 81.47 | 82.85 | 82.47 | 83.07 | 79.56 | 80.74 | 81.47 | 82.33 |
| | TMDC | 74.92 | 77.83 | 82.52 | 82.82 | 84.09 | 83.04 | 83.80 | 81.32 | 81.14 | 82.55 | 82.70 |
Table 8: Node classification F1 score on ogbn-arxiv-TA, Books-Children, Ele-Photo, and Sports-Fitness. We bold the best results for each row.
(S), (B), and (L) mark the Small-, Base-, and Large-Scale PLM encoders; "OOM" denotes out of memory.

**ogbn-arxiv-TA**

| Methods | | BERT-Tiny (S) | ELECTRA (S) | DistilBERT (S) | ELECTRA (B) | BERT (B) | RoBERTa (B) | DeBERTa (B) | ELECTRA (L) | BERT (L) | RoBERTa (L) | DeBERTa (L) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LMs | PLM | 50.48 | 52.15 | 53.01 | 54.81 | 54.39 | 55.41 | 56.40 | 55.71 | 55.88 | 57.52 | 57.64 |
| GNNs | GCN | 51.38 | 44.40 | 54.08 | 48.91 | 53.44 | 54.32 | 32.38 | 47.17 | 53.14 | 54.35 | 50.97 |
| | GAT | 52.26 | 49.33 | 53.97 | 51.41 | 54.11 | 53.61 | 41.04 | 53.79 | 50.62 | 53.88 | 51.32 |
| | SAGE | 51.81 | 45.58 | 55.22 | 49.38 | 54.57 | 55.86 | 38.56 | 48.37 | 54.39 | 55.79 | 52.41 |
| | RevGAT | 52.34 | 49.88 | 55.12 | 52.21 | 54.68 | 55.91 | 42.06 | 54.11 | 55.21 | 56.04 | 53.01 |
| | NFormer | 49.95 | 45.56 | 51.02 | 47.75 | 51.16 | 51.58 | 39.55 | 41.12 | 50.03 | 51.16 | 47.89 |
| | GIN | 46.92 | 22.56 | 52.28 | 41.88 | 52.24 | 52.31 | 32.76 | 39.14 | 51.39 | 52.58 | 49.88 |
| | JKNet | 47.10 | 36.82 | 47.07 | 37.25 | 48.49 | 45.75 | 29.56 | 31.27 | 44.54 | 43.29 | 28.55 |
| | APPNP | 50.63 | 33.67 | 54.32 | 41.50 | 54.35 | 54.26 | 40.90 | 40.58 | 53.68 | 54.12 | 42.69 |
| | MoNet | 38.62 | 20.15 | 35.26 | 24.56 | 36.57 | 39.22 | 28.45 | 20.12 | 31.26 | 34.28 | 26.51 |
| | MLP | 34.71 | 15.35 | 47.37 | 25.33 | 45.88 | 48.05 | 24.39 | 45.58 | 23.38 | 48.37 | 43.03 |
| Co-Training | GCN | 51.55 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| | SAGE | 52.76 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| TPT | TMLM | 50.81 | 52.43 | 54.41 | 55.02 | 54.89 | 56.67 | 56.92 | 55.83 | 56.42 | 58.60 | 58.73 |
| | TDK | 50.96 | 52.88 | 54.96 | 55.52 | 55.21 | 57.03 | 57.23 | 56.22 | 56.96 | 59.44 | 59.59 |
| | TCL | 51.46 | 53.11 | 55.36 | 56.01 | 55.96 | 57.68 | 57.96 | 57.03 | 57.69 | 60.02 | 60.13 |
| | TMDC | 51.79 | 53.45 | 55.96 | 56.45 | 56.39 | 58.11 | 58.45 | 57.64 | 58.12 | 60.56 | 60.78 |

**Books-Children**

| Methods | | BERT-Tiny (S) | ELECTRA (S) | DistilBERT (S) | ELECTRA (B) | BERT (B) | RoBERTa (B) | DeBERTa (B) | ELECTRA (L) | BERT (L) | RoBERTa (L) | DeBERTa (L) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LMs | PLM | 26.26 | 38.09 | 46.17 | 46.44 | 47.55 | 48.25 | 49.54 | 40.70 | 50.51 | 50.67 | 51.34 |
| GNNs | GCN | 51.38 | 44.40 | 54.08 | 41.90 | 46.35 | 46.99 | 33.97 | 41.57 | 46.16 | 45.89 | 40.93 |
| | GAT | 52.26 | 49.33 | 53.97 | 45.60 | 46.55 | 48.54 | 29.88 | 46.16 | 48.46 | 45.98 | 43.46 |
| | SAGE | 51.81 | 45.58 | 55.22 | 43.49 | 47.31 | 48.77 | 33.30 | 43.60 | 46.74 | 47.77 | 42.94 |
| | RevGAT | 52.35 | 49.96 | 55.52 | 45.78 | 47.56 | 49.01 | 33.41 | 46.26 | 48.59 | 47.82 | 46.29 |
| | NFormer | 50.30 | 43.32 | 51.63 | 42.23 | 44.43 | 44.12 | 28.55 | 40.06 | 44.15 | 44.26 | 40.96 |
| | GIN | 46.92 | 22.56 | 52.28 | 38.16 | 44.21 | 44.92 | 27.92 | 37.56 | 44.20 | 43.22 | 41.09 |
| | JKNet | 47.10 | 36.82 | 47.07 | 30.15 | 38.92 | 30.35 | 7.98 | 23.14 | 36.22 | 26.56 | 13.54 |
| | APPNP | 50.63 | 33.67 | 54.32 | 37.53 | 46.08 | 44.22 | 16.16 | 32.00 | 45.46 | 40.95 | 23.53 |
| | MoNet | 36.89 | 21.56 | 43.38 | 28.56 | 34.45 | 36.51 | 23.65 | 31.12 | 36.64 | 37.99 | 29.92 |
| | MLP | 34.71 | 15.35 | 47.37 | 20.75 | 34.88 | 38.87 | 22.54 | 21.14 | 34.15 | 36.85 | 29.97 |
| Co-Training | GCN | 52.03 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| | SAGE | 52.96 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| TPT | TMLM | 36.47 | 42.04 | 48.16 | 44.79 | 47.87 | 48.37 | 49.42 | 46.75 | 48.89 | 51.80 | 51.59 |
| | TDK | 33.22 | 40.26 | 46.80 | 47.16 | 49.40 | 49.42 | 51.84 | 45.82 | 50.06 | 52.07 | 52.21 |
| | TCL | 34.15 | 40.43 | 48.46 | 42.93 | 49.77 | 49.72 | 51.85 | 44.59 | 50.62 | 52.63 | 51.85 |
| | TMDC | 35.78 | 44.21 | 49.31 | 46.96 | 50.89 | 50.09 | 52.38 | 49.96 | 51.21 | 52.55 | 52.28 |

**Ele-Photo**

| Methods | | BERT-Tiny (S) | ELECTRA (S) | DistilBERT (S) | ELECTRA (B) | BERT (B) | RoBERTa (B) | DeBERTa (B) | ELECTRA (L) | BERT (L) | RoBERTa (L) | DeBERTa (L) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LMs | PLM | 66.11 | 69.31 | 70.82 | 70.48 | 70.04 | 71.21 | 72.10 | 67.97 | 70.96 | 73.69 | 74.30 |
| GNNs | GCN | 75.03 | 69.32 | 76.44 | 72.12 | 75.38 | 76.49 | 70.22 | 70.90 | 74.91 | 76.84 | 72.93 |
| | GAT | 77.24 | 76.22 | 77.26 | 76.54 | 76.97 | 77.30 | 73.02 | 75.99 | 76.77 | 77.47 | 75.66 |
| | SAGE | 75.75 | 72.97 | 76.22 | 73.96 | 75.26 | 76.71 | 73.25 | 73.53 | 74.86 | 77.31 | 74.11 |
| | RevGAT | 77.34 | 76.55 | 77.31 | 76.81 | 77.23 | 77.35 | 73.55 | 76.12 | 76.94 | 77.51 | 76.21 |
| | NFormer | 72.26 | 68.12 | 73.22 | 69.57 | 72.15 | 73.28 | 68.12 | 73.49 | 73.56 | 74.45 | 70.12 |
| | GIN | 67.95 | 53.35 | 68.75 | 60.26 | 69.45 | 63.50 | 51.77 | 55.90 | 62.82 | 70.48 | 57.46 |
| | JKNet | 69.99 | 64.72 | 69.92 | 65.5 | 69.5 | 67.02 | 40.33 | 62.51 | 69.72 | 68.11 | 54.92 |
| | APPNP | 69.86 | 56.59 | 72.84 | 61.91 | 71.53 | 73.37 | 48.02 | 59.36 | 68.74 | 73.88 | 63.26 |
| | MoNet | 45.51 | 44.12 | 51.12 | 48.86 | 52.26 | 55.51 | 34.56 | 42.56 | 53.35 | 57.11 | 38.69 |
| | MLP | 39.60 | 14.98 | 48.32 | 24.56 | 44.86 | 50.65 | 29.30 | 23.25 | 41.91 | 51.75 | 37.18 |
| Co-Training | GCN | 74.59 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| | SAGE | 76.98 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| TPT | TMLM | 65.19 | 66.62 | 71.50 | 70.94 | 71.34 | 71.76 | 72.27 | 72.77 | 73.23 | 73.96 | 74.61 |
| | TDK | 66.15 | 68.08 | 72.92 | 73.60 | 74.98 | 75.08 | 75.84 | 73.79 | 71.71 | 74.85 | 75.85 |
| | TCL | 64.94 | 69.34 | 75.03 | 74.93 | 76.59 | 76.62 | 76.92 | 71.95 | 75.05 | 75.48 | 76.95 |
| | TMDC | 66.23 | 70.02 | 76.82 | 76.97 | 78.18 | 77.45 | 78.69 | 74.47 | 75.27 | 77.08 | 76.10 |

**Sports-Fitness**

| Methods | | BERT-Tiny (S) | ELECTRA (S) | DistilBERT (S) | ELECTRA (B) | BERT (B) | RoBERTa (B) | DeBERTa (B) | ELECTRA (L) | BERT (L) | RoBERTa (L) | DeBERTa (L) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LMs | PLM | 81.47 | 76.58 | 85.31 | 83.94 | 86.02 | 83.04 | 84.28 | 76.85 | 85.06 | 84.16 | 86.21 |
| GNNs | GCN | 84.93 | 82.24 | 86.15 | 83.46 | 86.16 | 85.83 | 77.96 | 83.54 | 85.70 | 86.07 | 83.71 |
| | GAT | 86.45 | 85.46 | 86.58 | 85.57 | 86.65 | 86.26 | 74.23 | 86.85 | 85.80 | 85.77 | 84.56 |
| | SAGE | 87.06 | 85.06 | 87.51 | 85.70 | 87.34 | 87.39 | 76.86 | 85.86 | 87.46 | 87.58 | 85.60 |
| | RevGAT | 87.55 | 85.96 | 87.88 | 85.89 | 87.46 | 87.56 | 77.79 | 87.02 | 87.68 | 87.96 | 86.62 |
| | NFormer | 83.69 | 81.15 | 84.56 | 82.20 | 84.42 | 84.66 | 74.12 | 82.26 | 84.49 | 85.02 | 82.26 |
| | GIN | 81.59 | 71.31 | 74.91 | 71.51 | 81.95 | 72.47 | 67.06 | 71.80 | 68.77 | 83.83 | 79.86 |
| | JKNet | 80.70 | 73.56 | 75.69 | 71.70 | 80.31 | 74.37 | 15.38 | 66.20 | 80.96 | 77.24 | 48.73 |
| | APPNP | 83.62 | 71.84 | 83.54 | 73.10 | 84.59 | 82.61 | 36.81 | 73.37 | 85.18 | 84.31 | 77.37 |
| | MoNet | 59.95 | 45.56 | 68.89 | 51.23 | 69.02 | 69.56 | 55.56 | 58.59 | 70.01 | 72.12 | 65.13 |
| | MLP | 68.47 | 43.66 | 76.09 | 49.56 | 73.69 | 74.90 | 52.86 | 56.33 | 74.48 | 77.37 | 71.41 |
| Co-Training | GCN | 85.06 | 83.21 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| | SAGE | 85.87 | 86.21 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| TPT | TMLM | 81.56 | 80.15 | 86.22 | 84.45 | 86.35 | 83.95 | 85.16 | 79.89 | 85.45 | 84.85 | 86.51 |
| | TDK | 81.40 | 82.33 | 86.48 | 85.51 | 86.88 | 84.92 | 86.04 | 80.47 | 85.65 | 85.50 | 86.75 |
| | TCL | 82.04 | 83.20 | 87.44 | 86.72 | 88.08 | 86.67 | 87.29 | 82.05 | 86.64 | 85.80 | 87.57 |
| | TMDC | 83.15 | 84.21 | 88.56 | 87.69 | 89.01 | 88.89 | 88.56 | 84.51 | 87.89 | 86.43 | 88.78 |
Table 9: Node classification accuracy and F1 score on Ele-Computers. We bold the best results for each row.
(S), (B), and (L) mark the Small-, Base-, and Large-Scale PLM encoders; "OOM" denotes out of memory.

**Ele-Computers (Accuracy)**

| Methods | | BERT-Tiny (S) | ELECTRA (S) | DistilBERT (S) | ELECTRA (B) | BERT (B) | RoBERTa (B) | DeBERTa (B) | ELECTRA (L) | BERT (L) | RoBERTa (L) | DeBERTa (L) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LMs | PLM | 58.32 | 59.88 | 60.51 | 60.80 | 60.40 | 61.10 | 61.68 | 59.63 | 60.70 | 61.22 | 61.96 |
| GNNs | GCN | 87.43 | 84.13 | 88.37 | 86.10 | 87.86 | 88.50 | 82.30 | 84.79 | 87.38 | 88.56 | 85.75 |
| | GAT | 88.57 | 87.86 | 89.07 | 88.14 | 88.63 | 88.80 | 83.42 | 88.07 | 88.64 | 88.88 | 87.40 |
| | SAGE | 87.90 | 86.43 | 88.67 | 86.97 | 88.30 | 88.87 | 82.81 | 87.16 | 88.35 | 88.66 | 86.85 |
| | RevGAT | 88.66 | 87.92 | 89.32 | 88.22 | 88.72 | 88.91 | 84.12 | 88.45 | 88.86 | 89.03 | 87.56 |
| | NFormer | 86.81 | 85.43 | 87.96 | 86.05 | 86.95 | 87.12 | 81.56 | 86.26 | 87.23 | 87.65 | 84.46 |
| | GIN | 78.07 | 83.18 | 83.18 | 73.36 | 83.02 | 83.98 | 71.53 | 71.64 | 81.20 | 84.09 | 75.13 |
| | JKNet | 85.05 | 85.31 | 85.31 | 79.37 | 85.08 | 83.91 | 55.15 | 73.67 | 83.32 | 83.66 | 70.39 |
| | APPNP | 83.62 | 84.96 | 84.96 | 78.15 | 84.51 | 84.97 | 62.09 | 76.51 | 83.02 | 83.83 | 74.90 |
| | MoNet | 78.30 | 77.76 | 77.76 | 62.08 | 77.67 | 74.53 | 47.68 | 60.24 | 68.18 | 70.63 | 59.67 |
| | MLP | 46.02 | 53.76 | 53.76 | 36.21 | 50.85 | 55.11 | 42.45 | 36.25 | 47.59 | 55.03 | 46.74 |
| Co-Training | GCN | 83.93 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| | SAGE | 86.04 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| TPT | TMLM | 58.32 | 60.20 | 61.42 | 61.53 | 63.43 | 61.30 | 61.74 | 61.99 | 62.41 | 63.12 | 63.43 |
| | TDK | 58.18 | 60.42 | 63.38 | 63.13 | 62.95 | 63.80 | 64.97 | 60.96 | 62.49 | 62.92 | 62.95 |
| | TCL | 58.80 | 61.04 | 66.50 | 67.02 | 64.93 | 67.94 | 70.08 | 61.50 | 63.34 | 64.55 | 64.93 |
| | TMDC | 58.86 | 61.17 | 66.94 | 68.24 | 69.45 | 67.50 | 69.71 | 65.02 | 65.74 | 65.79 | 66.08 |

**Ele-Computers (F1 score)**

| Methods | | BERT-Tiny (S) | ELECTRA (S) | DistilBERT (S) | ELECTRA (B) | BERT (B) | RoBERTa (B) | DeBERTa (B) | ELECTRA (L) | BERT (L) | RoBERTa (L) | DeBERTa (L) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LMs | PLM | 44.55 | 47.65 | 51.47 | 48.77 | 52.53 | 53.39 | 52.10 | 48.46 | 51.57 | 55.28 | 55.04 |
| GNNs | GCN | 80.88 | 75.73 | 82.24 | 79.11 | 81.03 | 82.13 | 72.97 | 78.10 | 80.93 | 82.85 | 76.56 |
| | GAT | 83.49 | 83.52 | 84.14 | 83.35 | 83.99 | 83.14 | 78.30 | 83.33 | 83.55 | 83.43 | 83.06 |
| | SAGE | 82.85 | 81.38 | 84.21 | 82.18 | 83.67 | 84.44 | 76.42 | 81.81 | 82.93 | 83.13 | 80.98 |
| | RevGAT | 83.65 | 83.76 | 84.55 | 83.56 | 84.43 | 84.69 | 79.02 | 83.46 | 83.69 | 87.79 | 83.51 |
| | NFormer | 80.57 | 75.12 | 80.26 | 77.56 | 80.01 | 81.16 | 71.03 | 76.56 | 79.65 | 81.51 | 75.55 |
| | GIN | 71.56 | 57.99 | 76.46 | 67.39 | 77.00 | 78.09 | 61.03 | 64.63 | 75.43 | 77.57 | 62.25 |
| | JKNet | 77.01 | 67.90 | 73.99 | 67.83 | 75.02 | 70.26 | 41.49 | 63.58 | 72.35 | 69.78 | 58.30 |
| | APPNP | 75.49 | 62.04 | 78.02 | 69.04 | 76.99 | 73.46 | 42.05 | 68.22 | 77.79 | 72.96 | 59.96 |
| | MoNet | 48.56 | 35.22 | 52.69 | 40.03 | 49.86 | 53.60 | 43.65 | 39.88 | 45.65 | 54.12 | 48.55 |
| | MLP | 32.20 | 16.23 | 40.06 | 21.00 | 37.41 | 41.22 | 26.21 | 19.88 | 34.25 | 42.34 | 32.73 |
| Co-Training | GCN | 80.79 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| | SAGE | 83.67 | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM | OOM |
| TPT | TMLM | 44.91 | 48.01 | 51.51 | 52.01 | 52.51 | 53.00 | 53.86 | 55.47 | 55.71 | 55.81 | 57.56 |
| | TDK | 44.50 | 48.48 | 52.97 | 55.80 | 55.15 | 56.53 | 59.39 | 50.44 | 55.53 | 55.94 | 54.05 |
| | TCL | 44.46 | 48.81 | 54.66 | 55.44 | 59.25 | 60.58 | 57.69 | 50.22 | 54.48 | 57.86 | 55.67 |
| | TMDC | 45.05 | 49.19 | 54.91 | 55.93 | 59.12 | 61.35 | 59.39 | 58.10 | 58.44 | 60.52 | 60.06 |
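The tables above report accuracy alongside F1 score. A minimal sketch of both metrics for a multi-class setting; we assume macro averaging (equal weight per class), which the paper does not state explicitly:

```python
import numpy as np

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def macro_f1(y_true, y_pred, n_classes):
    # Per-class F1, averaged with equal class weight
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
acc = accuracy(y_true, y_pred)
f1 = macro_f1(y_true, y_pred, n_classes=3)
```

Macro averaging explains why F1 is far more sensitive than accuracy to rare classes, matching the larger spreads seen in the F1 tables.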
# D.2 Experimental Results for Link Prediction

We further show the experimental results for link prediction in Table 10. We mainly follow the code used for link prediction in OGB, and choose GCN and GraphSAGE as the backbone models in the link prediction experiments.

Table 10: Link prediction results on CitationV8 and GoodReads.
The left four columns report CitationV8; the right four report GoodReads.

| Methods | Model | Hits@10 | Hits@50 | Hits@100 | MRR | Hits@10 | Hits@50 | Hits@100 | MRR |
|---|---|---|---|---|---|---|---|---|---|
| PLM | BERT-Tiny | 33.56 ± 1.56 | 48.15 ± 2.02 | 66.56 ± 0.56 | 41.23 ± 1.39 | 36.86 ± 2.04 | 52.45 ± 1.69 | 76.23 ± 1.11 | 42.15 ± 0.86 |
| | BERT-Base | 38.86 ± 2.53 | 57.53 ± 1.96 | 72.44 ± 0.98 | 44.56 ± 1.23 | 43.96 ± 2.26 | 60.87 ± 1.43 | 79.22 ± 0.46 | 44.43 ± 1.15 |
| GNN | T-GCN | 50.89 ± 3.56 | 74.26 ± 2.16 | 90.23 ± 0.89 | 60.79 ± 0.28 | 61.47 ± 4.65 | 84.14 ± 2.15 | 90.43 ± 0.60 | 69.44 ± 0.56 |
| | T-SAGE | 45.12 ± 3.26 | 66.23 ± 1.56 | 89.36 ± 0.99 | 54.64 ± 1.03 | 64.52 ± 3.18 | 82.65 ± 1.45 | 88.53 ± 0.61 | 74.36 ± 0.84 |
| | B-GCN | 50.39 ± 4.56 | 75.12 ± 2.56 | 90.16 ± 0.46 | 60.04 ± 0.86 | 55.42 ± 5.83 | 85.04 ± 2.34 | 91.49 ± 1.25 | 65.12 ± 0.48 |
| | B-SAGE | 44.12 ± 4.12 | 71.26 ± 1.67 | 89.12 ± 0.75 | 53.96 ± 1.24 | 54.05 ± 3.03 | 82.87 ± 0.89 | 89.61 ± 0.33 | 65.68 ± 1.13 |
| TCL | BERT-Tiny | 41.26 ± 1.49 | 57.26 ± 1.59 | 72.62 ± 0.96 | 47.26 ± 1.23 | 45.47 ± 2.53 | 61.56 ± 1.56 | 82.43 ± 0.49 | 55.12 ± 0.84 |
| | BERT-Base | 46.58 ± 1.69 | 65.77 ± 2.01 | 72.46 ± 0.56 | 52.78 ± 2.03 | 52.59 ± 2.27 | 65.97 ± 1.23 | 85.56 ± 0.34 | 61.21 ± 1.23 |
| | T-GCN | 65.23 ± 2.36 | 81.23 ± 1.36 | 92.56 ± 0.56 | 65.69 ± 0.42 | 69.58 ± 2.29 | 88.89 ± 0.56 | 93.12 ± 0.57 | 83.16 ± 1.04 |
| | T-SAGE | 61.89 ± 3.21 | 80.12 ± 1.46 | 90.23 ± 0.89 | 55.70 ± 0.15 | 70.28 ± 2.36 | 85.12 ± 0.47 | 90.38 ± 0.57 | 81.12 ± 1.23 |
| | B-GCN | 68.26 ± 2.66 | 84.56 ± 0.56 | 93.68 ± 0.26 | 70.16 ± 0.38 | 73.87 ± 3.36 | 92.82 ± 0.16 | 95.85 ± 0.15 | 85.12 ± 1.56 |
| | B-SAGE | 62.36 ± 3.46 | 80.56 ± 1.56 | 92.56 ± 0.68 | 60.18 ± 0.15 | 75.16 ± 2.26 | 90.74 ± 0.16 | 94.01 ± 0.11 | 82.15 ± 1.15 |
On the CitationV8 dataset, the parameters shared by GCN and GraphSAGE include epochs, model layers, hidden units, learning rate, and batch size, and their values are set to 200, {2, 3}, {128, 256}, {1e-04 ~ 1e-03}, and 65536. We validate model performance with the Hits@K and MRR metrics, and bold the best results in each row. In Table 10, PLM indicates that we use LMs directly for link prediction. In the GNN rows, we feed the features generated by BERT-Tiny and BERT-Base into GCN and SAGE. For TCL, we consider two approaches: the first pre-trains the LMs with the TCL task and then fine-tunes them on the downstream link prediction task; the second uses the TCL-pre-trained LMs to encode the text attributes. T-GCN (T-SAGE) denotes GNNs fed with node features obtained from BERT-Tiny, and B-GCN (B-SAGE) denotes GNNs fed with node features obtained from BERT-Base. When we use TCL to pre-train the language models, we find that BERT-Tiny and BERT-Base improve on MRR by 6.03 and 8.22, respectively. We also find that the features obtained using TCL-BERT-Base can significantly improve the results of downstream GNNs; for example, B-GCN improves by $10\%$ on average across all evaluation criteria.

On the GoodReads dataset, the parameters shared by GCN and GraphSAGE include epochs, model layers, hidden units, learning rate, and batch size, and their values are set to 200, {2}, {128}, {1e-04 ~ 5e-03}, and 524288. We again validate model performance with the Hits@K and MRR metrics and bold the best results in each row. From Table 10, we can see that after using TCL to pre-train the language models and obtaining the corresponding node features, significant improvements are obtained in all experiments. In addition to the improvement in performance, the variance of the models is also reduced in most experiments. This indicates that node features generated by a language model trained with a suitable pre-training task can not only enhance the performance of the downstream model but also improve its stability to some extent.

# D.3 Efficiency and Scalability of Co-Training Paradigm

Table 11: Effectiveness and scalability of co-training with TCL.
Column groups, left to right: BERT-Tiny Co-Training, BERT-Tiny TCL, BERT-Base Co-Training, BERT-Base TCL.

| Datasets | Acc | Memory | Time (min) | Acc | Memory | Time (min) | Acc | Memory | Time (min) | Acc | Memory | Time (min) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Arxiv | 73.57 | 76.27% | 44.0 | 71.55 | 27.59% | 7.0 | - | OOM | - | 74.87 | 70.73% | 130 |
| Children | 59.70 | 97.28% | 15.5 | 54.11 | 19.76% | 2.0 | - | OOM | - | 60.73 | 80.99% | 30 |
| History | 85.09 | 85.74% | 5.7 | 86.06 | 14.69% | 1.3 | - | OOM | - | 86.80 | 98.73% | 18 |
| Photo | 86.64 | 97.83% | 14.6 | 73.86 | 22.75% | 3.1 | - | OOM | - | 82.85 | 70.65% | 120 |
| Average | 76.25 | 89.38% | 19.95 | 71.40 | 21.20% | 3.4 | - | OOM | - | 76.31 | 80.28% | 74.50 |
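The approximate 4X memory and 6X training-time gaps between co-training and TCL quoted in the discussion can be recovered directly from the BERT-Tiny Average row of Table 11; a quick arithmetic check:

```python
# Average-row figures from Table 11 (BERT-Tiny columns)
cotrain_mem, tcl_mem = 89.38, 21.20     # GPU memory, % of a 32GB V100
cotrain_time, tcl_time = 19.95, 3.4     # total training time, minutes

mem_ratio = cotrain_mem / tcl_mem       # roughly 4.2x more memory
time_ratio = cotrain_time / tcl_time    # roughly 5.9x slower
```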
In this subsection, we present the accuracy, the GPU memory cost on a single 32GB V100, and the total training time (in minutes) of the co-training paradigm and the topological PLMs across the datasets. BERT-Tiny+SAGE and BERT-Base+SAGE are selected as the co-training approaches. As one of the best topological pre-training tasks, topological contrastive learning (TCL) is selected to enhance the PLMs. The experimental results are presented in Table 11. For the small language model BERT-Tiny, the co-training paradigm costs much more memory ($\sim 4X$) than TCL and is much slower ($\sim 6X$). With a larger language model such as BERT-Base, the co-training models run out of memory on a single 32GB V100. Thus, the proposed topological pre-training paradigm is more efficient and practical than co-training.

# D.4 Comparison between Shallow Text Encoders and PLMs

Traditional GNN pipelines generally encode the textual attributes of each node using a shallow model such as Skip-Gram. Therefore, we extend our node text encoding approach by incorporating shallow models, namely Skip-Gram [29] and GloVe [30]. Experimental results (accuracy) on the node classification task over the ogbn-arxiv-TA and Books-Children datasets are presented in Table 12. One can clearly see that GNNs equipped with deeper text encoders consistently outperform those with shallow encoders, verifying the importance of node attribute understanding.

Table 12: Node classification performance with shallow text encoders and PLMs. We bold the best result for each row.
The left five columns report ogbn-arxiv-TA; the right five report Books-Children.

| Text Attribute Encoder | | GCN | GAT | SAGE | RevGAT | NFormer | GCN | GAT | SAGE | RevGAT | NFormer |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Shallow | Skip-Gram | 71.97 | 72.22 | 72.02 | 73.42 | 71.03 | 56.23 | 55.84 | 56.49 | 57.13 | 56.03 |
| | GloVe | 72.12 | 72.54 | 72.48 | 73.51 | 72.04 | 57.02 | 56.58 | 57.22 | 58.12 | 56.86 |
| PLMs | DistilBERT | 73.39 | 73.48 | 74.48 | 74.68 | 73.56 | 58.19 | 57.91 | 59.33 | 59.28 | 58.03 |
| | BERT-Base | 73.30 | 73.40 | 74.14 | 74.59 | 72.80 | 58.11 | 57.70 | 58.74 | 58.67 | 57.42 |
| | RoBERTa-Base | 73.56 | 73.38 | 74.52 | 74.82 | 73.12 | 58.62 | 57.83 | 58.97 | 59.01 | 57.26 |
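The shallow baselines in Table 12 build node features by pooling pretrained word vectors over the node's text. A minimal sketch of this mean-pooling scheme (the toy vocabulary stands in for real Skip-Gram/GloVe embeddings, which the benchmark would load from pretrained files):

```python
import numpy as np

# Toy pretrained word vectors standing in for Skip-Gram/GloVe embeddings
vocab = {
    "graph": np.array([1.0, 0.0]),
    "neural": np.array([0.0, 1.0]),
    "network": np.array([1.0, 1.0]),
}

def shallow_node_feature(text, dim=2):
    # Mean-pool the word vectors of the node's text attribute;
    # out-of-vocabulary tokens are simply skipped.
    vecs = [vocab[w] for w in text.lower().split() if w in vocab]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

feat = shallow_node_feature("Graph neural network")
```

Unlike a PLM encoder, this representation ignores word order and context, which is consistent with the gap observed against the PLM rows.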
# D.5 Topological Pre-training under Semi-supervised and Few-shot Learning Scenarios

To provide a more comprehensive evaluation of, and deeper insights into, our topological pre-training strategies, we evaluate them in different scenarios.

# D.5.1 Semi-supervised Learning

To evaluate the performance of the various methods in semi-supervised settings, we vary the training ratio from $20\%$ to $100\%$; a ratio of $20\%$ means that only $20\%$ of the training samples are used during model training. BERT-Base is selected as the foundational text encoder. Detailed experimental results on the node classification task across four datasets are presented in Table 13. Notably, as the training ratio decreases, the benefits of the topological pre-training approaches become even more significant. Among these strategies, TMDC demonstrates superior performance across all datasets in the semi-supervised setting.

Table 13: Topological pre-training methods under semi-supervised scenarios with different training ratios.
The left five columns report Arxiv; the right five report History.

| Methods | 20% | 40% | 60% | 80% | 100% | 20% | 40% | 60% | 80% | 100% |
|---|---|---|---|---|---|---|---|---|---|---|
| PLM | 57.76 | 62.56 | 67.12 | 70.15 | 72.96 | 70.86 | 75.61 | 80.18 | 83.28 | 86.09 |
| TMLM | 62.15 | 65.51 | 69.01 | 71.76 | 73.97 | 74.39 | 77.70 | 81.21 | 84.03 | 86.24 |
| TDK | 63.86 | 66.78 | 70.01 | 71.95 | 74.23 | 75.46 | 78.93 | 82.17 | 84.18 | 86.46 |
| TCL | 64.98 | 67.89 | 71.22 | 73.56 | 74.87 | 77.12 | 80.53 | 83.46 | 85.49 | 86.80 |
| TMDC | 65.48 | 68.91 | 72.92 | 75.09 | 76.11 | 78.24 | 81.48 | 84.23 | 85.80 | 86.82 |

The left five columns report Children; the right five report Photo.

| Methods | 20% | 40% | 60% | 80% | 100% | 20% | 40% | 60% | 80% | 100% |
|---|---|---|---|---|---|---|---|---|---|---|
| PLM | 44.21 | 49.12 | 53.80 | 56.97 | 59.91 | 56.61 | 63.28 | 69.45 | 73.86 | 77.53 |
| TMLM | 48.52 | 51.88 | 55.38 | 58.13 | 60.34 | 66.72 | 70.08 | 73.58 | 76.33 | 78.54 |
| TDK | 49.46 | 52.98 | 56.21 | 58.15 | 60.43 | 70.07 | 73.59 | 76.82 | 78.76 | 81.04 |
| TCL | 51.21 | 54.67 | 57.59 | 59.55 | 60.73 | 73.20 | 76.66 | 79.58 | 81.54 | 82.85 |
| TMDC | 52.88 | 56.17 | 58.91 | 60.41 | 61.43 | 75.54 | 78.83 | 81.57 | 83.07 | 84.09 |
# D.5.2 Few-shot Learning

Table 14: Topological pre-training methods under few-shot scenarios.
| Methods | Arxiv 3-shot | Arxiv 5-shot | Children 3-shot | Children 5-shot | History 3-shot | History 5-shot | Sports 3-shot | Sports 5-shot |
|---|---|---|---|---|---|---|---|---|
| PLM | 37.76 | 41.56 | 26.55 | 30.15 | 32.52 | 37.78 | 42.56 | 46.22 |
| TMLM | 40.08 | 45.21 | 31.86 | 36.89 | 35.51 | 40.15 | 44.18 | 50.15 |
| TDK | 41.15 | 47.42 | 34.56 | 39.26 | 36.58 | 42.26 | 45.69 | 52.68 |
| TCL | 43.26 | 49.58 | 38.26 | 42.12 | 38.95 | 44.12 | 48.58 | 55.64 |
| TMDC | 45.68 | 51.52 | 40.05 | 44.62 | 40.86 | 46.69 | 50.26 | 58.95 |
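The K-shot setting in Table 14 keeps exactly K labeled training examples per class. A minimal sketch of how such a split can be drawn (the function name and seed handling are ours, not the benchmark code):

```python
import random
from collections import defaultdict

def k_shot_split(labels, k, seed=0):
    """Pick exactly k training examples per class, as in the
    3-shot / 5-shot settings of Table 14."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    train = []
    for idxs in by_class.values():
        train.extend(rng.sample(idxs, k))
    return sorted(train)

labels = [0] * 10 + [1] * 10 + [2] * 10
train_idx = k_shot_split(labels, k=3)
```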
We undertake few-shot learning experiments on four datasets. The term "K-shot" denotes that only K samples per category are available in the training set. Based on the results in Table 14, topological pre-training consistently enhances the performance of LMs across diverse few-shot scenarios.

# D.6 Experimental Results on Large-scale Text-attributed Graphs

Considering the existence of very large text-attributed graphs, we further conduct preliminary experiments on the ogbn-papers100M dataset [34], with 111,059,956 nodes and 1,615,685,872 edges. Given the substantial scale of this dataset, which poses challenges for many existing GNNs, we choose a set of scalable GNNs (SGC [55], GAMLP [56] and SIGN [57]). Additionally, we select several prominent LMs, including BERT-Tiny, ELECTRA, and DistilBERT. The different training paradigms are systematically evaluated on the node classification task. Experimental results (accuracy) are shown in Table 15. GAMLP achieves the best performance among the GNN models due to its adaptive node-wise feature combination. Furthermore, it is noteworthy that the topological pre-training (TPT) methods obtain the SOTA performance on this large dataset, demonstrating the superiority of the proposed pre-training tasks.

Table 15: Node classification results on ogbn-papers100M.
| LMs | PLM | SGC | SIGN | GAMLP | MLP | TMLM | TDK | TCL | TMDC |
|---|---|---|---|---|---|---|---|---|---|
| BERT-Tiny | 62.11 | 62.51 | 64.26 | 65.12 | 49.32 | 63.55 | 63.98 | 64.78 | 65.31 |
| ELECTRA-Small | 61.01 | 61.07 | 63.06 | 64.98 | 47.26 | 62.24 | 63.13 | 63.98 | 65.16 |
| DistilBERT | 64.18 | 63.58 | 65.81 | 67.73 | 51.26 | 65.36 | 65.89 | 66.76 | 68.12 |
Table 16: Statistics of the Cresci-2015 and TwiBot-20 datasets.
| Datasets | User | Tweet | Edge | Human | Bot |
|---|---|---|---|---|---|
| Cresci-2015 | 5,301 | 2,827,757 | 7,086,134 | 1,950 | 3,351 |
| TwiBot-20 | 229,580 | 33,488,192 | 33,716,171 | 5,237 | 6,589 |
+ +# D.7 Text-attributed Graphs in Social Networks + +In the realm of text-attributed graphs (TAGs), two prominent and prevalent categories are academic and e-commerce graphs. For example, all text-attributed graphs within the OGB benchmark [34] belong to these two domains. To boost the impact of our benchmark, we have broadened the spectrum of TAG domains by introducing a new dimension, social networks, into our benchmark. To this end, we have incorporated two text-attributed graphs sourced from the widely-used social platform Twitter, named Cresci-2015 [58] and TwiBot-20 [59]. These two datasets are collected for social bot detection. Each node corresponds to a user within Twitter, intrinsically linked to the tweets they have published. The underlying graph topology is shaped by the relationships (e.g., following relations) among users. The labels attributed to each node signify whether the respective user is classified as a bot or not. It is worth noting that, owing to privacy concerns intrinsic to social network data, we provide a summarized representation of the outcomes and insights. The original datasets are available upon request through email, pending the acceptance of this paper. The detailed statistics of the datasets are presented in Table 16. + +Table 17: Node classification results on Cresci-2015 and TwiBot-20 dataset (accuracy) + +
"Co-" columns denote the co-training paradigm with the corresponding GNN backbone.

| Datasets | LM | PLM | GCN | SAGE | GAT | Co-GCN | Co-SAGE | Co-GAT | TMLM | TDK | TCL | TMDC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Cresci-2015 | BERT-Tiny | 91.0 | 93.3 | 93.6 | 94.1 | 93.6 | 93.9 | OOM | 91.5 | 92.1 | 93.0 | 94.3 |
| | ELECTRA-Small | 92.0 | 93.0 | 93.4 | 93.9 | OOM | OOM | OOM | 92.3 | 92.9 | 93.8 | 94.6 |
| | DistilBERT | 95.5 | 95.3 | 95.5 | 96.0 | OOM | OOM | OOM | 95.9 | 96.3 | 97.1 | 97.9 |
| | RoBERTa-Base | 97.0 | 96.2 | 96.6 | 96.9 | OOM | OOM | OOM | 97.5 | 97.9 | 98.3 | 98.8 |
| TwiBot-20 | BERT-Tiny | 68.2 | 76.8 | 79.2 | 81.1 | OOM | OOM | OOM | 69.9 | 72.6 | 75.6 | 76.9 |
| | ELECTRA-Small | 69.5 | 75.6 | 78.8 | 80.1 | OOM | OOM | OOM | 71.6 | 73.0 | 76.4 | 77.1 |
| | DistilBERT | 76.5 | 80.9 | 84.6 | 85.4 | OOM | OOM | OOM | 77.6 | 78.1 | 79.3 | 81.2 |
| | RoBERTa-Base | 78.6 | 81.6 | 84.9 | 85.9 | OOM | OOM | OOM | 80.1 | 81.2 | 82.3 | 83.4 |
The node classification results (accuracy) are demonstrated in Table 17. "OOM" stands for "out of memory" on a 32GB V100. On the Cresci-2015 dataset, we observe a noteworthy trend: PLM-based methods exhibit comparable or even superior performance to GNN-based methods, particularly as the number of PLM parameters increases, with RoBERTa consistently surpassing all three GNN models. On the TwiBot-20 dataset, the graph topology appears to be more important, and the Graph Attention Network (GAT) emerges as the top-performing model. Additionally, the topological pre-training strategies bring performance enhancements on both datasets, underscoring their efficacy in advancing model capabilities.

# D.8 Evaluations of LLMs on TAGs

In our benchmark, we have mainly studied PLMs based on the encoder architecture, like BERT [10]. However, most of the recently and rapidly developing models in the NLP field are LLMs based on the decoder architecture, represented by GPT [32]. Following the experimental workflow of TAPE [6], we leverage LLMs to generate high-quality node features for TAGs. From the perspective of LMs, we incorporate recent and prominent large language models as baselines, including T5 (11B) [60], LaMDA (137B) [61], GPT-3 (175B) [28] and PaLM (540B) [62]. In brief, we use the LLMs' inference APIs to generate explanations of the original text; these explanations are incorporated into the original text for fine-tuning the respective LMs. Finally, we extract features from the LMs and use them to train the downstream GNNs. DistilBERT is selected as the feature-extractor LM, and GCN, GAT, and SAGE are selected as the downstream GNNs. Experimental results are presented in Table 18. PaLM consistently attains the most impressive performance across all downstream GNNs and datasets.

Table 18: Node classification experiments on the three datasets.
The rows under "LLMs" denote using LLMs to generate explanations about the raw text to fine-tune the LM and generate node features for the downstream GNNs. The row "Raw" denotes using the original LMs to generate node features.
The three column groups report Arxiv, Children, and Photo, respectively.

| Methods | Model | GCN | GAT | SAGE | GCN | GAT | SAGE | GCN | GAT | SAGE |
|---|---|---|---|---|---|---|---|---|---|---|
| Raw | DistilBERT | 73.39 | 73.48 | 74.48 | 58.19 | 57.91 | 59.33 | 82.91 | 83.75 | 83.50 |
| LLMs | T5 | 73.51 | 73.80 | 74.22 | 58.61 | 58.14 | 59.12 | 83.76 | 84.75 | 84.33 |
| | LaMDA | 74.06 | 74.55 | 74.92 | 59.67 | 59.56 | 60.86 | 84.23 | 84.95 | 84.59 |
| | GPT-3 | 74.41 | 74.81 | 75.34 | 60.12 | 60.01 | 61.12 | 84.56 | 85.23 | 84.78 |
| | PaLM | 75.22 | 76.43 | 76.72 | 61.59 | 61.26 | 62.23 | 85.45 | 85.95 | 85.69 |
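The TAPE-style workflow behind Table 18 appends an LLM-generated explanation to each node's raw text before fine-tuning the smaller LM. A minimal sketch; `query_llm` is a hypothetical stand-in for a real inference API call, and the prompt wording is illustrative:

```python
def query_llm(prompt: str) -> str:
    # Placeholder: a real system would call a hosted LLM inference API here.
    return "This paper likely belongs to cs.LG because it studies training."

def augment_node_text(raw_text: str) -> str:
    # Ask the LLM to explain the likely category of the node text,
    # then concatenate the explanation with the original text.
    prompt = f"Explain which category this text belongs to and why:\n{raw_text}"
    explanation = query_llm(prompt)
    return raw_text + "\n[Explanation] " + explanation

augmented = augment_node_text("A study of distributed training of neural nets.")
```

The augmented text is what the feature-extractor LM (DistilBERT here) is fine-tuned on before its embeddings are handed to the downstream GNNs.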
# D.9 Study on the Selection of Node Attributes

In our previous experiments, we observed that LMs usually perform much better on datasets that use product descriptions as text attributes than on those that use product reviews. To make the effect of node attribute selection on different models more explicit, we reconstruct two datasets with product descriptions as the text attribute for Ele-Photo and Ele-Computers, where LMs previously performed poorly. In Table 19, datasets labeled "RW" incorporate user reviews as node attributes, while those labeled "DS" employ product descriptions. The different training paradigms are systematically evaluated on the node classification task, with BERT-Base as the foundational text encoder. Evidently, the performance of PLMs when utilizing descriptions as node attributes is substantially enhanced compared to reviews. This observation underscores the pivotal role that the selection of node attributes plays in achieving desirable TAG representation learning.

Table 19: Experimental results (accuracy) of node classification on datasets with different text attributes.
| Datasets | LMs (PLM) | GNNs (SAGE) | GNNs (GCN) | GNNs (GAT) | TPT (TMLM) | TPT (TDK) | TPT (TCL) | TPT (TMDC) |
|---|---|---|---|---|---|---|---|---|
| Photo-RW | 77.53 | 83.27 | 82.70 | 83.74 | 78.54 | 81.04 | 82.85 | 84.09 |
| Photo-DS | 85.07 | 84.86 | 83.72 | 85.16 | 86.15 | 86.49 | 87.26 | 88.15 |
| Computers-RW | 61.96 | 88.30 | 87.86 | 88.63 | 63.43 | 82.85 | 64.93 | 69.45 |
| Computers-DS | 86.41 | 88.90 | 88.26 | 89.13 | 87.56 | 87.26 | 88.96 | 89.53 |
# E Broader Impact

Representation learning on text-attributed graphs is a fast-growing and promising research field that covers a wide range of applications. We started this benchmark to draw more researchers' attention to this common data type. The proposed benchmark CS-TAG can significantly facilitate the development of text-attributed graph learning. CS-TAG deeply and extensively explores the paradigm of combining pre-trained language models (PLMs) with graph neural networks (GNNs), and provides a comprehensive evaluation over multiple large constructed datasets. Nevertheless, many research gaps still need to be bridged. First, self-supervised and unsupervised learning on text-attributed graphs, which play an important role in graph data mining research, are not included in CS-TAG. Second, CS-TAG does not pay much attention to the link prediction task, which has many real-world applications such as recommender systems and drug discovery. Third, the interpretability of text-attributed graph learning is not discussed here. Compared with the vectorized features in other graphs, such as chemical molecules, text attributes can be directly understood by humans. Therefore, text-attributed graphs are more human-intelligible and have promising potential for interpretability research.

In the future, we will keep track of newly emerging problems in text-attributed graphs and provide more solid experimental results and detailed analyses to improve CS-TAG consistently. This is an ongoing effort, and we strive to continuously include more datasets and evaluate different methods to advance the field.
\ No newline at end of file diff --git a/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/images.zip b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5a5705baa0b613a403a12714adbc2d44811a7160 --- /dev/null +++ b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6610935ffbfe25a55951660c494adf49be38043adcc23b00c77f3087caa8bc09 +size 1987651 diff --git a/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/layout.json b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..bbd5d9cce63572604d387bf8dc988d6f53f7bca0 --- /dev/null +++ b/acomprehensivestudyontextattributedgraphsbenchmarkingandrethinking/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a826814077437dadd99e61fce404acd24ad095b15e3620bfef1148f2dcd8fcd +size 628970 diff --git a/acomputationallyefficientsparsifiedonlinenewtonmethod/1aa9ee29-4f97-4d88-89b9-2d6f8db9bac1_content_list.json b/acomputationallyefficientsparsifiedonlinenewtonmethod/1aa9ee29-4f97-4d88-89b9-2d6f8db9bac1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..cfc68ac4fae5e4f177fe7e2435d65fe78f8a444c --- /dev/null +++ b/acomputationallyefficientsparsifiedonlinenewtonmethod/1aa9ee29-4f97-4d88-89b9-2d6f8db9bac1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8c0051c35ea83a0cfc94747ab62e5ea353889577c813679112a061daabbb38e +size 200488 diff --git a/acomputationallyefficientsparsifiedonlinenewtonmethod/1aa9ee29-4f97-4d88-89b9-2d6f8db9bac1_model.json b/acomputationallyefficientsparsifiedonlinenewtonmethod/1aa9ee29-4f97-4d88-89b9-2d6f8db9bac1_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..9613637775c525692b26bb4ca668e6c9c0f8a8de --- /dev/null +++ b/acomputationallyefficientsparsifiedonlinenewtonmethod/1aa9ee29-4f97-4d88-89b9-2d6f8db9bac1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18ebed5876362a1a1e43482f9c9f86836a43b3e87cc653f0e2d2f2ece1605ea9 +size 235598 diff --git a/acomputationallyefficientsparsifiedonlinenewtonmethod/1aa9ee29-4f97-4d88-89b9-2d6f8db9bac1_origin.pdf b/acomputationallyefficientsparsifiedonlinenewtonmethod/1aa9ee29-4f97-4d88-89b9-2d6f8db9bac1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..38bdc97a11a8a41372d0d0ffa53ee124cd98067f --- /dev/null +++ b/acomputationallyefficientsparsifiedonlinenewtonmethod/1aa9ee29-4f97-4d88-89b9-2d6f8db9bac1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eaf4205915e1a4e17cee1b39e29ef683b3406d48b0e74aaf933816a13720e03c +size 1544184 diff --git a/acomputationallyefficientsparsifiedonlinenewtonmethod/full.md b/acomputationallyefficientsparsifiedonlinenewtonmethod/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0688ccbad6b27977606c6bd3e56add46ad0350d7 --- /dev/null +++ b/acomputationallyefficientsparsifiedonlinenewtonmethod/full.md @@ -0,0 +1,993 @@ +# A Computationally Efficient Sparsified Online Newton Method + +Devvrit *† + +Department of Computer Science + +The University of Texas at Austin + +devvrit.03@gmail.com + +Sai Surya Duvvuri* + +Department of Computer Science + +The University of Texas at Austin + +subramanyamdvss@gmail.com + +Rohan Anil + +Google DeepMind + +rohananil@google.com + +Vineet Gupta + +Google + +vineet@google.com + +Cho-Jui Hsieh + +CS Department, UCLA & Google + +chohsieh@cs.ucla.edu + +Inderjit Dhillon + +Google + +isd@google.com + +# Abstract + +Second-order methods hold significant promise for enhancing the convergence of deep neural network training; however, their large memory and computational 
demands have limited their practicality. Thus there is a need for scalable second-order methods that can efficiently train large models. In this paper, we introduce the Sparsified Online Newton (SONew) method, a memory-efficient second-order algorithm that yields a sparsified yet effective preconditioner. The algorithm emerges from a novel use of the LogDet matrix divergence measure; we combine it with sparsity constraints to minimize regret in the online convex optimization framework. Empirically, we test our method on large scale benchmarks of up to 1B parameters. We achieve up to $30\%$ faster convergence, $3.4\%$ relative improvement in validation performance, and $80\%$ relative improvement in training loss, in comparison to memory efficient optimizers including first order methods. Powering the method is a surprising fact - imposing structured sparsity patterns, like tridiagonal and banded structure, requires little to no overhead, making it as efficient and parallelizable as first-order methods. In wall-clock time, tridiagonal SONew is only about $3\%$ slower per step than first-order methods but gives overall gains due to much faster convergence. In contrast, one of the state-of-the-art (SOTA) memory-intensive second-order methods, Shampoo, is unable to scale to large benchmarks. Additionally, while Shampoo necessitates significant engineering efforts to scale to large benchmarks, SONew offers a more straightforward implementation, increasing its practical appeal. SONew code is available at: https://github.com/devvrit/SONew + +# 1 Introduction + +Stochastic first order methods which use the negative gradient direction to update parameters have become the standard for training deep neural networks (DNNs). Gradient-based preconditioning involves finding an update direction, by multiplying the gradient with a preconditioner matrix carefully chosen from gradients observed in previous iterations, to improve convergence. 
(Full-matrix) Adagrad [15], the online Newton method [25], and natural gradient descent [3] use a full-matrix preconditioner, but computing and storing the full matrix is infeasible when there are millions of parameters. Thus, diagonal versions such as diagonal Adagrad, Adam [33], and RMSprop [28] are now widely used to train DNNs due to their scalability.

Several higher-order methods have previously been applied to deep learning ([24, 5, 23, 38]). All these methods use Kronecker product factorizations that reduce computational and storage costs to make them feasible for training neural networks. However, to precondition a $d_{1} \times d_{2}$ parameter matrix, these methods require matrix inverse operations, which take $\mathcal{O}(d_1^3 + d_2^3)$ time and $\mathcal{O}(d_1^2 + d_2^2)$ space. In comparison, first-order methods use $\mathcal{O}(d_1 d_2)$ time and memory, which is linear in the number of parameters. For instance, when $d_{1} = k d_{2}$ , the memory used by Shampoo, $d_{1}^{2} + d_{2}^{2}$ floating-point numbers, is $\mathcal{O}(k)$ times the number of parameters, which can be arbitrarily large depending on $k$ . This calls for further research in developing efficient second-order optimization techniques to train DNNs with memory and time complexity linear in the number of parameters.
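To make the memory comparison concrete, a quick back-of-the-envelope calculation (our sketch, using only the complexities quoted above):

```python
# Preconditioner-statistics memory (in floats) for a d1 x d2 parameter
# matrix: Shampoo keeps d1^2 + d2^2 numbers, while the parameter count
# itself is d1 * d2.
def shampoo_floats(d1: int, d2: int) -> int:
    return d1 * d1 + d2 * d2

def num_params(d1: int, d2: int) -> int:
    return d1 * d2

# With d1 = k * d2 the overhead ratio is (k^2 + 1) / k, i.e. O(k):
d2, k = 1000, 4
d1 = k * d2
ratio = shampoo_floats(d1, d2) / num_params(d1, d2)
assert ratio == (k * k + 1) / k  # 4.25x the parameter count for k = 4
```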
We take a novel approach of minimizing the second term while regularizing two successive preconditioners to be close in the LogDet matrix divergence measure [34] (see Section 3 for the intuition behind choosing LogDet divergence). This analysis naturally yields us an Online Newton method [25]. To make it computationally efficient, we further sparsify the preconditioner by finding a sparse approximation that is close in LogDet divergence. Thus we are consistent in using the same measure (LogDet divergence) in both the regularization and sparsification steps. This gives us our SONew method, which achieves linear complexity by leveraging structured sparsity patterns, such as tridiagonal and banded, in the preconditioner. This is unlike most existing online Newton methods that require quadratic space and cubic time complexity. By making each step linear time, the SONew method can be applied to train modern DNNs as efficiently as first order methods. Further, our method is embarrassingly parallelizable thus making negligible the overhead of computing the preconditioner. We also show that introducing sparsity allows us to reduce the condition number of the problem dynamically to improve numerical stability. + +We strengthen the relationship between sparse LogDet divergence minimization and online convex optimization by establishing an optimal $\mathcal{O}(\sqrt{T})$ regret upper bound for tridiagonal sparsity pattern. In our experiments on an MLP Autoencoder and Graph Neural Network (GNN), we found that our method outperformed first-order methods in terms of training loss within the same training time, while Shampoo (second-order method) takes significantly longer. In our experiments on Vision Transformers on Imagenet and GNN on OGBG-molpcba, we achieve a target validation performance using $10\%$ and $30\%$ fewer iterations respectively compared to Adam, the SOTA optimizer for both benchmarks. 
Furthermore, using the same number of iterations as Adam we observe $0.7\%$ and $3.4\%$ relative improvement for ViT and GNN respectively in validation performance. From an optimization point of view, SONew achieves $9\%$ and $80\%$ better relative training loss for ViT and GNN respectively. It is worth noting that Shampoo statistics required $\sim 7 \times \# \text{params}$ for ViT whereas tridiag-SONew uses only $2 \times \# \text{params}$ for its statistics. We also test another recently proposed memory efficient second order optimizer, rfdSON [37], but found its performance suboptimal to the best performing first order method. Owing to SONew's scalability, we train a Large Language Model (LLM) with 1 billion parameters and compare it with AdaFactor [45], a popularly used first order optimizer to train LLMs [11]. SONew achieves the same performance as AdaFactor using $26\%$ fewer steps, resulting in a $1.35 \times$ faster training. When using the same number of steps, SONew obtained a $1.7\%$ relative better train loss. In terms of implementation, SONew is just a few lines of code (Equation (13)) without complex engineering challenges, rendering it even more useful and practical. + +# 2 Background + +The inner product between matrices is defined as $\langle A, B \rangle = \operatorname{Tr}(A^T B)$ , where $\operatorname{Tr}(\cdot)$ denotes the matrix trace. The Frobenius norm of a matrix $A$ is $\|A\|_F = \sqrt{\operatorname{Tr}(A^TA)}$ , while its spectral norm is $\|A\|_2 = \max_x \|Ax\|_2 / \|x\|_2$ . We use $I_n \in \mathbb{R}^{n \times n}$ to denote an identity matrix. We use $S_n$ , $S_n^{++}$ to denote the set of symmetric, and positive definite matrices respectively. The generalized norm of a + +vector $x \in \mathbb{R}^n$ with respect to matrix $A \in S_n^{++}$ is defined as $\| x \|_A = \sqrt{x^TAx}$ . 
We use $\operatorname*{det}(A)$ to denote the determinant of matrix $A$ , and $\mathrm{diag}(A)$ to denote the diagonal matrix with $\mathrm{diag}(A)_{ii} = A_{ii}$ . We use $\mathcal{G}$ and $\tilde{\mathcal{G}}$ to denote a graph and its sub-graph with a vertex set $[n] = \{1, \ldots, n\}$ . Let $E_{\mathcal{G}}$ denote the set of edges in graph $\mathcal{G}$ , and $\mathrm{neig}_{\mathcal{G}}(i)$ denote neighbours of vertex $i$ in graph $\mathcal{G}$ . A sparse symmetric matrix $A \in \mathbb{R}^{n \times n}$ follows a sparsity structure graph $\mathcal{G}$ if $A_{i,j} = 0 \forall (i,j) \notin E_{\mathcal{G}}$ . Note that the set of all such matrices forms a linear subspace. We use $S_n(\mathcal{G})^{++}$ to denote the set of positive definite matrices with sparsity structure given by graph $\mathcal{G}$ , i.e., if $X \in S_n(\mathcal{G})^{++}$ , then $X_{ij} = 0 \forall (i,j) \notin E(\mathcal{G})$ . $S_n(\mathcal{G})^{++}$ is an open convex set. Given an index set $I = \{i_1, i_2, \ldots, i_n\}$ , we use $A_{II}$ to denote the corresponding principal sub-matrix of $A$ . + +# 2.1 LogDet matrix divergence + +Let $\phi : S_n^{++} \to \mathbb{R}$ be a strictly convex, differentiable function. The Bregman matrix divergence between $X, Y \in S_n^{++}$ is defined as [8, 34]: $\mathrm{D}_{\phi}(X,Y) = \phi(X) - \phi(Y) - \mathrm{Tr}(\nabla \phi(Y)^T(X - Y))$ . Since $\phi$ is convex, $\mathrm{D}_{\phi}(X,Y) \geq 0$ for all $X,Y \in S_n^{++}$ . For example if $\phi(X) = \|X\|_F^2$ , the corresponding Bregman divergence $\mathrm{D}_{\phi}(X,Y) = \|X - Y\|_F^2$ is the squared Frobenius distance. In this paper, we extensively use the convex function $\phi(X) = -\log \det(X)$ ; the corresponding divergence measure $\mathrm{D}_{\ell\mathrm{d}}(X,Y)$ is called the LogDet matrix divergence: + +$$ +\mathrm {D} _ {\ell \mathrm {d}} (X, Y) = - \log \det \left(X Y ^ {- 1}\right) + \operatorname {T r} \left(X Y ^ {- 1}\right) - n. 
\tag {1} +$$ + +The LogDet divergence is scale invariant to invertible matrices $A$ , i.e. $\mathrm{D}_{\ell \mathrm{d}}(A^T X A, A^T Y A) = \mathrm{D}_{\ell \mathrm{d}}(X, Y)$ . LogDet divergence can be written in terms of eigendecompositions of $X = V\Sigma V^T$ and $Y = U\Theta U^T$ [34]: + +$$ +\mathrm {D} _ {\ell \mathrm {d}} (X, Y) = \sum_ {i} \sum_ {j} \left(v _ {i} ^ {T} u _ {j}\right) ^ {2} \left(\sigma_ {i} / \theta_ {j} - \log \left(\sigma_ {i} / \theta_ {j}\right) - 1\right). \tag {2} +$$ + +These two properties are later used in Section 3 to highlight the significance of LogDet divergence in our algorithm. + +# 3 SONew: Sparsified Online Newton Method + +We now present our proposed algorithm SONew. + +# 3.1 Regret minimization via LogDet divergence + +We set up our problem under the online convex optimization framework (OCO) [43, 26], where at each round the learner makes a prediction $w_{t}$ and receives a convex loss $f_{t}(w_{t})$ and gradient $g_{t} = \nabla f_{t}(w_{t})$ as feedback. The goal of the learner is to reduce regret $R_{T}$ by predicting $w_{t}$ so that a low aggregate loss $\sum_{t=1}^{T} f_{t}(w_{t})$ is achieved compared to the best possible, $w^{*} = \arg \min_{w} \sum_{t=1}^{T} f_{t}(w)$ . Formally, regret is given by + +$$ +R _ {T} (w _ {1}, \dots , w _ {T}) = \sum_ {t = 1} ^ {T} f _ {t} (w _ {t}) - \sum_ {t = 1} ^ {T} f _ {t} (w ^ {*}). +$$ + +Using [10], $R$ regret in online setting yields $R / T$ convergence rate in the stochastic setting. To upper bound this regret, we proceed as in [26] by analyzing the error in the iterates for the update $w_{t + 1} \coloneqq w_t - \eta X_t g_t$ , where $X_t \in \mathbb{R}^{n \times n}$ . Then $\| w_{t + 1} - w^* \|_{X_t^{-1}}^2 = \| w_t - \eta X_t g_t - w^* \|_{X_t^{-1}}^2 = \| w_t - w^* \|_{X_t^{-1}}^2 + \eta^2 g_t^T X_t g_t - 2\eta (w_t - w^*)^T g_t$ . 
The convexity of $f_t$ implies $f_t(w_t) - f_t(w^*) \leq (w_t - w^*)^T g_t$ , leading to $f_t(w_t) - f_t(w^*) \leq \frac{1}{2\eta} (\| w_t - w^* \|_{X_t^{-1}}^2 - \| w_{t + 1} - w^* \|_{X_t^{-1}}^2 + \eta^2 g_t^T X_t g_t)$ . Summing over all $t \in [T]$ and rearranging reveals the following upper bound on the overall regret:

$$
R_{T} \leq \frac{1}{2 \eta} \| w_{1} - w^{*} \|_{X_{1}^{-1}}^{2} + \frac{\eta}{2} \sum_{t = 1}^{T} g_{t}^{T} X_{t} g_{t} + \frac{1}{2 \eta} \sum_{t = 2}^{T} \left(w_{t} - w^{*}\right)^{T} \left(X_{t}^{-1} - X_{t - 1}^{-1}\right) \left(w_{t} - w^{*}\right). \tag{3}
$$

Since $w^{*}$ is unknown, finding the $X_{t}$ which minimizes (3) is infeasible. So, to minimize regret, we attempt to minimize the second term in (3) while regularizing $X_{t}^{-1}$ to be "close" to $X_{t - 1}^{-1}$ . The nearness measure we choose is the LogDet matrix divergence, leading to the following objective:

$$
X_{t} = \underset{X \in S_{n}^{++}}{\arg \min}\ g_{t}^{T} X g_{t}, \quad \text{such that } \mathrm{D}_{\ell \mathrm{d}}(X, X_{t - 1}) \leq c_{t}, \tag{4}
$$

where $\mathrm{D}_{\ell \mathrm{d}}$ is as in (1). Why do we use the LogDet divergence? From (2), due to the term $\sigma_{i} / \theta_{j}$ , $\mathrm{D}_{\ell \mathrm{d}}(X, X_{t - 1})$ prioritizes matching the smaller eigenvalues of $X_{t - 1}$ with those of $X$ , i.e., matching the larger eigenvalues of $X_{t - 1}^{-1}$ and $X^{-1}$ . As a consequence, the LogDet divergence regularizes $X$ by matching up its large eigenvalues with those of $X_{t - 1}$ . For example, if the smallest and largest eigenvalues of $X_{t - 1}$ are $\theta_{n}$ and $\theta_{1}$ , then for an eigenvalue $\sigma$ of $X$ with $\sigma > \theta_{1} \geq \theta_{n}$ , the penalty in (2) is higher for $\theta_{n}$ than for $\theta_{1}$ : $(\sigma / \theta_{n} - \log(\sigma / \theta_{n}) - 1) > (\sigma / \theta_{1} - \log(\sigma / \theta_{1}) - 1)$ .
This intuition leads us to formulate (4) as our objective. We recall that there is precedence of using the LogDet divergence in the optimization literature; indeed the celebrated BFGS algorithm [9, 17, 22, 44] can be shown to be the unique solution obtained when the LogDet divergence between successive preconditioners, subject to a secant constraint, is minimized (as shown in the 4-page paper by [18]). + +The optimization problem in (4) is convex in $X$ since the LogDet divergence is convex in its first argument. The Lagrangian $\mathcal{L}(X, \lambda_t) = g_t^T X g_t + \lambda_t (\mathrm{D}_{\ell \mathrm{d}}(X, X_{t-1}) - c_t) = \mathrm{Tr}(X g_t g_t^T) + \lambda_t (-\log \det(X X_{t-1}^{-1}) + \mathrm{Tr}(X X_{t-1}^{-1}) - n)) - \lambda_t c_t$ . Setting $\nabla \mathcal{L}(X, \lambda_t) = 0$ , and using the fact that $\nabla \log \det(X) = X^{-1}$ we get the following update rule: + +$$ +X _ {t} ^ {- 1} = X _ {t - 1} ^ {- 1} + g _ {t} g _ {t} ^ {T} / \lambda_ {t}. \tag {5} +$$ + +We emphasize that the update rule (5) arises naturally from our novel use of LogDet divergence to minimize the regret. Moreover, Equation (5) can be seen as a general update rule applicable to numerous existing optimizers. For example, setting $c_{t} = 0$ (equivalently $\lambda_t = \infty$ ) $\forall t \in [n]$ in (4) results in no change to the preconditioner in any round. In this case, with $X_0 = I_n$ , we get online gradient descent [54]. On the other hand, setting $\lambda_{t} = 1$ gives the update rule of the online Newton method [25]. Our update rule differs from (full-matrix) Adagrad [15] which has $X_{t}^{-2} = X_{t - 1}^{-2} + g_{t}g_{t}^{T}$ . + +Maintaining and updating $X_{t}$ as in (5) is possible by using Sherman-Morrison formula but requires $\mathcal{O}(n^2)$ storage and time complexity. This becomes impractical when $n$ is in the order of millions which is typically the case in DNNs. 
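For intuition, update (5) can in principle be applied to $X_t$ directly via the Sherman-Morrison formula. The following pure-Python $2 \times 2$ check (ours, for illustration only; avoiding exactly this dense update motivates the sparsification in Section 3.2) confirms that the two routes agree:

```python
# Sanity check of update (5): if B_t = X_t^{-1} satisfies
# B_t = B_{t-1} + g g^T / lambda, then by Sherman-Morrison
#   X_t = X_{t-1} - (X_{t-1} g)(X_{t-1} g)^T / (lambda + g^T X_{t-1} g).

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def inv2(A):  # 2x2 inverse via the adjugate
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

X_prev = [[2.0, 0.5], [0.5, 1.0]]   # current preconditioner X_{t-1} (PD)
g, lam = [1.0, -2.0], 3.0           # gradient g_t and lambda_t

# Dense route: invert, add the rank-one term, invert back.
B = inv2(X_prev)
B_new = [[B[i][j] + g[i] * g[j] / lam for j in range(2)] for i in range(2)]
X_dense = inv2(B_new)

# Sherman-Morrison route: no inversion of B needed.
Xg = matvec(X_prev, g)
denom = lam + sum(g[i] * Xg[i] for i in range(2))
X_sm = [[X_prev[i][j] - Xg[i] * Xg[j] / denom for j in range(2)]
        for i in range(2)]
```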
+ +# 3.2 Sparsifying the Preconditioner + +To minimize the memory needed for maintaining and updating $X_{t}$ using (5), we adopt the strategy of sparsifying the preconditioner. For existing optimizers such as (full-matrix) Adagrad or the Online Newton method, it is unclear how to sparsify a given preconditioner. Specifically, there is no intuitive approach to assessing the quality of a sparse preconditioner compared to a full-matrix preconditioner. However, since our update rule (5) originates from using LogDet divergence in the regret bound analysis, it gives us a natural metric to measure the quality of a sparse preconditioner. Let's consider the following problem: find a sparse positive definite $X$ with $\| X\|_0 \leq \alpha n$ , $\alpha > 1$ , such that the objective $\mathrm{D}_{\ell \mathrm{d}}(X,(X_{t - 1}^{-1} + g_t g_t^T / \lambda_t)^{-1})$ is minimized. Essentially, this problem imposes a sparsity constraint while requiring the sparse preconditioner to remain close to the full-matrix preconditioner in terms of LogDet divergence. + +Due to the $L_0$ -norm constraint, this is a non-convex problem, which makes it difficult to solve exactly. Since $L_{1}$ -norm serves as a convex relaxation for the $L_0$ norm, we could use it instead, resulting in the following optimization problem also known as graphical lasso estimator [19]: + +$$ +\min _ {X \in S _ {n} ^ {+ +}} \mathrm {D} _ {\ell \mathrm {d}} \left(X, \left(X _ {t - 1} ^ {- 1} + g _ {t} g _ {t} ^ {T} / \lambda_ {t}\right) ^ {- 1}\right) + \gamma \| X \| _ {1}. +$$ + +However, the time taken to solve the above problem, even with the current best methods [7, 29, 16, 53], can still be too large (as these methods take several minutes for a matrix of size million), making it impractical to embed in DNN training. + +In this paper, we take a different direction where we use fixed sparsity pattern constraints, specified by a fixed undirected graph $\mathcal{G}$ . 
To sparsify the solution in (5), we formulate the subproblem + +$$ +X _ {t} = \underset {X \in S _ {n} (\mathcal {G}) ^ {+ +}} {\arg \min } \mathrm {D} _ {\ell \mathrm {d}} \left(X, \left(X _ {t - 1} ^ {- 1} + g _ {t} g _ {t} ^ {T} / \lambda_ {t}\right) ^ {- 1}\right), \tag {6} +$$ + +where $S_{n}(\mathcal{G})^{++}$ denotes the set of positive definite matrices with the fixed sparsity pattern corresponding to the adjacency matrix of graph $\mathcal{G}$ . Note that both steps (4) and (6) use the same LogDet measure. + +Owing to the structure of LogDet divergence, (6) can be surprisingly solved in $\mathcal{O}(n)$ and easily parallelizable, for certain sparsity structures $\mathcal{G}$ . Algorithm 1 and 2 presents an instantiation of the proposed SONew method, which solves (6) using $\mathcal{O}(n)$ time and memory for banded matrices with band size $b$ . In particular a tridiagonal matrix, corresponding to a chain graph, is a banded matrix with bandsize 1. + +Algorithm 1 Sparsified Online Newton (SONew) Algorithm +Inputs: $\lambda_{t}\coloneqq$ coefficient in the update (10), $\mathcal{G}\coloneqq$ sparsity graph (banded/tridiagonal), $\epsilon \coloneqq$ damping parameter, $T\coloneqq$ total number of iterations/mini-batches, $\eta_t\coloneqq$ step size/learning rate. Output: $w_{T + 1}$ +1: $H_0 = \epsilon I_d,w_1 = 0$ +2: for $t\in \{1,\dots ,T\}$ do +3: compute $g_{t} = \nabla f_{t}(w_{t})$ +4: $H_{t}\coloneqq H_{t - 1} + P_{\mathcal{G}}(g_{t}g_{t}^{T} / \lambda_{t})\in S_{n}(\mathcal{G})$ with $P_{\mathcal{G}}$ as in (8). $\triangleright \mathcal{O}(n)$ time & memory +5: Get $L,D =$ SPARSIFIED_INVERSE $(H_t,\mathcal{G})$ where $X_{t} = LDL^{T}$ solves (11). +6: Compute descent direction $u_{t} = LDL^{T}g_{t}$ +7: $w_{t + 1} = w_t - \eta_t u_t$ +8: end for +9: return $w_{T + 1}$ + +Algorithm 2 SPARSIFIED_INVERSE $(H,\mathcal{G})$ in $\mathcal{O}(n)$ flops +Inputs: $H\in S_n(\mathcal{G})$ , is as (10). 
$\mathcal{G}\coloneqq$ the banded graph of band size $b \ll n$. Outputs: lower triangular banded $L \in \mathbb{R}^{n \times n}$ and diagonal matrix $D \in \mathbb{R}^{n \times n}$
1: function SPARSIFIED_INVERSE(H, $\mathcal{G}$)
2: $L \coloneqq 0, D \coloneqq 0$
3: $L_{jj} \coloneqq 1, \forall j \in [n]$
4: for $j \in \{1, \dots, n\}$ do ▷ parallelizable
5: Let $H_{j I_j}$ and $H_{I_j I_j}$ be defined as in Section 2, where $I_{j} = \{j + 1, \ldots, j + b\} \cap [n]$
6: Solve for $L_{I_j j}$ in the linear system $H_{I_j I_j} L_{I_j j} = -H_{I_j j}$ ▷ $\mathcal{O}(b^3)$ time
7: $D_{jj} \coloneqq 1 / (H_{jj} + H_{I_j j}^T L_{I_j j})$
8: end for
9: return $L, D$
10: end function

Maintaining $H_{t} \in S_{n}(\mathcal{G})$ in line 4. Solving the subproblem in (6) naively is impractical since $X_{t-1}^{-1}$ is a dense matrix. However, the structure of the LogDet divergence comes to the rescue; the optimization problem in (6) can be expanded as follows:

$$
\underset{X \in S_{n}(\mathcal{G})^{++}}{\arg \min} - \log \det(X) + \operatorname{Tr}\left(X \left(X_{t - 1}^{-1} + g_{t} g_{t}^{T} / \lambda_{t}\right)\right). \tag{7}
$$

Let us define the projection onto $S_{n}(\mathcal{G})$ , $P_{\mathcal{G}} : \mathbb{R}^{n \times n} \to \mathbb{R}^{n \times n}$ , as:

$$
P_{\mathcal{G}}(M)_{ij} = \left\{ \begin{array}{ll} M_{ij} & \text{if } (i, j) \in E_{\mathcal{G}}, \\ 0 & \text{otherwise.} \end{array} \right. \tag{8}
$$

Note that the $\operatorname{Tr}(\cdot)$ term in (7) depends only on the non-zero elements of $X \in S_n(\mathcal{G})^{++}$ , since $\operatorname{Tr}(AB) = \langle A, B \rangle$ for symmetric matrices $A$ and $B$ .
Hence, (7) can be written as + +$$ +\underset {X \in S _ {n} (\mathcal {G}) ^ {+ +}} {\arg \min } - \log \det (X) + \langle X, P _ {\mathcal {G}} \left(X _ {t - 1} ^ {- 1} + g _ {t} g _ {t} ^ {T} / \lambda_ {t}\right) \rangle , \tag {9} +$$ + +Computing the entire matrix $X_{t - 1}^{-1}$ can be avoided by analyzing the optimality condition of (9). Let $g(X) = -\log \det(X) + \langle X, P_{\mathcal{G}}(X_{t - 1}^{-1} + g_t g_t^T / \lambda_t) \rangle$ denote the objective function in (9), then the optimality condition of (9) is $P_{\mathcal{G}}(\nabla g(X)) = P_{\mathcal{G}}(\nabla (-\log \det(X) + \langle X, P_{\mathcal{G}}(X_{t - 1}^{-1} + g_t g_t^T / \lambda_t) \rangle) = 0,$ since gradients with respective nonzero entries of $X$ should be zero, $\frac{\partial g(X)}{\partial X_{i,j}} = (\nabla_X(g(X)))_{i,j} = 0,\forall (i,j) \in E_{\mathcal{G}}$ . Using $\nabla(-\log \det(X)) = -X^{-1}, \nabla_X(\langle X,Y \rangle) = Y$ , and setting $X = X_t$ gives: + +$$ +P _ {\mathcal {G}} (X _ {t} ^ {- 1}) - P _ {\mathcal {G}} (X _ {t - 1} ^ {- 1} + g _ {t} g _ {t} ^ {T} / \lambda_ {t}) = 0, +$$ + +$$ +H _ {t} = H _ {t - 1} + P _ {\mathcal {G}} \left(g _ {t} g _ {t} ^ {T} / \lambda_ {t}\right), \quad \text {w h e r e} H _ {t} = P _ {\mathcal {G}} \left(X _ {t} ^ {- 1}\right) \tag {10} +$$ + +Thus we only need to maintain $H_{t} = P_{\mathcal{G}}(X_{t}^{-1})$ . This matrix is updated as $H_{t} = H_{t - 1} + P_{\mathcal{G}}(g_{t}g_{t}^{T} / \lambda_{t})$ . Since $H_{t}\in S_{n}(\mathcal{G})$ , the update can be done in $\mathcal{O}(|E_{\mathcal{G}}|)$ memory and time, while computing the matrix $X_{t}^{-1}$ would have cost $\mathcal{O}(n^2)$ . In SONew (Algorithm 1), this key observation is used to maintain $H_{t}$ in line 4. + +Computing $X_{t}$ in line 5. 
Now that $H_{t}$ is known at every round $t$ , we can replace $P_{\mathcal{G}}(X_{t - 1}^{-1} + g_t g_t^T / \lambda_t)$ in (9) with $H_{t}$ as:

$$
X_{t} = \underset{X \in S_{n}(\mathcal{G})^{++}}{\arg \min} - \log \det(X) + \operatorname{Tr}\left(X H_{t}\right). \tag{11}
$$

For an arbitrary graph $\mathcal{G}$ , solving (11) might be difficult. Theorems 3.1 and 3.2 give embarrassingly parallelizable explicit solutions to the subproblem (11) for tridiagonal and banded sparsity patterns.

Theorem 3.1 (Explicit solution of (11) for tridiagonal structures/chain graph). Let the sparsity structure $\mathcal{G}$ be a chain with edges $E_{\mathcal{G}} = \{(i,j) : |i - j| \leq 1, 1 \leq i, j \leq n\}$ . Also, let $H \in S_n(\mathcal{G})$ be such that any submatrix of $H$ corresponding to a complete subgraph of $\mathcal{G}$ is positive definite. Then the solution of (11) is given by $\hat{X} = LDL^{T}$ , where the unit lower bidiagonal matrix $L$ and diagonal matrix $D$ have the following non-zero entries:

$$
L_{jj} = 1, \quad L_{j+1\,j} = -\frac{H_{j+1\,j}}{H_{j+1\,j+1}}, \quad D_{jj}^{-1} = H_{jj} - \frac{H_{j+1\,j}^{2}}{H_{j+1\,j+1}}, \; j \leq n - 1, \quad D_{nn}^{-1} = H_{nn}. \tag{12}
$$

Computing this explicit solution involves parallelizable operations on $2 \times 2$ principal submatrices of the tridiagonal matrix $H$ to find $\hat{X}$ , as shown in the following $3 \times 3$ example:

$$
H = \begin{pmatrix} H_{11} & H_{12} & 0 \\ H_{21} & H_{22} & H_{23} \\ 0 & H_{32} & H_{33} \end{pmatrix} = \begin{pmatrix} \tilde{H}_{11} & \tilde{H}_{12} & 0 \\ \tilde{H}_{21} & \tilde{H}_{22} & \tilde{H}_{23} \\ 0 & \tilde{H}_{32} & \tilde{H}_{33} \end{pmatrix} + \begin{pmatrix} g_{1}^{2} & g_{1} g_{2} & 0 \\ g_{1} g_{2} & g_{2}^{2} & g_{2} g_{3} \\ 0 & g_{2} g_{3} & g_{3}^{2} \end{pmatrix} \tag{13}
$$

$$
\rightarrow \hat{X} = \begin{pmatrix} 1 & 0 & 0 \\ -\frac{H_{21}}{H_{22}} & 1 & 0 \\ 0 & -\frac{H_{32}}{H_{33}} & 1 \end{pmatrix} \begin{pmatrix} \left(H_{11} - \frac{H_{21}^{2}}{H_{22}}\right)^{-1} & 0 & 0 \\ 0 & \left(H_{22} - \frac{H_{23}^{2}}{H_{33}}\right)^{-1} & 0 \\ 0 & 0 & H_{33}^{-1} \end{pmatrix} \begin{pmatrix} 1 & -\frac{H_{21}}{H_{22}} & 0 \\ 0 & 1 & -\frac{H_{32}}{H_{33}} \\ 0 & 0 & 1 \end{pmatrix}
$$

(the diagonal factor is $D$ , whose entries are the reciprocals of the $D_{jj}^{-1}$ values in (12)). Conducting these operations takes $\mathcal{O}(n)$ time and memory; the descent direction can likewise be found as $X_{t} g_{t} = L(D(L^{T} g_{t}))$ in $\mathcal{O}(n)$ time, thanks to the unit lower bidiagonal structure of $L$ , and all of these operations are easily parallelized. We also generalize the explicit solution to banded sparsity structures with band size $b$ .

Theorem 3.2 (Explicit solution of (11) for banded structures). Let the sparsity pattern $\mathcal{G}$ be a banded matrix of band size $b$ , i.e. $E_{\mathcal{G}} = \{(i,j) : |i - j| \leq b, 1 \leq i, j \leq n\}$ . For every vertex $j$ , let $I_{j} = \{j + 1, \dots, j + b\} \cap [n]$ . Then $X_{t} = LDL^{T}$ is the solution of (11) with the nonzero entries of $L$ and $D$ defined as follows:

$$
L_{jj} = 1, \quad L_{I_{j} j} = -H_{I_{j} I_{j}}^{-1} H_{I_{j} j}, \quad D_{jj}^{-1} = H_{jj} - H_{I_{j} j}^{T} H_{I_{j} I_{j}}^{-1} H_{I_{j} j}, \quad 1 \leq j \leq n, \tag{14}
$$

where $H \in S_n(\mathcal{G})$ is such that any submatrix of $H$ corresponding to a complete subgraph of $\mathcal{G}$ is positive definite.
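A pure-Python sanity check of (12) on a $3 \times 3$ example (our sketch, not the authors' implementation): build $L$ and $D$ from a tridiagonal $H$ , form $\hat{X} = LDL^{T}$ , and verify the optimality condition (10), i.e. that the tridiagonal entries of $\hat{X}^{-1}$ reproduce $H$ :

```python
# Off-pattern entries of X^{-1} (e.g. position (1,3)) need not match H,
# which is zero there; only entries on the chain-graph pattern must agree.

H = [[2.0, 0.5, 0.0],
     [0.5, 2.0, 0.5],
     [0.0, 0.5, 2.0]]
n = 3

# Equation (12): unit lower-bidiagonal L and diagonal D (stored via D^{-1}).
L = [[float(i == j) for j in range(n)] for i in range(n)]
Dinv = [0.0] * n
for j in range(n - 1):
    L[j + 1][j] = -H[j + 1][j] / H[j + 1][j + 1]
    Dinv[j] = H[j][j] - H[j + 1][j] ** 2 / H[j + 1][j + 1]
Dinv[n - 1] = H[n - 1][n - 1]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

D = [[1.0 / Dinv[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
Lt = [[L[j][i] for j in range(n)] for i in range(n)]
X = matmul(matmul(L, D), Lt)           # the sparsified preconditioner

def inv3(A):
    # 3x3 inverse via the adjugate.
    (a, b, c), (d, e, f), (g, h, i) = A
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

Xinv = inv3(X)                          # P_G(Xinv) should equal H
```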
Computing the above solution requires solving $n$ linear systems of size $b$ (which is small), as shown in Algorithm 2, and takes $\mathcal{O}((n - b + 1)b^3)$ flops. Since $b \ll n$, the number of flops is $\mathcal{O}(n)$.

# 3.3 Regret bound analysis of SONew

The following theorem establishes an optimal regret guarantee [26] for SONew in the online convex optimization framework of Section 3.1.

Theorem 3.3. Let $\mathcal{G}$ be the tridiagonal/chain graph defined in Theorem 3.1. Then setting $\epsilon = \hat{\epsilon} G_{\infty}\sqrt{T}$, $\lambda_t = G_{\infty}\sqrt{t}$, and $\eta_t = \frac{D_2}{\hat{\epsilon}\sqrt{n}}$ in Algorithm 1, where $\| w_t - w^*\| _2\leq D_2$ and $\| g_t\|_{\infty}\leq G_{\infty}$, incurs a regret $R_{T} = \mathcal{O}(\sqrt{n}\, G_{\infty}D_{2}\sqrt{T})$.

The proof sketch involves deriving an explicit expression for the entries of $X_{t}^{-1}$ in Lemma A.2, in order to upper bound the term $(w_{t} - w^{*})^{T}(X_{t}^{-1} - X_{t - 1}^{-1})(w_{t} - w^{*})$ in the regret upper bound (3). Upper bounding $\frac{\eta}{2}\sum_{t = 1}^{T}g_{t}^{T}X_{t}g_{t}$ uses the Loewner order $X_{t}\preceq \| X_{t}\|_{2}I_{n}\preceq \| X_{t}\|_{\infty}I_{n}$. A detailed proof sketch and proof are given in Appendix A.2. We note that although the regret bound presented here is for convex losses, there are connections to non-convex convergence guarantees via OCO (online convex optimization) learners, presented in Appendix A.2.5. While our main focus is on deep neural network training, which is typically non-convex, we also conducted convex experiments in Table 9.
| Optimizer | Time complexity | Memory complexity |
| --- | --- | --- |
| Adam | $\mathcal{O}(d_1 d_2)$ | $d_1 d_2$ |
| rfdSON($m$) | $\mathcal{O}(m^2 d_1 d_2)$ | $m d_1 d_2$ |
| Shampoo | $\mathcal{O}(d_1^3 + d_2^3)$ | $d_1^2 + d_2^2$ |
| tridiag-SONew | $\mathcal{O}(d_1 d_2)$ | $2 d_1 d_2$ |
| band-4-SONew | $\mathcal{O}(d_1 d_2)$ | $5 d_1 d_2$ |
Table 1: Consider preconditioning a $d_{1} \times d_{2}$ parameter matrix. The time complexities of tridiag and banded SONew scale linearly with the number of parameters, whereas Shampoo's is cubic in the dimensions of the matrix. The memory used by tridiag-SONew to store second moments of gradients can be significantly lower than Shampoo's; e.g., if $d_{1} = 4d_{2}$, Shampoo takes more than $2\times$ the memory.

# 3.4 Numerical Stability of SONew

In Theorems 3.1 and 3.2, as mentioned, any submatrix of $H_{t}$ corresponding to a complete subgraph of $\mathcal{G}$ should be positive definite. In practice, however, due to finite precision, each entry of $H$ is inherently perturbed with an error proportional to $\mathcal{O}(\epsilon_{mach})$, where $\epsilon_{mach}$ is machine epsilon [27]. We notice in practice that the subtraction $D_{jj}^{-1} = S_{jj} = H_{jj} - H_{j+1\,j}^{2} / H_{j+1\,j+1}$ (line 7 of Algorithm 2), which has condition number $\kappa_{sub} = |H_{jj}| / |S_{jj}|$, can be ill-conditioned, since $S_{jj}$ can be arbitrarily small when the submatrix $\begin{bmatrix} H_{jj} & H_{j\,j+1} \\ H_{j+1\,j} & H_{j+1\,j+1} \end{bmatrix}$ is near singular. Thus a small perturbation in $H$ can lead to large perturbations in the preconditioner $\hat{X}$. We formalize this notion by deriving an end-to-end componentwise condition number (pg. 135, problem 7.11 in [27]) of SPARSIFIED_INVERSE in Theorem A.10, Appendix A.3. To reduce this condition number upper bound and be robust to perturbations in $H_{t}$ caused by finite precision, for a tridiagonal graph $\mathcal{G}$ we can remove the edges $(j,j+1)$ for which $S_{jj} < \gamma$, where $\gamma \geq 0$ is a tolerance parameter. We show in Theorem A.11, Appendix A.3 that this reduces the condition number upper bound of SPARSIFIED_INVERSE. Furthermore, we generalize this to banded sparsity patterns in Algorithm 3, Appendix A.3.
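The tolerance-based edge removal described above can be sketched as a small modification of the tridiagonal solver. This is an illustrative sketch of the idea behind Algorithm 3, not the paper's exact pseudocode; the function name and the `gamma` default are ours.

```python
import numpy as np

def stable_tridiag_inverse(H, gamma=1e-6):
    """Tridiagonal SPARSIFIED_INVERSE with tolerance-based edge removal.
    When the Schur complement S_jj = H_jj - H_{j+1,j}^2 / H_{j+1,j+1} falls
    below gamma, the edge (j, j+1) is dropped (L_{j+1,j} = 0, D_jj^{-1} = H_jj),
    which keeps kappa_sub = |H_jj| / |S_jj| bounded."""
    n = H.shape[0]
    L = np.eye(n)
    d_inv = H.diagonal().astype(float).copy()
    for j in range(n - 1):
        s = H[j, j] - H[j + 1, j] ** 2 / H[j + 1, j + 1]
        if s >= gamma:  # keep edge (j, j+1) only when S_jj is well separated from 0
            L[j + 1, j] = -H[j + 1, j] / H[j + 1, j + 1]
            d_inv[j] = s
    return L, 1.0 / d_inv  # X = L @ diag(d) @ L.T
```

On a near-singular $2 \times 2$ block the edge is dropped and the preconditioner falls back to the diagonal entries, avoiding the catastrophic cancellation in $S_{jj}$.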
# 4 Related Work

The online Newton method is a second-order method in the online convex optimization framework with properties such as scale invariance [35] and logarithmic regret for exp-concave and strongly convex functions [25, 26]. However, it has a time complexity of $\mathcal{O}(n^2)$, making it infeasible for large $n$. The introduction of the LogDet divergence in SONew allows us to use different sparsity graphs $\mathcal{G}$, such as the banded graph with band size $b$, for which our preconditioning process is more computationally efficient, with a time complexity of $\mathcal{O}(b^3 (n - b + 1))$ compared to the online Newton method's $\mathcal{O}(n^2)$.

Shampoo [24, 5] approximates the full gradient statistics matrix using Kronecker-factored preconditioners to reduce the memory and time complexity from $\mathcal{O}(n^2)$ to $\mathcal{O}(d_1^2 + d_2^2)$ and $\mathcal{O}(d_1^3 + d_2^3)$ respectively. Here, $n = d_1 d_2$ denotes the number of parameters of a linear layer of dimensions $d_1 \times d_2$. The time complexity of matrix inversion takes a heavy toll on Shampoo's compute time even with the Kronecker product assumption on the preconditioner, whereas our method has a time complexity of $\mathcal{O}(b^3 d_1 d_2)$, linear in the number of parameters of the layer (note that $b = 1$ for the tridiagonal structure).

KFAC [38], similar to Shampoo, uses Kronecker-factored preconditioning, but to approximate the Fisher information matrix. FishLeg [20] instead approximates the inverse Fisher matrix directly by expressing it as the solution to an optimization problem. Both methods have memory and time complexity similar to Shampoo's. In this work, we compare with Shampoo among the class of Kronecker-factored optimizers due to its widespread testing and adoption within the community [46].
We also point readers to Eva [52], a concurrent work aimed at devising a memory-efficient optimizer by maintaining rank-one approximations to the Kronecker factors of the KFAC matrices. For completeness, we include a comparison with KFAC, FishLeg, and Eva on the Autoencoder benchmark.

There is prior work [35, 36] on reducing the $\mathcal{O}(n^2)$ flops of the Online Newton Step (ONS) to $\mathcal{O}(n)$ flops using sketching. These ONS variants maintain a low-rank approximation of $H_{t}$ (as in Algorithm 1), and updating it with a new gradient $g_{t}$ at every iteration requires an SVD [36] or orthonormalization [35] of a tall and thin matrix in $\mathbb{R}^{n\times r}$, where $r$ denotes the rank of the approximation of $H_{t}$. In Section 5, we conduct large-scale experiments and compare SONew against rfdSON [37], as it is more stable than Oja-SON [35].

Table 2: float32 experiments on the Autoencoder benchmark. We observe that diag-SONew performs the best among all first-order methods while taking similar time. tridiag- and band-4-SONew perform significantly better than first-order methods while requiring similar linear space and time. Shampoo performs best but takes $\mathcal{O}(d_1^3 + d_2^3)$ time to compute the preconditioner of a linear layer of size $d_1 \times d_2$, whereas our methods take $\mathcal{O}(d_1 d_2)$ time, as mentioned in Section 3.3. rfdSON takes similar space as SONew but performs considerably worse.
| Metric | Adagrad | RMSProp | Adam | diag-SONew | Shampoo(20) | rfdSON(1) | rfdSON(4) | tridiag-SONew | band-4-SONew |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Train CE loss | 54.393 | 53.330 | 53.591 | 53.025 | 50.702 | 56.21 | 55.55 | 51.723 | 51.357 |
| Time (s) | 62 | 62 | 62 | 63 | 371 | 85 | 300 | 70 | 260 |

Adagrad, RMSProp, Adam, and diag-SONew are first-order methods; the remaining columns are second-order methods.
The LogDet problem in (11) is closely related to Maximum Determinant Matrix Completion (MDMC) [4, 49]. The MDMC problem is the dual of the LogDet problem (11) and has explicit solutions for chordal graphs [4]; thus the explicit solutions in (14) are the same as the ones proved in [4]. We also note that the tridiagonal explicit solution has been used previously in KFAC [38] in the context of a Gaussian graphical model interpretation of gradients; specifically, KFAC used a block-tridiagonal preconditioner to incorporate correlation between consecutive layers.

# 5 Experimental Results

We describe our experiments on the standard Autoencoder benchmark [42] trained on the MNIST dataset [12], a Vision Transformer [13] trained on ImageNet, a GraphNetwork [6, 21] on the OGBG-molpcba dataset [30], and a Large Language Model [47]. For all second-order optimizers, we use grafting [2], a technique used to transfer step size between optimization algorithms. Specifically, given an update $v_{1}$ of Optimizer-1 and $v_{2}$ of Optimizer-2, grafting allows us to use the direction suggested by Optimizer-2 with the step size suggested by Optimizer-1. The final update is given by $\frac{\|v_1\|}{\|v_2\|} \cdot v_2$. Grafting has been shown to take advantage of a tuned optimizer step size and improve performance. For SONew and rfdSON, we use Adam grafting: the Adam optimizer's step size $\left\| v_{1}\right\|$ with the SONew/rfdSON direction $v_{2} / \| v_{2}\|$. For Shampoo, we use its default RMSProp grafting. We could not find an official rfdSON implementation, so we use our own, with which we reproduced the numbers on convex losses (Appendix A.4) reported in their paper [37].
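The grafting rule above is a one-liner; a minimal sketch follows, where the function name and the `eps` guard against a zero direction are our own additions, not from the paper.

```python
import numpy as np

def graft(v1, v2, eps=1e-12):
    """Grafting [2]: use Optimizer-2's direction with Optimizer-1's step size.
    Returns (||v1|| / ||v2||) * v2; eps is an implementation detail of this
    sketch that guards against a zero direction."""
    return (np.linalg.norm(v1) / (np.linalg.norm(v2) + eps)) * v2
```

For Adam grafting of SONew, $v_1$ would be the Adam update and $v_2$ the SONew update; the grafted step inherits Adam's magnitude but points along the SONew direction.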
# 5.1 Autoencoder benchmark

Setup: We use three sparsity patterns for SONew: a) diagonal sparsity, resulting in a diagonal preconditioner similar to adaptive first-order methods like Adam and Adagrad; b) tridiagonal sparsity, corresponding to a chain graph; and c) banded sparsity, denoted "band-$k$" in tables and figures for band size $k$. We compare SONew against widely used first-order methods, including SGD [32], SGD with Momentum [41], Nesterov [40], Adagrad [14], Adam [33], and RMSProp [48]. We also compare with rfdSON [37], a recently proposed memory-efficient second-order optimizer, and with Shampoo [24], a state-of-the-art second-order optimizer used in practice, albeit with considerable memory and time requirements. Because of space constraints, we report only the best-performing first-order methods and include the entire set in the appendix. As previously mentioned, rfdSON maintains a low-rank approximation of the online Newton method's statistics matrix $\sum_{i} g_{i} g_{i}^{T}$. We observed that rfdSON with Adam grafting always performed better than without grafting, and hence report the corresponding numbers. We evaluate rfdSON with a rank-$m$ approximation, denoted rfdSON($m$), which requires $(m + 1) \times \#\text{params}$ space when using grafting. For a fair comparison with tridiag-SONew and band-4-SONew, we test rfdSON with $m = 1$ and $m = 4$, respectively. For Shampoo, computing the preconditioner at every step could be infeasible; instead it is computed every $t$ steps, referred to as Shampoo($t$). Section 3.3 compares the time and memory complexities of rfdSON, Shampoo, tridiag-SONew, and band-4-SONew. Note that $d_{1}^{2} + d_{2}^{2} \geq 2d_{1}d_{2}$ for all $d_{1}, d_{2}$; thus the memory used by tridiag-SONew is never more than Shampoo's. We use a 2.72M-parameter Autoencoder, and each experiment is performed on one V100 GPU with 16 GB memory. Further setup details are given in Appendix A.4.
Results: In Table 2 we observe that among first-order methods, diag-SONew performs the best while taking the same amount of time. Increasing the number of edges in the sparsity graph to tridiagonal or banded sparsity with band size 4 enhances the performance further. Tridiag-SONew runs $5 \times$ faster than Shampoo at a marginal cost to the loss, even when Shampoo updates its preconditioner only once every 20 steps. Using the same space, rfdSON performs considerably worse than SONew. To test the numerical stability and robustness of SONew, we reduce the precision to bfloat16 and conduct similar experiments in Appendix A.4.4 (Table 8). We notice that SONew undergoes the least degradation in performance compared to all other optimizers. We refer the reader to Appendix A.4.4 for a thorough comparison and study of the bfloat16 experiments. In Figure 2 we plot the loss curves of all the baselines and SONew for the float32 experiments. Moreover, in Appendix A.4.1, Table 4, we provide an ablation on the performance of SONew with varying batch sizes.

![](images/febbea4d9db6fa990fb60495b436542091a8be2af862d6ad8470853d94613437.jpg)
(a) ViT validation error

![](images/e2cdb4a5c311a48be26b7af5c88db1cfe21526169f7cf00f138551ba8400d7fa.jpg)
(b) GraphNetwork validation average precision

Figure 1: Best validation error runs for tridiag-SONew vs Momentum, RMSProp, Adam, rfdSON, and Shampoo on (a) the ViT benchmark and (b) the GraphNetwork benchmark. We notice that tridiag-SONew achieves the same performance as Adam, the next best baseline using similar space and time, with $10\%$ and $30\%$ fewer steps (and less time) on ViT and GraphNetwork respectively. Using the same number of steps, SONew achieves relatively $0.7\%$ and $\sim 3.4\%$ better validation metrics respectively. Shampoo does not fit in the 16 GB memory of a TPU v2 for the ViT benchmark, hence we could not perform hyperparameter tuning on it. On GraphNetwork, compared to Shampoo, tridiag-SONew gives similar performance while being far more memory efficient (refer to Appendix A.4.2 for more details).

![](images/599b03a83859483c75f63e71dfcd33a93f1b1dc09a75d5d858cb37023d127e7c.jpg)
Figure 2: Comparison of SONew with first-order optimizers, rfdSON, and Shampoo on the Autoencoder benchmark in float32 training. We observe that SONew performs better than all first-order methods, and better than second-order methods using the same memory.

![](images/5b829778791183e8fa63e3da54fe7670c24f5d93d582d0c5a6eedc01124d4892.jpg)
Figure 3: Comparison of SONew and AdaFactor on LLM training. SONew takes $26\%$ fewer steps to reach the same performance as AdaFactor. Using the same number of steps, it achieves $\sim 1.7\%$ relatively better log perplexity.

Comparison with other baselines: We further compare SONew with KFAC [38], FishLeg [20], and Eva [52] for completeness. Since these methods lack a JAX implementation, we adopted the authors' official PyTorch implementations. When we attempted to integrate their code with our Autoencoder benchmark, the results were unsatisfactory; for instance, FishLeg recorded a loss of approximately $60.0$. This was notably unexpected, as it underperformed Adam, a baseline that the authors themselves compared against. Given these results, and to minimize modifications to the official code, we decided to test our optimizer, SONew, directly on their provided autoencoder architecture. We present the results in Appendix A.4.4 and observe that SONew outperforms these baselines as well, by a large margin.

# 5.2 ViT and GraphNetwork benchmark

Setup: We compare tridiag-SONew with Momentum, RMSProp, and Adam on the ViT ($\sim 22$M parameters) and GraphNetwork ($\sim 3.5$M parameters) benchmarks. For each experiment, we search over 200 hyperparameters using four 16 GB TPUs (v2) for each run. In order to conduct a fair comparison of running times, we executed the optimal hyperparameter configurations on four 32 GB TPUs (v4) [31].
This is because certain operations, including reshapes and transposes, are not optimized on TPUs (v2). Consequently, methods like rfdSON, Shampoo, or SONew, which utilize these operations, could be disadvantaged if TPUs (v2) were used, skewing the comparative results. All memory-efficient methods, including rfdSON, first-order methods, and SONew, exhibit similar runtimes, with differences of approximately $5\%$. For ViT, we evaluate performance based on the same number of steps, as this also effectively compares wall-clock time. However, for GraphNetwork, we train Shampoo for $20\%$ fewer steps to achieve a comparable wall-clock time.

Results: We plot the runs that give the best validation error rate (for ViT) or validation average precision (for GraphNetwork) in Figure 1. tridiag-SONew requires $\sim 10\%$ fewer steps to reach the same performance as Adam on ViT, and $\sim 30\%$ fewer steps on the GraphNetwork benchmark. Training for the same number of steps, we get $\sim 0.7\%$ relatively better validation error on ViT and $\sim 3.4\%$ relatively better validation average precision on GraphNetwork. On the GraphNetwork benchmark, tridiag-SONew performs $1.3\%$ relatively worse in average precision compared to Shampoo, while being $1.25\times$ faster. On the ViT benchmark, Shampoo does not fit in a 16 GB TPU v2: its statistics require 155M entries ($\sim 7\times$ #params) while tridiag-SONew requires only 44M entries ($2\times$ #params). Hence, we could not tune it. rfdSON takes the same memory but slightly more time because of its SVD computations. We also notice that rfdSON performs worse than Adam on both benchmarks; we leave a thorough investigation of this behavior as future work.

We show in Appendix A.4 that, corresponding to the best validation runs, tridiag-SONew's training loss is also lower than Adam's.
Furthermore, from an optimization point of view, we also show in Appendix A.4 that among all 200 hyperparameter sweeps, the best training loss of tridiag-SONew is $9\%$ relatively better on ViT and $80\%$ relatively better on GraphNN than that of Adam. We further compare Adam and tridiag-SONew on a 248M-parameter Transformer model in Appendix A.4.4. In the next subsection, we present results on a decoder-only large-scale language model.

# 5.3 Experiments on Language Models

Setup: Owing to SONew's scalability, we test it on a Large Language Model (LLM) [47] with 1 billion parameters. We compare SONew with AdaFactor (without factoring), a commonly used first-order optimizer for training LLMs [51, 11]. AdaFactor is similar to Adam, except that it additionally offers benefits like "parameter scaling", which has the effect of layerwise damping of the learning rate; we refer the reader to [45] for more details. We trained the LLM for 5B tokens with a batch size of $64$k tokens. All experiments were performed on 16 TPU v4s. To support efficient training of large models, we implemented a sharded tridiag-SONew following a model-parallelism approach.

Results: We report the experiment in Figure 3, where we find that SONew beats AdaFactor by a large margin. Specifically, SONew achieves the same log perplexity using $26\%$ fewer steps. Moreover, using the same number of tokens, SONew achieves $1.7\%$ relatively better train loss, leading to a $1.35 \times$ speedup. This shows the potential of SONew as a scalable optimizer that can be used to train large models while using second-order information.

# 6 Conclusions and Future Work

In this paper we have introduced a novel Sparsified Online Newton (SONew) method that yields a computationally efficient sparse preconditioner that can effectively train very large DNNs.
The time and memory complexity of SONew is linear in the number of parameters, unlike current Kronecker-factorization-based second-order methods for training deep networks. Our experimental results show that SONew takes similar time as first-order methods while achieving much better validation and training performance on various benchmarks. In the future, we plan to explore different sparsity graphs for which efficient solutions exist for the LogDet subproblem (11) and to develop corresponding regret bound analyses. Limitations of SONew include: 1) explicit solutions akin to Theorems 3.1 and 3.2 need not exist for all sparsity graphs $\mathcal{G}$; 2) not all graphs allow for an efficient optimizer implementation; 3) among graphs permitting efficient implementation, such as tridiagonal sparsity, the ordering of parameters remains unexplored: an alternative ordering might position closely related parameters adjacently, potentially enhancing performance; 4) a comprehensive exploration of methods to scale final updates is needed; while we employ grafting [2], other techniques, such as clipping [45, 38], merit investigation.

# References

[1] N. Agarwal, B. Bullins, X. Chen, E. Hazan, K. Singh, C. Zhang, and Y. Zhang. Efficient full-matrix adaptive regularization. In International Conference on Machine Learning, pages 102-110. PMLR, 2019.
[2] N. Agarwal, R. Anil, E. Hazan, T. Koren, and C. Zhang. Learning rate grafting: Transferability of optimizer tuning, 2022.
[3] S.-I. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
[4] M. S. Andersen, J. Dahl, and L. Vandenberghe. Logarithmic barriers for sparse matrix cones. Optimization Methods and Software, 28(3):396-423, 2013.
[5] R. Anil, V. Gupta, T. Koren, K. Regan, and Y. Singer. Scalable second order optimization for deep learning. arXiv preprint arXiv:2002.09018, 2020.
[6] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M.
Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, C. Gulcehre, F. Song, A. Ballard, J. Gilmer, G. Dahl, A. Vaswani, K. Allen, C. Nash, V. Langston, C. Dyer, N. Heess, D. Wierstra, P. Kohli, M. Botvinick, O. Vinyals, Y. Li, and R. Pascanu. Relational inductive biases, deep learning, and graph networks, 2018. URL https://arxiv.org/abs/1806.01261.
[7] M. Bollhöfer, A. Eftekhari, S. Scheidegger, and O. Schenk. Large-scale sparse inverse covariance matrix estimation. SIAM Journal on Scientific Computing, 41(1):A380-A401, 2019.
[8] L. M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200-217, 1967.
[9] C. G. Broyden. Quasi-Newton methods and their application to function minimisation. Mathematics of Computation, 21(99):368-381, 1967.
[10] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. Advances in Neural Information Processing Systems, 14, 2001.
[11] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel. PaLM: Scaling language modeling with pathways, 2022.
[12] L. Deng. The MNIST database of handwritten digit images for machine learning research.
IEEE Signal Processing Magazine, 29(6):141-142, 2012.
[13] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020. URL https://arxiv.org/abs/2010.11929.
[14] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(61):2121-2159, 2011. URL http://jmlr.org/papers/v12/duchi11a.html.
[15] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(7), 2011.
[16] S. Fattahi and S. Sojoudi. Graphical lasso and thresholding: Equivalence and closed-form solutions. Journal of Machine Learning Research, 2019.
[17] R. Fletcher. A new approach to variable metric algorithms. The Computer Journal, 13(3):317-322, 1970.
[18] R. Fletcher. A new variational result for quasi-Newton formulae. SIAM Journal on Optimization, 1(1):18-21, 1991.
[19] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.
[20] J. R. Garcia, F. Freddi, S. Fotiadis, M. Li, S. Vakili, A. Bernacchia, and G. Hennequin. Fisher-Legendre (FishLeg) optimization of deep neural networks. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=c91AOPvQHS.
[21] J. Godwin*, T. Keck*, P. Battaglia, V. Bapst, T. Kipf, Y. Li, K. Stachenfeld, P. Velickovic, and A. Sanchez-Gonzalez. Jraph: A library for graph neural networks in JAX, 2020. URL http://github.com/deepmind/jraph.
[22] D. Goldfarb. A family of variable-metric methods derived by variational means. Mathematics of Computation, 24(109):23-26, 1970.
[23] D. Goldfarb, Y. Ren, and A. Bahamou.
Practical quasi-Newton methods for training deep neural networks. Advances in Neural Information Processing Systems, 33:2386-2396, 2020.
[24] V. Gupta, T. Koren, and Y. Singer. Shampoo: Preconditioned stochastic tensor optimization. In International Conference on Machine Learning, pages 1842-1850. PMLR, 2018.
[25] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2):169-192, 2007.
[26] E. Hazan et al. Introduction to online convex optimization. Foundations and Trends® in Optimization, 2(3-4):157-325, 2016.
[27] N. J. Higham. Accuracy and stability of numerical algorithms. SIAM, 2002.
[28] G. Hinton, N. Srivastava, and K. Swersky. Neural networks for machine learning, lecture 6a: overview of mini-batch gradient descent. Cited on, 14(8):2, 2012.
[29] C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, P. K. Ravikumar, and R. Poldrack. BIG & QUIC: Sparse inverse covariance estimation for a million variables. Advances in Neural Information Processing Systems, 26, 2013.
[30] W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec. Open Graph Benchmark: Datasets for machine learning on graphs, 2020. URL https://arxiv.org/abs/2005.00687.
[31] N. P. Jouppi, G. Kurian, S. Li, P. Ma, R. Nagarajan, L. Nai, N. Patil, S. Subramanian, A. Swing, B. Towles, C. Young, X. Zhou, Z. Zhou, and D. Patterson. TPU v4: An optically reconfigurable supercomputer for machine learning with hardware support for embeddings, 2023.
[32] J. Kiefer and J. Wolfowitz. Stochastic estimation of the maximum of a regression function. The Annals of Mathematical Statistics, 23(3):462-466, 1952. doi: 10.1214/aoms/1177729392. URL https://doi.org/10.1214/aoms/1177729392.
[33] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[34] B. Kulis, M. A. Sustik, and I. S. Dhillon. Low-rank kernel learning with Bregman matrix divergences.
Journal of Machine Learning Research, 10(2), 2009.
[35] H. Luo, A. Agarwal, N. Cesa-Bianchi, and J. Langford. Efficient second order online learning by sketching. Advances in Neural Information Processing Systems, 29, 2016.
[36] L. Luo, C. Chen, Z. Zhang, W.-J. Li, and T. Zhang. Robust frequent directions with application in online learning. The Journal of Machine Learning Research, 20(1):1697-1737, 2019.
[37] L. Luo, C. Chen, Z. Zhang, W.-J. Li, and T. Zhang. Robust frequent directions with application in online learning. Journal of Machine Learning Research, 20(45):1-41, 2019. URL http://jmlr.org/papers/v20/17-773.html.
[38] J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pages 2408-2417. PMLR, 2015.
[39] S. Merity, C. Xiong, J. Bradbury, and R. Socher. Pointer sentinel mixture models, 2016.
[40] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence $O(1/k^2)$. 1983.
[41] N. Qian. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145-151, 1999. ISSN 0893-6080. doi: https://doi.org/10.1016/S0893-6080(98)00116-6. URL https://www.sciencedirect.com/science/article/pii/S0893608098001166.
[42] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85-117, 2015. doi: 10.1016/j.neunet.2014.09.003. URL https://doi.org/10.1016%2Fj.neunet.2014.09.003.
[43] S. Shalev-Shwartz et al. Online learning and online convex optimization. Foundations and Trends® in Machine Learning, 4(2):107-194, 2012.
[44] D. F. Shanno. Conditioning of quasi-Newton methods for function minimization. Mathematics of Computation, 24(111):647-656, 1970.
[45] N. Shazeer and M. Stern. Adafactor: Adaptive learning rates with sublinear memory cost, 2018.
[46] H.-J. M. Shi, T.-H. Lee, S. Iwasaki, J. Gallego-Posada, Z. Li, K. Rangadurai, D. Mudigere, and M. Rabbat.
A distributed data-parallel PyTorch implementation of the distributed Shampoo optimizer for training neural networks at scale, 2023.
[47] D. R. So, W. Manke, H. Liu, Z. Dai, N. Shazeer, and Q. V. Le. Primer: Searching for efficient transformers for language modeling, 2022.
[48] T. Tieleman and G. Hinton. Lecture 6.5, RMSProp: Divide the gradient by a running average of its recent magnitude. Coursera: Neural networks for machine learning, 2012.
[49] L. Vandenberghe, M. S. Andersen, et al. Chordal graphs and semidefinite optimization. Foundations and Trends® in Optimization, 1(4):241-433, 2015.
[50] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need, 2023.
[51] T. Wang, A. Roberts, D. Hesslow, T. L. Scao, H. W. Chung, I. Beltagy, J. Launay, and C. Raffel. What language model architecture and pretraining objective work best for zero-shot generalization?, 2022.
[52] L. Zhang, S. Shi, and B. Li. Eva: Practical second-order optimization with Kronecker-vectorized approximation. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=_Mic8V96Voy.
[53] R. Zhang, S. Fattahi, and S. Sojoudi. Large-scale sparse inverse covariance estimation via thresholding and max-det matrix completion. In International Conference on Machine Learning, pages 5766-5775. PMLR, 2018.
[54] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 928-936, 2003.
# A Supplementary material

A.1 Properties of LogDet subproblem
A.2 Regret bound analysis
  A.2.1 Regret bound decomposition
  A.2.2 Properties of tridiagonal preconditioner
  A.2.3 Upper bounding the regret
  A.2.4 $\mathcal{O}(\sqrt{T})$ regret
  A.2.5 Non-convex guarantees
A.3 Numerical stability
  A.3.1 Condition number analysis
  A.3.2 Degenerate $H_{t}$
  A.3.3 Numerically stable SONew proof
A.4 Additional experiments, ablations, and details
  A.4.1 Ablations
  A.4.2 Memory requirements
  A.4.3 Hyperparameter search space
  A.4.4 Additional experiments
  A.4.5 Convex experiments

# A.1 Properties of LogDet subproblem

Proof of Theorem 3.2.

The optimality condition of (11) is $P_{\mathcal{G}}(X^{-1}) = P_{\mathcal{G}}(H)$, $X \in S_n^{++}(\mathcal{G})$. Let $Z = L^{-T}D^{-1}L^{-1}$; the optimality condition then requires $P_{\mathcal{G}}(Z) = H$. We have

$$
ZL = L^{-T}D^{-1} \Longrightarrow ZLe_{j} = L^{-T}D^{-1}e_{j}.
$$

Let $J_{j} = I_{j} \cup \{j\}$, where $I_{j} = \{j + 1,\dots ,j + b\}$ as defined in the theorem. Selecting the $J_j$ indices of the vectors on both sides of the second equality above gives:

$$
\begin{bmatrix} Z_{jj} & Z_{j I_j} \\ Z_{I_j j} & Z_{I_j I_j} \end{bmatrix} \begin{bmatrix} 1 \\ L_{I_j j} \end{bmatrix} = \begin{bmatrix} 1/d_{jj} \\ 0 \end{bmatrix} \tag{15}
$$

Note that $L^{-T}$ is an upper triangular matrix with ones on the diagonal, hence the $J_{j}$ block of $L^{-T}e_{j}$ is $[1,0,\ldots,0]^T$.
Also, since $P_{\mathcal{G}}(Z) = H$,

$$
\begin{bmatrix} Z_{jj} & Z_{j I_j} \\ Z_{I_j j} & Z_{I_j I_j} \end{bmatrix} = \begin{bmatrix} H_{jj} & H_{j I_j} \\ H_{I_j j} & H_{I_j I_j} \end{bmatrix}
$$

Substituting this into the linear equation (15):

$$
\begin{bmatrix} H_{jj} & H_{j I_j} \\ H_{I_j j} & H_{I_j I_j} \end{bmatrix} \begin{bmatrix} 1 \\ L_{I_j j} \end{bmatrix} = \begin{bmatrix} 1/d_{jj} \\ 0 \end{bmatrix}
$$

$$
\begin{bmatrix} H_{jj} & H_{j I_j} \\ H_{I_j j} & H_{I_j I_j} \end{bmatrix} \begin{bmatrix} d_{jj} \\ d_{jj} L_{I_j j} \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}
$$

$$
H_{jj} d_{jj} + d_{jj} H_{I_j j}^{T} L_{I_j j} = 1
$$

$$
H_{I_j j} d_{jj} + d_{jj} H_{I_j I_j} L_{I_j j} = 0
$$

The theorem follows from solving the above equations. Note that here we used that the lower triangular halves of the matrices $L$ and $H$ have the same sparsity pattern, which follows from the fact that a banded graph is a chordal graph with perfect elimination order $\{1,2,\ldots ,n\}$. Furthermore, $X_{t}$ is positive definite, since $(H_{jj} - H_{I_jj}^T H_{I_jI_j}^{-1}H_{I_jj})$ is a Schur complement of the submatrix of $H$ formed by $J_{j} = I_{j}\cup \{j\}$.

Proof of Theorem 3.1. The proof follows from Theorem 3.2 with $b$ set to 1.

# A.2 Regret bound analysis

Proof sketch of Theorem 3.3. We decompose the regret as $R_{T} \leq T_{1} + T_{2} + T_{3}$ in Lemma A.1 and bound the terms individually.
Term $T_{2} = \frac{1}{2\eta} \cdot \sum_{t=1}^{T-1} (w_{t+1} - w^{*})^{T} (X_{t+1}^{-1} - X_{t}^{-1})(w_{t+1} - w^{*})$ depends on the closeness of consecutive preconditioner inverses, $(X_{t+1}^{-1} - X_{t}^{-1})$. To upper bound it, we first give an explicit expression for $X_{t}^{-1}$ for the tridiagonal preconditioner in Lemma A.2 in Appendix A.2.2. This explicit expression is later used to bound each entry of $(X_{t+1}^{-1} - X_{t}^{-1})$ by $O(1/\sqrt{t})$ in Appendix A.2.4, which gives an $O(\sqrt{T})$ upper bound on $T_{2}$. To upper bound $T_{3} = \sum_{t=1}^{T} \frac{\eta}{2} \cdot g_{t}^{T} X_{t} g_{t}$, we individually bound $g_{t}^{T} X_{t} g_{t}$ using the Loewner order $X_{t} \preceq \|X_{t}\|_{2} I_{n} \preceq \|X_{t}\|_{\infty} I_{n}$ and show that $\|X_{t}\|_{\infty} = O(1/\sqrt{T})$, and consequently $T_{3} = O(\sqrt{T})$.

# A.2.1 Regret bound decomposition

In this subsection we state Lemma A.1, which upper bounds the regret $R_{T}$ by three terms $T_{1}, T_{2}, T_{3}$.

Lemma A.1 ([26]). In the OCO problem setup, if a prediction $w_{t} \in \mathbb{R}^{n}$ is made at round $t$ and is updated as $w_{t+1} := w_{t} - \eta X_{t} g_{t}$ using a preconditioner matrix $X_{t} \in S_{n}^{++}$, then

$$
\begin{array}{l} R_{T} \leq \frac{1}{2\eta} \cdot \left(\left\| w_{1} - w^{*} \right\|_{X_{1}^{-1}}^{2} - \left\| w_{T+1} - w^{*} \right\|_{X_{T}^{-1}}^{2}\right) (16) \\ + \frac{1}{2\eta} \cdot \sum_{t=1}^{T-1} \left(w_{t+1} - w^{*}\right)^{T} \left(X_{t+1}^{-1} - X_{t}^{-1}\right) \left(w_{t+1} - w^{*}\right) (17) \\ + \sum_{t=1}^{T} \frac{\eta}{2} \cdot g_{t}^{T} X_{t} g_{t} (18) \\ \end{array}
$$

Proof.
$$
\begin{array}{l} \left\| w_{t+1} - w^{*} \right\|_{X_{t}^{-1}}^{2} = \left\| w_{t} - \eta X_{t} g_{t} - w^{*} \right\|_{X_{t}^{-1}}^{2} \\ = \left\| w_{t} - w^{*} \right\|_{X_{t}^{-1}}^{2} + \eta^{2} g_{t}^{T} X_{t} g_{t} - 2\eta \left(w_{t} - w^{*}\right)^{T} g_{t} \\ \Longrightarrow 2\eta \left(w_{t} - w^{*}\right)^{T} g_{t} = \left\| w_{t} - w^{*} \right\|_{X_{t}^{-1}}^{2} - \left\| w_{t+1} - w^{*} \right\|_{X_{t}^{-1}}^{2} + \eta^{2} g_{t}^{T} X_{t} g_{t} \\ \end{array}
$$

Using the convexity of $f_{t}$, i.e., $f_{t}(w_{t}) - f_{t}(w^{*}) \leq (w_{t} - w^{*})^{T}g_{t}$, where $g_{t} = \nabla f_{t}(w_{t})$, and summing over $t \in [T]$:

$$
\begin{array}{l} R_{T} \leq \sum_{t=1}^{T} \frac{1}{2\eta} \cdot \left(\left\| w_{t} - w^{*} \right\|_{X_{t}^{-1}}^{2} - \left\| w_{t+1} - w^{*} \right\|_{X_{t}^{-1}}^{2}\right) (19) \\ + \frac{\eta}{2} \cdot g_{t}^{T} X_{t} g_{t} (20) \\ \end{array}
$$

The first summation can be decomposed as follows

$$
\begin{array}{l} \sum_{t=1}^{T} \left(\| w_{t} - w^{*} \|_{X_{t}^{-1}}^{2} - \| w_{t+1} - w^{*} \|_{X_{t}^{-1}}^{2}\right) \\ = \left(\| w_{1} - w^{*} \|_{X_{1}^{-1}}^{2} - \| w_{T+1} - w^{*} \|_{X_{T}^{-1}}^{2}\right) \\ + \sum_{t=1}^{T-1} \left(w_{t+1} - w^{*}\right)^{T} \left(X_{t+1}^{-1} - X_{t}^{-1}\right) \left(w_{t+1} - w^{*}\right) \\ \end{array}
$$

Substituting the above identity into Equation (19) proves the lemma.

Let $R_{T} \leq T_{1} + T_{2} + T_{3}$, where

$$
T_{1} = \frac{1}{2\eta} \cdot \left(\left\| w_{1} - w^{*} \right\|_{X_{1}^{-1}}^{2} - \left\| w_{T+1} - w^{*} \right\|_{X_{T}^{-1}}^{2}\right)
$$
$$
T_{2} = \frac{1}{2\eta} \cdot \sum_{t=1}^{T-1} \left(w_{t+1} - w^{*}\right)^{T} \left(X_{t+1}^{-1} - X_{t}^{-1}\right) \left(w_{t+1} - w^{*}\right) \tag {21}
$$

$$
T_{3} = \sum_{t=1}^{T} \frac{\eta}{2} \cdot g_{t}^{T} X_{t} g_{t}
$$

# A.2.2 Properties of tridiagonal preconditioner

In this subsection, we derive properties of the tridiagonal preconditioner obtained from solving the LogDet subproblem (11) with $\mathcal{G}$ set to a chain graph over the ordered vertex set $\{1,\dots ,n\}$:

$$
\begin{array}{l} X_{t} = \underset{X \in S_{n}(\mathcal{G})^{++}}{\arg\min} - \log\det(X) + \operatorname{Tr}(X H_{t}) (22) \\ = \underset{X \in S_{n}(\mathcal{G})^{++}}{\arg\min} \mathrm{D}_{\ell\mathrm{d}}\left(X, H_{t}^{-1}\right) (23) \\ \end{array}
$$

The second equality holds true only when $H_{t}$ is positive definite. Although in Algorithm 1 we maintain a sparse $H_{t} = H_{t-1} + P_{\mathcal{G}}(g_{t}g_{t}^{T} / \lambda_{t})$, $H_0 = \epsilon I_n$, which is further used in (22) to find the preconditioner $X_{t}$, our analysis assumes the full update $H_{t} = H_{t-1} + g_{t}g_{t}^{T} / \lambda_{t}$, $H_0 = \epsilon I_n$, followed by computation of the preconditioner $X_{t}$ using (23). Note that the preconditioners $X_{t}$ generated both ways are the same, as shown in Section 3.2.

The following lemma shows that the inverse of the tridiagonal preconditioner used in Algorithm 1 restores $H_{i,j}$ whenever $(i,j)$ falls in the tridiagonal graph; otherwise, the entry is a product of terms $H_{i+k,i+k+1}$ corresponding to the edges on the path from node $i$ to node $j$ in the chain graph. This lemma will be used later in upper bounding $T_2$.

Lemma A.2 (Inverse of tridiagonal preconditioner).
If $\mathcal{G} =$ chain/tridiagonal graph and $\hat{X} = \arg\min_{X\in S_n(\mathcal{G})^{++}}\mathrm{D}_{\ell\mathrm{d}}(X,H^{-1})$, then the inverse $\hat{X}^{-1}$ has the following expression (for $i < j$; the case $i > j$ follows by symmetry):

$$
\left(\hat{X}^{-1}\right)_{ij} = \left\{ \begin{array}{l l} H_{ij} & |i - j| \leq 1 \\ \frac{H_{ii+1} H_{i+1i+2} \cdots H_{j-1j}}{H_{i+1i+1} \cdots H_{j-1j-1}} & j - i > 1 \end{array} \right. \tag {24}
$$

Proof.

$$
\hat{X}^{-1} \hat{X}^{(j)} = e_{j}
$$

where $\hat{X}^{(j)}$ is the $j^{th}$ column of $\hat{X}$. Let $\hat{Y}$ denote the right hand side of Equation (24).

$$
\begin{array}{l} (\hat{Y}\hat{X})_{jj} = \hat{X}_{jj}\hat{Y}_{jj} + \hat{X}_{j-1j}\hat{Y}_{j-1j} + \hat{X}_{jj+1}\hat{Y}_{jj+1} \\ = \hat{X}_{jj} H_{jj} + \hat{X}_{j-1j} H_{j-1j} + \hat{X}_{jj+1} H_{jj+1} \\ = 1 \\ \end{array}
$$

The last equality uses the following alternative form of Equation (12):

$$
\hat{X}_{i,j} = \left\{ \begin{array}{l} 0, \text{ if } j - i > 1 \\ \frac{-H_{i,i+1}}{\left(H_{ii} H_{i+1,i+1} - H_{i,i+1}^{2}\right)}, \text{ if } j = i + 1 \\ \frac{1}{H_{ii}}\left(1 + \sum_{j \in \operatorname{neig}_{\mathcal{G}}(i)} \frac{H_{ij}^{2}}{H_{ii} H_{jj} - H_{ij}^{2}}\right), \text{ if } i = j \end{array} \right., \tag {25}
$$

where $i < j$. Similarly, the off-diagonals of $\hat{Y}\hat{X}$ can be evaluated to be zero as follows.
$$
\begin{array}{l} (\hat{Y}\hat{X})_{ij} = \hat{Y}_{ij}\hat{X}_{jj} + \hat{Y}_{ij-1}\hat{X}_{j-1j} + \hat{Y}_{ij+1}\hat{X}_{j+1j} \\ = \hat{Y}_{ij}\hat{X}_{jj} + \hat{Y}_{ij}\frac{H_{j-1j-1}}{H_{j-1j}}\hat{X}_{j-1j} + \hat{Y}_{ij}\frac{H_{jj+1}}{H_{jj}}\hat{X}_{j+1j} \\ = 0 \\ \end{array}
$$

Here the second equality uses the product form of $\hat{Y}$ in (24), which gives $\hat{Y}_{ij-1} = \hat{Y}_{ij} H_{j-1j-1}/H_{j-1j}$ and $\hat{Y}_{ij+1} = \hat{Y}_{ij} H_{jj+1}/H_{jj}$ for $i < j - 1$.

Lemma A.3. Let $y \in \mathbb{R}^n$ and $\beta = \max_{t}\max_{i\in [n-1]}\left|\left(H_{t}\right)_{ii+1}\right| / \sqrt{\left(H_{t}\right)_{ii}\left(H_{t}\right)_{i+1i+1}} < 1$; then

$$
y^{T} X_{t}^{-1} y \leq \|y\|_{2}^{2}\,\|\operatorname{diag}(H_{t})\|_{2}\left(\frac{1+\beta}{1-\beta}\right),
$$

where $X_{t}$ is defined as in Lemma A.2.

Proof. Let $\tilde{X}_t^{-1} = \mathrm{diag}(H_t)^{-1/2} X_t^{-1}\,\mathrm{diag}(H_t)^{-1/2}$. Then

$$
y^{T} X_{t}^{-1} y \leq \left\|\operatorname{diag}\left(H_{t}\right)^{1/2} y\right\|_{2}^{2}\left\|\tilde{X}_{t}^{-1}\right\|_{2} \tag {26}
$$

Using the spectral-radius identity $\rho(X) \leq \|X\|_{\infty}$ and since $\tilde{X}_t^{-1}$ is positive definite, $\left\|\tilde{X}_t^{-1}\right\|_2 \leq \|\tilde{X}_t^{-1}\|_{\infty}$:

$$
\begin{array}{l} \left\|\tilde{X}_{t}^{-1}\right\|_{2} \leq \max_{i}\left\{\sum_{j}\left|(\tilde{X}_{t}^{-1})_{ij}\right|\right\} \\ \leq 1 + 2\left(\beta + \beta^{2} + \dots\right) \\ \leq \frac{1+\beta}{1-\beta} \\ \end{array}
$$

The second inequality uses Lemma A.2: the entries of $\tilde{X}_t^{-1}$ are the normalized entries $(H_t)_{ij}/\sqrt{(H_t)_{ii}(H_t)_{jj}}$ and their products, so the $(i,j)$ entry is bounded by $\beta^{|i-j|}$. Substituting this in Equation (26) gives the lemma.

# A.2.3 Upperbounding Regret

The following lemma is used in upper bounding both $T_{1}$ and $T_{3}$; $T_{2}$ is bounded in the next subsection.

Lemma A.4.
Let $\beta = \max_{t\in [T]}\max_{i\in [n-1]}|(H_t)_{ii+1}| / \sqrt{(H_t)_{ii}(H_t)_{i+1i+1}}$; then

$$
1 / (1 - \beta) \leq 8 / \hat{\epsilon}^{2},
$$

where $\hat{\epsilon}$ is the constant in the parameter $\epsilon = \hat{\epsilon} G_{\infty}\sqrt{T}$ used to initialize $H_0 = \epsilon I_n$ in line 1 of Algorithm 1.

Proof.

$$
\begin{array}{l} 1 / (1 - \beta) = \max_{t}\max_{i \in [n-1]} \frac{1}{1 - \left|(\hat{H}_{t})_{ii+1}\right|} (27) \\ = \max_{t}\max_{i \in [n-1]} \frac{1 + \left|(\hat{H}_{t})_{ii+1}\right|}{1 - (\hat{H}_{t})_{ii+1}^{2}} \quad \left(\text{where } (\hat{H}_{t})_{ii+1} = \frac{(H_{t})_{ii+1}}{\sqrt{(H_{t})_{ii}(H_{t})_{i+1i+1}}}\right) \\ \leq \max_{t}\max_{i \in [n-1]} \frac{2\left(H_{t}\right)_{ii}\left(H_{t}\right)_{i+1i+1}}{\left(H_{t}\right)_{ii}\left(H_{t}\right)_{i+1i+1} - \left(H_{t}\right)_{ii+1}^{2}} \quad \left(\text{since } |(H_{t})_{ii+1}| \leq \sqrt{\left(H_{t}\right)_{ii}\left(H_{t}\right)_{i+1i+1}}\right) \\ \leq \max_{t}\max_{i \in [n-1]} \frac{2\left(H_{t}\right)_{ii}\left(H_{t}\right)_{i+1i+1}}{\det\left(\left[ \begin{array}{l l} \left(H_{t}\right)_{ii} & \left(H_{t}\right)_{ii+1} \\ \left(H_{t}\right)_{i+1i} & \left(H_{t}\right)_{i+1i+1} \end{array} \right]\right)} (28) \\ \end{array}
$$

Note that $\begin{bmatrix} (H_t)_{ii} & (H_t)_{ii+1} \\ (H_t)_{i+1i} & (H_t)_{i+1i+1} \end{bmatrix} \succeq \epsilon \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ (using line 1 in Algorithm 1), thus $\det \left( \begin{bmatrix} (H_t)_{ii} & (H_t)_{ii+1} \\ (H_t)_{i+1i} & (H_t)_{i+1i+1} \end{bmatrix} \right) \geq \det \left( \epsilon \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right) = \epsilon^2$.
The numerator of the last inequality can be upper bounded by bounding $(H_t)_{ii}$ individually as follows:

$$
\begin{array}{l} (H_{t})_{ii} = \sum_{s=1}^{t} (g_{s})_{i}^{2} / \lambda_{s} \\ = \sum_{s=1}^{t} (g_{s})_{i}^{2} / \left(G_{\infty}\sqrt{s}\right) \\ \leq \sum_{s=1}^{t} G_{\infty}^{2} / (G_{\infty}\sqrt{s}) \\ = \sum_{s=1}^{t} \frac{G_{\infty}}{\sqrt{s}} \\ \leq 2 G_{\infty}\sqrt{t} \tag {29} \\ \end{array}
$$

Substituting the above in (28) gives

$$
\begin{array}{l} 1 / (1 - \beta) \leq \max_{t} \frac{8 G_{\infty}^{2} t}{\hat{\epsilon}^{2} G_{\infty}^{2} T} \\ \leq \frac{8}{\hat{\epsilon}^{2}} \\ \end{array}
$$

Lemma A.5 (Upper bound of $T_{1}$).

$$
T_{1} \leq \frac{16 D_{2}^{2} G_{\infty}\sqrt{T}}{\hat{\epsilon}^{2}\eta}, \tag {30}
$$

where $D_{2} = \max_{t\in [T]}\| w_{t} - w^{*}\|_{2}$ and $G_{\infty} = \max_{t}\| g_{t}\|_{\infty}$.

Proof. Since $X_{T}$ is positive definite,

$$
\begin{array}{l} T_{1} \leq \frac{\left\| w_{1} - w^{*}\right\|_{X_{1}^{-1}}^{2}}{2\eta} \\ = \frac{\left(y^{(1)}\right)^{T} X_{1}^{-1} y^{(1)}}{2\eta} \quad \left(\text{where } y^{(1)} = w_{1} - w^{*}\right) \\ \leq \frac{\left\| y^{(1)}\right\|_{2}^{2}\left\|\operatorname{diag}\left(H_{1}\right)\right\|_{2}}{2\eta} \cdot \frac{1+\beta}{1-\beta} \quad (\text{Lemma A.3}) \\ \leq \frac{D_{2}^{2}\left(G_{\infty}^{2} / \lambda_{1} + \epsilon\right)}{2\eta} \cdot \frac{1+\beta}{1-\beta} \quad (\text{line 4 in Algorithm 1}) \\ \leq \frac{8 D_{2}^{2}\left(G_{\infty}^{2} / \lambda_{1} + \epsilon\right)}{\hat{\epsilon}^{2}\eta} \quad (\text{Lemma A.4}) \\ \leq \frac{8 D_{2}^{2}\left(G_{\infty} + \hat{\epsilon} G_{\infty}\sqrt{T}\right)}{\hat{\epsilon}^{2}\eta} \quad \left(\text{since } \lambda_{t} = G_{\infty}\sqrt{t} \text{ and } \epsilon = \hat{\epsilon} G_{\infty}\sqrt{T}\right) \\ \leq \frac{16 D_{2}^{2} G_{\infty}\sqrt{T}}{\hat{\epsilon}^{2}\eta} \quad (\hat{\epsilon} < 1) \\ \end{array}
$$

Lemma A.6 ($O(\sqrt{T})$ upper bound on $T_{3}$).

$$
T_{3} = \sum_{t=1}^{T} \frac{\eta}{2} \cdot g_{t}^{T} X_{t} g_{t} \leq \frac{4 n G_{\infty}\eta}{\hat{\epsilon}^{3}}\sqrt{T}
$$

where $\| g_t\|_{\infty}\leq G_{\infty}$ and the parameters $\epsilon = \hat{\epsilon} G_{\infty}\sqrt{T}$, $\lambda_{t} = G_{\infty}\sqrt{t}$ are as in Algorithm 1.

Proof. Using Theorem 3.1, the nonzero entries of $X_{t}$ can be written as follows:

$$
(X_{t})_{ii} = \frac{1}{H_{ii}}\left(1 + \sum_{(i,j) \in E_{\mathcal{G}}} \frac{H_{ij}^{2}}{H_{ii} H_{jj} - H_{ij}^{2}}\right)
$$

$$
(X_{t})_{ii+1} = -\frac{H_{ii+1}}{H_{ii} H_{i+1i+1} - H_{ii+1}^{2}}
$$

where $E_{\mathcal{G}}$ denotes the set of edges of the chain graph $\mathcal{G}$ in Theorem 3.1. Also, for brevity, the subscript is dropped for $H_{t}$. Let $\hat{X}_t = \sqrt{\operatorname{diag}(H)}\, X_t\sqrt{\operatorname{diag}(H)}$; then $\hat{X}_t$ can be written as

$$
(\hat{X}_{t})_{ii} = \left(1 + \sum_{(i,j) \in E_{\mathcal{G}}} \frac{\hat{H}_{ij}^{2}}{1 - \hat{H}_{ij}^{2}}\right),
$$

$$
(\hat{X}_{t})_{ii+1} = -\frac{\hat{H}_{ii+1}}{1 - \hat{H}_{ii+1}^{2}},
$$

where $\hat{H}_{ij} = H_{ij} / \sqrt{H_{ii}H_{jj}}$.
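As a sanity check, the closed-form tridiagonal entries above can be verified numerically against the optimality condition $P_{\mathcal{G}}(X_t^{-1}) = P_{\mathcal{G}}(H_t)$ of (11) and the path-product expression of Lemma A.2. Below is a minimal sketch assuming NumPy; the helper name `tridiag_logdet_solution` and the random test matrix are ours, not part of the paper's code.

```python
import numpy as np

def tridiag_logdet_solution(H):
    # Closed-form solution of the LogDet subproblem for a chain graph
    # (Theorem 3.1): only the tridiagonal band of H is read.
    n = H.shape[0]
    X = np.zeros((n, n))
    for i in range(n - 1):
        det2 = H[i, i] * H[i + 1, i + 1] - H[i, i + 1] ** 2
        X[i, i + 1] = X[i + 1, i] = -H[i, i + 1] / det2
    for i in range(n):
        s = 1.0
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                s += H[i, j] ** 2 / (H[i, i] * H[j, j] - H[i, j] ** 2)
        X[i, i] = s / H[i, i]
    return X

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
H = A @ A.T + n * np.eye(n)   # SPD, so every 2x2 principal minor is positive
X = tridiag_logdet_solution(H)
Xinv = np.linalg.inv(X)
band = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 1
# Optimality condition of (11): X^{-1} agrees with H on the tridiagonal band.
assert np.allclose(Xinv[band], H[band])
# Off-band entries follow the path-product formula of Lemma A.2, e.g. entry (1, 3):
assert np.isclose(Xinv[1, 3], H[1, 2] * H[2, 3] / H[2, 2])
```

The band equality is exactly the stationarity condition of (22), and the off-band check matches the numerator/denominator products in (24).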
Note that $\hat{X}_t \preceq \| \hat{X}_t\|_2 I_n \preceq \| \hat{X}_t\|_\infty I_n$, using $\max \{|\lambda_1(\hat{X}_t)|, \ldots, |\lambda_n(\hat{X}_t)|\} \leq \| \hat{X}_t\|_\infty$ (property of the spectral radius). So we next upper bound $\|\hat{X}_t\|_\infty = \max_{i \in [n]}\sum_{j}|(\hat{X}_t)_{ij}|$, where the $i$-th row sum is $|(\hat{X}_t)_{ii-1}| + |(\hat{X}_t)_{ii}| + |(\hat{X}_t)_{ii+1}|$ (the boundary rows $i = 1$ and $i = n$ have only two terms). The individual row sums can be written as follows:

$$
\begin{array}{l} |(\hat{X}_{t})_{ii}| + \sum_{(i,j) \in E_{\mathcal{G}}} |(\hat{X}_{t})_{ij}| = 1 + \sum_{(i,j) \in E_{\mathcal{G}}}\left(\frac{\hat{H}_{ij}^{2}}{1 - \hat{H}_{ij}^{2}} + \frac{|\hat{H}_{ij}|}{1 - \hat{H}_{ij}^{2}}\right) \\ = 1 + \sum_{(i,j) \in E_{\mathcal{G}}} \frac{|\hat{H}_{ij}|}{1 - |\hat{H}_{ij}|} \\ \leq 2\max_{i \in [n-1]} \frac{1}{1 - |\hat{H}_{ii+1}|} \\ \end{array}
$$

The last inequality is because $|\hat{H}_{ij}| \leq 1$. Thus, $\| \hat{X}_t\|_\infty \leq 2\max_{i\in [n-1]}\frac{1}{1 - |\hat{H}_{ii+1}|}$. Now

$$
\begin{array}{l} g_{t}^{T} X_{t} g_{t} = g_{t}^{T}\operatorname{diag}\left(H_{t}\right)^{-1/2}\hat{X}_{t}\operatorname{diag}\left(H_{t}\right)^{-1/2} g_{t} \\ \leq \|\hat{X}_{t}\|_{\infty}\|\operatorname{diag}\left(H_{t}\right)^{-1/2} g_{t}\|_{2}^{2} \quad \left(\left\|\hat{X}_{t}\right\|_{2} \leq \left\|\hat{X}_{t}\right\|_{\infty}\right) \\ \leq 2\max_{i \in [n-1]} \frac{1}{1 - |\hat{H}_{ii+1}|} g_{t}^{T}\operatorname{diag}(H_{t})^{-1} g_{t}. \\ \end{array}
$$

Using $\mathrm{diag}(H_t) \succeq \epsilon I_n$ (step 1 in Algorithm 1), where $\epsilon = \hat{\epsilon} G_{\infty}\sqrt{T}$ as in Lemma A.4, gives

$$
\begin{array}{l} g_{t}^{T} X_{t} g_{t} \leq 2\max_{i \in [n-1]} \frac{1}{1 - |\hat{H}_{ii+1}|} \frac{\| g_{t}\|_{2}^{2}}{\hat{\epsilon} G_{\infty}\sqrt{T}} \\ \leq 2\max_{i \in [n-1]} \frac{n G_{\infty}}{\hat{\epsilon}(1 - |\hat{H}_{ii+1}|)\sqrt{T}} \\ \leq \frac{2 n G_{\infty}}{\hat{\epsilon}(1 - \beta)\sqrt{T}} \quad \left(\text{where } \beta = \max_{t \in [T]}\max_{i \in [n-1]}\left|(\hat{H}_{t})_{ii+1}\right|\right) \\ \end{array}
$$

Summing up over $t$ gives

$$
\begin{array}{l} \sum_{t} \frac{\eta}{2} g_{t}^{T} X_{t} g_{t} \leq \sum_{t} \frac{16 n G_{\infty}\eta}{\hat{\epsilon}^{3}\sqrt{T}} \quad (\text{using Lemma A.4}) \\ \leq \frac{16 n G_{\infty}\eta}{\hat{\epsilon}^{3}}\sqrt{T} \\ \end{array}
$$

# A.2.4 $\mathcal{O}(\sqrt{T})$ Regret

In this section we derive a regret upper bound with $\mathcal{O}(T^{1/2})$ growth; for this, we also upper bound $T_{2}$. In (21), $T_{2} = \frac{1}{2\eta}\sum_{t=1}^{T-1} (w_{t+1} - w^{*})^{T} (X_{t+1}^{-1} - X_{t}^{-1})(w_{t+1} - w^{*})$ can be upper bounded by $\mathcal{O}(T^{1/2})$ by upper bounding the entries of $X_{t+1}^{-1} - X_{t}^{-1}$ individually. The following lemmas construct a telescoping argument to bound $\left| (X_{t+1}^{-1} - X_{t}^{-1})_{i,j} \right|$.

Lemma A.7.
Let $H, \tilde{H} \in S_n^{++}$, such that $\tilde{H} = H + g g^T / \lambda$, where $g \in \mathbb{R}^n$; then

$$
\begin{array}{l} \frac{\tilde{H}_{ij}}{\sqrt{\tilde{H}_{ii}\tilde{H}_{jj}}} - \frac{H_{ij}}{\sqrt{H_{ii} H_{jj}}} \\ = \frac{g_{i} g_{j}}{\lambda\sqrt{\tilde{H}_{ii}\tilde{H}_{jj}}} + \frac{H_{ij}}{\sqrt{H_{ii} H_{jj}}}\left(\sqrt{\frac{H_{ii} H_{jj}}{\tilde{H}_{ii}\tilde{H}_{jj}}} - 1\right) =: \theta_{ij} \\ \end{array}
$$

Proof.

$$
\begin{array}{l} \frac{\tilde{H}_{ij}}{\sqrt{\tilde{H}_{ii}\tilde{H}_{jj}}} - \frac{H_{ij}}{\sqrt{H_{ii} H_{jj}}} \\ = \frac{1}{\sqrt{H_{ii} H_{jj}}}\left(\tilde{H}_{ij}\frac{\sqrt{H_{ii} H_{jj}}}{\sqrt{\tilde{H}_{ii}\tilde{H}_{jj}}} - H_{ij}\right) \\ = \frac{1}{\sqrt{H_{ii} H_{jj}}}\left(\frac{g_{i} g_{j}}{\lambda}\frac{\sqrt{H_{ii} H_{jj}}}{\sqrt{\tilde{H}_{ii}\tilde{H}_{jj}}} + H_{ij}\left(\frac{\sqrt{H_{ii} H_{jj}}}{\sqrt{\tilde{H}_{ii}\tilde{H}_{jj}}} - 1\right)\right) \\ \end{array}
$$

The following lemma bounds the change in the preconditioner inverse $Y^{-1}$ under a rank-one perturbation of $H\succ 0$ in the following LogDet problem (11):

$$
\begin{array}{l} Y = \operatorname*{arg\,min}_{X\in S_{n}(\mathcal{G})^{++}} - \log\det\left(X\right) + \operatorname{Tr}(XH) \\ = \operatorname*{arg\,min}_{X\in S_{n}(\mathcal{G})^{++}}\mathrm{D}_{\ell\mathrm{d}}(X,H^{-1}) \\ \end{array}
$$

Lemma A.8 (Rank-one perturbation of LogDet problem (11)). Let $H, \tilde{H} \in S_n^{++}$, such that $\tilde{H} = H + gg^T / \lambda$, where $g \in \mathbb{R}^n$.
Also, $\tilde{Y} = \arg\min_{X \in S_n(\mathcal{G})^{++}} \mathrm{D}_{\ell\mathrm{d}}(X, \tilde{H}^{-1})$ and $Y = \arg\min_{X \in S_n(\mathcal{G})^{++}} \mathrm{D}_{\ell\mathrm{d}}(X, H^{-1})$, where $\mathcal{G}$ is a chain graph; then

$$
\left| (\tilde{Y}^{-1} - Y^{-1})_{ii+k} \right| \leq G_{\infty}^{2}\kappa (k\beta + k + 2)\beta^{k-1} / \lambda,
$$

where $i, i + k \leq n$, $G_{\infty} = \| g \|_{\infty}$ and $\max_{i,j} |H_{ij}| / \sqrt{H_{ii} H_{jj}} \leq \beta < 1$ (and likewise for $\tilde{H}$). Let $\kappa(\mathrm{diag}(H)) := \text{condition number of the diagonal part of } H$; then $\kappa := \max(\kappa(\mathrm{diag}(H)), \kappa(\mathrm{diag}(\tilde{H})))$.

Proof. Using Lemma A.2 will give the following:

$$
\begin{array}{l} \left|\left(\tilde{Y}^{-1} - Y^{-1}\right)_{ii+k}\right| \\ = \left|\frac{\tilde{H}_{ii+1}\cdots\tilde{H}_{i+k-1i+k}}{\tilde{H}_{i+1i+1}\cdots\tilde{H}_{i+k-1i+k-1}} - \frac{H_{ii+1}\cdots H_{i+k-1i+k}}{H_{i+1i+1}\cdots H_{i+k-1i+k-1}}\right| \\ = \left|\sqrt{\tilde{H}_{ii}}\,\tilde{N}_{ii+1}\cdots\tilde{N}_{i+k-1i+k}\sqrt{\tilde{H}_{i+ki+k}} - \sqrt{H_{ii}}\, N_{ii+1}\cdots N_{i+k-1i+k}\sqrt{H_{i+ki+k}}\right| \\ = \sqrt{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}\left|\tilde{N}_{ii+1}\cdots\tilde{N}_{i+k-1i+k} - N_{ii+1}\cdots N_{i+k-1i+k}\sqrt{H_{ii} H_{i+ki+k} / \tilde{H}_{ii}\tilde{H}_{i+ki+k}}\right| \\ \end{array}
$$

where $N_{ij} = H_{ij} / \sqrt{H_{ii}H_{jj}}$ and $|N_{ij}| < 1$ (since the determinants of $2\times 2$ submatrices of $H$ are positive).
Expanding $\tilde{N}_{ii+1} = N_{ii+1} + \theta_{ii+1}$ (from Lemma A.7), subsequently $\tilde{N}_{i+1i+2} = N_{i+1i+2} + \theta_{i+1i+2}$, and so on, will give

$$
\begin{array}{l} \left|\tilde{N}_{ii+1}\cdots\tilde{N}_{i+k-1i+k} - N_{ii+1}\cdots N_{i+k-1i+k}\sqrt{\frac{H_{ii} H_{i+ki+k}}{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}}\right| = \\ \left|\theta_{ii+1}\tilde{N}_{i+1i+2}\cdots\tilde{N}_{i+k-1i+k} + N_{ii+1}\left(\tilde{N}_{i+1i+2}\cdots\tilde{N}_{i+k-1i+k} - N_{i+1i+2}\cdots N_{i+k-1i+k}\sqrt{\frac{H_{ii} H_{i+ki+k}}{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}}\right)\right| \\ \end{array}
$$

$$
\begin{array}{l} = \left|\theta_{ii+1}\tilde{N}_{i+1i+2}\cdots\tilde{N}_{i+k-1i+k} + N_{ii+1}\theta_{i+1i+2}\tilde{N}_{i+2i+3}\cdots\tilde{N}_{i+k-1i+k} + \dots + N_{ii+1}\cdots N_{i+k-2i+k-1}\theta_{i+k-1i+k}\right. \\ \left. + N_{ii+1}\cdots N_{i+k-1i+k}\left(1 - \sqrt{\frac{H_{ii} H_{i+ki+k}}{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}}\right)\right| \\ \leq \left(\sum_{l=0}^{k-1}\left|\theta_{i+li+l+1}\right|\right)\beta^{k-1} + \beta^{k-1}\left|1 - \sqrt{\frac{H_{ii} H_{i+ki+k}}{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}}\right|, \\ \Rightarrow \left|(\tilde{Y}^{-1} - Y^{-1})_{ii+k}\right| \leq \sqrt{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}\cdot\left(\left(\sum_{l=0}^{k-1}|\theta_{i+li+l+1}|\right)\beta^{k-1} + \beta^{k-1}\left|1 - \sqrt{\frac{H_{ii} H_{i+ki+k}}{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}}\right|\right) \\ \end{array}
$$

where $\max_{i,j}|N_{i,j}|, \max_{i,j}|\tilde{N}_{i,j}|\leq \beta < 1$.
Expanding $\theta_{i+li+l+1}$ from Lemma A.7 in the term $|\theta_{i+li+l+1}|\sqrt{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}$ will give:

$$
\begin{array}{l} \left|\theta_{i+li+l+1}\right|\sqrt{\tilde{H}_{ii}\tilde{H}_{i+ki+k}} \\ = \left|\sqrt{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}\frac{g_{i+l} g_{i+l+1}}{\lambda\sqrt{\tilde{H}_{i+li+l}\tilde{H}_{i+l+1i+l+1}}} + \sqrt{\tilde{H}_{ii}\tilde{H}_{i+ki+k}} N_{i+li+l+1}\left(\sqrt{\frac{H_{i+li+l} H_{i+l+1i+l+1}}{\tilde{H}_{i+li+l}\tilde{H}_{i+l+1i+l+1}}} - 1\right)\right| \\ \leq \left|\sqrt{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}\frac{g_{i+l} g_{i+l+1}}{\lambda\sqrt{\tilde{H}_{i+li+l}\tilde{H}_{i+l+1i+l+1}}}\right| + \left|\sqrt{\tilde{H}_{ii}\tilde{H}_{i+ki+k}} N_{i+li+l+1}\left(1 - \sqrt{\frac{H_{i+li+l} H_{i+l+1i+l+1}}{\tilde{H}_{i+li+l}\tilde{H}_{i+l+1i+l+1}}}\right)\right| \\ \end{array}
$$

Since $H_{i+li+l} H_{i+l+1i+l+1}\leq \tilde{H}_{i+li+l}\tilde{H}_{i+l+1i+l+1}$,

$$
\begin{array}{l} 1 - \sqrt{\frac{H_{i+li+l} H_{i+l+1i+l+1}}{\tilde{H}_{i+li+l}\tilde{H}_{i+l+1i+l+1}}} \leq \max\left(1 - \frac{H_{i+li+l}}{\tilde{H}_{i+li+l}}, 1 - \frac{H_{i+l+1i+l+1}}{\tilde{H}_{i+l+1i+l+1}}\right) \\ \leq \max\left(\frac{g_{i+l}^{2}}{\lambda\tilde{H}_{i+li+l}}, \frac{g_{i+l+1}^{2}}{\lambda\tilde{H}_{i+l+1i+l+1}}\right) \\ \end{array}
$$

Using the above, $H_{i,i} / H_{j,j} \leq \kappa$, and $|g_i| \leq G_\infty, \forall i,j \in [n]$, gives

$$
\begin{array}{l} \sqrt{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}\,|\theta_{i+li+l+1}| \leq G_{\infty}^{2}\kappa / \lambda + \beta G_{\infty}^{2}\kappa / \lambda \\ \leq G_{\infty}^{2}\kappa (1+\beta) / \lambda \\ \end{array}
$$

Thus the following part of $\left|\left(\tilde{Y}^{-1} - Y^{-1}\right)_{ii+k}\right|$ can be upper bounded:

$$
\sqrt{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}\left(\left(\sum_{l=0}^{k-1}\left|\theta_{i+li+l+1}\right|\right)\beta^{k-1}\right) \leq G_{\infty}^{2}\kappa (1+\beta) k\beta^{k-1} / \lambda
$$

Also, $\sqrt{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}\beta^{k-1}\left|1 - \sqrt{\frac{H_{ii}H_{i+ki+k}}{\tilde{H}_{ii}\tilde{H}_{i+ki+k}}}\right|\leq \beta^{k-1}\kappa G_{\infty}^2 /\lambda$, so

$$
\left|\left(\tilde{Y}^{-1} - Y^{-1}\right)_{ii+k}\right| \leq G_{\infty}^{2}\kappa (k\beta + k + 2)\beta^{k-1} / \lambda
$$

Lemma A.9 ($\mathcal{O}(\sqrt{T})$ upper bound of $T_{2}$). Given that $\kappa(\mathrm{diag}(H_t)) \leq \kappa$, $\|w_t - w^*\|_2 \leq D_2$, and $\max_{i,j} |(H_t)_{ij}| / \sqrt{(H_t)_{ii}(H_t)_{jj}} \leq \beta < 1$, $\forall t \in [T]$ in Algorithm 1, the term $T_{2}$ in Appendix A.2.1 can be bounded as follows:

$$
T_{2} \leq \frac{2048\sqrt{T}}{\eta\hat{\epsilon}^{5}}(G_{\infty} D_{2}^{2})
$$

where $\lambda_{t} = G_{\infty}\sqrt{t}$ and $\epsilon = \hat{\epsilon} G_{\infty}\sqrt{T}$ in Algorithm 1, and $\hat{\epsilon} \leq 1$ is a constant.

Proof. Note that $T_2 = \frac{1}{2\eta} \cdot \sum_{t=1}^{T-1} (w_{t+1} - w^*)^T (X_{t+1}^{-1} - X_t^{-1})(w_{t+1} - w^*) \leq \sum_{t=1}^{T-1} D_2^2 \left\| (X_{t+1}^{-1} - X_t^{-1}) \right\|_2 / (2\eta)$.
Using $\|A\|_2 = \rho(A) \leq \|A\|_\infty$ for symmetric matrices $A$, we get

$$
\begin{array}{l} \left\| X_{t+1}^{-1} - X_{t}^{-1} \right\|_{2} \leq \left\| X_{t+1}^{-1} - X_{t}^{-1} \right\|_{\infty} \\ = \max_{i}\left(\sum_{j}\left|\left(X_{t+1}^{-1} - X_{t}^{-1}\right)_{ij}\right|\right) \\ \leq 16\frac{G_{\infty}\kappa}{\sqrt{t}(1-\beta)^{2}} \quad (\text{Lemma A.8}) \\ \leq 1024\cdot\frac{G_{\infty}\kappa}{\sqrt{t}\,\hat{\epsilon}^{4}} \\ \end{array}
$$

Now using $\kappa \leq 2 / \hat{\epsilon}$ (which follows from Equation (29) and $(H_{t})_{ii} \geq \epsilon = \hat{\epsilon} G_{\infty}\sqrt{T}$) and summing up the terms in $T_{2}$ using the above will give the result.

Putting together $T_{1}, T_{2}$ and $T_{3}$ from Lemma A.5, Lemma A.9 and Lemma A.6 respectively, when $\epsilon$, $\lambda_{t}$ are defined as in Lemma A.9:

$$
T_{1} \leq \frac{16 D_{2}^{2} G_{\infty}\sqrt{T}}{\hat{\epsilon}^{2}\eta},
$$

$$
T_{2} \leq \frac{2048\sqrt{T}}{\eta\hat{\epsilon}^{5}}\left(G_{\infty} D_{2}^{2}\right) \tag {31}
$$

$$
T_{3} \leq \frac{4 n G_{\infty}\eta}{\hat{\epsilon}^{3}}\sqrt{T} \tag {32}
$$

Setting $\eta = \frac{D_2}{\hat{\epsilon}\sqrt{n}}$,

$$
R_{T} \leq T_{1} + T_{2} + T_{3} \leq O(\sqrt{n} G_{\infty} D_{2}\sqrt{T})
$$

# A.2.5 Non-convex guarantees

Minimizing smooth non-convex functions $f$ is a complex yet interesting problem. In Agarwal et al. [1], this problem is reduced to online convex optimization, where a sequence of objectives $f_{t}(w) = f(w) + c\| w - w_{t}\|_{2}^{2}$ is minimized. Using this approach, Agarwal et al. [1] established convergence guarantees for reaching a stationary point via regret minimization. Thus non-convex guarantees can be obtained from regret guarantees, which are our main focus in this paper.
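To make the analyzed update concrete: accumulate the banded statistics $H_t$ with $\lambda_t = G_\infty\sqrt{t}$ and $H_0 = \epsilon I_n$, precondition with the closed-form tridiagonal solution of (11), and step $w_{t+1} = w_t - \eta X_t g_t$. Below is a simplified, illustrative sketch assuming NumPy; the toy quadratic objective, step size, and the stand-in constant `G_inf` are our choices for the demo, not the paper's tuned settings.

```python
import numpy as np

def chain_preconditioner(H):
    # Closed-form tridiagonal solution of the LogDet subproblem (Theorem 3.1).
    n = H.shape[0]
    X = np.zeros((n, n))
    for i in range(n - 1):
        det2 = H[i, i] * H[i + 1, i + 1] - H[i, i + 1] ** 2
        X[i, i + 1] = X[i + 1, i] = -H[i, i + 1] / det2
    for i in range(n):
        s = 1.0
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                s += H[i, j] ** 2 / (H[i, i] * H[j, j] - H[i, j] ** 2)
        X[i, i] = s / H[i, i]
    return X

rng = np.random.default_rng(1)
n, T = 8, 200
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
f = lambda w: 0.5 * np.sum((A @ w - b) ** 2)   # toy quadratic objective
G_inf = 1.0                                    # stand-in scale constant, not the true G_inf
eps_hat, eta = 0.5, 1e-2
H = eps_hat * G_inf * np.sqrt(T) * np.eye(n)   # H_0 = eps * I_n with eps = eps_hat*G_inf*sqrt(T)
band = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 1
w = np.zeros(n)
loss0 = f(w)
for t in range(1, T + 1):
    g = A.T @ (A @ w - b)                      # gradient of f at w
    lam = G_inf * np.sqrt(t)                   # lambda_t = G_inf * sqrt(t)
    H += np.where(band, np.outer(g, g) / lam, 0.0)   # banded projection P_G of g g^T / lambda_t
    w -= eta * chain_preconditioner(H) @ g     # w_{t+1} = w_t - eta * X_t * g_t
assert f(w) < loss0
```

The effective step $\eta X_t$ shrinks as the diagonal of $H_t$ grows, matching the $1/\sqrt{t}$ scaling used throughout the regret analysis; the initialization $H_0 = \epsilon I_n$ also keeps every $2\times 2$ minor positive, so the closed-form division is always well-defined here.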
# A.3 Numerical stability

In this section we conduct a perturbation analysis to derive an end-to-end componentwise condition number (pg. 135, problem 7.11 in [27]) upper bound for the tridiagonal explicit solution in Theorem 3.1. In addition, we devise Algorithm 3 to reduce this condition number upper bound for the tridiagonal sparsity structure and to be robust to inputs $H_{t}$ that don't satisfy the non-degeneracy condition: every principal submatrix of $H_{t}$ corresponding to a complete subgraph of $\mathcal{G}$ must be non-singular.

Theorem A.10 (Condition number of tridiagonal LogDet subproblem (11)). Let $H \in S_n^{++}$ be such that $H_{ii} = 1$ for $i \in [n]$. Let $\Delta H$ be a symmetric perturbation such that $\Delta H_{ii} = 0$ for $i \in [n]$, and $H + \Delta H \in S_n^{++}$. Let $P_{\mathcal{G}}(H)$ be the input to (11), where $\mathcal{G}$ is a chain graph; then

$$
\kappa_{\infty}^{\ell d} \leq \max_{i \in [n-1]} 2 / \left(1 - \beta_{i}^{2}\right) = \hat{\kappa}_{\infty}^{\ell d}, \tag {33}
$$

where $\beta_{i} = H_{ii+1}$ and $\kappa_{\infty}^{\ell d} :=$ componentwise condition number of (11) for the perturbation $\Delta H$.

The tridiagonal LogDet problem with inputs $H$ as in Theorem A.10 has a high condition number when $1 - \beta_{i}^{2} = H_{ii} - H_{ii+1}^{2} / H_{i+1i+1}$ is small, and as a result the preconditioner $X_{t}$ in SONew (Algorithm 1) has high componentwise relative errors. We develop Algorithm 3 to be robust to degenerate inputs $H$, given that $H_{ii} > 0$. It finds a subgraph $\tilde{\mathcal{G}}$ of $\mathcal{G}$ for which the non-degeneracy conditions in Theorem 3.2 are satisfied and (14) is well-defined. This is done by removing edges which cause $H_{I_jI_j}$ to be singular or the Schur complement $(H_{jj} - H_{I_jj}^T H_{I_jI_j}^{-1} H_{I_jj})$ to be small. In the following theorem we also show that the condition number upper bound in Theorem A.10 is reduced in the tridiagonal case.
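The edge-removal idea can be sketched for the chain case as follows. This is our simplified illustration (NumPy, helper name ours), not the paper's exact SPARSIFIED_INVERSE routine: an edge $(i, i+1)$ is dropped whenever its $2\times 2$ Schur complement $H_{ii} - H_{ii+1}^2/H_{i+1i+1}$ is at most a tolerance $\gamma$, and the explicit solution of Theorem 3.1 is then applied on the pruned graph.

```python
import numpy as np

def stable_chain_preconditioner(H, gamma=1e-6):
    # Drop chain edges whose 2x2 Schur complement is <= gamma (degenerate),
    # then apply the explicit tridiagonal solution on the pruned subgraph.
    n = H.shape[0]
    Hs = H.copy()
    for i in range(n - 1):
        schur = H[i, i] - H[i, i + 1] ** 2 / H[i + 1, i + 1]
        if schur <= gamma:
            Hs[i, i + 1] = Hs[i + 1, i] = 0.0   # remove edge (i, i+1)
    X = np.zeros((n, n))
    for i in range(n - 1):
        det2 = Hs[i, i] * Hs[i + 1, i + 1] - Hs[i, i + 1] ** 2
        X[i, i + 1] = X[i + 1, i] = -Hs[i, i + 1] / det2
    for i in range(n):
        s = 1.0
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                s += Hs[i, j] ** 2 / (Hs[i, i] * Hs[j, j] - Hs[i, j] ** 2)
        X[i, i] = s / Hs[i, i]
    return X

# A degenerate input: H[0,1]^2 == H[0,0] * H[1,1], so the naive formula
# would divide by zero on the first edge.
H = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.5],
              [0.0, 0.5, 1.0]])
X = stable_chain_preconditioner(H)
assert np.all(np.isfinite(X))
assert np.all(np.linalg.eigvalsh(X) > 0)   # pruned preconditioner stays SPD
```

After pruning the degenerate first edge, the returned preconditioner is finite and positive definite, while the surviving edge still contributes its off-diagonal correction.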
To test the robustness of this method, we conducted an ablation study in Table 5 on an Autoencoder benchmark (from Section 5) in bfloat16, where we demonstrate a noticeable improvement in performance when Algorithm 3 is used.

Theorem A.11 (Numerically stable algorithm). Algorithm 3 finds a subgraph $\tilde{\mathcal{G}}$ of $\mathcal{G}$ such that the explicit solution for $\tilde{\mathcal{G}}$ in (14) is well-defined. Furthermore, when $\mathcal{G}$ is a tridiagonal/chain graph, the component-wise condition number upper bound in (33) is reduced upon using Algorithm 3, $\hat{\kappa}_{\ell d}^{\tilde{\mathcal{G}}} < \hat{\kappa}_{\ell d}^{\mathcal{G}}$, where $\hat{\kappa}_{\ell d}^{\tilde{\mathcal{G}}}$, $\hat{\kappa}_{\ell d}^{\mathcal{G}}$ are defined as in Theorem A.10 for the graphs $\tilde{\mathcal{G}}$ and $\mathcal{G}$ respectively.

The proofs for Theorems A.10 and A.11 are given in the following subsections.

Algorithm 3 Numerically stable banded LogDet solution
1: Input: $\mathcal{G}$ - tridiagonal or banded graph, $H$ - symmetric matrix in $\mathbb{R}^{n\times n}$ with sparsity structure $\mathcal{G}$ and $H_{ii} > 0$, $\gamma$ - tolerance parameter for small Schur complements.
2: Output: a subgraph $\tilde{\mathcal{G}}$ of $\mathcal{G}$ without any degenerate cases from Lemma A.13, and the preconditioner $\hat{X}$ corresponding to the subgraph.
3: Let $E_{i} = \{(i,j):(i,j)\in E_{\mathcal{G}}\}$ be the edges from vertex $i$ to its neighbours in graph $\mathcal{G}$.
4: Let $V_{i}^{+} = \{j:i < j,(i,j)\in E_{\mathcal{G}}\}$ and $V_{i}^{-} = \{j:i > j,(i,j)\in E_{\mathcal{G}}\}$ denote the positive and negative neighbourhoods of vertex $i$.
+5: Let $K = \left\{i: H_{ii} - H_{I_i i}^T H_{I_i I_i}^{-1}H_{I_i i} \text{ is undefined or } \leq \gamma \right\}$
6: Consider a new subgraph $\tilde{\mathcal{G}}$ with edges $E_{\tilde{\mathcal{G}}} = E_{\mathcal{G}}\setminus (\bigcup_{i\in K}E_{i}\cup (V_{i}^{+}\times V_{i}^{-}))$
7: return $\hat{X}\coloneqq$ SPARSIFIED_INVERSE $(\tilde{H}_t,\tilde{\mathcal{G}})$ , where $\tilde{H}_t = P_{\tilde{\mathcal{G}}}(\bar{H}_t)$ + +# A.3.1 Condition number analysis + +Theorem A.12 (Full version of Theorem A.10). Let $H \in S_n^{++}$ be such that $H_{ii} = 1$ for $i \in [n]$ , and let $\Delta H$ be a symmetric perturbation such that $\Delta H_{ii} = 0$ for $i \in [n]$ and $H + \Delta H \succ 0$ . Let $\hat{X} = \arg \min_{X \in S_n(\mathcal{G})^{++}} \mathrm{D}_{\ell \mathrm{d}}(X, H^{-1})$ and $\hat{X} + \Delta \hat{X} = \arg \min_{X \in S_n(\mathcal{G})^{++}} \mathrm{D}_{\ell \mathrm{d}}(X, (H + \Delta H)^{-1})$ , where $\mathcal{G}$ is the chain/tridiagonal sparsity graph and $S_n(\mathcal{G})^{++}$ denotes the positive definite matrices which follow the sparsity pattern $\mathcal{G}$ . Then

$$
\begin{array}{l} \kappa_ {\ell d} = \lim _ {\epsilon \rightarrow 0} \sup \left\{\frac {\left| \Delta \hat {X} _ {i j} \right|}{\epsilon \left| \hat {X} _ {i j} \right|}: | \Delta H _ {k, l} | \leq | \epsilon H _ {k, l} |, (k, l) \in E _ {\mathcal {G}} \right\} \\ \leq \max _ {i \in [ n - 1 ]} 2 / (1 - \beta_ {i} ^ {2}) \\ \end{array}
$$

where $\kappa_{\ell d} :=$ condition number of the LogDet subproblem and $\beta_i = H_{ii+1} / \sqrt{H_{ii}H_{i+1i+1}}$ . + +Proof. Consider the off-diagonal entries, for which $\hat{X}_{ii+1} = -H_{ii+1} / (1 - H_{ii+1}^2) = f(H_{ii+1})$ and $(\hat{X} + \Delta \hat{X})_{ii+1} = f(H_{ii+1} + \Delta H_{ii+1})$ , where $f(x) = -x / (1 - x^2)$ .
Let $y = f(x)$ , $\hat{y} = f(x + \Delta x)$ and $|\Delta x / x| \leq \epsilon$ ; then, using a Taylor series,

$$
\begin{array}{l} \left| \frac {(\hat {y} - y)}{y} \right| = \left| \frac {x f ^ {\prime} (x)}{f (x)} \right| \left| \frac {\Delta x}{x} \right| + O ((\Delta x) ^ {2}) \\ \Longrightarrow \lim _ {\epsilon \rightarrow 0} \left| \frac {(\hat {y} - y)}{\epsilon y} \right| \leq \left| \frac {x f ^ {\prime} (x)}{f (x)} \right| \\ \end{array}
$$

Using the above inequality with $x\coloneqq H_{ii + 1}$ and $y\coloneqq \hat{X}_{ii + 1}$ ,

$$
\begin{array}{l} \lim _ {\epsilon \rightarrow 0} \left| \frac {\Delta \hat {X} _ {i i + 1}}{\epsilon \hat {X} _ {i i + 1}} \right| \leq \frac {1 + H _ {i i + 1} ^ {2}}{1 - H _ {i i + 1} ^ {2}} \tag {34} \\ \leq \frac {2}{1 - H _ {i i + 1} ^ {2}} \\ \end{array}
$$

Let $g(x) = x^{2} / (1 - x^{2})$ , and let $y_{1} = g(x_{1}), y_{2} = g(x_{2}), \hat{y}_{1} = g(x_{1} + \Delta x_{1}), \hat{y}_{2} = g(x_{2} + \Delta x_{2})$ . Using a Taylor series and the identity $x g'(x)/g(x) = 2/(1 - x^2)$ ,

$$
\begin{array}{l} \left| \frac {(\hat {y} _ {1} - y _ {1})}{y _ {1}} \right| = \left| \frac {x _ {1} g ^ {\prime} (x _ {1})}{g (x _ {1})} \right| \left| \frac {\Delta x _ {1}}{x _ {1}} \right| + O \left(\left(\Delta x _ {1}\right) ^ {2}\right) \\ \left| \frac {(\hat {y} _ {2} - y _ {2})}{y _ {2}} \right| = \left| \frac {x _ {2} g ^ {\prime} (x _ {2})}{g (x _ {2})} \right| \left| \frac {\Delta x _ {2}}{x _ {2}} \right| + O ((\Delta x _ {2}) ^ {2}) \\ \Rightarrow \lim _ {\epsilon \rightarrow 0} \frac {\Delta y _ {1} + \Delta y _ {2}}{\epsilon (1 + y _ {1} + y _ {2})} \leq \max \left(\frac {2}{1 - x _ {1} ^ {2}}, \frac {2}{1 - x _ {2} ^ {2}}\right) \\ \end{array}
$$

Putting $x_{1} \coloneqq H_{ii + 1}$ , $x_{2} \coloneqq H_{ii - 1}$ and analyzing $y_{1} = H_{ii + 1}^{2} / (1 - H_{ii + 1}^{2})$ and $y_{2} = H_{ii - 1}^{2} / (1 - H_{ii - 1}^{2})$ results in

$$
\lim _ {\epsilon \rightarrow 0} \left| \frac {\Delta \hat {X} _ {i i}}{\epsilon \hat {X} _ {i i}} \right| \leq \max
\left(\frac {2}{1 - H _ {i i + 1} ^ {2}}, \frac {2}{1 - H _ {i i - 1} ^ {2}}\right) \tag {35}
$$

where we used $\hat{X}_{ii} = 1 + H_{ii+1}^2 / (1 - H_{ii+1}^2) + H_{ii-1}^2 / (1 - H_{ii-1}^2)$ . Putting together Equation (35) and Equation (34), the theorem is proved. + +# A.3.2 Degenerate $H_{t}$ + +In SONew (Algorithm 1), the matrix $H_{t} = P_{\mathcal{G}}(\sum_{s=1}^{t} g_{s} g_{s}^{T} / \lambda_{t})$ generated in line 4 can be such that $\sum_{s=1}^{t} g_{s} g_{s}^{T} / \lambda_{t}$ is not positive definite, so the Schur complements $H_{ii} - H_{ii+1}^{2} / H_{i+1i+1}$ can be zero, giving an infinite condition number $\kappa_{\infty}^{\ell d}$ by Theorem A.10. The following lemma describes such cases in detail for the more general banded sparsity structure. + +Lemma A.13 (Degenerate inputs to banded LogDet subproblem). Let $H = P_{\mathcal{G}}(GG^T)$ when $\epsilon = 0$ in Algorithm 1, where $G \in \mathbb{R}^{n \times T}$ and $g_{1:T}^{(i)}$ is the $i^{th}$ row of $G$ , i.e., the gradients of parameter $i$ over $T$ rounds; then $H_{ij} = \left\langle g_{1:T}^{(i)}, g_{1:T}^{(j)} \right\rangle$ . + +- Case 1: For tridiagonal sparsity structure $\mathcal{G}$ : if $g_{1:T}^{(j)} = g_{1:T}^{(j+1)}$ , then $H_{jj} - H_{jj+1}^2/H_{j+1j+1} = 0$ .
- Case 2: For $b > 1$ in (14): If $\mathrm{rank}(H_{J_j J_j}) = \mathrm{rank}(H_{I_j I_j}) = b$ , then $(H_{jj} - H_{I_j j}^T H_{I_j I_j}^{-1} H_{I_j j}) = 0$ and $D_{jj} = \infty$ . If $\mathrm{rank}(H_{I_j I_j}) < b$ then the inverse $H_{I_j I_j}^{-1}$ doesn't exist and $D_{jj}$ is not well-defined. + +Proof. For $b = 1$ , if $g_{1:T}^{(j)} = g_{1:T}^{(j+1)}$ , then $H_{jj+1} = H_{jj} = H_{j+1j+1} = \left\| g_{1:T}^{(j)} \right\|_2^2$ , thus $H_{jj}-H_{jj+1}^2/H_{j+1j+1} = 0$ .
+ +For $b > 1$ : if $\mathrm{rank}(H_{J_j J_j}) = \mathrm{rank}(H_{I_j I_j}) = b$ , then by the Guttman rank additivity formula, $\mathrm{rank}(H_{jj} - H_{I_j j}^T H_{I_j I_j}^{-1} H_{I_j j}) = \mathrm{rank}(H_{J_j J_j}) - \mathrm{rank}(H_{I_j I_j}) = 0$ , thus $H_{jj} - H_{I_j j}^T H_{I_j I_j}^{-1} H_{I_j j} = 0$ . + +Furthermore, if $\operatorname{rank}(H) \leq b$ , then all $(b + 1) \times (b + 1)$ principal submatrices of $H$ have rank at most $b$ ; thus for all $j$ , $H_{J_j J_j}$ is singular, and $D_{jj}$ for all $j$ is not well-defined. + +If $GG^{T} = \sum_{i=1}^{T} g_{i} g_{i}^T$ is a singular matrix, then the solution to the LogDet problem might not be well-defined, as shown in Lemma A.13. For instance, Case 1 can occur when preconditioning the input layer of an image-based DNN with flattened image inputs, where the $j^{th}$ and $(j + 1)^{th}$ pixels can be highly correlated throughout the dataset. Case 2 can occur in the first $b$ iterations of Algorithm 1, when the rank of the submatrices satisfies $\mathrm{rank}(H_{I_j I_j}) < b$ and $\epsilon = 0$ . + +Table 3: float32 experiments on Autoencoder benchmark using different band sizes. Band size 0 corresponds to diag-SONew and 1 corresponds to tridiag-SONew. We see the training loss improving as the band size increases.
| Band size | 0 (diag-SONew) | 1 (tridiag-SONew) | 4 | 10 |
| --- | --- | --- | --- | --- |
| Train CE loss | 53.025 | 51.723 | 51.357 | 51.226 |
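For the chain graph, the edge-dropping rule of Algorithm 3 takes a particularly simple form. The sketch below is our own illustrative implementation (the function name is ours, not the paper's): a vertex $i$ enters $K$ when its Schur complement $H_{ii} - H_{i,i+1}^2/H_{i+1,i+1}$ is undefined or at most $\gamma$, and every edge incident to a vertex in $K$ is then removed, which also screens out Case 1 of Lemma A.13.

```python
import numpy as np

def stable_tridiag_subgraph(H, gamma=1e-6):
    """Sketch of Algorithm 3 specialized to a chain graph.

    K collects vertices whose Schur complement w.r.t. the positive neighbour,
    H_ii - H_{i,i+1}^2 / H_{i+1,i+1}, is undefined or <= gamma; all edges
    incident to a vertex in K are dropped. Returns the surviving edges
    (i, i+1) of the subgraph G~."""
    n = H.shape[0]
    K = set()
    for i in range(n - 1):
        if H[i + 1, i + 1] <= 0:  # inverse of H_{I_i I_i} undefined
            K.add(i)
        elif H[i, i] - H[i, i + 1] ** 2 / H[i + 1, i + 1] <= gamma:
            K.add(i)
    return [i for i in range(n - 1) if i not in K and i + 1 not in K]

# Case 1 of Lemma A.13: identical gradient rows make a Schur complement zero,
# so the degenerate edge (0, 1) is dropped while edge (1, 2) survives.
G = np.array([[1.0, 2.0], [1.0, 2.0], [3.0, 1.0]])  # rows 0 and 1 identical
H = G @ G.T
edges = stable_tridiag_subgraph(H)
```

Here $H_{00} - H_{01}^2/H_{11} = 5 - 25/5 = 0 \leq \gamma$, so vertex 0 lands in $K$ and only the edge $(1,2)$ remains.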
+ +# A.3.3 Numerically Stable SONew proof + +Proof of Theorem A.11 + +Let $I_{i} = \{j : i < j, (i,j) \in E_{\mathcal{G}}\}$ and $I_{i}' = \{j : i < j, (i,j) \in E_{\tilde{\mathcal{G}}}\}$ . Let $K = \{i : H_{ii} - H_{I_i i}^T H_{I_i I_i}^{-1} H_{I_i i} \text{ is undefined or } 0, i \in [n]\}$ denote the vertices removed by the algorithm; then, for the new graph $\tilde{\mathcal{G}}$ , $D_{ii} = 1 / H_{ii}$ for all $i \in K$ , since $H_{ii} > 0$ . + +Let $\bar{K} = \left\{i:H_{ii} - H_{I_i i}^T H_{I_i I_i}^{-1}H_{I_i i} > 0,i\in [n]\right\}$ . For some $j\in \bar{K}$ , let

$$
l = \arg \min \left\{i: j < i, i \in K \cap I _ {j} \right\}
$$

denote the nearest connected vertex higher than $j$ for which $D_{ll}$ is undefined or zero. Then, according to the definition of $E_{\tilde{\mathcal{G}}}$ in Algorithm 3, $I_j' = \{j + 1, \dots, l - 1\} \subset I_j$ . Since $D_{jj}$ is well-defined, $H_{I_jI_j}$ is invertible, which makes it a positive definite matrix (since $H$ is PSD). Since $H_{jj} - H_{I_j j}^T H_{I_jI_j}^{-1} H_{I_j j} > 0$ , by the Guttman rank additivity formula $H_{J_jJ_j} \succ 0$ , where $J_j = I_j \cup \{j\}$ . Since $H_{J_j'J_j'}$ is a submatrix of $H_{J_jJ_j}$ , where $J_j' = I_j' \cup \{j\}$ , it is positive definite, and hence its Schur complement $H_{jj} - H_{I_j' j}^T H_{I_j'I_j'}^{-1} H_{I_j' j} > 0$ . Thus for all $j \in [n]$ , the corresponding $D_{jj}$ 's are well-defined in the new graph $\tilde{\mathcal{G}}$ . + +Note that $\hat{\kappa}_{\ell d}^{\tilde{\mathcal{G}}} = \max_{i\in \bar{K}}2 / (1 - \beta_i^2) < \max_{i\in [n - 1]}2 / (1 - \beta_i^2) = \hat{\kappa}_{\ell d}^{\mathcal{G}}$ for the tridiagonal graph, where $\beta_{i} = H_{ii + 1}$ , in the case where $H_{ii} = 1$ . This is because the $\arg \max_{i\in [n - 1]}2 / (1 - \beta_i^2)$ lies in $K$ .
+ +# A.4 Additional Experiments, ablations, and details + +# A.4.1 Ablations + +Effect of band size in banded-SONew Increasing the band size in banded-SONew captures more correlation between parameters, and hence should lead to better preconditioners. We confirm this through experiments on the Autoencoder benchmark, where we take band size $= 0$ (diag-SONew), 1 (tridiag-SONew), 4, and 10 in Table 3. + +Effect of mini-batch size To find the effect of mini-batch size, in Table 4 we empirically compare SONew with state-of-the-art first-order methods such as Adam and RMSProp, and the second-order method Shampoo. We see that SONew's performance doesn't deteriorate much when using a smaller or larger batch size; first-order methods, on the other hand, suffer significantly. We also notice that Shampoo doesn't perform better than SONew in these regimes. + +Table 4: Comparison on Autoencoder with different batch-sizes
| Baseline \ Batch size | 100 | 1000 | 5000 | 10000 |
| --- | --- | --- | --- | --- |
| RMSProp | 55.61 | 53.33 | 58.69 | 64.91 |
| Adam | 55.67 | 54.39 | 58.93 | 65.37 |
| Shampoo(20) | 53.91 | 50.70 | 53.52 | 54.90 |
| tds | 53.84 | 51.72 | 54.24 | 55.87 |
| bds-4 | 53.52 | 51.35 | 53.03 | 54.89 |
+ +Effect of Numerical Stability Algorithm 3 On tridiag-SONew and banded-4-SONew, we observe that using Algorithm 3 improves training loss. We present in Table 5 results where we observed significant performance improvements. + +Table 5: bfloat16 experiments on Autoencoder benchmark with and without Algorithm 3. We observe an improvement in training loss when using Algorithm 3
| Optimizer | Train CE loss - without Algorithm 3 | Train CE loss - with Algorithm 3 |
| --- | --- | --- |
| tridiag-SONew | 53.150 | 51.936 |
| band-4-SONew | 51.950 | 51.84 |
+ +Table 6: A rough estimate of memory requirement comparisons of different optimizers tested across benchmarks. + +
| Benchmark | # model parameters | K-FAC | Shampoo | FishLeg | Eva | Adam | SGD+Momentum | RMSprop | tds-SONew |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Autoencoder | n=1.4M | 5.56n | 6.56n | 4.28n | n | 2n | n | n | 3n |
| GraphNetwork | n=3.5M | 8.6n | 10.6n | 4.8n | n | 2n | n | n | 3n |
| Vision Transformer | n=22M | 6.4n | 7.2n | 3.7n | n | 2n | n | n | n |
| Language Model | n=1.3B | 5.6n | 6.6n | 3.3n | n | 2n | n | n | 3n |
+ +# A.4.2 Memory Requirements + +We present a list of approximate memory requirements of different optimizers across different benchmarks in Table 6. Note that for K-FAC and Shampoo, because the preconditioner is updated only once every few steps, they additionally require storing the latest computed preconditioners along with the statistics, causing an even higher memory overhead. + +# A.4.3 Hyperparameter search space + +We provide the hyperparameter search space for the experiments presented in Section 5. We search over $2k$ hyperparameter configurations for each Autoencoder experiment using a Bayesian Optimization package. The search ranges are: first order momentum term $\beta_{1} \in [1e - 1, 0.999]$ , second order momentum term $\beta_{2} \in [1e - 1, 0.999]$ , learning rate $\in [1e - 7, 1e - 1]$ , $\epsilon \in [1e - 10, 1e - 1]$ . We give the optimal hyperparameter value for each experiment in Table 12. For the ViT and GraphNetwork benchmarks, we search $\beta_{1}, \beta_{2} \in [0.1, 0.999]$ , $lr \in [1e - 5, 1e - 1]$ , $\epsilon \in [1e - 9, 1e - 4]$ , weight decay $\in [1e - 5, 1.0]$ , learning rate warmup $\in \{2\%, 5\%, 10\%\}$ of total train steps, dropout $\in \{0.0, 0.1\}$ , and label smoothing over $\{0.0, 0.1, 0.2\}$ . We use a cosine learning rate schedule. Batch size was kept at 1024 and 512 for Vision Transformer and GraphNetwork, respectively. We sweep over 200 hyperparameter configurations in the search space for all the optimizers. + +For rfdSON [37], there's no $\epsilon$ hyperparameter. In addition to the remaining hyperparameters, we tune $\alpha \in \{1e - 5,1.0\}$ (which plays a similar role as $\epsilon$ ) and $\mu_t \in [1e - 5,0.1]$ . + +For the LLM [47] benchmark, we only tune the learning rate $\in \{1e - 2, 1e - 3, 1e - 4\}$ while keeping the rest of the hyperparameters constant. This is due to the high cost of running experiments; hence we only tune the most important hyperparameter.
For Adafactor [45], we use factored=False, decay method=adam, $\beta_{1} = 0.9$ , weight decay $= 1e - 3$ , decay factor $= 0.99$ , and gradient clipping $= 1.0$ . + +# A.4.4 Additional Experiments + +VIT and GraphNetwork Benchmarks: In Figure 5 we plot the training loss curves of the runs corresponding to the best validation runs in Figure 1. Furthermore, from an optimization point of view, we plot the best train loss runs in Figure 6, obtained by searching over 200 hyperparameter configurations. We find that tridiag-SONew is $9\%$ and $80\%$ relatively better in the ViT and GraphNetwork benchmarks respectively (Figure 6), compared to Adam (the next best memory efficient baseline). + +Autoencoder float32 and bfloat16 experiments: We provide curves of all the baselines and SONew in Figure 4(a) and the corresponding numbers in Table 7 for the float32 experiments. + +To test the numerical stability of SONew and compare it with other algorithms in the low precision regime, we also conduct bfloat16 experiments on the Autoencoder benchmark (Table 8). We notice that SONew undergoes the least degradation. Tridiagonal-sparsity SONew CE loss increases by only 0.21 absolute difference (from 51.72 in float32 (Table 7) to 51.93), whereas Shampoo and Adam incur a 0.70 loss increase. It's worthwhile to note that SONew performs better than all first order methods while taking similar time and linear memory, whereas Shampoo, while performing marginally better, is $22 \times$ slower than tridiag-SONew. + +Table 7: float32 experiments on Autoencoder benchmark. We observe that diag-SONew performs the best among all first order methods while taking similar time. tridiag and band-4 perform significantly better than first order methods while requiring similar linear space and time. Shampoo performs best but takes $\mathcal{O}(d_1^3 + d_2^3)$ time for computing the preconditioner of a linear layer of size $d_1 \times d_2$ , whereas our methods take $\mathcal{O}(d_1 d_2)$ time, as mentioned in Section 3.3.
rfdSON takes similar space as SONew but performs considerably worse. + +
First Order Methods:

| Optimizer | SGD | Nesterov | Adagrad | Momentum | RMSProp | Adam | diag-SONew |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Train CE loss | 67.654 | 59.087 | 54.393 | 58.651 | 53.330 | 53.591 | 53.025 |
| Time(s) | 62 | 102 | 62 | 67 | 62 | 62 | 63 |

Second Order Methods:

| Optimizer | Shampoo(20) | rfdSON(1) | rfdSON(4) | tridiag-SONew | band-4-SONew |
| --- | --- | --- | --- | --- | --- |
| Train CE loss | 50.702 | 53.56 | 52.97 | 51.723 | 51.357 |
| Time(s) | 371 | 85 | 300 | 70 | 260 |
+ +Table 8: bfloat16 experiments on Autoencoder benchmark to test the numerical stability of SONew and the robustness of Algorithm 3. We notice that diag-SONew degrades only marginally (0.26 absolute difference) compared to its float32 performance; tridiag-SONew and band-4-SONew show similar behaviour. Shampoo performs the best but has a considerable drop (0.70) in performance compared to float32, due to using matrix inverses, and is slower due to its cubic time complexity for computing preconditioners. The Shampoo implementation uses 16-bit quantization to make it work in the 16-bit setting, leading to a further slowdown; hence its running time in bfloat16 is even higher than in float32.
First Order Methods:

| Optimizer | SGD | Nesterov | Adagrad | Momentum | RMSProp | Adam | diag-SONew |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Train CE loss | 80.454 | 72.975 | 68.854 | 70.053 | 53.743 | 54.328 | 53.29 |
| Train time(s) | 36 | 43 | 37 | 36 | 37 | 38 | 44 |

Second Order Methods:

| Optimizer | Shampoo(20) | rfdSON(1) | rfdSON(4) | tridiag-SONew | band-4-SONew |
| --- | --- | --- | --- | --- | --- |
| Train CE loss | 51.401 | 57.42 | 55.53 | 51.937 | 51.84 |
| Train time(s) | 1245 | 80 | 284 | 55 | 230 |
![](images/b20b6b7412c30c2c3d747fc19ed4e92c7d8b2d714e518d2265686bb74d8308fd.jpg)
(a) float32 - autoencoder

![](images/289598c8713c40b8dbd0680332742143d4ea322dfc9559785b1073845c1ca034.jpg)
(b) bfloat16 - autoencoder
Figure 4: Training curves of all the baselines for the Autoencoder benchmark: (a) float32 training, (b) bfloat16 training. + +The corresponding loss curves are given in Figure 4(b). + +Note: In the main paper, our reported numbers for rfdSON on the Autoencoder benchmark in Table 2 for the float32 experiments are erroneous. Please consider the numbers provided in Table 7 and the corresponding curve in Figure 4(a). Note that there's no qualitative change in the results and none of the claims made in the paper are affected: SONew is still significantly better than rfdSON. We also meticulously checked all other experiments, and they do not have any errors. + +Autoencoder on KFAC, FishLeg, Eva: For completeness, we compare SONew against KFAC [38], FishLeg [20], and Eva [52] on the Autoencoder benchmark as used in their official implementations. + +![](images/599cf1380189a12abbe65b8c3fef137477824700aecc08327b884b79497f4b1e.jpg)
(a) VIT train CE loss

![](images/2a15920600bcf9428db0b69ebf2ca6d978bc81cd215973d458039b053d311837.jpg)
(b) GraphNetwork train CE loss
Figure 5: Train loss corresponding to the best validation runs in Figure 1: (a) VIT benchmark, (b) GraphNetwork benchmark. We observe that tridiag-SONew matches or performs better than Adam.

![](images/eabd1ae526500d7ae2dc9ea9ed999795b9f83d41263fe3c5fde29dd169ba9a48.jpg)
(a) Best VIT train CE loss

![](images/8c58f167b9b9af26aeb97bbd4a622b20eb5999e51707b2079564d0cb47d884b0.jpg)
(b) Best GraphNetwork train CE loss

![](images/40cb7c6b4ee9e94298269c660c44edc02b63e434ff36788350cf25ece849a377.jpg)
Figure 6: Best train loss achieved during hyperparameter tuning: (a) VIT benchmark, (b) GraphNetwork benchmark.
We observe that tridiag-SONew significantly outperforms Adam, while being comparable to or better than Shampoo.
Figure 7: Autoencoder benchmark run using PyTorch on KFAC, FishLeg, Eva, and tridiag-SONew. We notice that tridiag-SONew beats all other baselines by a large margin. + +The main difference is that their implementation uses the ReLU activation, compared to Tanh, which we used for all our Autoencoder experiments. As these baselines don't have JAX implementations, we use their official PyTorch implementations and run tridiag-SONew in PyTorch as well. Hyperparameter search is conducted for SONew similarly to what is reported above, over the learning rate, $\beta_{1},\beta_{2}$ , and $\epsilon$ . For KFAC and Eva, rather than $\beta_{2}$ , the damping factor is tuned over $[1e - 5,10]$ (the default value specified is 0.03). KL clip is tuned as well, over $[1e - 5,1.0]$ . Preconditioners are updated once every 15 iterations to have the same wall-clock time as the other baselines and SONew. For FishLeg, the auxiliary learning rate is tuned $\in [1e - 7,1e - 1]$ and damping $\in [1e - 5,1.0]$ . All other hyperparameters are tuned similarly to SONew. Eva is trained for 100 epochs, and for the other methods we change the number of epochs such that each experiment takes the same amount of time. Each optimizer is tuned using 600 hyperparameter trials. The results are in Figure 7, where we notice that tridiag-SONew beats all the baselines by a large margin. + +![](images/afc0b122584e7612a1963600656b49859b872a4df08aaaad9cf4880c2b9a2923.jpg)
(a) Validation CE loss

![](images/d7bcce655ac85930d559f553c851719353dc65b5ad87adc298baa86814f86bf7.jpg)
(b) Train CE loss
Figure 8: We observe mixed results for the 248M parameter language model benchmark. This is possibly because 48 trials were insufficient for optimal tuning. We leave thorough tuning and investigation of the above observation as future work.
+ +Adam vs SONew on 248M Language Model: We conduct an additional experiment on a 248M parameter transformer architecture [50] language model. The model is trained on WikiText103, introduced in [39]. We train the model for 3 epochs, with 8M tokens per epoch and a batch size of 8k tokens. We search over 48 hyperparameter configurations, tuning learning rate $\in \{2e - 2, 1e - 2, 5e - 3, 1e - 3\}$ , $\beta_{2} \in \{0.99, 0.999\}$ , weight decay $\in \{0.0, 0.1\}$ , and $\epsilon \in \{1e - 10, 1e - 8, 1e - 6\}$ , while fixing $\beta_{1} = 0.9$ . Validation and training losses are given in Figure 8. Our observations indicate mixed results. While tds-SONew exhibits superior validation performance, Adam outperforms it in training metrics. We believe that the 48 trials might have been insufficient for optimal tuning. It's conceivable that with further trials, SONew's training loss could surpass that of Adam. We leave this line of investigation for future work. + +# A.4.5 Convex experiments + +As our regret bound applies to convex optimization, we compare SONew to rfdSON [37], another recent memory-efficient second-order Newton method. We follow [37] for the experiment setup: each dataset is split randomly into $70\% / 30\%$ train and test sets. Mean squared loss is used. For tridiag-SONew, we use a total of $2 \times d$ space for $d$ parameters. Hence, for a fair comparison, we show rfdSON with $m = 2$ . Since the code isn't open-sourced, we implemented it ourselves. In order to show reproducibility with respect to the reported numbers in [37], we include results with $m = 5$ as well. We see in Table 9 that tridiag-SONew consistently matches or outperforms rfdSON across all 3 benchmarks. Each experiment was run for 20 epochs and we report the best model's performance on the test set. + +Table 9: Comparison of rfdSON and tridiag-SONew in convex setting on three datasets.
We optimize the least squares loss $\sum_{t}(y_{t} - w^{T}x_{t})^{2}$ , where $w$ is the learnable parameter and $(x_{t},y_{t})$ is the $t^{th}$ training point. Reported numbers are the accuracy on the test set.
Table 10: (a) Dataset stats
| Dataset | # total points | dimension |
| --- | --- | --- |
| a9a | 32,561 | 123 |
| gisette | 6000 | 5000 |
| mnist | 11791 | 780 |
| Dataset | RFD-SON, m=2 | RFD-SON, m=5 | tridiag-SONew |
| --- | --- | --- | --- |
| a9a | 83.3 | 83.6 | 84.6 |
| gisette | 96.1 | 96.2 | 96.6 |
| mnist | 93.2 | 94.5 | 96.5 |
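The convex protocol above (random 70/30 split, squared loss, accuracy on the test set) can be sketched on synthetic data as below. This is our own plain-SGD stand-in purely to illustrate the objective and evaluation; the paper's methods (tridiag-SONew, rfdSON) precondition the same per-sample gradients, and the function name is ours.

```python
import numpy as np

def least_squares_sgd(X, y, lr=0.01, epochs=20, seed=0):
    """SGD on the squared loss sum_t (y_t - w^T x_t)^2 from the convex
    benchmark (unpreconditioned stand-in for illustration only)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for t in rng.permutation(n):
            # gradient of (y_t - w^T x_t)^2 w.r.t. w
            g = 2.0 * (X[t] @ w - y[t]) * X[t]
            w -= lr * g
    return w

# Protocol: random 70/30 split, fit on train, report accuracy on test
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true)               # binary labels, as in a9a/gisette/mnist
split = int(0.7 * len(X))
w = least_squares_sgd(X[:split], y[:split])
acc = np.mean(np.sign(X[split:] @ w) == y[split:])
```

On this noiseless synthetic data the least-squares fit recovers the labeling direction well, so the test accuracy is high; the tables above report the analogous accuracy on the real datasets.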
+ +Table 12: Optimal hyperparams for Autoencoder Benchmark + +Table 13: (a) float32 experiments optimal hyperparameters + +
| Baseline | β1 | β2 | ε | lr |
| --- | --- | --- | --- | --- |
| SGD | 0.99 | 0.91 | 8.37e-9 | 1.17e-2 |
| Nesterov | 0.914 | 0.90 | 3.88e-10 | 5.74e-3 |
| Adagrad | 0.95 | 0.90 | 9.96e-7 | 1.82e-2 |
| Momentum | 0.9 | 0.99 | 1e-5 | 6.89e-3 |
| RMSProp | 0.9 | 0.9 | 1e-10 | 4.61e-4 |
| Adam | 0.9 | 0.94 | 1.65e-6 | 3.75e-3 |
| Diag-SONew | 0.88 | 0.95 | 4.63e-6 | 1.18e-3 |
| Shampoo | 0.9 | 0.95 | 9.6e-9 | 3.70e-3 |
| tridiag | 0.9 | 0.96 | 1.3e-6 | 8.60e-3 |
| band-4 | 0.88 | 0.95 | 1.5e-3 | 5.53e-3 |
+ +Table 14: (b) bfloat16 experiments optimal hyperparameters + +
| Baseline | β1 | β2 | ε | lr |
| --- | --- | --- | --- | --- |
| SGD | 0.96 | 0.98 | 2.80e-2 | 1.35e-2 |
| Nesterov | 0.914 | 0.945 | 8.48e-9 | 6.19e-3 |
| Adagrad | 0.95 | 0.93 | 2.44e-5 | 2.53e-2 |
| Momentum | 0.9 | 0.99 | 0.1 | 7.77e-3 |
| RMSProp | 0.9 | 0.9 | 2.53e-10 | 4.83e-4 |
| Adam | 0.9 | 0.94 | 3.03e-10 | 3.45e-3 |
| Diag-SONew | 0.9 | 0.95 | 4.07e-6 | 8.50e-3 |
| Shampoo | 0.85 | 0.806 | 6.58e-4 | 5.03e-3 |
| tridiag | 0.83 | 0.954 | 1.78e-6 | 7.83e-3 |
| band-4 | 0.9 | 0.96 | 1.52e-6 | 4.53e-3 |
\ No newline at end of file diff --git a/acomputationallyefficientsparsifiedonlinenewtonmethod/images.zip b/acomputationallyefficientsparsifiedonlinenewtonmethod/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9d5585faacc9bf18fb86e84b1a5bc325409ad38a --- /dev/null +++ b/acomputationallyefficientsparsifiedonlinenewtonmethod/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf0e9594b51cc87e06310b81796a19e03243a2e00b3306799aa43feecd6d6b5e +size 1455721 diff --git a/acomputationallyefficientsparsifiedonlinenewtonmethod/layout.json b/acomputationallyefficientsparsifiedonlinenewtonmethod/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5ff6dce684e7283b7970c7f347dd6865bad10e6f --- /dev/null +++ b/acomputationallyefficientsparsifiedonlinenewtonmethod/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9aef892ba9c03f255dcf4df3907eb08a83089868894136278c29bf2da47b2913 +size 1373511 diff --git a/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/d528f9da-2a0d-454a-b8ba-9508d433fd11_content_list.json b/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/d528f9da-2a0d-454a-b8ba-9508d433fd11_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c1941b0ebb9520b388eca39c0eda73336baa25ff --- /dev/null +++ b/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/d528f9da-2a0d-454a-b8ba-9508d433fd11_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d87453258a4df899d591e26804052f3b81a219f05541cdb35e1385da5b01cc1 +size 764906 diff --git a/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/d528f9da-2a0d-454a-b8ba-9508d433fd11_model.json 
b/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/d528f9da-2a0d-454a-b8ba-9508d433fd11_model.json new file mode 100644 index 0000000000000000000000000000000000000000..f322126e9a04039b14411976c13638fbc6ef51cc --- /dev/null +++ b/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/d528f9da-2a0d-454a-b8ba-9508d433fd11_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bcdadf177f74e3bb367901adaf4ca5ccea950e3bcd13b28979a0d4d30918bc94 +size 837244 diff --git a/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/d528f9da-2a0d-454a-b8ba-9508d433fd11_origin.pdf b/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/d528f9da-2a0d-454a-b8ba-9508d433fd11_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f35b3273ce18520f357a66648e822b1f511fd931 --- /dev/null +++ b/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/d528f9da-2a0d-454a-b8ba-9508d433fd11_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba19e0787b4e230cce64c0576a9fa67efec2001336c4321d7f4803a851ce36de +size 1578656 diff --git a/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/full.md b/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/full.md new file mode 100644 index 0000000000000000000000000000000000000000..18cb3a506a63857850f9b0c2d571a2fe89421d25 --- /dev/null +++ b/acomputationandcommunicationefficientmethodfordistributednonconvexproblemsinthepartialparticipationsetting/full.md @@ -0,0 +1,3001 @@ +# A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting + +Alexander 
Tyurin + +KAUST + +Saudi Arabia + +alexandertiurin@gmail.com + +Peter Richtárik + +KAUST + +Saudi Arabia + +richtarik@gmail.com + +# Abstract + +We present a new method that includes three key components of distributed optimization and federated learning: variance reduction of stochastic gradients, partial participation, and compressed communication. We prove that the new method has optimal oracle complexity and state-of-the-art communication complexity in the partial participation setting. Regardless of the communication compression feature, our method successfully combines variance reduction and partial participation: we get the optimal oracle complexity, never need the participation of all nodes, and do not require the bounded gradients (dissimilarity) assumption. + +# 1 Introduction + +Federated and distributed learning have become very popular in recent years (Konečný et al., 2016; McMahan et al., 2017). The current optimization tasks require much computational resources and machines. Such requirements emerge in machine learning, where massive datasets and computations are distributed between cluster nodes (Lin et al., 2017; Ramesh et al., 2021). In federated learning, nodes, represented by mobile phones, laptops, and desktops, do not send their data to a server due to privacy and their huge number (Ramaswamy et al., 2019), and the server remotely orchestrates the nodes and communicates with them to solve an optimization problem. + +As in classical optimization tasks, one of the main current challenges is to find computationally efficient optimization algorithms. 
However, the nature of distributed problems induces many other challenges (Kairouz et al., 2021), including i) partial participation of nodes in algorithm steps, due to stragglers (Li et al., 2020) or communication delays (Vogels et al., 2021), and ii) a communication bottleneck: even if a node participates, it can be costly to transmit information to a server or other nodes (Alistarh et al., 2017; Ramesh et al., 2021; Kairouz et al., 2021; Sapio et al., 2019; Narayanan et al., 2019). It is necessary to develop a method that addresses these problems. + +# 2 Optimization Problem + +Let us consider the nonconvex distributed optimization problem + +$$
\min _ {x \in \mathbb {R} ^ {d}} \left\{f (x) := \frac {1}{n} \sum_ {i = 1} ^ {n} f _ {i} (x) \right\}, \tag {1}
$$ + +where $f_{i}:\mathbb{R}^{d}\to \mathbb{R}$ is a smooth nonconvex function for all $i\in [n]\coloneqq \{1,\ldots ,n\}$ . The full information about function $f_{i}$ is stored on the $i^{\mathrm{th}}$ node. The communication between nodes is maintained in the parameter server fashion (Kairouz et al., 2021): we have a server that receives compressed + +information from the nodes, updates a state, and broadcasts an updated model. Since we work in the nonconvex world, our goal is to find an $\varepsilon$ -solution ( $\varepsilon$ -stationary point) of (1): a (possibly random) point $\widehat{x} \in \mathbb{R}^d$ such that $\operatorname{E}\left[\|\nabla f(\widehat{x})\|^2\right] \leq \varepsilon$ . + +We consider three settings: + +1. Gradient Setting. The $i^{\mathrm{th}}$ node only has access to the gradient $\nabla f_{i}:\mathbb{R}^{d}\to \mathbb{R}^{d}$ of the function $f_{i}$ . Moreover, the following assumptions on the functions $f_{i}$ hold. + +Assumption 1. There exists $f^{*} \in \mathbb{R}$ such that $f(x) \geq f^{*}$ for all $x \in \mathbb{R}^d$ . + +Assumption 2. The function $f$ is $L$ -smooth, i.e., $\| \nabla f(x) - \nabla f(y) \| \leq L \| x - y \|$ for all $x, y \in \mathbb{R}^d$ . + +Assumption 3.
The functions $f_{i}$ are $L_{i}$ -smooth for all $i \in [n]$ . Let us define $\widehat{L}^2 \coloneqq \frac{1}{n}\sum_{i = 1}^{n}L_i^2$ . + +2. Finite-Sum Setting. The functions $\{f_i\}_{i=1}^n$ have the finite-sum form + +$$
f _ {i} (x) = \frac {1}{m} \sum_ {j = 1} ^ {m} f _ {i j} (x), \quad \forall i \in [ n ], \tag {2}
$$ + +where $f_{ij}:\mathbb{R}^d\to \mathbb{R}$ is a smooth nonconvex function for all $j\in [m]$ . + +We assume that Assumptions 1, 2 and 3 hold, along with the following assumption. + +Assumption 4. The function $f_{ij}$ is $L_{ij}$ -smooth for all $i \in [n], j \in [m]$ . Let $L_{\max} := \max_{i \in [n], j \in [m]} L_{ij}$ . + +3. Stochastic Setting. The function $f_{i}$ is an expectation of a stochastic function, + +$$
f _ {i} (x) = \mathrm {E} _ {\xi} \left[ f _ {i} (x; \xi) \right], \quad \forall i \in [ n ], \tag {3}
$$ + +where $f_{i}:\mathbb{R}^{d}\times \Omega_{\xi}\to \mathbb{R}$ . For a fixed $x\in \mathbb{R}^d$ , $f_{i}(x;\xi)$ is a random variable over some distribution $\mathcal{D}_i$ , and, for a fixed $\xi \in \Omega_{\xi}$ , $f_{i}(x;\xi)$ is a smooth nonconvex function. The $i^{\mathrm{th}}$ node only has access to stochastic gradients $\nabla f_{i}(\cdot ;\xi_{ij})$ of the function $f_{i}$ through the distribution $\mathcal{D}_i$ , where $\xi_{ij}$ is a sample from $\mathcal{D}_i$ . We assume that Assumptions 1, 2 and 3 hold, along with the following assumptions. + +Assumption 5. For all $i \in [n]$ and for all $x \in \mathbb{R}^d$ , the stochastic gradient $\nabla f_i(x;\xi)$ is unbiased and has bounded variance, i.e., $\mathrm{E}_{\xi}\left[\nabla f_i(x;\xi)\right] = \nabla f_i(x)$ , and $\mathrm{E}_{\xi}\left[\| \nabla f_i(x;\xi) - \nabla f_i(x)\|^2\right] \leq \sigma^2$ , where $\sigma^2 \geq 0$ . + +Assumption 6.
For all $i \in [n]$ and for all $x, y \in \mathbb{R}^d$, the stochastic gradient $\nabla f_i(x; \xi)$ satisfies the mean-squared smoothness property, i.e., $\mathrm{E}_{\xi} \left[ \| \nabla f_i(x; \xi) - \nabla f_i(y; \xi) \|^2 \right] \leq L_{\sigma}^2 \| x - y \|^2$.

We compare algorithms using the oracle complexity, i.e., the number of (stochastic) gradients that each node has to calculate to get an $\varepsilon$-solution, and the communication complexity, i.e., the number of bits that each node has to send to the server to get an $\varepsilon$-solution.

# 2.1 Unbiased Compressors

We use the concept of unbiased compressors to alleviate the communication bottleneck. Unbiased compressors quantize and/or sparsify the vectors that the nodes send to the server.

Definition 1. A stochastic mapping $\mathcal{C}:\mathbb{R}^d\to \mathbb{R}^d$ is an unbiased compressor if there exists $\omega \in \mathbb{R}$ such that

$$
\mathrm {E} [ \mathcal {C} (x) ] = x \quad \text {and} \quad \mathrm {E} \left[ \| \mathcal {C} (x) - x \| ^ {2} \right] \leq \omega \| x \| ^ {2} \quad \forall x \in \mathbb {R} ^ {d}. \tag {4}
$$

We denote the set of stochastic mappings that satisfy Definition 1 as $\mathbb{U}(\omega)$. In our methods, the nodes make use of unbiased compressors $\{\mathcal{C}_i\}_{i=1}^n$. The community has developed a large number of unbiased compressors, including RandK (see Definition 5) (Beznosikov et al., 2020; Stich et al., 2018), adaptive sparsification (Wangni et al., 2018), and natural compression and dithering (Horváth et al., 2019a). We are aware of the correlated compressors by Szlendak et al. (2021) and the quantizers by Suresh et al. (2022) that help in homogeneous regimes, but in this work we mainly concentrate on generic heterogeneous regimes and, for simplicity, assume that the compressors are independent.

Assumption 7.
$\mathcal{C}_i \in \mathbb{U}(\omega)$ for all $i \in [n]$, and the compressors are statistically independent.

Table 1: Summary of methods that solve the problem (1) in the stochastic setting (3). Abbr.: VR (Variance Reduction) = Does a method have the optimal oracle complexity $\mathcal{O}\left(\frac{\sigma^2}{\varepsilon} + \frac{\sigma}{\varepsilon^{3/2}}\right)$? PP (Partial Participation) = Does a method support partial participation from Section 2.2? CC = Does a method have communication complexity equal to $\mathcal{O}\left(\frac{\omega}{\sqrt{n}\varepsilon}\right)$?
| Method | VR | PP | CC | Limitations |
|---|---|---|---|---|
| SPIDER, SARAH, PAGE, STORM (Fang et al., 2018; Nguyen et al., 2017; Li et al., 2021a; Cutkosky and Orabona, 2019) | ✓ | ✗ | ✗ | - |
| MARINA (Gorbunov et al., 2021) | ✗ | ✗(a) | ✓(b) | Suboptimal convergence rate (see (Tyurin and Richtárik, 2023)). |
| FedPAGE (Zhao et al., 2021b) | ✗ | ✗(a) | ✓ | Suboptimal oracle complexity $\mathcal{O}\left(\sigma^2/\varepsilon^2\right)$. |
| FRECON (Zhao et al., 2021a) | ✗ | ✓ | ✓ | - |
| FedAvg (McMahan et al., 2017; Karimireddy et al., 2020b) | ✗ | ✓ | ✗ | Bounded gradients (dissimilarity) assumption of $f_i$. |
| SCAFFOLD (Karimireddy et al., 2020b) | ✗ | ✓ | ✗ | Suboptimal convergence rate(e). |
| MIME(c) (Karimireddy et al., 2020a) | ✗(d) | ✓ | ✗ | Calculates full gradient. Bounded gradients (dissimilarity) assumption of $f_i$. Suboptimal oracle compl. $\mathcal{O}\left(1/\varepsilon^{3/2}\right)$ in the setting (2). |
| CE-LSGD (for Partial Participation)(c) (Patel et al., 2022) (concurrent work) | ✓ | ✓ | ✗ | Bounded gradients (dissimilarity) assumption of $f_i$. Suboptimal oracle compl. $\mathcal{O}\left(1/\varepsilon^{3/2}\right)$ in the setting (2). |
| DASHA (Tyurin and Richtárik, 2023) | ✓ or ✗ | ✗ | ✓ | - |
| DASHA-PP (new) | ✓ | ✓ | ✓ | - |
(a) MARINA and FedPAGE, with a small probability, require the participation of all nodes, so they cannot support partial participation from Section 2.2. Moreover, these methods provide suboptimal oracle complexities.
(b) On average, MARINA provides the compressed communication mechanism with complexity $\mathcal{O}\left(\frac{\omega}{\sqrt{n}\varepsilon}\right)$. However, with a small probability, this method sends non-compressed vectors.
(c) Note that MIME and CE-LSGD cannot be directly compared with DASHA-PP because MIME and CE-LSGD consider the online version of the problem (1) and require stricter assumptions.
(d) Although MIME obtains the convergence rate $\mathcal{O}\left(\frac{1}{\varepsilon^{3/2}}\right)$ of a variance reduced method, it requires the calculation of the full (exact) gradients.
(e) It can be seen when $\sigma^2 = 0$. Consider the $s$-nice sampling of the nodes; then SCAFFOLD requires $\mathcal{O}\left(n^{3/2} / \varepsilon s^{3/2}\right)$ communication rounds to get an $\varepsilon$-solution, while DASHA-PP requires $\mathcal{O}\left(\sqrt{n} / \varepsilon s\right)$ communication rounds (see Theorem 4 with $\omega = 0$, $b = \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$, and $p_{\mathrm{a}} = \frac{s}{n}$).

# 2.2 Nodes Partial Participation Assumptions

We now formalize the notion of partial participation. Let us assume that we have $n$ events $\{i^{\mathrm{th}} \text{ node is participating}\}$ with the following properties.

Assumption 8. The partial participation of nodes has the following distribution: there exist constants $p_{\mathrm{a}} \in (0,1]$ and $p_{\mathrm{aa}} \in [0,1]$ such that

1. Prob $(i^{\mathrm{th}}$ node is participating) $= p_{\mathrm{a}}\quad \forall i\in [n]$ ,
2. Prob $(i^{\mathrm{th}}$ and $j^{\mathrm{th}}$ nodes are participating) $= p_{\mathrm{aa}}\quad \forall i\neq j\in [n]$ ,
3.
$p_{\mathrm{aa}} \leq p_{\mathrm{a}}^{2}$, (5)

and these events from different communication rounds are independent.

We do not strive for full generality and believe that more complex sampling strategies can be considered in the analysis. For simplicity, we settle upon Assumption 8. Standard partial participation strategies satisfy Assumption 8, including $s$-nice sampling, where the server chooses $s$ nodes uniformly without replacement ($p_{\mathrm{a}} = s / n$ and $p_{\mathrm{aa}} = \frac{s(s - 1)}{n(n - 1)}$), and independent participation, where each node independently participates with probability $p_{\mathrm{a}}$ (due to independence, we have $p_{\mathrm{aa}} = p_{\mathrm{a}}^{2}$). In the literature, $s$-nice sampling is one of the most popular strategies (Zhao et al., 2021a; Richtárik et al., 2021; Reddi et al., 2020; Konečný et al., 2016).

Table 2: Summary of methods that solve the problem (1) in the finite-sum setting (2). Abbr.: VR (Variance Reduction) = Does a method have the optimal oracle complexity $\mathcal{O}\left(m + \frac{\sqrt{m}}{\varepsilon}\right)$? PP and CC are defined in Table 1.

| Method | VR | PP | CC | Limitations |
|---|---|---|---|---|
| SPIDER, PAGE (Fang et al., 2018; Li et al., 2021a) | ✓ | ✗ | ✗ | - |
| MARINA (Gorbunov et al., 2021) | ✗ | ✗(a) | ✓(b) | Suboptimal convergence rate (see (Tyurin and Richtárik, 2023)). |
| ZeroSARAH (Li et al., 2021b) | ✓ | ✓ | ✗ | Only homogeneous regime, i.e., the functions $f_i$ are equal. |
| FedPAGE (Zhao et al., 2021b) | ✗ | ✗(a) | ✓ | Suboptimal oracle complexity $\mathcal{O}\left(\frac{m}{\varepsilon}\right)$. |
| DASHA (Tyurin and Richtárik, 2023) | ✓ | ✗ | ✓ | - |
| DASHA-PP (new) | ✓ | ✓ | ✓ | - |

(a),(b): see Table 1.

# 3 Motivation and Related Work

The main goal of our paper is to develop a method for nonconvex distributed optimization that includes three key features: variance reduction of stochastic gradients, compressed communication, and partial participation. We now provide an overview of the literature (see also Table 1 and Table 2).

# 1. Variance reduction of stochastic gradients

It is important to consider the finite-sum (2) and stochastic (3) settings because, in machine learning tasks, either the number of local functions $m$ is huge or the functions $f_{i}$ are expectations of stochastic functions, due to batch normalization (Ioffe and Szegedy, 2015) or random augmentation (Goodfellow et al., 2016), and it is infeasible to calculate the full gradients analytically. Let us recall the results from nondistributed optimization. In the gradient setting, the optimal oracle complexity is $\mathcal{O}\left(1 / \varepsilon\right)$, achieved by vanilla gradient descent (GD) (Carmon et al., 2020; Nesterov, 2018). In the finite-sum and stochastic settings, the optimal oracle complexities are $\mathcal{O}\left(m + \frac{\sqrt{m}}{\varepsilon}\right)$ and $\mathcal{O}\left(\frac{\sigma^2}{\varepsilon} + \frac{\sigma}{\varepsilon^{3/2}}\right)$, respectively, achieved by the methods SPIDER, SARAH, PAGE, and STORM from (Fang et al., 2018; Nguyen et al., 2017; Li et al., 2021a; Cutkosky and Orabona, 2019).
# 2. Compressed communication

In distributed optimization (Ramesh et al., 2021; Xu et al., 2021), lossy communication compression can be a powerful tool to increase the communication speed between the nodes and the server. Different types of compressors are considered in the literature, including unbiased compressors (Alistarh et al., 2017; Beznosikov et al., 2020; Szlendak et al., 2021), contractive (biased) compressors (Richtárik et al., 2021), and 3PC compressors (Richtárik et al., 2022). We focus on unbiased compressors because the methods DASHA and MARINA (Tyurin and Richtárik, 2023; Szlendak et al., 2021; Gorbunov et al., 2021), which employ unbiased compressors, provide the current theoretical state-of-the-art (SOTA) communication complexities.

Many works analyzed optimization methods with unbiased compressors (Alistarh et al., 2017; Mishchenko et al., 2019; Horváth et al., 2019b; Gorbunov et al., 2021; Tyurin and Richtárik, 2023). In the gradient setting, the methods MARINA and DASHA by Gorbunov et al. (2021) and Tyurin and Richtárik (2023) establish the current SOTA communication complexity: each method needs $\frac{1 + \omega / \sqrt{n}}{\varepsilon}$ communication rounds to get an $\varepsilon$-solution. In the finite-sum and stochastic settings, the current SOTA communication complexity is attained by the DASHA method, while maintaining the optimal oracle complexities $\mathcal{O}\left(m + \frac{\sqrt{m}}{\varepsilon\sqrt{n}}\right)$ and $\mathcal{O}\left(\frac{\sigma^2}{\varepsilon n} +\frac{\sigma}{\varepsilon^{3 / 2}n}\right)$ per node.

# 3. Partial participation

From the beginning of the federated learning era, partial participation has been considered an essential feature of distributed optimization methods (McMahan et al., 2017; Konečný et al., 2016; Kairouz et al., 2021).
However, previously proposed methods have limitations: i) the methods MARINA and FedPAGE from (Gorbunov et al., 2021; Zhao et al., 2021b) still require synchronization of all nodes with a small probability; ii) in the stochastic setting, the methods FedAvg, SCAFFOLD, and FRECON with the partial participation mechanism (McMahan et al., 2017; Karimireddy et al., 2020b; Zhao et al., 2021a) provide results without the variance reduction techniques from (Fang et al., 2018; Li et al., 2021a; Cutkosky and Orabona, 2019) and, therefore, get suboptimal oracle complexities (note that FRECON and DASHA reduce the variance only from the compressors in the partial participation and stochastic setting); iii) in the finite-sum setting, the ZeroSARAH method by Li et al. (2021b) focuses on the homogeneous regime only (the functions $f_{i}$ are equal); iv) the MIME method by Karimireddy et al. (2020a) and the CE-LSGD method (for Partial Participation) by the concurrent paper (Patel et al., 2022) consider the online version of the problem (1). Therefore, MIME and CE-LSGD (for Partial Participation) require stricter assumptions, including the bounded inter-client gradient variance assumption. In the finite-sum setting (2), MIME and CE-LSGD obtain a suboptimal oracle complexity $\mathcal{O}\left(1 / \varepsilon^{3 / 2}\right)$, while, in the full participation setting, it is possible to get the complexity $\mathcal{O}\left(1 / \varepsilon\right)$.

# 4 Contributions

We propose a new method, DASHA-PP, for nonconvex distributed optimization.

- As far as we know, this is the first method that includes three key ingredients of federated learning methods: variance reduction of stochastic gradients, compressed communication, and partial participation.
- Moreover, this is the first method that combines variance reduction of stochastic gradients and partial participation flawlessly: i) it gets the optimal oracle complexity; ii) it does not require the participation of all nodes; iii) it does not require the bounded gradients assumption on the functions $f_{i}$.
- We prove convergence rates and show that this method has the optimal oracle complexity and the state-of-the-art communication complexity in the partial participation setting. Moreover, in our work, we observe a nontrivial side effect of mixing the variance reduction of stochastic gradients and partial participation. This is a general problem, not related to our methods or analysis, which we discuss in Section C.
- In Section A, we present experiments where we validate our theory and compare our new methods to previous ones.

# 5 Algorithm Description and Main Challenges Towards Partial Participation

We now present DASHA-PP (see Algorithm 1), a family of methods to solve the optimization problem (1). When we started investigating the problem, we took DASHA as a baseline method for two reasons: the family of algorithms DASHA provides the current state-of-the-art communication complexities in the non-partial participation setting, and, unlike MARINA, it does not send non-compressed gradients and does not synchronize all nodes. Let us briefly discuss the main idea of DASHA, its problem in the partial participation setting, and why the refinement of DASHA is not a simple exercise.

In fact, the original DASHA method supports the partial participation of nodes in the gradient setting, since the nodes only do the following step (see the full algorithm in Algorithm 6):

$$
g _ {i} ^ {t + 1} = g _ {i} ^ {t} + \mathcal {C} _ {i} (\nabla f _ {i} (x ^ {t + 1}) - (1 - a) \nabla f _ {i} (x ^ {t}) - a g _ {i} ^ {t}).
\tag {6}
$$

# Algorithm 1 DASHA-PP

1: Input: starting point $x^0 \in \mathbb{R}^d$ , stepsize $\gamma > 0$ , momentum $a \in (0,1]$ , momentum $b \in (0,1]$ , probability $p_{\mathrm{page}} \in (0,1]$ (only in DASHA-PP-PAGE), batch size $B$ (only in DASHA-PP-PAGE, DASHA-PP-FINITE-MVR and DASHA-PP-MVR), probability $p_{\mathrm{a}} \in (0,1]$ that a node is participating(a), number of iterations $T \geq 1$

2: Initialize $g_{i}^{0}\in \mathbb{R}^{d}$ and $h_i^0\in \mathbb{R}^d$ on the nodes and $g^0 = \frac{1}{n}\sum_{i = 1}^n g_i^0$ on the server

3: Initialize $h_{ij}^{0} \in \mathbb{R}^{d}$ on the nodes and take $h_i^0 = \frac{1}{m}\sum_{j = 1}^{m}h_{ij}^{0}$ (only in DASHA-PP-FINITE-MVR)

4: for $t = 0,1,\ldots ,T - 1$ do

5: $x^{t + 1} = x^t -\gamma g^t$

6: Broadcast $x^{t+1}$ , $x^t$ to all participating $^{(\mathrm{a})}$ nodes

7: for $i = 1,\dots ,n$ in parallel do

8: if $i^{\mathrm{th}}$ node is participating(a) then

9: Calculate $k_i^{t+1}$ using Algorithm 2, 3, 4 or 5

10: $h_i^{t + 1} = h_i^t +\frac{1}{p_{\mathrm{a}}} k_i^{t + 1}$

11: $m_{i}^{t + 1} = \mathcal{C}_{i}\left(\frac{1}{p_{\mathrm{a}}} k_{i}^{t + 1} - \frac{a}{p_{\mathrm{a}}}\left(g_{i}^{t} - h_{i}^{t}\right)\right)$

12: $g_{i}^{t + 1} = g_{i}^{t} + m_{i}^{t + 1}$

13: Send $m_i^{t + 1}$ to the server

14: else

15: $h_{ij}^{t + 1} = h_{ij}^{t}$ (only in DASHA-PP-FINITE-MVR)

16: $h_i^{t + 1} = h_i^t$ , $g_{i}^{t + 1} = g_{i}^{t}$ , $m_{i}^{t + 1} = 0$

17: end if

18: end for

19: $g^{t + 1} = g^t +\frac{1}{n}\sum_{i = 1}^n m_i^{t + 1}$

20: end for

21: Output: $\hat{x}^T$ chosen uniformly at random from $\{x^t\}_{t=0}^{T-1}$

(a): For the formal description, see Section 2.2.

Algorithm 2 Calculate $k_{i}^{t + 1}$ for DASHA-PP in the gradient setting. See line 9 in Alg. 1.
+ +$$ +1: k _ {i} ^ {t + 1} = \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b \left(h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})\right) +$$ + +Algorithm 3 Calculate $k_{i}^{t + 1}$ for DASHA-PP-PAGE in the finite-sum setting. See line 9 in Alg. 1 + +1: Generate a random set $I_{i}^{t}$ of size $B$ from $[m]$ with replacement + +$$ +2: k _ {i} ^ {t + 1} = \left\{ \begin{array}{l} \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - \frac {b}{p _ {\text {p a g e}}} \left(h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})\right), \\ \text {w i t h p r o b a b i l i t y p _ {\text {p a g e}} o n a l l p a r t i c i p a t i n g n o d e s}, \\ \frac {1}{B} \sum_ {j \in I _ {i} ^ {t}} \left(\nabla f _ {i j} (x ^ {t + 1}) - \nabla f _ {i j} (x ^ {t})\right), \\ \text {w i t h p r o b a b i l i t y 1 - p _ {\text {p a g e}} o n a l l p a r t i c i p a t i n g n o d e s} \end{array} \right. +$$ + +Algorithm 4 Calc. $k_{i}^{t + 1}$ for DASHA-PP-FINITE-MVR in the finite-sum setting. See line 9 in Alg. 1 + +1: Generate a random set $I_{i}^{t}$ of size $B$ from $[m]$ without replacement + +$$ +\begin{array}{l} 2: k _ {i j} ^ {t + 1} = \left\{ \begin{array}{l l} \frac {m}{B} \left(\nabla f _ {i j} (x ^ {t + 1}) - \nabla f _ {i j} (x ^ {t}) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t})\right)\right), & j \in I _ {i} ^ {t}, \\ 0, & j \not \in I _ {i} ^ {t} \end{array} \right. \\ 3: h _ {i j} ^ {t + 1} = h _ {i j} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i j} ^ {t + 1} \\ 4: k _ {i} ^ {t + 1} = \frac {1}{m} \sum_ {j = 1} ^ {m} k _ {i j} ^ {t + 1} \\ \end{array} +$$ + +Algorithm 5 Calculate $k_{i}^{t + 1}$ for DASHA-PP-MVR in the stochastic setting. See line 9 in Alg. 1 + +1: Generate i.i.d. samples $\{\xi_{ij}^{t + 1}\}_{j = 1}^{B}$ of size $B$ from $\mathcal{D}_i$ . 
+ +$$ +2: k _ {i} ^ {t + 1} = \frac {1}{B} \sum_ {j = 1} ^ {B} \nabla f _ {i} (x ^ {t + 1}; \xi_ {i j} ^ {t + 1}) - \frac {1}{B} \sum_ {j = 1} ^ {B} \nabla f _ {i} (x ^ {t}; \xi_ {i j} ^ {t + 1}) - b \left(h _ {i} ^ {t} - \frac {1}{B} \sum_ {j = 1} ^ {B} \nabla f _ {i} (x ^ {t}; \xi_ {i j} ^ {t + 1})\right) +$$ + +The partial participation mechanism (independent participation from Section 2.2) can be easily implemented here if we temporally redefine the compressor and use another one instead: + +$$ +\mathcal {C} _ {i} ^ {p _ {\mathrm {a}}} := \left\{ \begin{array}{l l} \frac {1}{p _ {\mathrm {a}}} \mathcal {C} _ {i}, \text {w . p .} p _ {\mathrm {a}}, & \stackrel {(6)} {\Rightarrow} g _ {i} ^ {t + 1} = \left\{ \begin{array}{l l} g _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} \mathcal {C} _ {i} \left(\nabla f _ {i} (x ^ {t + 1}) - (1 - a) \nabla f _ {i} (x ^ {t}) - a g _ {i} ^ {t}\right), \text {w . p .} p _ {\mathrm {a}} \\ 0, \text {w . p .} 1 - p _ {\mathrm {a}}. & g _ {i} ^ {t}, \end{array} \right. \\ 0, \text {w . p .} 1 - p _ {\mathrm {a}}. \end{array} \right. +$$ + +With probability $1 - p_{\mathrm{a}}$ , a node does not update $g_{i}^{t}$ and does not send anything to the server. The main observation is that we can do this trick since $g_{i}^{t + 1}$ depends only on the vectors $x^{t + 1}, x^{t}$ , and $g_{i}^{t}$ . The points $x^{t + 1}$ and $x^{t}$ are only available in a node only during its participation. + +However, we focus our attention on partial participation in the finite-sum and stochastic settings. 
Consider the nodes' steps in DASHA-MVR (Tyurin and Richtárik, 2023) (see Algorithm 7), which is designed for the stochastic setting:

$$
h _ {i} ^ {t + 1} = \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) + (1 - b) \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right)\right), \tag {7}
$$

$$
g _ {i} ^ {t + 1} = g _ {i} ^ {t} + \mathcal {C} _ {i} \left(h _ {i} ^ {t + 1} - h _ {i} ^ {t} - a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right). \tag {8}
$$

Now we have two sequences, $h_i^t$ and $g_i^t$. Even if we use the same trick for (8), we still have to update (7) in every iteration of the algorithm since $g_i^{t + 1}$ additionally depends on $h_i^{t + 1}$ and $h_i^t$. In other words, if a node does not update $g_i^t$ and does not send anything to the server, it still has to update $h_i^t$, which is impossible without the points $x^{t + 1}$ and $x^t$. One of the main challenges was to "guess" how to generalize (7) and (8) to the partial participation setting. We now provide a solution (DASHA-PP-MVR with the batch size $B = 1$): with probability $p_{\mathrm{a}}$,

$$
\begin{array}{l} h _ {i} ^ {t + 1} = h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1}, \quad k _ {i} ^ {t + 1} = \nabla f _ {i} (x ^ {t + 1}; \xi_ {i} ^ {t + 1}) - \nabla f _ {i} (x ^ {t}; \xi_ {i} ^ {t + 1}) - b \left(h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}; \xi_ {i} ^ {t + 1})\right), \\ g _ {i} ^ {t + 1} = g _ {i} ^ {t} + \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right), \tag {9} \end{array}
$$

and $h_i^{t + 1} = h_i^t$, $g_i^{t + 1} = g_i^t$ with probability $1 - p_{\mathrm{a}}$.

Now both control variables $g_{i}^{t}$ and $h_{i}^{t}$ do not change with probability $1 - p_{\mathrm{a}}$.
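Setting $p_{\mathrm{a}} = 1$ in (9) recovers the DASHA-MVR update (7); the following short NumPy sketch (with hypothetical random vectors standing in for the stochastic gradients $\nabla f_i(x^{t+1};\xi)$, $\nabla f_i(x^t;\xi)$) confirms the algebra.

```python
import numpy as np

rng = np.random.default_rng(0)
d, b = 5, 0.3
# Hypothetical stand-ins for the stochastic gradients at x^{t+1} and x^t (same sample xi):
grad_new = rng.normal(size=d)
grad_old = rng.normal(size=d)
h_t = rng.normal(size=d)

# Update (9) with B = 1 and p_a = 1 (full participation):
p_a = 1.0
k = grad_new - grad_old - b * (h_t - grad_old)
h_next_pp = h_t + k / p_a

# Update (7) of DASHA-MVR:
h_next_mvr = grad_new + (1 - b) * (h_t - grad_old)

# The two updates coincide: h_t + k = grad_new + (1 - b)(h_t - grad_old).
assert np.allclose(h_next_pp, h_next_mvr)
```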
When the $i^{\mathrm{th}}$ node participates, the update rules of $g_{i}^{t + 1}$ and $h_{i}^{t + 1}$ in (9) were adapted to make the proof work. When $p_{\mathrm{a}} = 1$ (no partial participation), the update rules from (9) reduce to (7) and (8).

The theoretical analysis of the new algorithm became more complicated: unlike (7) and (8), the control variables $h_i^{t+1}$ and $g_i^{t+1}$ in (9) (see also the main Algorithm 1) are coupled by the randomness from the partial participation. Going deeper into details, one can, for instance, compare Lemma I.2 from (Tyurin and Richtárik, 2023) and Lemma 5, which both bound $\left\| g_i^{t+1} - h_i^{t+1} \right\|^2$. The former lemma does not use the knowledge about the update rules of $h_i^{t+1}$, works with one expectation $\mathbb{E}_{\mathcal{C}}[\cdot]$, and uses only (4), (15), and (16). The latter lemma additionally requires and uses the structure of the update rule of $h_i^{t+1}$ (the structure is very important in the lemma since the control variables $h_i^{t+1}$ and $g_i^{t+1}$ are coupled), carefully handles the expectations $\mathbb{E}_{\mathcal{C}}[\cdot]$ and $\mathbb{E}_{p_a}[\cdot]$ (for instance, it is not trivial in which order one should apply the expectations), and uses the sampling lemma (Lemma 1). The same reasoning applies to other parts of the analysis and to the finite-sum setting: the generalization of the previous algorithm and the additional randomness from the partial participation required us to rethink the previous proofs.

On a first reading of the proofs, we suggest the reader follow the proof of Theorem 2 in the gradient setting (DASHA-PP), which takes a small part of the paper.
Although the appendix may seem dense and large, its size is justified by the fact that we consider four different sub-algorithms, DASHA-PP, DASHA-PP-PAGE, DASHA-PP-FINITE-MVR, and DASHA-PP-MVR, and also the PL condition (the theory is designed so that the proofs do not repeat steps of each other and use one framework).

# 6 Theorems

We now present the convergence rate theorems of DASHA-PP in different settings. We will compare the theorems with the results of the current state-of-the-art methods, MARINA and DASHA, that work in the full participation setting. Suppose that MARINA or DASHA converges to an $\varepsilon$-solution after $T$ communication rounds. Then, ideally, we would expect the convergence of the new algorithms to an $\varepsilon$-solution after up to $T / p_{\mathrm{a}}$ communication rounds due to the partial participation constraints. We provide the detailed analysis of the algorithms under the Polyak-Lojasiewicz condition in Section F. Let us define $\Delta_0 \coloneqq f(x^0) - f^*$.

# 6.1 Gradient Setting

Theorem 2. Suppose that Assumptions 1, 2, 3, 7 and 8 hold. Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$, $b = \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$, stepsize

$$
\gamma \leq \left(L + \left[ \frac {4 8 \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {1 6}{n p _ {\mathrm {a}} ^ {2}} \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \right] ^ {1 / 2} \widehat {L}\right) ^ {- 1},
$$

and $g_{i}^{0} = h_{i}^{0} = \nabla f_{i}(x^{0})$ for all $i\in [n]$ in Algorithm 1 (DASHA-PP). Then $\operatorname{E}\left[\left\| \nabla f(\widehat{x}^T)\right\| ^2\right]\leq \frac{2\Delta_0}{\gamma T}$.
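To make the parameter choices in Theorem 2 concrete, here is a small numeric sketch (all constants are illustrative, not taken from the paper) that instantiates $a$, $b$, the stepsize bound, and the resulting number of rounds under $s$-nice sampling, for which $p_{\mathrm{a}} = s/n$ and $p_{\mathrm{aa}} = \frac{s(s-1)}{n(n-1)}$.

```python
import math

# Illustrative constants (not from the paper):
L, L_hat = 1.0, 1.5          # smoothness constants L and \hat{L}
omega, n = 9.0, 100          # e.g., RandK with d/K = 10 gives omega = 9
p_a = 0.1                    # s-nice sampling with s = n * p_a = 10
p_aa = p_a * (n * p_a - 1) / (n - 1)   # = s(s-1) / (n(n-1))
assert p_aa <= p_a ** 2                # inequality (5) of Assumption 8

# Momenta from Theorem 2:
a = p_a / (2 * omega + 1)
b = p_a / (2 - p_a)

# Largest stepsize allowed by Theorem 2:
gamma = 1.0 / (L + math.sqrt(48 * omega * (2 * omega + 1) / (n * p_a ** 2)
                             + 16 / (n * p_a ** 2) * (1 - p_aa / p_a)) * L_hat)

# Rounds needed so that 2 * Delta_0 / (gamma * T) <= eps:
Delta_0, eps = 10.0, 1e-3
T = math.ceil(2 * Delta_0 / (gamma * eps))
```

Rerunning with $p_{\mathrm{a}} = 1$ (so $p_{\mathrm{aa}} = 1$) shows how the bracketed term, and hence $T$, shrinks in the full participation regime.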
Let us recall the convergence rate of MARINA or DASHA: the number of communication rounds to get an $\varepsilon$-solution equals $\mathcal{O}\left(\frac{\Delta_0}{\varepsilon}\left[L + \frac{\omega}{\sqrt{n}}\widehat{L}\right]\right)$, while the rate of DASHA-PP equals $\mathcal{O}\left(\frac{\Delta_0}{\varepsilon}\left[L + \frac{\omega + 1}{p_{\mathrm{a}}\sqrt{n}}\widehat{L}\right]\right)$. Up to Lipschitz constant factors, we get a degradation of up to a $1 / p_{\mathrm{a}}$ factor due to the partial participation. This is the expected result since each worker sends useful information only with probability $p_{\mathrm{a}}$.

# 6.2 Finite-Sum Setting

Theorem 3. Suppose that Assumptions 1, 2, 3, 4, 7, and 8 hold. Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$, $b = \frac{p_{\mathrm{page}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$, probability $p_{\mathrm{page}} \in (0, 1]$, stepsize

$$
\gamma \leq \left(L + \left[ \frac {4 8 \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {(1 - p _ {\mathrm {page}}) L _ {\max} ^ {2}}{B}\right) + \frac {1 6}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {page}}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\mathrm {page}}) L _ {\max} ^ {2}}{B}\right) \right] ^ {1 / 2}\right) ^ {- 1},
$$

and $g_{i}^{0} = h_{i}^{0} = \nabla f_{i}(x^{0})$ for all $i\in [n]$ in Algorithm 1 (DASHA-PP-PAGE). Then $\operatorname{E}\left[\left\| \nabla f(\widehat{x}^T)\right\| ^2\right]\leq \frac{2\Delta_0}{\gamma T}$.

We now choose $p_{\mathrm{page}}$ to balance the heavy full gradient and light mini-batch calculations. Let us define $\mathbb{1}_{p_{\mathrm{a}}} := \sqrt{1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}}} \in [0,1]$. Note that if $p_{\mathrm{a}} = 1$, then $p_{\mathrm{aa}} = 1$ and $\mathbb{1}_{p_{\mathrm{a}}} = 0$.

Corollary 1. Let the assumptions from Theorem 3 hold and $p_{\mathrm{page}} = \frac{B}{m + B}$.
Then DASHA-PP-PAGE needs

$$
T := \mathcal {O} \left(\frac {\Delta_ {0}}{\varepsilon} \left[ L + \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\max }}{\sqrt {B}}\right) + \frac {1}{p _ {\mathrm {a}}} \sqrt {\frac {m}{n}} \left(\frac {\mathbb {1} _ {p _ {\mathrm {a}}} \widehat {L}}{\sqrt {B}} + \frac {L _ {\max }}{B}\right) \right]\right) \tag {10}
$$

communication rounds to get an $\varepsilon$-solution, and the expected number of gradient calculations per node equals $\mathcal{O}(m + BT)$.

The convergence rate of the current state-of-the-art method DASHA-PAGE without partial participation equals $\mathcal{O}\left(\frac{\Delta_0}{\varepsilon}\left[L + \frac{\omega}{\sqrt{n}}\left(\widehat{L} +\frac{L_{\max}}{\sqrt{B}}\right) + \sqrt{\frac{m}{n}}\frac{L_{\max}}{B}\right]\right)$. Let us compare it more closely with (10). As expected, we see that the second term w.r.t. $\omega$ degrades by up to $1 / p_{\mathrm{a}}$. Surprisingly, the third term w.r.t. $\sqrt{m / n}$ can degrade by up to $\sqrt{B} /p_{\mathrm{a}}$ when $\widehat{L}\approx L_{\mathrm{max}}$. Hence, in order to keep the degradation at $1 / p_{\mathrm{a}}$, one should take the batch size $B = \mathcal{O}\left(L_{\max}^{2} / \widehat{L}^{2}\right)$. We analyze this interesting effect separately in Section C, and we check numerically in Section A that the degradation is indeed up to $1 / p_{\mathrm{a}}$.

In the following corollary, we consider RandK compressors (see Definition 5) and show that, with a particular choice of parameters, up to Lipschitz constant factors, DASHA-PP-PAGE gets the optimal oracle complexity and SOTA communication complexity. Indeed, comparing the following result with (Tyurin and Richtárik, 2023, Corollary 6.6), one can see that we get a degradation of up to a $1 / p_{\mathrm{a}}$ factor, which is expected in the partial participation setting. Note that the complexities improve with the number of workers $n$.
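The choice $p_{\mathrm{page}} = B/(m+B)$ from Corollary 1 can be motivated by a per-round cost calculation: in Algorithm 3, a participating node computes a full gradient pass (cost proportional to $m$) with probability $p_{\mathrm{page}}$ and a mini-batch of size $B$ otherwise, and this choice exactly equalizes the two expected costs. A short sketch with illustrative numbers (ours, not from the paper):

```python
m, B = 10_000, 32
p_page = B / (m + B)

# Expected per-round gradient cost of Algorithm 3 (up to constant factors):
# full pass w.p. p_page, mini-batch otherwise.
full_cost = p_page * m
minibatch_cost = (1 - p_page) * B

# p_page = B/(m+B) exactly balances the two expected costs ...
assert abs(full_cost - minibatch_cost) < 1e-9
# ... so the expected cost per round stays O(B), matching O(m + BT) overall.
assert full_cost + minibatch_cost <= 2 * B
```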
Corollary 2. Suppose that the assumptions of Corollary 1 hold, $B \leq \min \left\{\frac{1}{p_{\mathrm{a}}}\sqrt{\frac{m}{n}},\frac{L_{\max}^{2}}{\mathbb{1}_{p_{\mathrm{a}}}^{2}\widehat{L}^{2}}\right\}$, and we use the unbiased compressor RandK with $K = \Theta \left(\sqrt[3]{d / \sqrt{m}}\right)$. Then the communication complexity of Algorithm 1 is $\mathcal{O}\left(d + \frac{L_{\max}\Delta_0d}{p_{\mathrm{a}}\varepsilon\sqrt{n}}\right)$, and the expected number of gradient calculations per node equals $\mathcal{O}\left(m + \frac{L_{\max}\Delta_0\sqrt{m}}{p_{\mathrm{a}}\varepsilon\sqrt{n}}\right)$.

The convergence rate of DASHA-PP-FINITE-MVR is provided in Section E.5.

# 6.3 Stochastic Setting

We define $h^t \coloneqq \frac{1}{n} \sum_{i=1}^{n} h_i^t$.

Theorem 4. Suppose that Assumptions 1, 2, 3, 5, 6, 7 and 8 hold. Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$, $b \in \left(0, \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}} \right]$, stepsize $\gamma \leq \left(L + \left[ \frac{48\omega(2\omega + 1)}{np_{\mathrm{a}}^2} \left( \widehat{L}^2 + \frac{(1 - b)^2 L_\sigma^2}{B} \right) + \frac{12}{np_{\mathrm{a}}b} \left( \left(1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}} \right) \widehat{L}^2 + \frac{(1 - b)^2 L_\sigma^2}{B} \right) \right]^{1/2}\right)^{-1}$, and $g_i^0 = h_i^0$ for all $i \in [n]$ in Algorithm 1 (DASHA-PP-MVR).
Then

$$
\begin{array}{l} \mathrm {E} \left[ \left\| \nabla f (\widehat {x} ^ {T}) \right\| ^ {2} \right] \leq \frac {1}{T} \left[ \frac {2 \Delta_ {0}}{\gamma} + \frac {2}{b} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \left(\frac {3 2 b \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}}}\right) \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2}\right) \right] \\ + \left(\frac {4 8 b ^ {2} \omega (2 \omega + 1)}{p _ {\mathrm {a}} ^ {2}} + \frac {1 2 b}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{n B}. \end{array}
$$

In the next corollary, we choose the momentum $b$ and initialize the vectors $h_i^0$ to get an $\varepsilon$-solution.

Corollary 3. Suppose that the assumptions from Theorem 4 hold, momentum $b = \Theta \left( \min \left\{ \frac{p_{\mathrm{a}}}{\omega} \sqrt{\frac{n \varepsilon B}{\sigma^2}}, \frac{p_{\mathrm{a}} n \varepsilon B}{\sigma^2} \right\} \right)$, $\frac{\sigma^2}{n \varepsilon B} \geq 1$, $h_i^0 = \frac{1}{B_{\mathrm{init}}} \sum_{k=1}^{B_{\mathrm{init}}} \nabla f_i(x^0; \xi_{ik}^0)$ for all $i \in [n]$, and batch size $B_{\mathrm{init}} = \Theta \left( \frac{\sqrt{p_{\mathrm{a}}} B}{b} \right)$. Then Algorithm 1 (DASHA-PP-MVR) needs

$$
T := \mathcal {O} \left(\frac {\Delta_ {0}}{\varepsilon} \left[ L + \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {\sigma}{p _ {\mathrm {a}} \sqrt {\varepsilon} n} \left(\frac {\mathbb {1} _ {p _ {\mathrm {a}}} \widehat {L}}{\sqrt {B}} + \frac {L _ {\sigma}}{B}\right) \right] + \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon B}\right)
$$

communication rounds to get an $\varepsilon$-solution, and the number of stochastic gradient calculations per node equals $\mathcal{O}(B_{\mathrm{init}} + BT)$.
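To unpack the parameter choices in Corollary 3, the following sketch (illustrative constants, hidden $\Theta$-constants set to 1, not from the paper) computes the momentum $b$ and the initial batch size $B_{\mathrm{init}}$, and checks that $b$ stays in the range allowed by Theorem 4 for these values.

```python
import math

# Illustrative constants (hidden Theta-constants taken as 1):
sigma2, n, B, eps = 1.0, 100, 8, 1e-4
p_a, omega = 0.1, 9.0
assert sigma2 / (n * eps * B) >= 1   # the regime required by Corollary 3

# Momentum from Corollary 3:
b = min(p_a / omega * math.sqrt(n * eps * B / sigma2),
        p_a * n * eps * B / sigma2)
# For these values, b lies in the range (0, p_a / (2 - p_a)] required by Theorem 4:
assert 0 < b <= p_a / (2 - p_a)

# Initial batch size B_init = Theta(sqrt(p_a) * B / b):
B_init = math.ceil(math.sqrt(p_a) * B / b)
```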
The convergence rate of DASHA-SYNC-MVR, the state-of-the-art method without partial participation, equals $\mathcal{O}\left(\frac{\Delta_0}{\varepsilon}\left[L + \frac{\omega}{\sqrt{n}}\left(\widehat{L} +\frac{L_\sigma}{\sqrt{B}}\right) + \frac{\sigma}{\sqrt{\varepsilon n}}\frac{L_\sigma}{B}\right] + \frac{\sigma^2}{n\varepsilon B}\right)$. Similarly to Section 6.2, we see that in the regimes when $\widehat{L}\approx L_{\sigma}$, the third term w.r.t. $1 / \varepsilon^{3 / 2}$ can degrade by up to $\sqrt{B} /p_{\mathrm{a}}$. However, if we take $B = \mathcal{O}\left(L_{\sigma}^{2} / \widehat{L}^{2}\right)$, then the degradation of the third term will be at most $1 / p_{\mathrm{a}}$. We analyze this effect in Section C, and we check numerically in Section A that the degradation is up to $1 / p_{\mathrm{a}}$.

In the following corollary, we consider RandK compressors (see Definition 5) and show that, with a particular choice of parameters, up to Lipschitz constant factors, DASHA-PP-MVR gets the optimal oracle complexity and the SOTA communication complexity of the DASHA-SYNC-MVR method. Indeed, comparing the following result with (Tyurin and Richtárik, 2023, Corollary 6.9), one can see that we get a degradation of up to a $1 / p_{\mathrm{a}}$ factor, which is expected in the partial participation setting. Note that the complexities improve with the number of workers $n$.

Corollary 4. Suppose that the assumptions of Corollary 3 hold, batch size $B \leq \min \left\{\frac{\sigma}{p_{\mathrm{a}}\sqrt{\varepsilon}n}, \frac{L_{\sigma}^2}{\mathbb{1}_{p_{\mathrm{a}}}^2\widehat{L}^2}\right\}$, and we take RandK compressors with $K = \Theta\left(\frac{Bd\sqrt{\varepsilon n}}{\sigma}\right)$. Then the communication complexity equals
Then the communication complexity equals $\mathcal{O}\left(\frac{d\sigma}{\sqrt{p_{\mathrm{a}}}\sqrt{n\varepsilon}} + \frac{L_{\sigma}\Delta_0d}{p_{\mathrm{a}}\sqrt{n\varepsilon}}\right)$ , and the expected number of stochastic gradient calculations per node equals $\mathcal{O}\left(\frac{\sigma^2}{\sqrt{p_{\mathrm{a}}}n\varepsilon} + \frac{L_{\sigma}\Delta_0\sigma}{p_{\mathrm{a}}\varepsilon^{3/2}n}\right)$ .

We are aware that the initial batch size $B_{\mathrm{init}}$ can be suboptimal w.r.t. $\omega$ in DASHA-PP-MVR in some regimes (see also (Tyurin and Richtárik, 2023)). This is a side effect of mixing the variance reduction of stochastic gradients with compression. However, Corollary 4 reveals that we can escape these regimes by choosing the parameter $K$ of Rand $K$ compressors in a particular way. To complete the picture, we analyze the same phenomenon under the PL condition (see Section F) and provide a new method, DASHA-PP-SYNC-MVR (see Section G).

# Acknowledgements

The work of P. Richtárik and A. Tyurin was supported by the KAUST Baseline Research Scheme (KAUST BRF) and the KAUST Extreme Computing Research Center (KAUST ECRC), and the work of P. Richtárik was supported by the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI).

# References

Alistarh, D., Grubic, D., Li, J., Tomioka, R., and Vojnovic, M. (2017). QSGD: Communication-efficient SGD via gradient quantization and encoding. In Advances in Neural Information Processing Systems (NIPS), pages 1709-1720.
Arjevani, Y., Carmon, Y., Duchi, J. C., Foster, D. J., Srebro, N., and Woodworth, B. (2019). Lower bounds for non-convex stochastic optimization. arXiv preprint arXiv:1912.02365.
Beznosikov, A., Horvath, S., Richtárik, P., and Safaryan, M. (2020). On biased compression for distributed learning. arXiv preprint arXiv:2002.12410.
Carmon, Y., Duchi, J. C., Hinder, O., and Sidford, A. (2020). Lower bounds for finding stationary points I.
Mathematical Programming, 184(1):71-120. +Chang, C.-C. and Lin, C.-J. (2011). LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1-27. +Cutkosky, A. and Orabona, F. (2019). Momentum-based variance reduction in non-convex SGD. arXiv preprint arXiv:1905.10018. +Fang, C., Li, C. J., Lin, Z., and Zhang, T. (2018). SPIDER: Near-optimal non-convex optimization via stochastic path integrated differential estimator. In NeurIPS Information Processing Systems. +Goodfellow, I., Bengio, Y., Courville, A., and Bengio, Y. (2016). Deep learning, volume 1. MIT Press. +Gorbunov, E., Burlachenko, K., Li, Z., and Richtárik, P. (2021). MARINA: Faster non-convex distributed learning with compression. In 38th International Conference on Machine Learning. +Horvath, S., Ho, C.-Y., Horvath, L., Sahu, A. N., Canini, M., and Richtárik, P. (2019a). Natural compression for distributed deep learning. arXiv preprint arXiv:1905.10988. +Horváth, S., Kovalev, D., Mishchenko, K., Stich, S., and Richtárik, P. (2019b). Stochastic distributed learning with gradient quantization and variance reduction. arXiv preprint arXiv:1904.05115. +Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448-456. PMLR. +Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Dennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., et al. (2021). Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1-2):1-210. + +Karimireddy, S. P., Jaggi, M., Kale, S., Mohri, M., Reddi, S. J., Stich, S. U., and Suresh, A. T. (2020a). Mime: Mimicking centralized stochastic algorithms in federated learning. arXiv preprint arXiv:2008.03606. +Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., and Suresh, A. T. (2020b). 
SCAFFOLD: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, pages 5132-5143. PMLR.
Konečný, J., McMahan, H. B., Yu, F. X., Richtárik, P., Suresh, A. T., and Bacon, D. (2016). Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492.
Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., and Smith, V. (2020). Federated optimization in heterogeneous networks. Proceedings of Machine Learning and Systems, 2:429-450.
Li, Z., Bao, H., Zhang, X., and Richtárik, P. (2021a). PAGE: A simple and optimal probabilistic gradient estimator for nonconvex optimization. In International Conference on Machine Learning, pages 6286-6295. PMLR.
Li, Z., Hanzely, S., and Richtárik, P. (2021b). ZeroSARAH: Efficient nonconvex finite-sum optimization with zero full gradient computation. arXiv preprint arXiv:2103.01447.
Lin, Y., Han, S., Mao, H., Wang, Y., and Dally, W. J. (2017). Deep gradient compression: Reducing the communication bandwidth for distributed training. arXiv preprint arXiv:1712.01887.
McMahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273-1282. PMLR.
Mishchenko, K., Gorbunov, E., Takáč, M., and Richtárik, P. (2019). Distributed learning with compressed gradient differences. arXiv preprint arXiv:1901.09269.
Narayanan, D., Harlap, A., Phanishayee, A., Seshadri, V., Devanur, N. R., Ganger, G. R., Gibbons, P. B., and Zaharia, M. (2019). PipeDream: Generalized pipeline parallelism for DNN training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, pages 1-15.
Nesterov, Y. (2018). Lectures on convex optimization, volume 137. Springer.
Nguyen, L., Liu, J., Scheinberg, K., and Takáč, M. (2017). SARAH: A novel method for machine learning problems using stochastic recursive gradient.
In The 34th International Conference on Machine Learning.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. (2019). PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS).
Patel, K. K., Wang, L., Woodworth, B., Bullins, B., and Srebro, N. (2022). Towards optimal communication complexity in distributed non-convex optimization. In Advances in Neural Information Processing Systems.
Ramaswamy, S., Mathews, R., Rao, K., and Beaufays, F. (2019). Federated learning for emoji prediction in a mobile keyboard. arXiv preprint arXiv:1906.04329.
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. (2021). Zero-shot text-to-image generation. arXiv preprint arXiv:2102.12092.
Reddi, S., Charles, Z., Zaheer, M., Garrett, Z., Rush, K., Konečný, J., Kumar, S., and McMahan, H. B. (2020). Adaptive federated optimization. arXiv preprint arXiv:2003.00295.
Richtárik, P., Sokolov, I., and Fatkhullin, I. (2021). EF21: A new, simpler, theoretically better, and practically faster error feedback. In Neural Information Processing Systems, 2021.
Richtárik, P., Sokolov, I., Fatkhullin, I., Gasanov, E., Li, Z., and Gorbunov, E. (2022). 3PC: Three point compressors for communication-efficient distributed training and a better theory for lazy aggregation. arXiv preprint arXiv:2202.00998.

Sapio, A., Canini, M., Ho, C.-Y., Nelson, J., Kalnis, P., Kim, C., Krishnamurthy, A., Moshref, M., Ports, D. R., and Richtárik, P. (2019). Scaling distributed machine learning with in-network aggregation. arXiv preprint arXiv:1903.06701.
Stich, S. U., Cordonnier, J.-B., and Jaggi, M. (2018). Sparsified SGD with memory. Advances in Neural Information Processing Systems, 31.
Suresh, A. T., Sun, Z., Ro, J. H., and Yu, F. (2022). Correlated quantization for distributed mean estimation and optimization.
arXiv preprint arXiv:2203.04925. +Szlendak, R., Tyurin, A., and Richtárik, P. (2021). Permutation compressors for provably faster distributed nonconvex optimization. arXiv preprint arXiv:2110.03300. +Tyurin, A. and Richtárik, P. (2023). DASHA: Distributed nonconvex optimization with communication compression and optimal oracle complexity. International Conference on Learning Representations (ICLR). +Vogels, T., He, L., Koloskova, A., Karimireddy, S. P., Lin, T., Stich, S. U., and Jaggi, M. (2021). RelaySum for decentralized deep learning on heterogeneous data. Advances in Neural Information Processing Systems, 34. +Wangni, J., Wang, J., Liu, J., and Zhang, T. (2018). Gradient sparsification for communication-efficient distributed optimization. Advances in Neural Information Processing Systems, 31. +Xu, H., Ho, C.-Y., Abdelmoniem, A. M., Dutta, A., Bergou, E. H., Karatsenidis, K., Canini, M., and Kalnis, P. (2021). Grace: A compressed communication framework for distributed machine learning. In 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS), pages 561-572. IEEE. +Zhao, H., Burlachenko, K., Li, Z., and Richtárik, P. (2021a). Faster rates for compressed federated learning with client-variance reduction. arXiv preprint arXiv:2112.13097. +Zhao, H., Li, Z., and Richtárik, P. (2021b). FedPAGE: A fast local stochastic gradient method for communication-efficient federated learning. arXiv preprint arXiv:2108.04755. 
# Contents

1 Introduction
2 Optimization Problem

2.1 Unbiased Compressors
2.2 Nodes Partial Participation Assumptions

3 Motivation and Related Work
4 Contributions
5 Algorithm Description and Main Challenges Towards Partial Participation

6 Theorems

6.1 Gradient Setting
6.2 Finite-Sum Setting
6.3 Stochastic Setting

A Numerical Verification of Theoretical Dependencies
A.1 Experiments in Partial Participation Setting
B Original DASHA and DASHA-MVR Methods

C Problem of Estimating the Mean in the Partial Participation Setting
D Auxiliary Facts

D.1 Sampling Lemma
D.2 Compressors Facts

E Proofs of Theorems

E.1 Standard Lemmas in the Nonconvex Setting
E.2 Generic Lemmas
E.3 Proof for DASHA-PP
E.4 Proof for DASHA-PP-PAGE
E.5 Proof for DASHA-PP-FINITE-MVR
E.6 Proof for DASHA-PP-MVR

F Analysis of DASHA-PP under Polyak-Lojasiewicz Condition

F.1 Gradient Setting
F.2 Finite-Sum Setting
F.3 Stochastic Setting
F.4 Proofs of Theorems

F.4.1 Standard Lemma under Polyak-Lojasiewicz Condition
F.4.2 Generic Lemma
F.4.3 Proof for DASHA-PP under PL-condition
F.4.4 Proof for DASHA-PP-PAGE under PL-condition
F.4.5 Proof for DASHA-PP-MVR under PL-condition

G Description of DASHA-PP-SYNC-MVR

G.1 Proof for DASHA-PP-SYNC-MVR

# A Numerical Verification of Theoretical Dependencies

![](images/de94a5a9c758b2ab30eace78c81cb75bbf6c95eaece740d3b7b8c5df7982c84a.jpg)

![](images/cf6bbbf07cf9482b8d7b4b461143361e7510b84e1349f2a385e1fd580125780b.jpg)

Figure 1: Classification task with the real-sim dataset. (a) Finite-sum setting, $K = 500$ in $\operatorname{Rand}K$ . (b) Stochastic setting, $\sigma^2 / (n \varepsilon B) = 10000$ , and $K = 200$ in $\operatorname{Rand}K$ .

Our main goal is to verify the dependencies predicted by the theory. We compare DASHA-PP with DASHA. Clearly, DASHA-PP cannot generally perform better than DASHA.
In different settings, we verify that the bigger $p_{\mathrm{a}}$ , the closer DASHA-PP is to DASHA, i.e., DASHA-PP converges at most $1 / p_{\mathrm{a}}$ times slower.

In all experiments, we take the real-sim dataset, with dimension $d = 20,958$ and 72,309 samples, from the LIBSVM collection (Chang and Lin, 2011) (under the 3-clause BSD license), and randomly split it equally between $n = 100$ nodes, ignoring residual samples. In the finite-sum setting, we solve a classification problem with the functions

$$
f _ {i} (x) := \frac {1}{m} \sum_ {j = 1} ^ {m} \left(1 - \frac {1}{1 + \exp \left(y _ {i j} a _ {i j} ^ {\top} x\right)}\right) ^ {2}, \tag {11}
$$

where $a_{ij} \in \mathbb{R}^d$ is the feature vector of a sample on the $i^{\text{th}}$ node, $y_{ij} \in \{-1, 1\}$ is the corresponding label, and $m$ is the number of samples on the $i^{\text{th}}$ node, for all $i \in [n]$ . In the stochastic setting, we consider the functions

$$
f _ {i} \left(x _ {1}, x _ {2}\right) := \mathrm {E} _ {j \sim [ m ]} \left[ - \log \left(\frac {\exp \left(a _ {i j} ^ {\top} x _ {y _ {i j}}\right)}{\sum_ {y \in \{1 , 2 \}} \exp \left(a _ {i j} ^ {\top} x _ {y}\right)}\right) + \lambda \sum_ {y \in \{1, 2 \}} \sum_ {k = 1} ^ {d} \frac {\left\{x _ {y} \right\} _ {k} ^ {2}}{1 + \left\{x _ {y} \right\} _ {k} ^ {2}} \right], \tag {12}
$$

where $x_{1}, x_{2} \in \mathbb{R}^{d}$ , $\{\cdot\}_{k}$ is an indexing operation, $a_{ij} \in \mathbb{R}^{d}$ is the feature vector of a sample on the $i^{\text{th}}$ node, $y_{ij} \in \{1, 2\}$ is the corresponding label, $m$ is the number of samples on the $i^{\text{th}}$ node, and $\lambda = 0.001$ , for all $i \in [n]$ .

The code was written in Python 3.6.8 using PyTorch 1.9 (Paszke et al., 2019). A distributed environment was emulated on a machine with an Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz and 64 cores.
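For concreteness, the per-node finite-sum loss (11) can be sketched in a few lines of pure Python; the function name and data layout below are our own illustration, not taken from the paper's code:

```python
import math

def f_i(x, A, y):
    """Per-node loss from (11): (1/m) * sum_j (1 - 1/(1 + exp(y_ij * <a_ij, x>)))^2.

    A is a list of m feature vectors (lists of d floats), y is a list of
    m labels in {-1, +1}, and x is the parameter vector (list of d floats).
    """
    total = 0.0
    for a_ij, y_ij in zip(A, y):
        t = y_ij * sum(a * xk for a, xk in zip(a_ij, x))
        total += (1.0 - 1.0 / (1.0 + math.exp(t))) ** 2
    return total / len(A)
```

A quick sanity check: at $x = 0$ every summand equals $(1 - 1/2)^2 = 1/4$ , so $f_i(0) = 0.25$ regardless of the data.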
We use the standard experimental setting, where all parameters except the step sizes are taken as suggested by the theory. Step sizes are fine-tuned over the set $\{2^i \mid i \in [-10, 10]\}$ . We emulate the partial participation setting using $s$ -nice sampling with the number of nodes $n = 100$ . We consider the Rand $K$ compressor and take the batch size $B = 1$ . We plot the norm of the gradient against the number of communication rounds.

In the finite-sum (Figure 1a) and stochastic (Figure 1b) settings, we see that the closer the probability $p_{\mathrm{a}} = s / n$ is to 1, the closer DASHA-PP is to DASHA. Moreover, DASHA-PP with $s = 10$ and $s = 1$ converges approximately 10 ( $= 1 / p_{\mathrm{a}}$ ) and 100 ( $= 1 / p_{\mathrm{a}}$ ) times slower, respectively. Our theory predicts this behavior.

# A.1 Experiments in Partial Participation Setting

In these experiments, we compare our new algorithm DASHA-PP with the previous baselines MARINA and FRECON in the partial participation setting. We consider MARINA and FRECON because they are the previous state-of-the-art methods in the partial participation setting with compression. We investigate the same optimization problem and setup as in Section A of the paper. All methods use the RandK compressor in these experiments.

1. Finite-Sum Setting. We now consider the function from (11). In Figures 2 and 3, we compare all three methods in the finite-sum setting on two different datasets: real-sim and MNIST. The parameter $s$ is the number of clients participating in each round; they are selected randomly using $s$ -nice sampling (the server chooses $s$ nodes uniformly without replacement). We can see that DASHA-PP converges faster than MARINA. Since FRECON does not support variance reduction of stochastic gradients, it converges to less accurate solutions.
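All of the methods above communicate through the RandK compressor (formally given in Definition 5): keep $K$ coordinates chosen uniformly without replacement, scale them by $d/K$ for unbiasedness, and zero out the rest. A minimal sketch, with our own function name, is:

```python
import random

def rand_k(x, K, rng=random):
    """RandK compressor (Definition 5): keep K coordinates chosen uniformly
    without replacement, scale them by d/K so that E[C(x)] = x, zero the rest."""
    d = len(x)
    kept = set(rng.sample(range(d), K))
    return [(d / K) * x[j] if j in kept else 0.0 for j in range(d)]
```

Unbiasedness follows because each coordinate survives with probability $K/d$ and is scaled by $d/K$ .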
![](images/9bab16d39593416deb4b9f9659caea667d1b03019ec3b2102869dda1138f5c9f.jpg)

![](images/fccde22bb0e08b51b221ccec30b5049beedd9154e636929da4ea8590f1999f18.jpg)

![](images/6bec214eb7f54379db7d9e29f5f8a63a5df569c01e1f4b8baa4d091d7d4db974.jpg)

Figure 2: Classification task on real-sim. (a) $1\%$ , (b) $10\%$ , and (c) $90\%$ of nodes participating.

![](images/aaaeaa0f75300dde2297e4ebb88985e445b04e159e0b1ed89942fd8a2d36dbab.jpg)

![](images/3667e2cf3b1416459d8f54181b8efa1aed0066f05b2d68a5205aa3010acbc485.jpg)

![](images/eb0a6bf3c3d1cc78bb7ee5edbed2e058c303aef7948182e270afa3753b10749f.jpg)

Figure 3: Classification task on MNIST. (a) $10\%$ , (b) $50\%$ , and (c) $90\%$ of nodes participating.

2. Stochastic Setting. In Figures 4 and 5, we consider the stochastic setting with the function from (12). We can see that DASHA-PP converges to high-accuracy solutions, unlike FRECON. Moreover, DASHA-PP improves on the convergence rates of MARINA.
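The $s$ -nice node sampling used in these experiments (the server draws a uniformly random subset of $s$ of the $n$ nodes, without replacement) can be sketched in one line; the helper below is our own illustration, not the paper's code:

```python
import random

def s_nice_sample(n, s, rng=random):
    """Return a uniformly random subset of s node indices out of {0, ..., n-1}.

    Each node participates with probability p_a = s/n, and each pair of distinct
    nodes participates jointly with probability p_aa = s(s-1)/(n(n-1)).
    """
    return set(rng.sample(range(n), s))
```

For example, $s = 10$ and $n = 100$ corresponds to the "$10\%$ of nodes participating" panels above, i.e., $p_{\mathrm{a}} = 0.1$ .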
![](images/a3ca199962ecab21e388f90fb2603ed881dbdac330e1613e757fea37b85c5dae.jpg)

![](images/6a94f15ff1967ac3bba5f5e81d4ccb89bfd3dd9e806a03eed2742595c477f2f7.jpg)

![](images/d5dec833c627dc087762a53636d450529bd5696e6d07f62f9e922d1b89532865.jpg)

Figure 4: Classification task on real-sim. (a) $10\%$ , (b) $50\%$ , and (c) $100\%$ of nodes participating.

![](images/215305f3056c6cfcc53ad641ba9db4c848ad530d04ad37133489c97c8245bcd8.jpg)

![](images/55603aee039945bf1bae254c0f109f5dd73b395267748f225f6c71a27c380a5c.jpg)

![](images/9bc5241970b5af767aa139cdb5da47258066bff7d09e114ac418cac07679a1d7.jpg)

Figure 5: Classification task on MNIST. (a) $10\%$ , (b) $50\%$ , and (c) $100\%$ of nodes participating.

# B Original DASHA and DASHA-MVR Methods

To simplify the discussion and explanation in the main part, we present the algorithms from (Tyurin and Richtárik, 2023).

# Algorithm 6 DASHA

1: Input: starting point $x^0 \in \mathbb{R}^d$ , stepsize $\gamma > 0$ , momentum $a \in (0,1]$ , number of iterations $T \geq 1$
2: Initialize $g_i^0 \in \mathbb{R}^d$ on the nodes and $g^0 = \frac{1}{n}\sum_{i=1}^{n} g_i^0$ on the server
3: for $t = 0,1,\ldots ,T - 1$ do
4: $x^{t + 1} = x^t -\gamma g^t$
5: Broadcast $x^{t + 1}$ and $x^t$
6: for $i = 1,\dots ,n$ in parallel do
7: $m_{i}^{t + 1} = \mathcal{C}_{i}\left(\nabla f_{i}(x^{t + 1}) - \nabla f_{i}(x^{t}) - a(g_{i}^{t} - \nabla f_{i}(x^{t}))\right)$
8: $g_{i}^{t + 1} = g_{i}^{t} + m_{i}^{t + 1}$
9: Send $m_i^{t+1}$ to the server
10: end for
11: $g^{t + 1} = g^t +\frac{1}{n}\sum_{i = 1}^n m_i^{t + 1}$
12: end for
13: Output: $\hat{x}^T$ chosen uniformly at random from $\{x^t\}_{t=0}^{T-1}$

# Algorithm 7 DASHA-MVR (with batch size $B = 1$ )

1: Input: starting point $x^0 \in \mathbb{R}^d$ , stepsize $\gamma > 0$ , momentum $a, b \in (0,1]$ , number of iterations $T \geq 1$
2: Initialize $g_i^0 \in \mathbb{R}^d$ on the
nodes and $g^0 = \frac{1}{n}\sum_{i=1}^{n} g_i^0$ on the server
3: for $t = 0,1,\ldots ,T - 1$ do
4: $x^{t + 1} = x^t -\gamma g^t$
5: Broadcast $x^{t + 1}$ and $x^t$
6: for $i = 1,\dots ,n$ in parallel do
7: $h_i^{t + 1} = \nabla f_i(x^{t + 1};\xi_i^{t + 1}) + (1 - b)(h_i^t -\nabla f_i(x^t;\xi_i^{t + 1}))$ , $\xi_{i}^{t + 1}\sim \mathcal{D}_{i}$
8: $m_{i}^{t + 1} = \mathcal{C}_{i}\left(h_{i}^{t + 1} - h_{i}^{t} - a\left(g_{i}^{t} - h_{i}^{t}\right)\right)$
9: $g_{i}^{t + 1} = g_{i}^{t} + m_{i}^{t + 1}$
10: Send $m_i^{t + 1}$ to the server
11: end for
12: $g^{t + 1} = g^t +\frac{1}{n}\sum_{i = 1}^n m_i^{t + 1}$
13: end for
14: Output: $\hat{x}^T$ chosen uniformly at random from $\{x^t\}_{t=0}^{T-1}$

# C Problem of Estimating the Mean in the Partial Participation Setting

We now provide an example explaining why only the choice of $B = \mathcal{O}\left(\min \left\{\frac{1}{p_{\mathrm{a}}}\sqrt{\frac{m}{n}},\frac{L_{\max}^{2}}{\mathbb{1}_{p_{\mathrm{a}}}^{2}\hat{L}^{2}}\right\}\right)$ and $B = \mathcal{O}\left(\min \left\{\frac{\sigma}{p_{\mathrm{a}}\sqrt{\varepsilon n}},\frac{L_{\sigma}^{2}}{\mathbb{1}_{p_{\mathrm{a}}}^{2}\hat{L}^{2}}\right\}\right)$ in DASHA-PP-PAGE and DASHA-PP-MVR, respectively, guarantees a degeneration of at most $1 / p_{\mathrm{a}}$ . This is surprising because methods with variance reduction of stochastic gradients (Li et al., 2021a; Tyurin and Richtárik, 2023) can take batch sizes $B = \mathcal{O}\left(\sqrt{\frac{m}{n}}\right)$ and $B = \mathcal{O}\left(\frac{\sigma}{\sqrt{\varepsilon n}}\right)$ and still guarantee optimality. Note that the smaller the batch size $B$ , the more the server and the nodes have to communicate to obtain an $\varepsilon$-solution.

Let us consider the task of estimating the mean of vectors in the distributed setting. Suppose that we have $n$ nodes, and each of them contains $m$ vectors $\{x_{ij}\}_{j=1}^{m}$ , where $x_{ij} \in \mathbb{R}^d$ for all $i \in [n], j \in [m]$ .
First, suppose that each node samples a mini-batch $I^i$ of size $B$ with replacement and sends it to the server. The server then calculates the mean of the mini-batches from the nodes. One can easily show that the variance of this estimator is

$$
\begin{array}{l} \operatorname {E} \left[ \left\| \frac {1}{n B} \sum_ {i = 1} ^ {n} \sum_ {j \in I ^ {i}} x _ {i j} - \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} x _ {i j} \right\| ^ {2} \right] \tag {13} \\ = \frac {1}{n B} \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| x _ {i j} - \frac {1}{m} \sum_ {j = 1} ^ {m} x _ {i j} \right\| ^ {2}. \\ \end{array}
$$

Next, we consider the same task in the partial participation setting with $s$ -nice sampling, i.e., we sample a random set $S \subset [n]$ of $s \in [n]$ nodes without replacement and receive the mini-batches only from the sampled nodes. Such a sampling of nodes satisfies Assumption 8 with $p_{\mathrm{a}} = s / n$ and $p_{\mathrm{aa}} = s(s - 1) / (n(n - 1))$ . In this case, the variance of the estimator (see Lemma 1 with $r_i = 0$ and $s_i = \sum_{j \in I^i} x_{ij}$ ) is

$$
\begin{array}{l} \operatorname {E} \left[ \left\| \frac {1}{s B} \sum_ {i \in S} \sum_ {j \in I ^ {i}} x _ {i j} - \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} x _ {i j} \right\| ^ {2} \right] \tag {14} \\ = \frac {1}{s B} \underbrace {\frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| x _ {i j} - \frac {1}{m} \sum_ {j = 1} ^ {m} x _ {i j} \right\| ^ {2}} _ {\mathcal {L} _ {\max} ^ {2}} \\ + \frac {n - s}{s (n - 1)} \underbrace {\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \frac {1}{m} \sum_ {j = 1} ^ {m} x _ {i j} - \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} x _ {i j} \right\| ^ {2}} _ {\hat {\mathcal {L}} ^ {2}}. \\ \end{array}
$$

Let us assume that $s \leq n/2$ .
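Since (14) is an exact identity under $s$ -nice sampling, it can be verified by brute-force enumeration on small instances: enumerate every subset $S$ of size $s$ and every with-replacement mini-batch for each sampled node, all of which are equiprobable. The script below is our own illustration (scalar case, $d = 1$ , hypothetical function names):

```python
from itertools import combinations, product

def variance_partial(x, s, B):
    """Exact variance of the estimator in (14) for scalar data x[i][j],
    under s-nice node sampling and size-B with-replacement mini-batches."""
    n, m = len(x), len(x[0])
    mu = sum(sum(row) for row in x) / (n * m)
    total, count = 0.0, 0
    for S in combinations(range(n), s):
        # every joint choice of mini-batches for the sampled nodes is equiprobable
        for batches in product(product(range(m), repeat=B), repeat=s):
            est = sum(x[i][j] for i, I in zip(S, batches) for j in I) / (s * B)
            total += (est - mu) ** 2
            count += 1
    return total / count

def variance_formula(x, s, B):
    """Right-hand side of (14): L_max^2 / (sB) + (n-s)/(s(n-1)) * L_hat^2."""
    n, m = len(x), len(x[0])
    mu = sum(sum(row) for row in x) / (n * m)
    means = [sum(row) / m for row in x]
    L_max2 = sum((x[i][j] - means[i]) ** 2 for i in range(n) for j in range(m)) / (n * m)
    L_hat2 = sum((mi - mu) ** 2 for mi in means) / n
    return L_max2 / (s * B) + (n - s) / (s * (n - 1)) * L_hat2
```

For any small instance, the two functions agree up to floating-point error, and setting $s = n$ recovers (13) since the $\widehat{\mathcal{L}}^2$ term vanishes.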
Note that (13) keeps decreasing for any $B \geq 1$ , while (14) decreases only as long as $B = \mathcal{O}\left(\mathcal{L}_{\max}^2/\widehat{\mathcal{L}}^2\right)$ . In other words, for large enough $B$ , the variance in (14) does not significantly improve with the growth of $B$ due to the term $\widehat{\mathcal{L}}^2$ . In our proofs, the variance from (14) naturally appears due to partial participation, and we get the same effect. As mentioned in Sections 6.2 and 6.3, this can be seen in our convergence rate bounds.

# D Auxiliary facts

We list auxiliary facts that we use in our proofs:

1. For all $x,y\in \mathbb{R}^d$ , we have

$$
\left\| x + y \right\| ^ {2} \leq 2 \left\| x \right\| ^ {2} + 2 \left\| y \right\| ^ {2}. \tag {15}
$$

2. For any random vector $\xi \in \mathbb{R}^d$ , we have

$$
\operatorname {E} \left[ \| \xi \| ^ {2} \right] = \operatorname {E} \left[ \| \xi - \operatorname {E} [ \xi ] \| ^ {2} \right] + \| \operatorname {E} [ \xi ] \| ^ {2}. \tag {16}
$$

# D.1 Sampling Lemma

This section provides a lemma that we use regularly in our proofs; it is useful for samplings that satisfy Assumption 8.

Lemma 1. Suppose that a set $S$ is a random subset of the set $[n]$ such that

1. $\mathbf{Prob}(i\in S) = p_{\mathrm{a}},\quad \forall i\in [n],$
2. $\mathbf{Prob}(i\in S,j\in S) = p_{\mathrm{aa}},\quad \forall i\neq j\in [n],$
3. $p_{\mathrm{aa}} \leq p_{\mathrm{a}}^{2}$ ,

where $p_{\mathrm{a}} \in (0,1]$ and $p_{\mathrm{aa}} \in [0,1]$ . Let us take independent random vectors $s_i \in \mathbb{R}^d$ for all $i \in [n]$ , nonrandom vectors $r_i \in \mathbb{R}^d$ for all $i \in [n]$ , and random vectors

$$
v _ {i} = \left\{ \begin{array}{l} r _ {i} + \frac {1}{p _ {\mathrm {a}}} s _ {i}, i \in S, \\ r _ {i}, i \notin S, \end{array} \right.
$$

then

$$
\begin{array}{l} \operatorname {E} \left[ \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} v _ {i} - \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} v _ {i} \right] \right\| ^ {2} \right] \\ = \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \operatorname {E} \left[ \left\| s _ {i} - \operatorname {E} [ s _ {i} ] \right\| ^ {2} \right] + \frac {p _ {\mathrm {a}} - p _ {\mathrm {aa}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \operatorname {E} [ s _ {i} ] \right\| ^ {2} + \frac {p _ {\mathrm {aa}} - p _ {\mathrm {a}} ^ {2}}{p _ {\mathrm {a}} ^ {2}} \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \operatorname {E} [ s _ {i} ] \right\| ^ {2} \\ \leq \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \operatorname {E} \left[ \| s _ {i} - \operatorname {E} [ s _ {i} ] \| ^ {2} \right] + \frac {p _ {\mathrm {a}} - p _ {\mathrm {aa}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \| \operatorname {E} [ s _ {i} ] \| ^ {2}. \\ \end{array}
$$

Proof. Let us define additional constants $p_{\mathrm{an}}$ and $p_{\mathrm{nn}}$ such that

1. $\mathbf{Prob}(i\in S,j\notin S) = p_{\mathrm{an}},\quad \forall i\neq j\in [n],$
2. $\mathbf{Prob}(i\notin S,j\notin S) = p_{\mathrm{nn}},\quad \forall i\neq j\in [n].$

Note that

$$
p _ {\mathrm {a}} = p _ {\mathrm {a a}} + p _ {\mathrm {a n}} \tag {17}
$$

and

$$
p _ {\mathrm {n n}} = 1 - p _ {\mathrm {a a}} - 2 p _ {\mathrm {a n}}.
\tag {18} +$$ + +Using the law of total expectation and + +$$ +\operatorname {E} \left[ v _ {i} \right] = p _ {\mathrm {a}} \left(r _ {i} + \operatorname {E} \left[ \frac {1}{p _ {\mathrm {a}}} s _ {i} \right]\right) + (1 - p _ {\mathrm {a}}) r _ {i} = r _ {i} + \operatorname {E} \left[ s _ {i} \right], +$$ + +we have + +$$ +\begin{array}{l} \mathrm {E} \left[ \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} v _ {i} - \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} v _ {i} \right] \right\| ^ {2} \right] \\ = \frac {1}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathrm {E} \left[ \| v _ {i} - \left(r _ {i} + \mathrm {E} [ s _ {i} ]\right) \| ^ {2} \right] \\ + \frac {1}{n ^ {2}} \sum_ {i \neq j} ^ {n} \operatorname {E} \left[ \langle v _ {i} - (r _ {i} + \operatorname {E} [ s _ {i} ]), v _ {j} - (r _ {j} + \operatorname {E} [ s _ {j} ]) \rangle \right] \\ = \frac {p _ {\mathrm {a}}}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathrm {E} \left[ \left\| r _ {i} + \frac {1}{p _ {\mathrm {a}}} s _ {i} - \left(r _ {i} + \mathrm {E} [ s _ {i} ]\right) \right\| ^ {2} \right] \\ + \frac {1 - p _ {\mathrm {a}}}{n ^ {2}} \sum_ {i = 1} ^ {n} \| r _ {i} - \left(r _ {i} + \operatorname {E} [ s _ {i} ]\right) \| ^ {2} \\ + \frac {p _ {\mathrm {a a}}}{n ^ {2}} \sum_ {i \neq j} ^ {n} \mathrm {E} \left[ \left\langle r _ {i} + \frac {1}{p _ {\mathrm {a}}} s _ {i} - (r _ {i} + \mathrm {E} [ s _ {i} ]), r _ {j} + \frac {1}{p _ {\mathrm {a}}} s _ {j} - (r _ {j} + \mathrm {E} [ s _ {j} ]) \right\rangle \right] \\ + \frac {2 p _ {\mathrm {a n}}}{n ^ {2}} \sum_ {i \neq j} ^ {n} \mathrm {E} \left[ \left\langle r _ {i} + \frac {1}{p _ {\mathrm {a}}} s _ {i} - (r _ {i} + \mathrm {E} [ s _ {i} ]), r _ {j} - (r _ {j} + \mathrm {E} [ s _ {j} ]) \right\rangle \right] \\ + \frac {p _ {\mathrm {n n}}}{n ^ {2}} \sum_ {i \neq j} ^ {n} \left\langle r _ {i} - \left(r _ {i} + \operatorname {E} \left[ s _ {i} \right]\right), r _ {j} - \left(r _ {j} + \operatorname {E} \left[ s _ {j} \right]\right) \right\rangle . 
\\ \end{array} +$$ + +From the independence of random vectors $s_i$ , we obtain + +$$ +\begin{array}{l} \mathrm {E} \left[ \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} v _ {i} - \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} v _ {i} \right] \right\| ^ {2} \right] \\ = \frac {p _ {\mathrm {a}}}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathrm {E} \left[ \left\| \frac {1}{p _ {\mathrm {a}}} s _ {i} - \mathrm {E} [ s _ {i} ] \right\| ^ {2} \right] \\ + \frac {1 - p _ {\mathrm {a}}}{n ^ {2}} \sum_ {i = 1} ^ {n} \| \operatorname {E} [ s _ {i} ] \| ^ {2} \\ + \frac {p _ {\mathrm {a a}} (1 - p _ {\mathrm {a}}) ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i \neq j} ^ {n} \langle \mathrm {E} [ s _ {i} ], \mathrm {E} [ s _ {j} ] \rangle \\ + \frac {2 p _ {\mathrm {a n}} (p _ {\mathrm {a}} - 1)}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i \neq j} ^ {n} \langle \operatorname {E} [ s _ {i} ], \operatorname {E} [ s _ {j} ] \rangle \\ + \frac {p _ {\mathrm {n n}}}{n ^ {2}} \sum_ {i \neq j} ^ {n} \left\langle \operatorname {E} \left[ s _ {i} \right], \operatorname {E} \left[ s _ {j} \right] \right\rangle . 
\\ \end{array}
$$

Using (17) and (18), we have

$$
\begin{array}{l} \mathrm {E} \left[ \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} v _ {i} - \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} v _ {i} \right] \right\| ^ {2} \right] \\ = \frac {p _ {\mathrm {a}}}{n ^ {2}} \sum_ {i = 1} ^ {n} \operatorname {E} \left[ \left\| \frac {1}{p _ {\mathrm {a}}} s _ {i} - \operatorname {E} [ s _ {i} ] \right\| ^ {2} \right] + \frac {1 - p _ {\mathrm {a}}}{n ^ {2}} \sum_ {i = 1} ^ {n} \| \operatorname {E} [ s _ {i} ] \| ^ {2} + \frac {p _ {\mathrm {aa}} - p _ {\mathrm {a}} ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i \neq j} ^ {n} \langle \operatorname {E} [ s _ {i} ], \operatorname {E} [ s _ {j} ] \rangle \\ \stackrel {(1 6)} {=} \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \operatorname {E} \left[ \| s _ {i} - \operatorname {E} [ s _ {i} ] \| ^ {2} \right] + \frac {1 - p _ {\mathrm {a}}}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \| \operatorname {E} [ s _ {i} ] \| ^ {2} + \frac {p _ {\mathrm {aa}} - p _ {\mathrm {a}} ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i \neq j} ^ {n} \langle \operatorname {E} [ s _ {i} ], \operatorname {E} [ s _ {j} ] \rangle \\ = \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} \left[ \| s _ {i} - \mathrm {E} [ s _ {i} ] \| ^ {2} \right] + \frac {p _ {\mathrm {a}} - p _ {\mathrm {aa}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \| \mathrm {E} [ s _ {i} ] \| ^ {2} + \frac {p _ {\mathrm {aa}} - p _ {\mathrm {a}} ^ {2}}{p _ {\mathrm {a}} ^ {2}} \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \operatorname {E} [ s _ {i} ] \right\| ^ {2}.
\\ \end{array}
$$

Finally, using that $p_{\mathrm{aa}} \leq p_{\mathrm{a}}^2$ , we have

$$
\begin{array}{l} \mathrm {E} \left[ \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} v _ {i} - \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} v _ {i} \right] \right\| ^ {2} \right] \\ \leq \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \operatorname {E} \left[ \| s _ {i} - \operatorname {E} [ s _ {i} ] \| ^ {2} \right] + \frac {p _ {\mathrm {a}} - p _ {\mathrm {aa}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \| \operatorname {E} [ s _ {i} ] \| ^ {2}. \\ \end{array}
$$

# D.2 Compressors Facts

We define the Rand $K$ compressor, which chooses $K$ coordinates without replacement, scales them by a constant factor to preserve unbiasedness, and zeroes out the other coordinates.

Definition 5. Let us take a random subset $S$ from $[d]$ , $|S| = K$ , $K \in [d]$ . We say that a stochastic mapping $\mathcal{C} : \mathbb{R}^d \to \mathbb{R}^d$ is Rand $K$ if

$$
\mathcal {C} (x) = \frac {d}{K} \sum_ {j \in S} x _ {j} e _ {j},
$$

where $\{e_i\}_{i = 1}^d$ is the standard unit basis.

Theorem 6. If $\mathcal{C}$ is RandK, then $\mathcal{C} \in \mathbb{U}\left(\frac{d}{K} - 1\right)$ .

See the proof in (Beznosikov et al., 2020).

# E Proofs of Theorems

There are three different sources of randomness in Algorithm 1: the first one from the vectors $\{k_i^{t + 1}\}_{i = 1}^n$ , the second one from the compressors $\{\mathcal{C}_i\}_{i = 1}^n$ , and the third one from the availability of nodes. We define $\mathrm{E}_k[\cdot ]$ , $\mathrm{E}_{\mathcal{C}}[\cdot ]$ , and $\mathrm{E}_{p_{\mathrm{a}}}[\cdot ]$ to be conditional expectations w.r.t. $\{k_i^{t + 1}\}_{i = 1}^n$ , $\{\mathcal{C}_i\}_{i = 1}^n$ , and the availability, respectively, conditioned on all previous randomness. Moreover, we define $\mathrm{E}_{t + 1}[\cdot ]$ to be a conditional expectation w.r.t.
all randomness in iteration $t + 1$ conditioned on all previous randomness. Note that $\mathrm{E}_{t + 1}[\cdot ] = \mathrm{E}_k[\mathrm{E}_{\mathcal{C}}[\mathrm{E}_{p_{\mathrm{a}}}[\cdot ]]]$ . + +In the case of DASHA-PP-PAGE, there are two different sources of randomness coming from $\{k_i^{t + 1}\}_{i = 1}^n$ . We define $\mathrm{E}_{p_{\mathrm{page}}}$ and $\mathrm{E}_B$ to be conditional expectations w.r.t. the probabilistic switching and the mini-batch indices $I_{i}^{t}$ , respectively, conditioned on all previous randomness. Note that $\mathrm{E}_{t + 1}\left[\cdot \right] = \mathrm{E}_B\left[\mathrm{E}_{\mathcal{C}}\left[\mathrm{E}_{p_{\mathrm{a}}}\left[\mathrm{E}_{p_{\mathrm{page}}}\left[\cdot \right]\right]\right]\right]$ and $\mathrm{E}_{t + 1}\left[\cdot \right] = \mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{page}}}\left[\mathrm{E}_{\mathcal{C}}\left[\mathrm{E}_{p_{\mathrm{a}}}\left[\cdot \right]\right]\right]\right]$ . + +# E.1 Standard Lemmas in the Nonconvex Setting + +We begin with standard lemmas from nonconvex optimization. + +Lemma 2. Suppose that Assumption 2 holds and let $x^{t + 1} = x^t - \gamma g^t$ . Then for any $g^t \in \mathbb{R}^d$ and $\gamma > 0$ , we have + +$$ +f \left(x ^ {t + 1}\right) \leq f \left(x ^ {t}\right) - \frac {\gamma}{2} \left\| \nabla f \left(x ^ {t}\right) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {\gamma}{2} \left\| g ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}. \tag {19} +$$ + +Proof. Using $L$ -smoothness, we have + +$$ +\begin{array}{l} f (x ^ {t + 1}) \leq f (x ^ {t}) + \left\langle \nabla f (x ^ {t}), x ^ {t + 1} - x ^ {t} \right\rangle + \frac {L}{2} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ = f (x ^ {t}) - \gamma \left\langle \nabla f (x ^ {t}), g ^ {t} \right\rangle + \frac {L}{2} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2}.
\\ \end{array} +$$ + +Next, due to $-\langle x,y\rangle = \frac{1}{2}\| x - y\|^2 -\frac{1}{2}\| x\|^2 -\frac{1}{2}\| y\|^2$ , we obtain + +$$ +f (x ^ {t + 1}) \leq f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {\gamma}{2} \left\| g ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. +$$ + +![](images/d4167fec027af630c53d9f92961f97b643157df7c0b9c35e2963f3896edffc57.jpg) + +Lemma 3. Suppose that Assumption 1 holds and + +$$ +\operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \gamma \Psi^ {t + 1} \leq \operatorname {E} \left[ f \left(x ^ {t}\right) \right] - \frac {\gamma}{2} \operatorname {E} \left[ \left\| \nabla f \left(x ^ {t}\right) \right\| ^ {2} \right] + \gamma \Psi^ {t} + \gamma C, \tag {20} +$$ + +where $\Psi^t$ is a sequence of numbers, $\Psi^t \geq 0$ for all $t \in [T]$ , constant $C \geq 0$ , and constant $\gamma > 0$ . Then + +$$ +\mathrm {E} \left[ \left\| \nabla f \left(\widehat {x} ^ {T}\right) \right\| ^ {2} \right] \leq \frac {2 \Delta_ {0}}{\gamma T} + \frac {2 \Psi^ {0}}{T} + 2 C, \tag {21} +$$ + +where the point $\widehat{x}^T$ is chosen uniformly at random from the set of points $\{x^t\}_{t=0}^{T-1}$ . + +Proof. By unrolling (20) for $t$ from 0 to $T - 1$ , we obtain + +$$ +\frac {\gamma}{2} \sum_ {t = 0} ^ {T - 1} \mathrm {E} \left[ \left\| \nabla f (x ^ {t}) \right\| ^ {2} \right] + \mathrm {E} \left[ f (x ^ {T}) \right] + \gamma \Psi^ {T} \leq f (x ^ {0}) + \gamma \Psi^ {0} + \gamma T C. +$$ + +We subtract $f^{*}$ , divide the inequality by $\frac{\gamma T}{2}$ , and take into account that $f(x) \geq f^{*}$ for all $x \in \mathbb{R}^d$ , and $\Psi^t \geq 0$ for all $t \in [T]$ , to get the following inequality: + +$$ +\frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathrm {E} \left[ \left\| \nabla f (x ^ {t}) \right\| ^ {2} \right] \leq \frac {2 \Delta_ {0}}{\gamma T} + \frac {2 \Psi^ {0}}{T} + 2 C.
+$$ + +It remains to use the fact that $\widehat{x}^T$ is chosen uniformly at random from $\{x^t\}_{t=0}^{T-1}$ to complete the proof of the lemma. + +![](images/63925d81e1cb23f83c4139d3ddc581aaceea6172e880baa27d70c55a27f82495.jpg) + +Lemma 4. If $0 < \gamma \leq (L + \sqrt{A})^{-1}$ , $L > 0$ , and $A \geq 0$ , then + +$$ +\frac {1}{2 \gamma} - \frac {L}{2} - \frac {\gamma A}{2} \geq 0. +$$ + +The lemma can be verified by direct calculation. + +# E.2 Generic Lemmas + +Lemma 5. Suppose that Assumptions 7 and 8 hold, and consider the sequences $g_{i}^{t + 1}$ , $h_{i}^{t + 1}$ , and $k_{i}^{t + 1}$ from Algorithm 1. Then + +$$ +\begin{array}{l} \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] \right] \\ \leq \frac {2 \omega}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} + \frac {a ^ {2} \left(\left(2 \omega + 1\right) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2}, \tag {22} \\ \end{array} +$$ + +and + +$$ +\begin{array}{l} \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \right] \\ \leq \frac {2 \omega}{p _ {\mathrm {a}}} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} + \left(\frac {a ^ {2} (2 \omega + 1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - a) ^ {2}\right) \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \quad \forall i \in [ n ]. \tag {23} \\ \end{array} +$$ + +Proof.
First, we estimate $\operatorname{E}_{\mathcal{C}}\left[\operatorname{E}_{p_{\mathrm{a}}}\left[\left\|g^{t+1}-h^{t+1}\right\|^2\right]\right]$ : + +$$ +\begin{array}{l} \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] \right] \\ = \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} - \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ g ^ {t + 1} - h ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] + \left\| \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ g ^ {t + 1} - h ^ {t + 1} \right] \right] \right\| ^ {2}, \\ \end{array} +$$ + +where we used (16). Due to Assumption 8, we have + +$$ +\begin{array}{l} \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ g _ {i} ^ {t + 1} \right] \right] \\ = p _ {\mathrm {a}} \mathrm {E} _ {\mathcal {C}} \left[ g _ {i} ^ {t} + \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) \right] + (1 - p _ {\mathrm {a}}) g _ {i} ^ {t} \\ = g _ {i} ^ {t} + p _ {\mathrm {a}} \mathrm {E} _ {\mathcal {C}} \left[ \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) \right] \\ = g _ {i} ^ {t} + k _ {i} ^ {t + 1} - a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right), \\ \end{array} +$$ + +and + +$$ +\mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i} ^ {t + 1} \right] \right] = p _ {\mathrm {a}} \mathrm {E} _ {\mathcal {C}} \left[ h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} \right] + (1 - p _ {\mathrm {a}}) h _ {i} ^ {t} = h _ {i} ^ {t} + k _ {i} ^ {t + 1}.
+$$ + +Thus, we can get + +$$ +\begin{array}{l} \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] \right] \\ = \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} - \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ g ^ {t + 1} - h ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2}. \\ \end{array} +$$ + +Due to the independence of the compressors, we can use Lemma 1 with $r_i = g_i^t - h_i^t$ and $s_i = p_{\mathrm{a}}\mathcal{C}_i\left(\frac{1}{p_{\mathrm{a}}} k_i^{t + 1} - \frac{a}{p_{\mathrm{a}}}\left(g_i^t -h_i^t\right)\right) - k_i^{t + 1}$ , and obtain + +$$ +\begin{array}{l} \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] \right] \\ \leq \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {\mathcal {C}} \left[ \left\| p _ {\mathrm {a}} \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - k _ {i} ^ {t + 1} - \mathrm {E} _ {\mathcal {C}} \left[ p _ {\mathrm {a}} \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - k _ {i} ^ {t + 1} \right] \right\| ^ {2} \right] \\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \operatorname {E} _ {\mathcal {C}} \left[ p _ {\mathrm {a}} \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - k _ {i} ^ {t + 1} \right] \right\| ^ {2} \\ + (1 - a) ^ {2} \left\| g ^
{t} - h ^ {t} \right\| ^ {2} \\ = \frac {p _ {\mathrm {a}}}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {\mathcal {C}} \left[ \left\| \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - \left(\frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) \right\| ^ {2} \right] \\ + \frac {a ^ {2} \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2}. \\ \end{array} +$$ + +From Assumption 7, we have + +$$ +\begin{array}{l} \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] \right] \\ \leq \frac {\omega p _ {\mathrm {a}}}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| \frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) \right\| ^ {2} + \frac {a ^ {2} \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \\ = \frac {\omega}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} - a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) \right\| ^ {2} + \frac {a ^ {2} \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \\ \stackrel {(1 5)} {\leq} \frac {2 \omega}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} + \frac {a ^ {2} ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} 
\left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2}. \\ \end{array} +$$ + +The second inequality can be proved almost in the same way: + +$$ +\begin{array}{l} \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \right] \\ = \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} - \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] + \left\| \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right] \right] \right\| ^ {2} \\ = \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} - g _ {i} ^ {t} + a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) + h _ {i} ^ {t} \right\| ^ {2} \right] \right] + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ = p _ {\mathrm {a}} \mathrm {E} _ {\mathcal {C}} \left[ \left\| \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - \frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} + a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) \right\| ^ {2} \right] \\ + a ^ {2} \left(1 - p _ {\mathrm {a}}\right) \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ \stackrel {(1 6)} {=} p _ {\mathrm {a}} \mathrm {E} _ {\mathcal {C}} \left[ \left\| \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - \left(\frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \frac {a}{p _ 
{\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) \right\| ^ {2} \right] \\ + a ^ {2} \frac {(1 - p _ {\mathrm {a}}) ^ {2}}{p _ {\mathrm {a}}} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + a ^ {2} \left(1 - p _ {\mathrm {a}}\right) \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ \leq \frac {\omega}{p _ {\mathrm {a}}} \left\| k _ {i} ^ {t + 1} - a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) \right\| ^ {2} \\ + \frac {a ^ {2} (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ \stackrel {(1 5)} {\leq} \frac {2 \omega}{p _ {\mathrm {a}}} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} + \frac {a ^ {2} (2 \omega + 1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2}. \\ \end{array} +$$ + +Lemma 6. 
Suppose that Assumptions 2, 7, and 8 hold and let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , then + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right]. \\ \end{array} +$$ + +Proof. 
Due to Lemma 2 and the update step from Line 5 in Algorithm 1, we have + +$$ +\begin{array}{l} \mathrm {E} _ {t + 1} \left[ f (x ^ {t + 1}) \right] \\ \leq \operatorname {E} _ {t + 1} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {\gamma}{2} \left\| g ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ = \operatorname {E} _ {t + 1} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {\gamma}{2} \left\| g ^ {t} - h ^ {t} + h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ \stackrel {(16)} {\leq} \operatorname {E} _ {t + 1} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \gamma \left(\left\| g ^ {t} - h ^ {t} \right\| ^ {2} + \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}\right) \right]. \\ \end{array} +$$ + +Let us fix some constants $\kappa, \eta \in [0, \infty)$ that we will define later.
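As an aside, the per-node recursion (23), which is combined with the descent inequality in this proof, can be sanity-checked numerically. The sketch below is not part of the paper's code: the dimensions, probabilities, and vectors are arbitrary illustrative values; the RandK compressor follows Definition 5, and the updates of $g_i^{t+1}$ and $h_i^{t+1}$ under node availability are transcribed from the conditional expectations computed in the proof of Lemma 5.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 20, 5
omega = d / K - 1              # RandK satisfies C ∈ U(d/K - 1) by Theorem 6
p_a = 0.7                      # availability probability (arbitrary here)
a = p_a / (2 * omega + 1)      # the choice of a from Lemma 6

g = rng.normal(size=d)         # current g_i^t
h = rng.normal(size=d)         # current h_i^t
k = rng.normal(size=d)         # plays the role of k_i^{t+1}

def rand_k(x):
    """RandK compressor: keep K random coordinates, scale them by d/K."""
    idx = rng.choice(d, size=K, replace=False)
    out = np.zeros(d)
    out[idx] = (d / K) * x[idx]
    return out

trials = 50_000
target = k / p_a - (a / p_a) * (g - h)
acc = 0.0
for _ in range(trials):
    if rng.random() < p_a:     # node i is available at step t+1
        diff = (g - h) + rand_k(target) - k / p_a
    else:                      # node i is idle: g_i, h_i stay unchanged
        diff = g - h
    acc += diff @ diff
emp = acc / trials             # estimate of E[||g_i^{t+1} - h_i^{t+1}||^2]

bound = (2 * omega / p_a) * (k @ k) + (
    a**2 * (2 * omega + 1 - p_a) / p_a + (1 - a) ** 2
) * ((g - h) @ (g - h))
print(emp <= bound)            # expected: True, since (23) holds
```

The Monte-Carlo estimate sits well below the right-hand side of (23), which is expected because the Young-type step (15) introduces slack.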
Combining the last inequality, bounds (22), (23) and using the law of total expectation, we get + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] \\ + \kappa \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \eta \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ = \operatorname {E} \left[ \operatorname {E} _ {t + 1} \left[ f \left(x ^ {t + 1}\right) \right] \right] \\ + \kappa \mathrm {E} \left[ \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] \right] \right] + \eta \mathrm {E} \left[ \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \right] \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \gamma \left(\left\| g ^ {t} - h ^ {t} \right\| ^ {2} + \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}\right) \right] \\ + \kappa \mathrm {E} \left[ \frac {2 \omega}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} + \frac {a ^ {2} ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] \\ + \eta \mathrm {E} \left[ \frac {2 \omega}{n p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} + \left(\frac {a ^ {2} (2 \omega + 1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - a) ^ {2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ = \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| 
\nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \gamma \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\gamma + \kappa (1 - a) ^ {2}\right) \operatorname {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] \\ + \left(\frac {\kappa a ^ {2} ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} + \eta \left(\frac {a ^ {2} (2 \omega + 1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - a) ^ {2}\right)\right) \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \left(\frac {2 \kappa \omega}{n p _ {\mathrm {a}}} + \frac {2 \eta \omega}{p _ {\mathrm {a}}}\right) \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right]. \\ \end{array} +$$ + +Now, by taking $\kappa = \frac{\gamma}{a}$ , we can see that $\gamma + \kappa (1 - a)^2 \leq \kappa$ , and thus + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] \\ + \frac {\gamma}{a} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \eta \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \gamma \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {\gamma}{a} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] \\ + \left(\frac {\gamma a ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} + \eta \left(\frac {a ^ {2} (2 \omega + 1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - a) ^ {2}\right)\right) \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ 
{t} \right\| ^ {2} \right] \\ + \left(\frac {2 \gamma \omega}{a n p _ {\mathrm {a}}} + \frac {2 \eta \omega}{p _ {\mathrm {a}}}\right) \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right]. \\ \end{array} +$$ + +Next, by taking $\eta = \frac{\gamma((2\omega + 1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2}$ and considering the choice of $a$ , one can show that $\left(\frac{\gamma a((2\omega + 1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2} + \eta \left(\frac{a^2(2\omega + 1 - p_{\mathrm{a}})}{p_{\mathrm{a}}} + (1 - a)^2\right)\right) \leq \eta$ . Thus + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \left(\frac {2 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \omega}{n p _ {\mathrm {a}} ^ {3}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| k _ {i} ^ {t + 1} \| ^ {2} \right].
\\ \end{array} +$$ + +Considering that $p_{\mathrm{aa}} \geq 0$ , we can simplify the last term and get + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ \leq \mathrm {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {4 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right]. \\ \end{array} +$$ + +![](images/e664b9a8a50817a566751917108735472c90481f119bd4e7d763ad6c49965374.jpg) + +# E.3 Proof for DASHA-PP + +Lemma 7. Suppose that Assumptions 3 and 8 hold. For $h_i^{t+1}$ and $k_i^{t+1}$ from Algorithm 1 (DASHA-PP) we have + +1.
+ +$$ +\begin{array}{l} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {2 b ^ {2} (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \\ \end{array} +$$ + +2. + +$$ +\begin{array}{l} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ \leq \frac {2 (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} L _ {i} ^ {2} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \left(\frac {2 b ^ {2} (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}, \quad \forall i \in [ n ]. \\ \end{array} +$$ + +3. + +$$ +\left\| k _ {i} ^ {t + 1} \right\| ^ {2} \leq 2 L _ {i} ^ {2} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + 2 b ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}, \quad \forall i \in [ n ]. +$$ + +Proof. First, let us prove the bound for $\mathrm{E}_{p_{\mathrm{a}}}\left[\left\| h^{t + 1} - \nabla f(x^{t + 1})\right\| ^2\right]$ : + +$$ +\mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] = \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \operatorname {E} _ {p _ {\mathrm {a}}} \left[ h ^ {t + 1} \right] \right\| ^ {2} \right] + \left\| \operatorname {E} _ {p _ {\mathrm {a}}} \left[ h ^ {t + 1} \right] - \nabla f (x ^ {t + 1}) \right\| ^ {2}.
+$$ + +Using + +$$ +\mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i} ^ {t + 1} \right] = h _ {i} ^ {t} + \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) +$$ + +and (16), we have + +$$ +\begin{array}{l} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \\ = \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \operatorname {E} _ {p _ {\mathrm {a}}} \left[ h ^ {t + 1} \right] \right\| ^ {2} \right] + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \\ \end{array} +$$ + +We can use Lemma 1 with $r_i = h_i^t$ and $s_i = k_i^{t + 1}$ to obtain + +$$ +\begin{array}{l} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} - k _ {i} ^ {t + 1} \right\| ^ {2} + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \\ = \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b \left(h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})\right) \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \\ \end{array} +$$ + +$$ +\begin{array}{l} \stackrel {(1 5)} {\leq} \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + \frac {2 b ^ {2} (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ 
{t}) \right\| ^ {2} \\ \leq \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {2 b ^ {2} (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \\ \end{array} +$$ + +In the last inequality, we used Assumption 3. Now, we prove the second inequality: + +$$ +\begin{array}{l} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ = \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i} ^ {t + 1} - \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i} ^ {t + 1} \right] \right\| ^ {2} \right] + \left\| \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i} ^ {t + 1} \right] - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \\ = \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i} ^ {t + 1} - \left(h _ {i} ^ {t} + \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b (h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}))\right) \right\| ^ {2} \right] + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ = \frac {\left(1 - p _ {\mathrm {a}}\right) ^ {2}}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + \left(1 - p _ {\mathrm {a}}\right) \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b (h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})) \right\| ^ {2} + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ = \frac {(1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b (h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})) \right\| ^ {2} + (1 - b) ^ {2} \left\| h _ {i} ^ {t} -
\nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ \leq \frac {2 \left(1 - p _ {\mathrm {a}}\right)}{p _ {\mathrm {a}}} L _ {i} ^ {2} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \left(\frac {2 b ^ {2} \left(1 - p _ {\mathrm {a}}\right)}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}. \\ \end{array} +$$ + +Finally, the third inequality of the theorem follows from (15) and Assumption 3. + +![](images/b8b20338349a9a14d32d29f6a35ffb3c687104457d7c873eb3afa8e77b282704.jpg) + +Theorem 2. Suppose that Assumptions 1, 2, 3, 7 and 8 hold. Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , $b = \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , + +$$ +\gamma \leq \left(L + \left[ \frac {4 8 \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {1 6}{n p _ {\mathrm {a}} ^ {2}} \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \right] ^ {1 / 2} \widehat {L}\right) ^ {- 1}, +$$ + +and $g_{i}^{0} = h_{i}^{0} = \nabla f_{i}(x^{0})$ for all $i\in [n]$ in Algorithm 1 (DASHA-PP), then $\operatorname{E}\left[\left\| \nabla f(\widehat{x}^T)\right\| ^2\right]\leq \frac{2\Delta_0}{\gamma T}$ . + +Proof. Let us fix constants $\nu, \rho \in [0, \infty)$ that we will define later. 
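Before combining the lemmas, the third bound of Lemma 7, which controls $\left\| k_i^{t+1}\right\|^2$ in the derivation below, can be checked numerically. This is a minimal sketch, not the paper's code: the quadratic local function, the matrix `A`, the points, and the estimator `h_t` are arbitrary synthetic choices, and $k_i^{t+1}$ is formed as read off from the expectation $\mathrm{E}_{p_{\mathrm{a}}}[h_i^{t+1}]$ computed in the proof of Lemma 7.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 30
# Synthetic smooth local function f_i(x) = 0.5 * x^T A x with A ⪰ 0,
# so grad f_i(x) = A x and f_i is L_i-smooth with L_i = ||A||_2.
M = rng.normal(size=(d, d))
A = M.T @ M / d
L_i = np.linalg.norm(A, 2)           # spectral norm = smoothness constant

p_a = 0.7
b = p_a / (2 - p_a)                  # the momentum parameter from Theorem 2

x_t = rng.normal(size=d)
x_next = x_t + 0.05 * rng.normal(size=d)  # one (arbitrary) step of the method
h_t = rng.normal(size=d)                  # current local estimator h_i^t

grad_t, grad_next = A @ x_t, A @ x_next
# k_i^{t+1}, as read off from E_{p_a}[h_i^{t+1}] in the proof of Lemma 7
k = grad_next - grad_t - b * (h_t - grad_t)

lhs = k @ k
rhs = 2 * L_i**2 * ((x_next - x_t) @ (x_next - x_t)) \
    + 2 * b**2 * ((h_t - grad_t) @ (h_t - grad_t))
print(lhs <= rhs)   # expected: True; the bound follows from (15) and smoothness
```

The inequality holds deterministically here: it is Young's inequality (15) applied to $k_i^{t+1}$ together with $\left\|\nabla f_i(x^{t+1}) - \nabla f_i(x^t)\right\| \leq L_i \left\| x^{t+1} - x^t\right\|$.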
Considering Lemma 6, Lemma 7, and the law of total expectation, we obtain + +$$ +\operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] +$$ + +$$ +\begin{array}{l} + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ = \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma \left(\left(2 \omega + 1\right) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ \left. \right. 
+ \nu \mathrm {E} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right] + \rho \mathrm {E} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right]\right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ 2 \widehat {L} ^ {2} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + 2 b ^ {2} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {2 b ^ {2} \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2} \right] \\ + \rho \mathrm {E} \left[ \frac {2 (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} \widehat {L} ^ {2} \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \left(\frac {2 b ^ {2} (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \| ^ {2} \right]. 
\\ \end{array} +$$ + +After rearranging the terms, we get + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {8 \gamma \omega (2 \omega + 1) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} - \nu \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} - \rho \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \operatorname {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(\gamma + \nu (1 - b) ^ {2}\right) \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ \left. \right. 
+ \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \nu \frac {2 b ^ {2} \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 b ^ {2} (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right\| ^ {2} \right]. \\ \end{array} +$$ + +By taking $\nu = \frac{\gamma}{b}$ , one can show that $(\gamma + \nu (1 - b)^2) \leq \nu$ , and + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {8 \gamma \omega (2 \omega + 1) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} - \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}} - \rho \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ 
{\mathrm {a}}}\right) \operatorname {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ \left. \right. + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma b \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 b ^ {2} (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Note that $b = \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , thus + +$$ +\begin{array}{l} \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma b \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 b ^ {2} (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \\ \leq \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma b \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} + \rho (1 - b)\right). 
\\ \end{array} +$$ + +And if we take $\rho = \frac{8b\gamma\omega(2\omega + 1)}{np_{\mathrm{a}}^2} +\frac{2\gamma(p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2}$ , then + +$$ +\left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma b \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} + \rho (1 - b)\right) \leq \rho , +$$ + +and + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {8 \gamma \omega (2 \omega + 1) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} - \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}} \right. \\ \left. 
- \frac {1 6 b \gamma \omega (2 \omega + 1) (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {3}} - \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {3}}\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Let us simplify the last inequality. First, note that + +$$ +\frac {1 6 b \gamma \omega (2 \omega + 1) (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {3}} \leq \frac {1 6 \gamma \omega (2 \omega + 1) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}, +$$ + +due to $b \leq p_{\mathrm{a}}$ . Second, + +$$ +\frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}} \leq \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {3}}, +$$ + +due to $b \geq \frac{p_{\mathrm{a}}}{2}$ . 
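
The two facts just used, $b \leq p_{\mathrm{a}}$ and $b \geq \frac{p_{\mathrm{a}}}{2}$, as well as the earlier step $(\gamma + \nu (1 - b)^2) \leq \nu$ for $\nu = \gamma / b$, follow directly from the choice $b = \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ with $p_{\mathrm{a}} \in (0, 1]$: indeed, $b \leq p_{\mathrm{a}} \Leftrightarrow p_{\mathrm{a}} \leq 1$, $b \geq \frac{p_{\mathrm{a}}}{2} \Leftrightarrow p_{\mathrm{a}} \geq 0$, and the last inequality reduces to $b + (1 - b)^2 \leq 1$, i.e., $b \leq 1$. A minimal numerical sanity check (our own sketch, not part of the proof; the helper names `b_of` and `check` are hypothetical):

```python
# Illustrative check (not part of the proof) of the two bounds used above
# for b = p_a / (2 - p_a), and of the earlier step (gamma + nu (1-b)^2) <= nu
# with nu = gamma / b. The helpers `b_of` and `check` are our own names.

def b_of(p_a: float) -> float:
    """b = p_a / (2 - p_a), as chosen in Theorem 2."""
    return p_a / (2.0 - p_a)

def check(p_a: float, gamma: float = 0.1) -> bool:
    b = b_of(p_a)
    nu = gamma / b
    lower = p_a / 2 <= b + 1e-15                       # b >= p_a / 2
    upper = b <= p_a + 1e-15                           # b <= p_a (uses p_a <= 1)
    descent = gamma + nu * (1 - b) ** 2 <= nu + 1e-12  # (gamma + nu (1-b)^2) <= nu
    return lower and upper and descent

# Sweep p_a over (0, 1]; all three inequalities hold on the whole range.
assert all(check(k / 100.0) for k in range(1, 101))
print("all parameter checks passed")
```
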
All in all, we have + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left| \left| \nabla f (x ^ {t}) \right| \right| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {2 4 \gamma \omega (2 \omega + 1) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} - \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {3}}\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac 
{1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Using Lemma 4 and the assumption about $\gamma$ , we get + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. 
\\ \end{array}
+$$
+
+It is left to apply Lemma 3 with
+
+$$
+\begin{array}{l} \Psi ^ {t} = \frac {(2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {1}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {8 b \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ \end{array}
+$$
+
+to conclude the proof.
+
+# E.4 Proof for DASHA-PP-PAGE
+
+Let us denote
+
+$$
+k _ {i, 1} ^ {t + 1} := \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - \frac {b}{p _ {\text {page}}} \left(h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})\right),
+$$
+
+$$
+k _ {i, 2} ^ {t + 1} := \frac {1}{B} \sum_ {j \in I _ {i} ^ {t}} \left(\nabla f _ {i j} (x ^ {t + 1}) - \nabla f _ {i j} (x ^ {t})\right),
+$$
+
+$$
+h _ {i, 1} ^ {t + 1} := \left\{ \begin{array}{l l} h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i, 1} ^ {t + 1}, & i ^ {\text {th}} \text { node is participating}, \\ h _ {i} ^ {t}, & \text {otherwise}, \end{array} \right.
+$$
+
+$$
+h _ {i, 2} ^ {t + 1} := \left\{ \begin{array}{l l} h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1}, & i ^ {\text {th}} \text { node is participating}, \\ h _ {i} ^ {t}, & \text {otherwise}, \end{array} \right.
+$$
+
+$$
+h _ {1} ^ {t + 1} := \frac {1}{n} \sum_ {i = 1} ^ {n} h _ {i, 1} ^ {t + 1}, \quad \text {and} \quad h _ {2} ^ {t + 1} := \frac {1}{n} \sum_ {i = 1} ^ {n} h _ {i, 2} ^ {t + 1}.
+$$
+
+Note that
+
+$$
+h ^ {t + 1} = \left\{ \begin{array}{l l} h _ {1} ^ {t + 1}, & \text {with probability } p _ {\text {page}}, \\ h _ {2} ^ {t + 1}, & \text {with probability } 1 - p _ {\text {page}}. \end{array} \right.
+$$
+
+Lemma 8. Suppose that Assumptions 3, 4, and 8 hold. For $h_i^{t+1}$ and $k_i^{t+1}$ from Algorithm 1 (DASHA-PP-PAGE), we have
+
+1.
+
+$$
+\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\text {page}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right]\right] \\ \leq \left(\frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \frac {(1 - p _ {\text {page}}) L _ {\max } ^ {2}}{n p _ {\mathrm {a}} B}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\text {page}}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + \left(p _ {\text {page}} \left(1 - \frac {b}{p _ {\text {page}}}\right) ^ {2} + (1 - p _ {\text {page}})\right) \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \\ \end{array}
+$$
+
+2.
+
+$$
+\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\text {page}}} \left[\left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right)\right\| ^ {2} \right]\right]\right] \\ \leq \left(\frac {2 (1 - p _ {\mathrm {a}}) L _ {i} ^ {2}}{p _ {\mathrm {a}}} + \frac {(1 - p _ {\text {page}}) L _ {\max } ^ {2}}{p _ {\mathrm {a}} B}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \\ + \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}} p _ {\text {page}}} + p _ {\text {page}} \left(1 - \frac {b}{p _ {\text {page}}}\right) ^ {2} + (1 - p _ {\text {page}})\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}, \quad \forall i \in [ n ]. \\ \end{array}
+$$
+
+3.
+
+$$
+\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\text {page}}} \left[ \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right] \right] \\ \leq \left(2 L _ {i} ^ {2} + \frac {(1 - p _ {\text {page}}) L _ {\max } ^ {2}}{B}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {2 b ^ {2}}{p _ {\text {page}}} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}, \quad \forall i \in [ n ]. \\ \end{array}
+$$
+
+Proof. First, we prove the first inequality of the lemma:
+
+$$
+\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\text {page}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right]\right] \\ = p _ {\text {page}} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {1} ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \right] + (1 - p _ {\text {page}}) \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {2} ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \right]. \\ \end{array}
+$$
+
+Using
+
+$$
+\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i, 1} ^ {t + 1} \right] \right] \\ = p _ {\mathrm {a}} h _ {i} ^ {t} + \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - \frac {b}{p _ {\text {page}}} \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) + \left(1 - p _ {\mathrm {a}}\right) h _ {i} ^ {t} \\ = h _ {i} ^ {t} + \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - \frac {b}{p _ {\text {page}}} \left(h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})\right).
\\ \end{array} +$$ + +and + +$$ +\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {\mathrm {p} _ {\mathrm {a}}} \left[ h _ {i, 2} ^ {t + 1} \right] \right] = \\ = p _ {\mathrm {a}} h _ {i} ^ {t} + \mathrm {E} _ {B} \left[ \frac {1}{B} \sum_ {j \in I _ {i} ^ {t}} \left(\nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right)\right) \right] + (1 - p _ {\mathrm {a}}) h _ {i} ^ {t} \\ = h _ {i} ^ {t} + \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}), \\ \end{array} +$$ + +we obtain + +$$ +\begin{array}{l} \left. \right. \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\text {p a g e}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right]\right] \\ \stackrel {(1 6)} {=} p _ {\text {p a g e}} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {1} ^ {t + 1} - \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {1} ^ {t + 1} \right] \right\| ^ {2} \right] + (1 - p _ {\text {p a g e}}) \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {2} ^ {t + 1} - \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {2} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} + p _ {\text {p a g e}} \left\| \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {1} ^ {t + 1} \right] - \nabla f (x ^ {t + 1}) \right\| ^ {2} + \left(1 - p _ {\text {p a g e}}\right) \left\| \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {2} ^ {t + 1} \right] \right] - \nabla f (x ^ {t + 1}) \right\| ^ {2} \\ = p _ {\text {p a g e}} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {1} ^ {t + 1} - \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {1} ^ {t + 1} \right] \right\| ^ {2} \right] + (1 - p _ {\text {p a g e}}) \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {2} ^ {t + 1} - \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {2} ^ {t + 1} \right] \right] 
\right\| ^ {2} \right] \right. \\ + \left(p _ {\text {p a g e}} \left(1 - \frac {b}{p _ {\text {p a g e}}}\right) ^ {2} + \left(1 - p _ {\text {p a g e}}\right)\right) \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}. \tag {24} \\ \end{array} +$$ + +Next, we consider $\mathrm{E}_{p_{\mathrm{a}}}\left[\left\| h_1^{t + 1} - \mathrm{E}_{p_{\mathrm{a}}}\left[h_1^{t + 1}\right]\right\| ^2\right]$ . We can use Lemma 1 with $r_i = h_i^t$ and $s_i = k_{i,1}^{t + 1}$ to obtain + +$$ +\begin{array}{l} \left. \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {1} ^ {t + 1} - \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {1} ^ {t + 1} \right] \right\| ^ {2} \right] \right. \\ \leq \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left\| k _ {i, 1} ^ {t + 1} - k _ {i, 1} ^ {t + 1} \right\| ^ {2} + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| k _ {i, 1} ^ {t + 1} \right\| ^ {2} \\ = \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - \frac {b}{p _ {\text {p a g e}}} \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ \stackrel {(1 5)} {\leq} \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}. 
\\ \end{array} +$$ + +From Assumption 3, we have + +$$ +\begin{array}{l} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {1} ^ {t + 1} - \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {1} ^ {t + 1} \right] \right\| ^ {2} \right] \\ \leq \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}. \tag {25} \\ \end{array} +$$ + +Now, we prove the bound for $\mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[\left\| h_2^{t + 1} - \mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[h_2^{t + 1}\right]\right]\right\| ^2\right]$ . Considering that mini-batches in the algorithm are independent, we can use Lemma 1 with $r_i = h_i^t$ and $s_i = k_{i,2}^{t + 1}$ to obtain + +$$ +\begin{array}{l} \left. \right. 
\mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left\| h _ {2} ^ {t + 1} - \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {2} ^ {t + 1} \right]\right]\right\| ^ {2} \right]\right] \\ \leq \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {B} \left[ \left\| k _ {i, 2} ^ {t + 1} - \mathrm {E} _ {B} \left[ k _ {i, 2} ^ {t + 1} \right] \right\| ^ {2} \right] + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \mathrm {E} _ {B} \left[ k _ {i, 2} ^ {t + 1} \right] \right\| ^ {2} \\ = \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {B} \left[ \left\| \frac {1}{B} \sum_ {j \in I _ {i} ^ {t}} \left(\nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right)\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \right] \\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ = \frac {1}{n ^ {2} p _ {\mathrm {a}} B ^ {2}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {B} \left[ \sum_ {j \in I _ {i} ^ {t}} \left\| \left(\nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right)\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \right] \\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ = \frac {1}{n ^ {2} p _ {\mathrm {a}} B m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| \left(\nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right)\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} 
\\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ \leq \frac {1}{n ^ {2} p _ {\mathrm {a}} B m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| \nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}. \\ \end{array} +$$ + +Next, we use Assumptions 3 and 4 to get + +$$ +\left. \right. \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left\| h _ {2} ^ {t + 1} - \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {2} ^ {t + 1} \right]\right]\right\| ^ {2} \right]\right] \leq \left(\frac {L _ {\max } ^ {2}}{n p _ {\mathrm {a}} B} + \frac {(p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2}. \tag {26} +$$ + +Applying (25) and (26) into (24), we get + +$$ +\begin{array}{l} \left. \right. 
\mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\text {page}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right]\right] \\ \leq p _ {\text {page}} \left(\frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\text {page}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}\right) \\ + \left(1 - p _ {\text {page}}\right) \left(\frac {L _ {\max } ^ {2}}{n p _ {\mathrm {a}} B} + \frac {\left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \left(p _ {\text {page}} \left(1 - \frac {b}{p _ {\text {page}}}\right) ^ {2} + \left(1 - p _ {\text {page}}\right)\right) \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \\ \leq \left(\frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \frac {(1 - p _ {\text {page}}) L _ {\max } ^ {2}}{n p _ {\mathrm {a}} B}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \\ + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\text {page}}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \left(p _ {\text {page}} \left(1 - \frac {b}{p _ {\text {page}}}\right) ^ {2} + \left(1 - p _ {\text {page}}\right)\right) \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}. \\ \end{array}
+$$
+
+The proof of the second inequality almost repeats the previous one:
+
+$$
+\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\text {page}}} \left[ \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] \right] \\ = p _ {\text {page}} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i, 1} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] + (1 - p _ {\text {page}}) \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i, 2} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] \\ \stackrel {(16)} {=} p _ {\text {page}} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i, 1} ^ {t + 1} - \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i, 1} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] + (1 - p _ {\text {page}}) \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i, 2} ^ {t + 1} - \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i, 2} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] \\ + p _ {\text {page}} \left\| \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i, 1} ^ {t + 1} \right] \right] - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} + \left(1 - p _ {\text {page}}\right) \left\| \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i, 2} ^ {t + 1} \right] \right] - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \\ = p _ {\text {page}} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i, 1} ^ {t + 1} - \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i, 1} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] + (1 - p _ {\text {page}}) \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i, 2} ^ {t + 1} - \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i,
2} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] \\ + \left(p _ {\text {p a g e}} \left(1 - \frac {b}{p _ {\text {p a g e}}}\right) ^ {2} + (1 - p _ {\text {p a g e}})\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}. \tag {27} \\ \end{array} +$$ + +Let us consider $\mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[\left\| h_{i,1}^{t + 1} - \mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[h_{i,1}^{t + 1}\right]\right]\right\| ^2\right]\right]$ + +$$ +\begin{array}{l} \left. \right. \mathrm {E} _ {B} \left[ \mathrm {E} _ {\mathrm {p} _ {\mathrm {a}}} \left[\left\| h _ {i, 1} ^ {t + 1} - \mathrm {E} _ {B} \left[ \mathrm {E} _ {\mathrm {p} _ {\mathrm {a}}} \left[ h _ {i, 1} ^ {t + 1} \right]\right]\right\| ^ {2} \right]\right] \\ = \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i, 1} ^ {t + 1} - \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i, 1} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \\ = p _ {\mathrm {a}} \left\| h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i, 1} ^ {t + 1} - \left(h _ {i} ^ {t} + \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - \frac {b}{p _ {\text {p a g e}}} \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \\ \left. \right. 
+ \left(1-p_{\mathrm{a}}\right)\left\|h_i^t-\left(h_i^t+\nabla f_i(x^{t+1})-\nabla f_i(x^t)-\frac{b}{p_{\mathrm{page}}}\left(h_i^t-\nabla f_i(x^t)\right)\right)\right\|^2 \\ = \frac{\left(1-p_{\mathrm{a}}\right)^2}{p_{\mathrm{a}}}\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)-\frac{b}{p_{\mathrm{page}}}\left(h_i^t-\nabla f_i(x^t)\right)\right\|^2 \\ + \left(1-p_{\mathrm{a}}\right)\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)-\frac{b}{p_{\mathrm{page}}}\left(h_i^t-\nabla f_i(x^t)\right)\right\|^2 \\ = \frac{1-p_{\mathrm{a}}}{p_{\mathrm{a}}}\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)-\frac{b}{p_{\mathrm{page}}}\left(h_i^t-\nabla f_i(x^t)\right)\right\|^2. \\ \end{array}
+$$
+
+Considering (15) and Assumption 3, we obtain
+
+$$
+\begin{array}{l} \mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[\left\|h_{i,1}^{t+1}-\mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[h_{i,1}^{t+1}\right]\right]\right\|^2\right]\right] \\ \leq \frac{2(1-p_{\mathrm{a}})L_i^2}{p_{\mathrm{a}}}\left\|x^{t+1}-x^t\right\|^2 + \frac{2(1-p_{\mathrm{a}})b^2}{p_{\mathrm{a}}p_{\mathrm{page}}^2}\left\|h_i^t-\nabla f_i(x^t)\right\|^2. \tag{28} \\ \end{array}
+$$
+
+Next, we obtain the bound for $\mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[\left\|h_{i,2}^{t+1}-\mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[h_{i,2}^{t+1}\right]\right]\right\|^2\right]\right]$:
+
+$$
+\begin{array}{l} \mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[\left\|h_{i,2}^{t+1}-\mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[h_{i,2}^{t+1}\right]\right]\right\|^2\right]\right] \\ = p_{\mathrm{a}}\mathrm{E}_B\left[\left\|h_i^t+\frac{1}{p_{\mathrm{a}}}k_{i,2}^{t+1}-\left(h_i^t+\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] \\ + \left(1-p_{\mathrm{a}}\right)\mathrm{E}_B\left[\left\|h_i^t-\left(h_i^t+\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] \\ = p_{\mathrm{a}}\mathrm{E}_B\left[\left\|\frac{1}{p_{\mathrm{a}}}k_{i,2}^{t+1}-\left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] \\ + (1-p_{\mathrm{a}})\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right\|^2 \\ \stackrel{(16)}{=} \frac{1}{p_{\mathrm{a}}}\mathrm{E}_B\left[\left\|k_{i,2}^{t+1}-\left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] + \frac{(1-p_{\mathrm{a}})^2}{p_{\mathrm{a}}}\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right\|^2 \\ + \left(1-p_{\mathrm{a}}\right)\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right\|^2 \\ = \frac{1}{p_{\mathrm{a}}}\mathrm{E}_B\left[\left\|k_{i,2}^{t+1}-\left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] + \frac{1-p_{\mathrm{a}}}{p_{\mathrm{a}}}\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right\|^2 \\ \leq \frac{1}{p_{\mathrm{a}}}\mathrm{E}_B\left[\left\|k_{i,2}^{t+1}-\left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] + \frac{\left(1-p_{\mathrm{a}}\right)L_i^2}{p_{\mathrm{a}}}\left\|x^{t+1}-x^t\right\|^2, \tag{29} \\ \end{array}
+$$
+
+where we used Assumption 3. By plugging (28) and (29) into (27), we get
+
+$$
+\begin{array}{l} \mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[\mathrm{E}_{p_{\mathrm{page}}}\left[\left\|h_i^{t+1}-\nabla f_i(x^{t+1})\right\|^2\right]\right]\right] \\ \leq p_{\mathrm{page}}\left(\frac{2(1-p_{\mathrm{a}})L_i^2}{p_{\mathrm{a}}}\left\|x^{t+1}-x^t\right\|^2 + \frac{2(1-p_{\mathrm{a}})b^2}{p_{\mathrm{a}}p_{\mathrm{page}}^2}\left\|h_i^t-\nabla f_i(x^t)\right\|^2\right) \\ + \left(1-p_{\mathrm{page}}\right)\left(\frac{1}{p_{\mathrm{a}}}\mathrm{E}_B\left[\left\|k_{i,2}^{t+1}-\left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] + \frac{\left(1-p_{\mathrm{a}}\right)L_i^2}{p_{\mathrm{a}}}\left\|x^{t+1}-x^t\right\|^2\right)
\\ + \left(p_{\mathrm{page}}\left(1-\frac{b}{p_{\mathrm{page}}}\right)^2 + (1-p_{\mathrm{page}})\right)\left\|h_i^t-\nabla f_i(x^t)\right\|^2 \\ \leq \frac{2(1-p_{\mathrm{a}})L_i^2}{p_{\mathrm{a}}}\left\|x^{t+1}-x^t\right\|^2 + \frac{1-p_{\mathrm{page}}}{p_{\mathrm{a}}}\mathrm{E}_B\left[\left\|k_{i,2}^{t+1}-\left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] \\ + \left(\frac{2(1-p_{\mathrm{a}})b^2}{p_{\mathrm{a}}p_{\mathrm{page}}} + p_{\mathrm{page}}\left(1-\frac{b}{p_{\mathrm{page}}}\right)^2 + (1-p_{\mathrm{page}})\right)\left\|h_i^t-\nabla f_i(x^t)\right\|^2. \\ \end{array}
+$$
+
+From the independence of elements in the mini-batch, we obtain
+
+$$
+\begin{array}{l} \mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{a}}}\left[\mathrm{E}_{p_{\mathrm{page}}}\left[\left\|h_i^{t+1}-\nabla f_i(x^{t+1})\right\|^2\right]\right]\right] \\ \leq \frac{2(1-p_{\mathrm{a}})L_i^2}{p_{\mathrm{a}}}\left\|x^{t+1}-x^t\right\|^2 + \frac{1-p_{\mathrm{page}}}{p_{\mathrm{a}}}\mathrm{E}_B\left[\left\|\frac{1}{B}\sum_{j \in I_i^t}\left(\nabla f_{ij}(x^{t+1})-\nabla f_{ij}(x^t)\right) - \left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] \\ + \left(\frac{2(1-p_{\mathrm{a}})b^2}{p_{\mathrm{a}}p_{\mathrm{page}}} + p_{\mathrm{page}}\left(1-\frac{b}{p_{\mathrm{page}}}\right)^2 + (1-p_{\mathrm{page}})\right)\left\|h_i^t-\nabla f_i(x^t)\right\|^2 \\ = \frac{2(1-p_{\mathrm{a}})L_i^2}{p_{\mathrm{a}}}\left\|x^{t+1}-x^t\right\|^2 + \frac{1-p_{\mathrm{page}}}{p_{\mathrm{a}}B^2}\mathrm{E}_B\left[\sum_{j \in I_i^t}\left\|\left(\nabla f_{ij}(x^{t+1})-\nabla f_{ij}(x^t)\right) - \left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] \\ + \left(\frac{2(1-p_{\mathrm{a}})b^2}{p_{\mathrm{a}}p_{\mathrm{page}}} + p_{\mathrm{page}}\left(1-\frac{b}{p_{\mathrm{page}}}\right)^2 + (1-p_{\mathrm{page}})\right)\left\|h_i^t-\nabla f_i(x^t)\right\|^2 \\ = \frac{2(1-p_{\mathrm{a}})L_i^2}{p_{\mathrm{a}}}\left\|x^{t+1}-x^t\right\|^2 + \frac{1-p_{\mathrm{page}}}{m p_{\mathrm{a}} B}\sum_{j=1}^{m}\left\|\left(\nabla f_{ij}(x^{t+1})-\nabla f_{ij}(x^t)\right) - \left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2 \\ + \left(\frac{2(1-p_{\mathrm{a}})b^2}{p_{\mathrm{a}}p_{\mathrm{page}}} + p_{\mathrm{page}}\left(1-\frac{b}{p_{\mathrm{page}}}\right)^2 + (1-p_{\mathrm{page}})\right)\left\|h_i^t-\nabla f_i(x^t)\right\|^2 \\ \leq \frac{2(1-p_{\mathrm{a}})L_i^2}{p_{\mathrm{a}}}\left\|x^{t+1}-x^t\right\|^2 + \frac{1-p_{\mathrm{page}}}{m p_{\mathrm{a}} B}\sum_{j=1}^{m}\left\|\nabla f_{ij}(x^{t+1})-\nabla f_{ij}(x^t)\right\|^2 \\ + \left(\frac{2(1-p_{\mathrm{a}})b^2}{p_{\mathrm{a}}p_{\mathrm{page}}} + p_{\mathrm{page}}\left(1-\frac{b}{p_{\mathrm{page}}}\right)^2 + (1-p_{\mathrm{page}})\right)\left\|h_i^t-\nabla f_i(x^t)\right\|^2
\\ \leq \left(\frac{2(1-p_{\mathrm{a}})L_i^2}{p_{\mathrm{a}}} + \frac{(1-p_{\mathrm{page}})L_{\max}^2}{p_{\mathrm{a}}B}\right)\left\|x^{t+1}-x^t\right\|^2 \\ + \left(\frac{2(1-p_{\mathrm{a}})b^2}{p_{\mathrm{a}}p_{\mathrm{page}}} + p_{\mathrm{page}}\left(1-\frac{b}{p_{\mathrm{page}}}\right)^2 + (1-p_{\mathrm{page}})\right)\left\|h_i^t-\nabla f_i(x^t)\right\|^2, \\ \end{array}
+$$
+
+where we used Assumption 4. Finally, we prove the last inequality:
+
+$$
+\begin{array}{l} \mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{page}}}\left[\left\|k_i^{t+1}\right\|^2\right]\right] \\ = p_{\mathrm{page}}\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)-\frac{b}{p_{\mathrm{page}}}\left(h_i^t-\nabla f_i(x^t)\right)\right\|^2 \\ + \left(1-p_{\mathrm{page}}\right)\mathrm{E}_B\left[\left\|\frac{1}{B}\sum_{j \in I_i^t}\left(\nabla f_{ij}(x^{t+1})-\nabla f_{ij}(x^t)\right)\right\|^2\right] \\ \stackrel{(16)}{=} p_{\mathrm{page}}\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)-\frac{b}{p_{\mathrm{page}}}\left(h_i^t-\nabla f_i(x^t)\right)\right\|^2 \\ + \left(1-p_{\mathrm{page}}\right)\mathrm{E}_B\left[\left\|\frac{1}{B}\sum_{j \in I_i^t}\left(\nabla f_{ij}(x^{t+1})-\nabla f_{ij}(x^t)\right) - \left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] \\ + \left(1-p_{\mathrm{page}}\right)\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right\|^2 \\ \stackrel{(15)}{\leq} 2 p_{\mathrm{page}}\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right\|^2 + \frac{2b^2}{p_{\mathrm{page}}}\left\|h_i^t-\nabla f_i(x^t)\right\|^2 \\ + \left(1-p_{\mathrm{page}}\right)\mathrm{E}_B\left[\left\|\frac{1}{B}\sum_{j \in I_i^t}\left(\nabla f_{ij}(x^{t+1})-\nabla f_{ij}(x^t)\right) - \left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] \\ + \left(1-p_{\mathrm{page}}\right)\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right\|^2 \\ \leq 2\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right\|^2 + \frac{2b^2}{p_{\mathrm{page}}}\left\|h_i^t-\nabla f_i(x^t)\right\|^2 \\ + \left(1-p_{\mathrm{page}}\right)\mathrm{E}_B\left[\left\|\frac{1}{B}\sum_{j \in I_i^t}\left(\nabla f_{ij}(x^{t+1})-\nabla f_{ij}(x^t)\right) - \left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right]. \\ \end{array}
+$$
+
+Using the independence of elements in the mini-batch, we have
+
+$$
+\begin{array}{l} \mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{page}}}\left[\left\|k_i^{t+1}\right\|^2\right]\right] \\ \leq 2\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right\|^2 + \frac{2b^2}{p_{\mathrm{page}}}\left\|h_i^t-\nabla f_i(x^t)\right\|^2 \\ + \frac{1-p_{\mathrm{page}}}{B^2}\mathrm{E}_B\left[\sum_{j \in I_i^t}\left\|\left(\nabla f_{ij}(x^{t+1})-\nabla f_{ij}(x^t)\right) - \left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2\right] \\ = 2\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right\|^2 + \frac{2b^2}{p_{\mathrm{page}}}\left\|h_i^t-\nabla f_i(x^t)\right\|^2 \\ + \frac{1-p_{\mathrm{page}}}{Bm}\sum_{j=1}^{m}\left\|\left(\nabla f_{ij}(x^{t+1})-\nabla f_{ij}(x^t)\right) - \left(\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right)\right\|^2 \\ \leq 2\left\|\nabla f_i(x^{t+1})-\nabla f_i(x^t)\right\|^2 + \frac{2b^2}{p_{\mathrm{page}}}\left\|h_i^t-\nabla f_i(x^t)\right\|^2 \\ + \frac{1-p_{\mathrm{page}}}{Bm}\sum_{j=1}^{m}\left\|\nabla f_{ij}(x^{t+1})-\nabla f_{ij}(x^t)\right\|^2. \\ \end{array}
+$$
+
+It remains to apply Assumptions 3 and 4 to get
+
+$$
+\begin{array}{l} \mathrm{E}_B\left[\mathrm{E}_{p_{\mathrm{page}}}\left[\left\|k_i^{t+1}\right\|^2\right]\right] \\ \leq \left(2L_i^2 + \frac{(1-p_{\mathrm{page}})L_{\max}^2}{B}\right)\left\|x^{t+1}-x^t\right\|^2 + \frac{2b^2}{p_{\mathrm{page}}}\left\|h_i^t-\nabla f_i(x^t)\right\|^2. \\ \end{array}
+$$
+
+□
+
+Theorem 3. Suppose that Assumptions 1, 2, 3, 4, 7, and 8 hold. Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$, $b = \frac{p_{\mathrm{page}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$, probability $p_{\mathrm{page}} \in (0, 1]$,
+
+$$
+\gamma \leq \left(L + \left[\frac{48\omega(2\omega + 1)}{n p_{\mathrm{a}}^2}\left(\widehat{L}^2 + \frac{(1-p_{\mathrm{page}})L_{\max}^2}{B}\right) + \frac{16}{n p_{\mathrm{a}}^2 p_{\mathrm{page}}}\left(\left(1-\frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}}\right)\widehat{L}^2 + \frac{(1-p_{\mathrm{page}})L_{\max}^2}{B}\right)\right]^{1/2}\right)^{-1},
+$$
+
+and $g_{i}^{0} = h_{i}^{0} = \nabla f_{i}(x^{0})$ for all $i\in [n]$ in Algorithm 1 (DASHA-PP-PAGE). Then $\operatorname{E}\left[\left\| \nabla f(\widehat{x}^T)\right\|^2\right]\leq \frac{2\Delta_0}{\gamma T}$.
+
+Proof. Let us fix constants $\nu, \rho \in [0, \infty)$ that we will define later.
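Throughout the proof, two elementary properties of the choice $b = \frac{p_{\mathrm{page}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ are invoked without derivation ("one can show that" and "we ensure that" below). For completeness, here is a direct verification sketch, using only $b \leq p_{\mathrm{page}}$ (which holds since $p_{\mathrm{a}} \leq 1$) and the defining identity $(2 - p_{\mathrm{a}}) b = p_{\mathrm{a}} p_{\mathrm{page}}$:

```latex
% Since b <= p_page, expanding the square gives
\[
p_{\mathrm{page}}\left(1 - \frac{b}{p_{\mathrm{page}}}\right)^{2} + (1 - p_{\mathrm{page}})
= 1 - 2b + \frac{b^{2}}{p_{\mathrm{page}}}
\leq 1 - 2b + b = 1 - b.
\]
% Moreover, (2 - p_a) b = p_a p_page implies (2 - p_a) b^2 / (p_a p_page) = b, hence
\[
\frac{2(1 - p_{\mathrm{a}})b^{2}}{p_{\mathrm{a}} p_{\mathrm{page}}}
+ p_{\mathrm{page}}\left(1 - \frac{b}{p_{\mathrm{page}}}\right)^{2} + (1 - p_{\mathrm{page}})
= 1 - 2b + \frac{(2 - p_{\mathrm{a}})b^{2}}{p_{\mathrm{a}} p_{\mathrm{page}}}
= 1 - b.
\]
```

The first bound is used when choosing $\nu$, and the second when choosing $\rho$.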
Considering Lemma 6, Lemma 8, and the law of total expectation, we obtain + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \gamma \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ = \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + 
\gamma \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ \left. \right. + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {p a g e}}} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right]\right]\right] \\ \left. \right. + \nu \mathrm {E} \left[ \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\text {p a g e}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right]\right]\right] \\ \left. \right. + \rho \mathrm {E} \left[ \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {p a g e}}} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right)\right\| ^ {2} \right]\right]\right]\right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \left(2 \widehat {L} ^ {2} + \frac {(1 - p _ {\mathrm {p a g e}}) L _ {\mathrm {m a x}} ^ {2}}{B}\right) \left\| x ^ 
{t + 1} - x ^ {t} \right\| ^ {2} + \frac {2 b ^ {2}}{p _ {\mathrm {p a g e}}} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left(\left(\frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{n p _ {\mathrm {a}} B}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \left(p _ {\text {p a g e}} \left(1 - \frac {b}{p _ {\text {p a g e}}}\right) ^ {2} + (1 - p _ {\text {p a g e}})\right) \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}\right) \\ + \rho \mathrm {E} \left(\left(\frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{p _ {\mathrm {a}} B}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. 
+ \left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\mathrm {a}} p _ {\text {p a g e}}} + p _ {\text {p a g e}} \left(1 - \frac {b}{p _ {\text {p a g e}}}\right) ^ {2} + (1 - p _ {\text {p a g e}})\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}\right) \\ \end{array} +$$ + +After rearranging the terms, we get + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(2 \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right) \right. \\ \end{array} +$$ + +$$ +\begin{array}{l} \left. 
- \nu \left(\frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{n p _ {\mathrm {a}} B}\right) - \rho \left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) \widehat {L} ^ {2}}{p _ {\mathrm {a}}} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{p _ {\mathrm {a}} B}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ \left. + \left(\gamma + \nu \left(p _ {\text {p a g e}} \left(1 - \frac {b}{p _ {\text {p a g e}}}\right) ^ {2} + (1 - p _ {\text {p a g e}})\right)\right) E \left[ \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2} \right] \right. \\ + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} + \frac {2 \nu \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}}\right) \\ \left. + \rho \left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\mathrm {a}} p _ {\text {p a g e}}} + p _ {\text {p a g e}} \left(1 - \frac {b}{p _ {\text {p a g e}}}\right) ^ {2} + (1 - p _ {\text {p a g e}})\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Due to $b = \frac{p_{\mathrm{page}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}} \leq p_{\mathrm{page}}$ , one can show that $\left(p_{\mathrm{page}} \left(1 - \frac{b}{p_{\mathrm{page}}}\right)^2 + (1 - p_{\mathrm{page}})\right) \leq 1 - b$ . Thus, if we take $\nu = \frac{\gamma}{b}$ , then + +$$ +\left. 
\right.\left(\gamma + \nu \left(p _ {\text {p a g e}} \left(1 - \frac {b}{p _ {\text {p a g e}}}\right) ^ {2} + (1 - p _ {\text {p a g e}})\right)\right) \leq \gamma + \nu (1 - b) = \nu , +$$ + +therefore + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(2 \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right) \right. \\ \left. 
- \frac {\gamma}{b} \left(\frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{n p _ {\mathrm {a}} B}\right) - \rho \left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) \widehat {L} ^ {2}}{p _ {\mathrm {a}}} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{p _ {\mathrm {a}} B}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}}\right) \\ \left. + \rho \left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {p a g e}}} + p _ {\mathrm {p a g e}} \left(1 - \frac {b}{p _ {\mathrm {p a g e}}}\right) ^ {2} + (1 - p _ {\mathrm {p a g e}})\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Next, with the choice of $b = \frac{p_{\mathrm{page}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , we ensure that + +$$ +\left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\mathrm {a}} p _ {\text {p a g e}}} + p _ {\text {p a g e}} \left(1 - \frac {b}{p _ {\text {p a g e}}}\right) ^ {2} + (1 - p _ {\text {p a g e}})\right) \leq 1 - b. +$$ + +If we take $\rho = \frac{8b\gamma\omega(2\omega + 1)}{np_{\mathrm{a}}^2p_{\mathrm{page}}} +\frac{2\gamma(p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2p_{\mathrm{page}}}$ , then + +$$ +\left. 
\right.\left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} + \rho \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}} p _ {\text {p a g e}}} + p _ {\text {p a g e}} \left(1 - \frac {b}{p _ {\text {p a g e}}}\right) ^ {2} + (1 - p _ {\text {p a g e}})\right)\right) \leq \rho , +$$ + +therefore + +$$ +\mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] +$$ + +$$ +\begin{array}{l} + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(2 \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right) 
\right. \\ - \frac {\gamma}{b n p _ {\mathrm {a}}} \left(2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right) \\ \left. - \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3} p _ {\text {p a g e}}} + \frac {2 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}}\right) \left(2 \left(1 - p _ {\mathrm {a}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Let us simplify the inequality. First, due to $b \geq \frac{p_{\mathrm{page}} p_{\mathrm{a}}}{2}$ , we have + +$$ +\frac {\gamma}{b n p _ {\mathrm {a}}} \left(2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\mathrm {p a g e}}) L _ {\mathrm {m a x}} ^ {2}}{B}\right) \leq \frac {4 \gamma}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\mathrm {p a g e}}) L _ {\mathrm {m a x}} ^ {2}}{B}\right). 
+$$ + +Second, due to $b \leq p_{\mathrm{a}}p_{\mathrm{page}}$ and $p_{\mathrm{aa}} \leq p_{\mathrm{a}}^2$ , we get + +$$ +\begin{array}{l} \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3} p _ {\text {p a g e}}} + \frac {2 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}}\right) \left(2 \left(1 - p _ {\mathrm {a}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right) \\ \leq \left(\frac {8 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}}\right) \left(2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\mathrm {p a g e}}) L _ {\max } ^ {2}}{B}\right) \\ \leq \frac {1 6 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right) \\ + \frac {4 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right) \\ \leq \frac {1 6 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right) \\ + \frac {4 \gamma}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max} ^ {2}}{B}\right). 
\\ \end{array}
$$

Combining all bounds together, we obtain the following simplified inequality:

$$
\begin{array}{l} \mathrm{E}\left[f(x^{t+1})\right] + \frac{\gamma (2 \omega + 1)}{p_{\mathrm{a}}} \mathrm{E}\left[\|g^{t+1} - h^{t+1}\|^{2}\right] + \frac{\gamma ((2 \omega + 1) p_{\mathrm{a}} - p_{\mathrm{aa}})}{n p_{\mathrm{a}}^{2}} \mathrm{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \|g_{i}^{t+1} - h_{i}^{t+1}\|^{2}\right] \\ + \frac{\gamma}{b} \mathrm{E}\left[\left\|h^{t+1} - \nabla f(x^{t+1})\right\|^{2}\right] + \left(\frac{8 b \gamma \omega (2 \omega + 1)}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}} + \frac{2 \gamma \left(p_{\mathrm{a}} - p_{\mathrm{aa}}\right)}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}}\right) \mathrm{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \left\|h_{i}^{t+1} - \nabla f_{i}(x^{t+1})\right\|^{2}\right] \\ \leq \operatorname{E}[f(x^{t})] - \frac{\gamma}{2} \operatorname{E}\left[\left\|\nabla f(x^{t})\right\|^{2}\right] \\ + \frac{\gamma (2 \omega + 1)}{p_{\mathrm{a}}} \mathrm{E}\left[\|g^{t} - h^{t}\|^{2}\right] + \frac{\gamma ((2 \omega + 1) p_{\mathrm{a}} - p_{\mathrm{aa}})}{n p_{\mathrm{a}}^{2}} \mathrm{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \|g_{i}^{t} - h_{i}^{t}\|^{2}\right] \\ \end{array}
$$

$$
\begin{array}{l} - \left(\frac{1}{2 \gamma} - \frac{L}{2} - \frac{24 \gamma \omega (2 \omega + 1)}{n p_{\mathrm{a}}^{2}} \left(\widehat{L}^{2} + \frac{(1 - p_{\mathrm{page}}) L_{\max}^{2}}{B}\right) \right. \\ \left.
- \frac{8 \gamma}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}} \left(\left(1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}}\right) \widehat{L}^{2} + \frac{(1 - p_{\mathrm{page}}) L_{\max}^{2}}{B}\right)\right) \mathrm{E}\left[\|x^{t+1} - x^{t}\|^{2}\right] \\ + \frac{\gamma}{b} \mathrm{E}\left[\left\|h^{t} - \nabla f(x^{t})\right\|^{2}\right] + \left(\frac{8 b \gamma \omega (2 \omega + 1)}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}} + \frac{2 \gamma \left(p_{\mathrm{a}} - p_{\mathrm{aa}}\right)}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}}\right) \mathrm{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \left\|h_{i}^{t} - \nabla f_{i}(x^{t})\right\|^{2}\right]. \\ \end{array}
$$

Using Lemma 4 and the assumption on $\gamma$, we get

$$
\begin{array}{l} \operatorname{E}\left[f(x^{t+1})\right] + \frac{\gamma (2 \omega + 1)}{p_{\mathrm{a}}} \operatorname{E}\left[\|g^{t+1} - h^{t+1}\|^{2}\right] + \frac{\gamma ((2 \omega + 1) p_{\mathrm{a}} - p_{\mathrm{aa}})}{n p_{\mathrm{a}}^{2}} \operatorname{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \|g_{i}^{t+1} - h_{i}^{t+1}\|^{2}\right] \\ + \frac{\gamma}{b} \mathrm{E}\left[\left\|h^{t+1} - \nabla f(x^{t+1})\right\|^{2}\right] + \left(\frac{8 b \gamma \omega (2 \omega + 1)}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}} + \frac{2 \gamma \left(p_{\mathrm{a}} - p_{\mathrm{aa}}\right)}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}}\right) \mathrm{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \left\|h_{i}^{t+1} - \nabla f_{i}(x^{t+1})\right\|^{2}\right] \\ \leq \operatorname{E}[f(x^{t})] - \frac{\gamma}{2} \operatorname{E}\left[\|\nabla f(x^{t})\|^{2}\right] \\ + \frac{\gamma (2 \omega + 1)}{p_{\mathrm{a}}} \mathrm{E}\left[\left\|g^{t} - h^{t}\right\|^{2}\right] + \frac{\gamma ((2 \omega + 1) p_{\mathrm
{a}} - p_{\mathrm{aa}})}{n p_{\mathrm{a}}^{2}} \mathrm{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \left\|g_{i}^{t} - h_{i}^{t}\right\|^{2}\right] \\ + \frac{\gamma}{b} \mathrm{E}\left[\left\|h^{t} - \nabla f(x^{t})\right\|^{2}\right] + \left(\frac{8 b \gamma \omega (2 \omega + 1)}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}} + \frac{2 \gamma \left(p_{\mathrm{a}} - p_{\mathrm{aa}}\right)}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}}\right) \mathrm{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \left\|h_{i}^{t} - \nabla f_{i}(x^{t})\right\|^{2}\right]. \\ \end{array}
$$

It is left to apply Lemma 3 with

$$
\begin{array}{l} \Psi^{t} = \frac{(2 \omega + 1)}{p_{\mathrm{a}}} \mathrm{E}\left[\left\|g^{t} - h^{t}\right\|^{2}\right] + \frac{\left((2 \omega + 1) p_{\mathrm{a}} - p_{\mathrm{aa}}\right)}{n p_{\mathrm{a}}^{2}} \mathrm{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \left\|g_{i}^{t} - h_{i}^{t}\right\|^{2}\right] \\ + \frac{1}{b} \mathrm{E}\left[\left\|h^{t} - \nabla f(x^{t})\right\|^{2}\right] + \left(\frac{8 b \omega (2 \omega + 1)}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}} + \frac{2 \left(p_{\mathrm{a}} - p_{\mathrm{aa}}\right)}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}}\right) \mathrm{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \left\|h_{i}^{t} - \nabla f_{i}(x^{t})\right\|^{2}\right] \\ \end{array}
$$

to conclude the proof.

Corollary 1. Let the assumptions from Theorem 3 hold and $p_{\mathrm{page}} = \frac{B}{m + B}$.
Then DASHA-PP-PAGE needs

$$
T := \mathcal{O}\left(\frac{\Delta_{0}}{\varepsilon}\left[L + \frac{\omega}{p_{\mathrm{a}} \sqrt{n}} \left(\widehat{L} + \frac{L_{\max}}{\sqrt{B}}\right) + \frac{1}{p_{\mathrm{a}}} \sqrt{\frac{m}{n}} \left(\frac{\mathbb{1}_{p_{\mathrm{a}}} \widehat{L}}{\sqrt{B}} + \frac{L_{\max}}{B}\right)\right]\right) \tag{10}
$$

communication rounds to get an $\varepsilon$-solution, and the expected number of gradient calculations per node equals $\mathcal{O}(m + BT)$.

Proof. In view of Theorem 3, it is enough to do

$$
T := \mathcal{O}\left(\frac{\Delta_{0}}{\varepsilon}\left[L + \sqrt{\frac{\omega^{2}}{n p_{\mathrm{a}}^{2}} \left(\widehat{L}^{2} + \frac{(1 - p_{\mathrm{page}}) L_{\max}^{2}}{B}\right) + \frac{1}{n p_{\mathrm{a}}^{2} p_{\mathrm{page}}} \left(\left(1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}}\right) \widehat{L}^{2} + \frac{(1 - p_{\mathrm{page}}) L_{\max}^{2}}{B}\right)}\right]\right)
$$

steps to get an $\varepsilon$-solution. Using the choice of $p_{\mathrm{page}}$ and the definition of $\mathbb{1}_{p_{\mathrm{a}}}$, we can get (10).

Note that the expected number of gradient calculations at each communication round equals $p_{\mathrm{page}} m + (1 - p_{\mathrm{page}}) B = \frac{2mB}{m + B} \leq 2B$.

Corollary 2. Suppose that the assumptions of Corollary 1 hold, $B \leq \min \left\{\frac{1}{p_{\mathrm{a}}}\sqrt{\frac{m}{n}},\frac{L_{\max}^{2}}{\mathbb{1}_{p_{\mathrm{a}}}^{2}\widehat{L}^{2}}\right\}^{8}$, and we use the unbiased compressor RandK with $K = \Theta \left(\frac{Bd}{\sqrt{m}}\right)$.
Then the communication complexity of Algorithm 1 is $\mathcal{O}\left(d + \frac{L_{\max}\Delta_0 d}{p_{\mathrm{a}}\varepsilon\sqrt{n}}\right)$, and the expected number of gradient calculations per node equals $\mathcal{O}\left(m + \frac{L_{\max}\Delta_0\sqrt{m}}{p_{\mathrm{a}}\varepsilon\sqrt{n}}\right)$.

Proof. The communication complexity equals

$$
\mathcal{O}(d + KT) = \mathcal{O}\left(d + \frac{\Delta_{0}}{\varepsilon}\left[KL + K \frac{\omega}{p_{\mathrm{a}} \sqrt{n}} \left(\widehat{L} + \frac{L_{\max}}{\sqrt{B}}\right) + K \frac{1}{p_{\mathrm{a}}} \sqrt{\frac{m}{n}} \left(\frac{\mathbb{1}_{p_{\mathrm{a}}} \widehat{L}}{\sqrt{B}} + \frac{L_{\max}}{B}\right)\right]\right).
$$

Since $B \leq \frac{L_{\max}^2}{\mathbb{1}_{p_{\mathrm{a}}}^2 \widehat{L}^2}$, we have $\frac{\mathbb{1}_{p_{\mathrm{a}}} \widehat{L}}{\sqrt{B}} + \frac{L_{\max}}{B} \leq \frac{2L_{\max}}{B}$ and

$$
\mathcal{O}(d + KT) = \mathcal{O}\left(d + \frac{\Delta_{0}}{\varepsilon}\left[KL + K \frac{\omega}{p_{\mathrm{a}} \sqrt{n}} \left(\widehat{L} + \frac{L_{\max}}{\sqrt{B}}\right) + K \frac{1}{p_{\mathrm{a}}} \sqrt{\frac{m}{n}} \frac{L_{\max}}{B}\right]\right).
$$

Note that $K = \Theta\left(\frac{Bd}{\sqrt{m}}\right) = \mathcal{O}\left(\frac{d}{p_{\mathrm{a}}\sqrt{n}}\right)$ and $\omega + 1 = \frac{d}{K}$ due to Theorem 6, thus

$$
\begin{array}{l} \mathcal{O}(d + KT) = \mathcal{O}\left(d + \frac{\Delta_{0}}{\varepsilon}\left[\frac{d}{p_{\mathrm{a}} \sqrt{n}} L + \frac{d}{p_{\mathrm{a}} \sqrt{n}} \left(\widehat{L} + \frac{L_{\max}}{\sqrt{B}}\right) + \frac{d}{p_{\mathrm{a}} \sqrt{n}} L_{\max}\right]\right) \\ = \mathcal{O}\left(d + \frac{L_{\max} \Delta_{0} d}{p_{\mathrm{a}} \varepsilon \sqrt{n}}\right).
\\ \end{array}
$$

Using the same reasoning, the expected number of gradient calculations per node equals

$$
\begin{array}{l} \mathcal{O}(m + BT) = \mathcal{O}\left(m + \frac{\Delta_{0}}{\varepsilon}\left[BL + B \frac{\omega}{p_{\mathrm{a}} \sqrt{n}} \left(\widehat{L} + \frac{L_{\max}}{\sqrt{B}}\right) + B \frac{1}{p_{\mathrm{a}}} \sqrt{\frac{m}{n}} \left(\frac{\mathbb{1}_{p_{\mathrm{a}}} \widehat{L}}{\sqrt{B}} + \frac{L_{\max}}{B}\right)\right]\right) \\ = \mathcal{O}\left(m + \frac{\Delta_{0}}{\varepsilon}\left[BL + B \frac{d}{K p_{\mathrm{a}} \sqrt{n}} \left(\widehat{L} + \frac{L_{\max}}{\sqrt{B}}\right) + B \frac{1}{p_{\mathrm{a}}} \sqrt{\frac{m}{n}} \frac{L_{\max}}{B}\right]\right) \\ = \mathcal{O}\left(m + \frac{\Delta_{0}}{\varepsilon}\left[\frac{1}{p_{\mathrm{a}}} \sqrt{\frac{m}{n}} L + \frac{\sqrt{m}}{p_{\mathrm{a}} \sqrt{n}} \left(\widehat{L} + \frac{L_{\max}}{\sqrt{B}}\right) + \frac{1}{p_{\mathrm{a}}} \sqrt{\frac{m}{n}} L_{\max}\right]\right) \\ = \mathcal{O}\left(m + \frac{L_{\max} \Delta_{0} \sqrt{m}}{p_{\mathrm{a}} \varepsilon \sqrt{n}}\right). \\ \end{array}
$$

# E.5 Proof for DASHA-PP-FINITE-MVR

Lemma 9. Suppose that Assumptions 3, 4, and 8 hold. For $h_i^{t+1}$, $h_{ij}^{t+1}$ and $k_i^{t+1}$ from Algorithm 1 (DASHA-PP-FINITE-MVR) we have

1.
+ +$$ +\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \right] \\ \leq \left(\frac {2 L _ {\mathrm {m a x}} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + \frac {2 b ^ {2}}{n ^ {2} p _ {\mathrm {a}} B m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \\ \end{array} +$$ + +2. + +$$ +\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] \\ \leq \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) L _ {i} ^ {2}}{p _ {\mathrm {a}}}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \frac {2 b ^ {2}}{p _ {\mathrm {a}} B m} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} + \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}, \quad \forall i \in [ n ]. \\ \end{array} +$$ + +3. 
+ +$$ +\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] \\ \leq \frac {2 \left(1 - \frac {p _ {\mathrm {a}} B}{m}\right) L _ {\max } ^ {2}}{\frac {p _ {\mathrm {a}} B}{m}} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \left(\frac {2 \left(1 - \frac {p _ {\mathrm {a}} B}{m}\right) b ^ {2}}{\frac {p _ {\mathrm {a}} B}{m}} + (1 - b) ^ {2}\right) \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2}, \quad \forall i \in [ n ], \forall j \in [ m ]. \\ \end{array} +$$ + +4. + +$$ +\begin{array}{l} \mathrm {E} _ {B} \left[ \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ \leq \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 L _ {i} ^ {2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \frac {2 b ^ {2}}{B m} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} + 2 b ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}, \quad \forall i \in [ n ]. \\ \end{array} +$$ + +Proof. We start by proving the first inequality. Note that + +$$ +\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i} ^ {t + 1} \right] \right] \\ = p _ {\mathrm {a}} \left(h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} \mathrm {E} _ {B} \left[ h _ {i} ^ {t + 1} \right]\right) + (1 - p _ {\mathrm {a}}) h _ {i} ^ {t} \\ = h _ {i} ^ {t} + \frac {1}{m} \sum_ {j = 1} ^ {m} \frac {B}{m} \cdot \frac {m}{B} \left(\nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right)\right)\right) + \left(1 - \frac {B}{m}\right) \cdot 0 \\ = \nabla f _ {i} \left(x ^ {t + 1}\right) + (1 - b) \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right), \\ \end{array} +$$ + +thus + +$$ +\begin{array}{l} \left. \right. 
\mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right] \\ \stackrel {(1 6)} {=} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} [ h ^ {t + 1} ] \right] \right\| ^ {2} \right] \right] + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \\ \end{array} +$$ + +We can use Lemma 1 with $r_i = h_i^t$ and $s_i = k_i^{t+1}$ to obtain + +$$ +\begin{array}{l} \left. \right. \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right] \\ \leq \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {B} \left[ \left\| k _ {i} ^ {t + 1} - \mathrm {E} _ {B} \left[ k _ {i} ^ {t + 1} \right] \right\| ^ {2} \right] + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \mathrm {E} _ {B} \left[ k _ {i} ^ {t + 1} \right] \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \\ = \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {B} \left[ \left\| \frac {1}{m} \sum_ {j = 1} ^ {m} k _ {i j} ^ {t + 1} - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. 
\\ \end{array} +$$ + +Next, we again use Lemma 1 with $r_i = 0$ , $s_i = \nabla f_{ij}(x^{t + 1}) - \nabla f_{ij}(x^t) - b\left(h_{ij}^t -\nabla f_{ij}(x^t)\right)$ , $p_{\mathrm{a}} = \frac{B}{m}$ , and $p_{\mathrm{aa}} = \frac{B(B - 1)}{m(m - 1)}$ : + +$$ +\left. \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \right] \right. +$$ + +$$ +\begin{array}{l} \leq \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left(\frac {m - B}{B m (m - 1)} \sum_ {j = 1} ^ {m} \left\| \nabla f _ {i j} (x ^ {t + 1}) - \nabla f _ {i j} (x ^ {t}) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t})\right) \right\| ^ {2}\right) \\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b \left(h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \\ \leq \frac {1}{n ^ {2} p _ {\mathrm {a}} B m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| \nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b \left(h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \\ \end{array} +$$ + +$$ +\begin{array}{l} \stackrel {(1 5)} {\leq} \frac {2}{n ^ {2} p _ {\mathrm {a}} B m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| \nabla f _ {i j} (x ^ {t + 1}) - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} + \frac {2 b ^ {2}}{n ^ {2} p _ {\mathrm {a}} B m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} \\ 
+ \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \\ \end{array} +$$ + +Due to Assumptions 3 and 4, we have + +$$ +\begin{array}{l} \left. \right. \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left|\left| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right|\right| ^ {2} \right]\right] \\ \leq \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + \frac {2 b ^ {2}}{n ^ {2} p _ {\mathrm {a}} B m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \\ \end{array} +$$ + +Let us get the bound for the second inequality: + +$$ +\begin{array}{l} \left. \right. 
\operatorname {E} _ {B} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[\left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right)\right\| ^ {2} \right]\right] \\ \stackrel {(1 6)} {=} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i} ^ {t + 1} - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) + (1 - b) \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \right] \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ = p _ {\mathrm {a}} \mathrm {E} _ {B} \left[ \left\| h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) + (1 - b) \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \\ \left. + \left(1 - p _ {\mathrm {a}}\right) \left\| h _ {i} ^ {t} - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) + (1 - b) \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right. \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ \end{array} +$$ + +$$ +\begin{array}{l} \stackrel {(1 6)} {=} \frac {1}{p _ {\mathrm {a}}} \operatorname {E} _ {B} \left[ \left\| k _ {i} ^ {t + 1} - \operatorname {E} _ {B} \left[ k _ {i} ^ {t + 1} \right] \right\| ^ {2} \right] \\ + \frac {1 - p _ {\mathrm {a}}}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}. \\ \end{array} +$$ + +Let us use Lemma 1 with $r_i = 0$ , $s_i = \nabla f_{ij}(x^{t + 1}) - \nabla f_{ij}(x^t) - b(h_{ij}^t -\nabla f_{ij}(x^t))$ , $p_{\mathrm{a}} = \frac{B}{m}$ , and $p_{\mathrm{aa}} = \frac{B(B - 1)}{m(m - 1)}$ : + +$$ +\left. \right. 
\mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left|\left| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right)\right|\right| ^ {2} \right]\right] +$$ + +$$ +\begin{array}{l} \leq \frac {1}{p _ {\mathrm {a}}} \left(\frac {m - B}{B m (m - 1)} \sum_ {j = 1} ^ {m} \left\| \nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right)\right) \right\| ^ {2}\right) \\ + \frac {1 - p _ {\mathrm {a}}}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ \leq \frac {1}{p _ {\mathrm {a}} B m} \sum_ {j = 1} ^ {m} \left\| \nabla f _ {i j} (x ^ {t + 1}) - \nabla f _ {i j} (x ^ {t}) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t})\right) \right\| ^ {2} \\ + \frac {1 - p _ {\mathrm {a}}}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ \stackrel {(1 5)} {\leq} \frac {2}{p _ {\mathrm {a}} B m} \sum_ {j = 1} ^ {m} \left\| \nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} + \frac {2 \left(1 - p _ {\mathrm {a}}\right)}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ + \frac {2 b ^ {2}}{p _ {\mathrm {a}} B m} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} + \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ \leq \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{p _ 
{\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) L _ {i} ^ {2}}{p _ {\mathrm {a}}}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \frac {2 b ^ {2}}{p _ {\mathrm {a}} B m} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} + \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}, \\ \end{array} +$$ + +where we used Assumptions 3 and 4. We continue the proof by considering + +$$ +\mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right]: +$$ + +$$ +\begin{array}{l} \left. \right. \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left\| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right)\right\| ^ {2} \right]\right] \\ \stackrel {(1 6)} {=} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i j} ^ {t + 1} - \left(\nabla f _ {i j} \left(x ^ {t + 1}\right) + (1 - b) \left(h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \right] \\ + (1 - b) ^ {2} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} \\ = \frac {p _ {\mathrm {a}} B}{m} \mathrm {E} _ {B} \left[ \left\| h _ {i j} ^ {t} + \frac {m}{B p _ {\mathrm {a}}} \left(\nabla f _ {i j} (x ^ {t + 1}) - \nabla f _ {i j} (x ^ {t}) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t})\right)\right) - \left(\nabla f _ {i j} (x ^ {t + 1}) + (1 - b) (h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}))\right) \right\| ^ {2} \right] \\ \left. + \left(1 - \frac {p _ {\mathrm {a}} B}{m}\right) \left\| h _ {i j} ^ {t} - \left(\nabla f _ {i j} \left(x ^ {t + 1}\right) + (1 - b) \left(h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right. 
\\ + (1 - b) ^ {2} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} \\ = \frac {\left(1 - \frac {p _ {\mathrm {a}} B}{m}\right) ^ {2}}{\frac {p _ {\mathrm {a}} B}{m}} \left\| \nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + \left(1 - \frac {p _ {\mathrm {a}} B}{m}\right) \left\| \nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} \\ = \frac {\left(1 - \frac {p _ {\mathrm {a}} B}{m}\right)}{\frac {p _ {\mathrm {a}} B}{m}} \left\| \nabla f _ {i j} (x ^ {t + 1}) - \nabla f _ {i j} (x ^ {t}) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t})\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} \\ \stackrel {(1 5)} {\leq} \frac {2 \left(1 - \frac {p _ {\mathrm {a}} B}{m}\right)}{\frac {p _ {\mathrm {a}} B}{m}} \left\| \nabla f _ {i j} (x ^ {t + 1}) - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} + \left(\frac {2 \left(1 - \frac {p _ {\mathrm {a}} B}{m}\right) b ^ {2}}{\frac {p _ {\mathrm {a}} B}{m}} + (1 - b) ^ {2}\right) \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2}. 
\\ \end{array} +$$ + +It is left to consider Assumption 4: + +$$ +\begin{array}{l} \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i j} ^ {t + 1} - \nabla f _ {i j} (x ^ {t + 1}) \right\| ^ {2} \right] \right] \\ \leq \frac {2 \left(1 - \frac {p _ {\mathrm {a}} B}{m}\right) L _ {\max} ^ {2}}{\frac {p _ {\mathrm {a}} B}{m}} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \left(\frac {2 \left(1 - \frac {p _ {\mathrm {a}} B}{m}\right) b ^ {2}}{\frac {p _ {\mathrm {a}} B}{m}} + (1 - b) ^ {2}\right) \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2}. \\ \end{array} +$$ + +Finally, we obtain the bound for the last inequality of the lemma: + +$$ +\begin{array}{l} \mathrm {E} _ {B} \left[ \left| \left| k _ {i} ^ {t + 1} \right| \right| ^ {2} \right] \\ \stackrel {(1 6)} {=} \operatorname {E} _ {B} \left[ \left\| k _ {i} ^ {t + 1} - \operatorname {E} _ {B} \left[ k _ {i} ^ {t + 1} \right] \right\| ^ {2} \right] \\ + \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2}. 
\\ \end{array} +$$ + +Using Lemma 1, we get + +$$ +\begin{array}{l} \operatorname {E} _ {B} \left[ \left| \left| k _ {i} ^ {t + 1} \right| \right| ^ {2} \right] \\ \leq \frac {m - B}{B m (m - 1)} \sum_ {j = 1} ^ {m} \left\| \nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ \leq \frac {1}{B m} \sum_ {j = 1} ^ {m} \left\| \nabla f _ {i j} \left(x ^ {t + 1}\right) - \nabla f _ {i j} \left(x ^ {t}\right) - b \left(h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b (h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})) \right\| ^ {2} \\ \stackrel {(1 5)} {\leq} \frac {2}{B m} \sum_ {j = 1} ^ {m} \left\| \nabla f _ {i j} (x ^ {t + 1}) - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} + 2 \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ + \frac {2 b ^ {2}}{B m} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} + 2 b ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ \leq \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 L _ {i} ^ {2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \frac {2 b ^ {2}}{B m} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} + 2 b ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}, \\ \end{array} +$$ + +where we used Assumptions 3 and 4. + +Theorem 7. Suppose that Assumptions 1, 2, 3, 4, 7, and 8 hold. 
Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$, $b = \frac{\frac{p_{\mathrm{a}}B}{m}}{2 - \frac{p_{\mathrm{a}}B}{m}}$,

$$
\gamma \leq \left(L + \sqrt{\frac{148 \omega (2 \omega + 1)}{n p_{\mathrm{a}}^{2}} \left(\widehat{L}^{2} + \frac{L_{\max}^{2}}{B}\right) + \frac{72 m}{n p_{\mathrm{a}}^{2} B} \left(\left(1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}}\right) \widehat{L}^{2} + \frac{L_{\max}^{2}}{B}\right)}\right)^{-1},
$$

and $g_{i}^{0} = h_{i}^{0} = \nabla f_{i}(x^{0})$ for all $i \in [n]$ and $h_{ij}^{0} = \nabla f_{ij}(x^{0})$ for all $i \in [n], j \in [m]$ in Algorithm 1 (DASHA-PP-FINITE-MVR). Then $\operatorname{E}\left[\left\|\nabla f(\hat{x}^{T})\right\|^{2}\right] \leq \frac{2\Delta_{0}}{\gamma T}$.

Proof. Let us fix constants $\nu, \rho, \delta \in [0, \infty)$ that we will define later. Considering Lemma 6, Lemma 9, and the law of total expectation, we obtain

$$
\operatorname{E}\left[f(x^{t+1})\right] + \frac{\gamma (2 \omega + 1)}{p_{\mathrm{a}}} \operatorname{E}\left[\left\|g^{t+1} - h^{t+1}\right\|^{2}\right] + \frac{\gamma ((2 \omega + 1) p_{\mathrm{a}} - p_{\mathrm{aa}})}{n p_{\mathrm{a}}^{2}} \operatorname{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \left\|g_{i}^{t+1} - h_{i}^{t+1}\right\|^{2}\right]
$$

$$
\begin{array}{l} + \nu \mathrm{E}\left[\left\|h^{t+1} - \nabla f(x^{t+1})\right\|^{2}\right] + \rho \mathrm{E}\left[\frac{1}{n} \sum_{i = 1}^{n} \left\|h_{i}^{t+1} - \nabla f_{i}(x^{t+1})\right\|^{2}\right] \\ + \delta \mathrm{E}\left[\frac{1}{nm} \sum_{i = 1}^{n} \sum_{j = 1}^{m} \left\|h_{ij}^{t+1} - \nabla f_{ij}\left(x^{t+1}\right)\right\|^{2}\right] \\ \leq \operatorname{E}\left[f(x^{t}) - \frac{\gamma}{2} \|\nabla f(x^{t})\|^{2} - \left(\frac{1}{2\gamma} - \frac{L}{2}\right) \| x ^ {t + 1} - x ^
{t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \delta \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left| \left| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right) \right| \right| ^ {2} \right] \\ = \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \mathrm {E} _ {B} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right] \right] \\ \left. \right. + \nu \mathrm {E} \left[ \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right]\right] \\ \left. \right. 
+ \rho \mathrm {E} \left[ \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] \right] \\ + \delta \mathrm {E} \left[ \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {2 b ^ {2}}{B m n} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} + \frac {2 b ^ {2}}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left(\left(\frac {2 L _ {\max } ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right.
\\ + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \frac {2 b ^ {2}}{n ^ {2} p _ {\mathrm {a}} B m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} \\ \left. + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}\right) \\ \end{array} +$$ + +$$ +\begin{array}{l} + \rho \mathrm {E} \left(\left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. + \frac {2 b ^ {2}}{p _ {\mathrm {a}} B n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} + \left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}\right) \\ + \delta \mathrm {E} \left(\frac {2 \left(1 - \frac {p _ {\mathrm {a}} B}{m}\right) L _ {\max } ^ {2}}{\frac {p _ {\mathrm {a}} B}{m}} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \right. \\ \left. + \left(\frac {2 \left(1 - \frac {p _ {\mathrm {a}} B}{m}\right) b ^ {2}}{\frac {p _ {\mathrm {a}} B}{m}} + (1 - b) ^ {2}\right) \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2}\right). 
\\ \end{array} +$$ + +Due to $b = \frac{\frac{p_{\mathrm{a}}B}{m}}{2 - \frac{p_{\mathrm{a}}B}{m}} \leq \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , we have + +$$ +\left(\frac {2 \left(1 - \frac {p _ {\mathrm {a}} B}{m}\right) b ^ {2}}{\frac {p _ {\mathrm {a}} B}{m}} + (1 - b) ^ {2}\right) \leq 1 - b +$$ + +and + +$$ +\left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \leq 1 - b. +$$ + +Moreover, we consider that $1 - \frac{p_{\mathrm{a}}B}{m}\leq 1$ , therefore + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \delta \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ 
\left(\frac {2 L _ {\mathrm {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {2 b ^ {2}}{B m n} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} + \frac {2 b ^ {2}}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left(\left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \frac {2 b ^ {2}}{n ^ {2} p _ {\mathrm {a}} B m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} \\ \left. + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}\right) \\ + \rho \mathrm {E} \left(\left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \end{array} +$$ + +$$ +\begin{array}{l} \left. + \frac {2 b ^ {2}}{p _ {\mathrm {a}} B n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} + (1 - b) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}\right) \\ + \delta \mathrm {E} \left(\frac {2 m L _ {\operatorname* {m a x}} ^ {2}}{p _ {\mathrm {a}} B} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + (1 - b) \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2}\right). 
\\ \end{array} +$$ + +After rearranging the terms, we get + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \delta \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left| \left| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right) \right| \right| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. \\ - \nu \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) - \delta \frac {2 m L _ {\operatorname* {m a x}} ^ {2}}{p _ {\mathrm {a}} B}\left. 
\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(\gamma + \nu (1 - b) ^ {2}\right) \operatorname {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ \left. + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \nu \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \rho (1 - b)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \right. \\ \left. \right. + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {2 \nu b ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 \rho b ^ {2}}{p _ {\mathrm {a}} B} + \delta (1 - b)\right) \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Thus, if we take $\nu = \frac{\gamma}{b}$ , then $\gamma + \nu (1 - b)^2 \leq \nu$ and + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \delta \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ 
+ \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. \\ \end{array} +$$ + +$$ +\begin{array}{l} \left. - \left(\frac {2 \gamma L _ {\max} ^ {2}}{b n p _ {\mathrm {a}} B} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 L _ {\max} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) - \delta \frac {2 m L _ {\max} ^ {2}}{p _ {\mathrm {a}} B}\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2}} + \rho (1 - b)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ \left. \right. + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {2 \gamma b}{n p _ {\mathrm {a}} B} + \frac {2 \rho b ^ {2}}{p _ {\mathrm {a}} B} + \delta (1 - b)\right) \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} \right]. 
\\ \end{array} +$$ + +Next, if we take $\rho = \frac{8b\gamma\omega(2\omega + 1)}{np_{\mathrm{a}}^2} +\frac{2\gamma(p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2}$ , then + +$$ +\left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2}} + \rho (1 - b)\right) = \rho , +$$ + +therefore + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \delta \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left| \left| \nabla f (x ^ {t}) \right| \right| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} 
\left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. \\ - \left(\frac {2 \gamma L _ {\max} ^ {2}}{b n p _ {\mathrm {a}} B} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}}\right) - \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \left(\frac {2 L _ {\max} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \\ \left. - \delta \frac {2 m L _ {\operatorname* {m a x}} ^ {2}}{p _ {\mathrm {a}} B}\right) \mathrm {E} \left[ \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right] \\ + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {2 \gamma b}{n p _ {\mathrm {a}} B} + \frac {1 6 b ^ {3} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3} B} + \frac {4 b ^ {2} \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n B p _ {\mathrm {a}} ^ {3}} + \delta (1 - b)\right) \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} \right] \\ \end{array} +$$ + +Due to $b \leq p_{\mathrm{a}}$ and $\frac{p_{\mathrm{a}} - p_{\mathrm{aa}}}{p_{\mathrm{a}}} \leq 1$ , we have + +$$ +\begin{array}{l} \frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {2 \gamma b}{n p _ {\mathrm {a}} B} + \frac {1 6 b ^ {3} \gamma \omega (2 \omega + 1)}{n p _ 
{\mathrm {a}} ^ {3} B} + \frac {4 b ^ {2} \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n B p _ {\mathrm {a}} ^ {3}} \\ \leq \frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {2 \gamma b}{n p _ {\mathrm {a}} B} + \frac {1 6 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {4 \gamma b}{n p _ {\mathrm {a}} B} \\ \end{array} +$$ + +$$ += \frac {2 4 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6 \gamma b}{n p _ {\mathrm {a}} B}. +$$ + +Let us take $\delta = \frac{24b\gamma\omega(2\omega + 1)}{np_{\mathrm{a}}^2B} +\frac{6\gamma}{np_{\mathrm{a}}B}$ . Thus + +$$ +\left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {2 \gamma b}{n p _ {\mathrm {a}} B} + \frac {1 6 b ^ {3} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3} B} + \frac {4 b ^ {2} \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n B p _ {\mathrm {a}} ^ {3}} + \delta (1 - b)\right) \leq \delta +$$ + +and + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \left(\frac {2 4 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6 \gamma}{n p _ {\mathrm {a}} B}\right) \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ 
{n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left| \left| \nabla f (x ^ {t}) \right| \right| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. \\ - \left(\frac {2 \gamma L _ {\max} ^ {2}}{b n p _ {\mathrm {a}} B} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}}\right) - \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \left(\frac {2 L _ {\max} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \\ \left. 
- \left(\frac {2 4 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6 \gamma}{n p _ {\mathrm {a}} B}\right) \frac {2 m L _ {\operatorname* {m a x}} ^ {2}}{p _ {\mathrm {a}} B}\right) \mathrm {E} \left[ \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right] \\ + \left(\frac {2 4 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6 \gamma}{n p _ {\mathrm {a}} B}\right) \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Let us simplify the term near $\mathrm{E}\left[\left\| x^{t + 1} - x^t\right\|^2\right]$ . 
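Before doing so, let us record, for completeness, the elementary bounds on $b$ that will be used in the estimates below (a short verification; here we only use that $B \leq m$ and $p_{\mathrm{a}} \in (0, 1]$, so that $c \coloneqq \frac{p_{\mathrm{a}}B}{m} \in (0, 1]$ and $b = \frac{c}{2 - c}$):

$$
\frac{p_{\mathrm{a}}B}{2m} = \frac{c}{2} \leq b = \frac{c}{2 - c} \leq c = \frac{p_{\mathrm{a}}B}{m} \leq p_{\mathrm{a}},
$$

which follows from $1 \leq 2 - c < 2$. In particular, $b \geq \frac{p_{\mathrm{a}}B}{2m}$, $b \leq \frac{p_{\mathrm{a}}B}{m}$, and $b \leq p_{\mathrm{a}}$.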
Due to $b \leq p_{\mathrm{a}}$ , $\frac{p_{\mathrm{a}} - p_{\mathrm{aa}}}{p_{\mathrm{a}}} \leq 1$ , and $1 - p_{\mathrm{a}} \leq 1$ , we have + +$$ +\begin{array}{l} \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \\ + \left(\frac {2 \gamma L _ {\max} ^ {2}}{b n p _ {\mathrm {a}} B} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}}\right) \\ + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \left(\frac {2 L _ {\max} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \\ + \left(\frac {2 4 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6 \gamma}{n p _ {\mathrm {a}} B}\right) \frac {2 m L _ {\max} ^ {2}}{p _ {\mathrm {a}} B} \\ \leq \frac {1 2 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \\ \end{array} +$$ + +$$ +\begin{array}{l} + \left(\frac {6 \gamma L _ {\max} ^ {2}}{b n p _ {\mathrm {a}} B} + \frac {6 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}}\right) \\ + \left(\frac {2 4 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6 \gamma}{n p _ {\mathrm {a}} B}\right) \frac {2 m L _ {\max} ^ {2}}{p _ {\mathrm {a}} B} \\ \end{array} +$$ + +Considering that $b \leq \frac{p_{\mathrm{a}}B}{m}$ and $b \geq \frac{p_{\mathrm{a}}B}{2m}$ , we obtain + +$$ +\begin{array}{l} \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \\ + \left(\frac {2 \gamma L _ {\max} ^ {2}}{b n p _ {\mathrm {a}} B} + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a 
a}}\right) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}}\right) \\ + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \left(\frac {2 L _ {\max} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \\ + \left(\frac {2 4 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6 \gamma}{n p _ {\mathrm {a}} B}\right) \frac {2 m L _ {\max} ^ {2}}{p _ {\mathrm {a}} B} \\ \leq \frac {3 6 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) + \left(\frac {1 8 \gamma L _ {\operatorname* {m a x}} ^ {2}}{b n p _ {\mathrm {a}} B} + \frac {6 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}}\right) \\ \leq \frac {3 6 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) + \left(\frac {3 6 m \gamma L _ {\operatorname* {m a x}} ^ {2}}{n p _ {\mathrm {a}} ^ {2} B ^ {2}} + \frac {1 2 m \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{B n p _ {\mathrm {a}} ^ {3}}\right). 
\\ \end{array} +$$ + +All in all, we have + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \left(\frac {2 4 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6 \gamma}{n p _ {\mathrm {a}} B}\right) \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ \left. \right. 
- \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {3 6 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 L _ {\operatorname* {m a x}} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) - \left(\frac {3 6 m \gamma L _ {\operatorname* {m a x}} ^ {2}}{n p _ {\mathrm {a}} ^ {2} B ^ {2}} + \frac {1 2 m \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{B n p _ {\mathrm {a}} ^ {3}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left| \left| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right| \right| ^ {2} \right] \\ + \left(\frac {2 4 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6 \gamma}{n p _ {\mathrm {a}} B}\right) \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} \right]. 
\\ \end{array} +$$ + +Using Lemma 4 and the assumption about $\gamma$ , we get + +$$ +\operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] +$$ + +$$ +\begin{array}{l} + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ + \left(\frac {2 4 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6 \gamma}{n p _ {\mathrm {a}} B}\right) \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t + 1} - \nabla f _ {i j} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) 
\operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right] \\ + \left(\frac {2 4 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6 \gamma}{n p _ {\mathrm {a}} B}\right) \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array}
$$

It remains to apply Lemma 3 with

$$
\begin{array}{l} \Psi^ {t} = \frac {(2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\left((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {1}{b} \operatorname {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right] \\ + \left(\frac {2 4 b \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} B} + \frac {6}{n p _ {\mathrm {a}} B}\right) \mathrm {E} \left[ \frac {1}{n m} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} \left\| h _ {i j} ^ {t} - \nabla f _ {i j} \left(x ^ {t}\right) \right\| ^ {2} \right] \\ \end{array}
$$

to conclude the proof.

# E.6 Proof for DASHA-PP-MVR

Let us denote $\nabla f_{i}(x^{t + 1};\xi_{i}^{t + 1})\coloneqq \frac{1}{B}\sum_{j = 1}^{B}\nabla f_{i}(x^{t + 1};\xi_{ij}^{t + 1})$.

Lemma 10. Suppose that Assumptions 3, 5, 6, and 8 hold. For $h_i^{t+1}$ and $k_i^{t+1}$ from Algorithm 1 (DASHA-PP-MVR), we have

1.
+ +$$ +\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left| \left| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right| \right| ^ {2} \right] \right] \\ \leq \frac {2 b ^ {2} \sigma^ {2}}{n p _ {\mathrm {a}} B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \\ + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \\ \end{array} +$$ + +2. + +$$ +\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left| \left| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right| \right| ^ {2} \right] \right] \\ \leq \frac {2 b ^ {2} \sigma^ {2}}{p _ {\mathrm {a}} B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) L _ {i} ^ {2}}{p _ {\mathrm {a}}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \\ + \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}, \quad \forall i \in [ n ]. \\ \end{array} +$$ + +3. + +$$ +\operatorname {E} _ {k} \left[ \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right] \leq \frac {2 b ^ {2} \sigma^ {2}}{B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 L _ {i} ^ {2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + 2 b ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}, \quad \forall i \in [ n ]. +$$ + +Proof. First, let us proof the bound for $\operatorname{E}_k\left[\operatorname{E}_{p_a}\left[\left\| h^{t + 1} - \nabla f(x^{t + 1})\right\| ^2\right]\right]$ : + +$$ +\begin{array}{l} \left. 
\mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \right] \right. \\ = \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} [ h ^ {t + 1} ] \right] \right\| ^ {2} \right] \right] + \left\| \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} [ h ^ {t + 1} ] \right] - \nabla f (x ^ {t + 1}) \right\| ^ {2}. \\ \end{array}
$$

Using

$$
\mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i} ^ {t + 1} \right] \right] = h _ {i} ^ {t} + \mathrm {E} _ {k} \left[ k _ {i} ^ {t + 1} \right] = h _ {i} ^ {t} + \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b (h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}))
$$

and (16), we have

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \right] \\ = \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} [ h ^ {t + 1} ] \right] \right\| ^ {2} \right] \right] + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}.
\\ \end{array}
$$

We can use Lemma 1 with $r_i = h_i^t$ and $s_i = k_i^{t + 1}$ to obtain

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \right] \\ \leq \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \left\| k _ {i} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i} ^ {t + 1} \right] \right\| ^ {2} \right] + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \mathrm {E} _ {k} \left[ k _ {i} ^ {t + 1} \right] \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \\ = \frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \right.\left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right)\right)\right. \\ \left. \left.
- \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \\ \end{array}
$$

$$
\begin{array}{l} \stackrel {(15)} {\leq} \frac {2}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \left\| b \left(\nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right)\right) \right\| ^ {2} \right] \\ + \frac {2}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \left\| (1 - b) \left(\nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \\ = \frac {2 b ^ {2}}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ + \frac {2 (1 - b) ^ {2}}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right) -
\left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \right] \\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b (h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \\ \end{array}
$$

$$
\begin{array}{l} = \frac {2 b ^ {2}}{n ^ {2} p _ {\mathrm {a}} B ^ {2}} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {B} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i j} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ + \frac {2 (1 - b) ^ {2}}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \right] \\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b (h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})) \right\| ^ {2} \\ + \left(1 - b\right) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \\ \end{array}
$$

In the last equality, we use the independence of elements in the mini-batches. Due to Assumption 5, we get

$$
\mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right]
$$

$$
\begin{array}{l} \leq \frac {2 b ^ {2} \sigma^ {2}}{n p _ {\mathrm {a}} B} \\ + \frac {2 (1 - b) ^ {2}}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} (x ^ {t + 1}; \xi_ {i} ^ {t + 1}) - \nabla f _ {i} (x ^ {t}; \xi_ {i} ^ {t + 1}) - \left(\nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t})\right) \right\| ^ {2} \right] \\ + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b (h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})) \right\| ^ {2} \\ + \left(1 - b\right) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \\ \end{array}
$$

$$
\begin{array}{l} \stackrel {(15)} {\leq} \frac {2 b ^ {2} \sigma^ {2}}{n p _ {\mathrm {a}} B} \\ + \frac {2 (1 - b) ^ {2}}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} (x ^ {t + 1}; \xi_ {i} ^ {t + 1}) - \nabla f _ {i} (x ^ {t}; \xi_ {i} ^ {t + 1}) - \left(\nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t})\right) \right\| ^ {2} \right] \\ + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}.
\\ = \frac {2 b ^ {2} \sigma^ {2}}{n p _ {\mathrm {a}} B} \\ + \frac {2 (1 - b) ^ {2}}{n ^ {2} p _ {\mathrm {a}} B ^ {2}} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {B} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i j} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i j} ^ {t + 1}\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \right] \\ + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}, \\ \end{array}
$$

where we use the independence of elements in the mini-batches. Using Assumptions 3 and 6, we obtain

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right] \\ \leq \frac {2 b ^ {2} \sigma^ {2}}{n p _ {\mathrm {a}} B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}.
\\ \end{array}
$$

Now, we prove the second inequality:

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] \\ = \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i} ^ {t + 1} - \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] \\ + \left\| \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i} ^ {t + 1} \right] \right] - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \\ = \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i} ^ {t + 1} - \left(h _ {i} ^ {t} + \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \right] \\ + \left\| h _ {i} ^ {t} + \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \\ = \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i} ^ {t + 1} - \left(h _ {i} ^ {t} + \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \right. \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ = p _ {\mathrm {a}} \mathrm {E} _ {k} \left[ \left\| h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \left(h _ {i} ^ {t} + \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \\ \left.
+ \left(1 - p _ {\mathrm {a}}\right) \left\| h _ {i} ^ {t} - \left(h _ {i} ^ {t} + \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right. \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ = p _ {\mathrm {a}} \mathrm {E} _ {k} \left[ \left\| \frac {1}{p _ {\mathrm {a}}} k _ {i} ^ {t + 1} - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \\ + \left(1 - p _ {\mathrm {a}}\right) \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ \stackrel {(16)} {=} \frac {1}{p _ {\mathrm {a}}} \mathrm {E} _ {k} \left[ \left\| k _ {i} ^ {t + 1} - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \\ + \frac {\left(1 - p _ {\mathrm {a}}\right) ^ {2}}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + \left(1 - p _ {\mathrm {a}}\right) \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ = \frac {1}{p _ {\mathrm {a}}} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right) - b \left(h _ {i} ^ {t} -
\nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right)\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \\ + \frac {1 - p _ {\mathrm {a}}}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ = \frac {1}{p _ {\mathrm {a}}} \mathrm {E} _ {k} \left[ \left\| b \left(\nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right)\right) + (1 - b) \left(\nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \\ + \frac {1 - p _ {\mathrm {a}}}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ \stackrel {(15)} {\leq} \frac {2 b ^ {2}}{p _ {\mathrm {a}}} \mathrm {E} _ {k} \left[ \| \nabla f _ {i} (x ^ {t + 1}; \xi_ {i} ^ {t + 1}) - \nabla f _ {i} (x ^ {t + 1}) \| ^ {2} \right] \\ + \frac {2 (1 - b) ^ {2}}{p _ {\mathrm {a}}} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \right] \\ + \frac {1 - p _ {\mathrm {a}}}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} -
\nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}. \\ \end{array}
$$

Considering the independence of elements in the mini-batch, we obtain

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right)\right\| ^ {2} \right]\right] \\ = \frac {2 b ^ {2}}{p _ {\mathrm {a}} B ^ {2}} \sum_ {j = 1} ^ {B} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i j} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ + \frac {2 (1 - b) ^ {2}}{p _ {\mathrm {a}} B ^ {2}} \sum_ {j = 1} ^ {B} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i j} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i j} ^ {t + 1}\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \right] \\ + \frac {1 - p _ {\mathrm {a}}}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - b) ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}.
\\ \end{array}
$$

$$
\begin{array}{l} \stackrel {(15)} {\leq} \frac {2 b ^ {2}}{p _ {\mathrm {a}} B ^ {2}} \sum_ {j = 1} ^ {B} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i j} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ + \frac {2 (1 - b) ^ {2}}{p _ {\mathrm {a}} B ^ {2}} \sum_ {j = 1} ^ {B} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i j} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i j} ^ {t + 1}\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \right] \\ + \frac {2 (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ \end{array}
$$

Next, we use Assumptions 3, 5 and 6 to get

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \right] \\ \leq \frac {2 b ^ {2} \sigma^ {2}}{p _ {\mathrm {a}} B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) L _ {i} ^ {2}}{p _ {\mathrm {a}}}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}.
\\ \end{array}
$$

It is left to prove the bound for $\mathrm{E}_k\left[\left\| k_i^{t + 1}\right\| ^2\right]$:

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ = \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right)\right) \right\| ^ {2} \right] \\ \stackrel {(16)} {=} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} (x ^ {t + 1}; \xi_ {i} ^ {t + 1}) - \nabla f _ {i} (x ^ {t}; \xi_ {i} ^ {t + 1}) - b \left(h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}; \xi_ {i} ^ {t + 1})\right) - \left(\nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - b (h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}))\right) \right\| ^ {2} \right] \\ + \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ = \mathrm {E} _ {k} \left[ \left\| b \left(\nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right)\right) + (1 - b) \left(\nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i} ^ {t + 1}\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \right\| ^ {2} \right] \\ + \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - b \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ \stackrel {(15)} {\leq} 2 b ^ {2} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ + 2 (1 - b) ^ {2} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} (x ^ {t + 1}; \xi_ {i} ^ {t + 1}) - \nabla f _ {i} (x ^ {t}; \xi_ {i} ^ {t + 1}) -
\left(\nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t})\right) \right\| ^ {2} \right] \\ + 2 \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + 2 b ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}. \\ \end{array}
$$

Using Assumptions 3, 5, 6 and the independence of elements in the mini-batch, we get

$$
\mathrm {E} _ {k} \left[ \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right] \leq \frac {2 b ^ {2} \sigma^ {2}}{B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 L _ {i} ^ {2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + 2 b ^ {2} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}.
$$

Theorem 4. Suppose that Assumptions 1, 2, 3, 5, 6, 7 and 8 hold. Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$, $b\in \left(0,\frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}\right]$, $\gamma \leq \left(L + \left[\frac{48\omega(2\omega + 1)}{np_{\mathrm{a}}^{2}}\left(\widehat{L}^{2} + \frac{(1 - b)^{2}L_{\sigma}^{2}}{B}\right) + \frac{12}{np_{\mathrm{a}}b}\left(\left(1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}}\right)\widehat{L}^{2} + \frac{(1 - b)^{2}L_{\sigma}^{2}}{B}\right)\right]^{1 / 2}\right)^{-1}$, and $g_i^0 = h_i^0$ for all $i\in [n]$ in Algorithm 1 (DASHA-PP-MVR).
Then

$$
\begin{array}{l} \mathrm {E} \left[ \left\| \nabla f (\widehat {x} ^ {T}) \right\| ^ {2} \right] \leq \frac {1}{T} \left[ \frac {2 \Delta_ {0}}{\gamma} + \frac {2}{b} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \left(\frac {32 b \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}}}\right) \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2}\right) \right] \\ + \left(\frac {48 b ^ {2} \omega (2 \omega + 1)}{p _ {\mathrm {a}} ^ {2}} + \frac {12 b}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{n B}. \\ \end{array}
$$

Proof. Let us fix constants $\nu, \rho \in [0, \infty)$ that we will define later. Considering Lemma 6, Lemma 10, and the law of total expectation, we obtain

$$
\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \mathrm {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \gamma \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[
\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ = \mathrm {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \gamma \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \mathrm {E} _ {k} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| k _ {i} ^ {t + 1} \| ^ {2} \right] \right] \\ + \nu \mathrm {E} \left[ \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[\left\| h ^ {t + 1} - \nabla f \left(x ^ {t + 1}\right)\right\| ^ {2} \right]\right]\right] \\ + \rho \mathrm {E} \left[ \mathrm {E} _ {B} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] \right] \\ \leq \mathrm {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {2 b ^ {2} \sigma^ {2}}{B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + 2 b ^ {2} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ \end{array}
$$

$$
\begin{array}{l} + \nu \mathrm {E} \left(\frac {2 b ^ {2} \sigma^ {2}}{n p _ {\mathrm {a}} B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}\right) \\ + \rho \mathrm {E} \left(\frac {2 b ^ {2} \sigma^ {2}}{p _ {\mathrm {a}} B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left.
+ \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}\right). \\ \end{array}
$$

After rearranging the terms, we get

$$
\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \mathrm {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \mathrm {E} \left[ \left\| \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. \\ \left.
- \nu \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(\gamma + \nu (1 - b) ^ {2}\right) \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \nu \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right\| ^ {2} \right] \\ + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \nu \frac {2 b ^ {2}}{n p _ {\mathrm {a}}} + \rho \frac {2 b ^ {2}}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}.
\\ \end{array}
$$

By taking $\nu = \frac{\gamma}{b}$, one can show that $(\gamma + \nu (1 - b)^2) \leq \nu$ since $\gamma + \frac{\gamma}{b}(1 - b)^2 = \frac{\gamma}{b}\left(1 - b(1 - b)\right) \leq \frac{\gamma}{b}$ for $b \in (0, 1]$, and

$$
\begin{array}{l} \mathrm {E} \left[ f \left(x ^ {t + 1}\right) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \mathrm {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \mathrm {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. \\ \left.
- \frac {\gamma}{b} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ \end{array}
$$

$$
\begin{array}{l} + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma b}{n p _ {\mathrm {a}}} + \rho \frac {2 b ^ {2}}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. \\ \end{array}
$$

Note that $b \leq \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$, thus

$$
\begin{array}{l} \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \\ \leq \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2}} + \rho (1 - b)\right).
\\ \end{array} +$$ + +And if we take $\rho = \frac{8b\gamma\omega(2\omega + 1)}{np_{\mathrm{a}}^2} +\frac{2\gamma(p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2}$ , then + +$$ +\left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2}} + \rho (1 - b)\right) \leq \rho , +$$ + +and + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. 
\\ - \frac {\gamma}{n p _ {\mathrm {a}} b} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right) \\ \left. - \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3}} + \frac {2 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma b}{n p _ {\mathrm {a}}} + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \frac {2 b ^ {2}}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. \\ \end{array} +$$ + +Let us simplify the inequality. 
First, due to $b \leq p_{\mathrm{a}}$ and $(1 - p_{\mathrm{a}}) \leq \left(1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}} \right)$ , we have + +$$ +\begin{array}{l} \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3}} + \frac {2 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}\right) \\ = \frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}\right) \\ + \frac {2 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}\right) \\ \end{array} +$$ + +$$ +\begin{array}{l} \leq \frac {8 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \\ + \frac {2 \gamma}{n p _ {\mathrm {a}} b} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right), \\ \end{array} +$$ + +therefore + +$$ +\begin{array}{l} \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ 
{2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {1 2 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. \\ \left. - \frac {3 \gamma}{n p _ {\mathrm {a}} b} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma b}{n p _ {\mathrm {a}}} + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \frac {2 b ^ {2}}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B} \\ = \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left\| \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {\gamma (2 
\omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {2 4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. - \frac {6 \gamma}{n p _ {\mathrm {a}} b} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right)\right) \mathrm {E} \left[ \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {8 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma b}{n p _ {\mathrm {a}}} + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \frac {2 b ^ {2}}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. 
\\ \end{array} +$$ + +Also, we can simplify the last term: + +$$ +\begin{array}{l} \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \frac {2 b ^ {2}}{p _ {\mathrm {a}}} \\ = \frac {1 6 b ^ {3} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3}} + \frac {4 b ^ {2} \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2}} \\ \leq \frac {1 6 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 b \gamma}{n p _ {\mathrm {a}}}, \\ \end{array} +$$ + +thus + +$$ +\operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] +$$ + +$$ +\begin{array}{l} + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac 
{L}{2} - \frac {2 4 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. - \frac {6 \gamma}{n p _ {\mathrm {a}} b} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {2 4 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {6 \gamma b}{n p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. \\ \end{array} +$$ + +Using Lemma 4 and the assumption about $\gamma$ , we get + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ 
{2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {8 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {2 4 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {6 \gamma b}{n p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. \\ \end{array} +$$ + +It is left to apply Lemma 3 with + +$$ +\begin{array}{l} \Psi^ {t} = \frac {(2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\left((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {1}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {8 b \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ \end{array} +$$ + +and $C = \left(\frac{24b^2\omega(2\omega + 1)}{p_{\mathrm{a}}^2} +\frac{6b}{p_{\mathrm{a}}}\right)\frac{\sigma^2}{nB}$ to conclude the proof. + +Corollary 3. 
Suppose that assumptions from Theorem 4 hold, momentum $b = \Theta \left( \min \left\{ \frac{p_{\mathrm{a}}}{\omega} \sqrt{\frac{n \varepsilon B}{\sigma^2}}, \frac{p_{\mathrm{a}} n \varepsilon B}{\sigma^2} \right\} \right)$ , $\frac{\sigma^2}{n \varepsilon B} \geq 1$ , and $h_i^0 = \frac{1}{B_{\mathrm{init}}} \sum_{k=1}^{B_{\mathrm{init}}} \nabla f_i(x^0; \xi_{ik}^0)$ for all $i \in [n]$ , and batch size $B_{\mathrm{init}} = \Theta \left( \frac{\sqrt{p_{\mathrm{a}}} B}{b} \right)$ , then Algorithm 1 (DASHA-PP-MVR) needs + +$$ +T := \mathcal {O} \left(\frac {\Delta_ {0}}{\varepsilon} \left[ L + \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {\sigma}{p _ {\mathrm {a}} \sqrt {\varepsilon} n} \left(\frac {\mathbb {1} _ {p _ {\mathrm {a}}} \widehat {L}}{\sqrt {B}} + \frac {L _ {\sigma}}{B}\right) \right] + \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon B}\right) +$$ + +communication rounds to get an $\varepsilon$ -solution and the number of stochastic gradient calculations per node equals $\mathcal{O}(B_{\mathrm{init}} + BT)$ . + +Proof. Using the result from Theorem 4, we have + +$$ +\begin{array}{l} \operatorname {E} \left[ \left\| \nabla f (\widehat {x} ^ {T}) \right\| ^ {2} \right] \\ \leq \frac {1}{T} \left[ 2 \Delta_ {0} \left(L + \sqrt {\frac {4 8 \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B}\right) + \frac {1 2}{n p _ {\mathrm {a}} b} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B}\right)}\right) \right. \\ \left. 
+ \frac {2}{b} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \left(\frac {3 2 b \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}}}\right) \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2}\right) \right] \\ + \left(\frac {4 8 b ^ {2} \omega (2 \omega + 1)}{p _ {\mathrm {a}} ^ {2}} + \frac {1 2 b}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{n B} \\ \end{array} +$$ + +We choose $b$ to ensure $\left(\frac{48b^2\omega(2\omega + 1)}{p_{\mathrm{a}}^2} +\frac{12b}{p_{\mathrm{a}}}\right)\frac{\sigma^2}{nB} = \Theta (\varepsilon)$ . Note that $\frac{1}{b} = \Theta \left(\max \left\{\frac{\omega}{p_{\mathrm{a}}}\sqrt{\frac{\sigma^2}{n\varepsilon B}},\frac{\sigma^2}{p_{\mathrm{a}}n\varepsilon B}\right\}\right) \leq \Theta \left(\max \left\{\frac{\omega^2}{p_{\mathrm{a}}},\frac{\sigma^2}{p_{\mathrm{a}}n\varepsilon B}\right\}\right)$ , thus + +$$ +\begin{array}{l} \mathrm {E} \left[ \left\| \nabla f (\widehat {x} ^ {T}) \right\| ^ {2} \right] \\ = \mathcal {O} \left(\frac {1}{T} \left[ \Delta_ {0} \left(L + \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \sqrt {\frac {\sigma^ {2}}{p _ {\mathrm {a}} ^ {2} \varepsilon n ^ {2} B}} \left(\mathbb {1} _ {p _ {\mathrm {a}}} \widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right)\right) \right. \right. \\ \left. + \frac {1}{b} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \left(\frac {b \omega^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \frac {1}{n p _ {\mathrm {a}}}\right)\left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2}\right)\right] + \varepsilon\left. \right), \\ \end{array} +$$ + +where $\mathbb{1}_{p_{\mathrm{a}}} = \sqrt{1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}}}$ . It is enough to take the following $T$ to get an $\varepsilon$ -solution.
+ +$$ +\begin{array}{l} T = \mathcal {O} \left(\frac {1}{\varepsilon} \left[ \Delta_ {0} \left(L + \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \sqrt {\frac {\sigma^ {2}}{p _ {\mathrm {a}} ^ {2} \varepsilon n ^ {2} B}} \left(\mathbb {1} _ {p _ {\mathrm {a}}} \widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right)\right) \right. \right. \\ \left. \left. + \frac {1}{b} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \left(\frac {b \omega^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \frac {1}{n p _ {\mathrm {a}}}\right) \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2}\right) \right]\right). \\ \end{array} +$$ + +Let us bound the norms: + +$$ +\begin{array}{l} \operatorname {E} \left[ \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} \right] = \operatorname {E} \left[ \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \frac {1}{B _ {\mathrm {i n i t}}} \sum_ {k = 1} ^ {B _ {\mathrm {i n i t}}} \nabla f _ {i} (x ^ {0}; \xi_ {i k} ^ {0}) - \nabla f (x ^ {0}) \right\| ^ {2} \right] \\ = \frac {1}{n ^ {2} B _ {\mathrm {i n i t}} ^ {2}} \sum_ {i = 1} ^ {n} \sum_ {k = 1} ^ {B _ {\mathrm {i n i t}}} \operatorname {E} \left[ \left\| \nabla f _ {i} (x ^ {0}; \xi_ {i k} ^ {0}) - \nabla f _ {i} (x ^ {0}) \right\| ^ {2} \right] \\ \leq \frac {\sigma^ {2}}{n B _ {\mathrm {i n i t}}}. \\ \end{array} +$$ + +Using the same reasoning, one can get $\frac{1}{n}\sum_{i=1}^{n}\mathrm{E}\left[\left\|h_i^0 - \nabla f_i(x^0)\right\|^2\right] \leq \frac{\sigma^2}{B_{\mathrm{init}}}$ . 
Combining all inequalities, we have + +$$ +\begin{array}{l} T = \mathcal {O} \left(\frac {1}{\varepsilon} \left[ \Delta_ {0} \left(L + \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \sqrt {\frac {\sigma^ {2}}{p _ {\mathrm {a}} ^ {2} \varepsilon n ^ {2} B}} \left(\mathbb {1} _ {p _ {\mathrm {a}}} \widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right)\right) \right. \right. \\ \left. \left. + \frac {\sigma^ {2}}{b n B _ {\mathrm {i n i t}}} + \frac {b \omega^ {2} \sigma^ {2}}{n p _ {\mathrm {a}} ^ {2} B _ {\mathrm {i n i t}}} + \frac {\sigma^ {2}}{n p _ {\mathrm {a}} B _ {\mathrm {i n i t}}} \right] \right). \\ \end{array} +$$ + +Using the choice of $B_{\mathrm{init}}$ and $b$ , we obtain + +$$ +\begin{array}{l} T = \mathcal {O} \left(\frac {1}{\varepsilon} \left[ \Delta_ {0} \left(L + \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \sqrt {\frac {\sigma^ {2}}{p _ {\mathrm {a}} ^ {2} \varepsilon n ^ {2} B}} \left(\mathbb {1} _ {p _ {\mathrm {a}}} \widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right)\right) \right. \right. \\ \left. \left. + \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n B} + \frac {b ^ {2} \omega^ {2} \sigma^ {2}}{n p _ {\mathrm {a}} ^ {5 / 2} B} + \frac {b \sigma^ {2}}{p _ {\mathrm {a}} ^ {3 / 2} n B} \right] \right) \\ = \mathcal {O} \left(\frac {1}{\varepsilon} \left[ \Delta_ {0} \left(L + \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \sqrt {\frac {\sigma^ {2}}{p _ {\mathrm {a}} ^ {2} \varepsilon n ^ {2} B}} \left(\mathbb {1} _ {p _ {\mathrm {a}}} \widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right)\right) \right. \right. \\ \left. \left. 
+ \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n B} + \frac {\varepsilon}{\sqrt {p _ {\mathrm {a}}}} \right] \right) \\ = \mathcal {O} \left(\frac {\Delta_ {0}}{\varepsilon} \left[ L + \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \sqrt {\frac {\sigma^ {2}}{p _ {\mathrm {a}} ^ {2} \varepsilon n ^ {2} B}} \left(\mathbb {1} _ {p _ {\mathrm {a}}} \widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) \right] + \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon B} + \frac {1}{\sqrt {p _ {\mathrm {a}}}}\right). \\ \end{array} +$$ + +Using $\frac{\sigma^2}{n\varepsilon B} \geq 1$ , we can conclude the proof of the inequality. The number of stochastic gradients that each node calculates equals $B_{\mathrm{init}} + 2BT = \mathcal{O}(B_{\mathrm{init}} + BT)$ . + +Corollary 4. Suppose that assumptions of Corollary 3 hold, batch size $B \leq \min \left\{\frac{\sigma}{p_{\mathrm{a}}\sqrt{\varepsilon}n}, \frac{L_{\sigma}^2}{1_{p_{\mathrm{a}}}^2\hat{L}^2}\right\}$ , we take RandK compressors with $K = \Theta\left(\frac{Bd\sqrt{\varepsilon n}}{\sigma}\right)$ . Then the communication complexity equals $\mathcal{O}\left(\frac{d\sigma}{\sqrt{p_{\mathrm{a}}\sqrt{n\varepsilon}}} + \frac{L_{\sigma}\Delta_0d}{p_{\mathrm{a}}\sqrt{n\varepsilon}}\right)$ , and the expected number of stochastic gradient calculations per node equals $\mathcal{O}\left(\frac{\sigma^2}{\sqrt{p_{\mathrm{a}}n\varepsilon}} + \frac{L_{\sigma}\Delta_0\sigma}{p_{\mathrm{a}}\varepsilon^{3/2}n}\right)$ . + +Proof. 
The communication complexity equals + +$$ +\mathcal {O} (d + K T) = \mathcal {O} \left(d + \frac {\Delta_ {0}}{\varepsilon} \left[ K L + K \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + K \sqrt {\frac {\sigma^ {2}}{p _ {\mathrm {a}} ^ {2} \varepsilon n ^ {2} B}} \left(\mathbb {1} _ {p _ {\mathrm {a}}} \widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) \right] + K \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon B}\right). +$$ + +Due to $B \leq \frac{L_{\sigma}^{2}}{\mathbb{1}_{p_{\mathrm{a}}}^{2}\widehat{L}^{2}}$ , we have $\mathbb{1}_{p_{\mathrm{a}}}\widehat{L} + \frac{L_{\sigma}}{\sqrt{B}} \leq \frac{2L_{\sigma}}{\sqrt{B}}$ and + +$$ +\mathcal {O} (d + K T) = \mathcal {O} \left(d + \frac {\Delta_ {0}}{\varepsilon} \left[ K L + K \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + K \sqrt {\frac {\sigma^ {2}}{p _ {\mathrm {a}} ^ {2} \varepsilon n ^ {2} B}} \frac {L _ {\sigma}}{\sqrt {B}} \right] + K \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon B}\right). +$$ + +From Theorem 6, we have $\omega + 1 = \frac{d}{K}$ . 
Since $K = \Theta\left(\frac{Bd\sqrt{\varepsilon n}}{\sigma}\right) = \mathcal{O}\left(\frac{d}{p_{\mathrm{a}}\sqrt{n}}\right)$ , the communication complexity equals + +$$ +\begin{array}{l} \mathcal {O} (d + K T) = \mathcal {O} \left(d + \frac {\Delta_ {0}}{\varepsilon} \left[ \frac {d}{p _ {\mathrm {a}} \sqrt {n}} L + \frac {d}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {d}{p _ {\mathrm {a}} \sqrt {n}} L _ {\sigma} \right] + \frac {d \sigma}{\sqrt {p _ {\mathrm {a}}} \sqrt {n \varepsilon}}\right) \\ = \mathcal {O} \left(\frac {d \sigma}{\sqrt {p _ {\mathrm {a}}} \sqrt {n \varepsilon}} + \frac {L _ {\sigma} \Delta_ {0} d}{p _ {\mathrm {a}} \sqrt {n} \varepsilon}\right) \\ \end{array} +$$ + +And the expected number of stochastic gradient calculations per node equals + +$$ +\begin{array}{l} \mathcal {O} \left(B _ {\text {i n i t}} + B T\right) \\ = \mathcal {O} \left(\frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon} + \frac {B \omega}{\sqrt {p _ {\mathrm {a}}}} \sqrt {\frac {\sigma^ {2}}{n \varepsilon B}} + \frac {\Delta_ {0}}{\varepsilon} \left[ B L + B \frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + B \sqrt {\frac {\sigma^ {2}}{p _ {\mathrm {a}} ^ {2} \varepsilon n ^ {2} B}} \left(\mathbb {1} _ {p _ {\mathrm {a}}} \widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) \right] + B \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon B}\right) \\ = \mathcal {O} \left(\frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon} + \frac {B d}{K \sqrt {p _ {\mathrm {a}}}} \sqrt {\frac {\sigma^ {2}}{n \varepsilon B}} + \frac {\Delta_ {0}}{\varepsilon} \left[ B L + B \frac {d}{K p _ {\mathrm {a}} \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + B \sqrt {\frac {\sigma^ {2}}{p _ {\mathrm {a}} ^ {2} \varepsilon n ^ {2} B}} \frac {L _ {\sigma}}{\sqrt {B}} \right] + \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n 
\varepsilon}\right) \\ = \mathcal {O} \left(\frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon} + \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon \sqrt {B}} + \frac {\Delta_ {0}}{\varepsilon} \left[ \frac {\sigma}{p _ {\mathrm {a}} \sqrt {\varepsilon} n} L + \frac {\sigma}{p _ {\mathrm {a}} \sqrt {\varepsilon} n} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {\sigma}{p _ {\mathrm {a}} \sqrt {\varepsilon} n} L _ {\sigma} \right]\right) \\ = \mathcal {O} \left(\frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon} + \frac {L _ {\sigma} \Delta_ {0} \sigma}{p _ {\mathrm {a}} \varepsilon^ {3 / 2} n}\right). \\ \end{array} +$$ + +# F Analysis of DASHA-PP under Polyak-Lojasiewicz Condition + +In this section, we provide the theoretical convergence rates of DASHA-PP under the Polyak-Lojasiewicz condition. + +Assumption 9. The function $f$ satisfies the Polyak-Lojasiewicz (PL) condition: + +$$ +\left\| \nabla f (x) \right\| ^ {2} \geq 2 \mu \left(f (x) - f ^ {*}\right), \quad \forall x \in \mathbb {R} ^ {d}, \tag {30} +$$ + +where $f^{*} = \inf_{x\in \mathbb{R}^{d}}f(x) > - \infty$ . + +Under the PL condition, a (random) point $\widehat{x}$ is an $\varepsilon$ -solution if $\operatorname{E}[f(\widehat{x})] - f^{*} \leq \varepsilon$ . + +We now provide the convergence rates of DASHA-PP under the PL condition. + +# F.1 Gradient Setting + +Theorem 8. Suppose that Assumptions 1, 2, 3, 7, 8 and 9 hold.
Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , $b = \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , + +$$ +\gamma \leq \min \left\{\left(L + \sqrt {\frac {2 0 0 \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 8}{n p _ {\mathrm {a}} ^ {2}} \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)} \widehat {L}\right) ^ {- 1}, \frac {a}{4 \mu} \right\}, +$$ + +and $h_i^0 = g_i^0 = \nabla f_i(x^0)$ for all $i \in [n]$ in Algorithm 1 (DASHA-PP), then $\operatorname{E}\left[f(x^T)\right] - f^* \leq (1 - \gamma \mu)^T \Delta_0$ . + +Let us provide bounds up to logarithmic factors and use $\tilde{\mathcal{O}} (\cdot)$ notation. The provided theorem states that to get an $\varepsilon$ -solution, DASHA-PP has to run + +$$ +\widetilde {\mathcal {O}} \left(\frac {\omega + 1}{p _ {\mathrm {a}}} + \frac {L}{\mu} + \frac {\omega \widehat {L}}{p _ {\mathrm {a}} \mu \sqrt {n}} + \frac {\widehat {L}}{p _ {\mathrm {a}} \mu \sqrt {n}}\right) +$$ + +communication rounds. The method DASHA from (Tyurin and Richtárik, 2023) has to run + +$$ +\widetilde {\mathcal {O}} \left(\omega + \frac {L}{\mu} + \frac {\omega \widehat {L}}{\mu \sqrt {n}}\right) +$$ + +communication rounds to get an $\varepsilon$ -solution. The difference is the same as in the general nonconvex case (see Section 6.1). Up to Lipschitz-constant factors, partial participation degrades the complexity by up to a $1 / p_{\mathrm{a}}$ factor. + +# F.2 Finite-Sum Setting + +Theorem 9. Suppose that Assumptions 1, 2, 3, 7, 4, 8, and 9 hold.
Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , probability $p_{page} = \frac{B}{m + B}$ , $b = \frac{p_{page}p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , + +$$ +\gamma \leq \min \left\{\left(L + \sqrt {\frac {2 0 0 \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {(1 - p _ {p a g e}) L _ {\mathrm {m a x}} ^ {2}}{B}\right) + \frac {4 8}{n p _ {\mathrm {a}} ^ {2} p _ {p a g e}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {p a g e}) L _ {\mathrm {m a x}} ^ {2}}{B}\right)\right) ^ {- 1}, \frac {a}{2 \mu}, \frac {b}{2 \mu} \right\}, +$$ + +and $h_i^0 = g_i^0 = \nabla f_i(x^0)$ for all $i \in [n]$ in Algorithm 1 (DASHA-PP-PAGE), then $\operatorname{E}\left[f(x^T)\right] - f^* \leq (1 - \gamma \mu)^T \Delta_0$ . + +The provided theorem states that to get an $\varepsilon$ -solution, DASHA-PP has to run + +$$ +\widetilde {\mathcal {O}} \left(\frac {\omega + 1}{p _ {\mathrm {a}}} + \frac {m}{p _ {\mathrm {a}} B} + \frac {L}{\mu} + \frac {\omega}{p _ {\mathrm {a}} \mu \sqrt {n}} \left(\widehat {L} + \frac {L _ {\operatorname* {m a x}}}{\sqrt {B}}\right) + \frac {\sqrt {m}}{p _ {\mathrm {a}} \mu \sqrt {n B}} \left(\widehat {L} + \frac {L _ {\operatorname* {m a x}}}{\sqrt {B}}\right)\right) +$$ + +communication rounds. The method DASHA-PAGE from (Tyurin and Richtárik, 2023) has to run + +$$ +\widetilde {\mathcal {O}} \left(\omega + \frac {m}{B} + \frac {L}{\mu} + \frac {\omega}{\mu \sqrt {n}} \left(\widehat {L} + \frac {L _ {\mathrm {m a x}}}{\sqrt {B}}\right) + \frac {\sqrt {m}}{\mu \sqrt {n B}} \left(\frac {L _ {\mathrm {m a x}}}{\sqrt {B}}\right)\right) +$$ + +communication rounds to get an $\varepsilon$ -solution. We can guarantee that partial participation degrades the complexity by at most a $1 / p_{\mathrm{a}}$ factor only if $B = \mathcal{O}\left(\frac{L_{\max}^2}{\hat{L}^2}\right)$ . The same conclusion holds in Section 6.2. + +# F.3 Stochastic Setting + +Theorem 10.
Suppose that Assumptions 1, 2, 3, 7, 5, 6, 8 and 9 hold. Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , $b \in \left(0, \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}} \right]$ , + +$$ +\gamma \leq \min \left\{\left(L + \sqrt {\frac {2 0 0 \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) + \frac {4 0}{n p _ {\mathrm {a}} b} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right)}\right) ^ {- 1}, \frac {a}{2 \mu}, \frac {b}{2 \mu} \right\}, +$$ + +and $h_i^0 = g_i^0$ for all $i \in [n]$ in Algorithm 1 (DASHA-PP-MVR), then + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {T}) - f ^ {*} \right] \\ \leq \left(1 - \gamma \mu\right) ^ {T} \left(\Delta_ {0} + \frac {2 \gamma}{b} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \left(\frac {4 0 \gamma b \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2}\right) \\ + \frac {1}{\mu} \left(\frac {1 0 0 b ^ {2} \omega (2 \omega + 1)}{p _ {\mathrm {a}} ^ {2}} + \frac {2 0 b}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{n B}.
\\ \end{array} +$$ + +The provided theorem states that, to obtain an $\varepsilon$ -solution, DASHA-PP-MVR has to run + +$$ +\widetilde{\mathcal{O}}\left(\frac{\omega + 1}{p_{\mathrm{a}}} + \underbrace{\frac{\omega}{p_{\mathrm{a}}}\sqrt{\frac{\sigma^{2}}{\mu n\varepsilon B}}}_{\mathcal{P}_{2}} + \frac{\sigma^{2}}{p_{\mathrm{a}}\mu n\varepsilon B} + \frac{L}{\mu} + \frac{\omega}{p_{\mathrm{a}}\mu\sqrt{n}}\left(\widehat{L} + \frac{L_{\sigma}}{\sqrt{B}}\right) + \underbrace{\frac{\sigma}{p_{\mathrm{a}}n\mu^{3/2}\sqrt{\varepsilon B}}\left(\widehat{L} + \frac{L_{\sigma}}{\sqrt{B}}\right)}_{\mathcal{P}_{1}}\right) \tag {31} +$$ + +communication rounds. We take $b = \Theta\left(\min\left\{\frac{p_{\mathrm{a}}}{\omega}\sqrt{\frac{\mu n\varepsilon B}{\sigma^2}}, \frac{p_{\mathrm{a}}\mu n\varepsilon B}{\sigma^2}\right\}\right) \geq$ + +$$ +\Theta\left(\min\left\{\frac{p_{\mathrm{a}}}{\omega^{2}}, \frac{p_{\mathrm{a}}\mu n\varepsilon B}{\sigma^{2}}\right\}\right). +$$ + +The method DASHA-SYNC-MVR from (Tyurin and Richtárik, 2023) has to run + +$$ +\widetilde{\mathcal{O}}\left(\omega + \frac{\sigma^{2}}{\mu n\varepsilon B} + \frac{L}{\mu} + \frac{\omega}{\mu\sqrt{n}}\left(\widehat{L} + \frac{L_{\sigma}}{\sqrt{B}}\right) + \frac{\sigma}{n\mu^{3/2}\sqrt{\varepsilon B}}\left(\frac{L_{\sigma}}{\sqrt{B}}\right)\right) \tag {32} +$$ + +communication rounds to obtain an $\varepsilon$ -solution$^9$. + +In the stochastic setting, the comparison is slightly more involved. As in the finite-sum setting, we have to take $B = \mathcal{O}\left(\frac{L_{\sigma}^{2}}{\widehat{L}^{2}}\right)$ to guarantee that the term $\mathcal{P}_1$ from (31) degrades by at most a factor of $1/p_{\mathrm{a}}$ . However, DASHA-PP-MVR also has the suboptimal term $\mathcal{P}_2$ .
This suboptimality is tightly connected with the suboptimality of $B_{\mathrm{init}}$ in the general nonconvex case, which we discuss in Section 6.3, and it also appears in the analysis of DASHA-MVR (Tyurin and Richtárik, 2023). Let us provide the counterpart of Corollary 4. The corollary reveals that we can escape the regimes in which $\mathcal{P}_2$ is the bottleneck by choosing the parameters of the compressors appropriately. + +Corollary 5. Suppose that the assumptions of Theorem 10 hold, the batch size $B \leq \min \left\{\frac{\sigma}{p_{\mathrm{a}}\sqrt{\mu\varepsilon n}}, \frac{L_{\sigma}^{2}}{\widehat{L}^{2}}\right\}$ , and we take RandK compressors with $K = \Theta\left(\frac{Bd\sqrt{\mu\varepsilon n}}{\sigma}\right)$ . Then the communication complexity equals + +$$ +\widetilde{\mathcal{O}}\left(\frac{d\sigma}{p_{\mathrm{a}}\sqrt{\mu\varepsilon n}} + \frac{dL_{\sigma}}{p_{\mathrm{a}}\mu\sqrt{n}}\right), +$$ + +and the expected number of stochastic gradient calculations per node equals + +$$ +\widetilde{\mathcal{O}}\left(\frac{\sigma^{2}}{p_{\mathrm{a}}\mu n\varepsilon} + \frac{\sigma L_{\sigma}}{p_{\mathrm{a}}n\mu^{3/2}\sqrt{\varepsilon}}\right). +$$ + +Up to Lipschitz constants, DASHA-PP-MVR has the state-of-the-art oracle complexity under the PL-condition (see (Li et al., 2021a)). Moreover, DASHA-PP-MVR attains the state-of-the-art communication complexity of DASHA for a small enough $\mu$ . + +# F.4 Proofs of Theorems + +The following proofs closely follow the proofs from Section E. One of the main changes is that, instead of Lemma 3, we use the following lemma. + +# F.4.1 Standard Lemma under Polyak-Łojasiewicz Condition + +Lemma 11.
Suppose that Assumptions 1 and 9 hold and + +$$ +\operatorname{E}\left[f(x^{t+1})\right] + \gamma\Psi^{t+1} \leq \operatorname{E}\left[f(x^{t})\right] - \frac{\gamma}{2}\operatorname{E}\left[\left\|\nabla f(x^{t})\right\|^{2}\right] + (1 - \gamma\mu)\gamma\Psi^{t} + \gamma C, +$$ + +where $\Psi^t$ is a sequence of numbers with $\Psi^t \geq 0$ for all $t \in [T]$ , the constant $C \geq 0$ , the constant $\mu > 0$ , and the constant $\gamma \in (0, 1/\mu)$ . Then + +$$ +\operatorname{E}\left[f\left(x^{T}\right) - f^{*}\right] \leq (1 - \gamma\mu)^{T}\left(\left(f\left(x^{0}\right) - f^{*}\right) + \gamma\Psi^{0}\right) + \frac{C}{\mu}. \tag {33} +$$ + +Proof. We subtract $f^{*}$ and use the PL-condition (30) to get + +$$ +\begin{array}{l} \operatorname{E}\left[f\left(x^{t+1}\right) - f^{*}\right] + \gamma\Psi^{t+1} \leq \operatorname{E}\left[f\left(x^{t}\right) - f^{*}\right] - \frac{\gamma}{2}\operatorname{E}\left[\left\|\nabla f\left(x^{t}\right)\right\|^{2}\right] + (1 - \gamma\mu)\gamma\Psi^{t} + \gamma C \\ \leq (1 - \gamma\mu)\mathrm{E}[f(x^{t}) - f^{*}] + (1 - \gamma\mu)\gamma\Psi^{t} + \gamma C \\ = (1 - \gamma\mu)(\mathrm{E}[f(x^{t}) - f^{*}] + \gamma\Psi^{t}) + \gamma C. \\ \end{array} +$$ + +Unrolling the inequality, we have + +$$ +\begin{array}{l} \mathrm{E}\left[f(x^{t+1}) - f^{*}\right] + \gamma\Psi^{t+1} \leq (1 - \gamma\mu)^{t+1}\left(\left(f(x^{0}) - f^{*}\right) + \gamma\Psi^{0}\right) + \gamma C\sum_{i=0}^{t}(1 - \gamma\mu)^{i} \\ \leq (1 - \gamma\mu)^{t+1}\left(\left(f(x^{0}) - f^{*}\right) + \gamma\Psi^{0}\right) + \frac{C}{\mu}. \\ \end{array} +$$ + +It is left to note that $\Psi^t \geq 0$ for all $t \in [T]$ . + +# F.4.2 Generic Lemma + +We now provide the counterpart of Lemma 6. + +Lemma 12.
Suppose that Assumptions 2, 7, 8 and 9 hold and let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , then + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {1 0 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right]. \\ \end{array} +$$ + +Proof. Let us fix some constants $\kappa, \eta \in [0, \infty)$ that we will define later. 
Using the same reasoning as in Lemma 6, we can get + +$$ +\begin{array}{l} \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] \\ + \kappa \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \eta \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \gamma \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} + \left(\gamma + \kappa (1 - a) ^ {2}\right) \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] \\ \left. \right. + \left(\frac {\kappa a ^ {2} ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} + \eta \left(\frac {a ^ {2} (2 \omega + 1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - a) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ + \left(\frac {2 \kappa \omega}{n p _ {\mathrm {a}}} + \frac {2 \eta \omega}{p _ {\mathrm {a}}}\right) \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right]. \\ \end{array} +$$ + +Let us take $\kappa = \frac{2\gamma}{a}$ . 
One can show that $\gamma + \kappa (1 - a)^2 \leq (1 - \frac{a}{2})\kappa$ , and thus + +$$ +\begin{array}{l} E [ f (x ^ {t + 1}) ] \\ + \frac {2 \gamma}{a} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \eta \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ \leq \mathrm {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \left(1 - \frac {a}{2}\right) \frac {2 \gamma}{a} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] \\ \left. \right. + \left(\frac {2 \gamma a ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} + \eta \left(\frac {a ^ {2} (2 \omega + 1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - a) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \left(\frac {4 \gamma \omega}{a n p _ {\mathrm {a}}} + \frac {2 \eta \omega}{p _ {\mathrm {a}}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| k _ {i} ^ {t + 1} \| ^ {2} \right]. \\ \end{array} +$$ + +Considering the choice of $a$ , one can show that $\left(\frac{a^2(2\omega + 1 - p_{\mathrm{a}})}{p_{\mathrm{a}}} + (1 - a)^2\right) \leq 1 - a$ . 
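Both "one can show" steps above admit short direct verifications; for completeness (these computations are not spelled out in the text), since $\kappa = \frac{2\gamma}{a}$ and $a \in (0, 1]$ , + +$$ +\left(1 - \frac{a}{2}\right)\kappa - \gamma - \kappa(1 - a)^{2} = \frac{2\gamma}{a}\left(1 - \frac{a}{2} - (1 - a)^{2}\right) - \gamma = \frac{2\gamma}{a}\left(\frac{3a}{2} - a^{2}\right) - \gamma = 2\gamma(1 - a) \geq 0, +$$ + +and, since $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ implies $\frac{a^{2}(2\omega + 1 - p_{\mathrm{a}})}{p_{\mathrm{a}}} = a(1 - a)$ , + +$$ +\frac{a^{2}(2\omega + 1 - p_{\mathrm{a}})}{p_{\mathrm{a}}} + (1 - a)^{2} = a(1 - a) + (1 - a)^{2} = 1 - a. +$$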
If we take $\eta = \frac{4\gamma((2\omega + 1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2}$ , then $\left(\frac{2\gamma a((2\omega + 1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2} + \eta \left(\frac{a^2(2\omega + 1 - p_{\mathrm{a}})}{p_{\mathrm{a}}} + (1 - a)^2\right)\right) \leq \left(1 - \frac{a}{2}\right)\eta$ and + +$$ +\begin{array}{l} E [ f (x ^ {t + 1}) ] \\ + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \left(1 - \frac {a}{2}\right) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \left(1 - \frac {a}{2}\right) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \left(\frac {2 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \omega}{n p _ {\mathrm {a}} ^ {3}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \left(1 - \frac {a}{2}\right) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} 
\mathrm{E}\left[\left\|g^{t} - h^{t}\right\|^{2}\right] + \left(1 - \frac{a}{2}\right)\frac{4\gamma((2\omega + 1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^{2}}\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|g_{i}^{t} - h_{i}^{t}\right\|^{2}\right] \\ + \frac{10\gamma(2\omega + 1)\omega}{np_{\mathrm{a}}^{2}}\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|k_{i}^{t+1}\right\|^{2}\right]. \\ \end{array} +$$ + +It is left to consider that $\gamma \leq \frac{a}{2\mu}$ , and therefore $1 - \frac{a}{2} \leq 1 - \gamma\mu$ . + +# F.4.3 Proof for DASHA-PP under PL-condition + +Theorem 8. Suppose that Assumptions 1, 2, 3, 7, 8 and 9 hold. Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , $b = \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , + +$$ +\gamma \leq \min \left\{\left(L + \sqrt{\frac{200\omega(2\omega + 1)}{np_{\mathrm{a}}^{2}} + \frac{48}{np_{\mathrm{a}}^{2}}\left(1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}}\right)}\,\widehat{L}\right)^{-1}, \frac{a}{4\mu}\right\}, +$$ + +and $h_i^0 = g_i^0 = \nabla f_i(x^0)$ for all $i \in [n]$ in Algorithm 1 (DASHA-PP), then $\operatorname{E}\left[f(x^T)\right] - f^* \leq (1 - \gamma\mu)^T\Delta_0$ . + +Proof. Let us fix constants $\nu, \rho \in [0, \infty)$ that we will define later.
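Before the main computation, a side check on the theorem's choice of $b$ (not spelled out in the text): since $p_{\mathrm{a}} \in (0, 1]$ gives $2 - p_{\mathrm{a}} \in [1, 2)$ , we have + +$$ +\frac{p_{\mathrm{a}}}{2} < \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}} = b \leq p_{\mathrm{a}}, +$$ + +which is exactly the bound $\frac{p_{\mathrm{a}}}{2} \leq b \leq p_{\mathrm{a}}$ invoked later in this proof.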
Considering Lemma 12, Lemma 7, and the law of total expectation, we obtain + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \gamma \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {1 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ 2 \widehat {L} ^ {2} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + 2 b ^ {2} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {2 b ^ {2} \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ 
{i} (x ^ {t}) \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \rho \mathrm {E} \left[ \frac {2 (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} \widehat {L} ^ {2} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \left(\frac {2 b ^ {2} (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +After rearranging the terms, we get + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {2 0 \gamma \omega (2 \omega + 1) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} - \nu \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} - \rho \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \operatorname {E} \left[ 
\left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \right] \\ + \left(\gamma + \nu (1 - b) ^ {2}\right) \operatorname {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \nu \frac {2 b ^ {2} (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 b ^ {2} (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +By taking $\nu = \frac{2\gamma}{b}$ , one can show that $(\gamma + \nu (1 - b)^2) \leq \left(1 - \frac{b}{2}\right)\nu$ , and + +$$ +\operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] +$$ + +$$ +\begin{array}{l} + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left| \left| \nabla f (x ^ {t}) \right| \right| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - 
\frac {L}{2} - \frac {2 0 \gamma \omega (2 \omega + 1) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} - \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}} - \rho \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \operatorname {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ \left. \right. + \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma b \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 b ^ {2} (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Note that $b = \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , thus + +$$ +\begin{array}{l} \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma b \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 b ^ {2} (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \\ \leq \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma b (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} + \rho (1 - b)\right). 
\\ \end{array} +$$ + +And if we take $\rho = \frac{40b\gamma\omega(2\omega + 1)}{np_{\mathrm{a}}^2} +\frac{8\gamma(p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2}$ , then + +$$ +\left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma b \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} + \rho (1 - b)\right) \leq \left(1 - \frac {b}{2}\right) \rho , +$$ + +and + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {2 0 \gamma \omega (2 \omega + 1) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} - \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ 
{2}}{b n p _ {\mathrm {a}} ^ {2}} \right. \\ \left. - \frac {8 0 b \gamma \omega (2 \omega + 1) (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {3}} - \frac {1 6 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {3}}\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ \left. \right. + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[\left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right)\left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Due to $\frac{p_{\mathrm{a}}}{2} \leq b \leq p_{\mathrm{a}}$ , we have + +$$ +\begin{array}{l} \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma 
\mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {1 0 0 \gamma \omega (2 \omega + 1) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} - \frac {2 4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {3}}\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ \left. \right. + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[\left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right)\left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. 
\\ \end{array} +$$ + +Using Lemma 4 and the assumption about $\gamma$ , we get + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left| \left| \nabla f (x ^ {t}) \right| \right| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ \left. \right. + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[\left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right)\left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. 
\\ \end{array} +$$ + +Note that $\gamma \leq \frac{a}{4\mu} \leq \frac{p_{\mathrm{a}}}{4\mu} \leq \frac{b}{2\mu}$ , thus $1 - \frac{b}{2} \leq 1 - \gamma \mu$ and + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + (1 - \gamma \mu) \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. 
\\ \end{array} +$$ + +In view of Lemma 11 with + +$$ +\begin{array}{l} \Psi^{t} = \frac{2(2\omega + 1)}{p_{\mathrm{a}}}\mathrm{E}\left[\left\|g^{t} - h^{t}\right\|^{2}\right] + \frac{4((2\omega + 1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^{2}}\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|g_{i}^{t} - h_{i}^{t}\right\|^{2}\right] \\ + \quad \frac{2}{b}\mathrm{E}\left[\left\|h^{t} - \nabla f(x^{t})\right\|^{2}\right] + \left(\frac{40b\omega(2\omega + 1)}{np_{\mathrm{a}}^{2}} + \frac{8\left(p_{\mathrm{a}} - p_{\mathrm{aa}}\right)}{np_{\mathrm{a}}^{2}}\right)\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|h_{i}^{t} - \nabla f_{i}(x^{t})\right\|^{2}\right], \\ \end{array} +$$ + +we can conclude the proof of the theorem. + +# F.4.4 Proof for DASHA-PP-PAGE under PL-condition + +Theorem 9. Suppose that Assumptions 1, 2, 3, 4, 7, 8, and 9 hold. Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , probability + +$$ +p_{page} = \frac{B}{m + B}, \quad b = \frac{p_{page}p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}, +$$ + +$$ +\gamma \leq \min \left\{\left(L + \sqrt{\frac{200\omega(2\omega + 1)}{np_{\mathrm{a}}^{2}}\left(\widehat{L}^{2} + \frac{(1 - p_{page})L_{\max}^{2}}{B}\right) + \frac{48}{np_{\mathrm{a}}^{2}p_{page}}\left(\left(1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}}\right)\widehat{L}^{2} + \frac{(1 - p_{page})L_{\max}^{2}}{B}\right)}\right)^{-1}, \frac{a}{2\mu}, \frac{b}{2\mu}\right\}, +$$ + +and $h_i^0 = g_i^0 = \nabla f_i(x^0)$ for all $i \in [n]$ in Algorithm 1 (DASHA-PP-PAGE), then $\operatorname{E}\left[f(x^T)\right] - f^* \leq (1 - \gamma\mu)^T\Delta_0$ . + +Proof. Let us fix constants $\nu, \rho \in [0, \infty)$ that we will define later.
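As a side observation (not made explicitly in the text), the choice of $b$ in Theorem 9 is the choice from Theorem 8 scaled by the PAGE probability $p_{page} = \frac{B}{m + B}$ : + +$$ +\frac{b}{p_{page}} = \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}} \in \left(\frac{p_{\mathrm{a}}}{2}, p_{\mathrm{a}}\right], +$$ + +so the ratio $\frac{b}{p_{page}}$ , which plays the role of $b$ in the PAGE-type recursions of the proof, obeys the same bounds as in Theorem 8.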
Considering Lemma 12, Lemma 8, and the law of total expectation, we obtain + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {1 0 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| k _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} 
\right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ + \frac {1 0 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \left(2 \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max} ^ {2}}{B}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \frac {2 b ^ {2}}{p _ {\text {p a g e}}} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left(\left(\frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{n p _ {\mathrm {a}} B}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \left(p _ {\text {p a g e}} \left(1 - \frac {b}{p _ {\text {p a g e}}}\right) ^ {2} + (1 - p _ {\text {p a g e}})\right) \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}\right) \\ + \rho \mathrm {E} \left(\left(\frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{p _ {\mathrm {a}} B}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. 
+ \left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {p a g e}}} + p _ {\mathrm {p a g e}} \left(1 - \frac {b}{p _ {\mathrm {p a g e}}}\right) ^ {2} + (1 - p _ {\mathrm {p a g e}})\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}\right). \\ \end{array} +$$ + +After rearranging the terms, we get + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {1 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(2 \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max} ^ {2}}{B}\right) \right. \\ \left. 
- \nu \left(\frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \frac {(1 - p _ {\mathrm {p a g e}}) L _ {\max} ^ {2}}{n p _ {\mathrm {a}} B}\right) - \rho \left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) \widehat {L} ^ {2}}{p _ {\mathrm {a}}} + \frac {(1 - p _ {\mathrm {p a g e}}) L _ {\max} ^ {2}}{p _ {\mathrm {a}} B}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ \left. + \left(\gamma + \nu \left(p _ {\text {p a g e}} \left(1 - \frac {b}{p _ {\text {p a g e}}}\right) ^ {2} + (1 - p _ {\text {p a g e}})\right)\right) E \left[ \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2} \right] \right. \\ + \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} + \frac {2 \nu \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} \right. \\ \left. + \rho \left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {p a g e}}} + p _ {\mathrm {p a g e}} \left(1 - \frac {b}{p _ {\mathrm {p a g e}}}\right) ^ {2} + (1 - p _ {\mathrm {p a g e}})\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left| \left| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right| \right| ^ {2} \right]. \\ \end{array} +$$ + +Due to $b = \frac{p_{\mathrm{page}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}} \leq p_{\mathrm{page}}$ , one can show that $\left(p_{\mathrm{page}} \left(1 - \frac{b}{p_{\mathrm{page}}}\right)^2 + (1 - p_{\mathrm{page}})\right) \leq 1 - b$ . Thus, if we take $\nu = \frac{2\gamma}{b}$ , then + +$$ +\left. 
\right.\left(\gamma + \nu\left(p_{\mathrm{page}}\left(1 - \frac{b}{p_{\mathrm{page}}}\right)^{2} + (1 - p_{\mathrm{page}})\right)\right) \leq \gamma + \nu(1 - b) = \left(1 - \frac{b}{2}\right)\nu,
+$$
+
+therefore
+
+$$
+\begin{array}{l} \operatorname{E}\left[f(x^{t+1})\right] + \frac{2\gamma(2\omega+1)}{p_{\mathrm{a}}}\mathrm{E}\left[\left\|g^{t+1} - h^{t+1}\right\|^{2}\right] + \frac{4\gamma((2\omega+1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^{2}}\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|g_{i}^{t+1} - h_{i}^{t+1}\right\|^{2}\right] \\ + \frac{2\gamma}{b}\mathrm{E}\left[\left\|h^{t+1} - \nabla f(x^{t+1})\right\|^{2}\right] + \rho\,\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|h_{i}^{t+1} - \nabla f_{i}(x^{t+1})\right\|^{2}\right] \\ \leq \operatorname{E}[f(x^{t})] - \frac{\gamma}{2}\operatorname{E}\left[\left\|\nabla f(x^{t})\right\|^{2}\right] \\ + (1 - \gamma\mu)\frac{2\gamma(2\omega+1)}{p_{\mathrm{a}}}\mathrm{E}\left[\left\|g^{t} - h^{t}\right\|^{2}\right] + (1 - \gamma\mu)\frac{4\gamma((2\omega+1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^{2}}\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|g_{i}^{t} - h_{i}^{t}\right\|^{2}\right] \\ - \left(\frac{1}{2\gamma} - \frac{L}{2} - \frac{10\gamma\omega(2\omega+1)}{np_{\mathrm{a}}^{2}}\left(2\widehat{L}^{2} + \frac{(1 - p_{\mathrm{page}})L_{\max}^{2}}{B}\right)\right. \\ \left. - \frac{2\gamma}{bnp_{\mathrm{a}}}\left(2\left(1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}}\right)\widehat{L}^{2} + \frac{(1 - p_{\mathrm{page}})L_{\max}^{2}}{B}\right) - \rho\left(\frac{2(1 - p_{\mathrm{a}})\widehat{L}^{2}}{p_{\mathrm{a}}} + \frac{(1 - p_{\mathrm{page}})L_{\max}^{2}}{p_{\mathrm{a}}B}\right)\right)\mathrm{E}\left[\left\|x^{t+1} - x^{t}\right\|^{2}\right] \\ + \left(1 - \frac{b}{2}\right)\frac{2\gamma}{b}\mathrm{E}\left[\left\|h^{t} - \nabla f(x^{t})\right\|^{2}\right] \\ + \left(\frac{20b^{2}\gamma\omega(2\omega+1)}{np_{\mathrm{a}}^{2}p_{\mathrm{page}}} + \frac{4\gamma(p_{\mathrm{a}} - p_{\mathrm{aa}})b}{np_{\mathrm{a}}^{2}p_{\mathrm{page}}}\right. \\ \left. + \rho\left(\frac{2(1 - p_{\mathrm{a}})b^{2}}{p_{\mathrm{a}}p_{\mathrm{page}}} + p_{\mathrm{page}}\left(1 - \frac{b}{p_{\mathrm{page}}}\right)^{2} + (1 - p_{\mathrm{page}})\right)\right)\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|h_{i}^{t} - \nabla f_{i}(x^{t})\right\|^{2}\right]. \\ \end{array}
+$$
+
+Next, with the choice of $b = \frac{p_{\mathrm{page}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$, we ensure that
+
+$$
+\left(\frac{2(1 - p_{\mathrm{a}})b^{2}}{p_{\mathrm{a}}p_{\mathrm{page}}} + p_{\mathrm{page}}\left(1 - \frac{b}{p_{\mathrm{page}}}\right)^{2} + (1 - p_{\mathrm{page}})\right) \leq 1 - b.
+$$ + +If we take $\rho = \frac{40b\gamma\omega(2\omega + 1)}{np_{\mathrm{a}}^2p_{\mathrm{page}}} +\frac{8\gamma(p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2p_{\mathrm{page}}}$ , then + +$$ +\left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} + \frac {4 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} + \rho \left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {p a g e}}} + p _ {\mathrm {p a g e}} \left(1 - \frac {b}{p _ {\mathrm {p a g e}}}\right) ^ {2} + (1 - p _ {\mathrm {p a g e}})\right)\right) \leq \left(1 - \frac {b}{2}\right) \rho , +$$ + +therefore + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left\| \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ 
\frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {1 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(2 \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max} ^ {2}}{B}\right) \right. \\ - \frac {2 \gamma}{b n p _ {\mathrm {a}}} \left(2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right) \\ \left. - \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3} p _ {\text {p a g e}}} + \frac {8 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}}\right) \left(2 \left(1 - p _ {\mathrm {a}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right) \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Let us simplify the inequality. 
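+
+For completeness, here is a one-line verification of the scalar bound invoked above: since $b = \frac{p_{\mathrm{page}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}} \leq p_{\mathrm{page}}$, we have $\frac{b^{2}}{p_{\mathrm{page}}} \leq b$, and hence
+
+$$
+p_{\mathrm{page}}\left(1 - \frac{b}{p_{\mathrm{page}}}\right)^{2} + (1 - p_{\mathrm{page}}) = 1 - 2b + \frac{b^{2}}{p_{\mathrm{page}}} \leq 1 - 2b + b = 1 - b.
+$$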
First, due to $b \geq \frac{p_{\mathrm{page}} p_{\mathrm{a}}}{2}$ , we have + +$$ +\frac {2 \gamma}{b n p _ {\mathrm {a}}} \left(2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\mathrm {p a g e}}) L _ {\mathrm {m a x}} ^ {2}}{B}\right) \leq \frac {8 \gamma}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\mathrm {p a g e}}) L _ {\mathrm {m a x}} ^ {2}}{B}\right). +$$ + +Second, due to $b \leq p_{\mathrm{a}}p_{\mathrm{page}}$ and $p_{\mathrm{aa}} \leq p_{\mathrm{a}}^2$ , we get + +$$ +\begin{array}{l} \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3} p _ {\text {p a g e}}} + \frac {8 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}}\right) \left(2 \left(1 - p _ {\mathrm {a}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right) \\ \leq \left(\frac {4 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}}\right) \left(2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\mathrm {p a g e}}) L _ {\max } ^ {2}}{B}\right) \\ \leq \frac {8 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\mathrm {p a g e}}) L _ {\mathrm {m a x}} ^ {2}}{B}\right) \\ + \frac {1 6 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\mathrm {p a g e}}) L _ {\max } ^ {2}}{B}\right) \\ \leq \frac {8 0 \gamma \omega (2 
\omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right) \\ + \frac {1 6 \gamma}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max} ^ {2}}{B}\right). \\ \end{array} +$$ + +Combining all bounds together, we obtain the following inequality: + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left\| \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {1 0 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {(1 - p _ 
{\text {p a g e}}) L _ {\max} ^ {2}}{B}\right) \right. \\ \left. - \frac {2 4 \gamma}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {(1 - p _ {\text {p a g e}}) L _ {\max } ^ {2}}{B}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right) \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Using Lemma 4 and the assumption about $\gamma$ , we get + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left| \left| \nabla f (x ^ {t}) \right| \right| ^ {2} 
\right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right) \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {p a g e}}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Note that $\gamma \leq \frac{b}{2\mu}$ , thus $1 - \frac{b}{2} \leq 1 - \gamma \mu$ and + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\text {p a g e}}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ 
\|\nabla f(x^{t})\|^{2}\right] \\ + (1 - \gamma\mu)\frac{2\gamma(2\omega+1)}{p_{\mathrm{a}}}\mathrm{E}\left[\|g^{t} - h^{t}\|^{2}\right] + (1 - \gamma\mu)\frac{4\gamma((2\omega+1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^{2}}\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\|g_{i}^{t} - h_{i}^{t}\|^{2}\right] \\ + (1 - \gamma\mu)\frac{2\gamma}{b}\mathrm{E}\left[\left\|h^{t} - \nabla f(x^{t})\right\|^{2}\right] + (1 - \gamma\mu)\left(\frac{40b\gamma\omega(2\omega+1)}{np_{\mathrm{a}}^{2}p_{\mathrm{page}}} + \frac{8\gamma(p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^{2}p_{\mathrm{page}}}\right)\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|h_{i}^{t} - \nabla f_{i}(x^{t})\right\|^{2}\right]. \\ \end{array}
+$$
+
+It remains to apply Lemma 11 with
+
+$$
+\begin{array}{l} \Psi^{t} = \frac{2(2\omega+1)}{p_{\mathrm{a}}}\mathrm{E}\left[\left\|g^{t} - h^{t}\right\|^{2}\right] + \frac{4((2\omega+1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^{2}}\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|g_{i}^{t} - h_{i}^{t}\right\|^{2}\right] \\ + \frac{2}{b}\mathrm{E}\left[\left\|h^{t} - \nabla f(x^{t})\right\|^{2}\right] + \left(\frac{40b\omega(2\omega+1)}{np_{\mathrm{a}}^{2}p_{\mathrm{page}}} + \frac{8(p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^{2}p_{\mathrm{page}}}\right)\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|h_{i}^{t} - \nabla f_{i}(x^{t})\right\|^{2}\right] \\ \end{array}
+$$
+
+to conclude the proof.
+
+# F.4.5 Proof for DASHA-PP-MVR under PL-condition
+
+Theorem 10. Suppose that Assumptions 1, 2, 3, 7, 5, 6, 8 and 9 hold.
Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , + +$$ +b \in \left(0, \frac {p _ {\mathrm {a}}}{2 - p _ {\mathrm {a}}} \right], +$$ + +$$ +\gamma \leq \min \left\{\left(L + \sqrt {\frac {2 0 0 \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) + \frac {4 0}{n p _ {\mathrm {a}} b} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right)}\right) ^ {- 1}, \frac {a}{2 \mu}, \frac {b}{2 \mu} \right\}, +$$ + +and $h_i^0 = g_i^0$ for all $i \in [n]$ in Algorithm 1 (DASHA-PP-MVR), then + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {T}) - f ^ {*} \right] \\ \leq \left(1 - \gamma \mu\right) ^ {T} \left(\Delta_ {0} + \frac {2 \gamma}{b} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \left(\frac {4 0 \gamma b \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2}\right) \\ + \frac {1}{\mu} \left(\frac {1 0 0 b ^ {2} \omega (2 \omega + 1)}{p _ {\mathrm {a}} ^ {2}} + \frac {2 0 b}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{n B}. 
\\ \end{array}
+$$
+
+Proof. Let us fix constants $\nu, \rho \in [0, \infty)$ that we will define later. Recall that, by Lemma 12 and the law of total expectation,
+
+$$
+\begin{array}{l} \mathrm{E}\left[f(x^{t+1})\right] + \frac{2\gamma(2\omega+1)}{p_{\mathrm{a}}}\mathrm{E}\left[\left\|g^{t+1} - h^{t+1}\right\|^{2}\right] + \frac{4\gamma((2\omega+1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^{2}}\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|g_{i}^{t+1} - h_{i}^{t+1}\right\|^{2}\right] \\ \leq \operatorname{E}\left[f(x^{t}) - \frac{\gamma}{2}\left\|\nabla f(x^{t})\right\|^{2} - \left(\frac{1}{2\gamma} - \frac{L}{2}\right)\left\|x^{t+1} - x^{t}\right\|^{2} + \gamma\left\|h^{t} - \nabla f(x^{t})\right\|^{2}\right] \\ + (1 - \gamma\mu)\frac{2\gamma(2\omega+1)}{p_{\mathrm{a}}}\mathrm{E}\left[\|g^{t} - h^{t}\|^{2}\right] + (1 - \gamma\mu)\frac{4\gamma((2\omega+1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^{2}}\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\|g_{i}^{t} - h_{i}^{t}\|^{2}\right] \\ + \frac{10\gamma(2\omega+1)\omega}{np_{\mathrm{a}}^{2}}\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left\|k_{i}^{t+1}\right\|^{2}\right]. \\ \end{array}
+$$
+
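+
+Below we will also use the following elementary estimate: for $\nu = \frac{2\gamma}{b}$ (so that $\gamma = \frac{b\nu}{2}$) and any $b \in (0, 1]$,
+
+$$
+\gamma + \nu(1 - b)^{2} \leq \gamma + \nu(1 - b) = \frac{b\nu}{2} + \nu(1 - b) = \left(1 - \frac{b}{2}\right)\nu.
+$$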
Considering Lemma 12, Lemma 10, and the law of total expectation, we obtain + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ + \frac {1 0 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| k _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ 
{\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} + \frac {1 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {2 b ^ {2} \sigma^ {2}}{B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + 2 b ^ {2} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left(\frac {2 b ^ {2} \sigma^ {2}}{n p _ {\mathrm {a}} B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + (1 - b) ^ {2} \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}\right) \\ + \rho \mathrm {E} \left(\frac {2 b ^ {2} \sigma^ {2}}{p _ {\mathrm {a}} B} + \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. + \left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}\right). 
\\ \end{array} +$$ + +After rearranging the terms, we get + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {1 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. \\ \left. - \nu \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(\gamma + \nu (1 - b) ^ {2}\right) \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ \left. 
\right. + \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 \nu \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \| ^ {2} \right] \\ + \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \nu \frac {2 b ^ {2}}{n p _ {\mathrm {a}}} + \rho \frac {2 b ^ {2}}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. \\ \end{array} +$$ + +By taking $\nu = \frac{2\gamma}{b}$ , one can show that $\left(\gamma + \nu (1 - b)^2\right) \leq \left(1 - \frac{b}{2}\right)\nu$ , and + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac 
{L}{2} - \frac {1 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. \\ \end{array} +$$ + +$$ +\begin{array}{l} - \frac {2 \gamma}{b} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ \left. \right. + \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma b}{n p _ {\mathrm {a}}} + \rho \frac {2 b ^ {2}}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. \\ \end{array} +$$ + +Note that $b \leq \frac{p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , thus + +$$ +\begin{array}{l} \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2}} + \rho \left(\frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {a}}} + (1 - b) ^ {2}\right)\right) \\ \leq \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2}} + \rho (1 - b)\right). 
\\ \end{array} +$$ + +And if we take $\rho = \frac{40b\gamma\omega(2\omega + 1)}{np_{\mathrm{a}}^2} +\frac{8\gamma(p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2}$ , then + +$$ +\left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2}} + \rho (1 - b)\right) \leq \rho , +$$ + +and + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {1 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. 
\\ - \frac {2 \gamma}{n p _ {\mathrm {a}} b} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right) \\ \left. - \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3}} + \frac {8 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ \left. \right. + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[\left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right)\left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma b}{n p _ {\mathrm {a}}} + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \frac {2 b ^ {2}}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. \\ \end{array} +$$ + +Let us simplify the inequality. 
First, due to $b \leq p_{\mathrm{a}}$ and $(1 - p_{\mathrm{a}}) \leq \left(1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}} \right)$ , we have + +$$ +\left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3}} + \frac {8 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}\right) +$$ + +$$ +\begin{array}{l} = \frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}\right) \\ + \frac {8 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}\right) \\ \leq \frac {4 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \\ + \frac {8 \gamma}{n p _ {\mathrm {a}} b} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right), \\ \end{array} +$$ + +therefore + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) 
\mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {5 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \widehat {L} ^ {2}\right) \right. \\ \left. - \frac {1 0 \gamma}{n p _ {\mathrm {a}} b} \left(\frac {2 (1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + 2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ \left. \right. 
+ \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[\left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right)\left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma b}{n p _ {\mathrm {a}}} + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \frac {2 b ^ {2}}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B} \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {1 0 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. - \frac {2 0 \gamma}{n p _ {\mathrm {a}} b} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ \left. \right. 
+ \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[\left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right)\left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {2 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {4 \gamma b}{n p _ {\mathrm {a}}} + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \frac {2 b ^ {2}}{p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. \\ \end{array} +$$ + +Also, we can simplify the last term: + +$$ +\begin{array}{l} \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \frac {2 b ^ {2}}{p _ {\mathrm {a}}} \\ = \frac {8 0 b ^ {3} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {3}} + \frac {1 6 b ^ {2} \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} ^ {2}} \\ \end{array} +$$ + +$$ +\leq \frac {8 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {1 6 b \gamma}{n p _ {\mathrm {a}}}, +$$ + +thus + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm 
{E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {1 0 0 \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. 
- \frac {2 0 \gamma}{n p _ {\mathrm {a}} b} \left(\frac {(1 - b) ^ {2} L _ {\sigma} ^ {2}}{B} + \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right) \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {1 0 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 0 \gamma b}{n p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. \\ \end{array} +$$ + +Using Lemma 4 and the assumption about $\gamma$ , we get + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) 
\frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right) \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {1 0 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 0 \gamma b}{n p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. \\ \end{array} +$$ + +Note that $\gamma \leq \frac{b}{2\mu}$ , thus $1 - \frac{b}{2} \leq 1 - \gamma \mu$ and + +$$ +\begin{array}{l} \operatorname {E} \left[ f (x ^ {t + 1}) \right] + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq 
\operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {4 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \left. \right. + \left(1 - \gamma \mu\right) \frac {2 \gamma}{b} \mathrm {E} \left[\left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + (1 - \gamma \mu) \left(\frac {4 0 b \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {1 0 0 b ^ {2} \gamma \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {2 0 \gamma b}{n p _ {\mathrm {a}}}\right) \frac {\sigma^ {2}}{B}. 
\\ \end{array} +$$ + +It is left to apply Lemma 11 with + +$$ +\begin{array}{l} \Psi^ {t} = \frac {2 (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {4 ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \quad \frac {2}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(\frac {4 0 b \omega (2 \omega + 1)}{n p _ {\mathrm {a}} ^ {2}} + \frac {8 (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}}\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ \end{array} +$$ + +and $C = \left(\frac{100b^2\omega(2\omega + 1)}{p_{\mathrm{a}}^2} +\frac{20b}{p_{\mathrm{a}}}\right)\frac{\sigma^2}{nB}$ to conclude the proof. + +![](images/d4c6953915a506ef8f066036c9af8b53ae97f693a2e4afb5e717bc2a110d1335.jpg) + +Corollary 5. Suppose that assumptions of Theorem 10 hold, batch size $B \leq \min \left\{\frac{\sigma}{p_{\mathrm{a}}\sqrt{\mu\varepsilon n}}, \frac{L_{\sigma}^{2}}{\hat{L}^{2}}\right\}$ , we take RandK compressors with $K = \Theta\left(\frac{Bd\sqrt{\mu\varepsilon n}}{\sigma}\right)$ . Then the communication complexity equals + +$$ +\widetilde {\mathcal {O}} \left(\frac {d \sigma}{p _ {\mathrm {a}} \sqrt {\mu \varepsilon n}} + \frac {d L _ {\sigma}}{p _ {\mathrm {a}} \mu \sqrt {n}}\right), +$$ + +and the expected number of stochastic gradient calculations per node equals + +$$ +\widetilde {\mathcal {O}} \left(\frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon} + \frac {\sigma L _ {\sigma}}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon}}\right). +$$ + +Proof. 
In view of Theorem 10, DASHA-PP has to run + +$$ +\widetilde {\mathcal {O}} \left(\frac {\omega + 1}{p _ {\mathrm {a}}} + \frac {\omega}{p _ {\mathrm {a}}} \sqrt {\frac {\sigma^ {2}}{\mu n \varepsilon B}} + \frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon B} + \frac {L}{\mu} + \frac {\omega}{p _ {\mathrm {a}} \mu \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {\sigma}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon B}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right)\right) +$$ + +communication rounds in the stochastic setting to get an $\varepsilon$ -solution. Note that $K = \mathcal{O}\left(\frac{d}{p_{\mathrm{a}}\sqrt{n}}\right)$ . Moreover, we can skip the initialization procedure and initialize $h_i^0$ and $g_i^0$ , for instance, with zeros because the initialization error is under a logarithm. Considering Theorem 6, the communication complexity equals + +$$ +\begin{array}{l} \widetilde {\mathcal {O}} \left(K \frac {\omega + 1}{p _ {\mathrm {a}}} + K \frac {\omega}{p _ {\mathrm {a}}} \sqrt {\frac {\sigma^ {2}}{\mu n \varepsilon B}} + K \frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon B} + K \frac {L}{\mu} + K \frac {\omega}{p _ {\mathrm {a}} \mu \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + K \frac {\sigma}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon B}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right)\right) \\ = \widetilde {\mathcal {O}} \left(K \frac {\omega + 1}{p _ {\mathrm {a}}} + K \frac {\omega}{p _ {\mathrm {a}}} \sqrt {\frac {\sigma^ {2}}{\mu n \varepsilon B}} + K \frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon B} + K \frac {L}{\mu} + K \frac {\omega}{p _ {\mathrm {a}} \mu \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + K \frac {\sigma L _ {\sigma}}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon} B}\right) \\ = \widetilde {\mathcal {O}} \left(\frac {d}{p _ {\mathrm {a}}} + \frac {d}{p _ {\mathrm {a}}} 
\sqrt {\frac {\sigma^ {2}}{\mu n \varepsilon B}} + \frac {K \sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon B} + \frac {d L}{p _ {\mathrm {a}} \mu \sqrt {n}} + \frac {d}{p _ {\mathrm {a}} \mu \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {K \sigma L _ {\sigma}}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon} B}\right) \\ = \widetilde {\mathcal {O}} \left(\frac {d}{p _ {\mathrm {a}}} + \frac {d \sigma}{p _ {\mathrm {a}} \sqrt {\mu n \varepsilon B}} + \frac {d \sigma}{p _ {\mathrm {a}} \sqrt {\mu \varepsilon n}} + \frac {d L}{p _ {\mathrm {a}} \mu \sqrt {n}} + \frac {d}{p _ {\mathrm {a}} \mu \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {d L _ {\sigma}}{p _ {\mathrm {a}} \mu \sqrt {n}}\right) \\ = \widetilde {\mathcal {O}} \left(\frac {d \sigma}{p _ {\mathrm {a}} \sqrt {\mu \varepsilon n}} + \frac {d L _ {\sigma}}{p _ {\mathrm {a}} \mu \sqrt {n}}\right). \\ \end{array} +$$ + +The expected number of stochastic gradient calculations per node equals + +$$ +\widetilde {\mathcal {O}} \left(B \frac {\omega + 1}{p _ {\mathrm {a}}} + B \frac {\omega}{p _ {\mathrm {a}}} \sqrt {\frac {\sigma^ {2}}{\mu n \varepsilon B}} + B \frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon B} + B \frac {L}{\mu} + B \frac {\omega}{p _ {\mathrm {a}} \mu \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + B \frac {\sigma}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon B}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right)\right) +$$ + +$$ +\begin{array}{l} = \widetilde {\mathcal {O}} \left(B \frac {\omega + 1}{p _ {\mathrm {a}}} + B \frac {\omega}{p _ {\mathrm {a}}} \sqrt {\frac {\sigma^ {2}}{\mu n \varepsilon B}} + B \frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon B} + B \frac {L}{\mu} + B \frac {\omega}{p _ {\mathrm {a}} \mu \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + B \frac {\sigma}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon B}} 
\left(\frac {L _ {\sigma}}{\sqrt {B}}\right)\right) \\ = \widetilde {\mathcal {O}} \left(\frac {B d}{K p _ {\mathrm {a}}} + \frac {B d}{K p _ {\mathrm {a}}} \sqrt {\frac {\sigma^ {2}}{\mu n \varepsilon B}} + \frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon} + B \frac {L}{\mu} + \frac {B d}{K p _ {\mathrm {a}} \mu \sqrt {n}} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {\sigma L _ {\sigma}}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon}}\right) \\ = \widetilde {\mathcal {O}} \left(\frac {\sigma}{p _ {\mathrm {a}} \sqrt {\mu \varepsilon n}} + \frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu \varepsilon n \sqrt {B}} + \frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon} + \frac {\sigma L}{p _ {\mathrm {a}} \mu^ {3 / 2} \sqrt {\varepsilon} n} + \frac {\sigma}{p _ {\mathrm {a}} \mu^ {3 / 2} \sqrt {\varepsilon} n} \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {\sigma L _ {\sigma}}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon}}\right) \\ = \widetilde {\mathcal {O}} \left(\frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon} + \frac {\sigma L _ {\sigma}}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon}}\right). \\ \end{array} +$$ + +![](images/2b1b00066a369ae02f5bcae6d80f1fd5170da4d10c38649cd48ff6d8664f4e18.jpg) + +By analogy to (Tyurin and Richtárik, 2023), we provide a "synchronized" version of the algorithm. With a small probability, participating nodes calculate and send a mega batch without compression. This helps us to resolve the suboptimality of DASHA-PP-MVR w.r.t. $\omega$ . Note that this suboptimality is not critical: we show in Corollary 4 that DASHA-PP-MVR can attain the optimal oracle complexity and SOTA communication complexity with particular choices of the compressor parameters. 
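As an illustration only (not the authors' code: all names are ours, and exact full gradients stand in for the minibatch stochastic gradients), the mega-batch switch just described can be sketched in NumPy:

```python
import numpy as np

def rand_k(x, k, rng):
    """RandK compressor: keep k random coordinates, scale by d/k (unbiased)."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out

def worker_step(grad_new, grad_old, h_i, g_i, *, a, b, p_a, p_mega, k, rng):
    """One participating worker's update with the mega-batch switch (sketch).

    grad_new / grad_old play the role of the gradient estimates at x^{t+1}
    and x^t; with probability p_mega the correction toward h_i is sent
    uncompressed, otherwise the message is RandK-compressed.
    """
    if rng.random() < p_mega:
        # rare "synchronized" round: uncompressed mega-batch message
        k_vec = grad_new - grad_old - (b / p_mega) * (h_i - grad_old)
        m_i = k_vec / p_a - (a / p_a) * (g_i - h_i)
    else:
        # usual round: compressed gradient-difference message
        k_vec = grad_new - grad_old
        m_i = rand_k(k_vec / p_a - (a / p_a) * (g_i - h_i), k, rng)
    h_new = h_i + k_vec / p_a  # local state updates
    g_new = g_i + m_i
    return m_i, h_new, g_new
```

Here `m_i` is what the worker sends; the server then aggregates $g^{t+1} = g^t + \frac{1}{n}\sum_{i=1}^{n} m_i^{t+1}$, mirroring the structure of Algorithm 8 below.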
+ +Algorithm 8 DASHA-PP-SYNC-MVR +1: Input: starting point $x^0\in \mathbb{R}^d$ , stepsize $\gamma >0$ , momentum $a\in (0,1]$ , momentum $b\in$ +(0,1], probability $p_{\mathrm{mega}}\in (0,1]$ , batch size $B^{\prime}$ and $B$ , probability $p_a\in (0,1]$ that a node is +participating(a), number of iterations $T\geq 1$ +2: Initialize $g_i^0,h_i^0$ on the nodes and $g^0 = \frac{1}{n}\sum_{i = 1}^{n}g_i^0$ on the server +3: for $t = 0,1,\ldots ,T - 1$ do +4: $x^{t + 1} = x^{t} - \gamma g^{t}$ +5: $c^{t + 1} = \left\{ \begin{array}{ll}1, & \text{with probability } p_{\mathrm{mega}},\\ 0, & \text{with probability } 1 - p_{\mathrm{mega}} \end{array} \right.$ +6: Broadcast $x^{t + 1},x^{t}$ to all participating(a) nodes +7: for $i = 1,\dots ,n$ in parallel do +8: if ith node is participating(a) then +9: if $c^{t + 1} = 1$ then +10: Generate i.i.d. samples $\{\xi_{ik}^{t + 1}\}_{k = 1}^{B'}$ of size $B^{\prime}$ from $\mathcal{D}_i$ +11: $k_{i}^{t + 1} = \frac{1}{B'}\sum_{k = 1}^{B'}\nabla f_{i}(x^{t + 1};\xi_{ik}^{t + 1}) - \frac{1}{B'}\sum_{k = 1}^{B'}\nabla f_{i}(x^{t};\xi_{ik}^{t + 1}) - \frac{b}{p_{\mathrm{mega}}}\left(h_{i}^{t} - \frac{1}{B'}\sum_{k = 1}^{B'}\nabla f_{i}(x^{t};\xi_{ik}^{t + 1})\right)$ +12: $m_i^{t + 1} = \frac{1}{p_a} k_i^{t + 1} - \frac{a}{p_a} (g_i^t -h_i^t)$ +13: else +14: Generate i.i.d. 
samples $\{\xi_{ij}^{t + 1}\}_{j = 1}^{B}$ of size $B$ from $\mathcal{D}_i$ +15: $k_{i}^{t + 1} = \frac{1}{B}\sum_{j = 1}^{B}\nabla f_{i}(x^{t + 1};\xi_{ij}^{t + 1}) - \frac{1}{B}\sum_{j = 1}^{B}\nabla f_{i}(x^{t};\xi_{ij}^{t + 1})$ +16: $m_i^{t + 1} = \mathcal{C}_i\left(\frac{1}{p_a} k_i^{t + 1} - \frac{a}{p_a} (g_i^t -h_i^t)\right)$ +17: end if +18: $h_i^{t + 1} = h_i^t +\frac{1}{p_a} k_i^{t + 1}$ +19: $g_i^{t + 1} = g_i^t +m_i^{t + 1}$ +20: Send $m_i^{t + 1}$ to the server +21: else +22: $h_i^{t + 1} = h_i^t$ +23: $m_i^{t + 1} = 0$ +24: $g_i^{t + 1} = g_i^t$ +25: end if +26: end for +27: $g^{t + 1} = g^t +\frac{1}{n}\sum_{i = 1}^{n}m_i^{t + 1}$ +28: end for +29: Output: $\hat{x}^T$ chosen uniformly at random from $\{x^t\}_{t = 0}^{T - 1}$ (a): For the formal description see Section 2.2. + +In the following theorem, we provide the convergence rate of DASHA-PP-SYNC-MVR. + +Theorem 11. Suppose that Assumptions 1, 2, 3, 5, 6, 7 and 8 hold. Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , $b = \frac{p_{\mathrm{mega}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , probability $p_{\mathrm{mega}} \in (0, 1]$ , batch sizes $B' \geq B \geq 1$ , + +$$ +\gamma \leq \left(L + \sqrt {\frac {8 (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {L _ {\sigma} ^ {2}}{B}\right) + \frac {1 6}{n p _ {m e g a} p _ {\mathrm {a}} ^ {2}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {L _ {\sigma} ^ {2}}{B}\right)}\right) ^ {- 1}, +$$ + +and $h_i^0 = g_i^0$ for all $i \in [n]$ in Algorithm 8. 
Then + +$$ +\begin{array}{l} \operatorname {E} \left[ \left\| \nabla f (\widehat {x} ^ {T}) \right\| ^ {2} \right] \leq \frac {1}{T} \left[ \frac {2 \Delta_ {0}}{\gamma} + \frac {4}{p _ {m e g a} p _ {\mathrm {a}}} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \frac {4 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {m e g a} p _ {\mathrm {a}}} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2} \right] \\ + \frac {1 2 \sigma^ {2}}{n B ^ {\prime}}. \\ \end{array} +$$ + +First, we introduce the expected density of compressors (Gorbunov et al., 2021; Tyurin and Richtárik, 2023). + +Definition 12. The expected density of the compressor $\mathcal{C}_i$ is $\zeta_{\mathcal{C}_i} \coloneqq \sup_{x \in \mathbb{R}^d} \operatorname{E}\left[\|\mathcal{C}_i(x)\|_0\right]$ , where $\|x\|_0$ is the number of nonzero components of $x \in \mathbb{R}^d$ . Let $\zeta_{\mathcal{C}} = \max_{i \in [n]} \zeta_{\mathcal{C}_i}$ . + +Note that $\zeta_{\mathcal{C}}$ is finite and $\zeta_{\mathcal{C}} \leq d$ . + +In the next corollary, we choose particular algorithm parameters to reveal the communication and oracle complexity. + +Corollary 6. 
Suppose that assumptions from Theorem 11 hold, probability $p_{\text{mega}} = \min \left\{ \frac{\zeta_{\mathcal{C}}}{d}, \frac{n\varepsilon B}{\sigma^2} \right\}$ , batch size $B' = \Theta \left( \frac{\sigma^2}{n\varepsilon} \right)$ , and $h_i^0 = g_i^0 = \frac{1}{B_{\text{init}}} \sum_{k=1}^{B_{\text{init}}} \nabla f_i(x^0; \xi_{ik}^0)$ for all $i \in [n]$ , initial batch size $B_{\text{init}} = \Theta \left( \frac{B}{p_{\text{mega}} \sqrt{p_a}} \right) = \Theta \left( \max \left\{ \frac{Bd}{\sqrt{p_a} \zeta_{\mathcal{C}}}, \frac{\sigma^2}{\sqrt{p_a} n\varepsilon} \right\} \right)$ , then DASHA-PP-SYNC-MVR needs + +$$ +T := \mathcal {O} \left(\frac {\Delta_ {0}}{\varepsilon} \left[ L + \left(\frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} + \sqrt {\frac {d}{p _ {\mathrm {a}} ^ {2} \zeta_ {\mathcal {C}} n}}\right) \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {\sigma}{p _ {\mathrm {a}} \sqrt {\varepsilon} n} \left(\frac {\widehat {L}}{\sqrt {B}} + \frac {L _ {\sigma}}{B}\right) \right] + \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon B}\right) +$$ + +communication rounds to get an $\varepsilon$ -solution, the expected communication complexity is equal to $\mathcal{O}(d + \zeta_{\mathcal{C}}T)$ , and the expected number of stochastic gradient calculations per node equals $\mathcal{O}(B_{\mathrm{init}} + BT)$ , where $\zeta_{\mathcal{C}}$ is the expected density from Definition 12. + +The main improvement of Corollary 6 over Corollary 3 is the smaller initial batch size $B_{\mathrm{init}}$ . However, Corollary 4 reveals that we can avoid the regimes where DASHA-PP-MVR is suboptimal. + +We also provide a theorem under the PL condition (see Assumption 9). + +Theorem 13. Suppose that Assumptions 1, 2, 3, 5, 6, 7, 8 and 9 hold. 
Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , $b = \frac{p_{\mathrm{mega}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , probability $p_{\mathrm{mega}} \in (0, 1]$ , batch sizes $B' \geq B \geq 1$ , stepsize

$$
\gamma \leq \min \left\{\left(L + \sqrt {\frac {16 (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) + \left(\frac {48 L _ {\sigma} ^ {2}}{n p _ {\mathrm {mega}} p _ {\mathrm {a}} ^ {2} B} + \frac {24 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {mega}} p _ {\mathrm {a}} ^ {2}}\right)}\right) ^ {- 1}, \frac {a}{2 \mu}, \frac {b}{2 \mu} \right\},
$$

and $h_i^0 = g_i^0$ for all $i \in [n]$ in Algorithm 8. Then

$$
\begin{array}{l} \operatorname {E} \left[ f (x ^ {T}) - f ^ {*} \right] \\ \leq \left(1 - \gamma \mu\right) ^ {T} \left(\Delta_ {0} + \frac {2 \gamma}{b} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {mega}}} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2}\right) + \frac {20 \sigma^ {2}}{\mu n B ^ {\prime}}. \\ \end{array}
$$

Let us provide bounds up to logarithmic factors and use the $\tilde{\mathcal{O}} (\cdot)$ notation.

Corollary 7.
Suppose that the assumptions of Theorem 13 hold, the probability $p_{\text{mega}} = \min \left\{\frac{\zeta_{\mathcal{C}}}{d}, \frac{\mu n\varepsilon B}{\sigma^2}\right\}$ , and the batch size $B' = \Theta\left(\frac{\sigma^2}{\mu n\varepsilon}\right)$ . Then DASHA-PP-SYNC-MVR needs

$$
T := \widetilde {\mathcal {O}} \left(\frac {\omega + 1}{p _ {\mathrm {a}}} + \frac {d}{p _ {\mathrm {a}} \zeta_ {\mathcal {C}}} + \frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon B} + \frac {L}{\mu} + \frac {\omega}{p _ {\mathrm {a}} \mu \sqrt {n}} \left(\frac {L _ {\sigma}}{\sqrt {B}} + \widehat {L}\right) + \left(\frac {\sqrt {d}}{p _ {\mathrm {a}} \mu \sqrt {\zeta_ {\mathcal {C}} n}} + \frac {\sigma}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon B}}\right) \left(\frac {L _ {\sigma}}{\sqrt {B}} + \widehat {L}\right)\right)
$$

communication rounds to get an $\varepsilon$-solution, the expected communication complexity equals $\widetilde{\mathcal{O}}(\zeta_{\mathcal{C}}T)$ , and the expected number of stochastic gradient calculations per node equals $\widetilde{\mathcal{O}}(BT)$ , where $\zeta_{\mathcal{C}}$ is the expected density from Definition 12.

The proof of this corollary almost repeats the proof of Corollary 6. Note that we can skip the initialization procedure and initialize $h_i^0$ and $g_i^0$ , for instance, with zeros, because the initialization error appears only under a logarithm.

Let us assume that $\frac{d}{\zeta_{\mathcal{C}}} = \Theta (\omega)$ (this holds for the Rand $K$ compressor); then the convergence rate of DASHA-PP-SYNC-MVR is

$$
\widetilde {\mathcal {O}} \left(\frac {\omega + 1}{p _ {\mathrm {a}}} + \frac {\sigma^ {2}}{p _ {\mathrm {a}} \mu n \varepsilon B} + \frac {L}{\mu} + \frac {\omega}{p _ {\mathrm {a}} \mu \sqrt {n}} \left(\frac {L _ {\sigma}}{\sqrt {B}} + \widehat {L}\right) + \frac {\sigma}{p _ {\mathrm {a}} n \mu^ {3 / 2} \sqrt {\varepsilon B}} \left(\frac {L _ {\sigma}}{\sqrt {B}} + \widehat {L}\right)\right).
\tag {34}
$$

Comparing (34) with the rate of DASHA-PP-MVR (31), one can see that DASHA-PP-SYNC-MVR improves the suboptimal term $\mathcal{P}_2$ from (31). However, Corollary 5 reveals that we can escape these suboptimal regimes by choosing the parameter $K$ of the Rand $K$ compressors in a particular way.

# G.1 Proof for DASHA-PP-SYNC-MVR

In this section, we provide the proof of the convergence rate of DASHA-PP-SYNC-MVR. There are four sources of randomness in Algorithm 8: the random samples $\xi^{t+1}$, the compressors $\{\mathcal{C}_i\}_{i=1}^n$, the availability of the nodes, and $c^{t+1}$. We define $\mathrm{E}_k[\cdot]$ , $\mathrm{E}_{\mathcal{C}}[\cdot]$ , $\mathrm{E}_{p_{\mathrm{a}}}[\cdot]$ and $\mathrm{E}_{p_{\mathrm{mega}}}[\cdot]$ to be the conditional expectations w.r.t. $\xi^{t+1}$ , $\{\mathcal{C}_i\}_{i=1}^n$ , the availability of the nodes, and $c^{t+1}$ , respectively, conditioned on all previous randomness. Moreover, we define $\mathrm{E}_{t+1}[\cdot]$ to be the conditional expectation w.r.t. all randomness in iteration $t+1$ , conditioned on all previous randomness.
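As an aside, the expected density from Definition 12 can be checked numerically. The following sketch (illustrative only, not part of the original analysis) implements a hypothetical Rand-$K$ compressor and estimates $\zeta_{\mathcal{C}}$, confirming $\zeta_{\mathcal{C}} = K \leq d$ on a vector with no zero entries:

```python
import random

def rand_k(x, k):
    # Hypothetical Rand-K compressor: keep k uniformly chosen coordinates,
    # scaled by d/k so that the compressor is unbiased in expectation.
    d = len(x)
    kept = set(random.sample(range(d), k))
    return [(d / k) * v if i in kept else 0.0 for i, v in enumerate(x)]

def expected_density(compressor, x, trials=200):
    # Monte Carlo estimate of E[||C(x)||_0] from Definition 12.
    counts = [sum(1 for v in compressor(x) if v != 0.0) for _ in range(trials)]
    return sum(counts) / trials

x = [1.0] * 10  # d = 10, all entries nonzero
zeta = expected_density(lambda v: rand_k(v, 3), x)
# Every draw keeps exactly K = 3 nonzeros, so zeta == 3.0 <= d.
```

Since each draw of Rand-$K$ on a dense vector has exactly $K$ nonzero entries, the estimate is exact here; for sparse inputs it would only be an upper bound on the per-input density.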
Let us denote

$$
k _ {i, 1} ^ {t + 1} := \frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} (x ^ {t + 1}; \xi_ {i k} ^ {t + 1}) - \frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} (x ^ {t}; \xi_ {i k} ^ {t + 1}) - \frac {b}{p _ {\mathrm {mega}}} \left(h _ {i} ^ {t} - \frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} (x ^ {t}; \xi_ {i k} ^ {t + 1})\right),
$$

$$
k _ {i, 2} ^ {t + 1} := \frac {1}{B} \sum_ {j = 1} ^ {B} \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i j} ^ {t + 1}\right) - \frac {1}{B} \sum_ {j = 1} ^ {B} \nabla f _ {i} \left(x ^ {t}; \xi_ {i j} ^ {t + 1}\right),
$$

$$
h _ {i, 1} ^ {t + 1} := \left\{ \begin{array}{l l} h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i, 1} ^ {t + 1}, & i ^ {\text {th}} \text { node is participating}, \\ h _ {i} ^ {t}, & \text {otherwise}, \end{array} \right.
$$

$$
h _ {i, 2} ^ {t + 1} := \left\{ \begin{array}{l l} h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1}, & i ^ {\text {th}} \text { node is participating}, \\ h _ {i} ^ {t}, & \text {otherwise}, \end{array} \right.
$$

$$
g _ {i, 1} ^ {t + 1} := \left\{ \begin{array}{l l} g _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i, 1} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right), & i ^ {\text {th}} \text { node is participating}, \\ g _ {i} ^ {t}, & \text {otherwise}, \end{array} \right.
$$

$$
g _ {i, 2} ^ {t + 1} := \left\{ \begin{array}{l l} g _ {i} ^ {t} + \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right), & i ^ {\text {th}} \text { node is participating}, \\ g _ {i} ^ {t}, & \text {otherwise}, \end{array} \right.
$$

$$
h _ {1} ^ {t + 1} := \frac {1}{n} \sum_ {i = 1} ^ {n} h _ {i, 1} ^ {t + 1}, \quad h _ {2} ^ {t + 1} := \frac {1}{n} \sum_ {i = 1} ^ {n} h _ {i, 2} ^ {t + 1}, \quad g _ {1} ^ {t + 1} := \frac {1}{n} \sum_ {i = 1} ^ {n} g _ {i, 1} ^ {t + 1}, \quad \text {and} \quad g _ {2} ^ {t + 1} := \frac {1}{n} \sum_ {i = 1} ^ {n} g _ {i, 2} ^ {t + 1}.
$$

Note that

$$
h ^ {t + 1} = \left\{ \begin{array}{l l} h _ {1} ^ {t + 1}, & c ^ {t + 1} = 1, \\ h _ {2} ^ {t + 1}, & c ^ {t + 1} = 0, \end{array} \right.
$$

and

$$
g ^ {t + 1} = \left\{ \begin{array}{l l} g _ {1} ^ {t + 1}, & c ^ {t + 1} = 1, \\ g _ {2} ^ {t + 1}, & c ^ {t + 1} = 0. \end{array} \right.
$$

First, we will prove two lemmas.

Lemma 13. Suppose that Assumptions 3, 5, 7 and 8 hold and let us consider the sequences $\{g_i^{t+1}\}_{i=1}^n$ and $\{h_i^{t+1}\}_{i=1}^n$ from Algorithm 8, then

$$
\begin{array}{l} \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \operatorname {E} _ {p _ {\mathrm {mega}}} \left[\left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right]\right]\right] \\ \leq \frac {2 (1 - p _ {\mathrm {mega}}) \omega}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left\| k _ {i, 2} ^ {t + 1} \right\| ^ {2} + \left(\frac {(p _ {\mathrm {a}} - p _ {\mathrm {a a}}) a ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} + \frac {2 (1 - p _ {\mathrm {mega}}) a ^ {2} \omega}{n ^ {2} p _ {\mathrm {a}}}\right) \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2}, \\ \end{array}
$$

and

$$
\begin{array}{l}
\mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\text {m e g a}}} \left[\left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right]\right]\right] \\ \leq \frac {2 (1 - p _ {\text {m e g a}}) \omega}{p _ {\mathrm {a}}} \left\| k _ {i, 2} ^ {t + 1} \right\| ^ {2} + \left(\frac {(1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} + \frac {2 (1 - p _ {\text {m e g a}}) a ^ {2} \omega}{p _ {\mathrm {a}}}\right) \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2}, \quad \forall i \in [ n ]. \\ \end{array} +$$ + +Proof. First, we get the bound for $\mathrm{E}_{t + 1}\left[\left\| g^{t + 1} - h^{t + 1}\right\| ^2\right]$ : + +$$ +\begin{array}{l} \left. \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \operatorname {E} _ {p _ {\mathrm {m e g a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] \right] \right] \right. \\ = p _ {\text {m e g a}} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {1} ^ {t + 1} - h _ {1} ^ {t + 1} \right\| ^ {2} \right] + (1 - p _ {\text {m e g a}}) \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {2} ^ {t + 1} - h _ {2} ^ {t + 1} \right\| ^ {2} \right] \right]. 
\\ \end{array}
$$

Using

$$
\mathrm {E} _ {p _ {\mathrm {a}}} \left[ g _ {i, 1} ^ {t + 1} - h _ {i, 1} ^ {t + 1} \right] = g _ {i} ^ {t} + k _ {i, 1} ^ {t + 1} - a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) - h _ {i} ^ {t} - k _ {i, 1} ^ {t + 1} = (1 - a) \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)
$$

and

$$
\mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ g _ {i, 2} ^ {t + 1} - h _ {i, 2} ^ {t + 1} \right] \right] = g _ {i} ^ {t} + k _ {i, 2} ^ {t + 1} - a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) - h _ {i} ^ {t} - k _ {i, 2} ^ {t + 1} = (1 - a) \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right),
$$

we have

$$
\begin{array}{l} \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {mega}}} \left[\left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right]\right]\right] \\ \stackrel {(16)} {=} p _ {\mathrm {mega}} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {1} ^ {t + 1} - h _ {1} ^ {t + 1} - \mathrm {E} _ {p _ {\mathrm {a}}} \left[ g _ {1} ^ {t + 1} - h _ {1} ^ {t + 1} \right] \right\| ^ {2} \right] \\ + \left(1 - p _ {\mathrm {mega}}\right) \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {2} ^ {t + 1} - h _ {2} ^ {t + 1} - \operatorname {E} _ {p _ {\mathrm {a}}} \left[ g _ {2} ^ {t + 1} - h _ {2} ^ {t + 1} \right] \right\| ^ {2} \right] \right] \\ + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2}.
\\ \end{array} +$$ + +We can use Lemma 1 two times with i) $r_i = g_i^t - h_i^t$ and $s_i = -a(g_i^t - h_i^t)$ and ii) $r_i = g_i^t - h_i^t$ and $s_i = p_{\mathrm{a}}\mathcal{C}_i\left(\frac{1}{p_{\mathrm{a}}} k_{i,2}^{t + 1} - \frac{a}{p_{\mathrm{a}}} (g_i^t - h_i^t)\right) - k_{i,2}^{t + 1}$ , to obtain + +$$ +\begin{array}{l} \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] \right] \right] \\ \leq \frac {p _ {\mathrm {m e g a}} a ^ {2} \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ \left. \right. + \left(1 - p _ {\text {m e g a}}\right)\left(\frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {\mathcal {C}} \left[\left\| p _ {\mathrm {a}} \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - \left(k _ {i, 2} ^ {t + 1} - a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right)\right\| ^ {2} \right]\right) \\ + \left(1 - p _ {\text {m e g a}}\right) \left(\frac {a ^ {2} \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2}\right) \\ + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \\ = \frac {a ^ {2} \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ \end{array} +$$ + +$$ +\begin{array}{l} \left. \right. 
+ \left(1 - p _ {\text {m e g a}}\right)\left(\frac {p _ {\mathrm {a}}}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {\mathcal {C}} \left[\left\| \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - \left(\frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right)\right\| ^ {2} \right]\right) \\ + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \\ \leq \frac {a ^ {2} \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + \frac {\left(1 - p _ {\mathrm {m e g a}}\right) p _ {\mathrm {a}} \omega}{n ^ {2}} \sum_ {i = 1} ^ {n} \left\| \frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) \right\| ^ {2} \\ + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \\ = \frac {a ^ {2} \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + \frac {(1 - p _ {\mathrm {m e g a}}) \omega}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left\| k _ {i, 2} ^ {t + 1} - a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) \right\| ^ {2} \\ + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2}. \\ \end{array} +$$ + +In the last inequality, we use Assumption 7. 
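The chain above repeatedly combines the tower rule with identity (16), the bias-variance decomposition $\mathrm{E}\|X\|^2 = \mathrm{E}\|X - \mathrm{E}X\|^2 + \|\mathrm{E}X\|^2$. As a quick numerical sanity check of this decomposition (illustrative only, with made-up values, not part of the proof):

```python
# Check E||X||^2 == E||X - EX||^2 + ||EX||^2 for a toy two-point
# random vector X in R^2, listed as (value, probability) pairs.
outcomes = [([1.0, 2.0], 0.5), ([3.0, -2.0], 0.5)]

dim = len(outcomes[0][0])
mean = [sum(p * v[i] for v, p in outcomes) for i in range(dim)]   # E[X]
lhs = sum(p * sum(c * c for c in v) for v, p in outcomes)         # E||X||^2
var = sum(p * sum((v[i] - mean[i]) ** 2 for i in range(dim))
          for v, p in outcomes)                                   # E||X - EX||^2
rhs = var + sum(c * c for c in mean)                              # + ||EX||^2
# lhs == rhs == 9.0
```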
Next, using (15), we have + +$$ +\begin{array}{l} \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] \right] \right] \\ \leq \frac {2 (1 - p _ {\mathrm {m e g a}}) \omega}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left\| k _ {i, 2} ^ {t + 1} \right\| ^ {2} + \left(\frac {(p _ {\mathrm {a}} - p _ {\mathrm {a a}}) a ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} + \frac {2 (1 - p _ {\mathrm {m e g a}}) \omega a ^ {2}}{n ^ {2} p _ {\mathrm {a}}}\right) \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2}. \\ \end{array} +$$ + +The second inequality can be proved almost in the same way: + +$$ +\mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[ \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \right] \right] +$$ + +$$ +\begin{array}{l} = p _ {\text {m e g a}} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {i, 1} ^ {t + 1} - h _ {i, 1} ^ {t + 1} \right\| ^ {2} \right] + (1 - p _ {\text {m e g a}}) \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {i, 2} ^ {t + 1} - h _ {i, 2} ^ {t + 1} \right\| ^ {2} \right] \right] \\ \stackrel {(1 6)} {=} p _ {\text {m e g a}} \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {i, 1} ^ {t + 1} - h _ {i, 1} ^ {t + 1} - (1 - a) \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) \right\| ^ {2} \right] + (1 - p _ {\text {m e g a}}) \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {i, 2} ^ {t + 1} - h _ {i, 2} ^ {t + 1} \right\| ^ {2} \right] \right] \\ + p _ {\text {m e g a}} (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ = \frac {p _ {\mathrm {m e g a}} \left(1 - p _ {\mathrm {a}}\right) a ^ {2}}{p _ {\mathrm {a}}} \left\| g _ {i} ^ {t} - h _ 
{i} ^ {t} \right\| ^ {2} + \left(1 - p _ {\mathrm {mega}}\right) \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| g _ {i, 2} ^ {t + 1} - h _ {i, 2} ^ {t + 1} \right\| ^ {2} \right] \right] \\ + p _ {\mathrm {mega}} (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ \end{array}
$$

$$
\begin{array}{l} \stackrel {(16)} {=} \frac {p _ {\mathrm {mega}} (1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - p _ {\mathrm {mega}}) \operatorname {E} _ {\mathcal {C}} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \| g _ {i, 2} ^ {t + 1} - h _ {i, 2} ^ {t + 1} - (1 - a) \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) \| ^ {2} \right] \right] \\ + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ = \frac {p _ {\mathrm {mega}} (1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + \left(1 - p _ {\mathrm {mega}}\right) p _ {\mathrm {a}} \mathrm {E} _ {\mathcal {C}} \left[ \left\| g _ {i} ^ {t} + \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - \left(h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1}\right) - (1 - a) \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) \right\| ^ {2} \right] \\ + \left(1 - p _ {\mathrm {mega}}\right) \left(1 - p _
{\mathrm {a}}\right) \left\| g _ {i} ^ {t} - h _ {i} ^ {t} - (1 - a) \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) \right\| ^ {2} \\ + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ = \frac {p _ {\mathrm {m e g a}} (1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + \left(1 - p _ {\text {m e g a}}\right) p _ {\mathrm {a}} \mathrm {E} _ {\mathcal {C}} \left[ \left\| \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - \left(\frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} + \left(1 - p _ {\text {m e g a}}\right) \left(1 - p _ {\mathrm {a}}\right) a ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ \stackrel {(1 6)} {=} \left(\frac {p _ {\mathrm {m e g a}} (1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} + \frac {(1 - p _ {\mathrm {m e g a}}) (1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}}\right) \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + \left(1 - p _ {\mathrm {m e g a}}\right) p _ {\mathrm {a}} \mathrm {E} _ {\mathcal {C}} \left[ \left\| \mathcal {C} _ {i} \left(\frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - \left(\frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) \right\| ^ {2} \right] \\ + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ = \frac {(1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + \left(1 - p _ {\text {m e g a}}\right) p _ {\mathrm {a}} \mathrm {E} _ {\mathcal {C}} \left[ \left\| \mathcal {C} _ {i} 
\left(\frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) - \left(\frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \frac {a}{p _ {\mathrm {a}}} \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right)\right) \right\| ^ {2} \right] \\ + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ \leq \frac {(1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + \frac {\left(1 - p _ {\mathrm {mega}}\right) \omega}{p _ {\mathrm {a}}} \left\| k _ {i, 2} ^ {t + 1} - a \left(g _ {i} ^ {t} - h _ {i} ^ {t}\right) \right\| ^ {2} \\ + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ \stackrel {(15)} {\leq} \frac {2 (1 - p _ {\mathrm {mega}}) \omega}{p _ {\mathrm {a}}} \left\| k _ {i, 2} ^ {t + 1} \right\| ^ {2} + \left(\frac {(1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} + \frac {2 (1 - p _ {\mathrm {mega}}) a ^ {2} \omega}{p _ {\mathrm {a}}}\right) \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \\ + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2}. \\ \end{array}
$$

Lemma 14. Suppose that Assumptions 3, 5, 6 and 8 hold and let us consider the sequence $\{h_i^{t + 1}\}_{i = 1}^n$ from Algorithm 8, then

$$
\begin{array}{l}
\mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\text {m e g a}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right]\right] \\ \leq \frac {2 b ^ {2} \sigma^ {2}}{n p _ {\text {m e g a}} p _ {\mathrm {a}} B ^ {\prime}} + \left(\frac {2 p _ {\text {m e g a}} L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B ^ {\prime}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} + \frac {(1 - p _ {\text {m e g a}}) L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \\ + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {m e g a}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + \left(p _ {m e g a} \left(1 - \frac {b}{p _ {m e g a}}\right) ^ {2} + (1 - p _ {m e g a})\right) \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}, \\ \end{array} +$$ + +$$ +\begin{array}{l} \left. \right. \operatorname {E} _ {k} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \operatorname {E} _ {p _ {\text {m e g a}}} \left[\left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right)\right\| ^ {2} \right]\right]\right] \\ \leq \frac {2 b ^ {2} \sigma^ {2}}{p _ {\mathrm {a}} p _ {\text {m e g a}} B ^ {\prime}} + \left(\frac {2 p _ {\text {m e g a}} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B ^ {\prime}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} + \frac {(1 - p _ {\text {m e g a}}) L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) L _ {i} ^ {2}}{p _ {\mathrm {a}}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \\ \left. \right. 
+ \frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\text {m e g a}} p _ {\mathrm {a}}} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right\| ^ {2} + \left(p _ {\text {m e g a}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} + \left(1 - p _ {\text {m e g a}}\right)\right)\left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right\| ^ {2}, \quad \forall i \in [ n ], \\ \end{array} +$$ + +and + +$$ +\mathrm {E} _ {k} \left[ \left\| k _ {i, 2} ^ {t + 1} \right\| ^ {2} \right] \leq \left(\frac {L _ {\sigma} ^ {2}}{B} + L _ {i} ^ {2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2}, \quad \forall i \in [ n ], +$$ + +Proof. First, we prove the bound for $\operatorname{E}_k\left[\operatorname{E}_{p_{\mathrm{a}}}\left[\operatorname{E}_{p_{\mathrm{mega}}}\left[\left\|h^{t+1}-\nabla f(x^{t+1})\right\|^2\right]\right]\right]$ . Using + +$$ +\mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i, 1} ^ {t + 1} \right] \right] +$$ + +$$ +\begin{array}{l} = h _ {i} ^ {t} + \mathrm {E} _ {k} \left[ \frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} (x ^ {t + 1}; \xi_ {i k} ^ {t + 1}) - \frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} (x ^ {t}; \xi_ {i k} ^ {t + 1}) - \frac {b}{p _ {\mathrm {m e g a}}} \left(h _ {i} ^ {t} - \frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} (x ^ {t}; \xi_ {i k} ^ {t + 1})\right) \right] \\ = h _ {i} ^ {t} + \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) - \frac {b}{p _ {\mathrm {m e g a}}} \left(h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t})\right) \\ \end{array} +$$ + +and + +$$ +\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i, 2} ^ {t + 1} \right] \right] \\ = h _ {i} ^ {t} + \mathrm {E} _ {k} \left[ \frac {1}{B} \sum_ {j = 1} ^ {B} \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i j} ^ {t + 1}\right) - \frac {1}{B} \sum_ {j = 1} ^ {B} \nabla f _ {i} \left(x ^ {t}; \xi_ {i j} ^ {t + 1}\right) 
\right] \\ = h _ {i} ^ {t} + \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right), \\ \end{array} +$$ + +we have + +$$ +\begin{array}{l} \left. \right. \operatorname {E} _ {k} \left[ \operatorname {E} _ {p _ {\mathrm {a}}} \left[ \operatorname {E} _ {p _ {\mathrm {m e g a}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right]\right] \\ = p _ {\text {m e g a}} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {1} ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \right] + (1 - p _ {\text {m e g a}}) \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {2} ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \right] \\ \stackrel {(1 6)} {=} p _ {\text {m e g a}} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {1} ^ {t + 1} - \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {1} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] + (1 - p _ {\text {m e g a}}) \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {2} ^ {t + 1} - \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {2} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] \\ \left. + \left(p _ {\text {m e g a}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} + \left(1 - p _ {\text {m e g a}}\right)\right) \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. \right. \\ \end{array} +$$ + +We can use Lemma 1 two times with i) $r_i = h_i^t$ and $s_i = k_{i,1}^{t + 1}$ and ii) $r_i = h_i^t$ and $s_i = k_{i,2}^{t + 1}$ , to obtain + +$$ +\begin{array}{l} \left. \right. \mathrm {E} _ {k} \left[ \right. \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \right. \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[ \right.\left| \right.\left| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right| ^ {2} \left. \right]\left. \right]\left. \right]\left. 
\right] \\ \leq p _ {\text {m e g a}} \left(\frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \left| \left| k _ {i, 1} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 1} ^ {t + 1} \right] \right| \right| ^ {2} \right] + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - \frac {b}{p _ {\text {m e g a}}} \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2}\right) \\ + \left(1 - p _ {\text {m e g a}}\right) \left(\frac {1}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \left\| k _ {i, 2} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 2} ^ {t + 1} \right] \right\| ^ {2} \right] + \frac {p _ {\mathrm {a}} - p _ {\mathrm {a a}}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}\right) \\ \left. + \left(p _ {\text {m e g a}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} + (1 - p _ {\text {m e g a}})\right) \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right. 
\\ \end{array} +$$ + +$$ +\begin{array}{l} \stackrel {(1 5)} {\leq} \frac {p _ {\mathrm {m e g a}}}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \operatorname {E} _ {k} \left[ \left| \left| k _ {i, 1} ^ {t + 1} - \operatorname {E} _ {k} \left[ k _ {i, 1} ^ {t + 1} \right] \right| \right| ^ {2} \right] \\ + \frac {1 - p _ {\mathrm {m e g a}}}{n ^ {2} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \mathrm {E} _ {k} \left[ \left\| k _ {i, 2} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 2} ^ {t + 1} \right] \right\| ^ {2} \right] \\ + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left| \left| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right| \right| ^ {2} \\ + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \left(p _ {\mathrm {m e g a}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + \left(1 - p _ {\mathrm {m e g a}}\right)\right) \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}. \tag {35} \\ \end{array} +$$ + +Let us consider $\mathrm{E}_k\left[\left\| k_{i,1}^{t + 1} - \mathrm{E}_k\left[k_{i,1}^{t + 1}\right]\right\|^2\right]$ . + +$$ +\begin{array}{l} \operatorname {E} _ {k} \left[ \left\| k _ {i, 1} ^ {t + 1} - \operatorname {E} _ {k} \left[ k _ {i, 1} ^ {t + 1} \right] \right\| ^ {2} \right] \\ = \mathrm {E} _ {k} \left[ \right.\left\| \frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i k} ^ {t + 1}\right) - \frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} \left(x ^ {t}; \xi_ {i k} ^ {t + 1}\right) - \frac {b}{p _ {\text {m e g a}}} \left(h _ {i} ^ {t} - \frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} \left(x ^ {t}; \xi_ {i k} ^ {t + 1}\right)\right)\right. 
\\ \end{array} +$$ + +$$ +\begin{array}{l} \left. - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - \frac {b}{p _ {\mathrm {m e g a}}} \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \| ^ {2} \right] \\ = \operatorname {E} _ {k} \left[ \right.\left\| \frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i k} ^ {t + 1}\right) - \frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} \left(x ^ {t}; \xi_ {i k} ^ {t + 1}\right) + \frac {b}{p _ {\mathrm {m e g a}}} \left(\frac {1}{B ^ {\prime}} \sum_ {k = 1} ^ {B ^ {\prime}} \nabla f _ {i} \left(x ^ {t}; \xi_ {i k} ^ {t + 1}\right)\right)\right. \\ \left. - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) + \frac {b}{p _ {\mathrm {m e g a}}} \left(\nabla f _ {i} \left(x ^ {t}\right)\right)\right) \| ^ {2} \right] \\ = \frac {1}{B ^ {\prime 2}} \sum_ {k = 1} ^ {B ^ {\prime}} \mathrm {E} _ {k} \left[ \left\| \frac {b}{p _ {\text {m e g a}}} \left(\nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i k} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right)\right) \right. \right. \\ \left. + \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) \left(\nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i k} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i k} ^ {t + 1}\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right)\right) \| ^ {2} \right], \\ \end{array} +$$ + +where we used independence of the mini-batch samples. 
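The independence of the mini-batch samples is what makes the variance of the batch average scale as $1/B'$ (and $1/B$) in the bounds that follow. A minimal exact check of this scaling, by enumeration rather than sampling (illustrative only, not part of the proof), for i.i.d. signs with $\sigma^2 = 1$:

```python
from itertools import product

# Independence of the B mini-batch samples gives Var(batch mean) = sigma^2 / B.
# Exact check for samples uniform on {-1, +1} (sigma^2 = 1) and B = 4,
# enumerating all 2^B equally likely draws.
B = 4
means = [sum(draw) / B for draw in product([-1.0, 1.0], repeat=B)]
var_of_mean = sum(m * m for m in means) / len(means)  # E[mean] = 0 by symmetry
# var_of_mean == 1 / B == 0.25
```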
Using (15), we get + +$$ +\begin{array}{l} \operatorname {E} _ {k} \left[ \left| \left| k _ {i, 1} ^ {t + 1} - \operatorname {E} _ {k} \left[ k _ {i, 1} ^ {t + 1} \right] \right| \right| ^ {2} \right] \\ \leq \frac {2 b ^ {2}}{B ^ {\prime 2} p _ {\mathrm {m e g a}} ^ {2}} \sum_ {k = 1} ^ {B ^ {\prime}} \operatorname {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i k} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \\ + \frac {2}{B ^ {\prime 2}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} \sum_ {k = 1} ^ {B ^ {\prime}} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} (x ^ {t + 1}; \xi_ {i k} ^ {t + 1}) - \nabla f _ {i} (x ^ {t}; \xi_ {i k} ^ {t + 1}) - \left(\nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t})\right) \right\| ^ {2} \right]. \\ \end{array} +$$ + +Due to Assumptions 5 and 6, we have + +$$ +\left. \right. \mathrm {E} _ {k} \left[\left\| k _ {i, 1} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 1} ^ {t + 1} \right]\right\| ^ {2} \right] \leq \frac {2 b ^ {2} \sigma^ {2}}{B ^ {\prime} p _ {\text {m e g a}} ^ {2}} + \frac {2 L _ {\sigma} ^ {2}}{B ^ {\prime}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} \| x ^ {t + 1} - x ^ {t} \| ^ {2}. \tag {36} +$$ + +Next, we estimate the bound for $\mathrm{E}_k\left[\| k_{i,2}^{t + 1} - \mathrm{E}_k\left[k_{i,2}^{t + 1}\right]\| ^2\right]$ . 
+

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \left\| k _ {i, 2} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 2} ^ {t + 1} \right] \right\| ^ {2} \right] \\ = \mathrm {E} _ {k} \left[ \left\| \frac {1}{B} \sum_ {j = 1} ^ {B} \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i j} ^ {t + 1}\right) - \frac {1}{B} \sum_ {j = 1} ^ {B} \nabla f _ {i} \left(x ^ {t}; \xi_ {i j} ^ {t + 1}\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \right] \\ = \frac {1}{B ^ {2}} \sum_ {j = 1} ^ {B} \mathrm {E} _ {k} \left[ \left\| \nabla f _ {i} \left(x ^ {t + 1}; \xi_ {i j} ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}; \xi_ {i j} ^ {t + 1}\right) - \left(\nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \right]. \\ \end{array}
$$

Due to Assumption 6, we have

$$
\mathrm {E} _ {k} \left[\left\| k _ {i, 2} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 2} ^ {t + 1} \right]\right\| ^ {2} \right] \leq \frac {L _ {\sigma} ^ {2}}{B} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2}. \tag {37}
$$

Plugging (36) and (37) into (35), we obtain

$$
\begin{array}{l} 
\mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[\left|\left| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right|\right| ^ {2} \right]\right]\right] \\ \leq \frac {p _ {\text {m e g a}}}{n p _ {\mathrm {a}}} \left(\frac {2 b ^ {2} \sigma^ {2}}{B ^ {\prime} p _ {\text {m e g a}} ^ {2}} + \frac {2 L _ {\sigma} ^ {2}}{B ^ {\prime}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} \| x ^ {t + 1} - x ^ {t} \| ^ {2}\right) \\ + \frac {\left(1 - p _ {\mathrm {m e g a}}\right) L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ \end{array} +$$ + +$$ ++ \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + \left(p _ {\mathrm {m e g a}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + (1 - p _ {\mathrm {m e g a}})\right) \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. +$$ + +Using Assumption 3, we get + +$$ +\begin{array}{l} \left. \right. 
\mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[\left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right]\right]\right] \\ \leq \frac {2 b ^ {2} \sigma^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} B ^ {\prime}} + \left(\frac {2 p _ {\mathrm {m e g a}} L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B ^ {\prime}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + \frac {(1 - p _ {\mathrm {m e g a}}) L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \\ + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + \left(p _ {\mathrm {m e g a}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + (1 - p _ {\mathrm {m e g a}})\right) \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}. 
\\ \end{array}
$$

Using almost the same derivations, we can prove the second inequality:

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[ \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] \right] \\ = p _ {\mathrm {m e g a}} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i, 1} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] + (1 - p _ {\mathrm {m e g a}}) \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i, 2} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right] \\ \stackrel {(16)} {=} p _ {\mathrm {m e g a}} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i, 1} ^ {t + 1} - \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i, 1} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] + (1 - p _ {\mathrm {m e g a}}) \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \left\| h _ {i, 2} ^ {t + 1} - \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ h _ {i, 2} ^ {t + 1} \right] \right] \right\| ^ {2} \right] \right] \\ + \left(p _ {\mathrm {m e g a}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + \left(1 - p _ {\mathrm {m e g a}}\right)\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ = p _ {\mathrm {m e g a}} p _ {\mathrm {a}} \mathrm {E} _ {k} \left[ \left\| h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i, 1} ^ {t + 1} - \left(h _ {i} ^ {t} + \mathrm {E} _ {k} \left[ k _ {i, 1} ^ {t + 1} \right]\right) \right\| ^ {2} \right] \\ \left.
+ p _ {\text {m e g a}} \left(1 - p _ {\mathrm {a}}\right) \left\| h _ {i} ^ {t} - \left(h _ {i} ^ {t} + \mathrm {E} _ {k} \left[ k _ {i, 1} ^ {t + 1} \right]\right) \right\| ^ {2} \right. \\ + \left(1 - p _ {\mathrm {m e g a}}\right) p _ {\mathrm {a}} \mathrm {E} _ {k} \left[ \left\| h _ {i} ^ {t} + \frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \left(h _ {i} ^ {t} + \mathrm {E} _ {k} \left[ k _ {i, 2} ^ {t + 1} \right]\right) \right\| ^ {2} \right] \\ \left. + \left(1 - p _ {\text {m e g a}}\right) \left(1 - p _ {\mathrm {a}}\right) \left\| h _ {i} ^ {t} - \left(h _ {i} ^ {t} + \mathrm {E} _ {k} \left[ k _ {i, 2} ^ {t + 1} \right]\right) \right\| ^ {2} \right. \\ \left. + \left(p _ {\text {m e g a}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} + (1 - p _ {\text {m e g a}})\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right. \\ = p _ {\mathrm {m e g a}} p _ {\mathrm {a}} \mathrm {E} _ {k} \left[ \left\| \frac {1}{p _ {\mathrm {a}}} k _ {i, 1} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 1} ^ {t + 1} \right] \right\| ^ {2} \right] \\ + p _ {\text {m e g a}} \left(1 - p _ {\mathrm {a}}\right) \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - \frac {b}{p _ {\text {m e g a}}} \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + (1 - p _ {\mathrm {m e g a}}) p _ {\mathrm {a}} \mathrm {E} _ {k} \left[ \left\| \frac {1}{p _ {\mathrm {a}}} k _ {i, 2} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 2} ^ {t + 1} \right] \right\| ^ {2} \right] \\ + (1 - p _ {\mathrm {m e g a}}) (1 - p _ {\mathrm {a}}) \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ \left. + \left(p _ {\text {m e g a}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} + (1 - p _ {\text {m e g a}})\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right. 
\\ \stackrel {(1 6)} {=} \frac {p _ {\mathrm {m e g a}}}{p _ {\mathrm {a}}} \mathrm {E} _ {k} \left[ \left| \left| k _ {i, 1} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 1} ^ {t + 1} \right] \right| \right| ^ {2} \right] \\ + \frac {\left(1 - p _ {\text {m e g a}}\right)}{p _ {\mathrm {a}}} \mathrm {E} _ {k} \left[ \left| \left| k _ {i, 2} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 2} ^ {t + 1} \right] \right| \right| ^ {2} \right] \\ + \frac {p _ {\text {m e g a}} (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) - \frac {b}{p _ {\text {m e g a}}} \left(h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right)\right) \right\| ^ {2} \\ + \frac {\left(1 - p _ {\mathrm {m e g a}}\right) \left(1 - p _ {\mathrm {a}}\right)}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ \end{array} +$$ + +$$ +\begin{array}{l} \left. + \left(p _ {\text {m e g a}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} + (1 - p _ {\text {m e g a}})\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right. 
\\ \stackrel {(15)} {\leq} \frac {p _ {\mathrm {m e g a}}}{p _ {\mathrm {a}}} \mathrm {E} _ {k} \left[ \left\| k _ {i, 1} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 1} ^ {t + 1} \right] \right\| ^ {2} \right] \\ + \frac {\left(1 - p _ {\mathrm {m e g a}}\right)}{p _ {\mathrm {a}}} \mathrm {E} _ {k} \left[ \left\| k _ {i, 2} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 2} ^ {t + 1} \right] \right\| ^ {2} \right] \\ + \frac {2 (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ + \frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \left(p _ {\mathrm {m e g a}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + \left(1 - p _ {\mathrm {m e g a}}\right)\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}. \\ \end{array}
$$

Using (36) and (37), we get

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[ \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right]
\right] \\ \leq \frac {2 b ^ {2} \sigma^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {m e g a}} B ^ {\prime}} + \frac {2 p _ {\mathrm {m e g a}} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B ^ {\prime}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} \| x ^ {t + 1} - x ^ {t} \| ^ {2} \\ + \frac {\left(1 - p _ {\mathrm {m e g a}}\right) L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} \\ + \frac {2 (1 - p _ {\mathrm {a}})}{p _ {\mathrm {a}}} \| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) \| ^ {2} \\ + \frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \left(p _ {\mathrm {m e g a}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + \left(1 - p _ {\mathrm {m e g a}}\right)\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}. \\ \end{array}
$$

Next, due to Assumption 3, we obtain

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[ \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right) \right\| ^ {2} \right] \right]
\right] \\ \leq \frac {2 b ^ {2} \sigma^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {m e g a}} B ^ {\prime}} + \left(\frac {2 p _ {\mathrm {m e g a}} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B ^ {\prime}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + \frac {(1 - p _ {\mathrm {m e g a}}) L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) L _ {i} ^ {2}}{p _ {\mathrm {a}}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \\ + \frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \left(p _ {\mathrm {m e g a}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + \left(1 - p _ {\mathrm {m e g a}}\right)\right) \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}. \\ \end{array}
$$

The third inequality can be proved with the help of (37) and Assumption 3.

$$
\begin{array}{l} \mathrm {E} _ {k} \left[ \left\| k _ {i, 2} ^ {t + 1} \right\| ^ {2} \right] \\ \stackrel {(16)} {=} \mathrm {E} _ {k} \left[ \left\| k _ {i, 2} ^ {t + 1} - \mathrm {E} _ {k} \left[ k _ {i, 2} ^ {t + 1} \right] \right\| ^ {2} \right] + \left\| \nabla f _ {i} (x ^ {t + 1}) - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \\ \leq \frac {L _ {\sigma} ^ {2}}{B} \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \left\| \nabla f _ {i} \left(x ^ {t + 1}\right) - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \\ \leq \left(\frac {L _ {\sigma} ^ {2}}{B} + L _ {i} ^ {2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2}. \\ \end{array}
$$

Theorem 11. Suppose that Assumptions 1, 2, 3, 5, 6, 7 and 8 hold.
Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$, $b = \frac{p_{\mathrm{mega}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$, probability $p_{\mathrm{mega}} \in (0, 1]$, batch size $B' \geq B \geq 1$, stepsize

$$
\gamma \leq \left(L + \sqrt {\frac {8 (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {L _ {\sigma} ^ {2}}{B}\right) + \frac {16}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} ^ {2}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {L _ {\sigma} ^ {2}}{B}\right)}\right) ^ {- 1},
$$

and $h_i^0 = g_i^0$ for all $i \in [n]$ in Algorithm 8. Then

$$
\mathrm {E} \left[ \left\| \nabla f (\widehat {x} ^ {T}) \right\| ^ {2} \right] \leq \frac {1}{T} \left[ \frac {2 \Delta_ {0}}{\gamma} + \frac {4}{p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \frac {4 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2} \right] + \frac {12 \sigma^ {2}}{n B ^ {\prime}}.
$$

Proof.
Due to Lemma 2 and the update step from Line 5 in Algorithm 8, we have

$$
\begin{array}{l} \mathrm {E} _ {t + 1} \left[ f (x ^ {t + 1}) \right] \\ \leq \mathrm {E} _ {t + 1} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \frac {\gamma}{2} \| g ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ = \mathrm {E} _ {t + 1} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \frac {\gamma}{2} \| g ^ {t} - h ^ {t} + h ^ {t} - \nabla f (x ^ {t}) \| ^ {2} \right] \\ \stackrel {(15)} {\leq} \mathrm {E} _ {t + 1} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \gamma \left(\left\| g ^ {t} - h ^ {t} \right\| ^ {2} + \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}\right) \right]. \\ \end{array}
$$

Let us fix constants $\kappa, \eta, \nu, \rho \in [0, \infty)$ that we will define later.
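Two elementary facts are invoked repeatedly in this proof: what we read as (15), the bound $\|u + v\|^2 \leq 2\|u\|^2 + 2\|v\|^2$, and (16), the bias-variance decomposition $\mathrm{E}\left[\|X\|^2\right] = \mathrm{E}\left[\|X - \mathrm{E}[X]\|^2\right] + \|\mathrm{E}[X]\|^2$. Both are stated earlier in the paper; the identification with these equation numbers is our reading of how they are used here. A minimal numeric sanity check (the dimensions and Gaussian model are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# (15): ||u + v||^2 <= 2||u||^2 + 2||v||^2 for arbitrary vector pairs;
# equivalent to ||u - v||^2 >= 0, so it never fails.
u, v = rng.normal(size=(1000, 4)), rng.normal(size=(1000, 4))
lhs15 = np.sum((u + v) ** 2, axis=1)
rhs15 = 2 * np.sum(u ** 2, axis=1) + 2 * np.sum(v ** 2, axis=1)
print(bool(np.all(lhs15 <= rhs15 + 1e-9)))  # True

# (16): E||X||^2 = E||X - E[X]||^2 + ||E[X]||^2, checked by Monte Carlo
# for a Gaussian with a nonzero mean mu.
mu = np.array([1.0, -2.0, 0.5, 3.0])
X = mu + rng.normal(size=(500_000, 4))
second_moment = np.mean(np.sum(X ** 2, axis=1))      # E||X||^2
variance = np.mean(np.sum((X - mu) ** 2, axis=1))    # E||X - E[X]||^2
print(second_moment, variance + float(mu @ mu))      # nearly equal
```
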
Considering Lemma 13, Lemma 14, and the law of total expectation, we obtain

$$
\begin{array}{l} \mathrm {E} \left[ f \left(x ^ {t + 1}\right) \right] + \kappa \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \eta \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \mathrm {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \left(\| g ^ {t} - h ^ {t} \| ^ {2} + \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2}\right) \right] \\ + \kappa \mathrm {E} \left[ \mathrm {E} _ {k} \left[ \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] \right] \right] \right] \right] \\ + \eta \mathrm {E} \left[ \mathrm {E} _ {k} \left[ \mathrm {E} _ {\mathcal {C}} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \right] \right] \right] \right] \\ + \nu \mathrm {E} \left[ \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] \right] \right] \right] \\ 
+ \rho \mathrm {E} \left[ \mathrm {E} _ {k} \left[ \mathrm {E} _ {p _ {\mathrm {a}}} \left[ \mathrm {E} _ {p _ {\mathrm {m e g a}}} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} \left(x ^ {t + 1}\right)\right\| ^ {2} \right]\right]\right]\right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \| \nabla f (x ^ {t}) \| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} + \gamma \left(\| g ^ {t} - h ^ {t} \| ^ {2} + \| h ^ {t} - \nabla f (x ^ {t}) \| ^ {2}\right) \right] \\ + \kappa \mathrm {E} \left(\frac {2 (1 - p _ {\text {m e g a}}) \omega}{n p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. + \left(\frac {\left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) a ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} + \frac {2 \left(1 - p _ {\mathrm {m e g a}}\right) a ^ {2} \omega}{n ^ {2} p _ {\mathrm {a}}}\right) \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2}\right) \\ + \eta \mathrm {E} \left(\frac {2 (1 - p _ {\text {m e g a}}) \omega}{p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. 
+ \left(\frac {\left(1 - p _ {\mathrm {a}}\right) a ^ {2}}{p _ {\mathrm {a}}} + \frac {2 \left(1 - p _ {\mathrm {m e g a}}\right) a ^ {2} \omega}{p _ {\mathrm {a}}}\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2}\right) \\ + \nu \mathrm {E} \left( \right.\frac {2 b ^ {2} \sigma^ {2}}{n p _ {\text {m e g a}} p _ {\mathrm {a}} B ^ {\prime}} + \left(\frac {2 p _ {\text {m e g a}} L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B ^ {\prime}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} + \frac {(1 - p _ {\text {m e g a}}) L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \\ \left. + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + \left(p _ {\mathrm {m e g a}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + \left(1 - p _ {\mathrm {m e g a}}\right)\right) \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}\right) \\ + \rho \mathrm {E} \left( \right.\frac {2 b ^ {2} \sigma^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {m e g a}} B ^ {\prime}} + \left(\frac {2 p _ {\mathrm {m e g a}} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B ^ {\prime}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + \frac {(1 - p _ {\mathrm {m e g a}}) L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \\ \end{array} +$$ + +$$ ++ \frac {2 (1 - p _ {\mathrm {a}}) b ^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} + \left(p _ {\mathrm {m e g a}} \left(1 - \frac {b}{p _ 
{\mathrm {m e g a}}}\right) ^ {2} + (1 - p _ {\mathrm {m e g a}})\right) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2}. +$$ + +Let us simplify the last inequality. Since $B' \geq B$ and $b = \frac{p_{\mathrm{mega}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}} \leq p_{\mathrm{mega}}$ , we have $1 - p_{\mathrm{mega}} \leq 1$ , + +$$ +\frac {2 p _ {\mathrm {m e g a}} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B ^ {\prime}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} \leq \frac {2 p _ {\mathrm {m e g a}} L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B}, +$$ + +$$ +\left(p _ {\text {m e g a}} \left(1 - \frac {b}{p _ {\text {m e g a}}}\right) ^ {2} + (1 - p _ {\text {m e g a}})\right) \leq 1 - b, +$$ + +and + +$$ +\left(\frac {2 \left(1 - p _ {\mathrm {a}}\right) b ^ {2}}{p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} + p _ {\mathrm {m e g a}} \left(1 - \frac {b}{p _ {\mathrm {m e g a}}}\right) ^ {2} + (1 - p _ {\mathrm {m e g a}})\right) \leq 1 - b. +$$ + +Thus + +$$ +\begin{array}{l} \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \kappa \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \eta \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) - \frac {\gamma}{2} \left\| \nabla f (x ^ {t}) \right\| ^ {2} - \left(\frac {1}{2 \gamma} - \frac {L}{2}\right) \left\| x ^ {t + 1} - x ^ {t} \right\| ^ {2} + \gamma \left(\left\| g ^ {t} - h ^ {t} \right\| ^ {2} + \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2}\right) \right] \\ + \kappa \mathrm {E} \left(\frac {2 \omega}{n p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} 
^ {2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. + \frac {\left(\left(2 \omega + 1\right) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) a ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2}} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g ^ {t} - h ^ {t} \right\| ^ {2}\right) \\ + \eta \mathrm {E} \left(\frac {2 \omega}{p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. + \frac {(2 \omega + 1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} + (1 - a) ^ {2} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2}\right) \\ + \nu \mathrm {E} \left(\frac {2 b ^ {2} \sigma^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} B ^ {\prime}} + \left(\frac {2 L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. + \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n ^ {2} p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} + (1 - b) \left\| h ^ {t} - \nabla f \left(x ^ {t}\right) \right\| ^ {2}\right) \\ + \rho \mathrm {E} \left(\frac {2 b ^ {2} \sigma^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {m e g a}} B ^ {\prime}} + \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right. \\ \left. + (1 - b) \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2}\right). 
\\ \end{array} +$$ + +After rearranging the terms, we get + +$$ +\begin{array}{l} \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \kappa \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \eta \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left\| \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {2 \kappa \omega}{n p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) - \frac {2 \eta \omega}{p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. 
- \nu \left(\frac {2 L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(\gamma + \kappa (1 - a) ^ {2}\right) \operatorname {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] \\ + \left(\kappa \frac {\left((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) a ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \eta \left(\frac {(2 \omega + 1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} + (1 - a) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + (\gamma + \nu (1 - b)) E \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\nu \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} + \rho (1 - b)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right] \\ + \left(\frac {2 \nu b ^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} + \frac {2 \rho b ^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {m e g a}}}\right) \frac {\sigma^ {2}}{B ^ {\prime}}. 
\\ \end{array} +$$ + +Let us take $\kappa = \frac{\gamma}{a}$ , thus $\gamma + \kappa (1 - a)^2 \leq \kappa$ and + +$$ +\begin{array}{l} \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \frac {\gamma}{a} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \eta \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left| \left| \nabla f (x ^ {t}) \right| \right| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {2 \gamma \omega}{a n p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) - \frac {2 \eta \omega}{p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. - \nu \left(\frac {2 L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma}{a} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] \\ \left. + \left(\frac {\gamma \left((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) a}{n p _ {\mathrm {a}} ^ {2}} + \eta \left(\frac {(2 \omega + 1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} + (1 - a) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \right. 
\\ + (\gamma + \nu (1 - b)) \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\nu \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} + \rho (1 - b)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right] \\ + \left(\frac {2 \nu b ^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} + \frac {2 \rho b ^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {m e g a}}}\right) \frac {\sigma^ {2}}{B ^ {\prime}}. \\ \end{array}
$$

Next, since $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$, we have $\left(\frac{(2\omega + 1 - p_{\mathrm{a}})a^2}{p_{\mathrm{a}}} + (1 - a)^2\right) \leq 1 - a$. With the choice $\eta = \frac{\gamma((2\omega + 1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2}$, we guarantee $\frac{\gamma((2\omega + 1)p_{\mathrm{a}} - p_{\mathrm{aa}})a}{np_{\mathrm{a}}^2} + \eta \left(\frac{(2\omega + 1 - p_{\mathrm{a}})a^2}{p_{\mathrm{a}}} + (1 - a)^2\right) \leq \eta$ and

$$
\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \end{array}
$$

$$
\begin{array}{l} \leq \mathrm {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \mathrm {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {2 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ 
{2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) - \frac {2 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \omega}{n p _ {\mathrm {a}} ^ {3}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ - \nu \left(\frac {2 L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right) \Bigg) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + (\gamma + \nu (1 - b)) E \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\nu \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} + \rho (1 - b)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left| \left| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right| \right| ^ {2} \right] \\ + \left(\frac {2 \nu b ^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} + \frac {2 \rho b ^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {m e g a}}}\right) \frac {\sigma^ {2}}{B ^ {\prime}} \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \hat {L} ^ {2}\right) \right. \\ \left. 
- \nu \left(\frac {2 L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + (\gamma + \nu (1 - b)) \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\nu \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} + \rho (1 - b)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left| \left| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right| \right| ^ {2} \right] \\ + \left(\frac {2 \nu b ^ {2}}{n p _ {\text {m e g a}} p _ {\mathrm {a}}} + \frac {2 \rho b ^ {2}}{p _ {\mathrm {a}} p _ {\text {m e g a}}}\right) \frac {\sigma^ {2}}{B ^ {\prime}}, \\ \end{array} +$$ + +where we simplified the term using $p_{\mathrm{aa}} \geq 0$.
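For completeness, the inequality $\frac{(2\omega + 1 - p_{\mathrm{a}})a^2}{p_{\mathrm{a}}} + (1 - a)^2 \leq 1 - a$ used above is a one-line computation: since $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ implies $(2\omega + 1)a = p_{\mathrm{a}}$, we have

$$
\frac{(2\omega + 1 - p_{\mathrm{a}})a^2}{p_{\mathrm{a}}} + (1 - a)^2 = \frac{(2\omega + 1)a}{p_{\mathrm{a}}} \cdot a - a^2 + 1 - 2a + a^2 = a + 1 - 2a = 1 - a,
$$

so the bound in fact holds with equality.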
Let us take $\nu = \frac{\gamma}{b}$ to obtain + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \hat {L} ^ {2}\right) \right. \\ \left. 
- \left(\frac {2 \gamma L _ {\sigma} ^ {2}}{b n p _ {\mathrm {a}} B} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} + \left(\frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} + \rho (1 - b)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right] \\ + \left(\frac {2 \gamma b}{n p _ {\text {m e g a}} p _ {\mathrm {a}}} + \frac {2 \rho b ^ {2}}{p _ {\mathrm {a}} p _ {\text {m e g a}}}\right) \frac {\sigma^ {2}}{B ^ {\prime}}. 
\\ \end{array} +$$ + +Next, we take $\rho = \frac{2\gamma(p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2p_{\mathrm{mega}}}$ , thus + +$$ +\begin{array}{l} \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. 
- \left(\frac {2 \gamma L _ {\sigma} ^ {2}}{b n p _ {\mathrm {a}} B} + \frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}}\right) - \left(\frac {2 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}}\right) \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {2 \gamma b}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} + \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b ^ {2}}{n p _ {\mathrm {a}} ^ {3} p _ {\mathrm {m e g a}} ^ {2}}\right) \frac {\sigma^ {2}}{B ^ {\prime}}. 
\\ \end{array} +$$ + +Since $\frac{p_{\mathrm{mega}}p_{\mathrm{a}}}{2} \leq b \leq p_{\mathrm{mega}}p_{\mathrm{a}}$ and $1 - p_{\mathrm{a}} \leq 1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}} \leq 1$ , we get + +$$ +\begin{array}{l} \mathrm {E} \left[ f \left(x ^ {t + 1}\right) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. 
- \left(\frac {4 \gamma L _ {\sigma} ^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} ^ {2} B} + \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} ^ {3}}\right) - \left(\frac {4 \gamma L _ {\sigma} ^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} ^ {2} B} + \frac {4 \gamma (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} ^ {2}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {6 \gamma \sigma^ {2}}{n B ^ {\prime}} \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \left. \right. 
- \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) - \left(\frac {8 \gamma L _ {\sigma} ^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} ^ {2} B} + \frac {8 \gamma \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} ^ {2}}\right)\right) E \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {6 \gamma \sigma^ {2}}{n B ^ {\prime}}. 
\\ \end{array} +$$ + +Using Lemma 4 and the assumption about $\gamma$ , we get + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t + 1} - h ^ {t + 1} \| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \frac {\gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \frac {\gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \frac {\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \frac {2 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {6 \gamma \sigma^ {2}}{n B ^ {\prime}}. 
\\ \end{array} +$$ + +It is left to apply Lemma 3 with + +$$ +\begin{array}{l} \Psi^ {t} = \frac {(2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + \frac {\left((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ + \frac {1}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \frac {2 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {a}} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ \end{array} +$$ + +and $C = \frac{6\sigma^2}{nB'}$ to conclude the proof. + +Corollary 6. Suppose that assumptions from Theorem 11 hold, probability $p_{\text{mega}} = \min \left\{ \frac{\zeta_{\mathcal{C}}}{d}, \frac{n\varepsilon B}{\sigma^2} \right\}$ , batch size $B' = \Theta \left( \frac{\sigma^2}{n\varepsilon} \right)$ , and $h_i^0 = g_i^0 = \frac{1}{B_{\text{init}}} \sum_{k=1}^{B_{\text{init}}} \nabla f_i(x^0; \xi_{ik}^0)$ for all $i \in [n]$ , initial batch size $B_{\text{init}} = \Theta \left( \frac{B}{p_{\text{mega}} \sqrt{p_{\text{a}}}} \right) = \Theta \left( \max \left\{ \frac{Bd}{\sqrt{p_{\text{a}}} \zeta_{\mathcal{C}}}, \frac{\sigma^2}{\sqrt{p_{\text{a}}} n\varepsilon} \right\} \right)$ , then DASHA-PP-SYNC-MVR needs + +$$ +T := \mathcal {O} \left(\frac {\Delta_ {0}}{\varepsilon} \left[ L + \left(\frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} + \sqrt {\frac {d}{p _ {\mathrm {a}} ^ {2} \zeta_ {\mathcal {C}} n}}\right) \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {\sigma}{p _ {\mathrm {a}} \sqrt {\varepsilon} n} \left(\frac {\widehat {L}}{\sqrt {B}} + \frac {L _ {\sigma}}{B}\right) \right] + \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon B}\right). 
+$$ + +communication rounds to get an $\varepsilon$ -solution, the expected communication complexity is equal to $\mathcal{O}(d + \zeta_{\mathcal{C}}T)$ , and the expected number of stochastic gradient calculations per node equals $\mathcal{O}(B_{\mathrm{init}} + BT)$ , where $\zeta_{\mathcal{C}}$ is the expected density from Definition 12. + +Proof. Due to the choice of $B'$ , we have + +$$ +\begin{array}{l} \mathrm {E} \left[ \left\| \nabla f (\widehat {x} ^ {T}) \right\| ^ {2} \right] \leq \frac {1}{T} \left[ 2 \Delta_ {0} \left(L + \sqrt {\frac {8 (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {L _ {\sigma} ^ {2}}{B}\right) + \frac {1 6}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} ^ {2}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {L _ {\sigma} ^ {2}}{B}\right)}\right) \right. \\ \left. + \frac {4}{p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \frac {4 \left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right)}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2} \right] \\ + \frac {2 \varepsilon}{3}. 
\\ \end{array} +$$ + +Using + +$$ +\operatorname {E} \left[ \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} \right] = \operatorname {E} \left[ \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \frac {1}{B _ {\mathrm {i n i t}}} \sum_ {k = 1} ^ {B _ {\mathrm {i n i t}}} \nabla f _ {i} (x ^ {0}; \xi_ {i k} ^ {0}) - \nabla f (x ^ {0}) \right\| ^ {2} \right] \leq \frac {\sigma^ {2}}{n B _ {\mathrm {i n i t}}} +$$ + +and + +$$ +\frac {1}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathrm {E} \left[ \left\| h _ {i} ^ {0} - \nabla f _ {i} \left(x ^ {0}\right) \right\| ^ {2} \right] = \frac {1}{n ^ {2}} \sum_ {i = 1} ^ {n} \mathrm {E} \left[ \left\| \frac {1}{B _ {\mathrm {i n i t}}} \sum_ {k = 1} ^ {B _ {\mathrm {i n i t}}} \nabla f _ {i} \left(x ^ {0}; \xi_ {i k} ^ {0}\right) - \nabla f _ {i} \left(x ^ {0}\right) \right\| ^ {2} \right] \leq \frac {\sigma^ {2}}{n B _ {\mathrm {i n i t}}}, +$$ + +we have + +$$ +\begin{array}{l} \mathrm {E} \left[ \left\| \nabla f (\widehat {x} ^ {T}) \right\| ^ {2} \right] \leq \frac {1}{T} \left[ 2 \Delta_ {0} \left(L + \sqrt {\frac {8 (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {L _ {\sigma} ^ {2}}{B}\right) + \frac {1 6}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} ^ {2}} \left(\left(1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2} + \frac {L _ {\sigma} ^ {2}}{B}\right)}\right) \right. \\ \left. + \frac {8 \sigma^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} B _ {\mathrm {i n i t}}} \right] \\ + \frac {2 \varepsilon}{3}. \\ \end{array} +$$ + +Therefore, we can take the following $T$ to get an $\varepsilon$-solution:
+ +$$ +T = \mathcal {O} \left(\frac {1}{\varepsilon} \left[ \Delta_ {0} \left(L + \sqrt {\frac {\omega^ {2}}{n p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {L _ {\sigma} ^ {2}}{B}\right) + \frac {1}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} ^ {2}} \left(\widehat {L} ^ {2} + \frac {L _ {\sigma} ^ {2}}{B}\right)}\right) + \frac {\sigma^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} B _ {\mathrm {i n i t}}} \right]\right) +$$ + +Considering the choice of $p_{\mathrm{mega}}$ and $B_{\mathrm{init}}$ , we obtain + +$$ +\begin{array}{l} T = \mathcal {O} \left(\frac {1}{\varepsilon} \left[ \Delta_ {0} \left(L + \left(\frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} + \sqrt {\frac {d}{p _ {\mathrm {a}} ^ {2} \zeta_ {\mathcal {C}} n}}\right) \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {\sigma}{p _ {\mathrm {a}} \sqrt {\varepsilon} n} \left(\frac {\widehat {L}}{\sqrt {B}} + \frac {L _ {\sigma}}{B}\right)\right) + \frac {\sigma^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}} B _ {\mathrm {i n i t}}} \right]\right) \\ = \mathcal {O} \left(\frac {\Delta_ {0}}{\varepsilon} \left[ L + \left(\frac {\omega}{p _ {\mathrm {a}} \sqrt {n}} + \sqrt {\frac {d}{p _ {\mathrm {a}} ^ {2} \zeta_ {\mathcal {C}} n}}\right) \left(\widehat {L} + \frac {L _ {\sigma}}{\sqrt {B}}\right) + \frac {\sigma}{p _ {\mathrm {a}} \sqrt {\varepsilon} n} \left(\frac {\widehat {L}}{\sqrt {B}} + \frac {L _ {\sigma}}{B}\right) \right] + \frac {\sigma^ {2}}{\sqrt {p _ {\mathrm {a}}} n \varepsilon B}\right). \\ \end{array} +$$ + +The expected communication complexity equals $\mathcal{O}(d + p_{\mathrm{mega}}d + (1 - p_{\mathrm{mega}})\zeta_{\mathcal{C}}) = \mathcal{O}(d + \zeta_{\mathcal{C}})$ and the expected number of stochastic gradient calculations per node equals $\mathcal{O}\left(B_{\mathrm{init}} + p_{\mathrm{mega}}B' + (1 - p_{\mathrm{mega}})B\right) = \mathcal{O}\left(B_{\mathrm{init}} + B\right)$ . + +Theorem 13. 
Suppose that Assumptions 1, 2, 3, 5, 6, 7, 8 and 9 hold. Let us take $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , $b = \frac{p_{\mathrm{mega}} p_{\mathrm{a}}}{2 - p_{\mathrm{a}}}$ , probability $p_{\mathrm{mega}} \in (0, 1]$ , batch size $B' \geq B \geq 1$ , stepsize + +$$ +\gamma \leq \min \left\{\left(L + \sqrt {\frac {1 6 (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) + \left(\frac {4 8 L _ {\sigma} ^ {2}}{n p _ {m e g a} p _ {\mathrm {a}} ^ {2} B} + \frac {2 4 (1 - \frac {p _ {\mathrm {a a}}}{p _ {\mathrm {a}}}) \widehat {L} ^ {2}}{n p _ {m e g a} p _ {\mathrm {a}} ^ {2}}\right)}\right) ^ {- 1}, \frac {a}{2 \mu}, \frac {b}{2 \mu} \right\}, +$$ + +and $h_i^0 = g_i^0$ for all $i \in [n]$ in Algorithm 8. Then + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {T}) - f ^ {*} \right] \\ \leq (1 - \gamma \mu) ^ {T} \left(\Delta_ {0} + \frac {2 \gamma}{b} \left\| h ^ {0} - \nabla f (x ^ {0}) \right\| ^ {2} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {m e g a}} \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {0} - \nabla f _ {i} (x ^ {0}) \right\| ^ {2}\right) + \frac {2 0 \sigma^ {2}}{\mu n B ^ {\prime}}. \\ \end{array} +$$ + +Proof. Let us fix constants $\kappa, \eta, \nu, \rho \in [0, \infty)$ that we will define later.
As in the proof of Theorem 11, we can get + +$$ +\begin{array}{l} \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \kappa \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \eta \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {2 \kappa \omega}{n p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) - \frac {2 \eta \omega}{p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. - \nu \left(\frac {2 L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(\gamma + \kappa (1 - a) ^ {2}\right) \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] \\ \left. + \left(\kappa \frac {\left((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) a ^ {2}}{n p _ {\mathrm {a}} ^ {2}} + \eta \left(\frac {(2 \omega + 1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} + (1 - a) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \right. 
\\ + (\gamma + \nu (1 - b)) \operatorname {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\nu \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} + \rho (1 - b)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left| \left| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right| \right| ^ {2} \right] \\ + \left(\frac {2 \nu b ^ {2}}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} + \frac {2 \rho b ^ {2}}{p _ {\mathrm {a}} p _ {\mathrm {m e g a}}}\right) \frac {\sigma^ {2}}{B ^ {\prime}}. \\ \end{array} +$$ + +Let us take $\kappa = \frac{2\gamma}{a}$ , thus $\gamma + \kappa (1 - a)^2 \leq \left(1 - \frac{a}{2}\right)\kappa$ and + +$$ +\begin{array}{l} \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \frac {2 \gamma}{a} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \eta \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} \left[ f (x ^ {t}) \right] - \frac {\gamma}{2} \operatorname {E} \left[ \left| \left| \nabla f (x ^ {t}) \right| \right| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {4 \gamma \omega}{a n p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) - \frac {2 \eta \omega}{p _ {\mathrm {a}}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \end{array} +$$ + +$$ +\begin{array}{l} \left. 
- \nu \left(\frac {2 L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(1 - \frac {a}{2}\right) \frac {2 \gamma}{a} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] \\ \left. + \left(\frac {2 \gamma \left((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) a}{n p _ {\mathrm {a}} ^ {2}} + \eta \left(\frac {(2 \omega + 1 - p _ {\mathrm {a}}) a ^ {2}}{p _ {\mathrm {a}}} + (1 - a) ^ {2}\right)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \right. \\ + (\gamma + \nu (1 - b)) E \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\nu \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} + \rho (1 - b)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left| \left| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right| \right| ^ {2} \right] \\ + \left(\frac {2 \nu b ^ {2}}{n p _ {\text {m e g a}} p _ {\mathrm {a}}} + \frac {2 \rho b ^ {2}}{p _ {\mathrm {a}} p _ {\text {m e g a}}}\right) \frac {\sigma^ {2}}{B ^ {\prime}}. \\ \end{array} +$$ + +Next, since $a = \frac{p_{\mathrm{a}}}{2\omega + 1}$ , we have $\left(\frac{(2\omega + 1 - p_{\mathrm{a}})a^2}{p_{\mathrm{a}}} + (1 - a)^2\right) \leq 1 - a$ . 
With the choice $\eta = \frac{2\gamma((2\omega + 1)p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2}$ , we guarantee $\frac{\gamma((2\omega + 1)p_{\mathrm{a}} - p_{\mathrm{aa}})a}{np_{\mathrm{a}}^2} + \eta \left(\frac{(2\omega + 1 - p_{\mathrm{a}})a^2}{p_{\mathrm{a}}} + (1 - a)^2\right) \leq \left(1 - \frac{a}{2}\right)\eta$ and + +$$ +\operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {2 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] +$$ + +$$ +\begin{array}{l} + \nu \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {8 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left.
- \nu \left(\frac {2 L _ {\sigma} ^ {2}}{n p _ {\mathrm {a}} B} + \frac {2 (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(1 - \frac {a}{2}\right) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] \\ + \left(1 - \frac {a}{2}\right) \frac {2 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + (\gamma + \nu (1 - b)) \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\nu \frac {2 \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right) b ^ {2}}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} + \rho (1 - b)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left| \left| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right| \right| ^ {2} \right] \\ + \left(\frac {2 \nu b ^ {2}}{n p _ {\text {m e g a}} p _ {\mathrm {a}}} + \frac {2 \rho b ^ {2}}{p _ {\mathrm {a}} p _ {\text {m e g a}}}\right) \frac {\sigma^ {2}}{B ^ {\prime}}, \\ \end{array} +$$ + +where we simplified the term using $p_{\mathrm{aa}} \geq 0$.
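For completeness, the contraction claimed for $\kappa = \frac{2\gamma}{a}$ above, namely $\gamma + \kappa(1 - a)^2 \leq \left(1 - \frac{a}{2}\right)\kappa$, is elementary algebra: multiplying both sides by $\frac{a}{2\gamma}$, it is equivalent to

$$
\frac{a}{2} + (1 - a)^2 \leq 1 - \frac{a}{2} \quad \Longleftrightarrow \quad a^2 \leq a,
$$

which holds since $a = \frac{p_{\mathrm{a}}}{2\omega + 1} \in (0, 1]$.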
Let us take $\nu = \frac{2\gamma}{b}$ to obtain + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {2 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \rho \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {8 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. 
- \left(\frac {4 \gamma L _ {\sigma} ^ {2}}{b n p _ {\mathrm {a}} B} + \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}}\right) - \rho \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(1 - \frac {a}{2}\right) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \left(1 - \frac {a}{2}\right) \frac {2 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] \\ + \left(\frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} + \rho (1 - b)\right) \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} \left(x ^ {t}\right) \right\| ^ {2} \right] \\ + \left(\frac {4 \gamma b}{n p _ {\text {m e g a}} p _ {\mathrm {a}}} + \frac {2 \rho b ^ {2}}{p _ {\mathrm {a}} p _ {\text {m e g a}}}\right) \frac {\sigma^ {2}}{B ^ {\prime}}. \\ \end{array} +$$ + +Next, we take $\rho = \frac{8\gamma(p_{\mathrm{a}} - p_{\mathrm{aa}})}{np_{\mathrm{a}}^2p_{\mathrm{mega}}}$ , thus + +$$ +\begin{array}{l} \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {2 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2
\gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {8 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. - \left(\frac {4 \gamma L _ {\sigma} ^ {2}}{b n p _ {\mathrm {a}} B} + \frac {4 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) \widehat {L} ^ {2}}{b n p _ {\mathrm {a}} ^ {2}}\right) - \left(\frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}}\right) \left(\frac {2 L _ {\sigma} ^ {2}}{p _ {\mathrm {a}} B} + \frac {2 (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{p _ {\mathrm {a}}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(1 - \frac {a}{2}\right) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \left(1 - \frac {a}{2}\right) \frac {2 \gamma \left(\left(2 \omega + 1\right) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right) \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) 
\right\| ^ {2} \right] \\ + \left(\frac {4 \gamma b}{n p _ {\mathrm {m e g a}} p _ {\mathrm {a}}} + \frac {1 6 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}}) b ^ {2}}{n p _ {\mathrm {a}} ^ {3} p _ {\mathrm {m e g a}} ^ {2}}\right) \frac {\sigma^ {2}}{B}, \\ \end{array} +$$ + +Since $\frac{p_{\mathrm{mega}} p_{\mathrm{a}}}{2} \leq b \leq p_{\mathrm{mega}} p_{\mathrm{a}}$ and $1 - p_{\mathrm{a}} \leq 1 - \frac{p_{\mathrm{aa}}}{p_{\mathrm{a}}} \leq 1$ , we get + +$$ +\begin{array}{l} \operatorname {E} \left[ f \left(x ^ {t + 1}\right) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \operatorname {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {2 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \operatorname {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ - \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {8 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) \right. \\ \left. 
- \left(\frac {8 \gamma L _ {\sigma} ^ {2}}{n p _ {\mathrm {mega}} p _ {\mathrm {a}} ^ {2} B} + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {aa}}) \widehat {L} ^ {2}}{n p _ {\mathrm {mega}} p _ {\mathrm {a}} ^ {3}}\right) - \left(\frac {16 \gamma L _ {\sigma} ^ {2}}{n p _ {\mathrm {mega}} p _ {\mathrm {a}} ^ {2} B} + \frac {16 \gamma (1 - p _ {\mathrm {a}}) \widehat {L} ^ {2}}{n p _ {\mathrm {mega}} p _ {\mathrm {a}} ^ {2}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ \end{array}
$$

$$
\begin{array}{l} + \left(1 - \frac {a}{2}\right) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \left(1 - \frac {a}{2}\right) \frac {2 \gamma \left(\left(2 \omega + 1\right) p _ {\mathrm {a}} - p _ {\mathrm {aa}}\right)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right) \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {aa}})}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {mega}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {20 \gamma \sigma^ {2}}{n B ^ {\prime}} \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \left\| \nabla f (x ^ {t}) \right\| ^ {2} \right] \\
- \left(\frac {1}{2 \gamma} - \frac {L}{2} - \frac {8 \gamma (2 \omega + 1) \omega}{n p _ {\mathrm {a}} ^ {2}} \left(\frac {L _ {\sigma} ^ {2}}{B} + \widehat {L} ^ {2}\right) - \left(\frac {24 \gamma L _ {\sigma} ^ {2}}{n p _ {\mathrm {mega}} p _ {\mathrm {a}} ^ {2} B} + \frac {24 \gamma \left(1 - \frac {p _ {\mathrm {aa}}}{p _ {\mathrm {a}}}\right) \widehat {L} ^ {2}}{n p _ {\mathrm {mega}} p _ {\mathrm {a}} ^ {2}}\right)\right) \mathrm {E} \left[ \| x ^ {t + 1} - x ^ {t} \| ^ {2} \right] \\ + \left(1 - \frac {a}{2}\right) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \left(1 - \frac {a}{2}\right) \frac {2 \gamma \left(\left(2 \omega + 1\right) p _ {\mathrm {a}} - p _ {\mathrm {aa}}\right)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right) \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {aa}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {mega}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {20 \gamma \sigma^ {2}}{n B ^ {\prime}}.
\\ \end{array} +$$ + +Using Lemma 4 and the assumption about $\gamma$ , we get + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {2 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \frac {8 \gamma (p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + \left(1 - \frac {a}{2}\right) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t} - h ^ {t} \right\| ^ {2} \right] + \left(1 - \frac {a}{2}\right) \frac {2 \gamma \left((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t} - h _ {i} ^ {t} \right\| ^ {2} \right] \\ + \left(1 - \frac {b}{2}\right) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \left(1 - \frac {b}{2}\right) \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {2 0 \gamma \sigma^ {2}}{n B ^ {\prime}}. 
\\ \end{array} +$$ + +Due to $\gamma \leq \frac{a}{2\mu}$ and $\gamma \leq \frac{b}{2\mu}$ , we have + +$$ +\begin{array}{l} \mathrm {E} \left[ f (x ^ {t + 1}) \right] + \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \left\| g ^ {t + 1} - h ^ {t + 1} \right\| ^ {2} \right] + \frac {2 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| g _ {i} ^ {t + 1} - h _ {i} ^ {t + 1} \right\| ^ {2} \right] \\ + \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t + 1} - \nabla f (x ^ {t + 1}) \right\| ^ {2} \right] + \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t + 1} - \nabla f _ {i} (x ^ {t + 1}) \right\| ^ {2} \right] \\ \leq \operatorname {E} [ f (x ^ {t}) ] - \frac {\gamma}{2} \operatorname {E} \left[ \| \nabla f (x ^ {t}) \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + (1 - \gamma \mu) \frac {2 \gamma ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {a a}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ + (1 - \gamma \mu) \frac {2 \gamma}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + (1 - \gamma \mu) \frac {8 \gamma \left(p _ {\mathrm {a}} - p _ {\mathrm {a a}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {m e g a}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ + \frac {2 0 \gamma \sigma^ {2}}{n B ^ {\prime}}. 
\\ \end{array}
$$

It remains to apply Lemma 11 with

$$
\begin{array}{l} \Psi^ {t} = \frac {2 (2 \omega + 1)}{p _ {\mathrm {a}}} \mathrm {E} \left[ \| g ^ {t} - h ^ {t} \| ^ {2} \right] + \frac {2 ((2 \omega + 1) p _ {\mathrm {a}} - p _ {\mathrm {aa}})}{n p _ {\mathrm {a}} ^ {2}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \| g _ {i} ^ {t} - h _ {i} ^ {t} \| ^ {2} \right] \\ + \quad \frac {2}{b} \mathrm {E} \left[ \left\| h ^ {t} - \nabla f (x ^ {t}) \right\| ^ {2} \right] + \frac {8 \left(p _ {\mathrm {a}} - p _ {\mathrm {aa}}\right)}{n p _ {\mathrm {a}} ^ {2} p _ {\mathrm {mega}}} \mathrm {E} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| h _ {i} ^ {t} - \nabla f _ {i} (x ^ {t}) \right\| ^ {2} \right] \\ \end{array}
$$

and $C = \frac{20\sigma^2}{nB'}$ to conclude the proof.
# A Cross-Moment Approach for Causal Effect Estimation

Yaroslav Kivva

School of Computer and Communication Sciences

EPFL, Lausanne, Switzerland

yaroslav.kivva@epfl.ch

Saber Salehkaleybar

School of Computer and Communication Sciences

EPFL, Lausanne, Switzerland

saber-salehkaleybar@epfl.ch

Negar Kiyavash

College of Management of Technology

EPFL, Lausanne, Switzerland

negar.kiyavash@epfl.ch

# Abstract

We consider the problem of estimating the causal effect of a treatment on an outcome in linear structural causal models (SCM) with latent confounders when we have access to a single proxy variable. Several methods (such as the difference-in-difference (DiD) estimator or negative outcome control) have been proposed for this setting in the literature. However, these approaches require either restrictive assumptions on the data generating model or access to at least two proxy variables. We propose a method to estimate the causal effect using cross moments between the treatment, the outcome, and the proxy variable. In particular, we show that the causal effect can be identified with simple arithmetic operations on the cross moments if the latent confounder in the linear SCM is non-Gaussian. In this setting, the DiD estimator provides an unbiased estimate only in the special case where the latent confounder has exactly the same direct causal effects on the outcomes in the pre-treatment and post-treatment phases. This translates to the common trend assumption in DiD, which we effectively relax.
Additionally, we provide an impossibility result that shows the causal effect cannot be identified if the observational distribution over the treatment, the outcome, and the proxy is jointly Gaussian. Our experiments on both synthetic and real-world datasets showcase the effectiveness of the proposed approach in estimating the causal effect.

# 1 Introduction

Estimating the effect of a treatment (or an action) on an outcome is an important problem in many fields such as healthcare [SJS17], social sciences [Gan10], and economics [IR]. Randomized control trials are the gold standard for estimating causal effects. However, in many applications, performing randomized experiments is too costly or even infeasible, say due to ethical or legal concerns. Thus, estimating the causal effect merely from observational studies is one of the main topics of interest in causal inference. This problem has been studied extensively in two main frameworks: the potential outcome (PO) framework [Rub74] and the structural causal model (SCM) framework [Pea09]. The main quantity of interest in the PO framework is the individual-based response variable, i.e., the value of the outcome for a specific individual in the population under a particular value of the treatment. In the SCM framework, a set of structural causal assignments is defined to describe the data generation mechanism among a set of variables. This set of assignments is often represented by a directed acyclic graph (DAG) that shows the causal relationships among the variables in the model. It can be shown that the two frameworks are logically equivalent in the sense that any theorem in one can be translated to the other [PJS17].

Difference-in-Difference (DiD) $[\mathrm{L}^{+}11]$ is one of the most frequently used non-experimental methods to estimate the effect of a treatment by comparing the average outcome before and after applying the treatment in a treatment and a control group.
In fact, 26 of the 100 most cited papers published by the American Economic Review used some variant of DiD or two-way fixed effects (an extension to multiple groups and time slots) to estimate the causal effect [DCD22]. DiD is an estimation procedure in the PO framework for the setting where we have access to a population partitioned into control and treatment groups. The goal is to estimate the effect of treatment $D$ on outcome $Y$, where $D$ is equal to one if a treatment is given to an individual and zero otherwise. It is also assumed that the value of the outcome is observed just before giving any treatment (this pre-treatment value is denoted by $Z$), and it can be seen as a proxy variable for latent common causes of $D$ and $Y$. The DiD method computes the causal effect by subtracting the difference of the average outcome in the two groups before applying the treatment (i.e., $\mathbb{E}[Z|D = 1] - \mathbb{E}[Z|D = 0]$) from the one after the treatment (i.e., $\mathbb{E}[Y|D = 1] - \mathbb{E}[Y|D = 0]$). It can be shown that the output of DiD is an unbiased estimate of the causal effect under some assumptions, such as the parallel/common trend assumption, which states that the outcome of the treatment group would have followed the same trend as the control group in the absence of the treatment (see (2) for the exact definition).

Although the initial setting of DiD is in the PO framework, its counterpart in the SCM framework was considered in the negative outcome control approach $\mathrm{[SRC^{+}16]}$. A negative outcome variable is a type of proxy variable that is not causally affected by the treatment. The causal graph in this approach is represented in Figure 1, where the unmeasured common cause of $D$ and $Y$ is represented by a latent variable $U$ and $D$ is not a cause of the proxy variable $Z$.
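The DiD computation described above is just two differences of group means. A minimal synthetic sketch (the group means, the common trend, and the effect size are all hypothetical numbers chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-treatment outcomes for the control (D = 0) and treatment (D = 1) groups.
z_control = rng.normal(5.0, 1.0, 10_000)
z_treated = rng.normal(7.0, 1.0, 10_000)

# Post-treatment outcomes: both groups drift by a common trend (+1.0),
# and the treated group additionally receives the true effect (+2.0).
y_control = z_control + 1.0
y_treated = z_treated + 1.0 + 2.0

did = (y_treated.mean() - y_control.mean()) - (z_treated.mean() - z_control.mean())
print(did)  # recovers the true effect, 2.0
```

Because the common trend holds exactly in this construction, the second difference cancels both the group-level confounding (the gap in pre-treatment means) and the shared trend, leaving only the treatment effect.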
The causal effect of $D$ on $Y$ cannot be identified from the observational distribution over $(D,Y,Z)$ because of the common confounder $U$.

![](images/cb044e6dd93913fc994aa3176cb4125c392ad65e65622ce71dab1c64ee893fc2.jpg)
Figure 1: The suggested causal graph in SCM framework for the approaches in DiD and negative outcome control.

However, imposing further assumptions on the SCM, the causal effect of $D$ on $Y$ can become identifiable. Such assumptions include monotonicity [SRC+16], knowledge of the conditional probability $P(Z|U)$ [KP14], or having at least two proxy variables [KP14, MGTT18, $\mathrm{TYC}^{+}20$, CPS+20], all of which may not hold in practice (see related work in Section 4 for a more detailed discussion). Recently, [SGKZ20] considered linear SCMs with non-Gaussian exogenous noise and proposed a method that can identify the causal effect for the causal graph in Figure 1 from the observational distribution over $(D,Y,Z)$. The proposed method is based on solving an over-complete independent component analysis (OICA) [HKO01]. However, given the landscape of the optimization problem, OICA can in practice get stuck in bad local minima and return wrong results [DGZT19].

In this paper, we consider the setup of the causal graph in Figure 1 in a linear SCM where we have access to a proxy variable $Z$. We propose a "Cross-Moment" algorithm that estimates the causal effect using cross moments between the treatment, the outcome, and the proxy variable. Our main contributions are as follows:

- We show that the causal effect can be identified correctly from the observational distribution if there exists $n \in \mathbb{N}$ such that for the latent confounder $U$, we have: $\mathbb{E}[U^n] \neq (n - 1)\mathbb{E}[U^{n - 2}]\mathbb{E}[U^2]$ (Theorem 1). Under an additional mild assumption (Assumption 3), this condition implies that our proposed method can recover the causal effect when $U$ is non-Gaussian.
Additionally, when the observational distribution is jointly Gaussian, we prove that it is impossible to identify the causal effect uniquely (Theorem 3).

- Unlike previous work [SGKZ20, AHZ21, $\mathrm{YGN}^{+}22$], which requires solving an OICA problem, the proposed approach only performs simple arithmetic operations on cross moments. Therefore, it does not suffer from the drawbacks of OICA, such as getting stuck in bad local optima.
- We show that the DiD estimator in general provides a biased estimate of the causal effect on data generated by a linear SCM consistent with the causal graph in Figure 1, unless the latent confounder has exactly the same direct causal effects on the outcomes in the pre-treatment and post-treatment phases. Our proposed method does not require such a strong restriction.

The structure of the paper is as follows. In Section 2, we define the notation and provide some background on the DiD estimator. In Section 3, we describe the Cross-Moment algorithm and show that it recovers the true causal effect under mild assumptions on the distribution of the latent confounder. We also show that the DiD estimator is in general biased if the data generative model follows a linear SCM. In Section 4, we review the related work. In Section 5, we evaluate the proposed algorithm experimentally and show its superior performance compared to the state of the art. Finally, we conclude the paper in Section 6.

# 2 Preliminaries and Notations

Throughout the paper, we denote random variables by capital letters and their realizations by small letters (e.g., $X$ and $x$, respectively). Bold capital letters are used to denote sets of random variables, and their realizations are denoted by bold small letters (e.g., $\mathbf{X}$ and $\mathbf{x}$).
A SCM $\mathcal{M}$ over a set of random variables $\mathbf{V}$ is defined by a set of assignments $\{X := f_X^{\mathcal{M}}(Pa_{\mathcal{G}}(X), \epsilon_X)\}_{X \in \mathbf{V}}$, where $\epsilon_X$ is the exogenous noise corresponding to $X$ and $Pa_{\mathcal{G}}(X) \subseteq \mathbf{V}$. It is assumed that the exogenous noises are mutually independent. Let us denote by $\mathbf{O}$ and $\mathbf{U}$ the sets of observed and unobserved variables in $\mathbf{V}$, respectively. Note that $\mathbf{V} = \mathbf{O} \cup \mathbf{U}$ and $\mathbf{O} \cap \mathbf{U} = \emptyset$.

The set of assignments in SCM $\mathcal{M}$ is commonly represented by a DAG. Let $\mathcal{G} = (\mathbf{V},\mathbf{E})$ be a DAG with the set of vertices $\mathbf{V}$ and the set of edges $\mathbf{E}$. With a slight abuse of notation, we write $\mathbf{V}$ both for the set of random variables and for the set of vertices in the graph, and we use the terms "vertex" and "random variable" interchangeably. Each vertex in the graph represents a random variable, and each directed edge shows a direct causal relationship between a pair of random variables. In particular, we say that $X$ is a parent of $Y$ or, equivalently, $Y$ is a child of $X$ if $(X,Y)\in \mathbf{E}$. We define $Pa_{\mathcal{G}}(X)$ as the set of all parents of $X$ in graph $\mathcal{G}$.

# 2.1 Difference-in-Difference (DiD)

Difference-in-difference (DiD) was proposed in the PO framework in order to estimate the causal effect from observational studies under some assumptions. In this framework, the population under study is divided into control and treatment groups, and only individuals in the treatment group receive the treatment. In particular, the treatment variable $D$ represents the treatment assignment, which is equal to 1 if the treatment was given and 0 otherwise. Let $Y(0)$ and $Y(1)$ be two random variables representing the outcome under treatment value $D = 0$ and $D = 1$, respectively.
Denote the value of the outcome right before administering the treatment by $Z$ and assume it is measurable. Our goal is to obtain the average causal effect in the treatment group, $\mathbb{E}[Y(1) - Y(0)|D = 1]$. The DiD estimate of the average causal effect equals:

$$
\left(\mathbb {E} [ Y | D = 1 ] - \mathbb {E} [ Y | D = 0 ]\right) - \left(\mathbb {E} [ Z | D = 1 ] - \mathbb {E} [ Z | D = 0 ]\right). \tag {1}
$$

This quantity is an unbiased estimate of the average causal effect as long as the following assumptions hold.

- Stable Unit Treatment Value Assumption (SUTVA):

$$
Y = D Y (1) + (1 - D) Y (0).
$$

- Common trend assumption:

$$
\mathbb {E} [ Y (0) - Z (0) | D = 1 ] = \mathbb {E} [ Y (0) - Z (0) | D = 0 ]. \tag {2}
$$

SUTVA states that the potential outcome for each individual is not related to the treatment values of the other individuals. The common trend assumption states that there would be the same "trend" in both groups in the absence of treatment, which allows us to subtract group-specific means of the outcome in estimating the average causal effect in (1).

# 3 Methodology: Cross-Moment Algorithm

In this section, we propose the Cross-Moment algorithm to estimate the causal effect of treatment $D$ on outcome $Y$. Throughout this section, we consider linear SCMs, i.e., each random variable in SCM $\mathcal{M}$ is a linear combination of its parents and its corresponding exogenous noise.
More precisely, the linear assignments in $\mathcal{M}$ for the causal graph in Figure 2 are:

$$
U := \epsilon_ {u},
$$

$$
Z := \alpha_ {z} U + \epsilon_ {z} = \alpha_ {z} \epsilon_ {u} + \epsilon_ {z}, \tag {3}
$$

$$
D := \alpha_ {d} U + \epsilon_ {d} = \alpha_ {d} \epsilon_ {u} + \epsilon_ {d},
$$

$$
Y := \beta D + \gamma U + \epsilon_ {y} = (\alpha_ {d} \beta + \gamma) \epsilon_ {u} + \beta \epsilon_ {d} + \epsilon_ {y}.
$$

![](images/f61d4a5a3b4cfbbe7734e614b534a2b026f9ab84ad4573b0d41ac47050ae488b.jpg)
Figure 2: The considered causal graph with linear assignments in the SCM framework.

Without loss of generality, we assume that $\epsilon_{u},\epsilon_{y},\epsilon_{z},\epsilon_{d}$ are arbitrary random variables with zero mean. This assumption can always be achieved by centering the observational data.

Moreover, we assume that the only observed random variables are given by $\mathbf{O} = \{D,Y,Z\}$. Our goal is to identify $\beta$ (the causal effect of $D$ on $Y$) from the distribution over the observed variables $\mathbf{O}$. We consider SCMs that satisfy the following assumption.

Assumption 1. In the linear SCM given by (3), $\alpha_{z} \neq 0$ and $\mathrm{Var}(\epsilon_d) > 0$.

Assumption 1 is necessary for identifying the causal effect. In particular, if $\alpha_{z} = 0$, the directed edge from $U$ to $Z$ is removed and the causal effect cannot be identified even if all the exogenous noises are non-Gaussian, as shown in [SGKZ20]. Moreover, if $\mathrm{Var}(\epsilon_d) = 0$, then $\epsilon_{d}$ is zero almost surely (as we assumed that all the exogenous noises have zero mean). In this case, we can construct another SCM $\mathcal{M}'$ which encodes the same observational distribution as our original SCM but results in a different value of the causal effect of $D$ on $Y$ compared to the original SCM.
More specifically, in this SCM we delete the directed edge from $U$ to $Y$ and change the structural assignment of $Y$ to $Y := (\beta + \gamma / \alpha_{d}) D + \epsilon_{y}$. Hence, the assumption $\mathrm{Var}(\epsilon_d) > 0$ is necessary for the unique identification of the causal effect.

Under Assumption 1, it can be shown that:

$$
\beta = \frac {\operatorname {Cov} (D , Y) - \frac {\alpha_ {d}}{\alpha_ {z}} \operatorname {Cov} (Y , Z)}{\operatorname {Var} (D) - \frac {\alpha_ {d}}{\alpha_ {z}} \operatorname {Cov} (D , Z)}, \tag {4}
$$

where $\operatorname{Cov}(A, B)$ denotes the covariance of random variables $A$ and $B$ and $\operatorname{Var}(A)$ is the variance of $A$. Thus, $\beta$ is identifiable as long as the ratio $\alpha_d / \alpha_z$ is known, since $\operatorname{Cov}(D, Y)$, $\operatorname{Cov}(Y, Z)$, $\operatorname{Var}(D)$, and $\operatorname{Cov}(D, Z)$ can be obtained from the observational distribution. In the sequel, we will show how this ratio can be learned as long as $\epsilon_u$ has bounded moments.

Assumption 2. For all $n\in \mathbb{N}$, $\mathbb{E}[\epsilon_u^n] < \infty$.

When Assumption 2 holds, the following theorem provides an approach for recovering $\alpha_{d} / \alpha_{z}$.

Theorem 1. For variables $Z$ and $D$ as defined in (3), under Assumption 2, $\frac{\alpha_d}{\alpha_z}$ can be determined uniquely if $\exists n \in \mathbb{N}$ such that:

$$
\mathbb {E} \left[ \hat {\epsilon} _ {u} ^ {n} \right] \neq (n - 1) \mathbb {E} \left[ \hat {\epsilon} _ {u} ^ {n - 2} \right] \mathbb {E} \left[ \hat {\epsilon} _ {u} ^ {2} \right], \tag {5}
$$

where $\hat{\epsilon}_u = \sqrt{\alpha_d\alpha_z}\,\epsilon_u$.

The detailed proof of Theorem 1 is provided in Appendix A.

It is interesting to see for which families of distributions the condition in Theorem 1 is satisfied. Assume (5) is not satisfied.
Recall that from the definition of the SCM (3), $\mathbb{E}[\hat{\epsilon}_u] = 0$ and $\mathbb{E}[\hat{\epsilon}_u^2] = \mathbb{E}[DZ]$. These, in combination with $\mathbb{E}[\hat{\epsilon}_u^n] = (n - 1)\mathbb{E}[\hat{\epsilon}_u^{n - 2}]\mathbb{E}[\hat{\epsilon}_u^2]$ for all $n\in \mathbb{N}$, uniquely determine all the moments of $\hat{\epsilon}_u$. More specifically, recursively solving for $\mathbb{E}[\hat{\epsilon}_u^n]$, we have $\mathbb{E}[\hat{\epsilon}_u^n] = (n - 1)!!\,\mathbb{E}[\hat{\epsilon}_u^2]^{n/2}$ for even $n \geq 2$ and $\mathbb{E}[\hat{\epsilon}_u^n] = 0$ for odd $n \geq 1$, where the double factorial $n!!$ denotes the product of all integers from 1 to $n$ with the same parity as $n$. In particular, the moments of the Gaussian distribution satisfy the aforementioned moment recursion. Therefore, when $\epsilon_u$ is Gaussian, we cannot identify the causal effect. Under a mild technical assumption on $\epsilon_u$ (see Assumption 3 in the following), we can prove that the moments of $\epsilon_u$ uniquely determine its distribution. As a result, as long as $\epsilon_u$ is non-Gaussian, we can identify the causal effect.

![](images/4a5deadb94f8f82f7c39eb67c02db8b8072f4f90c8edc450c1df47b4983732af.jpg)
Figure 3: Example of causal graph extended with two observed covariates, two latent confounders, and corresponding proxy variables.

Assumption 3. We assume that there exists some $s > 0$ such that the power series $\sum_{k} \mathbb{E}[\epsilon_u^k] r^k / k!$ converges for any $0 < r < s$.

Corollary 1. Under Assumptions 1, 2 and 3, the causal effect $\beta$ can be recovered uniquely as long as $\epsilon_{u}$ is not Gaussian.

In [SGKZ20], it was shown that $\beta$ can be recovered as long as all exogenous noises are non-Gaussian. Therefore, our result relaxes the restrictions on the model in [SGKZ20] by allowing $\epsilon_{d},\epsilon_{y},\epsilon_{z}$ to be Gaussian.
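The moment recursion just discussed is easy to probe numerically: for Gaussian samples the gap $\mathbb{E}[x^n] - (n-1)\mathbb{E}[x^{n-2}]\mathbb{E}[x^2]$ is zero (up to sampling error) for every $n$, while for a non-Gaussian noise it is bounded away from zero for some $n$, which is exactly what condition (5) exploits. A quick sketch (the centered exponential is our own choice of non-Gaussian example):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000

def moment_gap(x, n):
    # E[x^n] - (n-1) E[x^{n-2}] E[x^2]; vanishes for all n iff the moments are Gaussian.
    return np.mean(x**n) - (n - 1) * np.mean(x**(n - 2)) * np.mean(x**2)

g = rng.normal(0.0, 1.0, N)           # Gaussian: the recursion holds
e = rng.exponential(1.0, N) - 1.0     # centered exponential: it fails already at n = 3

print([round(moment_gap(g, n), 3) for n in (3, 4, 5)])  # all near 0
print(round(moment_gap(e, 3), 3))  # near the third central moment, 2, far from 0
```

The non-zero gap at $n = 3$ is what lets GetRatio (below) stop at the very first iteration for skewed confounder noise.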
Based on Theorem 1, we present the Cross-Moment algorithm (Algorithm 1), which computes the coefficient $\beta$ from the distribution over the observed variables $Z, D, Y$. Algorithm 1 is comprised of two functions, GetRatio and GetBeta. In the proof of Theorem 1, we show that $|\alpha_d / \alpha_z| = |\mathrm{num} / \mathrm{den}|^{1 / (n - 2)}$ for the smallest $n$ such that $\mathrm{den} \neq 0$, where num and den are defined in lines 6 and 7 of function GetRatio, respectively. Moreover, $\mathbb{E}[DZ]$ has the same sign as $\alpha_d / \alpha_z$, so the sign of the ratio can be recovered from $\mathbb{E}[DZ]$. Thus, in lines 8-10, for the smallest $n$ such that $\mathrm{den} \neq 0$, we obtain the ratio $\alpha_d / \alpha_z$ and then use it in function GetBeta to recover $\beta$.

For ease of presentation, we presented the Cross-Moment algorithm for the specific causal graph in Figure 2 with only one proxy. However, the Cross-Moment algorithm can be utilized in a more general setting with additional covariates and latent confounders, such as in the graph depicted in Figure 3. This generalization is stated in the following theorem.

Theorem 2. Suppose that the linear SCM of (3) with the graph in Figure 2 is extended with observed covariates $\mathbf{X}$, non-Gaussian latent confounders $\mathbf{U}$ and proxy variables $\mathbf{Z}$ such that

- none of the observed covariates is a descendant of any latent variable;
- no latent confounder $U \in \mathbf{U}$ of variables $D$ and $Y$ is an ancestor of any other latent confounder;
- for each latent confounder $U \in \mathbf{U}$ there exists a unique proxy variable $Z \in \mathbf{Z}$ which is not an ancestor of $Y$ ;
- each latent confounder and its unique proxy variable satisfy Assumptions 1, 2 and 3.

Then the causal effect $\beta$ from $D$ to $Y$ can be computed uniquely from the observational distribution.

The proof of the theorem appears in Appendix A.
The main idea of the proof is to reduce the problem to a set of sub-problems, each of which can be solved with the Cross-Moment algorithm. As mentioned earlier, an example of a causal graph satisfying the conditions of the theorem is depicted in Figure 3.

Algorithm 1 Cross-Moment algorithm
1: Function GetBeta(D,Z,Y)
2: ratio := GetRatio(D,Z)
3: $\beta := (\mathbb{E}[DY] - \text{ratio} \cdot \mathbb{E}[YZ]) / (\mathbb{E}[D^2] - \text{ratio} \cdot \mathbb{E}[DZ])$
4: Return $\beta$
1: Function GetRatio(D,Z)
2: findRatio := False
3: $n := 2$
4: while findRatio ≠ True do
5: $n := n + 1$
6: num := $\mathbb{E}[D^{n-1}Z] - (n-1)\mathbb{E}[D^{n-2}]\mathbb{E}[DZ]$
7: den := $\mathbb{E}[Z^{n-1}D] - (n-1)\mathbb{E}[Z^{n-2}]\mathbb{E}[DZ]$
8: if den ≠ 0 then
9: ratio := sign( $\mathbb{E}[DZ]$ ) $\cdot \left|\frac{\text{num}}{\text{den}}\right|^{1/(n-2)}$
10: findRatio := True
11: end if
12: end while
13: Return ratio

# 3.1 Impossibility Result

In the previous sections, we showed that the causal effect $\beta$ can be identified if the distribution of the latent confounder is non-Gaussian. Herein, we show that no algorithm can learn $\beta$ uniquely if the observed variables are jointly Gaussian in any linear SCM defined by (3) satisfying the following assumption.

Assumption 4. In the linear SCM defined by (3), $\alpha_{d} \neq 0, \gamma \neq 0$ and $\mathrm{Var}(\epsilon_z) > 0$ .

Theorem 3. Suppose that the observed variables in the linear SCM defined by (3) are jointly Gaussian. Under Assumptions 1, 2 and 4, the total causal effect $\beta$ cannot be identified uniquely.

The proof of Theorem 3 appears in Appendix A. The key idea in the proof is to show that there exist two linear SCMs that encode the same observational distribution and are consistent with the causal graph in Figure 2, but the causal effect of $D$ on $Y$ takes two different values in these two models.
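In sample form, Algorithm 1 simply replaces each population moment with an empirical average. A minimal NumPy sketch follows; the `tol` threshold for deciding $\mathrm{den} \neq 0$ and the synthetic check at the end are our own illustrative choices, not part of the paper:

```python
import numpy as np

def get_ratio(D, Z, tol=1e-6, max_n=10):
    """Sample analogue of GetRatio: estimates alpha_d / alpha_z."""
    for n in range(3, max_n + 1):
        num = np.mean(D**(n - 1) * Z) - (n - 1) * np.mean(D**(n - 2)) * np.mean(D * Z)
        den = np.mean(Z**(n - 1) * D) - (n - 1) * np.mean(Z**(n - 2)) * np.mean(D * Z)
        if abs(den) > tol:  # smallest n with den != 0
            return np.sign(np.mean(D * Z)) * abs(num / den) ** (1.0 / (n - 2))
    raise RuntimeError("no moment order with nonzero denominator found")

def get_beta(D, Z, Y):
    """Sample analogue of GetBeta."""
    r = get_ratio(D, Z)
    return (np.mean(D * Y) - r * np.mean(Y * Z)) / (np.mean(D**2) - r * np.mean(D * Z))

# Synthetic check on the SCM (3) with centered-exponential noises (illustrative values).
rng = np.random.default_rng(0)
m = 2_000_000
eps = lambda: rng.exponential(1.0, m) - 1.0
U = eps()
Z = 1.0 * U + eps()             # alpha_z = 1.0
D = 0.8 * U + eps()             # alpha_d = 0.8
Y = 2.0 * D + 1.5 * U + eps()   # beta = 2.0, gamma = 1.5
print(get_beta(D, Z, Y))  # should be close to 2.0
```

With centered exponential noises the third moments are nonzero, so the loop already succeeds at $n = 3$.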
Note that it is known that the causal structure is not identifiable in a linear SCM with Gaussian exogenous noises [PJS17]. Our impossibility result here is different from the non-identifiability result in linear Gaussian models. Specifically, in linear Gaussian models, the goal is to recover all the coefficients in the linear SCM from the observational distribution. In our setting, we have additional knowledge of the exact DAG (a restriction on the form of the linear SCM in (3)), and the goal is to identify a specific coefficient (i.e., $\beta$ ) of the linear SCM. Therefore, we have more constraints on the model and need to infer less information about it. Still, we show that the target coefficient $\beta$ cannot be determined for the causal graph in Figure 2 if the observed variables are jointly Gaussian.

# 3.2 Bias in DiD Estimator

Suppose that the data is generated from a linear SCM consistent with the causal graph in Figure 2. We show that the DiD estimator is biased except when the latent variable $U$ has the exact same direct causal effect on $Z$ as it has on $Y$ , i.e., $\alpha_{z} = \gamma$ . Our Cross-Moment algorithm identifies the true causal effect without any such restrictive assumption on the coefficients of the linear SCM.

The DiD estimator is given by the following linear regression $\left[\mathrm{L}^{+}11\right]$ :

$$
\hat{Y} = \hat{\beta}_1 T + \hat{\beta}_2 D + \hat{\beta} D T, \tag {6}
$$

where $\hat{\beta}_1, \hat{\beta}_2$ , and $\hat{\beta}$ are the regression coefficients and $T$ is a binary variable that equals zero in the pre-treatment phase and one otherwise. In the pre-treatment phase, $Z$ (i.e., the outcome before the treatment) is predicted as $\hat{\beta}_2 D$ , and in the post-treatment phase, $Y$ (the outcome after giving treatment to the treatment group) is predicted as $\hat{\beta}_1 + (\hat{\beta} + \hat{\beta}_2)D$ .
In order to obtain the regression coefficients, the expectation of the squared residuals over the population is minimized as follows (see Appendix C for the derivation of the following minimization problem and the subsequent equations in this section):

$$
\min_{\hat{\beta}_1, \hat{\beta}_2, \hat{\beta}} \mathbb{E}[(Z - \hat{\beta}_2 D)^2] + \mathbb{E}[(Y - \hat{\beta}_1 - (\hat{\beta} + \hat{\beta}_2) D)^2].
$$

This results in the following regression coefficients:

$$
\hat{\beta}_1 = 0, \quad \hat{\beta}_2 = \frac{\mathbb{E}[ZD]}{\mathbb{E}[D^2]}, \quad \hat{\beta} = \frac{\mathbb{E}[YD] - \mathbb{E}[ZD]}{\mathbb{E}[D^2]}.
$$

The DiD estimator returns $\hat{\beta}$ in the above equation as the estimate of the causal effect, which is equal to:

$$
\hat{\beta} = \beta + \frac{\alpha_d (\gamma - \alpha_z) \mathbb{E}[U^2]}{\mathbb{E}[D^2]}. \tag {7}
$$

Thus, $\hat{\beta}$ is an unbiased estimate of $\beta$ only when $\gamma = \alpha_z$ . In other words, the latent variable $U$ must have the same direct causal effect on $Z$ and $Y$ . This is akin to the so-called common trend assumption, which says that the average natural drift (here, the effect of $U$ ) is assumed to be the same across both the control and treatment groups. This result is consistent with the findings in [RSBP23], which studied a similar phenomenon in the DiD setting. In summary, whenever the common trend assumption is violated, the DiD estimator is biased.

# 4 Related work

In the past few years, there has been growing interest in the literature in exploiting proxy variables to de-bias the effect of latent confounders. A special type of such proxy variable is the so-called negative outcome, a variable known not to be causally affected by the treatment [LTC10].
For instance, the variable $Z$ in Figure 1 may be considered a negative outcome. In fact, $[ \mathrm{SRC}^{+}16 ]$ interpreted DiD as a negative outcome control approach and proposed a method inspired by changes-in-changes [AI06] to identify the causal effect under the assumption that $Y(0)$ and $Z$ are monotonically increasing functions of latent confounders and some observed covariates.

[KP14] considered three settings in causal inference with proxy variables:

1. There exists only one proxy variable, such as $Z$ , serving as a negative outcome. In this case, for discrete finite variables $Z$ and $U$ , they showed that the causal effect can be identified if $\operatorname{Pr}(Z|U)$ is known from some external study such as a pilot study.
2. Two proxy variables, for instance $Z$ and $W$ , are available, where $U$ , $Z$ , and $W$ are all discrete finite variables and $Z$ does not have a directed path to $D$ or $Y$ . It has been shown that the causal effect is identifiable under some assumptions on the conditional probabilities $\operatorname{Pr}(Y|D,U)$ and $\operatorname{Pr}(Z,W|X)$ . In this setting, it is not necessary to know $\operatorname{Pr}(Z|U)$ , but two proxy variables are required to identify the causal effect.
3. In linear SCMs, [KP14] showed that $\beta$ (the average causal effect of $D$ on $Y$ ) can be recovered using two proxy variables.

Later, [MGTT18] also considered a setting with two proxy variables $Z$ and $W$ . Unlike the second setting in [KP14], here $Z$ and $W$ can be parents of $D$ and $Y$ , respectively. For discrete finite variables, they showed that the causal effect can be identified if the matrix $P(W|Z,D = d)$ is invertible. Moreover, they provided the counterpart of this condition for continuous variables. [SMNTT20] extended the identification result of [MGTT18] with a weaker set of assumptions; still, they required two proxy variables to identify the causal effect.
Based on the results in [MGTT18], [TYC+20] introduced proximal causal inference in the PO framework. Later, [CPS+20] provided an alternative proximal identification result to that of [MGTT18], again when two proxy variables are available. More recently, [SLZ+23] considered the PO framework under linearity assumptions for the treatment and post-treatment phases. In this setting, the authors showed identifiability of the causal effect if, in the pre-treatment phase, the latent confounder jointly with the observed outcome follows a multivariate Gaussian distribution and, in the treatment phase, the exogenous noise of the outcome variable is non-Gaussian.

In linear SCMs, to the best of our knowledge, the methods that can identify the causal effect with only one proxy variable in Figure 2 are based on solving an OICA problem. In particular, [SGKZ20] considered linear SCMs with non-Gaussian exogenous noises in the presence of latent variables. They showed that under some structural conditions, the causal effects among observed variables can be identified, and the causal graph in Figure 2 satisfies such structural conditions. $\mathrm{[YGN^{+}22]}$ extended the results in [SGKZ20]. They showed that the causal structure (direction) can always be identified, and the causal effect can be identified up to equivalence classes depending on the graphical conditions (the causal graph in Figure 2 is still uniquely identifiable). However, both proposed methods [SGKZ20, YGN+22] require solving an OICA problem, and existing algorithms for solving such a problem might get stuck in bad local optima. Recently, [AHZ21] provided two graphical conditions for the same setting as [SGKZ20] which are necessary for the identification of the causal structure. These conditions are closely related to the sparsity of the causal graphs. For the causal graph in Figure 2, the method proposed in [AHZ21] for estimating the causal effect is the same as the one in [SGKZ20] and thus has the same drawback.
Concurrently with our submission, $\mathrm{[CHC^{+}23]}$ considered the problem of causal discovery for linear non-Gaussian models. Under specific assumptions on the true causal graph, they proposed an algorithm for learning the causal structure as well as the causal coefficients using higher-order cumulants.

In the PO framework, the setting of having just a pre-treatment phase and a post-treatment phase can be generalized to the case with multiple time slots in the panel data model $\mathrm{[ABD^{+}21]}$ . In this paper, we mainly focus on the setting with two groups and two time slots, but one can also study extensions of the current work to the other settings in the panel data model described in the following. Consider two $N\times T$ matrices $\mathbf{Y}$ and $\mathbf{D}$ , where $N$ is the number of individuals in the population and $T$ is the number of time slots. Assume that only the outcome for some individuals and time slots is observable. In particular, $Y_{it} = (1 - D_{it})Y_{it}(0) + D_{it}Y_{it}(1)$ , where the realized outcome for individual $i$ at time slot $t$ is denoted by $Y_{it}(D_{it})$ . The DiD method has been proposed for the case $T = 2$ , i.e., two time slots (pre-treatment and post-treatment phases). In the literature, other cases have also been studied under various assumptions on the matrix $\mathbf{Y}$ . For instance, in the unconfounded case [RR83, IR], the number of individuals is much larger than the number of time slots ( $N\gg T$ ), and the treatment is provided only at the last time slot. Another setting is that of synthetic control [AG03, ADH10, ADH15, DI16], where $T\gg N$ . In this setting, there is a single treated individual (say, individual $N$ ) and the goal is to estimate its missing potential outcomes for any $t\in [T_0,T]$ after administering the treatment at time $T_0$ .
The last setting considers $N\approx T$ , and a two-way fixed effects (TWFE) regression model has been proposed to estimate the causal effect (see [DCd20] for a survey on TWFE). It is noteworthy that the TWFE estimator is equivalent to the DiD estimator for two groups and two time slots.

# 5 Experiments

In this section, we first evaluate our algorithm on synthetic data and compare it to the DiD estimator as well as the related work in [KP14], which estimates the causal effect in linear SCMs with two proxy variables. Further, we apply our algorithm to a real dataset provided by [CK93]. The implementation of the algorithm and additional experimental results are provided in https://github.com/ykivva/Cross-Moments-Method.

# 5.1 Synthetic data

We generated samples according to the SCM in (3), with all the exogenous noises following an exponential distribution. Note that the distribution of $\epsilon_{u}$ , i.e., the exponential distribution, satisfies Assumptions 2 and 3. Therefore $\beta$ is identifiable according to Corollary 1.

Given the observational data, we estimated the value of $\beta$ using the following four approaches:

1. Cross-Moment algorithm (proposed in this work).
2. DiD estimator of (6).
3. A simple linear regression model based on the following equation: $\hat{Y} = \alpha Z + \hat{\beta} D$ .
4. Causal effect estimate for linear SCMs with two proxy variables (proposed in [KP14]). In the experiments, we call this estimate the "two-proxy" method.

It is noteworthy that we also evaluated the method in [SGKZ20], which uses OICA as a subroutine. Unfortunately, its performance was too poor to be included.

For each sample size, we sampled the parameters $\alpha_{z},\alpha_{d},\beta ,\gamma$ randomly and then generated the samples of $Z,D,Y$ according to (3) (more details regarding the data generation mechanism can be found in Appendix B).
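For concreteness, this sampling step can be sketched as below (fixed illustrative parameter values rather than the paper's random draws), together with a numerical check of the DiD bias formula (7), using the moment form of the DiD estimate $\hat{\beta} = (\mathbb{E}[YD] - \mathbb{E}[ZD]) / \mathbb{E}[D^2]$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
alpha_z, alpha_d, beta, gamma = 1.0, 0.8, 2.0, 1.5  # illustrative, not the paper's draws

# SCM (3) with centered exponential exogenous noises.
eps = lambda: rng.exponential(1.0, n) - 1.0
U = eps()
Z = alpha_z * U + eps()
D = alpha_d * U + eps()
Y = beta * D + gamma * U + eps()

# Moment form of the DiD estimate and the bias predicted by (7).
beta_did = (np.mean(Y * D) - np.mean(Z * D)) / np.mean(D**2)
predicted = beta + alpha_d * (gamma - alpha_z) * np.mean(U**2) / np.mean(D**2)
print(beta_did, predicted)  # the two nearly coincide; both differ from beta = 2.0
```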
We ran each experiment 10 times and report the average relative error for each sample size: err $= \mathbb{E}\left[\left|\frac{\beta - \hat{\beta}}{\beta}\right|\right]$ .

Figure 4: The performance measure err against the number of samples. Colored regions show the standard deviation of err.
![](images/48dfb25d00cf2c6299966b62d6ebc4fd35f30175ec95d1c7ab22c34f52e93c47.jpg)
(a) The average relative error of Cross-Moment, the DiD estimator, and simple linear regression with one proxy variable.

![](images/c4eb926d22cc3a184d23c8bac1f9f46722a958c537894c83447d3cf4c09f0f40.jpg)
(b) The average relative error of three variants of the Cross-Moment algorithm and the two-proxy method in [KP14] when we have access to two proxy variables.

Figure 4a depicts the performance of the Cross-Moment algorithm, the DiD estimator, and simple linear regression when we have access to only one proxy variable. The colored region around each curve shows the empirical standard deviation of $|(\beta - \hat{\beta}) / \beta|$ . The Cross-Moment algorithm outperforms the other two methods significantly. In fact, the DiD estimator is biased whenever $\alpha_z \neq \gamma$ , which occurs with probability one as $\alpha_z$ and $\gamma$ are generated randomly. Moreover, the DiD estimate is no better than simple linear regression if $\gamma$ is not close to $\alpha_z$ . In the literature, it has been noted that the parallel trend assumption (in a linear SCM, this assumption is equivalent to the condition $\alpha_z = \gamma$ ) is violated if the scales of the proxy variable $Z$ and the outcome variable $Y$ differ, which can be the case in many practical applications [L+11].

We compared Cross-Moment with the two-proxy method in [KP14] when we have access to two proxy variables. In particular, we assumed that there is an additional proxy variable $W$ such that $W := \alpha_w U + \epsilon_w$ .
For the Cross-Moment algorithm, we considered three versions: I - "Cross-Moment: Z", which estimates the causal effect using only the proxy variable $Z$ (we denote this estimate by $\beta_Z$ ); II - "Cross-Moment: W", which estimates $\beta$ using only the proxy variable $W$ (we denote this estimate by $\beta_W$ ); III - "Cross-Moment: W-Z", which estimates $\beta$ by aggregating the estimates of methods I and II. In particular, "Cross-Moment: W-Z" uses the bootstrap (a Monte Carlo algorithm for case resampling [ET94]) to estimate the variances of $\beta_Z$ and $\beta_W$ , denoted by $\sigma_{\beta_Z}^2$ and $\sigma_{\beta_W}^2$ , respectively. Subsequently, $\beta$ is estimated by combining the two estimates $\beta_Z$ and $\beta_W$ with an inverse-variance weighting scheme [SHK11], where we give a higher weight to the estimate with the lower variance: $\frac{\sigma_{\beta_Z}^2}{\sigma_{\beta_Z}^2 + \sigma_{\beta_W}^2} \beta_W + \frac{\sigma_{\beta_W}^2}{\sigma_{\beta_Z}^2 + \sigma_{\beta_W}^2} \beta_Z$ . When $\mathrm{Var}(\epsilon_w) / \mathrm{Var}(W)$ and $\mathrm{Var}(\epsilon_z) / \mathrm{Var}(Z)$ are small, the causal effect can be estimated with low estimation error from either $Z$ or $W$ , as both contain low-noise versions of the latent confounder $U$ . In our experiments, we considered the case where one of the proxy variables (here, $W$ ) is too noisy but the other is not. Specifically, we chose $\mathrm{Var}(\epsilon_w) / \mathrm{Var}(\epsilon_u) = 10$ and $\mathrm{Var}(\epsilon_z) / \mathrm{Var}(\epsilon_u) = 0.1$ . Figure 4b illustrates the performance of the three aforementioned variants of the Cross-Moment algorithm and the two-proxy method in [KP14]. "Cross-Moment: Z" has the best performance since it uses the less noisy proxy variable $Z$ . Moreover, "Cross-Moment: W-Z" achieves comparable performance by combining the estimates $\beta_Z$ and $\beta_W$ .
The two-proxy estimate does not exhibit robustness and has a large average relative error across the various sample sizes.

# 5.2 Minimum Wage and Employment Dataset

We evaluate our method on real data containing information about fast-food stores (Burger King, Roy Rogers, and Wendy's stores) in New Jersey and Pennsylvania in 1992, together with details about them such as minimum wage, product prices, open hours, etc. [CK93]. The goal of the study was to estimate the effect of the increase in the minimum wage in New Jersey from \$4.25 to \$5.05 per hour on the employment rate.
| | TWFE | Cross-Moment |
| --- | --- | --- |
| With $\mathbf{X}$ | 2.68 | 2.68 |
| Without $\mathbf{X}$ | 3.24 | 4.03 |
Table 1: Causal effect estimation of minimum wage on employment level in the real dataset in [CK93].

The data was collected via interviews in two waves, before and after the rise in the minimum wage. The information was gathered from 410 restaurants with similar average food prices, store hours, and employment levels. In this experiment, stores in Pennsylvania are treated as the control group and stores in New Jersey as the treatment group. We define the employment level $Y$ as $Y = Y_{f} + \frac{1}{2} Y_{p}$ , where $Y_{p}$ is the number of employees working part-time and $Y_{f}$ is the number of employees working full-time.

First, we reproduced the results in [CK93]. We considered an extended version of the TWFE model [CK93]:

$$
\hat{Y} = \hat{\beta}_1 T + \mathbf{X}^T \hat{\alpha} + \hat{\beta}_2 D + \hat{\beta} D T,
$$

where $\hat{Y}$ is the estimate of the number of employees in the store, and $T$ is a binary variable that equals 0 prior to raising the minimum wage and 1 after the raise. $D$ equals 0 if the store is in Pennsylvania and 1 if the store is in New Jersey. $\mathbf{X}$ is a vector that contains additional information such as the opening hours, product prices, etc., and $\hat{\alpha}$ is the vector of parameters corresponding to $\mathbf{X}$ . We dropped all stores from the dataset that contain NaN values, after which 246 restaurants were left. The estimate of $\beta$ computed by TWFE is given in the first row of Table 1. According to [CK93], the estimate of $\beta$ is equal to 2.76. The difference in estimation is due to a slight difference in the features of the vector $\mathbf{X}$ , i.e., [CK93] added a few additional manually computed features to $\mathbf{X}$ .
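The extended TWFE regression above can be fit by ordinary least squares on the stacked design matrix of $T$, $\mathbf{X}$, $D$, and $DT$; the following sketch uses synthetic stand-in data (the data, coefficients, and the added intercept column are hypothetical, not the [CK93] dataset):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Hypothetical panel rows: T (pre/post), D (control/treatment), one covariate x.
T = rng.integers(0, 2, n).astype(float)
D = rng.integers(0, 2, n).astype(float)
x = rng.normal(0.0, 1.0, n)
true_beta = 2.5
Y = 1.0 * T + 0.5 * x + 3.0 * D + true_beta * D * T + rng.normal(0.0, 1.0, n)

# Least-squares fit of Y on [1, T, x, D, D*T]; beta is the interaction coefficient.
A = np.column_stack([np.ones(n), T, x, D, D * T])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(coef[-1])  # close to true_beta
```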
For the Cross-Moment algorithm, in order to incorporate the features $\mathbf{X}$ in estimating $\beta$ , we first regressed $Y$ on $\mathbf{X}$ and then used $Y - \mathbf{X}^T \hat{\alpha}$ instead of $Y$ as the outcome. The result of applying the Cross-Moment algorithm to this newly defined outcome is given in the first row of Table 1 and is very close to the estimate by TWFE.

Finally, we assumed that the additional information $\mathbf{X}$ gathered during the interviews is not available. The TWFE model for the employment level then takes the following form:

$$
\hat{Y} = \hat{\beta}_1 T + \hat{\beta}_2 D + \hat{\beta} D T.
$$

We used the previously pre-processed dataset but dropped the columns corresponding to $\mathbf{X}$ . Subsequently, we applied the TWFE and Cross-Moment methods to estimate $\beta$ . The respective estimates appear in the second row of Table 1 and indicate that the rise in the minimum wage had a positive effect on the employment level.

# 6 Conclusion

We considered the problem of estimating the causal effect of a treatment on an outcome in a linear SCM where we have access to a proxy variable for the latent confounder of the treatment and the outcome. This problem has been studied in both the PO framework (e.g., the DiD estimator) and the SCM framework (e.g., the negative outcome control approach). We proposed a method that uses cross moments between the treatment, the outcome, and the proxy variable, and recovers the true causal effect if the latent confounder is non-Gaussian. We also showed that the causal effect cannot be identified if the joint distribution over the observed variables is Gaussian. Unlike previous work, which requires solving an OICA problem, our method performs simple arithmetic operations on the cross moments. We evaluated our proposed method on synthetic and real datasets.
Our experimental results show that the proposed algorithm performs remarkably well on synthetic data and produces results consistent with previous studies on the real dataset we tested.

# Acknowledgments and Disclosure of Funding

This research was in part supported by the Swiss National Science Foundation under NCCR Automation, grant agreement 51NF40_180545, and Swiss SNF project 200021_204355/1.

# References

[ABD+21] Susan Athey, Mohsen Bayati, Nikolay Doudchenko, Guido Imbens, and Khashayar Khosravi. Matrix completion methods for causal panel data models. Journal of the American Statistical Association, 116(536):1716-1730, 2021.
[ADH10] Alberto Abadie, Alexis Diamond, and Jens Hainmueller. Synthetic control methods for comparative case studies: Estimating the effect of California's tobacco control program. Journal of the American Statistical Association, 105(490):493-505, 2010.
[ADH15] Alberto Abadie, Alexis Diamond, and Jens Hainmueller. Comparative politics and the synthetic control method. American Journal of Political Science, 59(2):495-510, 2015.
[AG03] Alberto Abadie and Javier Gardeazabal. The economic costs of conflict: A case study of the Basque Country. American Economic Review, 93(1):113-132, 2003.
[AHZ21] Jeffrey Adams, Niels Hansen, and Kun Zhang. Identification of partially observed linear causal models: Graphical conditions for the non-Gaussian and heterogeneous cases. Advances in Neural Information Processing Systems, 34:22822-22833, 2021.
[AI06] Susan Athey and Guido W Imbens. Identification and inference in nonlinear difference-in-differences models. Econometrica, 74(2):431-497, 2006.
[Bro83] Gavin Brown. Review of P. Billingsley, Probability and Measure (Wiley, 1979). Proceedings of the Edinburgh Mathematical Society, 26(3):398-399, 1983.
[CHC+23] Ruichu Cai, Zhiyi Huang, Wei Chen, Zhifeng Hao, and Kun Zhang.
Causal discovery with latent confounders based on higher-order cumulants. arXiv preprint arXiv:2305.19582, 2023.
[CK93] David Card and Alan B Krueger. Minimum wages and employment: A case study of the fast food industry in New Jersey and Pennsylvania, 1993.
[CPS+20] Yifan Cui, Hongming Pu, Xu Shi, Wang Miao, and Eric Tchetgen Tchetgen. Semiparametric proximal causal inference. arXiv preprint arXiv:2011.08411, 2020.
[DCd20] Clément De Chaisemartin and Xavier d'Haultfoeuille. Two-way fixed effects estimators with heterogeneous treatment effects. American Economic Review, 110(9):2964-96, 2020.
[DCD22] Clément De Chaisemartin and Xavier D'Haultfoeuille. Difference-in-differences estimators of intertemporal treatment effects. Technical report, National Bureau of Economic Research, 2022.
[DGZT19] Chenwei Ding, Mingming Gong, Kun Zhang, and Dacheng Tao. Likelihood-free overcomplete ICA and applications in causal discovery. Advances in Neural Information Processing Systems, 32, 2019.
[DI16] Nikolay Doudchenko and Guido W Imbens. Balancing, regression, difference-in-differences and synthetic control methods: A synthesis. Technical report, National Bureau of Economic Research, 2016.
[ET94] Bradley Efron and Robert J Tibshirani. An introduction to the bootstrap. CRC Press, 1994.
[Gan10] Markus Gangl. Causal inference in sociological research. Annual Review of Sociology, 36:21-47, 2010.
[HKO01] Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. Independent Component Analysis. John Wiley & Sons, 2001.
[IR] Guido W Imbens and Donald B Rubin. Causal inference in statistics, social, and biomedical sciences. Cambridge University Press, 2015.

[KP14] Manabu Kuroki and Judea Pearl. Measurement bias and effect restoration in causal inference. Biometrika, 101(2):423-437, 2014.
[L+11] Michael Lechner et al. The estimation of causal effects by difference-in-difference methods.
Foundations and Trends® in Econometrics, 4(3):165-224, 2011. +[LTC10] Marc Lipsitch, Eric Tchetgen Tchetgen, and Ted Cohen. Negative controls: a tool for detecting confounding and bias in observational studies. Epidemiology (Cambridge, Mass.), 21(3):383, 2010. +[MGTT18] Wang Miao, Zhi Geng, and Eric J Tchetgen Tchetgen. Identifying causal effects with proxy variables of an unmeasured confounder. Biometrika, 105(4):987-993, 2018. +[Pea09] Judea Pearl. Causality. Cambridge university press, 2009. +[PJS17] Jonas Peters, Dominik Janzing, and Bernhard Scholkopf. Elements of causal inference: foundations and learning algorithms. The MIT Press, 2017. +[RR83] Paul R Rosenbaum and Donald B Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41-55, 1983. +[RSBP23] Jonathan Roth, Pedro HC Sant'Anna, Alyssa Bilinski, and John Poe. What's trending in difference-in-differences? a synthesis of the recent econometrics literature. Journal of Econometrics, 2023. +[Rub74] Donald B Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of educational Psychology, 66(5):688, 1974. +[SGKZ20] Saber Salehkaleybar, AmirEmad Ghassami, Negar Kiyavash, and Kun Zhang. Learning linear non-gaussian causal models in the presence of latent variables. J. Mach. Learn. Res., 21:39-1, 2020. +[SHK11] Bimal K Sinha, Joachim Hartung, and Guido Knapp. Statistical meta-analysis with applications. John Wiley & Sons, 2011. +[SJS17] Uri Shalit, Fredrik D Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In International Conference on Machine Learning, pages 3076-3085. PMLR, 2017. +[SLZ+23] Kang Shuai, Shanshan Luo, Yue Zhang, Feng Xie, and Yangbo He. Identification and estimation of causal effects using non-gaussianity and auxiliary covariates. arXiv preprint arXiv:2304.14895, 2023. 
+[SMNTT20] Xu Shi, Wang Miao, Jennifer C Nelson, and Eric J Tchetgen Tchetgen. Multiply robust causal inference with double-negative control adjustment for categorical unmeasured confounding. Journal of the Royal Statistical Society Series B: Statistical Methodology, 82(2):521-540, 2020. +$\left[\mathrm{SRC}^{+}16\right]$ Tamar Sofer, David B Richardson, Elena Colicino, Joel Schwartz, and Eric J Tchetgen Tchetgen. On negative outcome control of unobserved confounding as a generalization of difference-in-differences. Statistical science: a review journal of the Institute of Mathematical Statistics, 31(3):348, 2016. +[TYC+20] Eric J Tchetgen Tchetgen, Andrew Ying, Yifan Cui, Xu Shi, and Wang Miao. An introduction to proximal causal learning. arXiv preprint arXiv:2009.10982, 2020. +[YGN+22] Yuqin Yang, AmirEmad Ghassami, Mohamed Nafea, Negar Kiyavash, Kun Zhang, and Ilya Shpitser. Causal discovery in linear latent variable models subject to measurement error. Advances in Neural Information Processing Systems, 35:874-886, 2022. 
# A Data-Free Approach to Mitigate Catastrophic Forgetting in Federated Class Incremental Learning for Vision Tasks

Sara Babakniya

Computer Science
University of Southern California
Los
Angeles, CA
babakniy@usc.edu

Zalan Fabian

Electrical and Computer Engineering
University of Southern California
Los Angeles, CA
zfabian@usc.edu

Chaoyang He

FedML
Sunnyvale, CA
ch@fedml.ai

Mahdi Soltanolkotabi

Electrical and Computer Engineering
University of Southern California
Los Angeles, CA
soltanol@usc.edu

Salman Avestimehr

Electrical and Computer Engineering
University of Southern California
Los Angeles, CA
avestime@usc.edu

# Abstract

Deep learning models often suffer from forgetting previously learned information when trained on new data. This problem is exacerbated in federated learning (FL), where the data is distributed and can change independently for each user. Many solutions have been proposed to resolve this catastrophic forgetting in the centralized setting. However, they do not apply directly to FL because of its unique complexities, such as privacy concerns and resource limitations. To overcome these challenges, this paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions. This data can later be exploited alongside the training data to mitigate catastrophic forgetting. To preserve privacy, the generative model is trained on the server using data-free methods at the end of each task, without requesting data from clients. Moreover, our solution does not require users to store old data or models, which gives them the freedom to join or leave the training at any time. Additionally, we introduce SuperImageNet, a new regrouping of the ImageNet dataset specifically tailored for federated continual learning. We demonstrate significant improvements compared to existing baselines through extensive experiments on multiple datasets.

# 1 Introduction

Federated learning (FL) [40, 29] is a decentralized machine learning technique that enables privacy-preserving collaborative learning.
In FL, multiple users (clients) train a common (global) model in coordination with a server without sharing personal data. In recent years, FL has attracted tremendous attention in both research and industry and has been successfully employed in various fields, such as autonomous driving [17], next-word prediction [21], health care [13], and many more. + +Despite its popularity, deploying FL in practice requires addressing critical challenges, such as resource limitation and statistical and system heterogeneity [27, 33]. While tackling these challenges is an essential step towards practical and efficient FL, there are still common assumptions in most FL frameworks that are too restrictive in realistic scenarios. + +![](images/577df0bdcec55db2ce4422607af3b5334f46c054ea45ba6289f070d7e800a621.jpg) +Figure 1: In the real world, users constantly change their interests, observe new data, or lose some of the old ones. As a result, the training dataset is divided into different tasks. For example, here, at $\text{Task} = 1$ , the clients' datasets dominantly include pictures of animals, and by the end of the training $\text{Task} = T$ , the trend shifts towards landscapes. + +In particular, one of the most common assumptions is that clients' local data distribution is fixed and does not change over time. However, in real-world applications [49], clients' data constantly evolve due to changes in the environment, trends, or new interests. For example, [6] presents the real-world data of an online shop, suggesting interest in items shifts through seasons. Another example arises in healthcare, where a model trained on old diseases should be able to generalize to new diseases [58]. In such scenarios (Figure 1), the model must rapidly adapt to the incoming data while preserving performance on past data distributions to avoid catastrophic forgetting [28, 39]. 
+ +In the centralized setting, such problems have been explored in continual learning [48, 34] (also called lifelong learning [3] or incremental learning [9, 7] based on the initial settings and assumptions). In recent years, various algorithms have been proposed in Continual Learning (CL) to tackle catastrophic forgetting from different angles and can achieve promising performance in different scenarios. + +Despite all the significant progress, most CL methods are not directly applicable to the federated setting due to inherent differences (Table 1) between the two settings. For instance, experience replay [47] is a popular approach, where a portion of past data points is saved to maintain some representation of previous distributions throughout the training. However, deploying experience replay in FL has resource and privacy limitations. It requires clients to store and keep their data, which may increase the memory usage of already resource-limited clients. Furthermore, users may not be able to store data for more than a specific time due to privacy concerns. Finally, depending solely on the clients to preserve the past is not reliable, as clients leaving means losing their data. + +
| Challenge | Limitation |
| --- | --- |
| Low memory | Clients cannot store many examples |
| Clients drop out | Causes loss of information stored in memory |
| New clients join | New clients only have access to new classes |
| Privacy | Limits data saving and sharing of the clients |
Table 1: Challenges that limit the direct use of continual learning methods in federated settings.

To address the aforementioned problems, we propose MFCL, Mimicking Federated Continual Learning: a privacy-preserving federated continual learning approach without episodic memory. In particular, MFCL is based on training a generative model on the server and sharing it with clients to sample synthetic examples of past data instead of storing the actual data on the client side. The generative model training is data-free in the sense that no form of training data is required from the clients, and only the global model is used in this step. This is particularly important because this step does not require powerful clients and does not cause any extra data leakage. Finally, this algorithm has competitive performance; our numerical experiments demonstrate an improvement of $10\% - 20\%$ in average accuracy while reducing the training overhead of the clients.

Moreover, benchmarking federated continual learning in practical scenarios requires a large dataset that can be split among tasks and clients. However, existing datasets are not sufficiently large, which has forced most existing works in federated continual learning to evaluate on only a few clients (5 to 20) [45, 24, 52]. To enable more practical evaluations, we release a new regrouping of the ImageNet dataset, SuperImageNet. SuperImageNet enables evaluation with many clients and ensures all clients are assigned sufficient training samples regardless of the total number of tasks and active clients.

We summarize our contributions below:

- We propose a novel framework to tackle the federated class incremental learning problem more efficiently for many users. Our framework specifically targets applications where past data samples on clients are unavailable.
- We point out potential issues with relying on client-side memory for FCL.
Furthermore, we propose using a generative model trained by the server in a data-free manner to help overcome catastrophic forgetting while preserving privacy. +- We modify the client-side training of traditional FL techniques in order to mitigate catastrophic forgetting using a generative model. +- We propose a new regrouping of the ImageNet dataset, SuperImageNet, tailored to federated continual learning settings that can be scaled to a large number of clients and tasks. +- We demonstrate the efficacy of our method in more realistic scenarios with a larger number of clients and more challenging datasets such as CIFAR-100 and TinyImageNet. + +# 2 Related Work + +Continual Learning. Catastrophic forgetting [39] is a fundamental problem in machine learning: when we train a model on new examples, its performance degrades when evaluated on past data. This problem is investigated in continual learning (CL) [59], and the goal is for the model to learn new information while preserving its knowledge of old data. A large body of research has attempted to tackle this problem from different angles, such as adding regularization terms [31, 1, 41], experience replay by storing data in memory [2, 10, 4, 35], training a generative model [56, 53, 32], or architecture parameter isolation [16, 38, 19, 51]. + +In CL settings, the training data is presented to the learner as a sequence of datasets - commonly known as tasks. In each timestamp, only one dataset (task) is available, and the learner's goal is to perform well on all the current and previous tasks. + +Recent work focuses on three main scenarios, namely task-, domain- and class-incremental learning (IL) [54]. In Task-IL, tasks are disjoint, and the output spaces are separated by task IDs provided during training and test time. For Domain-IL, the output space does not change for different tasks, but the task IDs are no longer provided. 
Finally, in Class-IL, new tasks introduce new classes to the output space, and the number of classes increases incrementally. Here, we work on Class-IL, which is the most challenging and realistic of the three, especially in FL: in most FL applications, there is no task ID available, and it is preferable to learn a single model for all the observed data.

Class Incremental Learning. In standard centralized Class-IL, the model is trained on a sequence of $T$ non-overlapping tasks $\{\mathcal{T}^{(1)}, \mathcal{T}^{(2)}, \dots, \mathcal{T}^{(T)}\}$ , where the data distribution of task $t$ , $D^t$ , is fixed but unknown in advance, while all the tasks share the same output space $(\mathcal{V})$ . For task $t$ , $D^t$ consists of $N^t$ pairs of samples and their labels $\{(x_i^t, y_i^t)\}_{i=1}^{N^t}$ , where all the newly introduced classes belong to $\mathcal{V}^t$ , i.e., $y_i^t \in \mathcal{V}^t$ and $\left(\bigcup_{j=1}^{t-1} \mathcal{V}^j\right) \cap \mathcal{V}^t = \emptyset$ . Moreover, a shared output space among all tasks means that at the end of task $t$ , the total number of available classes equals $q = \sum_{i=1}^{t} |\mathcal{V}^i|$ .

Federated Continual Learning. In real-life scenarios, users' local data is not static and may evolve. For instance, users' interests may change over time due to seasonal variations, resulting in more examples for a given class. On the other hand, reliability issues or privacy concerns may lead to users losing part of their old data as well. In Federated Continual Learning (FCL), the main focus is to adapt the global model to new data while maintaining the knowledge of the past.

Even though FCL is an important problem, it has only gained attention very recently; [58] is the first paper on this topic. It focuses on Task-IL, which requires a unique task ID per task during inference. Furthermore, it adopts separate masks per task to improve personalized performance without preserving a common global model.
This setting is considerably different than ours as we target Class-IL with a single global model to classify all the classes seen so far. [37] employs server and client-side knowledge distillation using a surrogate dataset. [15] relaxes the problem as clients have access to large memory to save the old examples and share their data, which is different from the standard FL setting. Some works, such as [26, 44, 52], explore the FCL problem in domains other than image classification. [42] has proposed using variational embedding to send data to the server securely and then server-side training to rehearse the previous task for Domain-IL. + +This work focuses on Class-IL for supervised image classification without memory replay, similar to [45, 24]. However, [24] allows overlapping classes between tasks and focuses on few-shot learning, which is different from the standard Class-IL. The most related work to ours is [45], where authors propose FedCIL. This work also benefits from generative replay to compensate for the absence of old data and overcome forgetting. In FedCIL, clients train the discriminator and generator locally. Then, the server takes a consolidation step after aggregating the updates. In this step, the server generates synthetic data using all the generative models trained by the clients to consolidate the global model and improve the performance. The main difference between this work and ours is that in our work, the generative model is trained by the server in a data-free manner, which can reduce clients' training time and computation and does not require their private data (detailed comparison in Appendix H). + +Data-Free Knowledge Distillation. Knowledge distillation (KD) [25] is a popular method to transfer knowledge from a well-trained teacher model to a (usually) smaller student model. Common KD methods are data-driven, and at least a small portion of training data is required. 
However, in some cases, training data may not be available during knowledge distillation due to privacy concerns. + +To tackle this problem, a new line of work [12, 22] proposes data-free knowledge distillation. In such methods, a generative model is used as a training data substitute. This generative model is trained to generate synthetic images such that the teacher model predicts them as their assigned label (Figure 2). This method has recently become popular in CL [57, 50] as well, mainly due to the fact that it can eliminate the need for memory in preserving knowledge. Data-free KD has been previously used in FL [60] to reduce the effect of data heterogeneity. However, to the best of our knowledge, this is the first work that adapted such a technique in the context of federated continual learning. + +![](images/04488200ac2e20cc10b269bcff1a20b4bd8b73502008d7b2f197ab1b3b67341b.jpg) +Figure 2: Data-Free Knowledge Distillation. The generator receives random noise as input labels and synthesizes images that are labeled correctly by the trained teacher model. + +# 3 Federated Class Incremental Learning with MFCL + +In federated Class-IL, a shared model is trained on $T$ different tasks. However, the distributed and private nature of FL makes it distinct from the centralized version. In FL, users may join, drop out, or change their data independently. Besides, required data or computation power for some centralized algorithms may not be available in FL due to privacy and resource constraints. + +To address the aforementioned problems, we propose MFCL, which is less reliant on the client-side memory and computational power. This algorithm includes two essential parts: first, at the end of each task, the server trains a generative model with data-free knowledge distillation methods to learn the representation of the seen classes. 
Second, clients can reduce catastrophic forgetting by generating synthetic images from the trained generative model obtained from the server. This way, clients are not required to use their memory for storing old data. Moreover, this technique can address the problem of newly connected clients without past data. Furthermore, since the server trains the generative model without additional information, this step does not introduce new privacy issues. Finally, MFCL can help mitigate the data heterogeneity problem, as clients can synthesize samples from classes they do not own [60]. Next, we explain the two key parts of MFCL: the server-side generative model (Figure 3, Left) and client-side continual learning (Figure 3, Right).

# 3.1 Server-Side: Generative Model

The motivation for deploying a generative model is to synthesize images that mimic the old tasks and to avoid storing past data. However, training such generative models on the client side, where the training data exists, is computationally expensive, requires a large amount of training data, and can potentially raise privacy concerns. On the other hand, the server only has access to the global model and aggregated weights, and no data. We propose training a generative model on the server, but in a data-free manner, i.e., utilizing model-inversion image synthesis [57, 50]. In such approaches, the goal is to synthesize images optimized with respect to the discriminator (global model). Then, the
+ +![](images/c19fc42d0172f9c8fc9f9feea5c690bf2ea0c4c598e1a05325911eb5059dc7d2.jpg) + +generative model is shared with the clients to generate images during local training. To this aim, we utilize a generative model with ConvNet architecture, $\mathcal{G}$ , that takes noise $z\sim \mathcal{N}(0,1)$ as input and produces a synthetic sample $\tilde{x}$ , resembling the original training input with the same dimensions. In order to train this model, we must balance the various training objectives we detail next. + +Cross Entropy Loss. First, the synthetic data should be labeled correctly by the current discriminator model (global model or $\mathcal{F}$ ). To this end, we employ cross entropy classification loss between its assigned label $z$ and the prediction of $\mathcal{F}$ on synthetic data $\tilde{x}$ . Note that noise dimension can be arbitrary and greater than the current discovered classes of task $t$ ; therefore, we only consider the first $q$ dimension here, where $q = \sum_{i=1}^{t} |\mathcal{V}^i|$ (which is equal to the total number of classes seen in the previous tasks). Then, we can define the cross-entropy loss as + +$$ +\mathcal {L} _ {C E} = C E (\operatorname {a r g m a x} (z [: q ]), \mathcal {F} (\tilde {x})). \tag {1} +$$ + +Diversity Loss. Synthetic images can suffer from a lack of class diversity. To solve this problem, we utilize the information entropy (IE) loss [12]. For a probability vector $\mathfrak{p} = (p_1, p_2, \dots, p_q)$ , information entropy is evaluated as $\mathcal{H}_{info}(\mathfrak{p}) = -\frac{1}{q}\sum_{i}p_{i}\log(p_{i})$ . Based on the definition, inputs with uniform data distributions have the maximum IE. Hence, to encourage $\mathcal{G}$ to produce diverse samples, we deploy the diversity loss defined as + +$$ +\mathcal {L} _ {d i v} = - \mathcal {H} _ {i n f o} \left(\frac {1}{b s} \sum_ {i = 1} ^ {b s} \mathcal {F} \left(\tilde {x} _ {i}\right)\right). 
\tag{2}
$$

This loss measures the IE over the samples of a batch ($bs$: batch size). Maximizing this term encourages the output distribution of the generator to be more uniform and balanced across all the available classes.

Batch Statistics Loss. Prior works [22, 57, 50] in the centralized setting have recognized that the distribution of synthetic images generated by model-inversion methods can drift from that of real data. Therefore, in order to avoid such problems, we add a batch statistics loss $\mathcal{L}_{BN}$ to our generator training objective. Specifically, the server has access to the statistics (mean and standard deviation) of the global model's BatchNorm layers obtained from training on real data. We want to enforce the same statistics on the generated synthetic images in all BatchNorm layers as well. To this end, we minimize the layer-wise distances between the two sets of statistics, written as

$$
\mathcal{L}_{BN} = \frac{1}{L} \sum_{i=1}^{L} KL\big(\mathcal{N}(\mu_i, \sigma_i^2), \mathcal{N}(\tilde{\mu}_i, \tilde{\sigma}_i^2)\big), \quad KL\big(\mathcal{N}(\mu, \sigma^2), \mathcal{N}(\tilde{\mu}, \tilde{\sigma}^2)\big) = \log \frac{\tilde{\sigma}}{\sigma} - \frac{1}{2} \left(1 - \frac{\sigma^2 + (\mu - \tilde{\mu})^2}{\tilde{\sigma}^2}\right). \tag{3}
$$

Here, $L$ denotes the total number of BatchNorm layers, $\mu_i$ and $\sigma_i$ are the mean and standard deviation stored in BatchNorm layer $i$ of the global model, and $\tilde{\mu}_i$ , $\tilde{\sigma}_i$ are the measured statistics of BatchNorm layer $i$ on the synthetic images. Finally, $KL$ stands for the Kullback-Leibler (KL) divergence.

We want to note that this loss does not rely on the BatchNorm layers themselves but rather on their stored statistics $(\mu_i, \sigma_i)$ . $\mathcal{G}$ aims to generate synthetic images similar to the real ones, such that the global model would not be able to classify them purely based on these statistics.
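Equations (1)-(3) can be sketched numerically in a few lines (a toy, framework-free illustration using plain probability vectors and per-layer (mean, std) pairs; all function names are ours, not from the paper's code):

```python
import math

def cross_entropy(probs, label):
    # Eq. (1): CE between the label assigned to the noise vector
    # and the global model's softmax prediction on the synthetic image.
    return -math.log(probs[label])

def info_entropy(p):
    # H_info(p) = -(1/q) * sum_i p_i * log(p_i), skipping zero entries.
    return -sum(pi * math.log(pi) for pi in p if pi > 0) / len(p)

def diversity_loss(batch_probs):
    # Eq. (2): negative IE of the batch-averaged prediction;
    # minimizing it pushes the mean prediction towards uniform.
    bs, q = len(batch_probs), len(batch_probs[0])
    mean_p = [sum(p[i] for p in batch_probs) / bs for i in range(q)]
    return -info_entropy(mean_p)

def bn_loss(real_stats, synth_stats):
    # Eq. (3): mean layer-wise Gaussian KL between the stored (mu, sigma)
    # and the statistics (mu~, sigma~) measured on synthetic images.
    total = 0.0
    for (mu, sigma), (mu_t, sigma_t) in zip(real_stats, synth_stats):
        total += (math.log(sigma_t / sigma)
                  - 0.5 * (1 - (sigma ** 2 + (mu - mu_t) ** 2) / sigma_t ** 2))
    return total / len(real_stats)
```

For instance, identical real and synthetic statistics give a `bn_loss` of zero, while a batch whose mean prediction is uniform attains the most negative `diversity_loss`.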
One way to achieve this is to ensure that synthetic and real images have similar statistics in the intermediate layers, and this is the role of $\mathcal{L}_{BN}$ . In our experiments, we employed the most common baseline model in CL, which already contains BatchNorm layers and measures those statistics. However, these layers are not a necessity and can be substituted by similar ones, such as GroupNorm. In general, if no normalization layer is used in the model, clients can still compute the running statistics of specific layers and share them with the server, and later, the server can use them in the training of $\mathcal{G}$ .

Image Prior Loss. In natural images, adjacent pixels usually have values close to each other. Adding a prior loss is a common technique to encourage a similar trend in the synthetic images [22]. In particular, we can create a smoothed (blurred) version of an image by applying a Gaussian kernel, and minimize the distance between the original image and $\mathrm{Smooth}(\tilde{x})$ using the image prior loss

$$
\mathcal{L}_{pr} = \left\| \tilde{x} - \mathrm{Smooth}(\tilde{x}) \right\|_{2}^{2}. \tag{4}
$$

In summary, we can write the training objective of $\mathcal{G}$ as Equation (5), where $w_{div}$ , $w_{BN}$ and $w_{pr}$ control the weight of each term:

$$
\min_{\mathcal{G}} \mathcal{L}_{CE} + w_{div} \mathcal{L}_{div} + w_{BN} \mathcal{L}_{BN} + w_{pr} \mathcal{L}_{pr}. \tag{5}
$$

# 3.2 Client-side: Continual Learning

For client-side training, our solution is inspired by the algorithm proposed in [50]. In particular, the authors distill the stability-plasticity dilemma into three critical requirements of continual learning and aim to address them one by one.

Current Task. To have plasticity, the model needs to learn the new features in a way that is least biased towards the old tasks.
Therefore, instead of including all the output space in the loss, the CE loss can be computed for the new classes only by splitting the linear heads and excluding the old ones, which we can write as

$$
\mathcal{L}_{CE}^{t} = \left\{ \begin{array}{ll} CE\left(\mathcal{F}_{t}(x), y\right), & \text{if } y \in \mathcal{V}^{t} \\ 0, & \text{otherwise.} \end{array} \right. \tag{6}
$$

Previous Tasks. To overcome forgetting, after the first task, we train the model using synthetic and real data simultaneously. However, the distribution of the synthetic data might differ from the real one, and it becomes important to prevent the model from distinguishing old and new data only based on the distribution difference. To address this problem, we only use the extracted features of the data. To this aim, clients freeze the feature extraction part and only update the classification head (represented by $\mathcal{F}_t^*$ ) for both real $(x)$ and synthetic $(\tilde{x})$ images. This fine-tuning loss is formulated as

$$
\mathcal{L}_{FT}^{t} = CE\left(\mathcal{F}_{t}^{*}([x, \tilde{x}]), y\right). \tag{7}
$$

Finally, to minimize feature drift and forgetting of the previous tasks, the common method is knowledge distillation over the prediction layer. However, [50] proposed importance-weighted feature distillation: instead of using the knowledge in the decision layer, they use the output of the feature extraction part of the model (penultimate layer). This way, only the more significant features of the old model are transferred, enabling the model to learn the new features from the new tasks.
This loss can be written as

$$
\mathcal{L}_{KD}^{t} = \left\| \mathcal{W}\left(\mathcal{F}_{t}^{1:L-1}([x, \tilde{x}])\right) - \mathcal{W}\left(\mathcal{F}_{t-1}^{1:L-1}([x, \tilde{x}])\right) \right\|_{2}^{2}, \tag{8}
$$

where $\mathcal{W}$ is the frozen linear head of the model trained on the last task $(\mathcal{W} = \mathcal{F}_{t-1}^{L})$ .

In summary, the final objective on the client side is

$$
\min_{\mathcal{F}_{t}} \mathcal{L}_{CE}^{t} + w_{FT} \mathcal{L}_{FT}^{t} + w_{KD} \mathcal{L}_{KD}^{t}, \tag{9}
$$

where $w_{FT}$ and $w_{KD}$ are hyper-parameters determining the importance of each loss term.

# 3.3 Summary of MFCL Algorithm

In summary, during the first task, clients train the model using only the $\mathcal{L}_{CE}$ part of (9) and send their updates to the server, where the global model gets updated (FedAvg) for $R$ rounds. At the end of
+ +# 4 SuperImageNet + +In centralized Class-IL, the tasks are disjoint, and each task reveals a new set of classes; therefore, the total number of classes strongly limits the number of tasks. Moreover, we must ensure that each task has sufficient training data for learning. Thus, the number of examples per class is essential in creating CL datasets. However, the dataset needs to be split along the task dimension and clients in a Federated Class-IL setup. For instance, CIFAR-100, a popular dataset for benchmarking FL algorithms, consists of 100 classes, each with 500 examples, which must be partitioned into $T$ tasks, and each task's data is split among $N$ clients. In other words, for a single task, a client has access to only $\frac{1}{T \times N}$ of that dataset; in a common scenario where $N = 100$ and $T = 10$ , we can assign only 50 samples to each client (about 5 example per class in i.i.d data distribution), which is hardly enough. + +To resolve this problem, prior works have used a small number of clients [45, 24, 52], combined multiple datasets [58], employed a surrogate dataset [37] or allowed data sharing among the clients [15]. However, these solutions may not be possible, applicable, or may violate the FL's assumptions. This demonstrates the importance of introducing new benchmark datasets for federated continual settings. + +![](images/67cf785e39eb014ed5b56286a6042627171213caac62f71ec55fe27ad0c67c84.jpg) +Figure 4: Building SuperImageNet by regrouping ImageNet dataset. Labels in Blue are the original labels, and in Red are the labels in SuperImageNet. + +We introduce SuperImageNet, a dataset created by superclassing the ImageNet [14] dataset, thus + +greatly increasing the number of available samples for each class. There are 3 versions of the dataset, each offering a different trade-off between the number of classes (for Class-IL) and the number of examples per class (for FL) as shown in Table 4. 
For example, SuperImageNet-M has $10\times$ more samples per class compared to CIFAR-100, which allows for an order of magnitude increase in the number of federated clients while maintaining the same amount of training data per client. As shown in Figure 4, we have merged classes of similar concepts to increase the sample size per class.

| Dataset | # examples/class | # classes |
| --- | --- | --- |
| SuperImageNet-S | 2500 | 100 |
| SuperImageNet-M | 5000 | 75 |
| SuperImageNet-L | 7500 | 50 |

Table 2: Versions of SuperImageNet.

# 5 Experiments

Setting. We demonstrate the efficacy of our method on three challenging datasets: CIFAR-100 [30], TinyImageNet [43] and SuperImageNet-L $^1$ . For all datasets, we use the baseline ResNet18 [23] as the global model and a ConvNet architecture for $\mathcal{G}$ , which we explain in detail in Appendix C.

Table 3 summarizes the setting for each dataset. For each dataset, there are 10 non-overlapping tasks $(T = 10)$ , and we use Latent Dirichlet Allocation (LDA) [46] with $\alpha = 1$ to distribute the data of each task among the clients. Clients train the local model using an SGD optimizer, and all the results are reported after averaging over 3 different random initializations (seeds). We refer to Appendix F for other hyperparameters.
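The LDA-based split of each task's data can be illustrated with a short stand-alone sketch (the helper names are ours, not from the paper's code; a symmetric Dirichlet prior with parameter $\alpha$ controls heterogeneity, with smaller $\alpha$ producing more skewed client shares):

```python
import random

def dirichlet(alpha, n, rng):
    # Symmetric Dirichlet(alpha) sample via normalized Gamma draws.
    draws = [rng.gammavariate(alpha, 1.0) for _ in range(n)]
    total = sum(draws)
    return [d / total for d in draws]

def lda_partition(labels, num_clients, alpha=1.0, seed=0):
    """Split sample indices of one task among clients, class by class,
    with Dirichlet-distributed per-class proportions (illustrative only)."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    clients = [[] for _ in range(num_clients)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        props = dirichlet(alpha, num_clients, rng)
        start = 0
        for c in range(num_clients):
            if c == num_clients - 1:
                take = len(idxs) - start  # last client gets the remainder
            else:
                take = min(round(props[c] * len(idxs)), len(idxs) - start)
            clients[c].extend(idxs[start:start + take])
            start += take
    return clients
```

With $\alpha = 1$ , as in the setting above, the per-class shares are moderately uneven, and every sample is assigned to exactly one client.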
| Dataset | # Clients | # Clients per round | # classes per task |
| --- | --- | --- | --- |
| CIFAR-100 | 50 | 5 | 10 |
| TinyImageNet | 100 | 10 | 20 |
| SuperImageNet-L | 300 | 30 | 5 |
Table 3: Training parameters of each dataset.

Metric. We use three metrics: Average Accuracy, Average Forgetting, and Wallclock time.

Average Accuracy $(\tilde{\mathcal{A}})$ : Let us define Accuracy $(\mathcal{A}^t)$ as the accuracy of the model at the end of task $t$ , over all the classes observed so far. Then, $\tilde{\mathcal{A}}$ is the average of $\mathcal{A}^t$ over all $T$ tasks.

Average Forgetting $(\tilde{f})$ : Forgetting $(f^t)$ of task $t$ is defined as the difference between the highest accuracy of the model on task $t$ and its performance at the end of the training. Therefore, we evaluate the average forgetting by averaging $f^t$ for tasks $1$ to $T - 1$ at the end of task $T$ .

Wallclock time. This is the time the server or clients take to perform one FL round, in seconds. The time is measured on our local NVIDIA A100 GPU and averaged over different clients.

Baseline. We compare our method with FedAvg [40], FedProx [33], FedProx $^+$ , FedCIL [45], FedLwF-2T [52] and Oracle. FedAvg and FedProx are the two most common aggregation methods; specifically, FedProx is designed for non-i.i.d. data distributions and tries to minimize the distance of the client's update from the global model. Inspired by FedProx, we also explore adding a loss term that minimizes the change of the current global model from the one trained on the previous task, which we name FedProx $^+$ . FedCIL is a GAN-based method where clients train the discriminator and generator locally to generate synthetic samples from the old tasks. FedLwF-2T is another method designed for federated continual learning; in this method, clients have two additional knowledge distillation loss terms, with two teachers: their local model trained on the previous task and the current global model. Finally, Oracle is an upper bound on the performance: during the training of the $i$-th task, clients have access to all of their training data from $t = 1$ to $t = i$.
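The two accuracy metrics above can be computed from a matrix of per-task accuracies. Below is a minimal sketch under our own conventions (`acc[t][j]` is the accuracy on task `j` after finishing task `t`; $\mathcal{A}^t$ is approximated by uniformly averaging the per-task accuracies of the tasks seen so far):

```python
def average_accuracy(acc):
    # A^t: mean accuracy over tasks 0..t after finishing task t;
    # the metric is the mean of A^t over all T tasks.
    T = len(acc)
    per_step = [sum(acc[t][: t + 1]) / (t + 1) for t in range(T)]
    return sum(per_step) / T

def average_forgetting(acc):
    # f^j: highest accuracy ever reached on task j minus its final accuracy,
    # averaged over the first T-1 tasks.
    T = len(acc)
    drops = [max(acc[t][j] for t in range(j, T)) - acc[T - 1][j]
             for j in range(T - 1)]
    return sum(drops) / (T - 1)

# Toy run: three tasks, accuracy on each task recorded after each task.
acc = [[0.8, 0.0, 0.0],
       [0.6, 0.7, 0.0],
       [0.5, 0.6, 0.9]]
```

On this toy matrix, the model peaked at 0.8 on task 0 and 0.7 on task 1 but ends at 0.5 and 0.6, so the average forgetting is approximately 0.2.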
# 5.1 Results

Figure 5 shows the accuracy of the model on all the observed classes so far. In all three datasets, MFCL consistently outperforms the baselines by a large margin (up to $25\%$ absolute improvement in test accuracy). On the CIFAR-100 dataset, the only baseline that can also correctly classify some examples from past data is FedCIL. Both MFCL and FedCIL benefit from a generative model (of roughly the same size) to remember the past. Here, a generative model similar to the one used in [45] for the CIFAR-10 dataset is used. Since, in FedCIL, the clients train the generative and global models simultaneously, they require more training iterations. We repeat the same process and adapt similar architectures for the other two datasets. However, given that GANs are not straightforward to fine-tune, this method does not perform well or converge. We explain more in Appendix H.

We have further compared the performance and overhead of the methods in Table 4. The first two metrics, Average Accuracy and Average Forgetting, reveal how much the model learns new tasks while preserving its performance on old tasks. As expected, FedAvg and FedProx have the highest forgetting values because they are not designed for such a scenario. Also, the high forgetting of FedLwF-2T indicates that including teachers in the absence of old data cannot be effective. Notably, $\mathrm{FedProx}^{+}$ has a lower forgetting value, mainly because it also has lower performance on each task. Finally, FedCIL and MFCL experience the least forgetting, with knowledge transferred from the old tasks to the new ones. In particular, MFCL has the smallest forgetting, which means it is the most successful in preserving the learned knowledge.

We also compare the methods based on their computational costs. Notably, some methods change after learning the first task; therefore, we distinguish between the cost of the first task and the subsequent ones.
As depicted, for $T > 1$ , MFCL slightly increases the training time due to the use of the generative model. As a trade-off, however, it significantly improves performance and forgetting.

![](images/5ce1ff9c96f6e0213286f2e782acdec0caeb5112f06a3db30f49875df4975586.jpg)
Figure 5: Test Accuracy vs. # observed tasks for (a) CIFAR-100, (b) TinyImageNet, (c) SuperImageNet-L datasets. After each task, the model is evaluated on all the tasks seen so far.

The server cost in MFCL is similar to FedAvg except at the end of each task, where the server needs to train the generative model. This extra computation should not be a bottleneck because it occurs once per task, and servers usually have access to more computing power than clients.

Table 4: Performance of the different baselines in terms of Average Accuracy, Average Forgetting, and Wallclock time for the CIFAR-100 dataset.
| Method | Average Accuracy $\tilde{\mathcal{A}}$ (%) | Average Forgetting $\tilde{f}$ (%) | Training time (s), $T=1$ | Training time (s), $T>1$ | Server Runtime (s) |
|---|---|---|---|---|---|
| FedAvg | 22.27 ± 0.22 | 78.77 ± 0.83 | ≈ 1.2 | ≈ 1.2 | ≈ 1.8 |
| FedProx | 22.00 ± 0.31 | 78.17 ± 0.33 | ≈ 1.98 | ≈ 1.98 | ≈ 1.8 |
| FedCIL | 26.8 ± 0.44 | 38.19 ± 0.31 | ≈ 17.8 | ≈ 24.5 | ≈ 2.5 for $T = 1$, ≈ 4.55 for $T > 1$ |
| FedLwF-2T | 22.17 ± 0.13 | 75.08 ± 0.72 | ≈ 1.2 | ≈ 3.4 | ≈ 1.8 |
| MFCL (Ours) | 44.98 ± 0.12 | 28.3 ± 0.78 | ≈ 1.2 | ≈ 3.7 | ≈ 330 (once per task), ≈ 1.8 otherwise |
| Oracle | 67.12 ± 0.4 | - | ≈ 1.2 | ≈ 1.2 × T | ≈ 1.8 |
+

# 5.2 Ablation Studies

Here, we demonstrate the importance of each component of our proposed algorithm, on both the server and client side, by ablating them one by one. Table 5 shows our results: each row removes a single loss component, and the columns report the corresponding per-task test accuracy $(\mathcal{A}^t)$ , average accuracy $(\tilde{\mathcal{A}})$ , average forgetting $(\tilde{f})$ , and their differences from our full method. The first three rows ablate the losses for training the generative model. Our experiments show that the Batch Statistics loss $(\mathcal{L}_{BN})$ and the diversity loss $(\mathcal{L}_{div})$ play an essential role in the final performance. The next three rows reflect the importance of client-side training. In particular, the fourth row (Ours w/o $\mathcal{L}_{CE}^{t}$) represents the case where clients apply the cross-entropy over all the linear heads of the model instead of splitting the heads and using only the part related to the current task. The following two rows show the impact of removing $\mathcal{L}_{FT}^{t}$ and $\mathcal{L}_{KD}^{t}$ from the client loss. In all three cases, the accuracy drops considerably, demonstrating the importance of all components. Finally, FedAvg + Gen shows the performance when the server trains the generative model and clients use its synthetic data the same way as real data, without further modifications. In Appendix G, we perform additional ablations on hyperparameters, such as the weights of each loss term, the generator model size, and the noise dimension.

Table 5: Ablation study for MFCL on CIFAR-100
| Method | $\mathcal{A}^1$ | $\mathcal{A}^2$ | $\mathcal{A}^3$ | $\mathcal{A}^4$ | $\mathcal{A}^5$ | $\mathcal{A}^6$ | $\mathcal{A}^7$ | $\mathcal{A}^8$ | $\mathcal{A}^9$ | $\mathcal{A}^{10}$ | $\tilde{\mathcal{A}}$ | $\Delta$ | $\tilde{f}$ | $\Delta$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Ours w/o $\mathcal{L}_{BN}$ | 70.00 | 47.02 | 43.93 | 38.98 | 35.98 | 34.14 | 32.60 | 30.17 | 27.93 | 24.36 | 38.51 | -6.47 | 45.95 | +17.65 |
| Ours w/o $\mathcal{L}_{pr}$ | 70.47 | 52.33 | 49.90 | 44.87 | 42.09 | 39.56 | 38.18 | 35.21 | 33.74 | 32.40 | 43.87 | -1.11 | 29.47 | +1.17 |
| Ours w/o $\mathcal{L}_{div}$ | 69.87 | 53.48 | 47.60 | 39.60 | 35.43 | 32.95 | 30.81 | 27.15 | 25.14 | 22.34 | 38.44 | -6.54 | 44.80 | +16.5 |
| Ours w/o $\mathcal{L}_{CE}^t$ | 70.10 | 40.10 | 33.40 | 26.70 | 21.33 | 19.24 | 17.96 | 14.00 | 13.69 | 11.28 | 26.78 | -18.20 | 72.24 | +43.94 |
| Ours w/o $\mathcal{L}_{FT}^t$ | 70.37 | 46.17 | 42.16 | 37.57 | 33.91 | 32.29 | 30.94 | 28.25 | 27.00 | 24.64 | 37.33 | -7.65 | 42.85 | +14.55 |
| Ours w/o $\mathcal{L}_{KD}^t$ | 70.10 | 45.92 | 38.60 | 31.01 | 26.45 | 24.07 | 21.32 | 18.02 | 16.85 | 16.29 | 30.86 | -14.12 | 53.64 | +25.34 |
| FedAvg + Gen | 70.57 | 40.07 | 30.91 | 23.75 | 20.38 | 17.56 | 16.02 | 12.90 | 13.18 | 11.57 | 25.69 | -19.29 | 60.46 | +32.16 |
| Ours | 71.50 | 55.00 | 50.73 | 45.73 | 42.38 | 40.62 | 38.97 | 36.18 | 35.47 | 33.25 | 44.98 | - | 28.3 | - |
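As a rough illustration of the three client-side terms ablated above, the sketch below combines a cross-entropy restricted to the current task's heads ($\mathcal{L}_{CE}^t$), a cross-entropy on synthetic samples over the old heads ($\mathcal{L}_{FT}^t$), and a distillation term against the previous global model ($\mathcal{L}_{KD}^t$). The tensor names, the temperature, the weights, and the exact form of each term are our assumptions for illustration, not the paper's definitions:

```python
import torch
import torch.nn.functional as F

def client_loss(logits_real, y_real, logits_syn, y_syn,
                logits_syn_prev, old_classes, w_ft=1.0, w_kd=1.0, temp=2.0):
    """Sketch (not the paper's exact losses) of the three client-side terms.
    logits_real: current model on real data (all heads); y_real: current-task labels.
    logits_syn: current model on synthetic data; logits_syn_prev: previous global
    model on the same synthetic data; old_classes: #classes from earlier tasks."""
    # L_CE^t: cross-entropy using only the current task's slice of the head
    ce = F.cross_entropy(logits_real[:, old_classes:], y_real - old_classes)
    # L_FT^t: fine-tuning on synthetic samples of old classes, over the old heads
    ft = F.cross_entropy(logits_syn[:, :old_classes], y_syn)
    # L_KD^t: distill the previous global model's soft predictions on synthetic data
    kd = F.kl_div(F.log_softmax(logits_syn / temp, dim=1),
                  F.softmax(logits_syn_prev / temp, dim=1),
                  reduction="batchmean") * temp ** 2
    return ce + w_ft * ft + w_kd * kd
```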
+

# 6 Discussion

Privacy of MFCL. Federated Learning, specifically FedAvg, is vulnerable to different attacks, such as data poisoning, model poisoning, backdoor attacks, and gradient inversion attacks [27, 36, 18, 20, 11, 33]. We believe MFCL does not introduce any additional privacy issues; it remains prone to the same set of attacks as FedAvg. MFCL trains the generative model based on the weights of the global model, which is already available to all clients in FedAvg. In contrast, in some prior work on federated continual learning, clients need to share a locally trained generative model or perturbed private data, potentially causing more privacy problems.

Furthermore, for FedAvg, various solutions and defenses, such as differential privacy or secure aggregation [55, 8], have been proposed to mitigate the effect of such privacy attacks. One can employ these solutions in MFCL as well. Notably, in MFCL, the server does not require access to individual clients' updates and uses the aggregated model for training. Therefore, training a generative model remains viable after incorporating these mechanisms.

In MFCL, the server trains the generator using only client updates. Figure 6 presents random samples of real and synthetic images from the CIFAR-100 dataset; images in the same column correspond to real and synthetic samples from the same class. Synthetic samples do not resemble any specific training examples of the clients and thus preserve privacy. However, they contain some common knowledge about the class and effectively represent the whole class, so they can significantly reduce catastrophic forgetting.

![](images/db7eef99230d7a542bc451ff6271d21b55115c99bc2ec69823195cb54ad98768.jpg)

![](images/38805c393d1953b75cc2247454bf3ed06d77ab00216cedc6dbdf7be311438e35.jpg)
Class 1
Figure 6: Real vs. synthetic data generated by the generative model for the CIFAR-100 dataset.
+

![](images/b34f727183ace4974c8b226764bdc7b8f6147b3cc49c31cc82cbe282a215d5f5.jpg)

![](images/05f6fb58ee5aff68117d2c852693b7b6b425b44446abdbec59efb19584619c61.jpg)
Class 2

![](images/d2ac8d6991991bfafe820cab93cbabb1967e063c64c707cd54f128704d6721f4.jpg)

![](images/74500197847e32ac8b9410ca2ee58e996cd6948f9cb06e835c9dccbecd89a925.jpg)
Class 3

![](images/9c889d7921e9eadb75cf23199c89711a7e62a082a92f3ca54631e4de6c2fed65.jpg)

![](images/e8340e3f406e7fb320990cee8a9779f29807d537ce7cd686a2f53c5c7705fbe5.jpg)
Class 4

![](images/6b4cd126923e1b1ef7142045a7f26095c5358775b99474aa9c97390c761ffced.jpg)

![](images/8b60b86001a16c42cbb21152c78bd34cb6f3d7ac771807fc45f5c1352a6dd5f7.jpg)
Class 5

![](images/b49787ae3725bfe5644fac81be8c9fb07a1472b26a536eff54e7cdd31a8e2b46.jpg)

![](images/32ed699da9ace518120522d51374ae482c78d9b94664e365a052bd6503a59313.jpg)
Class 6

![](images/13727650de22a270768774bed75739774f9ddaa2dd813db5158250b142322acb.jpg)

![](images/fec7a29fb2d43cebda9a81cd30f8eda19a4b37aee75c7d141cadf48874c2e2ff.jpg)
Class 7

![](images/248008bdf9ba9e6d8b10d95024a6d659a0cccd39a1c417e4ed32c211c23552ba.jpg)

![](images/77161ebc551edb785386625120d770f09d773fa0ad31281e31de8adfc404c553.jpg)
Class 8

Limitations. In our method, clients need the generative model, the final global model of the last task, and the current global model, which adds communication and storage overheads. However, there are fundamental differences between storing the generative model and storing actual data. First, the generative model's memory cost is independent of the number of tasks: as tasks accumulate, clients must either delete some of the stored examples to make room for new ones or grow the memory, whereas the generative model's size stays constant. Second, clients can delete the generative model while not participating in the FL process and retrieve it later if they rejoin.
On the other hand, deleting data samples from memory results in a permanent loss of information. We have delved into this in Appendix D. + +# 7 Conclusion + +This work presents a federated Class-IL framework while addressing resource limitations and privacy challenges. We exploit generative models trained by the server in a data-free fashion, obviating the need for expensive on-device memory on clients. Our experiments demonstrate that our method can effectively alleviate catastrophic forgetting and outperform the existing state-of-the-art solutions. + +# 8 Acknowledgment + +This material is based upon work supported by ONR grant N00014-23-1-2191, ARO grant W911NF22-1-0165, Defense Advanced Research Projects Agency (DARPA) under Contract No. FASTNICS HR001120C0088 and HR001120C0160, and gifts from Intel and Qualcomm. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. + +# References + +[1] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), pages 139-154, 2018. +[2] Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin, and Lucas Page-Caccia. Online continual learning with maximal interfered retrieval. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 11849-11860. Curran Associates, Inc., 2019. +[3] Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3366-3375, 2017. +[4] Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. 
Gradient based sample selection for online continual learning. Advances in neural information processing systems, 32, 2019. +[5] Sara Babakniya, Souvik Kundu, Saurav Prakash, Yue Niu, and Salman Avestimehr. Federated sparse training: Lottery aware model compression for resource constrained edge. arXiv preprint arXiv:2208.13092, 2022. +[6] Jihwan Bang, Heesu Kim, YoungJoon Yoo, Jung-Woo Ha, and Jonghyun Choi. Rainbow memory: Continual learning with a memory of diverse samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8218-8227, 2021. +[7] Eden Belouadah, Adrian Popescu, and Ioannis Kanellos. A comprehensive study of class incremental learning algorithms for visual tasks. Neural Networks, 135:38-54, 2021. +[8] Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. Practical secure aggregation for federated learning on user-held data. arXiv preprint arXiv:1611.04482, 2016. +[9] Francisco M Castro, Manuel J Marín-Jiménez, Nicolas Guil, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. In Proceedings of the European conference on computer vision (ECCV), pages 233-248, 2018. +[10] Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc'Aurelio Ranzato. On tiny episodic memories in continual learning. arXiv preprint arXiv:1902.10486, 2019. +[11] Chien-Lun Chen, Sara Babakniya, Marco Paolieri, and Leana Golubchik. Defending against poisoning backdoor attacks on federated meta-learning. ACM Transactions on Intelligent Systems and Technology (TIST), 13(5):1-25, 2022. +[12] Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, and Qi Tian. Data-free learning of student networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3514-3522, 2019. 
+[13] Yiqiang Chen, Xin Qin, Jindong Wang, Chaohui Yu, and Wen Gao. Fedhealth: A federated transfer learning framework for wearable healthcare. IEEE Intelligent Systems, 35(4):83-93, 2020. +[14] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009. +[15] Jiahua Dong, Lixu Wang, Zhen Fang, Gan Sun, Shichao Xu, Xiao Wang, and Qi Zhu. Federated class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10164-10173, 2022. +[16] Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, and Marcus Rohrbach. Adversarial continual learning. In European Conference on Computer Vision, pages 386-402. Springer, 2020. +[17] Ahmet M Elbir, Burak Soner, and Sinem Coleri. Federated learning in vehicular networks. arXiv preprint arXiv:2006.01412, 2020. +[18] Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Gong. Local model poisoning attacks to {Byzantine-Robust} federated learning. In 29th USENIX security symposium (USENIX Security 20), pages 1605-1622, 2020. + +[19] Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A Rusu, Alexander Pritzel, and Daan Wierstra. Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734, 2017. +[20] Jonas Geiping, Hartmut Bauermeister, Hannah Droge, and Michael Moeller. Inverting gradients-how easy is it to break privacy in federated learning? Advances in Neural Information Processing Systems, 33:16937-16947, 2020. +[21] Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Françoise Beaufays, Sean Augenstein, Hubert Eichner, Chloé Kiddon, and Daniel Ramage. Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604, 2018. +[22] Matan Haroush, Itay Hubara, Elad Hoffer, and Daniel Soudry. 
The knowledge within: Methods for data-free model compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8494-8502, 2020.
[23] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
[24] Sean M Hendryx, Dharma Raj KC, Bradley Walls, and Clayton T Morrison. Federated reconnaissance: Efficient, distributed, class-incremental learning. arXiv preprint arXiv:2109.00150, 2021.
[25] Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7), 2015.
[26] Ziyue Jiang, Yi Ren, Ming Lei, and Zhou Zhao. Fedspeech: Federated text-to-speech with continual learning. arXiv preprint arXiv:2110.07216, 2021.
[27] Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1-2):1-210, 2021.
[28] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521-3526, 2017.
[29] Jakub Konečný, H Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016.
[30] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[31] Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. Overcoming catastrophic forgetting by incremental moment matching.
Advances in neural information processing systems, 30, 2017.
[32] Timothee Lesort, Hugo Caseles-Dupré, Michael Garcia-Ortiz, Andrei Stoian, and David Filliat. Generative models from the perspective of continual learning. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE, 2019.
[33] Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, 37(3):50-60, 2020.
[34] Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935-2947, 2017.
[35] David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. Advances in neural information processing systems, 30, 2017.
[36] Lingjuan Lyu, Han Yu, and Qiang Yang. Threats to federated learning: A survey. arXiv preprint arXiv:2003.02133, 2020.
[37] Yuhang Ma, Zhongle Xie, Jue Wang, Ke Chen, and Lidan Shou. Continual federated learning based on knowledge distillation.
[38] Arun Mallya and Svetlana Lazebnik. PackNet: Adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 7765-7773, 2018.

[39] Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, pages 109-165. Elsevier, 1989.
[40] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pages 1273-1282. PMLR, 2017.
[41] Pingbo Pan, Siddharth Swaroop, Alexander Immer, Runa Eschenhagen, Richard Turner, and Mohammad Emtiyaz E Khan. Continual deep learning by functional regularisation of memorable past. Advances in Neural Information Processing Systems, 33:4453-4464, 2020.
+[42] Tae Jin Park, Kenichi Kumatani, and Dimitrios Dimitriadis. Tackling dynamics in federated incremental learning with variational embedding rehearsal. arXiv preprint arXiv:2110.09695, 2021. +[43] Hadi Pouransari and Saman Ghili. Tiny imagenet visual recognition challenge. CS231N course, Stanford Univ., Stanford, CA, USA, 5, 2014. +[44] Aman Priyanshu, Mudit Sinha, and Shreyans Mehta. Continual distributed learning for crisis management. arXiv preprint arXiv:2104.12876, 2021. +[45] Daiqing Qi, Handong Zhao, and Sheng Li. Better generative replay for continual federated learning. arXiv preprint arXiv:2302.13001, 2023. +[46] Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. arXiv preprint arXiv:2003.00295, 2020. +[47] David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. Advances in Neural Information Processing Systems, 32, 2019. +[48] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. Advances in neural information processing systems, 30, 2017. +[49] Neta Shoham, Tomer Avidor, Aviv Keren, Nadav Israel, Daniel Benditkis, Liron Mor-Yosef, and Itai Zeitak. Overcoming forgetting in federated learning on non-iid data. arXiv preprint arXiv:1910.07796, 2019. +[50] James Smith, Yen-Chang Hsu, Jonathan Balloch, Yilin Shen, Hongxia Jin, and Zsolt Kira. Always be dreaming: A new approach for data-free class-incremental learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9374-9384, 2021. +[51] James Smith, Cameron Taylor, Seth Baer, and Constantine Dovrolis. Unsupervised progressive learning and the stam architecture. arXiv preprint arXiv:1904.02021, 2019. +[52] Anastasiia Usmanova, François Portet, Philippe Lalanda, and German Vega. 
A distillation-based approach integrating continual learning and federated learning for pervasive services. arXiv preprint arXiv:2109.04197, 2021. +[53] Gido M Van de Ven and Andreas S Tolias. Generative replay with feedback connections as a general strategy for continual learning. arXiv preprint arXiv:1809.10635, 2018. +[54] Gido M Van de Ven and Andreas S Tolias. Three scenarios for continual learning. arXiv preprint arXiv:1904.07734, 2019. +[55] Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H. Yang, Farhad Farokhi, Shi Jin, Tony Q. S. Quek, and H. Vincent Poor. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security, 15:3454-3469, 2020. +[56] Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, Zhengyou Zhang, and Yun Fu. Incremental classifier learning with generative adversarial networks. arXiv preprint arXiv:1802.00853, 2018. +[57] Hongxu Yin, Pavlo Molchanov, Jose M Alvarez, Zhizhong Li, Arun Mallya, Derek Hoiem, Niraj K Jha, and Jan Kautz. Dreaming to distill: Data-free knowledge transfer via deepinversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8715-8724, 2020. +[58] Jaehong Yoon, Wonyong Jeong, Giwoong Lee, Eunho Yang, and Sung Ju Hwang. Federated continual learning with weighted inter-client transfer. In International Conference on Machine Learning, pages 12073-12086. PMLR, 2021. + +[59] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In International Conference on Machine Learning, pages 3987-3995. PMLR, 2017. +[60] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. In International Conference on Machine Learning, pages 12878-12889. PMLR, 2021. + +# A MFCL Algorithm + +Algorithm 1 summarizes our method. 
Here, for every task, clients train the local model using the shared generative model. At the end of each task, the server updates the generative model using data-free methods. + +Algorithm 1 MFCL +1: $N$ : #Clients, $[\mathcal{C}_N]$ : Client Set, $K$ : #Clients per Round, $u_i$ : client i Update, $E$ : Local Epoch +2: $R$ : FL Rounds per Task, $T$ : #Tasks, $t$ : current task, $|\mathcal{V}|^t$ : Task $t$ Size, $q$ : #Discovered Classes +3: $\mathcal{F}_t$ : Global Model for task t, $\mathcal{G}_t$ : Generative Model, $E_{\mathcal{G}}$ : Generator Training Epoch +4: $q \gets 0$ +5: $\mathcal{G}, \mathcal{F}_1 \gets$ initialize() +6: for $t = 1$ to $T$ do +7: $q \gets q + |\mathcal{Y}^t|$ +8: $\mathcal{F}_t \gets$ updateArchitecture( $\mathcal{F}_t, q$ ) # Add new observed classes in the classification layer. +9: for $r = 1$ to $R$ do +10: $C_K \gets$ RandomSelect([ $\mathcal{C}_N$ ], $K$ ) +11: for $c \in C_K$ in parallel do +12: $u_c \gets$ localUpdate( $\mathcal{F}_t, \mathcal{G}, \mathcal{F}_{t-1}, E$ ) # For $t = 1$ we do not need $\mathcal{F}_0$ and $\mathcal{G}$ . +13: end for +14: $\mathcal{F}_t \gets$ globalAggregation( $\mathcal{F}_t, [u_c]$ ) +15: end for +16: $\mathcal{F}_t \gets$ freezeModel( $\mathcal{F}_t$ ) # Fix Global model. +17: $\mathcal{G} \gets$ trainDFGenerator( $\mathcal{F}_t, E_{\mathcal{G}}, q$ ) # Train the generative model. +18: $\mathcal{G} \gets$ freezeModel( $\mathcal{G}$ ) # Fix generator weights. +19: end for + +# B Code for Reproduction + +The codebase for this work and regrouping the ImageNet dataset is available at https://github.com/SaraBabakN/MFCL-NeurIPS23. + +# C Details of the Generative Model + +Architectures. In Table 6, we show the generative model architectures used for CIFAR-100, TinyImageNet, and SuperImageNet datasets. In all experiments, the global model has ResNet18 architecture. For the CIFAR-100 and TinyImageNet datasets, we change the first CONV layer kernel size to $3 \times 3$ from $7 \times 7$ . 
In this table, CONV layers are reported as CONV $K \times K(C_{in}, C_{out})$ , where $K$ , $C_{in}$ , and $C_{out}$ are the kernel size, input channels, and output channels of the layer, respectively.

Weight Initialization. The generative model is randomly initialized and trained from scratch for the first task. For all subsequent tasks $(t > 1)$ , the server initializes it with the generative model from task $t - 1$ .

Synthetic Samples Generation. To generate synthetic data, clients sample i.i.d. noise; the class label is determined by applying the argmax function to the first $q$ elements of the noise vector, where $q$ is the total number of classes seen so far. Since the noise is sampled i.i.d., the probability of generating a sample from class $i$ equals $\frac{1}{q}$ . Although this might not yield the same number of synthetic samples from each class in every batch, the generated class distribution is uniform over all classes; thus, in expectation, the generated samples are class-balanced.

Catastrophic Forgetting in the Generative Model. The effectiveness of $\mathcal{G}$ is closely linked to the performance of the global model. If the global model forgets old classes after completing a task, the quality of the corresponding synthetic data declines. Hence, it is crucial to have a reliable generative model and a robust global model: a good generative model assists the global model in preventing forgetting when learning new tasks, and that model can then serve as a teacher for the next round of training $\mathcal{G}$ .

Global Aggregation Method. In this work, we employ FedAvg to aggregate the client updates. Since the generator is always trained after aggregation, its training is not impacted by changing the aggregation method. However, the generative model uses the aggregated model as its discriminator, so it is directly affected by the quality of the final global model.
Therefore, any aggregation mechanism that improves the global model's performance would also help the generative model and vice versa. + +Table 6: Generative model Architecture + +
| CIFAR-100 | TinyImageNet | SuperImageNet |
|---|---|---|
| FC(200, 128 × 8 × 8) | FC(400, 128 × 8 × 8) | FC(200, 64 × 7 × 7) |
| reshape(-, 128, 8, 8) | reshape(-, 128, 8, 8) | reshape(-, 64, 7, 7) |
| BatchNorm(128) | BatchNorm(128) | BatchNorm(64) |
| Interpolate(2) | Interpolate(2) | Interpolate(2) |
| CONV 3 × 3 (128, 128) | CONV 3 × 3 (128, 128) | CONV 3 × 3 (64, 64) |
| BatchNorm(128) | BatchNorm(128) | BatchNorm(64) |
| LeakyReLU | LeakyReLU | LeakyReLU |
| Interpolate(2) | Interpolate(2) | Interpolate(2) |
| CONV 3 × 3 (128, 64) | CONV 3 × 3 (128, 128) | CONV 3 × 3 (64, 64) |
| BatchNorm(64) | BatchNorm(128) | BatchNorm(64) |
| LeakyReLU | LeakyReLU | LeakyReLU |
| CONV 3 × 3 (64, 3) | Interpolate(2) | Interpolate(2) |
| Tanh | CONV 3 × 3 (128, 64) | CONV 3 × 3 (64, 64) |
| BatchNorm(3) | BatchNorm(3) | BatchNorm(64) |
|  | LeakyReLU | LeakyReLU |
|  | CONV 3 × 3 (64, 3) | Interpolate(2) |
|  | Tanh | CONV 3 × 3 (64, 64) |
|  | BatchNorm(3) | BatchNorm(64) |
|  |  | LeakyReLU |
|  |  | Interpolate(2) |
|  |  | CONV 3 × 3 (64, 64) |
|  |  | BatchNorm(64) |
|  |  | LeakyReLU |
|  |  | CONV 3 × 3 (64, 3) |
|  |  | Tanh |
|  |  | BatchNorm(3) |
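For concreteness, the CIFAR-100 column of Table 6 translates roughly into the following PyTorch module, with labels read off the first $q$ noise entries via argmax as described above. This is our sketch of the layer list, not the authors' code; padding, LeakyReLU slopes, and BatchNorm affine flags are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Sketch of the CIFAR-100 generator column of Table 6 (noise dim 200)."""
    def __init__(self, z_dim=200):
        super().__init__()
        self.fc = nn.Linear(z_dim, 128 * 8 * 8)
        self.bn0 = nn.BatchNorm2d(128)
        self.conv1 = nn.Conv2d(128, 128, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(128)
        self.conv2 = nn.Conv2d(128, 64, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(64)
        self.conv3 = nn.Conv2d(64, 3, 3, padding=1)
        self.bn3 = nn.BatchNorm2d(3, affine=False)  # final BatchNorm(3) row; affine flag assumed

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)
        x = F.interpolate(self.bn0(x), scale_factor=2)                            # 8x8  -> 16x16
        x = F.interpolate(F.leaky_relu(self.bn1(self.conv1(x))), scale_factor=2)  # 16x16 -> 32x32
        x = F.leaky_relu(self.bn2(self.conv2(x)))
        return self.bn3(torch.tanh(self.conv3(x)))                                # 3x32x32 images

def sample(gen, batch, q, z_dim=200):
    """Labels are the argmax of the first q noise entries (Appendix C)."""
    z = torch.randn(batch, z_dim)
    return gen(z), z[:, :q].argmax(dim=1)
```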
+

# D Overheads of the generative model

Client-side. Using $\mathcal{G}$ on the client side increases the computational cost compared to vanilla FedAvg. However, existing CL methods often impose additional costs, such as memory, compute, or both, to mitigate catastrophic forgetting. Nevertheless, there are ways to reduce the cost of MFCL. For example, clients can perform inference once, generate and store synthetic images only for training, and then delete them all. They can further reduce costs by requesting that the server generate the synthetic images and send the data instead of $\mathcal{G}$ itself. Here, we raise two crucial points about the synthesized data. First, there is an intrinsic distinction between storing synthetic data (or $\mathcal{G}$ ) and actual data: the former is required solely during training, and clients can delete it right after training. Conversely, the data in episodic memory must always be kept on the client's side, because once deleted it becomes unavailable. Second, synthetic data is shared knowledge that can help any client with unbalanced data or no memory improve their model's performance; episodic memory, in contrast, can be used by only one client.

Server-side. The server needs to train $\mathcal{G}$ once per task. It is commonly assumed that the server has access to more powerful hardware and can compute faster than clients. This training step adds no overhead on the client side, although it might slow down the overall process. However, tasks do not change rapidly in real life, giving the server ample time to train the generative model before any trends or client data shift.

Communication Cost. Transmitting the generative model is a potential overhead of MFCL, a cost that clients must bear once per task to prevent or reduce catastrophic forgetting.
However, several methods, such as compression, can significantly reduce this cost while maintaining excellent performance. This could be an interesting direction for future research.

# E More on the Privacy of MFCL

MFCL with Differential Privacy. We want to highlight that, in data-free generative model training, the generator can only be as good as the discriminator. If the global model can learn the decision boundaries and individual classes under a DP guarantee, the generator can learn this knowledge and present it through synthetic examples. Otherwise, if the global model fails to learn the current tasks, there is not much knowledge to preserve for the future. Under a DP guarantee, the main challenge is training a reasonable global model; improving this performance also helps the generative model.

MFCL with Secure Aggregation. If the clients do not trust the server with their updates, a potential solution is Secure Aggregation. In a nutshell, Secure Aggregation is a defense mechanism that ensures update privacy, especially when the server is potentially malicious. Since MFCL does not require individual updates, it is fully compatible with Secure Aggregation.

Privacy Concerns Associated with Data Storage. Various regulations and rules limit how long users' data may be stored. Usually, service providers do not own the data forever and are obligated to erase it after a specific duration. Sometimes, the data is available only as a stream and never gets stored. Most of the time, however, data is available for a short period, long enough to perform a few rounds of training. In this way, if multiple service providers participate in federated learning, their data changes dynamically as they delete old data and acquire new data.

MFCL and Batch Statistics.
MFCL benefits from Batch Statistics Loss $(\mathcal{L}_{BN})$ in training the generative model. However, some defense mechanisms suggest not sharing local Batch Statistics with the server. While training the generative model without the $\mathcal{L}_{BN}$ is still possible, it can reduce the accuracy. Addressing this is an interesting future direction. + +# F Hyperparameters + +Table 7 presents some of the more important parameters and settings for each experiment. + +Table 7: Parameter Settings in different datasets + +
| Dataset | CIFAR-100 | TinyImageNet | SuperImageNet-L |
|---|---|---|---|
| Data Size | 32 × 32 | 64 × 64 | 224 × 224 |
| # Tasks | 10 | 10 | 10 |
| # Classes per task | 10 | 20 | 5 |
| # Samples per class | 500 | 500 | 7500 |
| LR | 0.1, exponentially decayed to 0.01 | 0.1, exponentially decayed to 0.01 | 0.1, exponentially decayed to 0.01 |
| Batch Size | 32 | 32 | 32 |
| Synthetic Batch Size | 32 | 32 | 32 |
| FL rounds per task | 100 | 100 | 100 |
| Local epochs | 10 | 10 | 1 |
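The learning-rate row of the table above states only the endpoints (each task starts at 0.1 and decays exponentially to 0.01). One plausible per-round schedule implementing this is sketched below; the exact schedule used in the experiments is an assumption on our part:

```python
def lr_at_round(r, rounds_per_task=100, lr_start=0.1, lr_end=0.01):
    """Exponentially interpolate from lr_start (first round of a task)
    down to lr_end (last round of the task)."""
    frac = r / (rounds_per_task - 1)  # 0.0 at the first round, 1.0 at the last
    return lr_start * (lr_end / lr_start) ** frac
```

With the defaults, `lr_at_round(0)` gives 0.1 and `lr_at_round(99)` gives 0.01, matching the endpoints in the table.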
+

# G Hyperparameter tuning for MFCL

Hyperparameters can play an essential role in the final performance of algorithms. In our experiments, we adopted commonly used parameters; here, we show how sensitive the final performance is to each hyperparameter. This is particularly important because hyperparameter tuning is very expensive in federated learning and can be infeasible in continual learning. To this end, we change one parameter at a time while fixing the rest. In Table 8, we report the final $\tilde{\mathcal{A}}$ for each hyperparameter on the CIFAR-100 dataset with 10 tasks.

$w_{div}$ : weight of the diversity loss $(\mathcal{L}_{div})$

$w_{BN}$ : weight of the Batch Statistics loss $(\mathcal{L}_{BN})$

$w_{pr}$ : weight of the image prior loss $(\mathcal{L}_{pr})$

Z_dim: input noise dimension for training the $\mathcal{G}$ model.

gen_epoch: number of iterations used to train the $\mathcal{G}$ model.

The setting we used is $w_{div} = 1$ , $w_{BN} = 75$ , $w_{pr} = 0.001$ , $Z\_dim = 200$ , and $gen\_epoch = 5000$ , with an average accuracy of $45.1\%$ . (There may be a minor difference between this value and the result in the main manuscript: we ran this ablation with a single seed, whereas the results in the main manuscript are averaged over three different seeds.)

Table 8: Effect of different hyperparameters on the final $\tilde{\mathcal{A}}$ (in %) for the CIFAR-100 dataset.
| $w_{div}$ | $\widetilde{A}$ | $w_{BN}$ | $\widetilde{A}$ | $w_{pr}$ | $\widetilde{A}$ | Z_dim | $\widetilde{A}$ | gen_epoch | $\widetilde{A}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.1 | 44.35 | 0.1 | 40.12 | 0.0001 | 43.10 | 110 | 42.39 | 100 | 40.77 |
| 0.5 | 44.37 | 1 | 43.90 | 0.001 | 45.1 | 200 | 45.1 | 5000 | 45.1 |
| 1 | 45.1 | 10 | 44.77 | 0.01 | 43.56 | 1000 | 45.01 | 10000 | 43.35 |
| 2 | 44.08 | 75 | 45.1 | 0.1 | 44.73 | | | | |
| 5 | 44.57 | 100 | 45.02 | 1 | 44.37 | | | | |
This table shows how robust the final performance is with respect to each parameter, which is desirable in both federated and continual learning problems.

# H Comparison between MFCL and FedCIL

Here, we would like to highlight some distinctions between our algorithm and FedCIL, both of which aim to alleviate catastrophic forgetting using generative models.

- In FedCIL, clients train the local generative model every round, which adds significant computational overhead. In our approach, on the other hand, the generative model is trained on the server and only once per task.
- Training GAN models usually requires a large amount of data, which is not commonly available, especially on edge devices. Our data-free generative models address this issue.
- Training the generative model directly on the training dataset may pose a risk of exposing sensitive training data, which contradicts the goal of FL. MFCL, on the other hand, uses only the information contained in the global model.
- FedCIL is limited to simpler datasets and FL settings, such as MNIST and CIFAR10, with fewer clients and less complex architectures. In contrast, our approach can handle more complex datasets, such as CIFAR100, TinyImageNet, and SuperImageNet, with a much larger number of clients.
- Training GAN models usually requires more careful hyperparameter tuning. To train FedCIL for TinyImageNet and SuperImageNet, we tried the SGD and Adam optimizers with learning rates $\in \{0.1, 0.05, 0.01\}$ and local epochs $\in \{1, 2\}$. Furthermore, we adopted a generative model architecture with an input dimension and total number of parameters similar to MFCL's. However, the model did not converge to a good performance. While a more extensive hyperparameter search might improve the results, this indicates the difficulty of hyperparameter tuning for this algorithm.
It is worth mentioning that, in order to train FedCIL on the CIFAR-10 dataset, we used a local epoch $8 \times$ larger than that of the other baselines; otherwise, the performance on this dataset would also degrade.

In conclusion, FedCIL can be a good fit for a cross-silo federated learning setting with only a few clients, each possessing a large amount of data and computing resources. Meanwhile, while still applicable in the above setting, our method is also suitable for edge devices with limited data and power.
# A Dataset for Analyzing Streaming Media Performance over HTTP/3 Browsers

Sapna Chaudhary

IIIT Delhi

sapnac@iiitd.ac.in

Naval Kumar Shukla

IIIT Delhi

naval19065@iiitd.ac.in

Sandip Chakraborty

IIT Kharagpur

sandipc@cse.iitkgp.ac.in

Mukulika Maity

IIIT Delhi

mukulika@iiitd.ac.in

# Abstract

HTTP/3 is a new application layer protocol supported by most browsers. It uses QUIC as the underlying transport protocol. QUIC provides multiple benefits, like faster connection establishment, reduced latency, and improved connection migration. Hence, popular browsers like Chrome/Chromium, Microsoft Edge, Apple Safari, and Mozilla Firefox have started supporting it. This paper presents an HTTP/3-supported browser dataset collection tool named H3B. It collects the application- and network-level logs during YouTube streaming. We consider YouTube, one of the most popular video streaming applications supporting QUIC. Using this tool, we collected a dataset of over 5936 YouTube sessions covering 5464 hours of streaming over 5 different geographical locations and 5 different bandwidth patterns. We believe our tool as well as the dataset could be used in multiple applications, such as better configuration of application/transport protocols based on the network conditions, intelligent integration of network and application, predicting YouTube's QoE, etc. We analyze the dataset and observe that during HTTP/3 streaming, not all requests are served by HTTP/3. Instead, whenever the network condition is unfavorable, the browser chooses to fall back, and the application requests are transmitted using HTTP/2 over the long-standing transport protocol TCP. We observe that such switching of protocols impacts the performance of video streaming applications.
# 1 Introduction

HTTP/3 is the latest variant of the popular Hypertext Transfer Protocol (HTTP), which has recently been widely adopted by major Internet giants such as Google, Facebook, Cloudflare, Akamai, Apple, and many others. Unlike its predecessors, HTTP/3 uses QUIC [20, 14] as the underlying transport protocol. QUIC is expected to provide reduced latency and better application QoE (Quality of Experience) compared to TCP, the transport protocol widely used on the Internet for the past three decades. The QUIC developers [23] and several other works [21, 37, 11, 39, 44] have shown the superiority of QUIC over TCP in terms of supporting better QoE with fewer stalls for video streaming.

In this paper, we present a large-scale dataset containing network-application information for 5464 hours of YouTube streaming sessions over popular browsers supporting both HTTP/3 and the legacy HTTP/2. Notably, YouTube is one of the most popular streaming media services on the Internet today. To collect this large-scale dataset, we develop H3B, an emulation-based toolbox that captures the application performance statistics for YouTube video streaming coupled with the underlying network traffic traces. For this purpose, we explored a feature in the YouTube browser application, named stats for nerds, which shows the played video statistics in terms of the frames dropped, the current resolution, Internet connectivity of the device, and the YouTube playback buffer health. However, it misses critical information like bitrate, variation in bitrate, and stalling/rebuffering used to compute the end user's QoE. Notably, the network bandwidth directly impacts YouTube's adaptive bitrate (ABR) decisions [29], where the YouTube client dynamically decides the best playback bitrate depending on the underlying network conditions.
Therefore, to analyze the application performance, one needs to see the network behavior as well, to perform root cause analysis as and when application QoE suffers. Hence, we develop H3B, which collects application and network layer logs simultaneously while streaming videos over YouTube. We collected the data over two web browsers - Chrome/Chromium and Firefox. The tool takes (1) the video ID and (2) the network bandwidth pattern as input and generates the application and network logs with annotated QoE information. H3B emulates the given network behavior using a benchmark network emulation tool called Mahimahi [32].

We use H3B to launch a measurement campaign for YouTube across 5 different geographical locations and 5 different bandwidth patterns. Out of them, 2 are traces collected under mobility in WiFi and cellular networks. Specifically, we focus on poor network bandwidth patterns, as there is minimal data on video streaming applications' performance over poor networks [17, 42]. Further, several existing studies [28, 26, 2, 46] suggest that poor or fluctuating network conditions provide the opportunity to use intelligent learning-driven algorithms for optimizing streaming media performance. Notably, Internet speed is still deficient in significant parts of the globe, particularly the developing world [1, 8, 27, 34]. We have collected 5464 streaming hours across 5936 streaming sessions through H3B. To benchmark the performance of YouTube streaming sessions over HTTP/3 browsers against the legacy HTTP/2, we have also collected the logs and traffic data over the HTTP/2 setup under similar network configurations.

While analyzing the impact of the HTTP/3 protocol on YouTube performance, we made a few interesting observations from the dataset collected above. We observe that during HTTP/3 streaming, even though it is assumed that QUIC is used underneath, the data is often sent over TCP.
Such a phenomenon is called fallback [23, 18], where QUIC uses the help of TCP on a network path where UDP (QUIC) is blocked. However, there was no such blocking in our setup; yet the browser chooses to fall back to TCP whenever the application suffers over QUIC. We next compute the QoE obtained by both HTTP/3 and legacy HTTP/2 streaming and perform hypothesis testing on the QoE obtained for both. We observe that the QoE obtained over HTTP/3 streaming is not a winner, contradicting the observations made by prior work and the QUIC developers themselves [23, 35, 3, 40, 36, 19, 25]. We suspect that the browsers' implementation of the legacy support for HTTP/3 might be a cause. To further validate whether this protocol fallback was indeed the cause of poor application QoE, we modified the Chromium browser source code to forcibly stop such protocol switching. Then, using our tool H3B, we launched a further measurement campaign over this modified browser. We observe that forcibly stopping the protocol switching often improves the application QoE compared to the original browser.

To the best of our knowledge, this is the first large-scale YouTube streaming dataset over an HTTP/3 browser. We have also collected the benchmark HTTP/2 traffic (legacy traffic, by disabling QUIC in the browser) for all the equivalent scenarios, which can be used to explore the pros and cons of various network configurations and protocol design choices over an HTTP/3 browser. Overall, our dataset has the following attributes:

- Multi-bandwidths: The dataset contains the application and network logs under different bandwidth patterns, i.e., high, low, very low, and under mobility (over WiFi and cellular networks). This can be used to study how different network bandwidth patterns impact YouTube's QoE.
- Multi-locations: The dataset was collected for 5 different geographical locations, i.e., Delhi, Bangalore, New York, Germany, and Singapore.
This can be used to study whether location plays any role in YouTube's QoE. +- Multi-protocol: The dataset contains both HTTP/3 and legacy HTTP/2. This can be used to analyze benefits/issues with HTTP/3. +- Multi-video: The dataset was collected for 46 videos of different genres: News, Entertainment, and Education. This can be used to study the impact of video type on YouTube QoE. + +- Time interval: This data collection happened for 18 months. This allows a time variance study of the QoE during the course of browser updates. + +Such a dataset can be used to develop intelligent models for network-application integration over an HTTP/3 browser while considering the backward compatibility with HTTP/2. In general, such a dataset can be used for the following problems: + +- Dynamic tuning of protocol design choices/ configurations based on the network conditions: This will allow learning the network environment to tune the parameters of, say, transport protocol congestion control, ABR (Adaptive BitRate) streaming parameters, etc. +- Intelligent network-application integration for better application QoE: Simultaneous logs of the network and application layer will facilitate designing network-aware applications. +- Prediction & Optimization of YouTube QoE: Further, given the diversity of our dataset, the same can be used to develop a prediction model for predicting QoE. Further, our poor network dataset provides an opportunity to use intelligent learning-driven algorithms for optimizing QoE. + +# 2 Related Work + +We divide the related work into three dataset categories: YouTube, DASH, and QoE datasets. YouTube Dataset: Gutterman et al. [17] works on the prediction of video QoE such as buffer state, quality of a video, stalling to be experienced. They collected data for 425 video sessions over YouTube for WiFi and LTE networks under static and mobility scenarios. Karagkioules et al. 
[22] collected a dataset of around 374 hours of YouTube videos on a mobile device using their tool Wrapper-app. The authors collected the data from the network and application layers and extracted the application logs via stats for nerds and DNS queries. They experimented at different bandwidth levels: 500 kbit/s, 1024 kbit/s, 3000 kbit/s, and 100 kbit/s. Wassermann et al. [42] used an app called YoMoApp [41] for collecting datasets of YouTube streaming over a cellular network. They collected a dataset covering over 360 different mobile users, over 70 cellular network operators, and a total of 3000 video sessions. The authors replicated the design and functionalities of the YouTube application for data collection. Though prior work has focused on YouTube data collection, there has been no specific focus on the poor or variable network bandwidth observed in developing countries. Further, we found that the existing datasets miss critical QoE information: [22] misses stalling information, and [42] uses resolution rather than bitrate, which provides a more detailed assessment of quality. Moreover, some datasets do not use the production endpoint of YouTube [41].

DASH Dataset: Taraghi et al. [38] released a dataset containing different video codecs and bitrates with a maximum resolution of 8K. Feuvre et al. [24] released a dataset of HEVC content ranging from HD to UHD bitrates. Such MPEG-DASH packaged content datasets allow (1) efficient usage of these codecs, as not all devices support all the available codecs, and (2) experimenting with different DASH adaptation techniques that support several codecs. The DASH datasets are complementary to ours, as in the comparison between HTTP/3 and legacy HTTP/2, the DASH algorithm and the codec used for the streamed video remain the same. Our dataset can boost DASH algorithms under poor networks and protocol switching.
QoE dataset: These datasets can be used for improving the rate adaptation of DASH algorithms and are collected using two traditional video quality assessment techniques. (1) Subjective assessment: Such assessment typically uses a MOS score that represents the video quality perceived by the end user. The prior work typically creates a dataset of high-quality videos along with distorted versions that incorporate quality change/degradation and/or stalling events. Chen et al. [10] designed a model to predict the time-varying subjective quality (TVSQ value). Duanmu et al. [13] proposed a streaming QoE index, which accounts for the instantaneous degradation of quality and the initial rebuffering and stalling perceived by the end user. Bampis et al. [6] created a database of Netflix videos and prepared distorted videos by imposing different playout patterns, such as different compression rates and rebuffering rates. This dataset simulates a real network by using different bandwidth patterns. They compute the MOS score from the subjective evaluation with respect to the frame index for all the distorted videos. (2) Objective assessment: Such assessment computes quality scores such as PSNR and SSIM. Bampis et al. [5] created a database of 420 distorted videos. They compute continuous and retrospective prediction scores such as MOS, PSNR, and SSIM. The dataset also includes the number of rebuffering events and different playback bitrates.

QoE datasets differ from ours in that (1) we have not artificially created a distorted video dataset; rather, we use realistic bandwidth patterns that cause quality drops and stalling instances while streaming videos over YouTube.
Different QoE datasets either have no stalling [10], fixed stalling events at fixed durations [13], or fixed stalling patterns [15]. (2) The length of a video is at most 300 sec, whereas each of our videos lasts about 3000 sec. (3) None of the datasets have network logs in addition to application logs; we have time-synchronized application and network logs that can be used to better characterize the impact of the network on application QoE. (4) All datasets are of HAS (HTTP Adaptive Streaming) with only one version of HTTP; we provide two HTTP protocols, two web browsers, and different locations. (5) The datasets contain QoE information in an aggregated fashion that lacks temporal patterns; e.g., [5] provides only one rebuffering duration for the entire video.

# 3 H3B Tool

In this section, we present the design of our tool H3B. The tool takes a video ID and a network bandwidth pattern as input, and outputs application layer logs capturing application QoE together with network layer logs capturing packet exchanges and the protocol used. Details follow:

# 3.1 Input to H3B

YouTube video selection: We first create a list of 46 YouTube videos, each lasting 40 minutes to 1 hour, as shown in Table 1. The genres of videos are News, Entertainment, Education, Indian talk shows, Comedy, Stanford online lectures, and British TV series. The minimum and maximum video qualities of all the videos were $144\mathrm{p}$ and $1080\mathrm{p}$. We use the YouTube developer's API to fetch the necessary information about a particular video using its unique identifier. We made an HTTP GET request with the video's unique identifier. It is important to note that this endpoint is no longer available, possibly due to changes in YouTube's policies. We obtain a mapping between itag, bitrate, and the video quality corresponding to a particular video using the YouTube developers API. Table 2 shows the mapping of itag to the corresponding bitrate and the quality label of a video.
Multiple quality labels include $144\mathrm{p}$, $240\mathrm{p}$, $360\mathrm{p}$, $480\mathrm{p}$, $720\mathrm{p}$, and $1080\mathrm{p}$. YouTube supports both constant bitrate (CBR) and variable bitrate (VBR) encoding; thus, the same quality label can have multiple bitrates and hence multiple itags. For example, Table 2 shows three different bitrates for each quality label and the corresponding itags.

Table 1: Details of the selected videos
| Number of Videos | 46 |
| --- | --- |
| Video duration | 40 minutes - 1 hour |
| Types of Videos | News, Entertainment and Education videos |
| Minimum Video Quality | 144p |
| Maximum Video Quality | 1080p |
+ +Table 2: Video-Info Table of two sample videos + +
Video ID → -SiOHTHIN4

| Itag | 137 | 22 | 135 | 134 | 133 | 160 | 18 | 136 | 242 | 136 | 398 | 244 | 397 | 243 | 396 | 278 | 394 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Bitrate | 4466585 | 743210 | 930845 | 654628 | 301679 | 121826 | 576066 | 2029085 | 244471 | 2029085 | 1497188 | 849429 | 737405 | 492803 | 399554 | 112110 | 94436 |
| Quality | 1080p | 720p | 480p | 360p | 240p | 144p | 360p | 720p | 240p | 720p | 720p | 480p | 480p | 360p | 360p | 144p | 144p |

Video ID → 4QCHHal-pt4

| Itag | 248 | 247 | 244 | 243 | 242 | 278 | 18 | 22 | 137 | 136 | 135 | 397 | 134 | 396 | 133 | 395 | 278 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Bitrate | 2680531 | 1503593 | 756822 | 412207 | 224867 | 98699 | 512759 | 649740 | 4466452 | 1804135 | 1019629 | 706715 | 603781 | 387958 | 303545 | 185179 | 98699 |
| Quality | 1080p | 720p | 480p | 360p | 240p | 144p | 360p | 720p | 1080p | 720p | 480p | 480p | 360p | 360p | 240p | 240p | 144p |
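A video_info mapping like Table 2's can be sketched as a simple lookup from itag to bitrate and quality label. The dict and helper below are our own illustration (a few common itags with example bitrates in bps), not an authoritative mapping for any particular video:

```python
# Illustrative itag -> (bitrate in bps, quality label) lookup in the spirit
# of Table 2. Names are ours; bitrate values are examples, not authoritative.
VIDEO_INFO = {
    137: (4466585, "1080p"),
    136: (2029085, "720p"),
    135: (930845, "480p"),
    134: (654628, "360p"),
    133: (301679, "240p"),
    160: (121826, "144p"),
}

def bitrate_kbps(itag):
    """Map a segment's itag (as seen in the application log) to kbps."""
    bps, _quality = VIDEO_INFO[itag]
    return bps / 1000

print(round(bitrate_kbps(137), 1))
```

Such a lookup is what turns the per-segment itag values in the application log into the per-chunk bitrates used later for QoE computation.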
Network emulation: The purpose of the H3B tool is to replay a given network behavior and stream YouTube videos on that network. While it is not possible to replay bandwidth patterns over an "in-the-wild" setup, a large number of recent studies [28, 12, 31, 45, 4, 9] have relied on benchmark network emulator frameworks like Mahimahi [33] to analyze protocol performance in a realistic setup. Accordingly, to emulate a specific network bandwidth over the test setup, we use Mahimahi Link Shell (mm-link) emulation, which emulates network links using user-specified packet-delivery trace files. Mahimahi maintains two queues - one for the uplink traffic and one for the downlink traffic. Whenever packets arrive from Mahimahi's mm-link or the Internet, they are placed directly into one of the two packet queues, depending upon whether they are uplink or downlink. It then releases the packets based on the input packet-delivery trace file: each line in a packet-delivery trace file represents a time at which a packet of size MTU can be delivered. Also, mm-link wraps to the beginning of the input packet-delivery trace file on reaching its end. We write a Python script to generate such packet-delivery trace files to support the corresponding network bandwidths. Other than emulating bandwidth patterns, we support emulating any natural network packet trace collected using a packet capture tool such as Wireshark or tcpdump. We converted the packet traces to packet-delivery trace files using the mechanism used in [28].

![](images/54e030467b4d4fb955e9ade10988924ca4ef5868c3646893272b3a984a468916.jpg)
Figure 1: H3B architecture

# 3.2 H3B tool

H3B first creates a new user profile and then saves the location of the user data directory, where the application logs are to be stored. It allows enabling and disabling QUIC while streaming the videos. In Chrome/Chromium, the --enable-quic flag is employed, while in Firefox, a set of preferences including network.http.http3.enable, network.http.http3.enable_0rtt, network.http.http3.priority, network.http.http3.support_version1, and network.http.http3.enabled is adjusted accordingly. Notably, when we disable QUIC, the browser setup uses the legacy HTTP/2 instead of HTTP/3. In order to embed the YouTube video inside the browser, we created our own I-Frame and appended the YouTube video ID to the end of "https://www.youtube.com/embed", which is called the I-Frame source URL [43]. Autoplay was also set inside the I-Frame so that the video plays automatically once the player is loaded. We collect the application logs by creating a log extension and then integrating it into the browser. Loading this log extension was easy in Chromium, but in Firefox it was more difficult; therefore, for Firefox, we use the command-line tool web-ext to run this extension with the --verbose flag to print the logs in the terminal. We use the console.log API inside the log extension to collect the logs. It logs all the HTTP requests and responses between the client and the server. Inside the logs, we observe two types of requests: (1) the video playback request, which contains the video segment information, and (2) the QoE request. In addition to the application-level logs, we collect the network-level logs. We use the packet capture tool tcpdump to collect the network packet captures (pcap). On the completion of video streaming, the application and network logs are stored in the local file directory, and the tcpdump process is terminated along with the user profile.

# 3.3 Output of H3B

H3B generates an application log and the corresponding network log at its output.
```json
"52": {
    "type": "videoplayback",
    "request-ts": 6.583864990234375,
    "complete-ts": 13.9705791015625,
    "total_time": 7.386714111328125,
    "total_bytes": 174508,
    "complete_range": [0, 174508],
    "complete_itag": 397,
    "complete_rbuf_sec": 0.0,
    "complete_rbuf": 0,
    "complete_rn": 1,
    "complete_clen": 150413396,
    "complete_dur": 2624.64,
    "kbytes/second": 23070.876465714333
},
"58": {
    "type": "streamingstats",
    "request-ts": 10.362389892578125,
    "itag": 397,
    "buffer_health": "10.027:0.00",
    "cmt": "10.027:0.000",
    "bwe": "10.027:130000",
    "vps": "10.027:B"
}
```

Application log structure: The application logs provide two types of information. (1) Video-related information while streaming a YouTube video: the timestamp of the requested segment, the total bytes of the segment, the itag value (indicating the audio and video quality), the duration of the requested segment, and the protocol (TCP or QUIC) used for the segment request. (2) Statistics about video streaming: information like the amount of video data that has been rendered and played, the quality of the segment stored in the buffer, buffer health (how much video is buffered at any time $t$), and the playback duration in terms of how much of the video has been played. A sample application log is shown above, and the important parameters are detailed in Table 3.

Table 3: Application log description
| Application log parameter | Description |
| --- | --- |
| request-ts | the request timestamp of the requested segment |
| complete-ts | the completion timestamp of the requested segment |
| total_time | the total time from when the segment is requested until the request completes |
| total_bytes | the total bytes of the segment |
| range | bytes of the segment data to be downloaded |
| itag | the requested video segment quality |
| rbuf | the receiver buffer in seconds |
| clen | the maximum possible length of the requested segment |
| dur | duration of the downloaded segment |
| buffer_health | at real time t, for how much duration the video has been buffered |
| cmt | at real time t, how much duration of the video has been played |
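These per-chunk fields feed the QoE computation described next. As a concrete illustration of a chunk-level Pensieve-style metric (average bitrate minus average bitrate variation minus 4.3 × average stall), here is a minimal sketch; the function and variable names are ours, and the inputs are illustrative:

```python
# Sketch of a chunk-level QoE metric in the Pensieve style:
# QoE = avg bitrate - avg bitrate variation - 4.3 * avg stall.
# Names and inputs are illustrative, not taken from the H3B code.

def qoe(bitrates_kbps, stalls_sec, rebuf_penalty=4.3):
    n = len(bitrates_kbps)
    avg_bitrate = sum(bitrates_kbps) / n
    # bitrate variation: mean absolute change between consecutive chunks
    changes = [abs(b2 - b1) for b1, b2 in zip(bitrates_kbps, bitrates_kbps[1:])]
    avg_variation = sum(changes) / max(len(changes), 1)
    avg_stall = sum(stalls_sec) / n
    return avg_bitrate - avg_variation - rebuf_penalty * avg_stall

# Four chunks: one quality switch (300 -> 750 kbps) and a 1.2 s stall
print(qoe([300, 300, 750, 750], [0.0, 0.0, 1.2, 0.0]))
```

In this example the quality switch and the stall both pull the score below the 525 kbps average bitrate.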
We convert the raw application log into a JSON file by extracting only the required features. We compute QoE using the application logs and the video_info file shown in Table 2, which provides the itag-to-bitrate mapping. Thus, we obtain QoE.csv, which contains the average bitrate, average bitrate variation, average stall, and QoE. We use the formula-based QoE (Quality of Experience) metric used in Pensieve [28]: $QoE = \text{Avg. Bitrate} - \text{Avg. Bitrate Variation} - 4.3 \times \text{Avg. Stall}$.

Various QoE formulas used in previous literature are discussed in [7]. The prior work gives more preference either to bitrate, to bitrate variation, or to rebuffering. We calculate QoE at the chunk level, without giving more preference to any one factor. [7] discusses influencing factors such as quality switching frequency, quality switching magnitude, quality switching direction, duration of rebuffering, frequency of rebuffering, bitrate, initial delay, and user engagement. Our QoE metric considers most of them, including the initial delay, which is nothing but the initial stall duration. Since we compute the average bitrate, variation, and stall, the impact of factors like quality switching direction and frequency of rebuffering also gets included.

Network log structure: The network logs are packet captures (pcap) obtained using tcpdump. Packet captures contain detailed information about a packet, such as timestamp, source and destination IP addresses, port number, application protocol (HTTP), transport protocol (TCP or QUIC or UDP), fields specific to TCP (SYN, FIN, RST, etc.), packet length, sequence number, acknowledgment number (for TCP; for QUIC one has to decrypt the packet header), packet type (TCP data/ACK, QUIC handshake/initial/payload, etc.), whether retransmitted, etc. Since we are interested in correlating the number of bytes transferred over the two transport protocols, TCP and QUIC, with the application layer QoE, we extract five fields and convert them into a csv format.
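Once in this five-field csv form, correlating per-protocol byte counts with QoE is a few lines of aggregation. A sketch (the column order follows the network.csv layout; the helper name and sample rows are ours):

```python
from collections import defaultdict

# Sketch: aggregate bytes per transport protocol from five-field rows of the
# form (Timestamp, Source IP, Destination IP, protocol, length).
# The helper name and sample rows below are illustrative.

def bytes_per_protocol(rows):
    totals = defaultdict(int)
    for _ts, _src, _dst, proto, length in rows:
        totals[proto] += int(length)
    return dict(totals)

sample = [
    ("03:04:42", "192.168.29.48", "142.251.12.188", "QUIC", "559"),
    ("03:04:42", "192.168.29.48", "142.251.12.188", "QUIC", "1392"),
    ("03:04:43", "192.168.29.48", "142.251.12.188", "TCP", "1500"),
]
print(bytes_per_protocol(sample))
```

In practice the rows would come from a `csv.reader` over network.csv; the same aggregation, bucketed by time, reveals the TCP/QUIC switching discussed in Section 1.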
The structure of the network.csv is shown below:

Timestamp, Source IP, Destination IP, protocol, length
03:04:42, 192.168.29.48, 142.251.12.188, QUIC, 559

# 4 Dataset Description

We launched a measurement campaign using H3B over 5 different geographical locations: Delhi, Bangalore, New York, Germany, and Singapore. The measurement was conducted using two different web browsers, Chrome/Chromium and Firefox. We stream once over HTTP/3 and once over legacy HTTP/2. One of the testbeds is set up inside our campus premises, and the rest are set up using Digital Ocean machines. We emulate 5 different bandwidth patterns, namely Dynamic High (DH): a good bandwidth; Dynamic Low (DL): a poor bandwidth; Dynamic Very Low (DVL): a very poor bandwidth; and Real: real packet captures under mobility over WiFi and over a cellular network.

Bandwidth patterns: To emulate the DH, DL, and DVL bandwidth patterns, we created user-specified packet-delivery trace files. Table 4 shows the bandwidth patterns, where each pattern has a starting bandwidth, a last bandwidth, and the Jump required to move from the starting bandwidth to the last bandwidth. After each jump, the bandwidth stays at that level for the Jump duration. This pattern from the starting to the last bandwidth, and then back from the last to the starting bandwidth, repeats in a cyclic fashion. Note that to emulate these bandwidth patterns, we use the Mahimahi network emulation tool. We also replay real packet captures. The real packet captures are of two categories: over WiFi and over a cellular network. Since we focus on poor networks, we utilize 105 mobility traces collected while watching YouTube videos over WiFi [16]. For cellular networks, we collected "in-the-wild" traces from 10 Android smartphone users watching YouTube videos of their choice over the cellular network. The volunteers are from 4 developing countries, namely Kenya, Saudi Arabia, Pakistan, and India. The data was collected from 14 cities.
Note that we focus on low- and middle-income developing countries for collecting the data. The volunteers were instructed to collect network traces (pcap) using the PCAPdroid application on their smartphones while watching the YouTube videos. The data was collected while the volunteers were traveling between their workplace and home. The phones used by the volunteers ran Android versions 9-12.

Table 4: Bandwidth Patterns
| Bandwidth Pattern | Starting Bandwidth | Last Bandwidth | Jump (Kbps) | Jump Duration |
| --- | --- | --- | --- | --- |
| Dynamic High (DH) | 1152 Kbps | 896 Kbps | -256 | 240 sec |
| Dynamic Low (DL) | 640 Kbps | 128 Kbps | -256 | 240 sec |
| Dynamic Very Low (DVL) | 64 Kbps | 256 Kbps | +64 | 60 sec |
| Real | Mobility traces from [16] and "in-the-wild" volunteer traces | | | |
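The cyclic start-to-last-and-back behavior of the Table 4 patterns can be sketched as below. This is an illustrative helper of ours, not the actual Mahimahi trace-file format (which lists per-packet delivery timestamps); each emitted value is held for one jump duration.

```python
def cyclic_pattern(start_kbps, last_kbps, jump_kbps, n_steps):
    """Yield the first n_steps bandwidth values of a cyclic pattern that
    walks from the starting bandwidth to the last bandwidth in fixed
    jumps, then back, each value held for one jump duration."""
    # One half-cycle: start, start+jump, ..., last (works for +/- jumps).
    up = list(range(start_kbps, last_kbps + (1 if jump_kbps > 0 else -1), jump_kbps))
    # Append the way back, excluding both endpoints so they are not doubled.
    cycle = up + up[-2:0:-1]
    return [cycle[i % len(cycle)] for i in range(n_steps)]

# Dynamic Very Low (DVL): 64 -> 256 Kbps in +64 Kbps jumps, 60 s each.
print(cyclic_pattern(64, 256, 64, 8))
# [64, 128, 192, 256, 192, 128, 64, 128]
```

The same helper reproduces DH and DL with negative jumps, e.g. `cyclic_pattern(1152, 896, -256, 4)` alternates between 1152 and 896 Kbps.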
Dataset structure: As part of the campaign, we obtain an application- and network-level dataset of 5936 sessions (Firefox: 100 over HTTP/3 + 100 over HTTP/2; Chromium: 1864 over HTTP/3 + 1864 over HTTP/2; original browser: 1004 over HTTP/3 + modified browser: 1004 over HTTP/3) for a total duration of 5464 hours. Fig. 2(a) shows the hierarchy of the dataset collected. $FS1, FS2, \dots, FS100$ are the streaming session pairs over the Firefox browser. $S1, S2, S3, \dots, S1864$ are the streaming session pairs over the Chromium browser. Each streaming session pair consists of streaming once over HTTP/3 and once over HTTP/2. Fig. 2(b) shows the hierarchy of the dataset collected for the original and modified Chromium browsers over HTTP/3. $S1865, S1866, \dots, S2868$ are the streaming session pairs over the original and modified Chromium browsers. Table 5 shows the duration of the collected dataset over different browsers, locations, and network conditions.

![](images/02610436303b0c9a0b027b9cf7bd6f8641da9c37d7def88f230712589f9b7f1f.jpg)
(a)

![](images/ee0d38fa4edf801e9337c55e426844b4d650072919a9b5a1f0e9d4a05cd169a7.jpg)
(b)
Figure 2: Dataset hierarchy of (a) 1864 video session pairs over the Chromium browser and 100 video session pairs over the Firefox browser on DH, DL, DVL, and Real bandwidth patterns across Delhi, Bangalore, Singapore, Germany, and New York; (b) 1004 video session pairs over the modified and original HTTP/3-enabled Chromium browser in Delhi on the DVL bandwidth pattern.

Table 5: Dataset Details
| Configuration | Duration | Configuration | Duration |
| --- | --- | --- | --- |
| Chromium Delhi duration | 3941 hours | Firefox DVL duration | 142 hours |
| Chromium Bangalore duration | 490 hours | Chromium Total DH duration | 278 hours |
| Chromium Singapore duration | 253 hours | Chromium Total DL duration | 1712 hours |
| Chromium Germany duration | 459 hours | Chromium Total DVL duration | 157 hours |
| Chromium New York duration | 218 hours | Chromium Total Real duration | 894 hours |
| Chromium Total OB duration | 738 hours | Chromium Total MB duration | 715 hours |
![](images/42670643e980eb8f0343d6784668c57910835028ce7c14c115c62b2e714a9499.jpg)
(a)

![](images/855a95cb4efac9ffe4072a58aa78a5b84e399d9d9c90c71c2e4ba1caa06a7af8.jpg)
(b)

![](images/63f22152063f22800b41857aa6fa0b464576cc02ac87b5ec8bc7d2c6da5dd588.jpg)
(c)
Figure 3: Hypothesis testing results over 1964 streaming session pairs on QoE over (a) the Chromium and Firefox browsers, (b) different bandwidth patterns, and (c) TCP fallback over Chrome/Chromium: percentage of bytes transferred over TCP across all HTTP/3 streaming sessions for various bandwidth patterns. DH: Dynamic High, DL: Dynamic Low, DVL: Dynamic Very Low, and Real.

# 5 Dataset Analysis

We now use our dataset to analyze the performance of HTTP/3 by comparing it with legacy HTTP/2.

QoE of HTTP/3 vs legacy HTTP/2: To compare the QoE obtained over HTTP/3 and HTTP/2 statistically, we perform hypothesis testing over all 1964 streaming session pairs. We find out in what percentage of sessions HTTP/3 provides (a) better, (b) statistically similar, and (c) worse QoE compared to HTTP/2. Case (b) is the null hypothesis, and cases (a) and (c) are the alternative hypotheses. If the null hypothesis is rejected, we perform a two-tailed test [30] to check whether the HTTP/3-enabled browser performs better. Fig. 3(a) shows the hypothesis-testing results on the computed QoE values over the Chromium and Firefox browsers. In the case of Chromium, we observe that in $81.5\%$ of the cases, the two browsers behave differently, and in $41\%$ of the cases the legacy HTTP/2 browser outperforms the HTTP/3 one. For Firefox, we make a similar observation: HTTP/3 is not always a winner. When we look into the network logs for these $41\%$ of cases where HTTP/3 underperforms legacy HTTP/2, we find several instances of QUIC-to-TCP and TCP-to-QUIC protocol switching. From Fig.
3(b), we observe that under DL and DVL, HTTP/2 performs better than HTTP/3 for more than $40\%$ of the video session pairs. Fig. 3(c) shows the CDF of the percentage of TCP bytes under the various bandwidth patterns. We observe that for DH, TCP traffic is almost negligible in $80\%$ of the sessions; indeed, QUIC is expected to face few connection failures at high bandwidth. For DL, DVL, and Real, the share of TCP traffic at the 70th percentile of sessions is $32\%$, $52\%$, and $98\%$, respectively. Fig. 4(a) indicates that HTTP/3 outperformed legacy HTTP/2 in more than $40\%$ of cases in Bangalore and Germany. However, across all five geographical regions, there are more than $30\%$ of cases in which legacy HTTP/2 yielded better application QoE than HTTP/3. Correlating this with the TCP fallback in Fig. 4(b), we observe that Delhi has more instances of TCP traffic within HTTP/3-enabled streaming than the other locations; Fig. 4(a) shows that HTTP/2 provides better QoE for Delhi and Singapore compared to HTTP/3. Thus, we observe that protocol switching impacts video streaming performance.

![](images/4fdbb34e66b74416d818cb51f046ce24b7837aa571a604d6b004aae304e1c06e.jpg)
(a)

![](images/5bc9d8de3d4767b3c9737158c3d77a33dcad699663b3927f729e45fbdcb9da7b.jpg)
(b)
Figure 4: (a) Hypothesis testing results over different geographical locations. (b) TCP fallback over Chrome/Chromium across locations. D: Delhi, B: Bangalore, N: New York, G: Germany, and S: Singapore. (c) Hypothesis testing results over 1004 streaming session pairs on QoE of the original Chromium browser (OB) and the modified Chromium browser (MB).

![](images/273bf033907c2bb08542d6783ec32c848d19453ff5f3495ec77ed6ec87919350.jpg)
(c)

QoE of HTTP/3 supported original vs modified browser: We observe that such protocol switching occurs due to a faulty implementation of fallback in the browser.
Hence, an HTTP/3 browser tends to fall back to TCP on a low-bandwidth network. We therefore modify the Chromium source code to disable the fallback completely and launch another campaign using H3B, obtaining 1004 YouTube session pairs over the original and modified browsers. Hypothesis testing shows that the modified HTTP/3 browser outperforms the original browser in $60\%$ of cases. Thus, we conclude that fallback to TCP hinders achieving the benefits of HTTP/3.

# 6 Discussion

We now discuss the limitations of our dataset, ethical considerations, and how the dataset can be utilized for future research.

# 6.1 Data Limitations

(1) More diversity in networking conditions: Though our dataset comprises various bandwidth patterns with a specific focus on poor bandwidth, it could be further diversified with high-bandwidth network types and other poor-bandwidth network types. We allow replaying real packet traces collected under mobility for both WiFi and cellular networks; this can be extended with diverse traces collected under various mobility and poor-bandwidth scenarios. (2) More diversity in locations: We have collected data for 5 locations. There could be much more diversity, with a focus on locations in developing countries. (3) Diversity in platform: Our data collection was performed from a desktop platform; it can be extended to include mobile platforms.

# 6.2 Ethical Considerations

This paper does not directly interact with human subjects or use any network services beyond their usage restrictions. All the network services used in this work (Digital Ocean machines) have been paid for as per usage.

# 6.3 Research Topics

Dynamic tuning of protocol configurations based on learning the network environment: Given the diversity of our dataset in terms of network conditions, it can help better configure the protocol hyper-parameters used for video streaming.
For example, to provide better application QoE, one can analyze the network conditions to tune hyperparameters such as the choice of transport protocol (QUIC or TCP), the transport protocol's congestion control, or the ABR (adaptive bitrate) parameters.

Intelligent network-application integration: We believe our dataset can enable intelligent network-application integration. To provide better application QoE, applications should adapt themselves to network conditions. Since our dataset has both types of logs, it can enable the design of applications that look for signatures in the network behavior and adapt accordingly.

Predicting and optimizing YouTube's QoE: Our dataset provides the QoE of the YouTube application under different network bandwidth patterns. Utilizing such data, one can develop a learning model to predict YouTube's QoE. Further, our dataset for poor or variable networks provides an opportunity to use intelligent learning techniques for optimizing YouTube QoE.

Predicting QoE of HTTP/3-based video streaming applications: Our dataset contains the network packet exchanges and the corresponding QoE. We believe it will be valuable for training a learning model that observes network packet exchanges and predicts QoE. Such a model can predict the QoE of other video streaming applications that use packet exchanges similar to YouTube's.

# 7 Conclusion

In this paper, we presented H3B, a toolbox to collect application- and network-layer logs for YouTube video streaming. One of its major features is that it can emulate any given network pattern. We utilized this tool to launch a measurement campaign across 5 geographical locations and 5 bandwidth patterns, obtaining a dataset of 5464 streaming hours over 5936 sessions of YouTube video streaming. We further analyzed the dataset and observed that HTTP/3 is not always a winner compared to legacy HTTP/2.
We believe our dataset will be valuable to the community for developing various solutions to provide better application QoE on top of the HTTP/3 browsers. + +# References + +[1] S. Ahmad, A. L. Haamid, Z. A. Qazi, Z. Zhou, T. Benson, and I. A. Qazi. A view from the other side: Understanding mobile phone characteristics in the developing world. In Proceedings of the 2016 Internet Measurement Conference. +[2] Z. Akhtar, Y. S. Nam, R. Govindan, S. Rao, J. Chen, E. Katz-Bassett, B. Ribeiro, J. Zhan, and H. Zhang. Oboe: Auto-tuning video ABR algorithms to network conditions. In Proceedings of SIGCOMM'18. ACM. +[3] S. Arisu and A. C. Begen. Quickly starting media streams using quic. In Proceedings of the 23rd Packet Video Workshop, 2018. +[4] V. Arun, M. T. Arashloo, A. Saeed, M. Alizadeh, and H. Balakrishnan. Toward formally verifying congestion control behavior. In Proceedings of the 2021 ACM SIGCOMM 2021 Conference. +[5] C. G. Bampis, Z. Li, I. Katsavounidis, T.-Y. Huang, C. Ekanadham, and A. C. Bovik. Towards perceptually optimized adaptive video streaming-a realistic quality of experience database. IEEE Transactions on Image Processing, 30:5182-5197, 2021. +[6] C. G. Bampis, Z. Li, A. K. Moorthy, I. Katsavounidis, A. Aaron, and A. C. Bovik. Study of temporal effects on subjective video quality of experience. IEEE Transactions on Image Processing, 26(11):5217-5231, 2017. +[7] N. Barman and M. G. Martini. Qoe modeling for http adaptive video streaming-a survey and open challenges. IEEE Access, 7:30831-30859, 2019. + +[8] T. Böttger, F. Cuadrado, G. Tyson, I. Castro, and S. Uhlig. Open connect everywhere: A glimpse at the internet ecosystem through the lens of the netflix cdn. ACM SIGCOMM Computer Communication Review, 2018. +[9] F. Cangialosi, A. Narayan, P. Goyal, R. Mittal, M. Alizadeh, and H. Balakrishnan. Site-to-site internet traffic control. In Proceedings of the Sixteenth European Conference on Computer Systems, 2021. +[10] C. Chen, L. K. Choi, G. 
De Veciana, C. Caramanis, R. W. Heath, and A. C. Bovik. Modeling the time-varying subjective quality of http video streams with rate adaptations. IEEE Transactions on Image Processing, 23(5):2206-2221, 2014.
[11] D. Schinazi, F. Yang, and I. Swett. Chrome is deploying HTTP/3 and IETF QUIC. https://blog.chromium.org/2020/10/chrome-is-deploying-http3-and-ietf-quic.html, 2020. [Online; accessed 19-October-2021].
[12] M. Dong, T. Meng, D. Zarchy, E. Arslan, Y. Gilad, B. Godfrey, and M. Schapira. PCC vivace: Online-learning congestion control. In 15th USENIX Symposium on Networked Systems Design and Implementation, 2018.
[13] Z. Duanmu, K. Zeng, K. Ma, A. Rehman, and Z. Wang. A quality-of-experience index for streaming video. IEEE Journal of Selected Topics in Signal Processing, 11(1):154-166, 2016.
[14] M. Duke. RFC 9369: QUIC Version 2, 2023.
[15] D. Ghadiyaram, J. Pan, and A. C. Bovik. A subjective and objective study of stalling events in mobile streaming videos. IEEE Transactions on Circuits and Systems for Video Technology, 29(1):183-197, 2017.
[16] C. Gutterman, K. Guo, S. Arora, X. Wang, L. Wu, E. Katz-Bassett, and G. Zussman. Requet Dataset. https://github.com/Wimnet/RequetDataSet, 2019. [Online; accessed December 19, 2023].
[17] C. Gutterman, K. Guo, S. Arora, X. Wang, L. Wu, E. Katz-Bassett, and G. Zussman. Requet: Real-time qoe detection for encrypted youtube traffic. In Proceedings of the 10th ACM Multimedia Systems Conference, pages 48-59, 2019.
[18] R. Hamilton. Re: [QUIC] graceful fallback to tcp. https://mailarchive.ietf.org/arch/msg/quic/ph1IAVBa5pW1AgDvr8fbD741Auw/, 2016. [Online; accessed December 19, 2023].
[19] J. Herbots, M. Wijnants, W. Lamotte, and P. Quax. Cross-layer metrics sharing for quicker video streaming. In Proceedings of CoNEXT'20.
[20] J. Iyengar and M. Thomson. RFC 9000: QUIC: A UDP-based multiplexed and secure transport. Internet Engineering Task Force, 2021.
[21] A. M. Kakhki, S. Jero, D. Choffnes, C. Nita-Rotaru, and A. Mislove.
Taking a long look at quic: an approach for rigorous evaluation of rapidly evolving transport protocols. In Proceedings of the 2017 Internet Measurement Conference, pages 290-303, 2017.
[22] T. Karagkioules, D. Tsilimantos, S. Valentin, F. Wamser, B. Zeidler, M. Seufert, F. Loh, and P. Tran-Gia. A public dataset for youtube's mobile streaming client. In 2018 Network Traffic Measurement and Analysis Conference (TMA), pages 1-6. IEEE, 2018.
[23] A. Langley, A. Riddoch, A. Wilk, A. Vicente, C. Krasic, D. Zhang, F. Yang, F. Kouranov, I. Swett, J. Iyengar, et al. The quic transport protocol: Design and internet-scale deployment. In Proceedings of the Conference of the ACM Special Interest Group on Data Communication, 2017.
[24] J. Le Feuvre, J.-M. Thiesse, M. Parmentier, M. Raulet, and C. Daguet. Ultra high definition hevc dash data set. In Proceedings of the 5th ACM Multimedia Systems Conference, pages 7-12, 2014.

[25] D. Lorenzi, M. Nguyen, F. Tashtarian, S. Milani, H. Hellwagner, and C. Timmerer. Days of future past: an optimization-based adaptive bitrate algorithm over http/3. In Proceedings of the 2021 Workshop on Evolution, Performance and Interoperability of QUIC.
[26] N. C. Luong, D. T. Hoang, S. Gong, D. Niyato, P. Wang, Y.-C. Liang, and D. I. Kim. Applications of deep reinforcement learning in communications and networking: A survey. IEEE Communications Surveys & Tutorials, 21(4):3133-3174, 2019.
[27] K. MacMillan, T. Mangla, J. Saxon, and N. Feamster. Measuring the performance and network utilization of popular video conferencing applications. In Proceedings of the 21st ACM Internet Measurement Conference, 2021.
[28] H. Mao, R. Netravali, and M. Alizadeh. Neural adaptive video streaming with pensieve. In ACM SIGCOMM, pages 197-210, 2017.
[29] A. Mondal, S. Sengupta, B. R. Reddy, M. Koundinya, C. Govindarajan, P. De, N. Ganguly, and S. Chakraborty. Candid with youtube: Adaptive streaming behavior and implications on data consumption.
In NOSSDAV'17.
[30] D. C. Montgomery. Design and Analysis of Experiments. John Wiley & Sons, 2017.
[31] J. Nejati and A. Balasubramanian. An in-depth study of mobile browser performance. In Proceedings of the 25th International Conference on World Wide Web, 2016.
[32] R. Netravali, A. Sivaraman, S. Das, A. Goyal, K. Winstein, J. Mickens, and H. Balakrishnan. Mahimahi: Accurate record-and-replay for HTTP. In USENIX Annual Technical Conference, pages 417-429, 2015.
[33] R. Netravali, A. Sivaraman, S. Das, A. Goyal, K. Winstein, J. Mickens, and H. Balakrishnan. Mahimahi: Accurate record-and-replay for HTTP. In 2015 USENIX Annual Technical Conference, 2015.
[34] Telecom Regulatory Authority of India. The Indian Telecom Services Performance Indicators. https://www.trai.gov.in/sites/default/files/QPIR_27082021.pdf, 2021.
[35] M. Palmer, T. Krüger, B. Chandrasekaran, and A. Feldmann. The quic fix for optimal video streaming. In Proceedings of the Workshop on the Evolution, Performance, and Interoperability of QUIC, pages 43-49, 2018.
[36] C. Perkins and J. Ott. Real-time audio-visual media transport over QUIC. In Proceedings of EPIQ'20.
[37] I. Swett. QUIC Deployment Experience @ Google. https://www.ietf.org/proceedings/96/slides/slides-96-quic-3.pdf, 2017. [Online; accessed 19-October-2021].
[38] B. Taraghi, H. Amirpour, and C. Timmerer. Multi-codec ultra high definition 8k mpeg-dash dataset. In Proceedings of the 13th ACM Multimedia Systems Conference, pages 216-220, 2022.
[39] S. Tellakula. Comparing HTTP/3 vs. HTTP/2 Performance. https://blog.cloudflare.com/http-3-vs-http-2/, 2020. [Online; accessed 19-October-2021].
[40] T. Van, H. A. Tran, S. Souihi, and A. Mellouk. Empirical study for dynamic adaptive video streaming service based on Google transport QUIC protocol. In 2018 IEEE 43rd Conference on Local Computer Networks, 2018.
[41] F. Wamser, M. Seufert, P. Casas, R. Irmer, P. Tran-Gia, and R. Schatz.
Yomoapp: A tool for analyzing qoe of youtube http adaptive streaming in mobile networks. In 2015 European Conference on Networks and Communications (EuCNC). IEEE, 2015.
[42] S. Wassermann, P. Casas, M. Seufert, and F. Wamser. On the analysis of youtube qoe in cellular networks through in-smartphone measurements. In 2019 12th IFIP Wireless and Mobile Networking Conference (WMNC). IEEE, 2019.

[43] YouTube. YouTube Embedded Players and Player Parameters. https://developers.google.com/youtube/player_parameters, 2021. [Online; accessed December 19, 2023].
[44] A. Yu. Benchmarking QUIC. https://medium.com/@the.real.yushuf/benchmarking-quic-1fd043e944c7, 2020. [Online; accessed 19-October-2020].
[45] Z. Zheng, Y. Ma, Y. Liu, F. Yang, Z. Li, Y. Zhang, J. Zhang, W. Shi, W. Chen, D. Li, et al. Xlink: Qoe-driven multi-path QUIC transport in large-scale video services. In Proceedings of the 2021 ACM SIGCOMM Conference.
[46] X. Zuo, J. Yang, M. Wang, and Y. Cui. Adaptive bitrate with user-level QoE preference for video streaming. In IEEE INFOCOM, pages 1279-1288. IEEE, 2022.
\ No newline at end of file diff --git a/adatasetforanalyzingstreamingmediaperformanceoverhttp3browsers/images.zip b/adatasetforanalyzingstreamingmediaperformanceoverhttp3browsers/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6eadd566741c2cf8474ec7b5a4e50f0a10098149 --- /dev/null +++ b/adatasetforanalyzingstreamingmediaperformanceoverhttp3browsers/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cddea89bbabbc42f574cc8d9bc07fd1b7689146bf488fa2fb8c543f7af38bbb +size 416055 diff --git a/adatasetforanalyzingstreamingmediaperformanceoverhttp3browsers/layout.json b/adatasetforanalyzingstreamingmediaperformanceoverhttp3browsers/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6539c8882c577cba16719340a0532fdcbddbe68f --- /dev/null +++ b/adatasetforanalyzingstreamingmediaperformanceoverhttp3browsers/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ecf800ca6bb9da38eba1613778921a855e3aaf6e7e571c430111411fbf9d43e +size 304023 diff --git a/adatasetofrelighted3dinteractinghands/b60c6cf4-b1eb-4cfa-bb5a-de073de1c20b_content_list.json b/adatasetofrelighted3dinteractinghands/b60c6cf4-b1eb-4cfa-bb5a-de073de1c20b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..58ecdc7efd695fd914cb7bb9e321938ac8423683 --- /dev/null +++ b/adatasetofrelighted3dinteractinghands/b60c6cf4-b1eb-4cfa-bb5a-de073de1c20b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9f1bf73f5a6e50c3b65eba7003a920aa8fbbd5371fb078dc578a636ed243268 +size 71344 diff --git a/adatasetofrelighted3dinteractinghands/b60c6cf4-b1eb-4cfa-bb5a-de073de1c20b_model.json b/adatasetofrelighted3dinteractinghands/b60c6cf4-b1eb-4cfa-bb5a-de073de1c20b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0bd935b0309dc3f5c5c9a804c780851e12586fa5 --- /dev/null +++ 
b/adatasetofrelighted3dinteractinghands/b60c6cf4-b1eb-4cfa-bb5a-de073de1c20b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:591d0b122b3161f6868e0469512ab84a5035ff598c4284df3bd8486ce45c8a61 +size 93168 diff --git a/adatasetofrelighted3dinteractinghands/b60c6cf4-b1eb-4cfa-bb5a-de073de1c20b_origin.pdf b/adatasetofrelighted3dinteractinghands/b60c6cf4-b1eb-4cfa-bb5a-de073de1c20b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ba5de132057a0eeddf163f72e852e444581200b5 --- /dev/null +++ b/adatasetofrelighted3dinteractinghands/b60c6cf4-b1eb-4cfa-bb5a-de073de1c20b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:577e81331fb63cc2b6d0ac3f0bf768ae66f837bab43170826996faa12acc72f3 +size 7727228 diff --git a/adatasetofrelighted3dinteractinghands/full.md b/adatasetofrelighted3dinteractinghands/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3044f61fce19153fd800eaa4682b5513ad57cb5b --- /dev/null +++ b/adatasetofrelighted3dinteractinghands/full.md @@ -0,0 +1,320 @@ +# A Dataset of Relighted 3D Interacting Hands + +Gyeongsik Moon + +mks0601@meta.com + +Shunsuke Saito + +shunsukesaito@meta.com + +Weipeng Xu + +xuweipeng@meta.com + +Rohan Joshi + +rohanjoshi@meta.com + +Julia Buffalini + +jbuffalini@meta.com + +Harley Bellan + +harleybellan@meta.com + +Nicholas Rosen + +nicholasrosen@meta.com + +Jesse Richardson + +jesserichardson@meta.com + +Mallorie Mize + +malloriemize@meta.com + +Philippe de Bree + +phillippedebree@meta.com + +Tomas Simon + +tsimon@meta.com + +Bo Peng + +bopeng@meta.com + +Shubham Garg + +ssgarg@meta.com + +Kevyn McPhail + +kmcphail@meta.com + +Takaaki Shiratori + +tshiratori@meta.com + +Meta Reality Labs Research + +# Abstract + +The two-hand interaction is one of the most challenging signals to analyze due to the self-similarity, complicated articulations, and occlusions of hands. 
Although several datasets have been proposed for two-hand interaction analysis, none of them achieves both 1) diverse and realistic image appearances and 2) diverse and large-scale groundtruth (GT) 3D poses at the same time. In this work, we propose Re:InterHand, a dataset of relighted 3D interacting hands that achieves both goals. To this end, we employ a state-of-the-art hand relighting network with our accurately tracked two-hand 3D poses. We compare our Re:InterHand with existing 3D interacting hands datasets and show its benefits. Our Re:InterHand is available here.

# 1 Introduction

Humans often make two-hand interactions during daily conversation or when interacting with objects. Self-similarity, complicated articulations, and the small size of hands make analyzing such two-hand interactions greatly challenging. In particular, when the input of an analysis system is a single image, the problem becomes much more difficult, as in most cases much of one hand is occluded by the other hand.

One fundamental direction for successfully analyzing interacting hands is collecting large-scale 3D interacting hands datasets, which contain in-the-wild images and corresponding 3D groundtruth (GT). Unfortunately, this is not trivial. Due to the inherent scale and depth ambiguity, true 3D data is not obtainable from a single 2D observation. In addition, a single 2D observation does not provide enough information about other viewpoints, which is necessary for 3D data collection. Therefore, there have been three alternative approaches to collect 3D hand data.

![](images/863b20ea4376ff2d6b47ef98b50b41e464e9b6fa7a38bb928feffc5d750a1046.jpg)
(a) InterHand2.6M (Lab dataset)
Figure 1: Comparison of datasets from existing data capture approaches ((a), (b), and (c)) and our new Re:InterHand dataset ((d)). ITW represents in-the-wild environments.
+ +![](images/825f272bbdc13b5d005a3509bc8dbd89c17523f2765a03939abff188d913750b.jpg) +(b) HIC (Natural dataset) + +![](images/b0ee79b68fbe952a6ee962dcddcc00b13dc7774c85458c141b4ae37a3425e17b.jpg) +(c) Ego3DHands (Composited dataset) + +![](images/d303463560770b678974a6688802b7d4f15834f9510d4dbd08fbc3fa031918ca.jpg) +(d) Re:InterHand (Ours) + +
| | (a) InterHand2.6M | (b) HIC | (c) Ego3DHands | (d) Re:InterHand (Ours) |
| --- | --- | --- | --- | --- |
| Appearance diverse? | No | No | Yes | Yes |
| Appearance realistic? | Yes | Yes | No | Middle |
| Appearance close to ITW? | No | Yes | No | Middle |
| GT 3D pose diverse? | Yes | No | Yes | Yes |
# 1.1 Lab datasets

Lab datasets are captured in specially designed studios with hundreds of calibrated and synchronized cameras. InterHand2.6M [30, 29] is the most widely used 3D interacting hands dataset; it is captured in a studio with 100 calibrated and synchronized cameras. Fig. 1 (a) shows an image example of InterHand2.6M.

Pros. They provide large-scale, diverse, and accurate GT 3D poses.

Cons. Images have monotonous appearances. The figure shows that images have far more limited diversity of color, backgrounds, and illumination than in-the-wild images.

# 1.2 Natural datasets

Natural datasets, such as HIC [52] and RGB2Hands [53], are captured in daily environments with a much smaller number of cameras, for example, a single RGBD camera. Fig. 1 (b) shows an image example of HIC.

Pros. As the figure shows, the image appearance is close to that of in-the-wild images.

Cons. The diversity and scale of such datasets are limited. Although the capture setup is much lighter than that of lab datasets, bringing the setup to diverse places and capturing there is not easy, which limits appearance diversity (e.g., in front of desks). As only a few cameras are used, such datasets cannot provide accurate annotations for complicated interacting hands. Therefore, they provide only simple poses.

# 1.3 Composited datasets

Composited datasets, such as Ego3DHands [23], are a composition of hand images with random background images. The purpose of the composition is to enhance the appearance diversity of lab images or synthesized images. Fig. 1 (c) shows an example.

Pros. They often have accurate and diverse GT 3D poses, as the composition is performed on lab datasets or synthesized datasets.

Cons. The figure shows that their image appearances are not realistic due to the light inconsistency between foreground and background.
# 1.4 The proposed Re:InterHand dataset

All three existing approaches have their own limitations. In this work, we propose the Re:InterHand dataset, which complements all three existing dataset collection approaches. Fig. 1 (d) shows an

Table 1: Comparison of hand datasets that provide GT 3D poses. To count the number of images, we consider images from different viewpoints at the same time step as different ones.
| Datasets | Image appearance | GT | # of subjects | # of images | Two-hand interactions |
| --- | --- | --- | --- | --- | --- |
| Dexter+Object [44] | Natural | 3D fingertip coord. | 1 | 3K | No |
| STB [59] | Natural | 3D joint coord. | 1 | 36K | No |
| EgoDexter [33] | Natural | 3D fingertip coord. | 4 | 3K | No |
| RHD [61] | Composited | 3D joint coord. | 20 | 44K | No |
| PanopticStudio [43] | Lab | 3D joint coord. | N/A | 15K | No |
| FPHA [9] | Natural | 3D joint coord. + 3D obj. | 6 | 105K | No |
| GANerated [31] | Composited | 3D joint coord. | N/A | 264K | No |
| FreiHAND [62] | Natural | MANO | 32 | 134K | No |
| ObMan [15] | Composited | MANO + 3D obj. | 20 | 150K | No |
| EHF [39] | Lab | SMPL-X | 1 | 100 | No |
| HO3D [13] | Natural | 3D joint coord. + 3D obj. | 10 | 78K | No |
| ContactPose [3] | Lab | 3D joint coord. + 3D obj. | 50 | 2.9M | No |
| HUMBI [56, 55] | Lab | MANO | 453 | 24M | No |
| DexYCB [4] | Natural | MANO + 3D obj. | 10 | 582K | No |
| AGORA [38] | Realistic | SMPL-X | 350 | 19K | No |
| DART [8] | Composited | MANO | N/A | 787K | No |
| BlurHand [34] | Lab | MANO | 11 | 156K | No |
| H2O [21] | Natural | MANO + 3D obj. | 4 | 571K | Weak |
| Assembly101 [42] | Natural | 3D joint coord. + action labels | 53 | 111M | Weak |
| AssemblyHands [35] | Natural | 3D joint coord. | 34 | 3M | Weak |
| ARCTIC [7] | Lab | MANO + SMPL-X + 3D obj. | 10 | 2.1M | Weak |
| HIC [52] | Natural | MANO | 1 | 36K | Strong |
| RGB2Hands [53] | Natural | 3D joint coord. wo. fingertips | 2 | 1K | Strong |
| InterHand2.6M [30] | Lab | MANO | 27 | 2.6M | Strong |
| Ego3DHands [23] | Composited | 3D joint coord. + masks | 1 | 55K | Strong |
| Re:InterHand (Ours) | Realistic | MANO + masks | 10 | 1.5M | Strong |
image example of our Re:InterHand dataset. Our dataset is constructed by rendering 3D hands with accurately tracked 3D poses and relighting them with diverse environment maps. By using accurately tracked 3D poses from our multi-camera studio, we secure diverse GT 3D poses. For the relighting, we employ a state-of-the-art hand relighting network [17], which provides diverse and realistic image appearances. The figure shows that our rendered data has appearances close to those of in-the-wild images.

# 2 Related works

3D hand datasets. Tab. 1 shows comparisons of various 3D hand datasets. Motivated by the Kinect device, early datasets comprise depth maps [49, 51, 47, 57, 44]. For more practical applications that do not require depth cameras, RGB-based datasets have been introduced. STB [59] includes sequences with simple hand poses. HIC [52] is one of the earliest approaches to address two-hand interactions. RHD [61] consists of synthetically rendered images created with commercial software and composited with web-crawled background images. EgoDexter [33] includes sequences with simple hand-object interactions. Panoptic Studio [43] is captured in a specially designed dome and contains whole-body humans. FPHA [9] includes hand sequences captured from first-person viewpoints. GANerated [31] is synthetically generated using generative adversarial networks and composited with background images. EHF [39] is a small-scale dataset captured in a multi-camera studio; it includes a whole-body performance of a single subject. ObMan [15] includes simple hand-object interactions; it is synthetically rendered using commercial software and composited with background images. FreiHAND [62] is captured with a portable multi-camera setup in various places and consists of natural images and composited images. Mueller et al. [32] introduced a synthetic depth map dataset of two interacting hands. YT3D [20] includes web-crawled videos and 3D pseudo-GT of hands.
NeuralAnnot [29] introduced 3D pseudo-GT of hands on the MSCOCO [24, 18] dataset. Both YT3D and NeuralAnnot fit a 3D hand model [40] to 2D joint coordinates to obtain 3D pseudo-GT.

![](images/2f60b4cbe93c17fb77d1ddfe61c31ca717303a9c5e9233aa5461085910051741.jpg)
Figure 2: Left: HIC [52], InterHand2.6M [30], and our Re:InterHand have a short distance between two hands. Right: InterHand2.6M [30] and our Re:InterHand have many samples where two hands are in contact. We count a sample as contacting if the shortest distance between the vertices of the two hands is smaller than $3\mathrm{mm}$. We exclude samples of ARCTIC [7] whose distance between two hands is longer than 1 meter.

![](images/5f0b0e8b4bae74de44c62ae66bc88ba2f486f1b66cfd072832ed49f59080d388.jpg)

They mostly contain single-hand 3D pseudo-GT without the 3D relative translation between two hands due to depth and scale ambiguity. HO3D [13] includes 3D hands interacting with various types of objects. RGB2Hands [53] introduced a small-scale 3D interacting hands dataset with 3D joint coordinate annotations without fingertips. InterHand2.6M [30] is a large-scale 3D interacting hands dataset, captured in a specially designed multi-camera studio. ContactPose [3] contains sequences of 3D hands and contact maps, generated from hand-object interactions. HUMBI [56, 55] is a large-scale dataset that provides 3D whole-body annotations, captured in a specially designed multi-camera studio. DexYCB [4] includes large-scale 3D hands interacting with various types of objects.

Ego3DHands [23] is a composition of random background images and rendered two-hand images. H2O [21] contains two hands interacting with objects. AGORA [38] is rendered with 3D scans of people and scenes. Like our Re:InterHand dataset, AGORA considers light consistency between foreground and background, which makes its image appearances realistic.
Ego4D [11] includes a huge amount of first-person viewpoint videos; however, it does not provide 3D hand annotations. DART [8] contains rendered images of a single hand with accessories and their texture map, alpha-blended with background images from MSCOCO [24]. Assembly101 [42] contains large-scale videos of 3D hands assembling several objects. AssemblyHands [35] improved Assembly101 [42] with a better annotation pipeline. ARCTIC [7] includes 3D hands and whole-body annotations with 3D objects. BlurHand [34] is made from a subset of InterHand2.6M [30]. It includes blurred hand images and corresponding GT 3D hands.

![](images/66e5f76d5b0a8dc130167452301291aba5d85c62a90b141daaa68af54e3db021.jpg)
Figure 3: t-SNE of two-hand 3D poses of our Re:InterHand, InterHand2.6M [30], and HIC [52].

Although many 3D hand datasets have been introduced, only a small number of them feature strong two-hand interactions [52, 53, 30, 23]. Among them, InterHand2.6M [30] and HIC [52] are widely used, as RGB2Hands [53] has no 3D fingertip annotations and the images of Ego3DHands [23] are not photorealistic. Some datasets [61, 43, 39, 20, 3, 56, 55, 21, 38, 29, 42, 35, 7, 34] have two-hand annotations; however, they have weak interactions between hands. Fig. 2 shows that only HIC [52], InterHand2.6M [30], and our Re:InterHand have a short distance between two hands and a meaningful ratio of contacting samples. Unfortunately, none of these two-hand datasets achieves both goals at the same time: 1) rich and realistic image appearances and 2) accurate and diverse GT 3D poses of interacting hands. Our Re:InterHand is the first dataset that achieves both goals. In addition, Fig. 3 shows that our Re:InterHand has much more diverse 3D interacting hand poses than InterHand2.6M [30] and HIC [52].

![](images/52532e5dd2df861aa77052ffbf3389bd7fb4017c019625e76c9482c14863c827.jpg)
Figure 4: The overall pipeline of constructing our Re:InterHand dataset.
3D interacting hands recovery. Due to the absence of large-scale datasets, early works [36, 1, 52, 50, 32, 53] are based on a fitting framework, which fits 3D hand models to geometric observations, such as RGBD sequences [36], hand segmentation maps [32], and dense matching maps [53]. InterHand2.6M [30, 29] motivated many regression-based methods [41, 58, 22, 14, 5, 6, 19, 25, 26]. Such regression-based methods outperform the above fitting-based approaches while running in real-time. Li et al. [22] introduced a Transformer-based network with cross-attention between right and left hands. Moon [26] presented a 3D interacting hands recovery network that addresses the domain gap between multi-camera datasets and in-the-wild datasets, which results in robust performance on in-the-wild images.

Relighting humans. Several works [54, 46, 60] have been proposed to relight faces and bodies; however, these models are not animatable. To enable relighting with animation, Bi et al. [2] presented a deep relightable appearance model for facial avatars. DART [8] provides a dataset of relighted hands; however, its images are not photorealistic as they do not consider light consistency between foreground and background. Iwase et al. [17] introduced an efficient neural relighting system for photorealistic hand relighting using a student-teacher framework and feature-based relighting [37]. We use the relighting system of Iwase et al. [17] due to its high-quality results and rendering efficiency.

# 3 Dataset construction

Fig. 4 shows the overall pipeline for the construction of our dataset. It consists of two stages: capture and relight.

# 3.1 Capture stage

The capture stage captures hand data from our multi-camera studio. We capture data from 10 subjects, as shown in Fig. 5. Two types of sequences, peak poses and range of motion, are captured following InterHand2.6M [30].
The peak pose is a sequence that includes a transition from a neutral pose to a pre-defined pose and then a transition back to the neutral pose. The purpose of the peak pose is to capture poses as diverse as possible, including extreme poses and maximal finger bending. The range of motion is a sequence of natural hand motion driven with minimal instructions, such as waving hands as if friends are coming over. In this way, we could capture both 1) diverse poses from the peak pose sequences and 2) natural hand motion from the range of motion sequences. We provide more image and pose examples of our dataset in the supplementary material.

![](images/69c1086a9926245f923ec84092a3e7b4d51cb8afb2501dc2745c2d3616125387.jpg)
Figure 5: Each column shows images of a subject of our Re:InterHand dataset. For each column, the top image with the neutral pose is from the capture stage, and the remaining images with captured poses are from the relight stage.

Capture studio. Our capture studio has 469 lights and 170 calibrated, synchronized cameras. All cameras lie on the front, side, and top hemispheres of the hand and are placed at a distance of about one meter from it. Images are captured at $4096 \times 2668$ pixels at 90 frames per second (fps). Following Bi et al. [2], we interleave fully lit frames and partially lit frames every 3 frames. The capture stage only uses fully lit frames, and the relight stage uses partially lit frames to train the relighting network.

2D joint coordinates and 3D scans. We process the raw video data by performing 2D joint detection [45] and 3D scanning [12]. The 2D joint detector is trained on our held-out manually annotated dataset, which includes 900K images with rotation center coordinates of hand joints, where our manual annotation tool is similar to that of Moon et al. [30]. Our 2D joint detector has an error of 2.5 pixels in a $1024 \times 667$ image space.

3D joint coordinates.
InterHand2.6M [30] triangulated detected multi-view 2D joint coordinates with the RANSAC algorithm. We found that their approach suffers from temporally inconsistent results, as the triangulation does not take into account the similarity between close frames. For example, some joints could have inconsistent semantic positions across viewpoints due to failures of the 2D joint detector. In this case, the triangulated 3D coordinates of such joints could be very different between close frames if the viewpoints selected by RANSAC are different. Instead of triangulation, we train a 3D joint detection network, which takes a voxelized 3D scan of hands and is supervised with multi-view 2D joint coordinates. Our network produces much more temporally consistent and smooth results, as the inputs of close frames (i.e., voxelized 3D scans) are almost the same.

The network is based on V2V-PoseNet [27], a state-of-the-art 3D joint detection network for voxelized hands. First, we make two volumes from 3D scans by placing 3D bounding boxes around the mean of the initially obtained left and right hands' 3D joint coordinates, where the initial ones are obtained with the RANSAC algorithm. Then, we voxelize the 3D scans around each 3D bounding box to (96, 96, 96) resolution. The voxelized 3D scans are passed to V2V-PoseNet, which consists of stacked 3D convolutional layers. We apply soft-argmax [48] to the output of V2V-PoseNet, which produces 3D joint coordinates in a differentiable way. The obtained 3D joint coordinates are supervised with multi-view 2D joint coordinates by projecting the 3D ones to each viewpoint and calculating the $L1$ distance from the 2D ones. We train V2V-PoseNet on all frames, which takes 1 day, and test it on the same frames to obtain their 3D joint coordinates. Our obtained 3D joint coordinates have an error of $2.0\mathrm{mm}$ . The errors are measured against our held-out human-annotated set.

3D hand model fitting.
We additionally obtain 3D meshes of hands as 1) they provide useful surface information that does not exist in the 3D joint coordinates and 2) they are inputs of the relighting network [17]. To this end, we fit 3D hand models, such as MANO [40], to the 3D joint coordinates and 3D scans obtained above using NeuralAnnot [29]. The 3D hand model is a parametric model that produces 3D hand meshes from 3D pose and identity (ID) codes. The 3D pose represents 3D joint angles, and ID codes determine the 3D hand shape, such as thickness, in the zero pose.

NeuralAnnot takes a single image and 3D joint coordinates as inputs and outputs 3D pose and ID codes, used to drive 3D hand models. We use the network architecture of Pose2Pose [28] for NeuralAnnot. The 3D pose and ID codes are supervised with the 3D joint coordinates after performing forward kinematics. Also, 3D meshes from the 3D pose and ID codes are supervised with 3D scans by minimizing the closest distance between the 3D meshes and 3D scans. Several regularizers, such as 1) $L2$ regularizers on the 3D pose and ID codes, which prevent extreme meshes, and 2) a collision avoidance regularizer, are applied as well. We separately train NeuralAnnot for each subject, and the ID code is directly optimized, not regressed from the input image and 3D joint coordinates. In this way, 3D hands from the same subject have a consistent ID code. Training NeuralAnnot takes less than 1 hour for each capture. After training NeuralAnnot, we test it on the training set and manually inspect all frames. Frames with wrong fitting results are excluded from the following relight stage. Fig. 6 shows that it produces temporally consistent results. We checked that the MANO meshes from NeuralAnnot have $1.3\mathrm{mm}$ errors from the 3D scans without any translation/rotation/scale alignments.
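The fitting objective above combines an $L1$ joint term after forward kinematics, a surface term pulling mesh vertices toward the 3D scan, and $L2$ regularizers on the pose and ID codes. The following NumPy sketch illustrates that structure only; the function name, term weights, and array shapes are assumptions, and the collision-avoidance regularizer is omitted:

```python
import numpy as np

def fitting_loss(pred_joints, gt_joints, pred_verts, scan_points,
                 pose_code, id_code, w_reg=1e-3):
    """Illustrative per-frame fitting objective (not the paper's exact code).

    pred_joints: (J, 3) joints from forward kinematics of the hand model
    gt_joints:   (J, 3) target 3D joint coordinates
    pred_verts:  (V, 3) hand-model mesh vertices
    scan_points: (S, 3) points sampled from the 3D scan
    """
    # 1) L1 supervision of the joints produced by forward kinematics.
    joint_term = np.abs(pred_joints - gt_joints).sum()

    # 2) Surface term: each mesh vertex is pulled toward its closest scan point.
    d = np.linalg.norm(pred_verts[:, None, :] - scan_points[None, :, :], axis=-1)
    surface_term = d.min(axis=1).mean()

    # 3) L2 regularizers on pose and ID codes to prevent extreme meshes.
    reg_term = w_reg * (np.sum(pose_code ** 2) + np.sum(id_code ** 2))

    return joint_term + surface_term + reg_term
```

In the actual pipeline these terms would be minimized with a gradient-based optimizer over the pose codes per frame and a single ID code per subject.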
![](images/ccdb5d5dd49533d411ad744b4d2b02fb50ff34bdc180b2ce5b57068bf8fa7792.jpg)

![](images/8aa834d31e559f5f0618c475700aa6692686f38cd57d0bdaee371d85af516d7d.jpg)
Figure 6: Comparison of 3D hand model fits from (a) the triangulation of InterHand2.6M [30] and (b) our V2V-PoseNet. The three frames are consecutive, and the time difference between neighboring frames is 0.02 seconds. Given the very short time difference between frames, the three frames should have almost the same 3D hands. (a) suffers not only from collisions but also from temporal inconsistency between very close frames. On the other hand, (b) does not suffer from collisions and achieves temporal consistency between close frames.

# 3.2 Relight stage

After capturing data in the above capture stage, we train a relighting network [17] for each subject following its original training strategy. We train the relighting networks on single-hand data, as 3D hand model fittings are more accurate for single-hand data than for two-hand data, which makes training the relighting network more stable. Please note that the single-hand data to train the relighting networks are also obtained from NeuralAnnot by training and testing it on single-hand captures. For more details, please refer to Iwase et al. [17]. After training the relighting networks, we use the 3D poses from NeuralAnnot [29] of the above capture stage to render two hands with specified camera parameters. For illuminations, we use 2144 high-resolution environment maps from Gardner et al. [10].

# 4 Dataset release

Our Re:InterHand dataset includes 1) relighted images, 2) non-binary masks, and 3) 3D hand model fittings, as shown in Fig. 7. The relighted images and non-binary foreground masks are from Sec. 3.2, and the 3D hand model fittings are from Sec. 3.1. Out of 10 captures, we split 7 captures for the training set and the remaining 3 captures for the testing set.

Relighted images.
To render relighted images, we first sample cameras out of our 170 cameras for each capture to make overall rendering faster and remove redundancy. To sample cameras, we sum the 2D joint detection confidences from Sec. 3.1 for each camera. Then, we pick the top 50 cameras based on the sum of confidence values. In this way, we can exclude cameras where hands are almost not visible. An iterative farthest-point sampling algorithm then samples $N$ cameras from the selected 50 cameras based on the camera positions to obtain viewpoints as diverse as possible. For the frame-based research, we downsample captures to 5 fps and set $N = 20$ . Then, we render images with a different environment map for each frame, which results in 493K images. Also, for the video-based research, we set $N = 5$ and render images at 30 fps with a different environment map for each segment, which results in 739K images. For both the frame-based and video-based splits, images with the same frame index and different viewpoints are rendered from a shared environment map in a multi-view consistent way.

![](images/767e9cb1c077d2b60a876a7a7de7eef50cf479f51e441cb78533c6ee9176cc61.jpg)

![](images/3c4d288c3f1b8a8adc934965032af1e046f07d04018bfac8160b2b5fdf545a35.jpg)

![](images/0466fdde8c3955f66acb46bd190542069900866e33bb7711bee302457bc453c7.jpg)
(a)
(b)
(c)
Figure 7: (a) relighted rendered images, (b) masked images, and (c) MANO fitting overlay.

![](images/0747518e7afe277be17b8997b72e2309845a98e509ee58db87a3efd6ca119acf.jpg)

Table 2: RRVE comparison of InterWild [26] trained on different data including variants of our dataset. We use the 3rd-person viewpoint split of our Re:InterHand.
| Training sets | InterHand2.6M | HIC | Re:InterHand |
| --- | --- | --- | --- |
| InterHand2.6M + MSCOCO | 19.74 | 23.59 | 37.59 |
| + Capture stage | 19.14 | 23.09 | 34.23 |
| + Capture stage and Composite | 19.50 | 24.37 | 31.30 |
| + Capture stage and Composite (w. AdaIN [16]) | 19.44 | 24.83 | 27.96 |
| + Capture stage and Relight stage (Ours) | 19.40 | 21.36 | 20.07 |
![](images/bb76b27624dfaee330c55d3080f417b47a8e707276a662491bf2497f504dff.jpg)
(a) Capture stage

![](images/c4052c336c24f107dcda1b4d1636450d709caa31d9212d7ded05dfbbcf90dd94.jpg)
(b) Capture stage + Composite

![](images/236b29d46a76a51dba928dfb46daa37cd28a96b8c4527babab83981e23d670e0.jpg)
(c) Capture stage + Composite (w. AdaIN)

![](images/d83a032b31224ae8033d093440ab7c82e950a97f213f907e72910a39cb364d38.jpg)
(d) Capture stage + Relight stage (Ours)
Figure 8: Image examples of datasets constructed in Tab. 2.

One advantage of our approach is that we can render images with any novel camera parameters. In addition to the above pre-defined 3rd-person viewpoints, we also render relighted images from random egocentric viewpoints to contribute our Re:InterHand to the egocentric 3D hand community. To this end, we first manually place a reference camera between the two eyes using 3D scans that include both hands and a face. The orientation of the reference camera is set to look at the center of the hands. Then, for each frame, we randomize the 3D camera position within a $20\mathrm{cm}$ 3D box around the reference camera. We also randomize the 3D orientation of the camera by applying $[-30^{\circ}, 30^{\circ}]$ pitch, yaw, and roll. The principal point is set to the image center, and the focal length is randomized from [0.7, 1.8] times the image size. To simulate fisheye cameras, often used for egocentric viewpoints, we randomize the fisheye distortion with a pre-defined mean and standard deviation. For the frame-based research, we render images with a different environment map for each frame at 30 fps, which results in 148K images. Also, for the video-based research, we render images with a different environment map for each segment at 30 fps, which results in 148K images.

For each peak pose sequence, we exclude frames in the first and last segments whose hand velocity is below a threshold, removing many neutral-pose frames.
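The farthest-point sampling used earlier to pick the $N$ rendering cameras can be sketched as follows. This is a generic illustration of the algorithm, not the paper's code; the choice of starting camera and tie-breaking are assumptions:

```python
import numpy as np

def farthest_point_sampling(positions, n_samples, start=0):
    """Iteratively pick the camera whose position is farthest from all
    cameras selected so far, yielding maximally spread viewpoints."""
    positions = np.asarray(positions, dtype=float)
    selected = [start]
    # Distance from every camera to its nearest already-selected camera.
    min_dist = np.linalg.norm(positions - positions[start], axis=1)
    while len(selected) < n_samples:
        nxt = int(np.argmax(min_dist))  # farthest from the selected set
        selected.append(nxt)
        min_dist = np.minimum(
            min_dist, np.linalg.norm(positions - positions[nxt], axis=1))
    return selected
```

Each iteration costs one distance pass over the candidate cameras, which is cheap for 50 candidates and keeps the chosen viewpoints well spread over the camera hemisphere.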
Both 3rd-person and egocentric viewpoint images are rendered in 1K resolution.

Non-binary masks. We provide non-binary masks, obtained from the relight stage. The non-binary masks are different from binary masks rendered from MANO fittings, as the non-binary ones are perfectly aligned with the images, including detailed silhouettes such as nails and muscle bulging.

3D hand model fittings. We provide MANO [40] fittings as it is the most widely used 3D hand model in the community. Also, we provide the 3D hand model fittings used to render the relighted images.

# 5 Experiments

For all experiments, we report the right hand-relative vertex error (RRVE), a Euclidean distance (mm) between estimated and GT 3D meshes of two hands after aligning the translation of the right hand's root joint (i.e., wrist). Note that the most widely used metric of previous works [58, 22, 26] (MPVPE) is calculated after aligning the translation of the right and left hand separately; hence, their MPVPE does not consider the relative position between two hands, while our RRVE does. For the 3rd-person viewpoint experiments, we report RRVE on the test split of InterHand2.6M (H) [30], HIC [52], and the test split of our Re:InterHand. For the egocentric viewpoint experiments, we report RRVE on the test split of our Re:InterHand after training methods on its training split. For all experiments,

Table 3: Benchmark of state-of-the-art methods with the RRVE metric. Methods with $\dagger$ are tested with GT hand boxes. Methods with $*$ indicate that they are trained additionally on Re:InterHand. We use the 3rd-person viewpoint split of our Re:InterHand.
| Methods | InterHand2.6M | HIC | Re:InterHand |
| --- | --- | --- | --- |
| IntagHand [22]† | 19.96 | 67.11 | 52.91 |
| InterWild [26] | 19.74 | 23.59 | 37.59 |
| InterWild [26]* | 19.40 | 21.36 | 20.07 |
Table 4: Benchmark on the egocentric viewpoint split of our Re:InterHand.
| Methods | RRVE |
| --- | --- |
| InterWild [26] | 28.89 |
the frame-based split of Re:InterHand is used. For all datasets, the errors are calculated only for two-hand samples.

Effectiveness of the relight stage. Tab. 2 shows the effectiveness of the relight stage. It is noteworthy that our relight stage greatly reduces the error on HIC, which consists of real and natural images. Please note that the images of HIC are entirely novel, as they are used only for testing. Our relight stage also significantly reduces the test error on our Re:InterHand test set while slightly reducing errors on InterHand2.6M. As both InterHand2.6M and data from the capture stage consist of lab images, the data from the capture studio (the second row of the table) reduces the error on InterHand2.6M the most. However, it could not improve the test results on HIC with real and natural images, as its image appearances are far from those of real images, as shown in Fig. 8 (a). The variants with composition (the third and fourth rows) make the HIC performance of the baseline (the first row) worse. We think the reason is that their images have inconsistent light between foreground and background, as shown in Fig. 8 (b) and (c). To obtain more harmonious images, we applied AdaIN [16] to the raw RGB pixels of the foreground to make them follow the distributions of the background pixels. Unfortunately, as it is not aware of reflections, it often changes hand colors to unrealistic ones instead of preserving skin color and changing only the lighting, which results in performance degradation on HIC with real and natural image appearances.

Benchmark. Tabs. 3 and 4 provide benchmark results with IntagHand [22] and InterWild [26], state-of-the-art 3D interacting hands recovery methods. We use their official checkpoints, and GT hand boxes are used for IntagHand as it assumes them.
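The RRVE metric used throughout this section aligns only the right hand's root translation, so errors in the predicted relative placement of the two hands are penalized. A minimal sketch (representing the right hand's root by a vertex/joint index passed in by the caller is an assumption of this sketch):

```python
import numpy as np

def rrve(pred_right, pred_left, gt_right, gt_left, root_idx=0):
    """Right hand-relative vertex error: translate both predicted hands so
    the right hand's root matches GT, then average per-vertex Euclidean
    distances over both hands. Unlike per-hand MPVPE, the predicted
    left-right relative placement is preserved, so relative-translation
    errors contribute to the score."""
    # Single translation computed from the right hand's root, applied to both hands.
    offset = gt_right[root_idx] - pred_right[root_idx]
    pred = np.concatenate([pred_right, pred_left], axis=0) + offset
    gt = np.concatenate([gt_right, gt_left], axis=0)
    return float(np.linalg.norm(pred - gt, axis=1).mean())
```

A global translation of the whole prediction cancels out, while shifting only the left hand leaves a residual error, which is exactly the property that distinguishes RRVE from per-hand-aligned MPVPE.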
We present the Re:InterHand dataset, which provides images with highly realistic and diverse appearances of interacting hands and their corresponding GT 3D hands. To this end, our accurately tracked 3D poses, the state-of-the-art relighting network [17], and a number of high-resolution environment maps are used. We hope our dataset can bring the community one step closer to 3D interacting hands recovery in the wild.

Limitations. As Fig. 5 shows, our rendered images are cut at the forearm area. This is because our relighting network takes only a 3D hand geometry, not a whole-body one. We think this is not a severe issue, as most 3D hand analysis systems take cropped hand images produced by hand detectors, and hand detectors can be trained on large-scale real datasets with only 2D annotations. Also, we observed that there are sometimes artifacts in the relighted images. This is because the relighting network is trained on single-hand data and tested on two-hand data, which sometimes results in pose generalization failure. We expect a better relighting network could alleviate this issue.

# References

[1] Luca Ballan, Aparna Taneja, Jürgen Gall, Luc Van Gool, and Marc Pollefeys. Motion capture of hands in action using discriminative salient points. In ECCV, 2012.
[2] Sai Bi, Stephen Lombardi, Shunsuke Saito, Tomas Simon, Shih-En Wei, Kevyn Mcphail, Ravi Ramamoorthi, Yaser Sheikh, and Jason Saragih. Deep relightable appearance models for animatable faces. ACM TOG, 2021.
[3] Samarth Brahmbhatt, Chengcheng Tang, Christopher D. Twigg, Charles C. Kemp, and James Hays. ContactPose: A dataset of grasps with object contact and hand pose. In ECCV, 2020.
[4] Yu-Wei Chao, Wei Yang, Yu Xiang, Pavlo Molchanov, Ankur Handa, Jonathan Tremblay, Yashraj S Narang, Karl Van Wyk, Umar Iqbal, Stan Birchfield, et al. DexYCB: A benchmark for capturing hand grasping of objects. In CVPR, 2021.
[5] Xinhan Di and Pengqian Yu.
LWA-HAND: Lightweight attention hand for interacting hand reconstruction. In ECCVW, 2022.
[6] Zicong Fan, Adrian Spurr, Muhammed Kocabas, Siyu Tang, Michael J Black, and Otmar Hilliges. Learning to disambiguate strongly interacting hands via probabilistic per-pixel part segmentation. In 3DV, 2021.
[7] Zicong Fan, Omid Taheri, Dimitrios Tzionas, Muhammed Kocabas, Manuel Kaufmann, Michael J. Black, and Otmar Hilliges. ARCTIC: A dataset for dexterous bimanual hand-object manipulation. In CVPR, 2023.
[8] Daiheng Gao, Yuliang Xiu, Kailin Li, Lixin Yang, Feng Wang, Peng Zhang, Bang Zhang, Cewu Lu, and Ping Tan. DART: Articulated hand model with diverse accessories and rich textures. In NeurIPS Datasets and Benchmarks Track, 2022.
[9] Guillermo Garcia-Hernando, Shanxin Yuan, Seungryul Baek, and Tae-Kyun Kim. First-person hand action benchmark with RGB-D videos and 3D hand pose annotations. In CVPR, 2018.
[10] Marc-Andre Gardner, Kalyan Sunkavalli, Ersin Yumer, Xiaohui Shen, Emiliano Gambaretto, Christian Gagné, and Jean-François Lalonde. Learning to predict indoor illumination from a single image. ACM TOG, 2017.
[11] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4D: Around the world in 3,000 hours of egocentric video. In CVPR, 2022.
[12] Kaiwen Guo, Peter Lincoln, Philip Davidson, Jay Busch, Xueming Yu, Matt Whalen, Geoff Harvey, Sergio Orts-Escolano, Rohit Pandey, Jason Dourgarian, et al. The relightables: Volumetric performance capture of humans with realistic relighting. ACM TOG, 2019.
[13] Shreyas Hampali, Mahdi Rad, Markus Oberweger, and Vincent Lepetit. HOnnotate: A method for 3D annotation of hand and object poses. In CVPR, 2020.
[14] Shreyas Hampali, Sayan Deb Sarkar, Mahdi Rad, and Vincent Lepetit. Keypoint transformer: Solving joint identification in challenging hands and object interactions for accurate 3D pose estimation. In CVPR, 2022.
[15] Yana Hasson, Gül Varol, Dimitrios Tzionas, Igor Kalevatykh, Michael J. Black, Ivan Laptev, and Cordelia Schmid. Learning joint reconstruction of hands and manipulated objects. In CVPR, 2019.
[16] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017.
[17] Shun Iwase, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Timur Bagautdinov, Rohan Joshi, Fabian Prada, Takaaki Shiratori, Yaser Sheikh, and Jason Saragih. RelightableHands: Efficient neural relighting of articulated hand models. In CVPR, 2023.
[18] Sheng Jin, Lumin Xu, Jin Xu, Can Wang, Wentao Liu, Chen Qian, Wanli Ouyang, and Ping Luo. Whole-body human pose estimation in the wild. In ECCV, 2020.
[19] Dong Uk Kim, Kwang In Kim, and Seungryul Baek. End-to-end detection and pose estimation of two interacting hands. In ICCV, 2021.
[20] Dominik Kulon, Riza Alp Guler, Iasonas Kokkinos, Michael M. Bronstein, and Stefanos Zafeiriou. Weakly-supervised mesh-convolutional hand reconstruction in the wild. In CVPR, 2020.
[21] Taein Kwon, Bugra Tekin, Jan Stuhmer, Federica Bogo, and Marc Pollefeys. H2O: Two hands manipulating objects for first person interaction recognition. In ICCV, 2021.
[22] Mengcheng Li, Liang An, Hongwen Zhang, Lianpeng Wu, Feng Chen, Tao Yu, and Yebin Liu. Interacting attention graph for single image two-hand reconstruction. In CVPR, 2022.
[23] Fanqing Lin, Connor Wilhelm, and Tony Martinez. Two-hand global 3D pose estimation using monocular RGB. In WACV, 2021.
[24] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[25] Hao Meng, Sheng Jin, Wentao Liu, Chen Qian, Mengxiang Lin, Wanli Ouyang, and Ping Luo. 3D interacting hand pose estimation by hand de-occlusion and removal. In ECCV, 2022.
[26] Gyeongsik Moon. Bringing inputs to shared domains for 3D interacting hands recovery in the wild.
In CVPR, 2023. +[27] Gyeongsik Moon, Ju Yong Chang, and Kyoung Mu Lee. V2V-PoseNet: Voxel-to-voxel prediction network for accurate 3D hand and human pose estimation from a single depth map. In CVPR, 2018. +[28] Gyeongsik Moon, Hongsuk Choi, and Kyoung Mu Lee. Accurate 3D hand pose estimation for whole-body 3D human mesh estimation. In CVPRW, 2022. +[29] Gyeongsik Moon, Hongsuk Choi, and Kyoung Mu Lee. NeuralAnnot: Neural annotator for 3D human mesh training sets. In CVPRW, 2022. +[30] Gyeongsik Moon, Shoou-I Yu, He Wen, Takaaki Shiratori, and Kyoung Mu Lee. InterHand2.6M: A dataset and baseline for 3D interacting hand pose estimation from a single RGB image. In ECCV, 2020. +[31] Franziska Mueller, Florian Bernard, Oleksandr Sotnychenko, Dushyant Mehta, Srinath Sridhar, Dan Casas, and Christian Theobalt. GANerated hands for real-time 3D hand tracking from monocular RGB. In CVPR, 2018. +[32] Franziska Mueller, Micah Davis, Florian Bernard, Oleksandr Sotnychenko, Miceal Verschoor, Miguel A Otaduy, Dan Casas, and Christian Theobalt. Real-time pose and shape reconstruction of two interacting hands with a single depth camera. ACM TOG, 2019. +[33] Franziska Mueller, Dushyant Mehta, Oleksandr Sotnychenko, Srinath Sridhar, Dan Casas, and Christian Theobalt. Real-time hand tracking under occlusion from an egocentric RGB-D sensor. In ICCV, 2017. +[34] Yeounguk Oh, JoonKyu Park, Jaeha Kim, Gyeongsik Moon, and Kyoung Mu Lee. Recovering 3D hand mesh sequence from a single blurry image: A new dataset and temporal unfolding. In CVPR, 2023. +[35] Takehiko Ohkawa, Kun He, Fadime Sener, Tomas Hodan, Luan Tran, and Cem Keskin. AssemblyHands: towards egocentric activity understanding via 3D hand pose estimation. In CVPR, 2023. +[36] Iasonas Oikonomidis, Nikolaos Kyriazis, and Antonis A Argyros. Tracking the articulated motion of two strongly interacting hands. In CVPR, 2012. 
+[37] Rohit Pandey, Sergio Orts Escolano, Chloe Legendre, Christian Haene, Sofien Bouaziz, Christoph Rhemann, Paul Debevec, and Sean Fanello. Total relighting: learning to relight portraits for background replacement. ACM TOG, 2021. +[38] Priyanka Patel, Chun-Hao P Huang, Joachim Tesch, David T Hoffmann, Shashank Tripathi, and Michael J Black. AGORA: Avatars in geography optimized for regression analysis. In CVPR, 2021. +[39] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3D hands, face, and body from a single image. In CVPR, 2019. +[40] Javier Romero, Dimitrios Tzionas, and Michael J Black. Embodied Hands: Modeling and capturing hands and bodies together. ACM TOG, 2017. +[41] Yu Rong, Jingbo Wang, Ziwei Liu, and Chen Change Loy. Monocular 3D reconstruction of interacting hands via collision-aware factorized refinements. In 3DV, 2021. +[42] Fadime Sener, Dibyadip Chatterjee, Daniel Shelepov, Kun He, Dipika Singhania, Robert Wang, and Angela Yao. Assembly101: A large-scale multi-view video dataset for understanding procedural activities. In CVPR, 2022. + +[43] Tomas Simon, Hanbyul Joo, Iain Matthews, and Yaser Sheikh. Hand keypoint detection in single images using multiview bootstrapping. In CVPR, 2017. +[44] Srinath Sridhar, Franziska Mueller, Michael Zollhöfer, Dan Casas, Antti Oulasvirta, and Christian Theobalt. Real-time joint tracking of a hand manipulating an object from RGB-D input. In ECCV, 2016. +[45] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In CVPR, 2019. +[46] Tiancheng Sun, Jonathan T Barron, Yun-Ta Tsai, Zexiang Xu, Xueming Yu, Graham Fyffe, Christoph Rhemann, Jay Busch, Paul E Debevec, and Ravi Ramamoorthi. Single image portrait relighting. ACM TOG, 2019. +[47] Xiao Sun, Yichen Wei, Shuang Liang, Xiaou Tang, and Jian Sun. Cascaded hand pose regression. In CVPR, 2015. 
[48] Xiao Sun, Bin Xiao, Fangyin Wei, Shuang Liang, and Yichen Wei. Integral human pose regression. In ECCV, 2018.
[49] Danhang Tang, Hyung Jin Chang, Alykhan Tejani, and Tae-Kyun Kim. Latent regression forest: Structured estimation of 3D articulated hand posture. In CVPR, 2014.
[50] Jonathan Taylor, Lucas Bordeaux, Thomas Cashman, Bob Corish, Cem Keskin, Toby Sharp, Eduardo Soto, David Sweeney, Julien Valentin, Benjamin Luff, et al. Efficient and precise interactive hand tracking through joint, continuous optimization of pose and correspondences. ACM TOG, 2016.
[51] Jonathan Tompson, Murphy Stein, Yann LeCun, and Ken Perlin. Real-time continuous pose recovery of human hands using convolutional networks. ACM TOG, 2014.
[52] Dimitrios Tzionas, Luca Ballan, Abhilash Srikantha, Pablo Aponte, Marc Pollefeys, and Juergen Gall. Capturing hands in action using discriminative salient points and physics simulation. IJCV, 2016.
[53] Jiayi Wang, Franziska Mueller, Florian Bernard, Suzanne Sorli, Oleksandr Sotnychenko, Neng Qian, Miguel A Otaduy, Dan Casas, and Christian Theobalt. RGB2Hands: real-time tracking of 3D hand interactions from monocular RGB video. ACM TOG, 2020.
[54] Shugo Yamaguchi, Shunsuke Saito, Koki Nagano, Yajie Zhao, Weikai Chen, Kyle Olszewski, Shigeo Morishima, and Hao Li. High-fidelity facial reflectance and geometry inference from an unconstrained image. ACM TOG, 2018.
[55] Jae Shin Yoon, Zhixuan Yu, Jaesik Park, and Hyun Soo Park. HUMBI: A large multiview dataset of human body expressions and benchmark challenge. TPAMI, 2021.
[56] Zhixuan Yu, Jae Shin Yoon, In Kyu Lee, Prashanth Venkatesh, Jaesik Park, Jihun Yu, and Hyun Soo Park. HUMBI: A large multiview dataset of human body expressions. In CVPR, 2020.
[57] Shanxin Yuan, Qi Ye, Bjorn Stenger, Siddhant Jain, and Tae-Kyun Kim. BigHand2.2M benchmark: Hand pose dataset and state of the art analysis. In CVPR, 2017.
+[58] Baowen Zhang, Yangang Wang, Xiaoming Deng, Yinda Zhang, Ping Tan, Cuixia Ma, and Hongan Wang. Interacting two-hand 3D pose and shape reconstruction from single color image. In ICCV, 2021. +[59] Jiawei Zhang, Jianbo Jiao, Mingliang Chen, Liangqiong Qu, Xiaobin Xu, and Qingxiong Yang. 3D hand pose tracking and estimation using stereo matching. arXiv preprint arXiv:1610.07214, 2016. +[60] Xiuming Zhang, Sean Fanello, Yun-Ta Tsai, Tiancheng Sun, Tianfan Xue, Rohit Pandey, Sergio Orts-Escalano, Philip Davidson, Christoph Rhemann, Paul Debevec, et al. Neural light transport for relighting and view synthesis. ACM TOG, 2021. +[61] Christian Zimmermann and Thomas Brox. Learning to estimate 3D hand pose from single RGB images. In ICCV, 2017. +[62] Christian Zimmermann, Duygu Ceylan, Jimei Yang, Bryan Russell, Max Argus, and Thomas Brox. FreiHand: A dataset for markerless capture of hand pose and shape from single RGB images. In ICCV, 2019. \ No newline at end of file diff --git a/adatasetofrelighted3dinteractinghands/images.zip b/adatasetofrelighted3dinteractinghands/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1e7a3560ba49777fa09dea99b04fc12ddbc9fa3d --- /dev/null +++ b/adatasetofrelighted3dinteractinghands/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cda6d87109df3961e6f9e3cae313020cfe1a0d084d23181fe1797d3aa34001b9 +size 752526 diff --git a/adatasetofrelighted3dinteractinghands/layout.json b/adatasetofrelighted3dinteractinghands/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..71f2bbcb9476b894a20cce4d331361a2219a9f06 --- /dev/null +++ b/adatasetofrelighted3dinteractinghands/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d28cbf11d7d72afd1b3efe56697bd07b89d1c29b92837139d3850b95e5913052 +size 341825 diff --git 
a/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/60f491c9-c460-45f6-8d5b-c31aa90e2e91_content_list.json b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/60f491c9-c460-45f6-8d5b-c31aa90e2e91_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d9520f4be4592af73097d2d0216d252e7ba95998 --- /dev/null +++ b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/60f491c9-c460-45f6-8d5b-c31aa90e2e91_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97c6928e0730fbff9efad1822ad57abdad2bd23e33d139b98b8945fec8fb5a28 +size 148840 diff --git a/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/60f491c9-c460-45f6-8d5b-c31aa90e2e91_model.json b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/60f491c9-c460-45f6-8d5b-c31aa90e2e91_model.json new file mode 100644 index 0000000000000000000000000000000000000000..470c709c446a5c5c16c0966ea6e1539965ba09db --- /dev/null +++ b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/60f491c9-c460-45f6-8d5b-c31aa90e2e91_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ed5974e951110a5c8ede994bf7b34f57a50501a61bab714fe343adac46a361d +size 172724 diff --git a/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/60f491c9-c460-45f6-8d5b-c31aa90e2e91_origin.pdf b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/60f491c9-c460-45f6-8d5b-c31aa90e2e91_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7c0fb705e456bcee36cbdbeab6a073b227999c7d --- /dev/null +++ b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/60f491c9-c460-45f6-8d5b-c31aa90e2e91_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b8ce2c952f0b8de1cfe5d4ff7427306f440c6db4f8a8ea49e4a497274275309 +size 
787120 diff --git a/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/full.md b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7d4b330e55cdf3bb9f3fca23f83f51af76abb31a --- /dev/null +++ b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/full.md @@ -0,0 +1,657 @@ +# A Deep Instance Generative Framework for MILP Solvers Under Limited Data Availability + +Zijie Geng $^{1}$ , Xijun Li $^{1,2}$ , Jie Wang $^{1,3*}$ , Xiao Li $^{1}$ , Yongdong Zhang $^{1}$ , Feng Wu $^{1}$ + +1University of Science and Technology of China 2Noah's Ark Lab, Huawei + +$^{3}$ Institute of Artificial Intelligence, Hefei Comprehensive National Science Center + +{ustcgzj,lixijun,xiao_li}@mail.ustc.edu.cn + +{jiewangx,zhyd73,fengwu}@ustc.edu.cn + +# Abstract + +In the past few years, there has been an explosive surge in the use of machine learning (ML) techniques to address combinatorial optimization (CO) problems, especially mixed-integer linear programs (MILPs). Despite the achievements, the limited availability of real-world instances often leads to sub-optimal decisions and biased solver assessments, which motivates a suite of synthetic MILP instance generation techniques. However, existing methods either rely heavily on expert-designed formulations or struggle to capture the rich features of real-world instances. To tackle this problem, we propose G2MILP, the first deep generative framework for MILP instances. Specifically, G2MILP represents MILP instances as bipartite graphs, and applies a masked variational autoencoder to iteratively corrupt and replace parts of the original graphs to generate new ones. 
The appealing feature of G2MILP is that it can learn to generate novel and realistic MILP instances without prior expert-designed formulations, while preserving the structures and computational hardness of real-world datasets, simultaneously. Thus the generated instances can facilitate downstream tasks for enhancing MILP solvers under limited data availability. We design a suite of benchmarks to evaluate the quality of the generated MILP instances. Experiments demonstrate that our method can produce instances that closely resemble real-world datasets in terms of both structures and computational hardness. The deliverables are released at https://miralab-ustc.github.io/L20-G2MILP. + +# 1 Introduction + +Mixed-integer linear programming (MILP)—a powerful and versatile modeling technique for many real-world problems—lies at the core of combinatorial optimization (CO) research and is widely adopted in various industrial optimization scenarios, such as scheduling [1], planning [2], and portfolio [3]. While MILPs are $\mathcal{NP}$ -hard problems [4], machine learning (ML) techniques have recently emerged as a powerful approach for either solving them directly or assisting the solving process [5, 6]. Notable successes include [7] for node selection, [8] for branching decision, and [9] for cut selection, etc. + +Despite the achievements, the limited availability of real-world instances, due to labor-intensive data collection and proprietary issues, remains a critical challenge to the research community [5, 10, 11]. Developing practical MILP solvers usually requires as many instances as possible to identify issues through white-box testing [12]. Moreover, machine learning methods for improving MILP solvers often suffer from sub-optimal decisions and biased assessments under limited data availability, thus + +compromising their generalization to unseen problems [13]. 
These challenges motivate a suite of synthetic MILP instance generation techniques, which fall into two categories. Some methods rely heavily on expert-designed formulations for specific problems, such as Traveling Salesman Problems (TSPs) [14] or Set Covering problems [15]. However, these methods cannot cover real-world applications in which domain-specific expertise, or access to the underlying combinatorial structures, is limited due to proprietary issues. Other methods construct general MILP instances by sampling from an encoding space that controls a few specific statistics [16]. However, these methods often struggle to capture the rich features and the underlying combinatorial structures, resulting in an unsatisfactory alignment with real-world instances.

Developing a deep learning (DL)-based MILP instance generator is a promising approach to address this challenge. Such a generator can actively learn from real-world instances and generate new ones without expert-designed formulations. The generated instances can simulate realistic scenarios, cover more cases, significantly enrich the datasets, and thereby enhance the development of MILP solvers at a relatively low cost. Moreover, this approach has promising technical prospects for understanding the problem space, searching for challenging cases, and learning representations, which we will discuss further in Section 5. While similar techniques have been widely studied for Boolean satisfiability (SAT) problems [17], the development of DL-based MILP instance generators remains unexplored due to greater technical difficulty: the task involves not only preserving intrinsic combinatorial structures but also making high-precision numerical predictions. This paper aims to lay the foundation for the development of such generators and further empower MILP solver development under limited data availability.

In this paper, we propose G2MILP, which is the first deep generative framework for MILP instances.
We represent MILP instances as weighted bipartite graphs, where variables and constraints are vertices, and non-zero coefficients are edges. With this representation, we can use graph neural networks (GNNs) to effectively capture essential features of MILP instances [8, 18]. Using this representation, we recast the original task as a graph generation problem. However, generating such complex graphs from scratch can be computationally expensive and may destroy the intrinsic combinatorial structures of the problems [19]. To address this issue, we propose a masked variational autoencoder (VAE) paradigm inspired by masked language models (MLM) [20, 21] and VAE theories [22-24]. The proposed paradigm iteratively corrupts and replaces parts of the original graphs using sampled latent vectors. This approach allows for controlling the degree to which we change the original instances, thus balancing the novelty and the preservation of structures and hardness of the generated instances. To implement the complicated generation steps, we design a decoder consisting of four modules that work cooperatively to determine multiple components of new instances, involving both structure and numerical prediction tasks simultaneously. + +We then design a suite of benchmarks to evaluate the quality of generated MILP instances. First, we measure the structural distributional similarity between the generated samples and the input training instances using multiple structural statistics. Second, we solve the instances using the advanced solver Gurobi [12], and we report the solving time and the numbers of branching nodes of the instances, which directly indicate their computational hardness [19, 25]. Our experiments demonstrate that G2MILP is the very first method capable of generating instances that closely resemble the training sets in terms of both structures and computational hardness. 
Furthermore, we show that G2MILP is able to adjust the trade-off between the novelty and the preservation of structures and hardness of the generated instances. Third, we conduct a downstream task, the optimal value prediction task, to demonstrate the potential of generated instances in enhancing MILP solvers. The results show that using the generated instances to enrich the training sets reduces the prediction error by over $20\%$ on several datasets. The deliverables are released at https://miralab-ustc.github.io/L20-G2MILP.

# 2 Related Work

Machine Learning for MILP Machine learning (ML) techniques, due to their capability of capturing rich features from data, have shown impressive potential in addressing combinatorial optimization (CO) problems [26-28], especially MILP problems [5]. Some works apply ML models to directly predict the solutions for MILPs [29-31]. Others attempt to incorporate ML models into heuristic components in modern solvers [7, 9, 32, 33]. Gasse et al. [8] proposed to represent MILP instances as bipartite graphs, and use graph neural networks (GNNs) to capture features for branching decisions. Our
However, the aforementioned methods either rely heavily on expert-designed formulations or struggle to capture the rich features of real-world instances. G2MILP tackles these two issues simultaneously by employing deep learning techniques to actively generate instances that resemble real-world problems, and it provides a versatile solution to the data limitation challenge. In [35], we further extend G2MILP to learn to generate challenging MILP instances.

Deep Graph Generation A plethora of literature has investigated deep learning models for graph generation [36], including auto-regressive methods [37], variational autoencoders (VAEs) [23], and generative diffusion models [38]. These models have been widely used in various fields [39] such as molecule design [21, 40, 41] and social network generation [42, 43]. G2SAT [17], the first deep learning method for SAT instance generation, has received much research attention [19, 44]. Nevertheless, it is non-trivial to adapt G2SAT to MILP instance generation (see Appendix C.1), as G2SAT does not consider the high-precision numerical prediction, which is one of the fundamental challenges in MILP instance generation. In this paper, we propose G2MILP—the first deep generative framework designed for general MILP instances—and we hope to open up a new research direction for the research community.

# 3 Methodology

In this section, we present our G2MILP framework. First, in Section 3.1, we describe the approach to representing MILP instances as bipartite graphs. Then, in Section 3.2, we derive the masked variational autoencoder (VAE) generative paradigm. In Section 3.3, we provide details on the implementation of the model framework. Finally, in Section 3.4, we explain the training and inference processes. The model overview is in Figure 1. More implementation details can be found in Appendix A. The code is released at https://github.com/MIRALab-USTC/L20-G2MILP.
# 3.1 Data Representation

A mixed-integer linear programming (MILP) problem takes the form of:

$$
\min_{\boldsymbol{x} \in \mathbb{R}^{n}} \boldsymbol{c}^{\top}\boldsymbol{x}, \quad \text{s.t. } \boldsymbol{A}\boldsymbol{x} \leq \boldsymbol{b}, \ \boldsymbol{l} \leq \boldsymbol{x} \leq \boldsymbol{u}, \ x_{j} \in \mathbb{Z}, \ \forall j \in \mathcal{I}, \tag{1}
$$

where $\pmb{c} \in \mathbb{R}^n$, $\pmb{A} \in \mathbb{R}^{m \times n}$, $\pmb{b} \in \mathbb{R}^m$, $\pmb{l} \in (\mathbb{R} \cup \{-\infty\})^n$, $\pmb{u} \in (\mathbb{R} \cup \{+\infty\})^n$, and the index set $\mathcal{I} \subset \{1, 2, \dots, n\}$ includes those indices $j$ where $x_j$ is constrained to be an integer.

To represent each MILP instance, we construct a weighted bipartite graph $\mathcal{G} = (\mathcal{V} \cup \mathcal{W}, \mathcal{E})$ as follows [18, 29].

- The constraint vertex set $\mathcal{V} = \{v_{1},\dots ,v_{m}\}$, where each $v_{i}$ corresponds to the $i^{\mathrm{th}}$ constraint in Equation 1. The vertex feature $\pmb{v}_{i}$ of $v_{i}$ is described by the bias term, i.e., $\pmb{v}_i = (b_i)$.
- The variable vertex set $\mathcal{W} = \{w_1, \dots, w_n\}$, where each $w_j$ corresponds to the $j^{\text{th}}$ variable in Equation 1. The vertex feature $\pmb{w}_j$ of $w_j$ is a 9-dimensional vector that contains information of the objective coefficient $c_j$, the variable type, and the bounds $l_j, u_j$.
- The edge set $\mathcal{E} = \{e_{ij}\}$, where an edge $e_{ij}$ connects a constraint vertex $v_i \in \mathcal{V}$ and a variable vertex $w_j \in \mathcal{W}$. The edge feature $\pmb{e}_{ij}$ is described by the coefficient, i.e., $\pmb{e}_{ij} = (a_{ij})$, and there is no edge between $v_i$ and $w_j$ if $a_{ij} = 0$.
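As a concrete illustration, the encoding above can be sketched in a few lines of Python. This is a minimal sketch: the helper name and the simplified variable-feature layout are ours, whereas the paper builds the full 9-dimensional variable features with Ecole.

```python
import numpy as np

def milp_to_bipartite(A, b, c, l, u, int_idx):
    """Encode min c^T x s.t. Ax <= b, l <= x <= u (x_j integer for j in int_idx)
    as a bipartite graph (V, W, E) in the spirit of Equation 1.

    V: constraint-vertex features (the bias term b_i).
    W: variable-vertex features (objective coefficient, integrality flag,
       bounds); a simplified stand-in for the paper's 9-dimensional features.
    E: dict mapping (i, j) -> a_ij for the non-zero coefficients only.
    """
    m, n = A.shape
    V = [(b[i],) for i in range(m)]
    W = [(c[j], float(j in int_idx), l[j], u[j]) for j in range(n)]
    E = {(i, j): A[i, j] for i in range(m) for j in range(n) if A[i, j] != 0}
    return V, W, E

# Toy MILP: 2 constraints, 3 variables, x_0 integer.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
V, W, E = milp_to_bipartite(A, b=[4.0, 5.0], c=[1.0, -1.0, 2.0],
                            l=[0.0, 0.0, 0.0], u=[1.0, 1.0, 1.0], int_idx={0})
assert set(E) == {(0, 0), (0, 1), (1, 1), (1, 2)}  # no edge where a_ij = 0
```

Note that only non-zero coefficients produce edges, so sparse problems yield sparse graphs.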
+ +As described above, each MILP instance is represented as a weighted bipartite graph, equipped with a tuple of feature matrices $(V, W, E)$ , where $V, W, E$ denote stacks of vertex features $\mathbf{v}_i$ , $\mathbf{w}_j$ and + +![](images/92e0304060f5a146d2670393bdab6d5b5ad65e8399aafcba79b88a602fbc72f8.jpg) +Figure 1: Overview of G2MILP. (a) Masking Process $\tilde{p}(\tilde{\mathcal{G}}|\mathcal{G})$ . Given a MILP instance, which is represented as a bipartite graph $\mathcal{G}$ , we randomly label a constraint vertex $\tilde{v}$ as [mask] to obtain the masked graph $\tilde{\mathcal{G}}$ . (b) Encoder $q_{\phi}(\mathbf{Z}|\mathcal{G})$ . The encoder is $\mathrm{GNN}_{\phi}$ followed by two networks, $\mu_{\phi}$ and $\Sigma_{\phi}$ , for resampling. During training, we use the encoder to obtain the latent vectors $\boldsymbol{z}_{v_i}$ and $\boldsymbol{z}_{w_j}$ for all vertices. (c) Decoder $p_{\theta}(\mathcal{G}|\tilde{\mathcal{G}},\mathbf{Z})$ . We use $\mathrm{GNN}_{\phi}$ to obtain the node features $\boldsymbol{h}_{\tilde{v}}$ and $\boldsymbol{h}_{w_j}$ . Then four modules work cooperatively to reconstruct the original graph $\mathcal{G}$ based on the node features and the latent vectors. They sequentially determine ① the bias terms, ② the degrees, ③ the logits, and ④ the weights. During inference, the model is decoder-only, and we draw the latent vectors from a standard Gaussian distribution to introduce randomness. We repeat the above mask-and-generate process several times so as to produce new instances. + +edge features $e_{ij}$ , respectively. Such a representation contains all information of the original MILP instance [18]. We use the off-the-shelf observation function provided by Ecole [45] to build the bipartite graphs from MILP instances. We then apply a graph neural network (GNN) to obtain the node representations $h_{v_i}^{\mathcal{G}}$ and $h_{w_j}^{\mathcal{G}}$ , also denoted as $h_{v_i}$ and $h_{w_j}$ for simplicity. 
More details on the data representation can be found in Appendix A.1. + +# 3.2 Masked VAE Paradigm + +We then introduce our proposed masked VAE paradigm. For the ease of understanding, we provide an intuitive explanation here, and delay the mathematical derivation to Appendix A.2. + +Given a graph $\mathcal{G}$ drawn from a dataset $\mathcal{D}$ , we corrupt it through a masking process, denoted by $\tilde{\mathcal{G}} \sim \tilde{p}(\tilde{\mathcal{G}}|\mathcal{G})$ . We aim to build a parameterized generator $p_{\theta}(\hat{\mathcal{G}}|\tilde{\mathcal{G}})$ that can generate new instances $\hat{\mathcal{G}}$ from the corrupted graph $\tilde{\mathcal{G}}$ . We train the generator by maximizing the log-likelihood $\log p_{\theta}(\mathcal{G}|\tilde{\mathcal{G}}) = \log p_{\theta}(\hat{\mathcal{G}} = \mathcal{G}|\tilde{\mathcal{G}})$ of reconstructing $\mathcal{G}$ given $\tilde{\mathcal{G}}$ . Therefore, the optimization objective is: + +$$ +\underset {\boldsymbol {\theta}} {\arg \max } \mathbb {E} _ {\mathcal {G} \sim \mathcal {D}} \mathbb {E} _ {\tilde {\mathcal {G}} \sim \tilde {p} (\tilde {\mathcal {G}} | \mathcal {G})} \log p _ {\boldsymbol {\theta}} (\mathcal {G} | \tilde {\mathcal {G}}). \tag {2} +$$ + +To model the randomness in the generation process and produce diverse instances, we follow the standard VAE framework [22, 23] to introduce a latent variable $\mathbf{Z} = (z_{v_1},\dots ,z_{v_m},z_{w_1},\dots ,z_{w_n})$ which contains the latent vectors for all vertices. During training, the latent vectors are sampled from a posterior distribution given by a parameterized encoder $q_{\phi}$ , while during inference, they are independently sampled from a prior distribution such as a standard Gaussian distribution. The decoder $p_\theta$ in the masked VAE framework generates new instances from the masked graph $\tilde{\mathcal{G}}$ together with the sampled latent variable $\mathbf{Z}$ . 
The evidence lower bound (ELBO), also known as the variational lower bound, is a lower bound estimator of the log-likelihood, and is what we actually optimize during training, because it is more tractable. We can derive the ELBO as:

$$
\log p_{\boldsymbol{\theta}}(\mathcal{G}|\tilde{\mathcal{G}}) \geq \mathbb{E}_{\mathbf{Z} \sim q_{\phi}(\mathbf{Z}|\mathcal{G})}\left[\log p_{\boldsymbol{\theta}}(\mathcal{G}|\tilde{\mathcal{G}}, \mathbf{Z})\right] - D_{\mathrm{KL}}\left[q_{\phi}(\mathbf{Z}|\mathcal{G}) \,\|\, p_{\boldsymbol{\theta}}(\mathbf{Z})\right], \tag{3}
$$

where $p_{\theta}(\mathbf{Z})$ is the prior distribution of $\mathbf{Z}$ and is usually taken as the standard Gaussian $\mathcal{N}(\mathbf{0},\mathbf{I})$, and $D_{\mathrm{KL}}[\cdot \| \cdot]$ denotes the KL divergence. Therefore, we formulate the loss function as:

$$
\mathcal{L} = \mathbb{E}_{\mathcal{G} \sim \mathcal{D}} \mathbb{E}_{\tilde{\mathcal{G}} \sim \tilde{p}(\tilde{\mathcal{G}}|\mathcal{G})}\left[\underbrace{\mathbb{E}_{\mathbf{Z} \sim q_{\phi}(\mathbf{Z}|\mathcal{G})}\left[-\log p_{\boldsymbol{\theta}}(\mathcal{G}|\tilde{\mathcal{G}}, \mathbf{Z})\right]}_{\mathcal{L}_{\text{rec}}} + \beta \cdot \underbrace{D_{\mathrm{KL}}\left[q_{\phi}(\mathbf{Z}|\mathcal{G}) \,\|\, \mathcal{N}(\mathbf{0}, \boldsymbol{I})\right]}_{\mathcal{L}_{\text{prior}}}\right]. \tag{4}
$$

In the formula: (1) the first term $\mathcal{L}_{\mathrm{rec}}$ is the reconstruction loss, which urges the decoder to rebuild the input data according to the masked data and the latent variables. (2) The second term $\mathcal{L}_{\mathrm{prior}}$ is used to regularize the posterior distribution in the latent space to approach a standard Gaussian distribution, so that we can sample $\mathbf{Z}$ from that distribution at inference time.
(3) $\beta$ is a hyperparameter to control the weight of regularization, which is critical in training a VAE model [46].

# 3.3 Model Implementation

To implement Equation 4, we need to instantiate the masking process $\tilde{p}(\tilde{\mathcal{G}}|\mathcal{G})$, the encoder $q_{\phi}(\mathbf{Z}|\mathcal{G})$, and the decoder $p_{\theta}(\mathcal{G}|\tilde{\mathcal{G}},\mathbf{Z})$, respectively.

Masking Process For simplicity, we uniformly sample a constraint vertex $\tilde{v} \sim \mathcal{U}(\mathcal{V})$ and mask it and its adjacent edges, while keeping the variable vertices unchanged. Specifically, we label the vertex $\tilde{v}$ with a special [mask] token, and add virtual edges that link $\tilde{v}$ with each variable vertex. The vertex $\tilde{v}$ and the virtual edges are assigned special embeddings to distinguish them from the others. We further discuss the masking scheme in Appendix C.2.

Encoder The encoder $q_{\phi}(\mathbf{Z}|\mathcal{G})$ is implemented as:

$$
q_{\phi}(\mathbf{Z}|\mathcal{G}) = \prod_{u \in \mathcal{V} \cup \mathcal{W}} q_{\phi}\left(\boldsymbol{z}_{u}|\mathcal{G}\right), \quad q_{\phi}\left(\boldsymbol{z}_{u}|\mathcal{G}\right) = \mathcal{N}\left(\boldsymbol{\mu}_{\phi}\left(\boldsymbol{h}_{u}^{\mathcal{G}}\right), \exp \boldsymbol{\Sigma}_{\phi}\left(\boldsymbol{h}_{u}^{\mathcal{G}}\right)\right), \tag{5}
$$

where $\boldsymbol{h}_u^{\mathcal{G}}$ is the node representation of $u$ obtained by $\mathrm{GNN}_{\phi}$, $\mathcal{N}$ denotes the Gaussian distribution, and $\boldsymbol{\mu}_{\phi}$ and $\boldsymbol{\Sigma}_{\phi}$ output the mean and the log variance, respectively.

Decoder The decoder $p_{\theta}$ aims to reconstruct $\mathcal{G}$ during training. We apply a $\mathrm{GNN}_{\theta}$ to obtain the node representations $\boldsymbol{h}_u^{\tilde{\mathcal{G}}}$, denoted as $\boldsymbol{h}_u$ for simplicity.
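The Gaussian encoder of Equation 5 and the $\mathcal{L}_{\text{prior}}$ term of Equation 4 admit a short numpy sketch. Here the arrays `mu` and `logvar` stand in for the outputs of $\mu_{\phi}$ and $\Sigma_{\phi}$ on a node embedding; this is an illustration under that simplification, not the paper's implementation.

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    # Sample z ~ N(mu, exp(logvar)) via z = mu + sigma * eps (Equation 5),
    # keeping the sampling differentiable with respect to mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # Closed form of D_KL[ N(mu, exp(logvar)) || N(0, I) ],
    # i.e. the L_prior regularizer appearing in Equation 4.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

rng = np.random.default_rng(0)
mu, logvar = np.zeros(4), np.zeros(4)          # posterior equals the prior
assert abs(kl_to_standard_normal(mu, logvar)) < 1e-12   # KL vanishes
assert reparameterize(mu, logvar, rng).shape == (4,)
```

The KL term is zero exactly when the posterior matches the prior, which is what the $\beta$-weighted regularizer pushes toward during training.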
To rebuild the masked constraint vertex $\tilde{v}$ , the decoder sequentially determines: ① the bias $b_{\tilde{v}}$ (i.e., the right-hand side of the constraint), ② the degree $d_{\tilde{v}}$ of $\tilde{v}$ (i.e., the number of variables involved in the constraint), ③ the logits $\delta_{\tilde{v},u}$ for all variable vertices $u$ to indicate whether they are connected with $\tilde{v}$ (i.e., whether the variables are in the constraint), and ④ the weights $e_{\tilde{v},u}$ of the edges (i.e., the coefficients of the variables in the constraint). The decoder is then formulated as: + +$$ +p _ {\boldsymbol {\theta}} (\mathcal {G} | \tilde {\mathcal {G}}, \mathbf {Z}) = p _ {\boldsymbol {\theta}} \left(b _ {\tilde {v}} | \tilde {\mathcal {G}}, \mathbf {Z}\right) \cdot p _ {\boldsymbol {\theta}} \left(d _ {\tilde {v}} | \tilde {\mathcal {G}}, \mathbf {Z}\right) \cdot \prod_ {u \in \mathcal {W}} p _ {\boldsymbol {\theta}} \left(\delta_ {\tilde {v}, u} | \tilde {\mathcal {G}}, \mathbf {Z}, d _ {\tilde {v}}\right) \cdot \prod_ {u \in \mathcal {W}: \delta_ {\tilde {v}, u} = 1} p _ {\boldsymbol {\theta}} \left(e _ {\tilde {v}, u} | \tilde {\mathcal {G}}, \mathbf {Z}\right). \tag {6} +$$ + +Therefore, we implement the decoder as four cooperative modules: ① Bias Predictor, ② Degree Predictor, ③ Logits Predictor, and ④ Weights Predictor. + +① Bias Predictor For effective prediction, we incorporate the prior of simple statistics of the dataset—the minimum $\underline{b}$ and the maximum $\overline{b}$ of the bias terms that occur in the dataset—into the predictor. Specifically, we normalize the bias $b_{\tilde{v}}$ to $[0,1]$ via $b_{\tilde{v}}^{*} = (b_{\tilde{v}} - \underline{b}) / (\overline{b} -\underline{b})$ . 
To predict $b_{\tilde{v}}^{*}$, we use one MLP that takes the node representation $\pmb{h}_{\tilde{v}}$ and the latent vector $\pmb{z}_{\tilde{v}}$ of $\tilde{v}$ as inputs:

$$
\hat{b}_{\tilde{v}}^{*} = \sigma\left(\mathrm{MLP}_{\theta}^{\text{bias}}\left([\boldsymbol{h}_{\tilde{v}}, \boldsymbol{z}_{\tilde{v}}]\right)\right), \tag{7}
$$

where $\sigma(\cdot)$ is the sigmoid function used to restrict the outputs. We use the mean squared error (MSE) loss to train the predictor. At inference time, we apply the inverse transformation to obtain the predicted bias values: $\hat{b}_{\tilde{v}} = \underline{b} + (\overline{b} - \underline{b}) \cdot \hat{b}_{\tilde{v}}^{*}$.

② Degree Predictor We find that the constraint degrees are crucial to the graph structures and significantly affect the combinatorial properties. Therefore, we use the Degree Predictor to determine the coarse-grained degree structure, and then use the Logits Predictor to determine the fine-grained connection details. Similarly to the Bias Predictor, we normalize the degree $d_{\tilde{v}}$ to $d_{\tilde{v}}^{*} = (d_{\tilde{v}} - \underline{d}) / (\overline{d} - \underline{d})$, where $\underline{d}$ and $\overline{d}$ are the minimum and maximum degrees in the dataset, respectively. We use one MLP to predict $d_{\tilde{v}}^{*}$:

$$
\hat{d}_{\tilde{v}}^{*} = \sigma\left(\mathrm{MLP}_{\theta}^{\deg}\left([\boldsymbol{h}_{\tilde{v}}, \boldsymbol{z}_{\tilde{v}}]\right)\right). \tag{8}
$$

We use the MSE loss for training, and round the predicted degree to the nearest integer $\hat{d}_{\tilde{v}}$ for inference.

③ Logits Predictor To predict the logits $\delta_{\tilde{v},u}$ indicating whether a variable vertex $u \in \mathcal{W}$ is connected with $\tilde{v}$, we use one MLP that takes the representation $\boldsymbol{h}_u$ and the latent vector $\boldsymbol{z}_u$ of $u$ as inputs:

$$
\hat{\delta}_{\tilde{v},u}^{\prime} = \sigma\left(\mathrm{MLP}_{\theta}^{\text{logits}}\left([\boldsymbol{h}_{u}, \boldsymbol{z}_{u}]\right)\right). \tag{9}
$$

We use the binary cross-entropy (BCE) loss to train this logistic regression module. As positive samples (i.e., variables connected with a constraint) are often scarce, we use one negative sample for each positive sample during training. The loss function is:

$$
\mathcal{L}_{\text{logits}} = -\mathbb{E}_{(\tilde{v},u) \sim p_{\text{pos}}}\left[\log\left(\hat{\delta}_{\tilde{v},u}^{\prime}\right)\right] - \mathbb{E}_{(\tilde{v},u) \sim p_{\text{neg}}}\left[\log\left(1 - \hat{\delta}_{\tilde{v},u}^{\prime}\right)\right], \tag{10}
$$

where $p_{\text{pos}}$ and $p_{\text{neg}}$ denote the distributions over positive and negative samples, respectively. At inference time, we connect the $\hat{d}_{\tilde{v}}$ variable vertices with the top logits to $\tilde{v}$, i.e.,

$$
\hat{\delta}_{\tilde{v},u} = \begin{cases} 1, & u \in \arg\operatorname{TopK}\left(\left\{\hat{\delta}_{\tilde{v},u}^{\prime} \mid u \in \mathcal{W}\right\}, \hat{d}_{\tilde{v}}\right), \\ 0, & \text{otherwise}. \end{cases} \tag{11}
$$

④ Weights Predictor Finally, we use one MLP to predict the normalized weights $e_{\tilde{v},u}^{*}$ for nodes $u$ that are connected with $\tilde{v}$:

$$
\hat{e}_{\tilde{v},u}^{*} = \sigma\left(\mathrm{MLP}_{\theta}^{\text{weights}}\left([\boldsymbol{h}_{u}, \boldsymbol{z}_{u}]\right)\right). \tag{12}
$$

The training and inference procedures are similar to those of the Bias Predictor.

# 3.4 Training and Inference

During training, we use the original graph $\mathcal{G}$ to provide supervision signals for the decoder, guiding it to reconstruct $\mathcal{G}$ from the masked $\tilde{\mathcal{G}}$ and the encoded $\mathbf{Z}$. As described above, the decoder involves four modules, each optimized by a prediction task.
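At inference time, the outputs of these four heads are post-processed as in Equations 7, 8, 11, and 12. The following is a minimal numpy sketch of that post-processing: the MLP heads themselves are omitted, `raw` stands in for their pre-sigmoid outputs, and all names and numbers are illustrative, not the paper's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_constraint(raw, b_min, b_max, d_min, d_max, e_min, e_max):
    """Inference-time post-processing for one masked constraint.

    The min/max arguments are the dataset statistics used for normalization
    (e.g. b_min, b_max are the smallest and largest bias terms in the data).
    """
    # Eq. (7): predict the normalized bias, then invert the normalization.
    b_hat = b_min + (b_max - b_min) * sigmoid(raw["bias"])
    # Eq. (8): predict the normalized degree, round to the nearest integer.
    d_hat = int(round(d_min + (d_max - d_min) * sigmoid(raw["degree"])))
    # Eq. (9): one logit per variable vertex.
    logits = sigmoid(raw["logits"])
    # Eq. (11): connect the d_hat variables with the top logits.
    keep = np.sort(np.argsort(-logits)[:d_hat])
    # Eq. (12): predict normalized edge weights for the kept variables.
    weights = e_min + (e_max - e_min) * sigmoid(raw["weights"][keep])
    return b_hat, keep.tolist(), weights

raw = {"bias": 0.0, "degree": 10.0,
       "logits": np.array([2.0, -1.0, 3.0, 0.5]),
       "weights": np.array([0.1, 0.2, 0.3, 0.4])}
b_hat, keep, w = decode_constraint(raw, b_min=0.0, b_max=10.0,
                                   d_min=1, d_max=3, e_min=-1.0, e_max=1.0)
assert b_hat == 5.0 and keep == [0, 2, 3] and len(w) == 3
```

Predicting the degree first and then taking the top-$\hat{d}_{\tilde{v}}$ logits decouples the coarse-grained sparsity of the constraint from the fine-grained choice of which variables it touches.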
The first term in Equation 4, i.e., the reconstruction loss, is written as

$$
\mathcal{L}_{\text{rec}} = \mathbb{E}_{\mathcal{G} \sim \mathcal{D},\, \tilde{\mathcal{G}} \sim \tilde{p}(\tilde{\mathcal{G}}|\mathcal{G})}\left[\sum_{i=1}^{4} \alpha_{i} \cdot \mathcal{L}_{i}(\boldsymbol{\theta}, \phi | \mathcal{G}, \tilde{\mathcal{G}})\right], \tag{13}
$$

where $\mathcal{L}_i(\pmb{\theta},\phi|\mathcal{G},\tilde{\mathcal{G}})$ $(i = 1,2,3,4)$ are loss functions for the four prediction tasks, respectively, and $\alpha_{i}$ are hyperparameters.

During inference, we discard the encoder and sample $\mathbf{Z}$ from a standard Gaussian distribution, which introduces randomness to enable the model to generate novel instances. We iteratively mask one constraint vertex in the bipartite graph and replace it with a generated one. We define a hyperparameter $\eta$ to adjust the ratio of iterations to the number of constraints, i.e., $N_{\mathrm{iters}} = \eta \cdot |\mathcal{V}|$. Naturally, a larger value of $\eta$ results in instances that are more novel, while a smaller value of $\eta$ yields instances that exhibit better similarity to the training set. For further details of training and inference procedures, please refer to Appendix A.3.

# 4 Experiments

# 4.1 Setup

We conduct extensive experiments to demonstrate the effectiveness of our model. More experimental details can be found in Appendix B. Additional results are in Appendix C.

Benchmarking To evaluate the quality of the generated MILP instances, we design three benchmarks so as to answer the following research questions. (1) How well can the generated instances preserve the graph structures of the training set? (2) How closely do the generated instances resemble the computational hardness of real-world instances? (3) How effectively do they facilitate downstream tasks to improve solver performance?

I.
Structural Distributional Similarity We consider 11 classical statistics to represent features of the instances [17, 47], including coefficient density, node degrees, graph clustering, graph modularity, etc. In line with a widely used graph generation benchmark [48], we compute the Jensen-Shannon (JS) divergence [49] for each statistic to measure the distributional similarity between the generated instances and the training set. We then standardize the metrics into similarity scores that range from 0 to 1. The computing details can be found in Appendix B.3.

II. Computational Hardness The computational hardness is another critical metric to assess the quality of the generated instances. We draw an analogy from the SAT generation community, where, despite much progress, it is widely acknowledged that generated SAT instances differ significantly from real-world ones in computational hardness [25], and this issue remains inadequately addressed. In our work, we make efforts to mitigate this problem, even in the context of MILP generation, a more challenging task. To this end, we leverage the state-of-the-art solver Gurobi [12] to solve the instances, and we report the solving time and the numbers of branching nodes during the solving process, which can directly reflect the computational hardness of instances [19].

III. Downstream Task We consider two downstream tasks to examine the potential benefits of the generated instances in practical applications. We employ G2MILP to generate new instances and augment the original datasets, and then evaluate whether the enriched datasets can improve the performance of the downstream tasks. The considered tasks include predicting the optimal values of the MILP problem, as discussed in Chen et al. [18], and applying a predict-and-search framework for solving MILPs, as proposed by Han et al. [31].

Datasets We consider four different datasets of various sizes. (1) Large datasets.
We evaluate the model's capability of learning data distributions using two well-known synthetic MILP datasets: Maximum Independent Set (MIS) [50] and Set Covering [15]. We follow previous works [8, 9] to artificially generate 1000 instances for each of them. (2) Medium dataset. Mixed-integer Knapsack (MIK) is a widely used dataset [34], which consists of 80 training instances and 10 test instances. We use this dataset to evaluate the model's performance both on the distribution learning benchmarks and on the downstream task. (3) Small dataset. We construct a small subset of MIPLIB 2017 [10] by collecting a group of problems called Nurse Scheduling problems. This dataset comes from real-world scenarios and consists of only 4 instances: 2 for training and 2 for testing. Since the statistics are meaningless for such an extremely small dataset, we use it only to demonstrate the effectiveness of generated instances in facilitating downstream tasks.

Baselines G2MILP is the first deep learning generative framework for MILP instances, so there are no learning-based models to compare against. We therefore compare G2MILP with a heuristic MILP instance generator, namely Bowly [16]. Bowly can create feasible and bounded MILP instances while controlling some specific statistical features such as coefficient density and coefficient mean. We set all the controllable parameters to match the corresponding statistics of the training set, allowing Bowly to imitate the training set to some extent. We also consider a useful baseline, namely Random, to demonstrate the effectiveness of deep neural networks in G2MILP. Random employs the same generation procedure as G2MILP, but replaces all neural networks in the decoder with random generators. We set the masking ratio $\eta$ for Random and G2MILP to 0.01, 0.05, and 0.1 to show how this hyperparameter helps balance novelty and similarity.

# 4.2 Quantitative Results

I.
Structural Distributional Similarity We present the structural distributional similarity scores between each pair of datasets in Table 1. The results indicate that our designed metric is reasonable in the sense that datasets obtain high scores with themselves and low + +Table 1: Structural similarity scores between each pair of datasets. Higher is better. + +
|          | MIS   | SetCover | MIK   |
|----------|-------|----------|-------|
| MIS      | 0.998 | 0.182    | 0.042 |
| SetCover | -     | 1.000    | 0.128 |
| MIK      | -     | -        | 0.997 |
Table 3: Average solving time (s) of instances solved by Gurobi (mean ± std). $\eta$ is the masking ratio. Numbers in the parentheses are relative errors with respect to the training sets (lower is better).
|              |        | MIS                   | SetCover              | MIK                   |
|--------------|--------|-----------------------|-----------------------|-----------------------|
| Training Set |        | 0.349 ± 0.05          | 2.344 ± 0.13          | 0.198 ± 0.04          |
| Bowly        |        | 0.007 ± 0.00 (97.9%)  | 0.048 ± 0.00 (97.9%)  | 0.001 ± 0.00 (99.8%)  |
| η = 0.01     | Random | 0.311 ± 0.05 (10.8%)  | 2.044 ± 0.19 (12.8%)  | 0.008 ± 0.00 (96.1%)  |
|              | G2MILP | 0.354 ± 0.06 (1.5%)   | 2.360 ± 0.18 (0.8%)   | 0.169 ± 0.07 (14.7%)  |
| η = 0.05     | Random | 0.569 ± 0.09 (63.0%)  | 2.010 ± 0.11 (14.3%)  | 0.004 ± 0.00 (97.9%)  |
|              | G2MILP | 0.292 ± 0.07 (16.3%)  | 2.533 ± 0.15 (8.1%)   | 0.129 ± 0.05 (35.1%)  |
| η = 0.1      | Random | 2.367 ± 0.35 (578.2%) | 1.988 ± 0.17 (15.2%)  | 0.005 ± 0.00 (97.6%)  |
|              | G2MILP | 0.214 ± 0.05 (38.7%)  | 2.108 ± 0.21 (10.0%)  | 0.072 ± 0.02 (63.9%)  |
Table 4: Average numbers of branching nodes of instances solved by Gurobi. $\eta$ is the masking ratio. Numbers in the parentheses are relative errors with respect to the training sets (lower is better).
|              |        | MIS              | SetCover       | MIK            |
|--------------|--------|------------------|----------------|----------------|
| Training Set |        | 16.09            | 838.56         | 175.35         |
| Bowly        |        | 0.00 (100.0%)    | 0.00 (100.0%)  | 0.00 (100.0%)  |
| η = 0.01     | Random | 20.60 (28.1%)    | 838.51 (0.0%)  | 0.81 (99.5%)   |
|              | G2MILP | 15.03 (6.6%)     | 876.09 (4.4%)  | 262.25 (14.7%) |
| η = 0.05     | Random | 76.22 (373.7%)   | 765.30 (8.7%)  | 0.00 (100%)    |
|              | G2MILP | 10.58 (34.2%)    | 874.46 (4.3%)  | 235.35 (34.2%) |
| η = 0.1      | Random | 484.47 (2911.2%) | 731.14 (12.8%) | 0.00 (100%)    |
|              | G2MILP | 4.61 (71.3%)     | 876.92 (4.6%)  | 140.06 (20.1%) |
scores with different domains. Table 2 shows the similarity scores between generated instances and the corresponding training sets. We generate 1000 instances for each dataset to compute the similarity scores. The results suggest that G2MILP closely fits the data distribution, while Bowly, which relies on heuristic rules to control the statistical features, falls short of our expectations. Furthermore, we observe that G2MILP outperforms Random, indicating that deep learning contributes to the model's performance. As expected, a higher masking ratio $\eta$ results in generating more novel instances but reduces their similarity to the training sets.

II. Computational Hardness We report the average solving time and numbers of branching nodes in Table 3 and Table 4, respectively. The results indicate that instances generated by Bowly are relatively easy, and the hardness of those generated by Random is inconclusive. In contrast, G2MILP is capable of preserving the computational hardness of the original training sets. Notably, even without imposing rules to guarantee the feasibility and boundedness of generated instances, G2MILP automatically learns from the data and produces feasible and bounded instances.

Table 2: Structural distributional similarity scores between the generated instances and the training datasets. Higher is better. $\eta$ is the masking ratio. We do not report the results of Bowly on MIK because Ecole [45] and SCIP [51] fail to read the generated instances due to large numerical values.
|          |        | MIS   | SetCover | MIK   |
|----------|--------|-------|----------|-------|
| Bowly    |        | 0.184 | 0.197    | -     |
| η = 0.01 | Random | 0.651 | 0.735    | 0.969 |
|          | G2MILP | 0.997 | 0.835    | 0.991 |
| η = 0.05 | Random | 0.580 | 0.613    | 0.840 |
|          | G2MILP | 0.940 | 0.782    | 0.953 |
| η = 0.1  | Random | 0.512 | 0.556    | 0.700 |
|          | G2MILP | 0.895 | 0.782    | 0.918 |
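A minimal sketch of the per-statistic scoring behind Tables 1 and 2, assuming histogram binning over a shared range and a `1 - JS` mapping to a similarity in $[0, 1]$; the paper's exact standardization is detailed in its Appendix B.3, so the binning and mapping here are illustrative assumptions:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (log base 2, so the value lies in [0, 1])."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))  # KL divergence in bits
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def similarity_score(stat_gen, stat_train, bins=20):
    """Histogram one per-instance statistic (e.g. coefficient density) over a
    shared range for both sets, then map the JS divergence to a similarity."""
    lo = min(stat_gen.min(), stat_train.min())
    hi = max(stat_gen.max(), stat_train.max())
    p, _ = np.histogram(stat_gen, bins=bins, range=(lo, hi))
    q, _ = np.histogram(stat_train, bins=bins, range=(lo, hi))
    return 1.0 - js_divergence(p / p.sum(), q / q.sum())
```

With base-2 logarithms the JS divergence is bounded by 1, which is what makes the `1 - JS` mapping land in $[0, 1]$: identical distributions score 1, disjoint ones score 0.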
III. Downstream Task First, we follow Chen et al. [18] to construct a GNN model for predicting the optimal values of MILP problems. We train a predictive GNN model on the training set. After that, we employ 20 generated instances to augment the training data, and then train another predictive model using the enriched dataset. We use the prediction mean squared error (MSE) to assess the resulting models, and we present the MSE relative to the default model trained on the original training sets in Figure 2. For the MIK dataset, instances generated by Bowly introduce numerical issues so that Ecole and SCIP fail to read them. For the Nurse Scheduling dataset, Random fails to generate feasible instances. Notably, G2MILP is the only method that demonstrates performance improvement on both datasets, reducing the MSE by $73.7\%$ and $24.3\%$, respectively. The detailed results are in Table 8 in Appendix B.4.

![](images/8848757bd0fbf8ee67d807c79e9980984221ccc3cb194784c20bf4d74c9d5d2e.jpg)
Figure 2: Results of the optimal value prediction task. Bars indicate the relative MSE to the model trained on the original training sets, and lines represent the relative performance improvement.

![](images/ef16505824fdf361e1ffe816940b51ce25068038308cf64a0f2bab477749d057.jpg)

![](images/ae343430396b1596b054c79f4b464a4e06943010c01b27abe8b3de72ae1ff5b1.jpg)
Figure 3: The t-SNE visualization of MILP instance representations for MIK. Each point represents an instance. Red points are from the training set and blue points are instances generated by G2MILP.

![](images/1940e03567e89352b6b5dc971f07c0986cc3c0a3e8846fe115c77cacb47f02b4.jpg)

![](images/0cae020464bb66597bcfa4593a39354c7d20eaa344eb807ab00e1e5b66a8baac.jpg)

![](images/c65c02d7bd7de537d2b9ed016fef97bb473d4b4b497a8e7ea617dedff7308c22.jpg)

Then, we conduct experiments on the neural solver, i.e., the predict-and-search framework proposed by Han et al.
[31], which employs a model to predict a solution and then uses solvers to search for the optimal solutions in a trust region. The results are in Table 9 in Appendix B.4.

# 4.3 Analysis

Masking Process We conduct extensive comparative experiments on different implementations of the masking process. First, we implement different versions of G2MILP, which enable us to mask and modify either constraints, variables, or both. Second, we investigate different orders of masking constraints, including uniform sampling and sampling according to the vertex indices. Third, we analyze the effect of the masking ratio $\eta$ on similarity scores and downstream task performance improvements. The experimental results are in Appendix C.2.

Size of Dataset We conduct experiments on different sizes of the original datasets and different ratios of generated instances to original ones on MIS. Results are in Table 15 in Appendix C.4. The results show that G2MILP yields performance improvements across datasets of varying sizes.

Visualization We visualize the instance representations for MIK in Figure 3. Specifically, we use the G2MILP encoder to obtain the instance representations, and then apply t-SNE dimensionality reduction [52] for visualization. We observe that the generated instances, while closely resembling the training set, contribute to a broader and more continuous exploration of the problem space, thereby enhancing model robustness and generalization. Additionally, by increasing the masking ratio $\eta$, we can effectively explore a wider problem space beyond the confines of the training sets. For comparison with the baseline, we present the visualization of instances generated by Random in Figure 5 in Appendix C.5.

# 5 Limitations, Future Avenues, and Conclusions

Limitations In this paper, we develop G2MILP by iteratively corrupting and replacing the constraint vertices. We also investigate different implementations of the masking process.
However, more versatile masking schemes should be explored. Moreover, employing more sophisticated designs would enable us to control critical properties such as the feasibility of the instances. We intend to develop a more versatile and powerful generator in our future work.

Future Avenues We open up new avenues for research on DL-based MILP instance generative models. In addition to producing new instances to enrich the datasets, this research has many other promising technical implications [35]. (1) Such a generator will assist researchers in gaining insights into different data domains and the explored space of MILP instances. (2) Based on a generative model, we can establish an adversarial framework, where the generator aims to identify challenging cases for the solver, thus automatically enhancing the solver's ability to handle complex scenarios. (3) Training a generative model involves learning the data distribution and deriving representations through unsupervised learning. Consequently, it is possible to develop a pre-trained model based on a generative model, which can benefit downstream tasks across various domains. We believe that this paper serves as an entry point for the aforementioned routes, and we expect further efforts in this field.

Conclusions In this paper, we propose G2MILP, which to the best of our knowledge is the first deep generative framework for MILP instances. It can learn to generate MILP instances without prior expert-designed formulations, while simultaneously preserving their structures and computational hardness. Thus the generated instances can enhance MILP solvers under limited data availability. This work opens up new avenues for research on DL-based MILP instance generative models.

# Acknowledgements

The authors would like to thank all the anonymous reviewers for their insightful comments.
This work was supported in part by the National Key R&D Program of China under contract 2022ZD0119801, and the National Natural Science Foundation of China grants U19B2026, U19B2044, 61836011, 62021001, and 61836006.

# References

[1] John A Muckstadt and Richard C Wilson. An application of mixed-integer programming duality to scheduling thermal generating systems. IEEE Transactions on Power Apparatus and Systems, (12), 1968.
[2] Yves Pochet and Laurence A Wolsey. Production planning by mixed integer programming, volume 149. Springer, 2006.
[3] Rodrigo Moreno, Roberto Moreira, and Goran Strbac. A milp model for optimising multiservice portfolios of distributed energy storage. Applied Energy, 137:554-566, 2015.
[4] Robert E Bixby, Mary Fenelon, Zonghao Gu, Ed Rothberg, and Roland Wunderling. Mixed-integer programming: A progress report. In The sharpest cut: the impact of Manfred Padberg and his work, pages 309–325. SIAM, 2004.
[5] Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: a methodological tour d'horizon. European Journal of Operational Research, 290(2):405-421, 2021.
[6] Jiayi Zhang, Chang Liu, Xijun Li, Hui-Ling Zhen, Mingxuan Yuan, Yawen Li, and Junchi Yan. A survey for solving mixed integer programming via machine learning. Neurocomputing, 519: 205-217, 2023.
[7] He He, Hal Daume III, and Jason M Eisner. Learning to search in branch and bound algorithms. Advances in neural information processing systems, 27, 2014.

[8] Maxime Gasse, Didier Chételat, Nicola Ferroni, Laurent Charlin, and Andrea Lodi. Exact combinatorial optimization with graph convolutional neural networks. Advances in neural information processing systems, 32, 2019.
[9] Zhihai Wang, Xijun Li, Jie Wang, Yufei Kuang, Mingxuan Yuan, Jia Zeng, Yongdong Zhang, and Feng Wu. Learning cut selection for mixed-integer linear programming via hierarchical sequence model. In The Eleventh International Conference on Learning Representations, 2023.
[10] Ambros Gleixner, Gregor Hendel, Gerald Gamrath, Tobias Achterberg, Michael Bastubbe, Timo Berthold, Philipp Christophel, Kati Jarck, Thorsten Koch, Jeff Linderoth, et al. Miplib 2017: data-driven compilation of the 6th mixed-integer programming library. Mathematical Programming Computation, 13(3):443–490, 2021.
[11] Jun Sakuma and Shigenobu Kobayashi. A genetic algorithm for privacy preserving combinatorial optimization. In Annual Conference on Genetic and Evolutionary Computation, 2007.
[12] LLC Gurobi Optimization. Gurobi optimizer. URL http://www.gurobi.com, 2021.
[13] Han Lu, Zenan Li, Runzhong Wang, Qibing Ren, Xijun Li, Mingxuan Yuan, Jia Zeng, Xiaokang Yang, and Junchi Yan. Roco: A general framework for evaluating robustness of combinatorial optimization solvers on graphs. In The Eleventh International Conference on Learning Representations, 2023.
[14] Russ J Vander Wiel and Nikolaos V Sahinidis. Heuristic bounds and test problem generation for the time-dependent traveling salesman problem. Transportation Science, 29(2):167-183, 1995.
[15] Egon Balas and Andrew Ho. Set covering algorithms using cutting planes, heuristics, and subgradient optimization: A computational study, pages 37-60. Springer Berlin Heidelberg, Berlin, Heidelberg, 1980. ISBN 978-3-642-00802-3. doi: 10.1007/BFb0120886. URL https://doi.org/10.1007/BFb0120886.
[16] Simon Andrew Bowly. Stress testing mixed integer programming solvers through new test instance generation methods. PhD thesis, School of Mathematical Sciences, Monash University, 2019.
[17] Jiaxuan You, Haoze Wu, Clark Barrett, Raghuram Ramanujan, and Jure Leskovec. G2sat: learning to generate sat formulas. Advances in neural information processing systems, 32, 2019.
[18] Ziang Chen, Jialin Liu, Xinshang Wang, and Wotao Yin. On representing mixed-integer linear programs by graph neural networks. In The Eleventh International Conference on Learning Representations, 2023.
[19] Yang Li, Xinyan Chen, Wenxuan Guo, Xijun Li, Wanqian Luo, Junhua Huang, Hui-Ling Zhen, Mingxuan Yuan, and Junchi Yan. Hardsatgen: Understanding the difficulty of hard sat formula generation and a strong structure-hardness-aware baseline. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023.
[20] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[21] Omar Mahmood, Elman Mansimov, Richard Bonneau, and Kyunghyun Cho. Masked graph modeling for molecule generation. Nature communications, 12(1):3156, 2021.
[22] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[23] Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016.
[24] Rafael Gómez-Bombarelli, Jennifer N Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. ACS central science, 4(2):268-276, 2018.

[25] Tomás Balyo, Nils Froleyks, Marijn JH Heule, Markus Iser, Matti Jarvisalo, and Martin Suda. Proceedings of sat competition 2020: Solver and benchmark descriptions. 2020.
[26] Yang Li, Jinpei Guo, Runzhong Wang, and Junchi Yan. From distribution learning in training to gradient search in testing for combinatorial optimization. In Advances in Neural Information Processing Systems, 2023.
[27] Xijun Li, Qingyu Qu, Fangzhou Zhu, Mingxuan Yuan, Jia Zeng, and Jie Wang. Accelerating linear programming solving by exploiting the performance variability via reinforcement learning. 2023.
[28] Yufei Kuang, Xijun Li, Jie Wang, Fangzhou Zhu, Meng Lu, Zhihai Wang, Jia Zeng, Houqiang Li, Yongdong Zhang, and Feng Wu.
Accelerate presolve in large-scale linear programming via reinforcement learning. arXiv preprint arXiv:2310.11845, 2023. +[29] Vinod Nair, Sergey Bartunov, Felix Gimeno, Ingrid Von Glehn, Pawel Lichocki, Ivan Lobov, Brendan O'Donoghue, Nicolas Sonnerat, Christian Tjandraatmadja, Pengming Wang, et al. Solving mixed integer programs using neural networks. arXiv preprint arXiv:2012.13349, 2020. +[30] Elias B Khalil, Christopher Morris, and Andrea Lodi. Mip-gnn: A data-driven framework for guiding combinatorial solvers. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10219-10227, 2022. +[31] Qingyu Han, Linxin Yang, Qian Chen, Xiang Zhou, Dong Zhang, Akang Wang, Ruoyu Sun, and Xiaodong Luo. A gnn-guided predict-and-search framework for mixed-integer linear programming. arXiv preprint arXiv:2302.05636, 2023. +[32] Radu Baltean-Lugojan, Pierre Bonami, Ruth Misener, and Andrea Tramontani. Scoring positive semidefinite cutting planes for quadratic optimization via trained neural networks. preprint: http://www.optimization-online.org/DB.HTML/2018/11/6943.html, 2019. +[33] Qingyu Qu, Xijun Li, Yunfan Zhou, Jia Zeng, Mingxuan Yuan, Jie Wang, Jinhu Lv, Kexin Liu, and Kun Mao. An improved reinforcement learning algorithm for learning to branch. arXiv preprint arXiv:2201.06213, 2022. +[34] Alper Atamtürk. On the facets of the mixed-integer knapsack polyhedron. Mathematical Programming, 98(1-3):145-175, 2003. +[35] Jie Wang, Zijie Geng, Xijun Li, Jianye Hao, Yongdong Zhang, and Feng Wu. G2milp: Learning to generate mixed-integer linear programming instances for milp solvers. nov 2023. doi: 10. 36227/techrxiv.24566554.v1. URL http://dx.doi.org/10.36227/techrxiv.24566554. v1. +[36] Xiaojie Guo and Liang Zhao. A systematic survey on deep generative models for graph generation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. 
[37] Rocio Mercado, Tobias Rastemo, Edvard Lindelof, Günter Klambauer, Ola Engkvist, Hongming Chen, and Esben Jannik Bjerrum. Graph networks for molecular design. Machine Learning: Science and Technology, 2(2):025023, 2021.
[38] Wenqi Fan, Chengyi Liu, Yunqing Liu, Jiatong Li, Hang Li, Hui Liu, Jiliang Tang, and Qing Li. Generative diffusion models on graphs: Methods and applications. arXiv preprint arXiv:2302.02591, 2023.
[39] Yanqiao Zhu, Yuanqi Du, Yinkai Wang, Yichen Xu, Jieyu Zhang, Qiang Liu, and Shu Wu. A survey on deep graph generation: Methods and applications. arXiv preprint arXiv:2203.06714, 2022.
[40] Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. In International conference on machine learning, pages 2323-2332. PMLR, 2018.
[41] Zijie Geng, Shufang Xie, Yingce Xia, Lijun Wu, Tao Qin, Jie Wang, Yongdong Zhang, Feng Wu, and Tie-Yan Liu. De novo molecular generation via connection-aware motif mining. In The Eleventh International Conference on Learning Representations, 2023.

[42] Duncan J Watts and Steven H Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440-442, 1998.
[43] Jure Leskovec, Deepayan Chakrabarti, Jon Kleinberg, Christos Faloutsos, and Zoubin Ghahramani. Kronecker graphs: an approach to modeling networks. Journal of Machine Learning Research, 11(2), 2010.
[44] Ivan Garzon, Pablo Mesejo, and Jesús Giráldez-Cru. On the performance of deep generative models of realistic sat instances. In 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022). Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2022.
[45] Antoine Prouvost, Justin Dumouchelle, Lara Scavuzzo, Maxime Gasse, Didier Chételat, and Andrea Lodi. Ecole: A gym-like library for machine learning in combinatorial optimization solvers. In Learning Meets Combinatorial Algorithms at NeurIPS2020, 2020. URL https://openreview.net/forum?id=IVc9hqgibyB.
[46] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
[47] Frank Hutter, Lin Xu, Holger H Hoos, and Kevin Leyton-Brown. Algorithm runtime prediction: Methods & evaluation. Artificial Intelligence, 206:79-111, 2014.
[48] Nathan Brown, Marco Fiscato, Marwin HS Segler, and Alain C Vaucher. Guacamol: benchmarking models for de novo molecular design. Journal of chemical information and modeling, 59(3):1096-1108, 2019.
[49] Jianhua Lin. Divergence measures based on the shannon entropy. IEEE Transactions on Information theory, 37(1):145-151, 1991.
[50] David Bergman, Andre A Cire, Willem-Jan Van Hoeve, and John Hooker. Decision diagrams for optimization, volume 1. Springer, 2016.
[51] Tobias Achterberg. Scip: solving constraint integer programs. Mathematical Programming Computation, 1:1-41, 2009.
[52] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008.

# A Implementation Details

# A.1 Data Representation

As described in the main paper, we represent each MILP instance as a weighted bipartite graph $\mathcal{G} = (\mathcal{V} \cup \mathcal{W}, \mathcal{E})$, where $\mathcal{V}$ is the constraint vertex set, $\mathcal{W}$ is the variable vertex set, and $\mathcal{E}$ is the edge set. The graph is equipped with a tuple of feature matrices $(\mathbf{V}, \mathbf{W}, \mathbf{E})$, and the description of these features can be found in Table 5.

Table 5: Description of the constraint, variable, and edge features in our bipartite graph representation.
| Tensor | Feature | Description |
|--------|---------|-------------|
| $\mathbf{V}$ | bias | The bias value $b_i$. |
| $\mathbf{W}$ | type | Variable type (binary, continuous, integer, implicit integer) as a 4-dimensional one-hot encoding. |
| | objective | Objective coefficient $c_j$. |
| | has_lower_bound | Lower bound indicator. |
| | has_upper_bound | Upper bound indicator. |
| | lower_bound | Lower bound value $l_j$. |
| | upper_bound | Upper bound value $u_j$. |
| $\mathbf{E}$ | coef | Constraint coefficient $a_{ij}$. |
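As a concrete illustration of Table 5, the sketch below assembles the three feature tensors from a MILP in the standardized form; the dense layout, the feature ordering, and the function name `milp_to_bipartite` are illustrative assumptions (the paper itself relies on Ecole's `MilpBipartite` observation, which returns the features in its own layout):

```python
import numpy as np

def milp_to_bipartite(A, b, c, var_types, lb, ub):
    """Assemble the (V, W, E) feature tensors of Table 5 for a MILP
    min c^T x  s.t.  A x <= b,  lb <= x <= ub."""
    m, n = A.shape
    # Constraint features V: one row per constraint, holding the bias b_i.
    V = b.reshape(m, 1)
    # Variable features W: 4-dim one-hot type, objective, bound flags/values.
    type_map = {"binary": 0, "continuous": 1, "integer": 2, "implicit_integer": 3}
    W = np.zeros((n, 9))
    for j in range(n):
        W[j, type_map[var_types[j]]] = 1.0       # type one-hot
        W[j, 4] = c[j]                           # objective coefficient c_j
        W[j, 5] = float(np.isfinite(lb[j]))      # has_lower_bound
        W[j, 6] = float(np.isfinite(ub[j]))      # has_upper_bound
        W[j, 7] = lb[j] if np.isfinite(lb[j]) else 0.0   # lower bound l_j
        W[j, 8] = ub[j] if np.isfinite(ub[j]) else 0.0   # upper bound u_j
    # Edge features E: one (i, j, a_ij) triple per nonzero coefficient.
    rows, cols = np.nonzero(A)
    E = np.stack([rows, cols, A[rows, cols]], axis=1)
    return V, W, E
```

Storing only the nonzero $(i, j, a_{ij})$ triples mirrors the sparse edge set $\mathcal{E}$: most real MILP coefficient matrices are sparse, so edges exist only where $a_{ij} \neq 0$.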
To ensure consistency, we standardize each instance to the form of Equation 1. However, we do not perform data normalization in order to preserve the potential information related to the problem domain in the original formulation. When extracting the bipartite graph, we utilize the readily available observation function provided by Ecole. For additional details on the observation function, readers can consult the following link: https://doc.ecole.ai/py/en/stable/reference/observations.html#ecoleobservation.MilpBipartite.

# A.2 The Derivation of Masked Variational Auto-Encoder

We consider a random variable with distribution $p(\pmb{x})$. We draw samples from this distribution and apply a masking process to transform each sample $\pmb{x}$ into $\tilde{\pmb{x}}$ through a given probability $\tilde{p}(\tilde{\pmb{x}}|\pmb{x})$. Our objective is to construct a parameterized generator $p_{\theta}(\pmb{x}|\tilde{\pmb{x}})$ to produce new data based on the masked data $\tilde{\pmb{x}}$. We assume that the generation process involves an unobserved continuous random variable $z$ that is independent of $\tilde{\pmb{x}}$, i.e., $z \perp \tilde{\pmb{x}}$. Consequently, we obtain the following equation:

$$
p_{\theta}(\boldsymbol{x} | \tilde{\boldsymbol{x}}) = \frac{p_{\theta}(\boldsymbol{x} | \boldsymbol{z}, \tilde{\boldsymbol{x}}) p_{\theta}(\boldsymbol{z} | \tilde{\boldsymbol{x}})}{p_{\theta}(\boldsymbol{z} | \boldsymbol{x}, \tilde{\boldsymbol{x}})} = \frac{p_{\theta}(\boldsymbol{x} | \boldsymbol{z}, \tilde{\boldsymbol{x}}) p_{\theta}(\boldsymbol{z})}{p_{\theta}(\boldsymbol{z} | \boldsymbol{x}, \tilde{\boldsymbol{x}})}. \tag{14}
$$

We introduce a probabilistic encoder $q_{\phi}(z|\boldsymbol{x})$ to approximate the intractable latent variable distribution.
We can then derive the following:

$$
\begin{array}{l} \log p_{\boldsymbol{\theta}}(\boldsymbol{x} | \tilde{\boldsymbol{x}}) = \mathbb{E}_{\boldsymbol{z} \sim q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \left[ \log p_{\boldsymbol{\theta}}(\boldsymbol{x} | \tilde{\boldsymbol{x}}) \right] \\ = \mathbb{E}_{\boldsymbol{z} \sim q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \left[ \log \frac{p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z}, \tilde{\boldsymbol{x}}) p_{\boldsymbol{\theta}}(\boldsymbol{z})}{q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \frac{q_{\phi}(\boldsymbol{z} | \boldsymbol{x})}{p_{\boldsymbol{\theta}}(\boldsymbol{z} | \boldsymbol{x}, \tilde{\boldsymbol{x}})} \right] \\ = \mathbb{E}_{\boldsymbol{z} \sim q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \left[ \log \frac{p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z}, \tilde{\boldsymbol{x}}) p_{\boldsymbol{\theta}}(\boldsymbol{z})}{q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \right] + \mathbb{E}_{\boldsymbol{z} \sim q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \left[ \log \left( \frac{q_{\phi}(\boldsymbol{z} | \boldsymbol{x})}{p_{\boldsymbol{\theta}}(\boldsymbol{z} | \boldsymbol{x}, \tilde{\boldsymbol{x}})} \right) \right] \\ = -\mathcal{L}(\boldsymbol{\theta}, \phi | \boldsymbol{x}, \tilde{\boldsymbol{x}}) + D_{\mathrm{KL}} \left[ q_{\phi}(\boldsymbol{z} | \boldsymbol{x}) \| p_{\boldsymbol{\theta}}(\boldsymbol{z} | \boldsymbol{x}, \tilde{\boldsymbol{x}}) \right] \\ \geq -\mathcal{L}(\boldsymbol{\theta}, \phi | \boldsymbol{x}, \tilde{\boldsymbol{x}}). \tag{15} \\ \end{array}
$$

In the formula, the term $-\mathcal{L}(\theta, \phi | x, \tilde{x})$ is referred to as the evidence lower bound (ELBO), or the variational lower bound.
It can be expressed as:

$$
\begin{array}{l} -\mathcal{L}(\boldsymbol{\theta}, \phi | \boldsymbol{x}, \tilde{\boldsymbol{x}}) = \mathbb{E}_{\boldsymbol{z} \sim q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \left[ \log \frac{p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z}, \tilde{\boldsymbol{x}}) p_{\boldsymbol{\theta}}(\boldsymbol{z})}{q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \right] \\ = \mathbb{E}_{\boldsymbol{z} \sim q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \left[ \log p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z}, \tilde{\boldsymbol{x}}) \right] - \mathbb{E}_{\boldsymbol{z} \sim q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \left[ \log \frac{q_{\phi}(\boldsymbol{z} | \boldsymbol{x})}{p_{\boldsymbol{\theta}}(\boldsymbol{z})} \right] \\ = \mathbb{E}_{\boldsymbol{z} \sim q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \left[ \log p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z}, \tilde{\boldsymbol{x}}) \right] - D_{\mathrm{KL}} \left[ q_{\phi}(\boldsymbol{z} | \boldsymbol{x}) \| p_{\boldsymbol{\theta}}(\boldsymbol{z}) \right]. \tag{16} \\ \end{array}
$$

Consequently, the loss function can be formulated as follows:

$$
\mathcal{L}(\boldsymbol{\theta}, \phi) = \mathbb{E}_{\boldsymbol{x} \sim \mathcal{D}} \mathbb{E}_{\tilde{\boldsymbol{x}} \sim \tilde{p}(\tilde{\boldsymbol{x}} | \boldsymbol{x})} \left[ \mathcal{L}(\boldsymbol{\theta}, \phi | \boldsymbol{x}, \tilde{\boldsymbol{x}}) \right], \tag{17}
$$

where

$$
\mathcal{L}(\boldsymbol{\theta}, \phi | \boldsymbol{x}, \tilde{\boldsymbol{x}}) = \underbrace{\mathbb{E}_{\boldsymbol{z} \sim q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \left[ -\log p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z}, \tilde{\boldsymbol{x}}) \right]}_{\mathcal{L}_{\mathrm{rec}}} + \underbrace{D_{\mathrm{KL}} \left[ q_{\phi}(\boldsymbol{z} | \boldsymbol{x}) \| p_{\boldsymbol{\theta}}(\boldsymbol{z}) \right]}_{\mathcal{L}_{\mathrm{prior}}}. \tag{18}
$$

In the formula, the first term $\mathcal{L}_{\mathrm{rec}}$ is referred to as the reconstruction loss, as it urges the decoder to reconstruct the input data $\pmb{x}$. The second term $\mathcal{L}_{\mathrm{prior}}$ is referred to as the prior loss, as it regularizes the posterior distribution $q_{\phi}(z|x)$ of the latent variable to approximate the prior distribution $p_{\theta}(z)$. In practice, the prior distribution $p_{\theta}(z)$ is commonly taken as $\mathcal{N}(\mathbf{0},\mathbf{I})$, and a hyperparameter is often introduced as the coefficient for the prior loss.
Consequently, the loss function can be expressed as:

$$
\mathcal{L}(\boldsymbol{\theta}, \phi) = \mathbb{E}_{\boldsymbol{x} \sim \mathcal{D}} \mathbb{E}_{\tilde{\boldsymbol{x}} \sim \tilde{p}(\tilde{\boldsymbol{x}} | \boldsymbol{x})} \left[ \mathcal{L}(\boldsymbol{\theta}, \phi | \boldsymbol{x}, \tilde{\boldsymbol{x}}) \right], \tag{19}
$$

where

$$
\mathcal{L}(\boldsymbol{\theta}, \phi | \boldsymbol{x}, \tilde{\boldsymbol{x}}) = \mathcal{L}_{\text{rec}}(\boldsymbol{\theta}, \phi | \boldsymbol{x}, \tilde{\boldsymbol{x}}) + \beta \cdot \mathcal{L}_{\text{prior}}(\phi | \boldsymbol{x}),
$$

$$
\mathcal{L}_{\text{rec}}(\boldsymbol{\theta}, \phi | \boldsymbol{x}, \tilde{\boldsymbol{x}}) = \mathbb{E}_{\boldsymbol{z} \sim q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \left[ -\log p_{\boldsymbol{\theta}}(\boldsymbol{x} | \boldsymbol{z}, \tilde{\boldsymbol{x}}) \right], \tag{20}
$$

$$
\mathcal{L}_{\text{prior}}(\phi | \boldsymbol{x}) = D_{\mathrm{KL}} \left[ q_{\phi}(\boldsymbol{z} | \boldsymbol{x}) \| \mathcal{N}(\boldsymbol{0}, \boldsymbol{I}) \right].
$$

In G2MILP, the loss function is instantiated as:

$$
\mathcal{L}(\boldsymbol{\theta}, \phi) = \mathbb{E}_{\mathcal{G} \sim \mathcal{D}} \mathbb{E}_{\tilde{\mathcal{G}} \sim \tilde{p}(\tilde{\mathcal{G}} | \mathcal{G})} \left[ \mathcal{L}(\boldsymbol{\theta}, \phi | \mathcal{G}, \tilde{\mathcal{G}}) \right], \tag{21}
$$

where

$$
\mathcal{L}(\boldsymbol{\theta}, \phi | \mathcal{G}, \tilde{\mathcal{G}}) = \mathcal{L}_{\text{rec}}(\boldsymbol{\theta}, \phi | \mathcal{G}, \tilde{\mathcal{G}}) + \beta \cdot \mathcal{L}_{\text{prior}}(\phi | \mathcal{G}),
$$

$$
\mathcal{L}_{\text{rec}}(\boldsymbol{\theta}, \phi | \mathcal{G}, \tilde{\mathcal{G}}) = \mathbb{E}_{\mathbf{Z} \sim q_{\phi}(\mathbf{Z} | \mathcal{G})} \left[ -\log p_{\boldsymbol{\theta}}(\mathcal{G} | \mathbf{Z}, \tilde{\mathcal{G}}) \right], \tag{22}
$$

$$
\mathcal{L}_{\text{prior}}(\phi | \mathcal{G}) = D_{\mathrm{KL}} \left[ q_{\phi}(\mathbf{Z} | \mathcal{G}) \| \mathcal{N}(\mathbf{0}, \mathbf{I}) \right].
$$

# A.3 G2MILP Implementation

# A.3.1 Encoder

The encoder implements $q_{\phi}(\mathbf{Z}|\mathcal{G})$ in Equation 22. Given a bipartite graph $\mathcal{G} = (\mathcal{V} \cup \mathcal{W}, \mathcal{E})$ equipped with the feature matrices $(\mathbf{V}, \mathbf{W}, \mathbf{E})$, we employ a GNN structure with parameters $\phi$ to extract the representations. Specifically, we utilize MLPs as embedding layers to obtain the initial embeddings $h_{v_i}^{(0)}, h_{w_j}^{(0)}$, and $h_{e_{ij}}$, given by:

$$
\boldsymbol{h}_{v_{i}}^{(0)} = \operatorname{MLP}_{\phi}(\boldsymbol{v}_{i}), \quad \boldsymbol{h}_{w_{j}}^{(0)} = \operatorname{MLP}_{\phi}(\boldsymbol{w}_{j}), \quad \boldsymbol{h}_{e_{ij}} = \operatorname{MLP}_{\phi}(\boldsymbol{e}_{ij}).
\tag {23} +$$ + +Next, we perform $K$ graph convolution layers, with each layer in the form of two interleaved half-convolutions. The convolution layer is defined as follows: + +$$ +\boldsymbol {h} _ {v _ {i}} ^ {(k + 1)} \leftarrow \operatorname {M L P} _ {\phi} \left(\boldsymbol {h} _ {v _ {i}} ^ {(k)}, \sum_ {j: e _ {i j} \in \mathcal {E}} \operatorname {M L P} _ {\phi} \left(\boldsymbol {h} _ {v _ {i}} ^ {(k)}, \boldsymbol {h} _ {e _ {i j}}, \boldsymbol {h} _ {v _ {j}} ^ {(k)}\right)\right), +$$ + +$$ +\boldsymbol {h} _ {w _ {j}} ^ {(k + 1)} \leftarrow \operatorname {M L P} _ {\phi} \left(\boldsymbol {h} _ {w _ {j}} ^ {(k)}, \sum_ {i: e _ {i j} \in \mathcal {E}} \operatorname {M L P} _ {\phi} \left(\boldsymbol {h} _ {v _ {i}} ^ {(k + 1)}, \boldsymbol {h} _ {e _ {i j}}, \boldsymbol {h} _ {w _ {j}} ^ {(k)}\right)\right). \tag {24} +$$ + +The convolution layer is followed by two GraphNorm layers, one for constraint vertices and the other for variable vertices. We employ a concatenation Jumping Knowledge layer to aggregate information from all $K$ layers and obtain the node representations: + +$$ +\boldsymbol {h} _ {v _ {i}} = \operatorname {M L P} _ {\phi} \left(\underset {k = 0, \dots , K} {\text {C O N C A T}} \left(\boldsymbol {h} _ {v _ {i}} ^ {(k)}\right)\right), \quad \boldsymbol {h} _ {w _ {j}} = \operatorname {M L P} _ {\phi} \left(\underset {k = 0, \dots , K} {\text {C O N C A T}} \left(\boldsymbol {h} _ {w _ {j}} ^ {(k)}\right)\right). \tag {25} +$$ + +The obtained representations contain information about the instances. 
Subsequently, we use two MLPs to output the mean and log variance, and then sample the latent vector for each vertex from a Gaussian distribution as follows:

$$
\boldsymbol{z}_{v_i} \sim \mathcal{N}\left(\operatorname{MLP}_{\phi}\left(\boldsymbol{h}_{v_i}\right), \exp \operatorname{MLP}_{\phi}\left(\boldsymbol{h}_{v_i}\right)\right),
$$

$$
\boldsymbol{z}_{w_j} \sim \mathcal{N}\left(\operatorname{MLP}_{\phi}\left(\boldsymbol{h}_{w_j}\right), \exp \operatorname{MLP}_{\phi}\left(\boldsymbol{h}_{w_j}\right)\right). \tag{26}
$$

# A.3.2 Decoder

The decoder implements $p_{\theta}(\mathcal{G}|\mathbf{Z},\tilde{\mathcal{G}})$ in Equation 22. It utilizes a GNN to obtain the representations; this GNN has the same structure as the encoder GNN, but with parameters $\theta$ instead of $\phi$. To encode the masked graph, we assign a special [mask] token to the masked vertex $\tilde{v}$. Its initial embedding $\boldsymbol{h}_{\tilde{v}}^{(0)}$ is initialized as a special embedding $\boldsymbol{h}_{[\mathrm{mask}]}$. We mask all edges between $\tilde{v}$ and the variable vertices and add virtual edges. In each convolution layer, we apply a special update rule for $\tilde{v}$:

$$
\boldsymbol{h}_{\tilde{v}}^{(k+1)} \leftarrow \operatorname{MLP}_{\boldsymbol{\theta}}\left(\boldsymbol{h}_{\tilde{v}}^{(k)}, \underset{w_j \in \mathcal{W}}{\operatorname{MEAN}}\left(\boldsymbol{h}_{w_j}^{(k+1)}\right)\right), \quad \boldsymbol{h}_{w_j}^{(k+1)} \leftarrow \operatorname{MLP}_{\boldsymbol{\theta}}\left(\boldsymbol{h}_{w_j}^{(k+1)}, \boldsymbol{h}_{\tilde{v}}^{(k+1)}\right). \tag{27}
$$

This update is performed after each convolution layer, allowing $\tilde{v}$ to aggregate and propagate information from the entire graph.
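As a concrete (toy) illustration of this message-passing scheme, the NumPy sketch below runs one pair of interleaved half-convolutions in the spirit of Equation 24, followed by the Gaussian latent sampling of Equation 26. Single random linear layers with ReLU stand in for the MLPs; the graph, dimensions, and all weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # hidden/embedding dimension (Appendix B.2 uses 16)

def mlp(*inputs, W):
    # stand-in for an MLP: concatenate inputs, apply one linear layer + ReLU
    return np.maximum(np.concatenate(inputs, axis=-1) @ W, 0.0)

# toy bipartite graph: 3 constraint vertices, 4 variable vertices
edges = [(0, 0), (0, 1), (1, 2), (2, 3)]              # (constraint i, variable j)
h_v = rng.normal(size=(3, d))                         # constraint embeddings
h_w = rng.normal(size=(4, d))                         # variable embeddings
h_e = {e: rng.normal(size=d) for e in edges}          # edge embeddings

W_msg = rng.normal(size=(3 * d, d)) / np.sqrt(3 * d)  # message weights
W_upd = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)  # update weights

# first half-convolution: constraints aggregate messages from variable neighbours
msg_v = np.zeros_like(h_v)
for i, j in edges:
    msg_v[i] += mlp(h_v[i], h_e[(i, j)], h_w[j], W=W_msg)
h_v = mlp(h_v, msg_v, W=W_upd)

# second half-convolution: variables aggregate from the updated constraints
msg_w = np.zeros_like(h_w)
for i, j in edges:
    msg_w[j] += mlp(h_v[i], h_e[(i, j)], h_w[j], W=W_msg)
h_w = mlp(h_w, msg_w, W=W_upd)

# Equation 26: sample z ~ N(mu, exp(logvar)) per constraint vertex
W_mu, W_logvar = rng.normal(size=(d, d)) / d, rng.normal(size=(d, d)) / d
mu, logvar = h_v @ W_mu, h_v @ W_logvar
z_v = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
```

In the actual model these layers are trained jointly with the decoder, and GraphNorm plus the Jumping Knowledge aggregation of Equation 25 follow the convolutions.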
The obtained representations are used by four networks—Bias Predictor, Degree Predictor, Logits Predictor, and Weights Predictor—to determine the generated graph. The details of these networks are described in the main paper. Here we provide the losses for the four prediction tasks. In the following, the node features, e.g., $\boldsymbol{h}_{\tilde{v}}$, refer to those from $\tilde{\mathcal{G}}$ obtained by the decoder GNN.

$①$ Bias Prediction Loss:

$$
\mathcal{L}_1(\boldsymbol{\theta}, \phi | \mathcal{G}, \tilde{\mathcal{G}}) = \operatorname{MSE}\left(\sigma\left(\operatorname{MLP}_{\boldsymbol{\theta}}^{\text{bias}}\left([\boldsymbol{h}_{\tilde{v}}, \boldsymbol{z}_{\tilde{v}}]\right)\right), b_{\tilde{v}}^{*}\right). \tag{28}
$$

$②$ Degree Prediction Loss:

$$
\mathcal{L}_2(\boldsymbol{\theta}, \phi | \mathcal{G}, \tilde{\mathcal{G}}) = \operatorname{MSE}\left(\sigma\left(\operatorname{MLP}_{\boldsymbol{\theta}}^{\deg}\left([\boldsymbol{h}_{\tilde{v}}, \boldsymbol{z}_{\tilde{v}}]\right)\right), d_{\tilde{v}}^{*}\right). \tag{29}
$$

$③$ Logits Prediction Loss:

$$
\mathcal{L}_3(\boldsymbol{\theta}, \phi | \mathcal{G}, \tilde{\mathcal{G}}) = - \mathbb{E}_{(\tilde{v}, u) \sim p_{\mathrm{pos}}}\left[\log\left(\hat{\delta}_{\tilde{v}, u}^{\prime}\right)\right] - \mathbb{E}_{(\tilde{v}, u) \sim p_{\mathrm{neg}}}\left[\log\left(1 - \hat{\delta}_{\tilde{v}, u}^{\prime}\right)\right],
$$

$$
\hat{\delta}_{\tilde{v}, u}^{\prime} = \sigma\left(\operatorname{MLP}_{\boldsymbol{\theta}}^{\text{logits}}\left([\boldsymbol{h}_u, \boldsymbol{z}_u]\right)\right).
\tag{30}
$$

$④$ Weights Prediction Loss:

$$
\mathcal{L}_4(\boldsymbol{\theta}, \phi | \mathcal{G}, \tilde{\mathcal{G}}) = \operatorname{MSE}\left(\sigma\left(\operatorname{MLP}_{\boldsymbol{\theta}}^{\text{weights}}\left([\boldsymbol{h}_u, \boldsymbol{z}_u]\right)\right), e_{\tilde{v}, u}^{*}\right). \tag{31}
$$

With these four prediction tasks, the reconstruction loss in Equation 22 is instantiated as:

$$
\mathcal{L}_{\text{rec}}(\boldsymbol{\theta}, \phi | \mathcal{G}, \tilde{\mathcal{G}}) = \sum_{i=1}^{4} \alpha_i \cdot \mathcal{L}_i(\boldsymbol{\theta}, \phi | \mathcal{G}, \tilde{\mathcal{G}}). \tag{32}
$$

# A.3.3 Training and Inference

We describe the training and inference procedures in Algorithm 1 and Algorithm 2, respectively.

Algorithm 1: Train G2MILP
Input: Dataset $\mathcal{D}$, number of training steps $N$, batch size $B$.
Output: Trained G2MILP, dataset statistics $\underline{b},\overline{b},\underline{d},\overline{d},\underline{e},\overline{e}$.
1. Calculate the statistics $\underline{b},\overline{b},\underline{d},\overline{d},\underline{e},\overline{e}$ over $\mathcal{D}$;
2. for $n = 1,\dots,N$ do
3. $\quad \mathcal{B}\gets \emptyset$;
4. $\quad$ for $b = 1,\dots,B$ do
5. $\qquad \mathcal{G}\sim \mathcal{D}$, $\tilde{v}\sim \mathcal{V}^{\mathcal{G}}$;
6. $\qquad \tilde{\mathcal{G}}\leftarrow \mathcal{G}.\mathrm{MaskNode}(\tilde{v})$;
7. $\qquad \mathcal{B}\leftarrow \mathcal{B}\cup \{(\mathcal{G},\tilde{\mathcal{G}})\}$;
8. $\qquad$ Compute $b_{\tilde{v}}^{*}, d_{\tilde{v}}^{*}, \delta_{\tilde{v},u}, e_{\tilde{v},u}^{*}$;
9. $\qquad$ Compute $\mathcal{L}(\boldsymbol{\theta},\phi \mid \mathcal{G},\tilde{\mathcal{G}})$ in Equation 22;
10. $\quad \mathcal{L}(\boldsymbol{\theta},\phi)\leftarrow \frac{1}{|\mathcal{B}|}\sum_{(\mathcal{G},\tilde{\mathcal{G}})\in \mathcal{B}}\mathcal{L}(\boldsymbol{\theta},\phi \mid \mathcal{G},\tilde{\mathcal{G}})$;
11. $\quad$ Update $\boldsymbol{\theta},\phi$ to minimize $\mathcal{L}(\boldsymbol{\theta},\phi)$.

Algorithm 2: Generate a MILP instance
Input: Dataset $\mathcal{D}$, trained G2MILP, dataset statistics $\underline{b},\overline{b},\underline{d},\overline{d},\underline{e},\overline{e}$, masking ratio $\eta$.
Output: A novel instance $\hat{\mathcal{G}}$.
1. $\mathcal{G}\sim \mathcal{D}$, $N_{\mathrm{iters}}\gets \eta \cdot |\mathcal{V}^{\mathcal{G}}|$, $\hat{\mathcal{G}}\gets \mathcal{G}$;
2. for $n = 1,\dots,N_{\mathrm{iters}}$ do
3. $\quad \tilde{v}\sim \mathcal{V}^{\tilde{\mathcal{G}}}$;
4. $\quad$ Compute $\hat{b}_{\tilde{v}}^{*}$, $\hat{b}_{\tilde{v}}\gets \underline{b} + (\overline{b} - \underline{b})\cdot \hat{b}_{\tilde{v}}^{*}$, $\tilde{\mathcal{G}}.\tilde{v}.\mathrm{bias}\gets \hat{b}_{\tilde{v}}$;
5. $\quad$ Compute $\hat{d}_{\tilde{v}}^{*}$, $\hat{d}_{\tilde{v}}\gets \underline{d} + (\overline{d} - \underline{d})\cdot \hat{d}_{\tilde{v}}^{*}$;
6. $\quad$ for $u\in \mathcal{W}^{\tilde{\mathcal{G}}}$ do
7. $\qquad$ Compute $\hat{\delta}_{\tilde{v},u}^{\prime}$;
8. $\quad$ for $u\in \arg\operatorname{TopK}(\{\hat{\delta}_{\tilde{v},u}^{\prime} \mid u\in \mathcal{W}^{\tilde{\mathcal{G}}}\}, \hat{d}_{\tilde{v}})$ do
9. $\qquad$ Compute $\hat{e}_{\tilde{v},u}^{*}$, $\hat{e}_{\tilde{v},u}\gets \underline{e} + (\overline{e} - \underline{e})\cdot \hat{e}_{\tilde{v},u}^{*}$;
10. $\qquad \tilde{\mathcal{G}}.\mathrm{AddEdge}(\tilde{v},u)$;
11. $\qquad \tilde{\mathcal{G}}.e_{\tilde{v},u}.\mathrm{weights} \leftarrow \hat{e}_{\tilde{v},u}$;
12. $\quad \hat{\mathcal{G}}\gets \tilde{\mathcal{G}}$;
13. Output $\hat{\mathcal{G}}$.

# B Experimental Details

# B.1 Dataset

The three commonly used datasets, namely MIS, SetCover, and MIK, are the same as those used in [9]. Nurse Scheduling contains a group of 4 instances from MIPLIB 2017: nursesched-medium04 and nursesched-sprint-hidden09 for training, and nursesched-sprint02 and nursesched-sprint-late03 for test. Table 6 summarizes some statistics of these datasets.

Table 6: Statistics of datasets. Size means the number of instances in the training set. $|\mathcal{V}|$ and $|\mathcal{W}|$ are the numbers of constraints and variables, respectively.
| Dataset | MIS | SetCover | MIK | Nurse Scheduling |
| --- | --- | --- | --- | --- |
| Size | 1000 | 1000 | 80 | 2 |
| Mean $\lvert\mathcal{V}\rvert$ | 1953 | 500 | 346 | 8707 |
| Mean $\lvert\mathcal{W}\rvert$ | 500 | 1000 | 413 | 20659 |
# B.2 Hyperparameters

We report some important hyperparameters in this section. Further details can be found in our code once the paper is accepted for publication.

We run our model on a single GeForce RTX 3090 GPU. The hidden dimension and the embedding dimension are set to 16. The depth of the GNNs is 6. Each MLP has one hidden layer and uses ReLU as the activation function.

In this work, we simply set all $\alpha_{i}$ to 1. We find that the choice of $\beta$ significantly impacts the model performance. For MIS, we set $\beta$ to 0.00045. For SetCover, MIK, and Nurse Scheduling, we apply a sigmoid schedule [46] to let $\beta$ reach 0.0005, 0.001, and 0.001, respectively. We employ the Adam optimizer, train the model for 20,000 steps, and choose the best checkpoint based on the average error in solving time and the number of branching nodes. The learning rate is initialized to 0.001 and decays exponentially. For MIS, SetCover, and MIK, we set the batch size to 30. Specifically, to provide more challenging prediction tasks in each batch, we sample 15 graphs and use each graph to derive 2 masked ones for training. For Nurse Scheduling, we set the batch size to 1 due to the large size of each graph.

# B.3 Structural Distributional Similarity

Table 7: Description of statistics used for measuring the structural distributional similarity. These statistics are calculated on the bipartite graph extracted by Ecole.
| Feature | Description |
| --- | --- |
| coef_dens | Fraction of non-zero entries in $\mathbf{A}$, i.e., $\lvert\mathcal{E}\rvert / (\lvert\mathcal{V}\rvert \cdot \lvert\mathcal{W}\rvert)$. |
| cons_degree_mean | Mean degree of constraint vertices in $\mathcal{V}$. |
| cons_degree_std | Std of degrees of constraint vertices in $\mathcal{V}$. |
| var_degree_mean | Mean degree of variable vertices in $\mathcal{W}$. |
| var_degree_std | Std of degrees of variable vertices in $\mathcal{W}$. |
| lhs_mean | Mean of non-zero entries in $\mathbf{A}$. |
| lhs_std | Std of non-zero entries in $\mathbf{A}$. |
| rhs_mean | Mean of $\mathbf{b}$. |
| rhs_std | Std of $\mathbf{b}$. |
| clustering coef | Clustering coefficient of the graph. |
| modularity | Modularity of the graph. |
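As a minimal sketch of how the matrix-level descriptors above can be computed for a single instance from its constraint matrix $\mathbf{A}$ and right-hand side $\mathbf{b}$ (the function and key names are illustrative, and the purely graph-level statistics, clustering coefficient and modularity, are omitted):

```python
import numpy as np

def milp_statistics(A, b):
    """Degree/coefficient descriptors of one MILP instance (cf. Table 7).

    A: (|V| x |W|) constraint matrix; b: (|V|,) right-hand side."""
    nz = A != 0
    return {
        "coef_dens": nz.sum() / A.size,
        "cons_degree_mean": nz.sum(axis=1).mean(),
        "cons_degree_std": nz.sum(axis=1).std(),
        "var_degree_mean": nz.sum(axis=0).mean(),
        "var_degree_std": nz.sum(axis=0).std(),
        "lhs_mean": A[nz].mean(),
        "lhs_std": A[nz].std(),
        "rhs_mean": b.mean(),
        "rhs_std": b.std(),
    }
```

Each descriptor is computed per instance; the distributions of these values over a set of instances are what enter the similarity score below.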
Table 7 presents the 11 statistics that we use to measure the structural distributional similarity. First, we calculate the statistics for each instance. We then compute the JS divergence $D_{\mathrm{JS},i}$ between the generated samples and the training set for each descriptor $i \in \{1,\dots,11\}$. We estimate the distributions using the histogram function in numpy and the cross entropy using the entropy function in scipy. The JS divergence falls in the range $[0, \log 2]$, so we standardize it to a score $s_i$ via:

$$
s_i = \frac{1}{\log 2}\left(\log 2 - D_{\mathrm{JS},i}\right). \tag{33}
$$

Then we compute the mean of the 11 scores for the descriptors to obtain the final score $s$:

$$
s = \frac{1}{11} \sum_{i=1}^{11} s_i. \tag{34}
$$

Hence the final score ranges from 0 to 1, with a higher score indicating better similarity.

We use the training set to train a G2MILP model for each dataset and generate 1000 instances to compute the similarity scores. For MIK, which has only 80 training instances, we estimate the score using sampling with replacement.

Table 8: Results on the optimal value prediction task (mean±std). On each dataset and for each method, we sample 5 different sets of 20 instances for augmentation.
| Method | MIK MSE | MIK Improvement | Nurse Scheduling MSE | Nurse Scheduling Improvement |
| --- | --- | --- | --- | --- |
| Dataset | 0.0236 | 0.0% | 679.75 | 0.0% |
| Bowly | - | - | 663.52 (±95.33) | 2.3% (±14.0%) |
| Random | 0.0104 (±0.0023) | 55.9% (±9.7%) | - | - |
| G2MILP | 0.0073 (±0.0014) | 69.1% (±5.9%) | 548.70 (±44.68) | 19.3% (±6.6%) |
Table 9: Results on the predict-and-search framework on MIS. The training set contains 100 instances, and we generate 100 new instances. For Random and G2MILP, the masking ratio is 0.01. Time means the time for Gurobi to find the optimal solution with augmented data generated by different models. Bowly leads to the framework failing to find optimal solutions in the trust region.
| Method | Training Set | Bowly | Random | G2MILP |
| --- | --- | --- | --- | --- |
| Time (s) | 0.041 (±0.006) | 17/100 fail | 0.037 (±0.003) | 0.032 (±0.004) |
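The scoring pipeline of Equations 33 and 34 can be sketched per descriptor as follows. This is a NumPy-only approximation (the paper uses numpy's histogram together with scipy's entropy); the bin count and the shared histogram range are illustrative assumptions:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence (natural log) between two discrete distributions
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def similarity_score(stat_generated, stat_training, bins=20):
    # Estimate both distributions on a shared support via histograms, then
    # standardize D_JS in [0, log 2] to a score in [0, 1] (Equation 33)
    lo = min(np.min(stat_generated), np.min(stat_training))
    hi = max(np.max(stat_generated), np.max(stat_training))
    p, _ = np.histogram(stat_generated, bins=bins, range=(lo, hi))
    q, _ = np.histogram(stat_training, bins=bins, range=(lo, hi))
    d_js = js_divergence(p.astype(float), q.astype(float))
    return (np.log(2) - d_js) / np.log(2)
```

The final score $s$ is then the mean of the 11 per-descriptor scores, as in Equation 34.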
Notice that for a fair comparison, we exclude statistics that remain constant in our approach, such as problem size and objective coefficients. We implement another version of the metric that involves more statistics, and the results are in Appendix C.3.

# B.4 Downstream Tasks

The generated instances have the potential to enrich datasets in any downstream task. In this work, we demonstrate this potential through two application scenarios, i.e., the optimal value prediction task and the predict-and-search framework.

Optimal Value Prediction Two datasets, MIK and Nurse Scheduling, are considered, with medium and extremely small sizes, respectively. Following [18], we employ a GNN as the predictive model. The GNN structure is similar to the GNNs in G2MILP. We obtain the graph representation using mean pooling over all vertices, followed by a two-layer MLP to predict the optimal values of the instances.

For each dataset, we train a GNN predictive model on the training set. Specifically, for MIK, we use $80\%$ of instances for training and $20\%$ for validation, and train for 1000 epochs to select the best checkpoint based on validation MSE. For Nurse Scheduling, we use both instances to train the model for 80 epochs. We use the generative models, Bowly, Random, and G2MILP, to generate 20 instances similar to the training sets. For Random and G2MILP, we mix together the instances generated by setting the masking ratio $\eta$ to 0.01 and 0.05, respectively. Next, we use the generated instances to enrich the original training sets, and use the enriched data to train another predictive model. We test all the trained models on previously unseen test data. Table 8 presents the predictive MSE on the test sets of the models trained on different training sets. As the absolute values of MSE are less meaningful than the relative values, we report the performance improvements brought by the different generative techniques.
The improvement of $\mathrm{Model}_2$ relative to $\mathrm{Model}_1$ is calculated as follows:

$$
\mathrm{Improvement}_{2,1} = \frac{\mathrm{MSE}_1 - \mathrm{MSE}_2}{\mathrm{MSE}_1}. \tag{35}
$$

On MIK, Bowly results in numerical issues as some generated coefficients are excessively large. G2MILP significantly improves the performance and outperforms Random. On Nurse Scheduling, Random fails to generate feasible instances, and Bowly yields a minor improvement. Notably, G2MILP allows for the training of the model even with minimal data.

Predict-and-Search We conduct experiments on a neural solver, i.e., the predict-and-search framework proposed by Han et al. [31]. Specifically, they propose a framework that first predicts a solution and then uses solvers to search for the optimal solution in a trust region. We consider using generated instances to enhance the predictive model. We first train the predictive model on 100 MIS instances, and then use the generative models to generate 100 new instances to augment the dataset. The results are in Table 9. Bowly generates low-quality data that disturbs the model training, so that there is no optimal solution in the trust region around the predicted solution. Though both Random and G2MILP can enhance the solving framework to reduce solving time, G2MILP significantly outperforms Random.

**Discussions** These two downstream tasks, despite their simplicity, possess characteristics that make them representative problems that could benefit from generative models. Specifically, we identify the following features.

1. More is better. We want as many data instances as possible. This condition is satisfied when we can obtain precise labels using existing methods, e.g., prediction-based neural solvers [31], or when unlabeled data is required for RL model training, e.g., RL for cut selection [9].
2. More similar is better.
We want data instances that follow the same distribution as the real ones, i.e., approximately independent and identically distributed (i.i.d.) samples, so the generated instances should stay close to the training distribution.
3. More diverse is better. We want the data to be diverse, despite being i.i.d., so that the trained model can generalize better.

Our experimental results demonstrate the potential of G2MILP in facilitating downstream tasks with these characteristics, thus enhancing MILP solvers. We intend to explore additional application scenarios in future research.

# C Additional Results

# C.1 Comparison with G2SAT

Table 10: Results of G2SAT on MIS. In the table, "sim" denotes the similarity score (higher is better), "time" denotes solving time, and "#branch" denotes the number of branching nodes. Numbers in brackets denote relative errors (lower is better).
| Method | sim | time (s) | #branch |
| --- | --- | --- | --- |
| Training Set | 0.998 | 0.349 | 16.09 |
| G2SAT | 0.572 | 0.014 (96.0%) | 2.11 (86.9%) |
| G2MILP (η = 0.01) | 0.997 | 0.354 (1.5%) | 15.03 (6.6%) |
| G2MILP (η = 0.1) | 0.895 | 0.214 (38.7%) | 4.61 (71.3%) |
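The bracketed numbers in Table 10 (and the analogous tables below) are relative errors of a statistic of the generated instances with respect to the training set; as a one-line sketch (the exact rounding used in the paper is an assumption):

```python
def relative_error(value, reference):
    # bracketed numbers in the tables: |generated - training| / training
    return abs(value - reference) / reference
```

For example, relative_error(108.30, 175.35) recovers the 38.2% branching-node error reported later for an Idx-ordered G2MILP run against the 175.35 training-set baseline.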
We conduct an additional experiment that transfers G2SAT to a special MILP dataset, MIS, in which all coefficients are 1.0 and thus the instances can be modeled as homogeneous bipartite graphs. We apply G2SAT to learn to generate new graphs and convert them to MILPs. Results are in Table 10. The results show that G2MILP significantly outperforms G2SAT on these special cases.

# C.2 Masking Process

Masking Variables In the main body, for simplicity, we define the masking process as uniformly sampling a constraint vertex $\tilde{v} \sim \mathcal{U}(\mathcal{V})$ to mask, while keeping the variable vertices unchanged. We implement different versions of G2MILP that allow masking and modifying either constraints, variables, or both. The results are in Table 11.

Ablation on Masking Ratio We have conducted ablation studies on the effect of the masking ratio $\eta$ on MIK. The results are in Figure 4. The experimental settings are the same as those in Table 2 and Figure 2. From the results we draw the following conclusions. (1) Though empirically a smaller $\eta$ leads to relatively better performance, G2MILP maintains a high similarity score even when $\eta$ is large. (2) The downstream task performance does not drop significantly. This makes sense because a smaller $\eta$ leads to more similar instances, while a larger $\eta$ leads to more diverse (but still similar) instances, both of which can benefit downstream tasks. (3) G2MILP always outperforms Random,

Table 11: Results of different implementations of G2MILP on MIK. In the table, $\eta$ denotes the mask ratio, "v" denotes only modifying variables (objective coefficients and variable types), "c" denotes only modifying constraints, and "vc" denotes first modifying variables and then modifying constraints. We do not report the similarity scores for "v" models because the current similarity metrics exclude statistics that measure only variables.
| Method | Variant | sim | time (s) | #branch |
| --- | --- | --- | --- | --- |
| Training Set | | 0.997 | 0.198 | 175.35 |
| G2MILP (η = 0.01) | v | - | 0.183 (7.5%) | 136.68 (22.0%) |
| | c | 0.989 | 0.169 (17.1%) | 167.44 (4.5%) |
| | vc | 0.986 | 0.186 (6.1%) | 155.40 (11.4%) |
| G2MILP (η = 0.05) | v | - | 0.176 (11.1%) | 136.68 (22.0%) |
| | c | 0.964 | 0.148 (25.3%) | 150.90 (13.9%) |
| | vc | 0.964 | 0.147 (25.3%) | 142.70 (18.6%) |
| G2MILP (η = 0.1) | v | - | 0.172 (13.1%) | 136.67 (22.1%) |
| | c | 0.905 | 0.117 (40.9%) | 169.63 (3.3%) |
| | vc | 0.908 | 0.115 (41.9%) | 112.29 (35.9%) |
+ +![](images/9bae51e0c34b50de0c8f1ff1da45d378c80e82024a0c3bb7bff90fefd85201bd.jpg) +(a) Distributional Similarity + +![](images/32312e0164ad6bb3561a07d51fe5ca698ca9d88fe129d849e1b80e6f016c74c5.jpg) +(b) Relative MSE +Figure 4: (a) Distributional similarity score (higher is better) and (b) Relative MSE (lower is better) v.s. masking ratio $\eta$ . + +which demonstrates that the learning paradigm helps maintain the performance. (4) Bowly fails on this dataset because its generated instances lead to numerical issues and cannot be read by Gurobi or SCIP. Moreover, in real applications, it is reasonable and flexible to adjust the hyperparameter to achieve good performances in different scenarios. + +Orders of Masked Constraints We also investigate different orders of masking constraint vertices, including uniformly sampling and sampling according to the vertex indices. Results are in Table 12. We find that uniformly sampling achieves the best performance. Sampling according to indices leads to a performance decrease, maybe because near constraints are relevant and lead to error accumulation. We think these results are interesting, and will study it in the future work. + +# C.3 Structural Distributional Similarity + +In the mainbody, for a fair comparison, we exclude statistics that remain constant in our approach, such as problem size and objective coefficients. However, these statistics are also important features for MILPs. In this section, we incorporate three additional statistics in the computing of similarity scores: (1) mean of objective coefficients $\mathbf{c}$ , (2) std of objective coefficients $\mathbf{c}$ , and (3) the ratio of continuous variables. With these additional metrics, we recompute the structural similarity scores and updated the results in both Table 2 and Table 11. The new results are in Table 13 and Table 14, + +Table 12: Results of different implementations of generation orders on MIK dataset. 
In the table, "Uni" denotes uniform sampling from the constraints, while "Idx (asc)" and "Idx (desc)" denote sampling constraints according to their indices in ascending and descending order, respectively.
| Order | Model | sim | time (s) | #branch |
| --- | --- | --- | --- | --- |
| Uni | G2MILP | 0.953 | 0.129 (35.1%) | 235.35 (34.2%) |
| | Random | 0.840 | 0.004 (97.9%) | 0.00 (100%) |
| Idx (asc) | G2MILP | 0.892 | 0.054 (72.7%) | 108.30 (38.2%) |
| | Random | 0.773 | 0.002 (98.9%) | 0.00 (100%) |
| Idx (desc) | G2MILP | 0.925 | 0.027 (86.2%) | 31.53 (82.0%) |
| | Random | 0.827 | 0.003 (98.6%) | 0.00 (100%) |
+ +respectively. From the results, we can still conclude that G2MILP outperforms all baselines, further supporting the effectiveness of our proposed method. + +Table 13: (Table 2 recomputed.) Structural distributional similarity scores between the generated instances with the training datasets. Higher is better. $\eta$ is the masking ratio. We do not report the results of Bowly on MIK because Ecole [45] and SCIP [51] fail to read the generated instances due to large numerical values. + +
| $\eta$ | Method | MIS | SetCover | MIK |
| --- | --- | --- | --- | --- |
| | Bowly | 0.144 | 0.150 | - |
| 0.01 | Random | 0.722 | 0.791 | 0.971 |
| | G2MILP | 0.997 | 0.874 | 0.994 |
| 0.05 | Random | 0.670 | 0.704 | 0.878 |
| | G2MILP | 0.951 | 0.833 | 0.969 |
| 0.1 | Random | 0.618 | 0.648 | 0.768 |
| | G2MILP | 0.921 | 0.834 | 0.930 |
+ +Table 14: (Table 11 recomputed.) Results of different implementations of G2MILP on MIK. In the table, $\eta$ denotes mask ratio, "v" denotes only modifying variables (objective coefficients and variable types), "c" denotes only modifying constraints, and "vc" denotes first modifying variables and then modifying constraints. + +
| Method | vc | c | v |
| --- | --- | --- | --- |
| G2MILP (η = 0.01) | 0.998 | 0.988 | 0.985 |
| G2MILP (η = 0.05) | 0.996 | 0.968 | 0.967 |
| G2MILP (η = 0.1) | 0.996 | 0.928 | 0.912 |
# C.4 Sizes of Datasets

We conduct experiments on MIS with different sizes of the original dataset, as well as different ratios of generated instances to original ones. The results are in Table 15. The results show that G2MILP can bring performance improvements on varying sizes of datasets.

# C.5 Visualization

The t-SNE visualizations for the baselines are in Figure 5. G2MILP generates diverse instances around the training set, while instances generated by Random deviate more from the realistic ones.

Table 15: Results on the optimal value prediction task on MIS with different dataset sizes. In the table, "#MILPs" denotes the number of instances in the training sets, and "Augment%" denotes the ratio of generated instances to training instances.
| #MILPs | Augment% | MSE | Improvement |
| --- | --- | --- | --- |
| 500 | - | 1.318 | 0.0% |
| | 25% | 1.014 | 23.1% |
| | 50% | 0.998 | 24.3% |
| | 100% | 0.982 | 25.5% |
| 1000 | - | 0.798 | 0.0% |
| | 25% | 0.786 | 1.5% |
| | 50% | 0.752 | 5.8% |
| | 100% | 0.561 | 23.7% |
| 2000 | - | 0.294 | 0.0% |
| | 25% | 0.283 | 19.0% |
| | 50% | 0.243 | 17.3% |
| | 100% | 0.202 | 31.3% |
| 5000 | - | 0.188 | 0.0% |
| | 25% | 0.168 | 10.6% |
| | 50% | 0.175 | 6.9% |
| | 100% | 0.170 | 9.6% |
+ +![](images/174c837a33bb3afc4f5295a7ba57674f876bc115ac59767b185bf47a14ecc56d.jpg) + +![](images/9c79f7730a343791cae732acfe8dba474ac919e35beedf888c7a2a773eb83fce.jpg) + +![](images/d9ac91cbc437f6f2adc917bedf66f715b15cac75ecbf32710ab59459863fe4b7.jpg) + +![](images/341d078c8957390aa3df0851fb8c71deafe737572adea1248017e9222d1575d6.jpg) +Figure 5: The t-SNE visualization of MILP instance representations for MIK. Each point represents an instance. Red points are from the training set, blue points are instances generated by G2MILP, and green points are instances generated by Random. + +![](images/faa652f1773720f28d7112cb0bab7bd7d873d706872b241b636e38d21bd8d121.jpg) + +![](images/a9f47368fdb5525ce2857e6ee1f318e79c00bb4344ae7356682c5e5e166cee4b.jpg) \ No newline at end of file diff --git a/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/images.zip b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..542f75049c4a62285ddd4a709fa5b685748b8b8e --- /dev/null +++ b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d863a8b88dc37f589cf1d5cfa2df316d72f91c08e482926643b6909f3dc0538f +size 1189409 diff --git a/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/layout.json b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9574208f30cd6006a5b92e1724797b9bb8f28798 --- /dev/null +++ b/adeepinstancegenerativeframeworkformilpsolversunderlimiteddataavailability/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4afb7cc491dc4330e4a085d8d575ad1f141d67daa4ec9c49a10e3987e958eba +size 767404 diff --git 
# A Definition of Continual Reinforcement Learning

David Abel

dmabel@google.com

Google DeepMind

Andre Barreto

andrebarreto@google.com

Google DeepMind

Benjamin Van Roy

benvanroy@google.com

Google DeepMind

Doina Precup

doinap@google.com

Google DeepMind

Hado van Hasselt

hado@google.com

Google DeepMind

Satinder Singh

baveja@google.com

Google DeepMind

# Abstract

In a standard view of the reinforcement learning problem, an agent's goal is to efficiently identify a policy that maximizes long-term reward. However, this perspective is based on a restricted view of learning as finding a solution, rather than treating learning as endless adaptation. In contrast, continual reinforcement learning refers to the setting in which the best agents never stop learning. Despite the importance of continual reinforcement learning, the community lacks a simple definition of the problem that highlights its commitments and makes its primary concepts precise and clear. To this end, this paper is dedicated to carefully defining the continual reinforcement learning problem. We formalize the notion of agents that "never stop learning" through a new mathematical language for analyzing and cataloging agents. Using this new language, we define a continual learning agent as one that can be understood as carrying out an implicit search process indefinitely, and continual reinforcement learning as the setting in which the best agents are all continual learning agents. We provide two motivating examples, illustrating that traditional views of multi-task reinforcement learning and continual supervised learning are special cases of our definition. Collectively, these definitions and perspectives formalize many intuitive concepts at the heart of learning, and open new research pathways surrounding continual learning agents.
+ +# 1 Introduction + +In *The Challenge of Reinforcement Learning*, Sutton states: "Part of the appeal of reinforcement learning is that it is in a sense the whole AI problem in a microcosm" [56]. Indeed, the problem facing an agent that learns to make better decisions from experience is at the heart of the study of Artificial Intelligence (AI). Yet, when we study the reinforcement learning (RL) problem, it is typical to restrict our focus in a number of ways. For instance, we often suppose that a complete description of the state of the environment is available to the agent, or that the interaction stream is subdivided into episodes. Beyond these standard restrictions, however, there is another significant assumption that constrains the usual framing of RL: We tend to concentrate on agents that learn to solve problems, rather than agents that learn forever. For example, consider an agent learning to play Go: Once the agent has discovered how to master the game, the task is complete, and the agent's learning can stop. This view of learning is often embedded in the standard formulation of RL, in which an agent interacts with a Markovian environment with the goal of efficiently identifying an optimal policy, at which point learning can cease. + +But what if this is not the best way to model the RL problem? That is, instead of viewing learning as finding a solution, we can instead think of it as endless adaptation. This suggests study of the continual reinforcement learning (CRL) problem [47, 48, 25, 27], as first explored in the thesis by + +Ring [46], with close ties to supervised never-ending [10, 39, 43] and continual learning [47, 48, 26, 54, 41, 42, 49, 22, 30, 45, 4]. + +Despite the prominence of CRL, the community lacks a clean, general definition of this problem. It is critical to develop such a definition to promote research on CRL from a clear conceptual foundation, and to guide us in understanding and designing continual learning agents. 
To these ends, this paper is dedicated to carefully defining the CRL problem. Our definition is summarized as follows:

# The CRL Problem (Informal)

An RL problem is an instance of CRL if the best agents never stop learning.

The core of our definition is framed around two new insights that formalize the notion of "agents that never stop learning": (i) we can understand every agent as implicitly searching over a set of history-based policies (Theorem 3.1), and (ii) every agent will either continue this search forever, or eventually stop (Remark 3.2). We make these two insights rigorous through a pair of logical operators on agents, which we call *generates* and *reaches*, that provide a new mathematical language for characterizing agents. Using these tools, we then define CRL as any RL problem in which all of the best agents never stop their implicit search. We provide two motivating examples of CRL, illustrating that traditional multi-task RL and continual supervised learning are special cases of our definition. We further identify necessary properties of CRL (Theorem 4.1) and of the new operators (Theorem 4.2, Theorem 4.3). Collectively, these definitions and insights formalize many intuitive concepts at the heart of continual learning, and open new research pathways surrounding continual learning agents.

# 2 Preliminaries

We first introduce key concepts and notation. Our conventions are inspired by Ring [46], the recent work by Dong et al. [16] and Lu et al. [32], as well as the literature on general RL by Hutter [23, 24], Lattimore [28], Leike [29], Cohen et al. [12], and Majeed [36].

Notation. We let capital calligraphic letters denote sets $(\mathcal{X})$, lower case letters denote constants and functions $(x)$, italic capital letters denote random variables $(X)$, and blackboard capitals denote the natural and real numbers $(\mathbb{N},\mathbb{R},\mathbb{N}_0 = \mathbb{N}\cup \{0\})$.
Additionally, we let $\Delta (\mathcal{X})$ denote the probability simplex over the set $\mathcal{X}$. That is, the function $p:\mathcal{X}\times \mathcal{Y}\to \Delta (\mathcal{Z})$ expresses a probability mass function $p(\cdot \mid x,y)$ over $\mathcal{Z}$ for each $x\in \mathcal{X}$ and $y\in \mathcal{Y}$. Lastly, we use $\neg$ to denote logical negation, and we use $\forall_{x\in \mathcal{X}}$ and $\exists_{x\in \mathcal{X}}$ to express the universal and existential quantifiers over a set $\mathcal{X}$.

# 2.1 Agents and Environments

We begin by defining environments, agents, and related artifacts.

Definition 2.1. An agent-environment interface is a pair $(\mathcal{A},\mathcal{O})$ of countable sets $\mathcal{A}$ and $\mathcal{O}$ where $|\mathcal{A}|\geq 2$ and $|\mathcal{O}|\geq 1$.

We refer to elements of $\mathcal{A}$ as actions, denoted $a$, and elements of $\mathcal{O}$ as observations, denoted $o$. Histories define the possible interactions between an agent and an environment that share an interface.

Definition 2.2. The histories with respect to interface $(\mathcal{A},\mathcal{O})$ are the set of sequences of action-observation pairs,

$$
\mathcal{H} = \bigcup_{t = 0}^{\infty} (\mathcal{A} \times \mathcal{O})^{t}. \tag{2.1}
$$

We refer to an individual element of $\mathcal{H}$ as a history, denoted $h$, and we let $hh'$ express the history resulting from the concatenation of any two histories $h, h' \in \mathcal{H}$. Furthermore, the set of histories of length $t \in \mathbb{N}_0$ is defined as $\mathcal{H}_t = (\mathcal{A} \times \mathcal{O})^t$, and we use $h_t \in \mathcal{H}_t$ to refer to a history containing $t$ action-observation pairs, $h_t = a_0 o_1 \ldots a_{t-1} o_t$, with $h_0 = \emptyset$ the empty history. An environment is then a function, from the set of all environments $\mathcal{E}$, that produces observations given a history.

Definition 2.3.
An environment with respect to interface $(\mathcal{A},\mathcal{O})$ is a function $e:\mathcal{H}\times \mathcal{A}\to \Delta (\mathcal{O})$.

This model of environments is general in that it can capture Markovian environments such as Markov decision processes (MDPs, Puterman, 2014) and partially observable MDPs (Cassandra et al., 1994), as well as both episodic and non-episodic settings. We next define an agent as follows.

Definition 2.4. An agent with respect to interface $(\mathcal{A},\mathcal{O})$ is a function $\lambda :\mathcal{H}\to \Delta (\mathcal{A})$.

We let $\mathbb{A}$ denote the set of all agents, and let $\Lambda$ denote any non-empty subset of $\mathbb{A}$. This treatment of an agent captures the mathematical way experience gives rise to behavior, as in "agent functions" from work by Russell and Subramanian [50]. This is in contrast to a mechanistic account of agency as proposed by Dong et al. [16] and Sutton [58]. Further, note that Definition 2.4 is precisely a history-based policy; we embrace the view that there is no real distinction between an agent and a policy, and will refer to all such functions as "agents" unless otherwise indicated.

# 2.2 Realizable Histories

We will be especially interested in the histories that occur with non-zero probability as a result of the interaction between a particular agent and environment.

Definition 2.5. The realizable histories of a given agent-environment pair, $(\lambda ,e)$, define the set of histories of any length that can occur with non-zero probability from the interaction of $\lambda$ and $e$,

$$
\mathcal{H}^{\lambda , e} = \bar{\mathcal{H}} = \bigcup_{t = 0}^{\infty} \left\{h_{t} \in \mathcal{H}_{t}: \prod_{k = 0}^{t - 1} e\left(o_{k + 1} \mid h_{k}, a_{k}\right) \lambda\left(a_{k} \mid h_{k}\right) > 0 \right\}. \tag{2.2}
$$

Given a realizable history $h$, we will refer to the realizable history suffixes, $h'$, which, when concatenated with $h$, produce a realizable history $hh' \in \bar{\mathcal{H}}$.

Definition 2.6. The realizable history suffixes of a given $(\lambda, e)$ pair, relative to a history prefix $h \in \mathcal{H}^{\lambda, e}$, define the set of histories that, when concatenated with prefix $h$, remain realizable,

$$
\mathcal{H}_{h}^{\lambda , e} = \bar{\mathcal{H}}_{h} = \left\{h^{\prime} \in \mathcal{H}: h h^{\prime} \in \mathcal{H}^{\lambda , e} \right\}. \tag{2.3}
$$

We abbreviate $\mathcal{H}^{\lambda ,e}$ to $\bar{\mathcal{H}}$, and $\mathcal{H}_h^{\lambda ,e}$ to $\bar{\mathcal{H}}_h$, where $\lambda$ and $e$ are obscured for brevity.

# 2.3 Reward, Performance, and the RL Problem

Supported by the arguments of Bowling et al. [7], we assume that all of the relevant goals or purposes of an agent are captured by a deterministic reward function (in line with the reward hypothesis [57]).

Definition 2.7. We call $r: \mathcal{A} \times \mathcal{O} \to \mathbb{R}$ a reward function.

We remain agnostic to how the reward function is implemented; it could be a function inside of the agent, or the reward function's output could be a special scalar in each observation. Such commitments do not impact our framing. When we refer to an environment, we will implicitly mean that a reward function has been selected as well. We also remain agnostic to how reward is aggregated to determine performance, and instead adopt the function $v$ defined as follows.

Definition 2.8. The performance, $v: \mathcal{H} \times \mathbb{A} \times \mathcal{E} \to [\mathrm{v}_{\min}, \mathrm{v}_{\max}]$, is a bounded function for fixed constants $\mathrm{v}_{\min}, \mathrm{v}_{\max} \in \mathbb{R}$.
The function $v(\lambda, e \mid h)$ expresses some statistic of the future random rewards produced by the interaction between $\lambda$ and $e$ following history $h$, where we use $v(\lambda, e)$ as shorthand for $v(\lambda, e \mid h_0)$. While we accommodate any $v$ that satisfies the above definition, it may be useful to think of specific choices of $v(\lambda, e \mid h_t)$, such as the average reward,

$$
\liminf_{k \rightarrow \infty} \frac{1}{k} \mathbb{E}_{\lambda , e}\left[ R_{t} + \dots + R_{t + k} \mid H_{t} = h_{t} \right], \tag{2.4}
$$

where $\mathbb{E}_{\lambda ,e}\big[\dots \mid H_t = h_t\big]$ denotes expectation over the stochastic process induced by $\lambda$ and $e$ following history $h_t$. Or, we might consider performance based on the expected discounted reward, $v(\lambda ,e\mid h_t) = \mathbb{E}_{\lambda ,e}[R_t + \gamma R_{t + 1} + \ldots \mid H_t = h_t]$, where $\gamma \in [0,1)$ is a discount factor.

The above components give rise to a simple definition of the RL problem.

Definition 2.9. An instance of the RL problem is defined by a tuple $(e, v, \Lambda)$ as follows:

$$
\Lambda^{*} = \arg \max_{\lambda \in \Lambda} v (\lambda , e). \tag{2.5}
$$

This captures the RL problem facing an agent designer that would like to identify an optimal agent $(\lambda^{*}\in \Lambda^{*})$ with respect to the performance $(v)$, among the available agents $(\Lambda)$, in a particular environment $(e)$. We note that a simple extension of this definition of the RL problem might instead consider a set of environments (or similar alternatives).

# 3 Agent Operators: Generates and Reaches

We next introduce two new insights about agents, and the logical operators that formalize them:

1. Theorem 3.1: Every agent can be understood as searching over another set of agents.
2. Remark 3.2: Every agent will either continue their search forever, or eventually stop.
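Before formalizing these operators, the preliminaries above (environments, agents, histories, and performance) can be made concrete in a short sketch. All names and the toy environment below are hypothetical illustrations, not objects from the paper: an environment is a map from (history, action) to an observation distribution (Definition 2.3), an agent is a map from a history to an action distribution (Definition 2.4), and the final line estimates an empirical, finite-horizon analogue of the average-reward statistic in Eq. (2.4).

```python
import random

# A history is a tuple of (action, observation) pairs (Definition 2.2).
# Distributions are represented as dicts {outcome: probability}.

def sample(dist, rng):
    """Draw one outcome from a {outcome: probability} distribution."""
    outcomes = list(dist)
    weights = [dist[o] for o in outcomes]
    return rng.choices(outcomes, weights=weights, k=1)[0]

def toy_env(history, action):
    # Hypothetical two-observation environment: action 1 tends to produce
    # observation 1, action 0 tends to produce observation 0.
    return {1: 0.9, 0: 0.1} if action == 1 else {1: 0.1, 0: 0.9}

def uniform_agent(history):
    # An agent that ignores the history and acts uniformly at random.
    return {0: 0.5, 1: 0.5}

def rollout(agent, env, steps, seed=0):
    """Sample a realizable history of the (agent, env) pair (Definition 2.5)."""
    rng = random.Random(seed)
    history = ()
    for _ in range(steps):
        a = sample(agent(history), rng)
        o = sample(env(history, a), rng)
        history += ((a, o),)
    return history

def average_reward(history, reward):
    """Finite-horizon empirical analogue of the statistic in Eq. (2.4)."""
    rewards = [reward(a, o) for a, o in history]
    return sum(rewards) / len(rewards)

h = rollout(uniform_agent, toy_env, steps=1000)
print(average_reward(h, reward=lambda a, o: float(o)))  # close to 0.5 here
```

Under the uniform agent, observation 1 occurs with probability $0.5 \cdot 0.9 + 0.5 \cdot 0.1 = 0.5$, so the empirical average reward concentrates near 0.5.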
We make these insights precise by introducing a pair of logical operators on agents: (1) a set of agents *generates* (Definition 3.4) another set of agents, and (2) a given agent *reaches* (Definition 3.5) an agent set. Together, these operators enable us to define learning as the implicit search process captured by the first insight, and continual learning as the process of continuing this search indefinitely.

# 3.1 Operator 1: An Agent Basis Generates an Agent Set.

The first operator is based on two complementary intuitions.

From the first perspective, an agent can be understood as searching over a space of representable action-selection strategies. For instance, in an MDP, agents can be interpreted as searching over the space of policies (that is, the space of stochastic mappings from the MDP's state to action). It turns out this insight can be extended to any agent and any environment.

The second complementary intuition notes that, as agent designers, we often first identify the space of representable action-selection strategies of interest. Then, it is natural to design agents that search through this space. For instance, in designing an agent to interact with an MDP, we might be interested in policies representable by a neural network of a certain size and architecture. When we design agents, we then consider all agents (choices of loss function, optimizer, memory, and so on) that search through the space of assignments of weights to this particular neural network using standard methods like gradient descent. We codify these intuitions in the following definitions.

Definition 3.1. An agent basis (or simply, a basis), $\Lambda_{\mathrm{B}}\subset \mathbb{A}$, is any non-empty subset of $\mathbb{A}$.

Notice that an agent basis is a choice of agent set, $\Lambda$. We explicitly call out a basis with distinct notation $(\Lambda_{\mathrm{B}})$ as it serves an important role in the discussion that follows.
For example, we next introduce learning rules as functions that switch between elements of an agent basis for each history.

Definition 3.2. A learning rule over an agent basis $\Lambda_{\mathrm{B}}$ is a function, $\sigma : \mathcal{H} \to \Lambda_{\mathrm{B}}$, that selects a base agent for each history.

We let $\bar{\Sigma}$ denote the set of all learning rules over $\Lambda_{\mathrm{B}}$, and let $\Sigma$ denote any non-empty subset of $\bar{\Sigma}$. A learning rule is a mechanism for switching between the available base agents following each new experience. Notice that learning rules are deterministic; while a simple extension captures the stochastic case, we will see by Theorem 3.1 that the above is sufficiently general in a certain sense. We use $\sigma(h)(h)$ to refer to the action distribution selected by the agent $\lambda = \sigma(h)$ at any history $h$.

Definition 3.3. Let $\Sigma$ be a set of learning rules over some basis $\Lambda_{\mathrm{B}}$, and let $e$ be an environment. We say that a set $\Lambda$ is $\Sigma$-generated by $\Lambda_{\mathrm{B}}$ in $e$, denoted $\Lambda_{\mathrm{B}} \vDash_{\Sigma} \Lambda$, if and only if

$$
\forall_{\lambda \in \Lambda} \exists_{\sigma \in \Sigma} \forall_{h \in \bar{\mathcal{H}}}\ \lambda (h) = \sigma (h) (h). \tag{3.1}
$$

Thus, any choice of $\Sigma$ together with a basis $\Lambda_{\mathrm{B}}$ induces a family of agent sets whose elements can be understood as switching between the basis according to the rules prescribed by $\Sigma$. We then say that a basis generates an agent set in an environment if there exists a set of learning rules that switches between the basis elements to produce the agent set.

Definition 3.4. We say a basis $\Lambda_{\mathrm{B}}$ generates $\Lambda$ in $e$, denoted $\Lambda_{\mathrm{B}} \vDash \Lambda$, if and only if

$$
\exists_{\Sigma \subseteq \bar{\Sigma}}\ \Lambda_{\mathrm{B}} \vDash_{\Sigma} \Lambda . \tag{3.2}
$$

Intuitively, an agent basis $\Lambda_{\mathrm{B}}$ generates another agent set $\Lambda$ just when the agents in $\Lambda$ can be understood as switching between the base agents. It is in this sense that we can understand agents as searching through a basis: an agent is just a particular sequence of history-conditioned switches over a basis. For instance, let us return to the example of a neural network: the agent basis might represent a specific multilayer perceptron, where each element of this basis is an assignment to the network's weights. The learning rules are different mechanisms that choose the next set of weights in

![](images/048b494c7c609065834f10e40b97024f09be88492471c3eae8c0b3879d499395.jpg)
(a) Generates $(\Lambda_{\mathrm{B}} \vDash \Lambda)$

![](images/ba99a96d8b7ad6f33b4a797e0121814e490a9754bfc7936e4c3bb70e85ff7f45.jpg)
(b) Sometimes Reaches $(\lambda_1 \rightsquigarrow \Lambda_{\mathrm{B}})$

Figure 1: A visual of the generates (left) and sometimes reaches (right) operators. (a) Generates: An agent basis, $\Lambda_{\mathrm{B}}$, comprised of three base agents depicted by the triangle, circle, and square, generates a set $\Lambda$ containing agents that can each be understood as switching between the base agents in the realizable histories of environment $e$. (b) Sometimes Reaches: On the right, we visualize $\lambda_{1} \in \Lambda$ generated by $\Lambda_{\mathrm{B}}$ (from the figure on the left) to illustrate the concept of sometimes reaches. That is, the agent's choice of action distribution at each history can be understood as switching between the three basis elements, and there is at least one history for which the agent stops switching; here, we show the agent settling on the choice of the blue triangle and never switching again.
Together, the agent basis and the learning rules generate the set of agents that search over choices of weights in reaction to experience. We present a cartoon visual of the generates operator in Figure 1(a). + +Now, using the generates operator, we revisit and formalize the central insight of this section: Every agent can be understood as implicitly searching over an agent basis. We take this implicit search process to be the behavioral signature of learning. + +Theorem 3.1. For any agent-environment pair $(\lambda, e)$ , there exists infinitely many choices of a basis, $\Lambda_{\mathrm{B}}$ , such that both (1) $\lambda \notin \Lambda_{\mathrm{B}}$ , and (2) $\Lambda_{\mathrm{B}} \notin \{\lambda\}$ . + +Due to space constraints, all proofs are deferred to Appendix B. + +We require that $\lambda \notin \Lambda_{\mathrm{B}}$ to ensure that the relevant bases are non-trivial generators of $\{\lambda\}$ . This theorem tells us that no matter the choice of agent or environment, we can view the agent as a series of history-conditioned switches between basis elements. In this sense, we can understand the agent as if it were carrying out a search over the elements of some $\Lambda_{\mathrm{B}}$ . We emphasize that there are infinitely many choices of such a basis to illustrate that there are many plausible interpretations of an agent's behavior—we return to this point throughout the paper. + +# 3.2 Operator 2: An Agent Reaches a Basis. + +Our second operator reflects properties of an agent's limiting behavior in relation to a basis. Given an agent and a basis that the agent searches through, what happens to the agent's search process in the limit: does the agent keep switching between elements of the basis, or does it eventually stop? For example, in an MDP, many agents of interest eventually stop their search on a choice of a fixed policy. 
We formally define this notion in terms of an agent reaching a basis according to two modalities: an agent (i) sometimes or (ii) never reaches a basis.

Definition 3.5. We say agent $\lambda \in \mathbb{A}$ sometimes reaches $\Lambda_{\mathrm{B}}$ in $e$, denoted $\lambda \rightsquigarrow \Lambda_{\mathrm{B}}$, if and only if

$$
\exists_{h \in \bar{\mathcal{H}}} \exists_{\lambda_{\mathrm{B}} \in \Lambda_{\mathrm{B}}} \forall_{h^{\prime} \in \bar{\mathcal{H}}_{h}}\ \lambda\left(h h^{\prime}\right) = \lambda_{\mathrm{B}}\left(h h^{\prime}\right). \tag{3.3}
$$

That is, for at least one realizable history, there is some base agent $(\lambda_{\mathrm{B}})$ that produces the same action distribution as $\lambda$ forever after. This indicates that the agent can be understood as if it has stopped its search over the basis. We present a visual of sometimes reaches in Figure 1(b). By contrast, we say an agent never reaches a basis just when it never becomes equivalent to a base agent.

Definition 3.6. We say agent $\lambda \in \mathbb{A}$ never reaches $\Lambda_{\mathrm{B}}$ in $e$, denoted $\lambda \not\rightsquigarrow \Lambda_{\mathrm{B}}$, if and only if $\neg (\lambda \rightsquigarrow \Lambda_{\mathrm{B}})$.

The reaches operators formalize the intuition that, since every agent can be interpreted as if it were searching over a basis, every agent will either (1) sometimes, or (2) never stop this search. Since (1) and (2) are simply negations of each other, we can now plainly state this fact as follows.

Remark 3.2. For any agent-environment pair $(\lambda, e)$ and any choice of basis $\Lambda_{\mathrm{B}}$ such that $\Lambda_{\mathrm{B}} \vDash \{\lambda\}$, exactly one of the following two properties must be satisfied:

$$
(1)\ \lambda \rightsquigarrow \Lambda_{\mathrm{B}}, \quad (2)\ \lambda \not\rightsquigarrow \Lambda_{\mathrm{B}}. \tag{3.4}
$$

Thus, by Theorem 3.1, every agent can be thought of as implicitly searching over an agent basis, and by Remark 3.2, every agent will either (1) sometimes, or (2) never stop this search. We take this implicit search process to be the behavioral signature of learning, and will later exploit this perspective to define a continual learning agent as one that continues its search forever (Definition 4.1). Our analysis in Section 4.4 further elucidates basic properties of both the generates and reaches operators, and Figure 1 presents a cartoon visualizing the intuition behind each of the operators. We summarize all definitions and notation in a table in Appendix A.

Considerations on the Operators. Naturally, we can design many variations of both $\vDash$ and $\rightsquigarrow$. For instance, we might be interested in a variant of reaches in which an agent becomes $\epsilon$-close to any of the basis elements, rather than require exact behavioral equivalence. Concretely, we highlight four axes of variation that can modify the definitions of the operators. We state these varieties for reaches, but similar modifications can be made to the generates operator, too:

1. Realizability. An agent reaches a basis (i) in all histories (and thus, all environments), or (ii) in the histories realizable by a given $(\lambda, e)$ pair.
2. History Length. An agent reaches a basis over (i) infinite or (ii) finite length histories.
3. Probability. An agent reaches a basis (i) with probability one, or (ii) with high probability.
4. Equality or Approximation. An agent reaches a basis by becoming (i) equivalent to a base agent, or (ii) sufficiently similar to a base agent.

Rather than define all of these variations precisely for both operators (though we do explore some in Appendix C), we acknowledge their existence, and simply note that the formal definitions of these variants follow naturally.
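The two operators can be illustrated with a finite-horizon sketch (all names hypothetical; the true definitions quantify over all realizable histories, which no finite check can certify, so the code below only probes agreement on a sample of long histories).

```python
# Two base agents: each maps any history to a fixed action distribution.
base_a = lambda h: {0: 1.0}  # always plays action 0
base_b = lambda h: {1: 1.0}  # always plays action 1

basis = [base_a, base_b]

# A learning rule (Definition 3.2) selects a base agent for each history.
# This hypothetical rule switches to base_b once the history is long enough.
def sigma(history):
    return base_b if len(history) >= 3 else base_a

# The generated agent satisfies lambda(h) = sigma(h)(h) by construction,
# i.e., {sigma} witnesses that the basis generates it (Definition 3.3).
def generated_agent(history):
    return sigma(history)(history)

# Finite-horizon probe of "sometimes reaches" (Definition 3.5): after the
# prefix of length 3, the agent agrees with base_b on every sampled history.
def agrees_after(agent, base, histories):
    return all(agent(h) == base(h) for h in histories)

long_histories = [tuple((1, 0) for _ in range(t)) for t in range(3, 10)]
print(agrees_after(generated_agent, base_b, long_histories))  # True
```

An agent that kept switching between `base_a` and `base_b` on arbitrarily long histories would instead be a candidate for never reaching this basis.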
# 4 Continual Reinforcement Learning

We now provide a precise definition of CRL. The definition formalizes the intuition that CRL captures settings in which the best agents do not converge; they continue their implicit search over an agent basis indefinitely.

# 4.1 Definition: Continual RL

We first define continual learning agents using the generates and never reaches operators as follows.

Definition 4.1. An agent $\lambda$ is a continual learning agent in $e$ relative to $\Lambda_{\mathrm{B}}$ if and only if the basis generates the agent $(\Lambda_{\mathrm{B}} \vDash \{\lambda\})$ and the agent never reaches the basis $(\lambda \not\rightsquigarrow \Lambda_{\mathrm{B}})$.

This means that an agent is a continual learner in an environment relative to $\Lambda_{\mathrm{B}}$ if the agent's search over $\Lambda_{\mathrm{B}}$ continues forever. Notice that an agent might be considered a continual learner with respect to one basis but not another; we explore this fact more in Section 4.4.

Then, using these tools, we formally define CRL as follows.

Definition 4.2. Consider an RL problem $(e, v, \Lambda)$. Let $\Lambda_{\mathrm{B}} \subset \Lambda$ be a basis such that $\Lambda_{\mathrm{B}} \vDash \Lambda$, and let $\Lambda^{*} = \arg \max_{\lambda \in \Lambda} v(\lambda, e)$. We say $(e, v, \Lambda, \Lambda_{\mathrm{B}})$ defines a CRL problem if $\forall_{\lambda^{*} \in \Lambda^{*}}\ \lambda^{*} \not\rightsquigarrow \Lambda_{\mathrm{B}}$.

Said differently, an RL problem is an instance of CRL just when all of the best agents are continual learning agents relative to basis $\Lambda_{\mathrm{B}}$. This problem encourages a significant departure from how we tend to think about designing agents: given a basis, rather than try to build agents that can solve problems by identifying a fixed high-quality element of the basis, we would like to design agents that continue to update their behavior indefinitely in light of their experience.
# 4.2 CRL Examples

We next detail two examples of CRL to provide further intuition.

Q-Learning in Switching MDPs. First, we consider a simple instance of CRL based on the standard multi-task view of MDPs. In this setting, the agent repeatedly samples an MDP to interact with from a fixed but unknown distribution [64, 9, 2, 25, 20]. In particular, we make use of the switching MDP environment from Luketina et al. [33]. The environment $e$ consists of a collection of $n$ underlying MDPs, $m_1, \ldots, m_n$, with a shared action space and environment-state space. We refer to this environment-state space using observations, $o \in \mathcal{O}$. At each step, the environment has a fixed positive probability of 0.001 of switching the underlying MDP, which yields different transition and reward functions until the next switch. The agent only observes each environment state $o \in \mathcal{O}$, which does not reveal the identity of the active MDP. The rewards of each underlying MDP are structured so that each MDP has a unique optimal policy. We assume $v$ is defined as the average reward, and the basis is the set of $\epsilon$-greedy policies over all $Q(o, a)$ functions, for fixed $\epsilon = 0.15$. Consequently, the set of agents we generate, $\Lambda_{\mathrm{B}} \vDash \Lambda$, consists of all agents that switch between these $\epsilon$-greedy policies.

Now that the components $(e,v,\Lambda ,\Lambda_{\mathrm{B}})$ have been defined, we can see that this is indeed an instance of CRL: none of the base agents can be optimal, as the moment that the environment switches its underlying MDP, we know that any previously optimal policy will no longer be optimal in the next MDP following the switch. Therefore, any agent that converges (in that it reaches the basis $\Lambda_{\mathrm{B}}$) cannot be optimal either, for the same reason. We conclude that all optimal agents in $\Lambda$ are continual learning agents relative to the basis $\Lambda_{\mathrm{B}}$.
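The contrast between a continually adapting learner and a convergent one can be imitated in a few lines. The sketch below is a hypothetical simplification, not the paper's experiment: it uses a two-armed switching bandit rather than the grid-world MDPs of Luketina et al., deterministic unit reward for the currently best arm, and illustrative constants (a larger switch probability so effects show up in a short run).

```python
import random

def run_q_learning(anneal, steps=20000, switch_p=0.005, eps=0.15, seed=0):
    """epsilon-greedy Q-learning on a hypothetical 2-armed switching bandit.

    With probability switch_p per step, the rewarding arm changes.
    anneal=False keeps a constant step size (continual learner);
    anneal=True decays the step size toward zero (convergent learner).
    Returns the average reward over the run.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]   # the Q-values index the epsilon-greedy basis element in use
    best = 0         # identity of the currently rewarding arm (hidden from agent)
    total = 0.0
    for t in range(1, steps + 1):
        if rng.random() < switch_p:
            best = 1 - best  # the environment switches its underlying task
        # epsilon-greedy action selection over Q
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = max((0, 1), key=lambda i: q[i])
        r = 1.0 if a == best else 0.0
        alpha = 1.0 / t if anneal else 0.1
        q[a] += alpha * (r - q[a])
        total += r
    return total / steps

print(run_q_learning(anneal=False))  # continual: keeps tracking the switches
print(run_q_learning(anneal=True))   # convergent: eventually stops adapting
```

Because the annealed learner effectively freezes its Q-values, it cannot track the switches and earns a lower average reward, mirroring the argument above that any agent which reaches the basis cannot be optimal here.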
We present a visual of this domain in Figure 2(a), and conduct a simple experiment contrasting the performance of $\epsilon$-greedy continual Q-learning (blue) that uses a constant step-size parameter of $\alpha = 0.1$ with a convergent Q-learning (green) that anneals its step-size parameter over time to zero. Both use $\epsilon = 0.15$, and we set the number of underlying MDPs to $n = 10$. We present the average reward with $95\%$ confidence intervals, averaged over 250 runs, in Figure 2(b). Since both variants of Q-learning can be viewed as searching over $\Lambda_{\mathrm{B}}$, the annealing variant that stops its search will under-perform compared to the continual approach. These results support the unsurprising conclusion that it is better to continue searching over the basis rather than converge in this setting.

Continual Supervised Learning. Second, we illustrate the power of our CRL definition to capture continual supervised learning. We adopt the problem setting studied by Mai et al. [35]. Let $\mathcal{X}$ denote a set of objects to be labeled, each belonging to one of $k\in \mathbb{N}$ classes. The observation space $\mathcal{O}$ consists of pairs, $o_{t} = (x_{t},y_{t})$, where $x_{t}\in \mathcal{X}$ and $y_{t}\in \mathcal{Y}$. Here, each $x_{t}$ is an input object to be classified and $y_{t}$ is the label for the previous input $x_{t - 1}$. Thus, $\mathcal{O} = \mathcal{X}\times \mathcal{Y}$. We assume by convention that the initial

![](images/057e55746e5040d4c6b3dc15c088f533742be8dc66ec65540f763ef15222deac.jpg)
(a) Switching MDP Visual

![](images/767dbd24c50dc5bed69bbed0c169929974246f9dd9670d88fc332b4180b48ad9.jpg)
(b) Switching MDP Results

Figure 2: A visual of a grid world instance of the switching MDPs problem (left) [33], and results from an experiment contrasting continual learning and convergent Q-learning (right). The environment pictured contains $n$ distinct MDPs.
Each underlying MDP shares the same state space and action space, but varies in transition and reward functions, as indicated by the changing walls and rewarding locations (stars, circles, and fire). The results pictured on the right contrast continual Q-learning (with $\alpha = 0.1$) with traditional Q-learning that anneals its step-size parameter to zero over time.

label $y_0$ is irrelevant and can be ignored. The agent will observe a sequence of object-label pairs, $(x_0, y_0), (x_1, y_1), \ldots$, and the action space is a choice of label, $\mathcal{A} = \{a_1, \dots, a_k\}$ where $|\mathcal{Y}| = k$. The reward for each history $h_t$ is $+1$ if the agent's most recently predicted label is correct for the previous input, and $-1$ otherwise:

$$
r\left(a_{t - 1} o_{t}\right) = r\left(a_{t - 1} y_{t}\right) = \begin{cases} +1 & a_{t - 1} = y_{t}, \\ -1 & \text{otherwise}. \end{cases} \tag{4.1}
$$

Concretely, the continual learning setting studied by Mai et al. [35] supposes the learner will receive samples from a sequence of probability distributions, $d_0, d_1, \ldots$, each supported over $\mathcal{X} \times \mathcal{Y}$. The $(x, y) \in \mathcal{X} \times \mathcal{Y}$ pairs experienced by the learner are determined by the sequence of distributions. We capture this distributional shift in an environment $e$ that shifts its probability distribution over $\mathcal{O}$ depending on the history to match the sequence, $d_0, d_1, \ldots$.

Now, is this an instance of CRL? To answer this question precisely, we need to select a $(\Lambda, \Lambda_{\mathrm{B}})$ pair. We adopt the basis $\Lambda_{\mathrm{B}} = \{\lambda_{\mathrm{B}} : x \mapsto y_i, \forall_{y_i \in \mathcal{Y}}\}$ that contains, for each label $y_i \in \mathcal{Y}$, the classifier mapping every object to $y_i$. Under the set of all learning rules, this basis generates the set of all agents that search over classifiers.
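Eq. (4.1) and the constant-classifier basis translate directly into code; the sketch below is a hypothetical rendering (the string labels are purely illustrative).

```python
# Reward for the continual supervised learning example (Eq. 4.1):
# +1 when the agent's previous label prediction matches the label
# revealed in the current observation, -1 otherwise.
def reward(prev_action, observation):
    x_t, y_t = observation  # o_t = (x_t, y_t); y_t labels the previous x
    return 1 if prev_action == y_t else -1

# A base classifier maps every object to one fixed label; the basis of
# the example contains one such classifier per label in the label set.
def constant_classifier(label):
    return lambda x: label

clf = constant_classifier("cat")
print(reward(clf("img_7"), ("img_8", "cat")))  # 1: prediction was correct
print(reward(clf("img_8"), ("img_9", "dog")))  # -1: prediction was wrong
```

Under distributional shift, no single `constant_classifier` stays correct, so an optimal agent must keep switching between them, which is exactly the never-reaches condition.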
Now, our definition says the above is an instance of CRL just when every optimal agent never stops switching between classifiers, rather than stopping its search on a fixed classifier. Consequently, if there is an optimal classifier in $\Lambda_{\mathrm{B}}$, then this will not be an instance of CRL. If, however, the environment imposes enough distributional shift (changing labels, adding mass to new elements, and so on), then the only optimal agents will be those that always switch among the base classifiers, in which case the setting is an instance of CRL.

# 4.3 Relationship to Other Views on Continual Learning

The spirit of continual learning has been an important part of machine learning research for decades, often appearing under the name of "lifelong learning" [63, 62, 53, 55, 51, 3, 4] or "never-ending learning" [39, 43], with close ties to transfer learning [61, 60], meta-learning [52, 17], as well as online learning and non-stationarity [5, 40, 13, 6, 31]. In a similar vein, the phrase "continuing tasks" is used in the classic RL textbook [59] to refer explicitly to cases when the interaction between agent and environment is not subdivided into episodes. Continual reinforcement learning was first posed in the thesis by Ring [46]. In later work [47, 48], Ring proposes a formal definition of the continual reinforcement learning problem. The emphasis of Ring's proposal is on the generality of the environment: rather than assume that agents of interest will interact with an MDP, Ring suggests studying the unconstrained case in which an agent must maximize performance while only receiving a stream of observations as input. The environment or reward function, in this sense, may change over time or may be arbitrarily complex. This proposal is similar in spirit to general RL, studied by Hutter [24], Lattimore [28], Leike [29], and others [12, 37, 36], in which an agent interacts with an unconstrained environment.
General RL inspires many aspects of our conception of CRL; for instance, our emphasis on history-dependence rather than environment-state comes directly from general RL. More recently, Khetarpal et al. [25] provide a comprehensive survey of the continual reinforcement learning literature. We encourage readers to explore this survey for a detailed history of the subject. In the survey, Khetarpal et al. propose a definition of the CRL problem that emphasizes the non-stationarity of the underlying process. In particular, in Khetarpal et al.'s definition, an agent interacts with a POMDP in which each of the individual components of the POMDP—such as the state space or reward function—are allowed to vary with time. We note that, as the environment model we study (Definition 2.3) is a function of history, it can capture time-indexed non-stationarity. In this sense, the same generality proposed by Khetarpal et al. and Ring is embraced and retained by our definition, but we add further precision to what is meant by continual learning by centering around a mathematical definition of continual learning agents (Definition 4.1). + +# 4.4 Properties of CRL + +Our formalism is intended to be a jumping off point for new lines of thinking around agents and continual learning. We defer much of our analysis and proofs to the appendix, and here focus on highlighting necessary properties of CRL. + +Theorem 4.1. Every instance of $CRL(e, v, \Lambda, \Lambda_{\mathrm{B}})$ necessarily satisfies the following properties: + +1. If $\Lambda \neq \Lambda_{\mathrm{B}} \cup \Lambda^{*}$ , then there exists a $\Lambda_{\mathrm{B}}'$ such that (1) $\Lambda_{\mathrm{B}}' \notin \Lambda$ , and (2) $(e, v, \Lambda, \Lambda_{\mathrm{B}}')$ is not an instance of CRL. +2. No element of $\Lambda_{\mathrm{B}}$ is optimal: $\Lambda_{\mathrm{B}}\cap \Lambda^{*} = \emptyset$ +3. 
If $|\Lambda|$ is finite, there exists an agent set, $\Lambda^{\circ}$ , such that $|\Lambda^{\circ}| < |\Lambda|$ and $\Lambda^{\circ} \vDash \Lambda$ .
4. If $|\Lambda|$ is infinite, there exists an agent set, $\Lambda^{\circ}$ , such that $\Lambda^{\circ} \subset \Lambda$ and $\Lambda^{\circ} \vDash \Lambda$ .

This theorem tells us several things. The first point of the theorem has peculiar implications. We see that as we change a single element (the basis $\Lambda_{\mathrm{B}}$ ) of the tuple $(e, v, \Lambda, \Lambda_{\mathrm{B}})$ , the resulting problem can change from CRL to not CRL. By similar reasoning, an agent that is said to be a continual learning agent according to Definition 4.1 may not be a continual learner with respect to some other basis. We discuss this point further in the next paragraph. Point (2.) notes that no optimal strategy exists within the basis—instead, to be optimal, an agent must switch between basis elements indefinitely. As discussed previously, this fact encourages a departure in how we think about the RL problem: rather than focus on agents that can identify a single, fixed solution to a problem, CRL instead emphasizes designing agents that are effective at updating their behavior indefinitely. Points (3.) and (4.) show that $\Lambda$ cannot be minimal. That is, there are necessarily some redundancies in the design space of the agents in CRL—this is expected, since we are always focusing on agents that search over the same agent basis. Lastly, it is worth calling attention to the fact that in the definition of CRL, we assume $\Lambda_{\mathrm{B}} \subset \Lambda$ —this suggests that in CRL, the agent basis is necessarily limited in some way. Consequently, the design space of agents $\Lambda$ is also limited in terms of what agents it can represent at any particular point in time. This limitation may come about due to a computational or memory budget, or by making use of a constrained set of learning rules.
This suggests a deep connection between bounded agents and the nature of continual learning, as explored further by Kumar et al. [27]. While these four points give an initial character of the CRL problem, we note that further exploration of the properties of CRL is an important direction for future work. +

Canonical Agent Bases. It is worth pausing and reflecting on the concept of an agent basis. As presented, the basis is an arbitrary choice of a set of agents—consequently, point (1.) of Theorem 4.1 may stand out as peculiar. From this perspective, it is reasonable to ask if the fact that our definition of CRL is basis-dependent renders it vacuous. We argue that this is not the case for two reasons. First, we conjecture that any definition of continual learning that involves concepts like "learning" and "convergence" will have to sit on top of some reference object whose choice is arbitrary. Second, and more important, even though the mathematical construction allows for an easy change of basis, in practice the choice of basis is constrained by considerations like the availability of computational resources. It is often the case that the domain or problem of interest provides obvious choices of bases, or imposes constraints that force us as designers to restrict attention to a space of plausible bases or learning rules. For example, as discussed earlier, a choice of neural network architecture might comprise a basis—any assignment of weights is an element of the basis, and the learning rule $\sigma$ is a mechanism for updating the active element of the basis (the parameters) in light of experience. In this case, the number of parameters of the network is constrained by what we can actually build, and the learning rule needs to be suitably efficient and well-behaved. We might again think of the learning rule $\sigma$ as gradient descent, rather than a rule that can search through the basis in an unconstrained way. In this sense, the basis is not arbitrary.
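The gradient-descent reading of a learning rule can be made concrete with a small sketch. This is our own toy illustration, not part of the paper's formalism: all names below are hypothetical, the "basis" is the set of weights $w$ of a fixed one-parameter linear model, and $\sigma$ maps the experience so far to the next active basis element via one gradient step.

```python
# Hypothetical toy (our names, not the paper's): each basis element is the
# fixed agent "predict w * x" for some weight w. The learning rule sigma
# maps the history of (x, y) pairs to the next active basis element.

def base_agent(w):
    """A basis element: the fixed behavior induced by parameters w."""
    return lambda x: w * x

def sigma(history, w, lr=0.1):
    """Learning rule: choose the next basis element from experience."""
    if not history:
        return w
    x, y = history[-1]              # react to the latest experience
    grad = 2 * (w * x - y) * x      # d/dw of the squared error (w*x - y)^2
    return w - lr * grad            # the newly active basis element

# The generated agent always *is* some basis element, but which element is
# active keeps changing -- the "switching" described in the text.
history, w = [], 0.0
for x, y in [(1.0, 2.0), (2.0, 4.0), (1.5, 3.0)]:
    _ = base_agent(w)(x)            # act with the currently active element
    history.append((x, y))
    w = sigma(history, w)           # switch to the next basis element
```

After the three updates the active element has moved from $w = 0$ toward the data-consistent weight $w = 2$; the point of the sketch is only that "learning" here is a constrained walk through the basis, not an unconstrained search.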
We as designers choose a class of functions to act as the relevant representations of behavior, often limited by resource constraints on memory or compute. Then, we use specific learning rules that have been carefully designed to react to experience in a desirable way—for instance, stochastic gradient descent updates the current choice of basis in the direction that would most improve performance. For these reasons, the choice of basis is not arbitrary, but instead reflects the ingredients involved in the design of agents as well as the constraints necessarily imposed by the environment. +

# 4.5 Properties of Generates and Reaches

Lastly, we summarize some of the basic properties of generates and reaches. Further analysis of generates, reaches, and their variations is provided in Appendix C.

Theorem 4.2. The following properties hold of the generates operator:

1. Generates is transitive: For any triple $(\Lambda^1, \Lambda^2, \Lambda^3)$ and $e \in \mathcal{E}$ , if $\Lambda^1 \vDash_e \Lambda^2$ and $\Lambda^2 \vDash_e \Lambda^3$ , then $\Lambda^1 \vDash_e \Lambda^3$ .
2. Generates is not commutative: there exists a pair $(\Lambda^1, \Lambda^2)$ and $e \in \mathcal{E}$ such that $\Lambda^1 \vDash_e \Lambda^2$ , but $\neg (\Lambda^2 \vDash_e \Lambda^1)$ .
3. For all $\Lambda$ and pair of agent bases $(\Lambda_{\mathrm{B}}^{1},\Lambda_{\mathrm{B}}^{2})$ such that $\Lambda_{\mathrm{B}}^{1}\subseteq \Lambda_{\mathrm{B}}^{2}$ , if $\Lambda_{\mathrm{B}}^{1}\vDash \Lambda$ , then $\Lambda_{\mathrm{B}}^{2}\vDash \Lambda$ .
4. For all $\Lambda$ and $e\in \mathcal{E}$ , $\bar{\Lambda} \vDash_e \Lambda$ , where $\bar{\Lambda}$ denotes the set of all agents.
5. The decision problem, Given $(e, \Lambda_{\mathrm{B}}, \Lambda)$ , output True iff $\Lambda_{\mathrm{B}} \vDash_e \Lambda$ , is undecidable.

The fact that generates is transitive suggests that the basic tools of an agent set—paired with a set of learning rules—might be likened to an algebraic structure.
We can draw a symmetry between an agent basis and the basis of a vector space: A vector space is comprised of all linear combinations of the basis, whereas $\Lambda$ is comprised of all valid switches (according to the learning rules) between the base agents. However, the fact that generates is not commutative (by point 2.) raises a natural question: are there choices of learning rules under which generates is commutative? We suggest that a useful direction for future work can further explore an algebraic perspective on agents.

We find many similar properties hold of reaches.

Theorem 4.3. The following properties hold of the reaches operator:

1. Sometimes reaches ($\rightsquigarrow$) and always reaches ($\overset{\square}{\rightsquigarrow}$) are not transitive.
2. "Sometimes reaches" is not commutative: there exists a pair $(\Lambda^1, \Lambda^2)$ and $e \in \mathcal{E}$ such that $\forall_{\lambda^1 \in \Lambda^1} \lambda^1 \rightsquigarrow \Lambda^2$ , but $\exists_{\lambda^2 \in \Lambda^2} \lambda^2 \not\rightsquigarrow \Lambda^1$ .
3. For all pairs $(\Lambda, e)$ , if $\lambda \in \Lambda$ , then $\lambda \rightsquigarrow \Lambda$ .
4. Every agent satisfies $\lambda \overset{\square}{\rightsquigarrow} \bar{\Lambda}$ in every environment, where $\bar{\Lambda}$ denotes the set of all agents.
5. The decision problem, Given $(e,\lambda ,\Lambda)$ , output True iff $\lambda \rightsquigarrow \Lambda$ , is undecidable.

Many of these properties resemble those in Theorem 4.2. For instance, point (5.) shows that deciding whether a given agent sometimes reaches a basis in an environment is undecidable. We anticipate that the majority of decision problems related to determining properties of arbitrary agent sets interacting with unconstrained environments will be undecidable, though it is still worth making these arguments carefully. Moreover, there may be interesting special cases in which these decision problems are decidable (and perhaps, efficiently so). We suggest that identifying these special cases and fleshing out their corresponding efficient algorithms is an interesting direction for future work.
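On a finite truncation of the history space, both operators can be checked mechanically. The sketch below is our own toy illustration (the paper's definitions quantify over infinite histories): agents are dictionaries from history strings to actions, `generates` tests the pointwise condition that some basis element matches the target agent at every history, and `sometimes_reaches` tests agreement with one fixed basis element from some history onward.

```python
# Toy finite approximation (ours, for illustration only) of the generates
# and reaches operators over a truncated history set.
HISTORIES = ["", "a", "b", "aa", "ab", "ba", "bb"]

def generates(basis, agent):
    """Basis generates agent iff at every history some basis element takes
    the same action (a learning rule can then realize the switching)."""
    return all(any(b[h] == agent[h] for b in basis) for h in HISTORIES)

def sometimes_reaches(agent, basis):
    """Agent sometimes reaches basis iff from some history onward it agrees
    with one fixed basis element. (A finite truncation can only suggest
    this; the formal definition quantifies over infinite suffixes.)"""
    for h in HISTORIES:
        suffixes = [g for g in HISTORIES if g.startswith(h)]
        if any(all(agent[g] == b[g] for g in suffixes) for b in basis):
            return True
    return False

always_a = {h: "a" for h in HISTORIES}
always_b = {h: "b" for h in HISTORIES}
# An agent that keeps switching: it acts on the parity of history length.
switcher = {h: "a" if len(h) % 2 == 0 else "b" for h in HISTORIES}

assert generates([always_a, always_b], switcher)   # the basis generates it
assert not generates([always_a], switcher)         # one element is not enough
assert sometimes_reaches(always_a, [always_a, always_b])  # cf. Thm 4.3, pt 3
```

The last assertion mirrors point (3.) of Theorem 4.3: an agent that belongs to the basis trivially reaches it. A finite truncation cannot witness "never reaches", which is a limit property of infinite behavior.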
+ +# 5 Discussion + +In this paper, we carefully develop a simple mathematical definition of the continual RL problem. We take this problem to be of central importance to AI as a field, and hope that these tools and perspectives can serve as an opportunity to think about CRL and its related artifacts more carefully. Our proposal is framed around two new insights about agents: (i) every agent can be understood as though it were searching over an agent basis (Theorem 3.1), and (ii) every agent, in the limit, will either sometimes or never stop this search (Remark 3.2). These two insights are formalized through the generates and reaches operators, which provide a rich toolkit for understanding agents in a new way—for example, we find straightforward definitions of a continual learning agent (Definition 4.1) and learning rules (Definition 3.2). We anticipate that further study of these operators and different families of learning rules can directly inform the design of new learning algorithms; for instance, we might characterize the family of continual learning rules that are guaranteed to yield continual learning agents, and use this to guide the design of principled continual learning agents (in the spirit of continual backprop by Dohare et al. [14]). In future work, we intend to further explore connections between our formalism of continual learning and some of the phenomena at the heart of recent empirical continual learning studies, such as plasticity loss [34, 1, 15], in-context learning [8], and catastrophic forgetting [38, 18, 21, 26]. More generally, we hope that our definitions, analysis, and perspectives can help the community to think about continual reinforcement learning in a new light. + +# Acknowledgements + +The authors are grateful to Michael Bowling, Clare Lyle, Razvan Pascanu, and Georgios Piliouras for comments on a draft of the paper, as well as the anonymous NeurIPS reviewers that provided valuable feedback on the paper. 
The authors would further like to thank all of the 2023 Barbados RL Workshop participants and Elliot Catt, Will Dabney, Sebastian Flennerhag, András György, Steven Hansen, Anna Harutyunyan, Mark Ho, Joe Marino, Joseph Modayil, Rémi Munos, Evgenii Nikishin, Brendan O'Donoghue, Matt Overlan, Mark Rowland, Tom Schaul, Yannick Shroecker, Rich Sutton, Yunhao Tang, Shantanu Thakoor, and Zheng Wen for inspirational conversations. + +# References + +[1] Zaheer Abbas, Rosie Zhao, Joseph Modayil, Adam White, and Marlos C Machado. Loss of plasticity in continual deep reinforcement learning. arXiv preprint arXiv:2303.07507, 2023. +[2] David Abel, Yuu Jinnai, Yue Guo, George Konidaris, and Michael L. Littman. Policy and value transfer in lifelong reinforcement learning. In Proceedings of the International Conference on Machine Learning, 2018. +[3] Haitham Bou Ammar, Rasul Tutunov, and Eric Eaton. Safe policy search for lifelong reinforcement learning with sublinear regret. In Proceedings of the International Conference on Machine Learning, 2015. +[4] Megan M Baker, Alexander New, Mario Aguilar-Simon, Ziad Al-Halah, Sébastien MR Arnold, Ese Ben-Iwhiwhu, Andrew P Brna, Ethan Brooks, Ryan C Brown, Zachary Daniels, et al. A domain-agnostic approach for characterization of lifelong learning systems. *Neural Networks*, 160:274–296, 2023. +[5] Peter L Bartlett. Learning with a slowly changing distribution. In Proceedings of the Annual Workshop on Computational Learning Theory, 1992. +[6] Omar Besbes, Yonatan Gur, and Assaf Zeevi. Stochastic multi-armed-bandit problem with non-stationary rewards. Advances in Neural Information Processing Systems, 2014. +[7] Michael Bowling, John D. Martin, David Abel, and Will Dabney. Settling the reward hypothesis. In Proceedings of the International Conference on Machine Learning, 2023. +[8] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 
Language models are few-shot learners. Advances in Neural Information Processing Systems, 2020. +[9] Emma Brunskill and Lihong Li. PAC-inspired option discovery in lifelong reinforcement learning. In Proceedings of the International Conference on Machine Learning, 2014. +[10] Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka, and Tom Mitchell. Toward an architecture for never-ending language learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2010. +[11] Anthony R. Cassandra, Leslie Pack Kaelbling, and Michael L. Littman. Acting optimally in partially observable stochastic domains. In Proceedings of the AAAI Conference on Artificial Intelligence, 1994. +[12] Michael K Cohen, Elliot Catt, and Marcus Hutter. A strongly asymptotically optimal agent in general environments. arXiv preprint arXiv:1903.01021, 2019. +[13] Travis Dick, András György, and Csaba Szepesvari. Online learning in Markov decision processes with changing cost sequences. In Proceedings of the International Conference on Machine Learning, 2014. +[14] Shibhansh Dohare, Richard S Sutton, and A Rupam Mahmood. Continual backprop: Stochastic gradient descent with persistent randomness. arXiv preprint arXiv:2108.06325, 2021. + +[15] Shibhansh Dohare, Juan Hernandez-Garcia, Parash Rahman, Richard Sutton, and A Rupam Mahmood. Loss of plasticity in deep continual learning. arXiv preprint arXiv:2306.13812, 2023. +[16] Shi Dong, Benjamin Van Roy, and Zhengyuan Zhou. Simple agent, complex environment: Efficient reinforcement learning with agent states. Journal of Machine Learning Research, 23 (255):1-54, 2022. +[17] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning, 2017. +[18] Robert M French. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4):128-135, 1999. +[19] Milton Friedman. 
Essays in positive economics. University of Chicago press, 1953. +[20] Haotian Fu, Shangqun Yu, Michael Littman, and George Konidaris. Model-based lifelong reinforcement learning with Bayesian exploration. Advances in Neural Information Processing Systems, 2022. +[21] Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013. +[22] Raia Hadsell, Dushyant Rao, Andrei A Rusu, and Razvan Pascanu. Embracing change: Continual learning in deep neural networks. Trends in cognitive sciences, 24(12):1028-1040, 2020. +[23] Marcus Hutter. A theory of universal artificial intelligence based on algorithmic complexity. arXiv preprint cs/0004001, 2000. +[24] Marcus Hutter. Universal artificial intelligence: Sequential decisions based on algorithmic probability. Springer Science & Business Media, 2004. +[25] Khimya Khetarpal, Matthew Riemer, Irina Rish, and Doina Precup. Towards continual reinforcement learning: A review and perspectives. Journal of Artificial Intelligence Research, 75: 1401-1476, 2022. +[26] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521-3526, 2017. +[27] Saurabh Kumar, Henrik Marklund, Ashish Rao, Yifan Zhu, Hong Jun Jeon, Yueyang Liu, and Benjamin Van Roy. Continual learning as computationally constrained reinforcement learning. arXiv preprint arXiv:2307.04345, 2023. +[28] Tor Lattimore. Theory of general reinforcement learning. PhD thesis, The Australian National University, 2014. +[29] Jan Leike. Nonparametric general reinforcement learning. PhD thesis, The Australian National University, 2016. 
+[30] Timothee Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat, and Natalia Diaz-Rodriguez. Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges. Information fusion, 58:52-68, 2020. +[31] Yueyang Liu, Benjamin Van Roy, and Kuang Xu. A definition of non-stationary bandits. arXiv preprint arXiv:2302.12202, 2023. +[32] Xiuyuan Lu, Benjamin Van Roy, Vikranth Dwaracherla, Morteza Ibrahimi, Ian Osband, and Zheng Wen. Reinforcement learning, bit by bit. Foundations and Trends in Machine Learning, 16(6):733-865, 2023. ISSN 1935-8237. + +[33] Jelena Luketina, Sebastian Flennerhag, Yannick Schroecker, David Abel, Tom Zahavy, and Satinder Singh. Meta-gradients in non-stationary environments. In Proceedings of the Conference on Lifelong Learning Agents, 2022. +[34] Clare Lyle, Zeyu Zheng, Evgenii Nikishin, Bernardo Avila Pires, Razvan Pascanu, and Will Dabney. Understanding plasticity in neural networks. In Proceedings of the International Conference on Machine Learning, 2023. +[35] Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, and Scott Sanner. Online continual learning in image classification: An empirical survey. Neurocomputing, 469:28-51, 2022. +[36] Sultan J Majeed. Abstractions of general reinforcement Learning. PhD thesis, The Australian National University, 2021. +[37] Sultan Javed Majeed and Marcus Hutter. Performance guarantees for homomorphisms beyond Markov decision processes. In Proceedings of the AAAI Conference on Artificial Intelligence, 2019. +[38] Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pages 109–165. Elsevier, 1989. +[39] Tom Mitchell, William Cohen, Estevam Hruschka, Partha Talukdar, Bishan Yang, Justin Betteridge, Andrew Carlson, Bhavana Dalvi, Matt Gardner, Bryan Kisiel, et al. Never-ending learning. 
Communications of the ACM, 61(5):103-115, 2018. +[40] Claire Monteleoni and Tommi Jaakkola. Online learning of non-stationary sequences. Advances in Neural Information Processing Systems, 16, 2003. +[41] Cuong V Nguyen, Yingzhen Li, Thang D Bui, and Richard E Turner. Variational continual learning. 2018. +[42] German I Parisi, Ronald Kemker, Jose L Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. Neural networks, 113:54-71, 2019. +[43] Emmanouil Antonios Platanios, Abulhair Saparov, and Tom Mitchell. Jelly bean world: A testbed for never-ending learning. arXiv preprint arXiv:2002.06306, 2020. +[44] Martin L Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014. +[45] Matthew Riemer, Sharath Chandra Raparthy, Ignacio Cases, Gopeshh Subbaraj, Maximilian Puelma Touzel, and Irina Rish. Continual learning in environments with polynomial mixing times. Advances in Neural Information Processing Systems, 2022. +[46] Mark B Ring. Continual learning in reinforcement environments. PhD thesis, The University of Texas at Austin, 1994. +[47] Mark B Ring. Child: A first step towards continual learning. Machine Learning, 28(1):77-104, 1997. +[48] Mark B Ring. Toward a formal framework for continual learning. In NeurIPS Workshop on Inductive Transfer, 2005. +[49] David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. Advances in Neural Information Processing Systems, 2019. +[50] Stuart J Russell and Devika Subramanian. Provably bounded-optimal agents. Journal of Artificial Intelligence Research, 2:575-609, 1994. +[51] Paul Ruvolo and Eric Eaton. ELLA: An efficient lifelong learning algorithm. In Proceedings of the International Conference on Machine Learning, 2013. + +[52] Tom Schaul and Jürgen Schmidhuber. Metalearning. Scholarpedia, 5(6):4650, 2010. 
+[53] Jürgen Schmidhuber, Jieyu Zhao, and Nicol N Schraudolph. Reinforcement learning with self-modifying policies. In Learning to Learn, pages 293-309. Springer, 1998. +[54] Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. In Proceedings of the International Conference on Machine Learning, 2018. +[55] Daniel L Silver. Machine lifelong learning: Challenges and benefits for artificial general intelligence. In Proceedings of the Conference on Artificial General Intelligence, 2011. +[56] Richard S Sutton. Introduction: The challenge of reinforcement learning. In Reinforcement Learning, pages 1-3. Springer, 1992. +[57] Richard S Sutton. The reward hypothesis, 2004. URL http://incompleteideas.net/rlai.cs.ualberta.ca/RLAI/rewardhypothesis.html. +[58] Richard S Sutton. The quest for a common model of the intelligent decision maker. arXiv preprint arXiv:2202.13252, 2022. +[59] Richard S Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2018. +[60] Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633-1685, 2009. +[61] Sebastian Thrun. Is learning the n-th thing any easier than learning the first? Advances in Neural Information Processing Systems, 1995. +[62] Sebastian Thrun. Lifelong learning algorithms. Learning to Learn, 8:181-209, 1998. +[63] Sebastian Thrun and Tom M Mitchell. Lifelong robot learning. Robotics and autonomous systems, 15(1-2):25-46, 1995. +[64] Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli. Multi-task reinforcement learning: a hierarchical Bayesian approach. In Proceedings of the International Conference on Machine learning, 2007. + +# A Notation + +We first provide a table summarizing all relevant notation. + +
| Notation | Meaning | Definition |
| --- | --- | --- |
| $\mathcal{A}$ | Actions | |
| $\mathcal{O}$ | Observations | |
| $\mathcal{H}_t$ | Length-$t$ histories | $\mathcal{H}_t = (\mathcal{A} \times \mathcal{O})^t$ |
| $\mathcal{H}$ | All histories | $\mathcal{H} = \bigcup_{t=0}^{\infty} \mathcal{H}_t$ |
| $h$ | A history | $h \in \mathcal{H}$ |
| $hh'$ | History concatenation | |
| $h_t$ | Length-$t$ history | $h_t \in \mathcal{H}_t$ |
| $\bar{\mathcal{H}} = \mathcal{H}^{\lambda,e}$ | Realizable histories | $\bar{\mathcal{H}} = \bigcup_{t=0}^{\infty} \{h_t \in \mathcal{H}_t : \prod_{k=0}^{t-1} e(o_k \mid h_k, a_k)\, \lambda(a_k \mid h_k) > 0\}$ |
| $\bar{\mathcal{H}}_h = \mathcal{H}_h^{\lambda,e}$ | Realizable history suffixes | $\bar{\mathcal{H}}_h = \{h' \in \mathcal{H} : hh' \in \mathcal{H}^{\lambda,e}\}$ |
| $e$ | Environment | $e : \mathcal{H} \times \mathcal{A} \to \Delta(\mathcal{O})$ |
| $\mathcal{E}$ | Set of all environments | |
| $\lambda$ | Agent | $\lambda : \mathcal{H} \to \Delta(\mathcal{A})$ |
| $\bar{\Lambda}$ | Set of all agents | |
| $\Lambda$ | Set of agents | $\Lambda \subseteq \bar{\Lambda}$ |
| $\Lambda_{\mathrm{B}}$ | Agent basis | $\Lambda_{\mathrm{B}} \subset \bar{\Lambda}$ |
| $r$ | Reward function | $r : \mathcal{A} \times \mathcal{O} \to \mathbb{R}$ |
| $v$ | Performance | $v : \mathcal{H} \times \bar{\Lambda} \times \mathcal{E} \to [v_{\min}, v_{\max}]$ |
| $\sigma$ | Learning rule | $\sigma : \mathcal{H} \to \Lambda_{\mathrm{B}}$ |
| $\bar{\Sigma}$ | Set of all learning rules | |
| $\Sigma$ | Set of learning rules | $\Sigma \subseteq \bar{\Sigma}$ |
| $\Lambda_{\mathrm{B}} \vDash_{e,\Sigma} \Lambda$ | $\Sigma$-generates | $\forall_{\lambda \in \Lambda} \exists_{\sigma \in \Sigma} \forall_{h \in \bar{\mathcal{H}}}\ \lambda(h) = \sigma(h)(h)$ |
| $\Lambda_{\mathrm{B}} \vDash_e \Lambda$ | Generates | $\exists_{\Sigma \subseteq \bar{\Sigma}}\ \Lambda_{\mathrm{B}} \vDash_{e,\Sigma} \Lambda$ |
| $\Lambda_{\mathrm{B}} \vDash_{\Sigma} \Lambda$ | Universally $\Sigma$-generates | $\forall_{\lambda \in \Lambda} \exists_{\sigma \in \Sigma} \forall_{h \in \mathcal{H}}\ \lambda(h) = \sigma(h)(h)$ |
| $\Lambda_{\mathrm{B}} \vDash \Lambda$ | Universally generates | $\exists_{\Sigma \subseteq \bar{\Sigma}}\ \Lambda_{\mathrm{B}} \vDash_{\Sigma} \Lambda$ |
| $\lambda \rightsquigarrow \Lambda_{\mathrm{B}}$ | Sometimes reaches | $\exists_{h \in \bar{\mathcal{H}}} \exists_{\lambda_{\mathrm{B}} \in \Lambda_{\mathrm{B}}} \forall_{h' \in \bar{\mathcal{H}}_h}\ \lambda(hh') = \lambda_{\mathrm{B}}(hh')$ |
| $\lambda \not\rightsquigarrow \Lambda_{\mathrm{B}}$ | Never reaches | $\neg(\lambda \rightsquigarrow \Lambda_{\mathrm{B}})$ |
| $\lambda \overset{\square}{\rightsquigarrow} \Lambda_{\mathrm{B}}$ | Always reaches | $\forall_{h \in \bar{\mathcal{H}}} \exists_{t \in \mathbb{N}_0} \forall_{h' \in \bar{\mathcal{H}}_h^{t:\infty}} \exists_{\lambda_{\mathrm{B}} \in \Lambda_{\mathrm{B}}} \forall_{h'' \in \bar{\mathcal{H}}_{hh'}}\ \lambda(hh'h'') = \lambda_{\mathrm{B}}(hh'h'')$ |
Table 1: A summary of notation.

# B Proofs of Presented Results

We next provide proofs of each result from the paper. Our proofs make use of some extra notation: we use $\Rightarrow$ as logical implication, and we use $\mathcal{P}(\mathcal{X})$ to denote the power set of any set $\mathcal{X}$ . Lastly, we use $\forall_{\mathcal{A} \subseteq \mathcal{X}}$ and $\exists_{\mathcal{A} \subseteq \mathcal{X}}$ as shorthand for $\forall_{\mathcal{A} \in \mathcal{P}(\mathcal{X})}$ and $\exists_{\mathcal{A} \in \mathcal{P}(\mathcal{X})}$ respectively.

# B.1 Section 3 Proofs

Our first result is from Section 3 of the paper.

Theorem 3.1. For any pair $(\lambda, e)$ , there exist infinitely many choices of a basis, $\Lambda_{\mathrm{B}}$ , such that both (1) $\lambda \notin \Lambda_{\mathrm{B}}$ , and (2) $\Lambda_{\mathrm{B}} \vDash_e \{\lambda\}$ .

# Proof of Theorem 3.1.

Choose a fixed but arbitrary pair $(\lambda, e)$ . Then, enumerate the realizable histories, $\mathcal{H}^{\lambda, e}$ , and let $h^1$ denote the first element of this enumeration, $h^2$ the second, and so on.

Then, we design a constructive procedure for a basis that, when repeatedly applied, induces an infinite enumeration of bases that satisfy the desired two properties. This constructive procedure for the $k$ -th basis will contain $k + 1$ agents, where each agent is distinct from $\lambda$ , but will produce the same action as the agent on every $(k + 1)$ -th element of the history sequence, $h^1, h^2, \ldots$ .

For the first $(k = 1)$ basis, we construct two agents. The first, $\lambda_{\mathrm{B}}^{1}$ , chooses the same action distribution as $\lambda$ on each even-numbered history: $\lambda_{\mathrm{B}}^{1}(h^{i}) = \lambda (h^{i})$ . Then, this agent will choose a different action distribution on the odd-numbered histories: $\lambda_{\mathrm{B}}^{1}(h^{i + 1})\neq \lambda (h^{i + 1})$ , for $i$ any even natural number.
The second agent, $\lambda_{\mathrm{B}}^{2}$ , will do the opposite to $\lambda_{\mathrm{B}}^{1}$ : on each odd-numbered history $h^{i + 1}$ , $\lambda_{\mathrm{B}}^{2}(h^{i + 1}) = \lambda (h^{i + 1})$ , but on every even-numbered history, $\lambda_{\mathrm{B}}^{2}(h^{i}) \neq \lambda (h^{i})$ .

Observe first that by construction, $\lambda \neq \lambda_{\mathrm{B}}^{1}$ , and $\lambda \neq \lambda_{\mathrm{B}}^{2}$ , since there exist histories where they choose different action distributions. Next, observe that the basis, $\Lambda_{\mathrm{B}} = \{\lambda_{\mathrm{B}}^{1}, \lambda_{\mathrm{B}}^{2}\}$ , generates $\{\lambda\}$ in $e$ through the following set of learning rules, $\Sigma$ : given any realizable history, $h \in \mathcal{H}^{\lambda, e}$ , check whether the history has an even- or odd-numbered index in the enumeration. If odd, choose $\lambda_{\mathrm{B}}^{2}$ , and if even, choose $\lambda_{\mathrm{B}}^{1}$ .

More generally, this procedure can be applied for any $k$ :

$$
\Lambda_{\mathrm{B}}^{k} = \left\{\lambda_{\mathrm{B}}^{1}, \dots, \lambda_{\mathrm{B}}^{k + 1}\right\}, \quad \lambda_{\mathrm{B}}^{i}(h) = \begin{cases} \lambda(h) & [h] == i, \\ \neq \lambda(h) & \text{otherwise}, \end{cases} \tag{B.1}
$$

where we use the notation $[h] == i$ to express the logical predicate asserting that the modulus of the index of $h$ in the enumeration $h^1, h^2, \ldots$ is equal to $i$ .

Further, $\neq \lambda(h)$ simply refers to any choice of action distribution that is unequal to $\lambda(h)$ . Thus, for all natural numbers $k \geq 2$ , we can construct a new basis consisting of $k$ base agents that generates $\lambda$ in $e$ , but does not contain the agent itself. This completes the argument.

# B.2 Section 4 Proofs

We next present the proofs of results from Section 4.

# B.2.1 Theorem 4.1: Properties of CRL

We begin with Theorem 4.1 that establishes basic properties of CRL.
Theorem 4.1. Every instance of $CRL(e, v, \Lambda, \Lambda_{\mathrm{B}})$ satisfies the following properties:

1. If $\Lambda \neq \Lambda_{\mathrm{B}} \cup \Lambda^{*}$ , there exists a $\Lambda_{\mathrm{B}}'$ such that (1) $\Lambda_{\mathrm{B}}' \vDash \Lambda$ , and (2) $(e, v, \Lambda, \Lambda_{\mathrm{B}}')$ is not an instance of CRL.
2. No element of $\Lambda_{\mathrm{B}}$ is optimal: $\Lambda_{\mathrm{B}}\cap \Lambda^{*} = \emptyset$ .
3. If $|\Lambda|$ is finite, there exists an agent set, $\Lambda^{\circ}$ , such that $|\Lambda^{\circ}| < |\Lambda|$ and $\Lambda^{\circ} \vDash \Lambda$ .
4. If $|\Lambda|$ is infinite, there exists an agent set, $\Lambda^{\circ}$ , such that $\Lambda^{\circ} \subset \Lambda$ and $\Lambda^{\circ} \vDash \Lambda$ .

We prove this result in the form of three lemmas, corresponding to each of the four points of the theorem (with the third lemma, Lemma B.3, covering both points 3. and 4.). Some of the lemmas make use of properties of generates and reaches that we establish later in Appendix C.

Lemma B.1. For all instances of $CRL(e, v, \Lambda, \Lambda_{\mathrm{B}})$ , if $\Lambda \neq \Lambda_{\mathrm{B}} \cup \Lambda^{*}$ , then there exists a choice $\Lambda_{\mathrm{B}}'$ such that (1) $\Lambda_{\mathrm{B}}' \vDash \Lambda$ , and (2) $(e, v, \Lambda, \Lambda_{\mathrm{B}}')$ is not an instance of $CRL$ .

# Proof of Lemma B.1.

Recall that a tuple $(e, v, \Lambda, \Lambda_{\mathrm{B}})$ is CRL just when all of the optimal agents $\Lambda^{*}$ do not reach the basis. Then, the result holds as a straightforward consequence of two facts. First, we can always construct a new basis containing all of the optimal agents, $\Lambda_{\mathrm{B}}^{\circ} = \Lambda_{\mathrm{B}} \cup \Lambda^{*}$ . Notice that $\Lambda_{\mathrm{B}}^{\circ}$ still generates $\Lambda$ by property three of Theorem 4.2.
Further, since both $\Lambda_{\mathrm{B}}$ and $\Lambda^{*}$ are each subsets of $\Lambda$ , and by assumption $\Lambda \neq \Lambda_{\mathrm{B}} \cup \Lambda^{*}$ (so there is at least one sub-optimal agent that is not in the basis), it follows that $\Lambda_{\mathrm{B}}^{\circ} \subset \Lambda$ . Second, by Proposition C.15, we know that every element $\lambda_{\mathrm{B}}^{\circ} \in \Lambda_{\mathrm{B}}^{\circ}$ will always reach the basis, $\lambda_{\mathrm{B}}^{\circ} \overset{\square}{\rightsquigarrow} \Lambda_{\mathrm{B}}^{\circ}$ . Therefore, in the tuple $(e, v, \Lambda, \Lambda_{\mathrm{B}}^{\circ})$ , each of the optimal agents will reach the basis, and therefore this is not an instance of CRL.

Lemma B.2. No element of $\Lambda_{\mathrm{B}}$ is optimal: $\Lambda_{\mathrm{B}}\cap \Lambda^{*} = \emptyset$ .

# Proof of Lemma B.2.

The lemma follows as a combination of two facts.

First, recall that, by definition of CRL, each optimal agent $\lambda^{*} \in \Lambda^{*}$ satisfies $\lambda^{*} \not\rightsquigarrow \Lambda_{\mathrm{B}}$ .

Second, note that by Lemma B.11, we know that each $\lambda_{\mathrm{B}} \in \Lambda_{\mathrm{B}}$ satisfies $\lambda_{\mathrm{B}} \rightsquigarrow \Lambda_{\mathrm{B}}$ .

Therefore, since sometimes reaches ($\rightsquigarrow$) and never reaches ($\not\rightsquigarrow$) are negations of one another, we conclude that no basis element can be optimal.

Before stating the next lemma, we note that points (3.) and (4.) of Theorem 4.1 are simply expansions of the definition of a minimal agent set, which we define precisely in Definition C.4 and Definition C.5.

Lemma B.3. For any instance of CRL, $\Lambda$ is not minimal.

# Proof of Lemma B.3.

We first show that $\Lambda$ cannot be minimal. To do so, we consider the cases where the rank (Definition C.3) of $\Lambda$ is finite and infinite separately.

(Finite Rank $\Lambda$ .)
If $\mathrm{rank}(\Lambda)$ is finite and minimal, then it follows immediately that there is no agent set of smaller rank that generates $\Lambda$ . By consequence, since $\Lambda_{\mathrm{B}} \subset \Lambda$ and $\Lambda_{\mathrm{B}} \vDash \Lambda$ , we conclude that $\Lambda$ cannot be minimal. $\checkmark$

(Infinite Rank $\Lambda$ .)

If $\mathrm{rank}(\Lambda)$ is infinite and minimal, then there is no proper subset of $\Lambda$ that universally generates $\Lambda$ by definition. By consequence, since $\Lambda_{\mathrm{B}} \subset \Lambda$ and $\Lambda_{\mathrm{B}} \vDash \Lambda$ , we conclude that $\Lambda$ cannot be minimal. $\checkmark$

This completes the argument of both cases, and we conclude that for any instance of CRL, $\Lambda$ is not minimal.

# B.2.2 Theorem 4.2: Properties of Generates

Next, we prove basic properties of generates.

Theorem 4.2. The following properties hold of the generates operator:

1. Generates is transitive: For any triple $(\Lambda^1, \Lambda^2, \Lambda^3)$ and $e \in \mathcal{E}$ , if $\Lambda^1 \vDash_e \Lambda^2$ and $\Lambda^2 \vDash_e \Lambda^3$ , then $\Lambda^1 \vDash_e \Lambda^3$ .
2. Generates is not commutative: there exists a pair $(\Lambda^1, \Lambda^2)$ and $e \in \mathcal{E}$ such that $\Lambda^1 \vDash_e \Lambda^2$ , but $\neg (\Lambda^2 \vDash_e \Lambda^1)$ .
3. For all $\Lambda$ and pair of agent bases $(\Lambda_{\mathrm{B}}^{1},\Lambda_{\mathrm{B}}^{2})$ such that $\Lambda_{\mathrm{B}}^{1}\subseteq \Lambda_{\mathrm{B}}^{2}$ , if $\Lambda_{\mathrm{B}}^{1}\vDash \Lambda$ , then $\Lambda_{\mathrm{B}}^{2}\vDash \Lambda$ .
4. For all $\Lambda$ and $e\in \mathcal{E}$ , $\bar{\Lambda} \vDash_e \Lambda$ , where $\bar{\Lambda}$ denotes the set of all agents.
5. The decision problem, Given $(e,\Lambda_{\mathrm{B}},\Lambda)$ , output True iff $\Lambda_{\mathrm{B}} \vDash_e \Lambda$ , is undecidable.

The proof of this theorem is spread across the next five lemmas below.
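Points (1.) and (2.) admit a quick finite sanity check via the pointwise form of generation used in the proof of Lemma B.4. The encoding below is our own toy illustration, not the paper's formal objects: agents are dictionaries from histories to actions, and `gen` tests that every history of every target agent is covered by some source agent.

```python
# Finite sanity check (ours) of transitivity and non-commutativity of
# generates. Agents are dicts: history -> action; "gen" is the pointwise
# cover condition that a learning rule can turn into agent switching.
HS = ["", "x", "y"]

def gen(src, tgt):
    return all(any(s[h] == t[h] for s in src) for t in tgt for h in HS)

L1 = [{"": 0, "x": 0, "y": 0}, {"": 1, "x": 1, "y": 1}]
L2 = [{"": 0, "x": 1, "y": 0}, {"": 1, "x": 0, "y": 1}]  # mixes L1 per history
L3 = [{"": 1, "x": 1, "y": 0}]                           # mixes L2 per history

assert gen(L1, L2) and gen(L2, L3)
assert gen(L1, L3)       # transitivity: L1 covers L3 by composing the choices
assert not gen(L3, L1)   # not commutative: the single agent in L3 cannot
                         # match L1's constant-0 agent at the empty history
```

The composition in the second assertion is exactly the move in the proof of Lemma B.4: at each history, first pick the matching element of $\Lambda^2$, then the element of $\Lambda^1$ matching it.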
The fact that generates is transitive suggests that the basic tools of an agent set, paired with a set of learning rules, might be likened to an algebraic structure. We can draw a symmetry between an agent basis and the basis of a vector space: a vector space consists of all linear combinations of the basis, whereas $\Lambda$ consists of all valid switches (according to the learning rules) between the base agents. However, the fact that generates is not commutative (by point 2.) raises a natural question: are there choices of learning rules under which generates is commutative? An interesting direction for future work is to explore this style of algebraic analysis on agents.

Lemma B.4. Generates is transitive: For any triple $(\Lambda^1, \Lambda^2, \Lambda^3)$ and $e \in \mathcal{E}$ , if $\Lambda^1 \stackrel{e}{\vDash} \Lambda^2$ and $\Lambda^2 \stackrel{e}{\vDash} \Lambda^3$ , then $\Lambda^1 \stackrel{e}{\vDash} \Lambda^3$ .

# Proof of Lemma B.4.

Assume $\Lambda^1 \stackrel{e}{\vDash} \Lambda^2$ and $\Lambda^2 \stackrel{e}{\vDash} \Lambda^3$ . Then, by Proposition C.4 and the definition of the generates operator, we know that

$$
\forall_ {\lambda^ {2} \in \Lambda^ {2}} \exists_ {\sigma^ {1} \in \Sigma^ {1}} \forall_ {h \in \bar {\mathcal {H}}} \lambda^ {2} (h) = \sigma^ {1} (h) (h), \tag {B.2}
$$

$$
\forall_ {\lambda^ {3} \in \Lambda^ {3}} \exists_ {\sigma^ {2} \in \Sigma^ {2}} \forall_ {h \in \bar {\mathcal {H}}} \lambda^ {3} (h) = \sigma^ {2} (h) (h), \tag {B.3}
$$

where $\Sigma^1$ and $\Sigma^2$ express the set of all learning rules over $\Lambda^1$ and $\Lambda^2$ respectively.
By definition of a learning rule, $\sigma$ , we rewrite the above as follows,

$$
\forall_ {\lambda^ {2} \in \Lambda^ {2}} \forall_ {h \in \bar {\mathcal {H}}} \exists_ {\lambda^ {1} \in \Lambda^ {1}} \lambda^ {2} (h) = \lambda^ {1} (h), \tag {B.4}
$$

$$
\forall_ {\lambda^ {3} \in \Lambda^ {3}} \forall_ {h \in \bar {\mathcal {H}}} \exists_ {\lambda^ {2} \in \Lambda^ {2}} \lambda^ {3} (h) = \lambda^ {2} (h). \tag {B.5}
$$

Then, consider a fixed but arbitrary $\lambda^3\in \Lambda^3$ . We construct a learning rule over $\Lambda^1$ , $\sigma^1:\mathcal{H}\to \Lambda^1$ , that induces an equivalent agent as follows. For each realizable history $h\in \bar{\mathcal{H}}$ , by Equation B.5 we know that there is a $\lambda^2$ such that $\lambda^3 (h) = \lambda^2 (h)$ , and by Equation B.4, there is a $\lambda^1$ such that $\lambda^2 (h) = \lambda^1 (h)$ . Then, set $\sigma^1:h\mapsto \lambda^1$ such that $\lambda^1 (h) = \lambda^2 (h) = \lambda^3 (h)$ .

Since $h$ and $\lambda^3$ were chosen arbitrarily, we conclude that

$$
\forall_ {\lambda^ {3} \in \Lambda^ {3}} \forall_ {h \in \bar {\mathcal {H}}} \exists_ {\lambda^ {1} \in \Lambda^ {1}} \lambda^ {3} (h) = \lambda^ {1} (h).
$$

But, by the definition of $\Sigma^1$ , this means there exists a learning rule such that

$$
\forall_ {\lambda^ {3} \in \Lambda^ {3}} \exists_ {\sigma^ {1} \in \Sigma^ {1}} \forall_ {h \in \bar {\mathcal {H}}} \lambda^ {3} (h) = \sigma^ {1} (h) (h).
$$

This is exactly the definition of $\Sigma^1$ -generation, and by Proposition C.4, we conclude $\Lambda^1 \stackrel{e}{\vDash} \Lambda^3$ .

Lemma B.5. Generates is not commutative: there exists a pair $(\Lambda^1, \Lambda^2)$ and $e \in \mathcal{E}$ such that $\Lambda^2 \stackrel{e}{\vDash} \Lambda^1$ , but $\neg (\Lambda^1 \stackrel{e}{\vDash} \Lambda^2)$ .

# Proof of Lemma B.5.
The result follows from a simple counterexample: consider the pair

$$
\Lambda^ {1} = \left\{\lambda_ {i}: h \mapsto a _ {1} \right\}, \quad \Lambda^ {2} = \left\{\lambda_ {i}: h \mapsto a _ {1}, \lambda_ {j}: h \mapsto a _ {2} \right\}.
$$

Note that since $\lambda_{i}$ is in both sets, and $\Lambda^1$ is a singleton, we know that $\Lambda^2 \stackrel{e}{\vDash} \Lambda^1$ in any environment. But, by Proposition C.6, we know that $\Lambda^1$ cannot generate $\Lambda^2$ .

Lemma B.6. For all $\Lambda$ and pairs of agent bases $(\Lambda_{\mathrm{B}}^{1},\Lambda_{\mathrm{B}}^{2})$ such that $\Lambda_{\mathrm{B}}^{1}\subseteq \Lambda_{\mathrm{B}}^{2}$ , if $\Lambda_{\mathrm{B}}^{1}\stackrel{e}{\vDash}\Lambda$ , then $\Lambda_{\mathrm{B}}^{2}\stackrel{e}{\vDash}\Lambda$ .

# Proof of Lemma B.6.

The result follows as a natural consequence of the definition of generates. Recall that $\Lambda_{\mathrm{B}}^{1} \stackrel{e}{\vDash} \Lambda$ just when,

$$
\begin{array}{l} \exists_ {\Sigma^ {1} \subseteq \Xi} \Lambda_ {\mathrm {B}} ^ {1} \stackrel {e} {\vDash} _ {\Sigma^ {1}} \Lambda \qquad \mathrm {(B.6)} \\ \equiv \exists_ {\Sigma^ {1} \subseteq \Xi} \forall_ {\lambda \in \Lambda} \exists_ {\sigma^ {1} \in \Sigma^ {1}} \forall_ {h \in \bar {\mathcal {H}}} \lambda (h) = \lambda_ {\mathrm {B}} ^ {\sigma^ {1} (h)} (h), \qquad \mathrm {(B.7)} \end{array}
$$

where again $\lambda_{\mathrm{B}}^{\sigma^{1}(h)} \in \Lambda_{\mathrm{B}}^{1}$ is the base agent chosen by $\sigma^{1}(h)$ . We use superscripts $\Sigma^{1}$ and $\sigma^{1}$ to signify that $\sigma^{1}$ is defined relative to $\Lambda_{\mathrm{B}}^{1}$ , that is, $\sigma^{1}: \mathcal{H} \to \Lambda_{\mathrm{B}}^{1}$ with $\sigma^{1} \in \Sigma^{1}$ .

But, since $\Lambda_{\mathrm{B}}^{1} \subseteq \Lambda_{\mathrm{B}}^{2}$ , we can define $\Sigma^{2} = \Sigma^{1}$ and ensure that $\Lambda_{\mathrm{B}}^{2} \stackrel{e}{\vDash}_{\Sigma^{2}} \Lambda$ , since the agent basis $\Lambda_{\mathrm{B}}^{1}$ was already sufficient to generate $\Lambda$ . Therefore, we conclude that $\Lambda_{\mathrm{B}}^{2} \stackrel{e}{\vDash} \Lambda$ .
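Because the full set of learning rules $\Xi$ places no restriction on switching, generation with respect to an environment reduces to the pointwise condition used in the proof of Lemma B.4 (Equation B.4): every target agent must agree with some base agent on every realizable history. The following is a minimal sketch of this check in a finite toy setting; the history encoding, agent representation, and function names are our own illustrative choices, not part of the formalism:

```python
from itertools import product

def generates(basis, targets, histories):
    """Pointwise test: for every target agent and every history,
    some base agent chooses the same action (cf. Equation B.4)."""
    return all(
        any(b(h) == lam(h) for b in basis)
        for lam in targets
        for h in histories
    )

# Toy histories: tuples of a single observation "o1", up to length 2.
histories = [h for n in range(3) for h in product(["o1"], repeat=n)]

lam_i = lambda h: "a1"   # always plays a1
lam_j = lambda h: "a2"   # always plays a2

# The counterexample of Lemma B.5: the two-agent set generates the
# singleton, but the singleton cannot generate the two-agent set.
print(generates([lam_i, lam_j], [lam_i], histories))  # True
print(generates([lam_i], [lam_i, lam_j], histories))  # False
```

In this finite sketch the asymmetry of Lemma B.5 is visible directly: coverage holds in one direction only.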
Lemma B.7. For all $\Lambda$ and $e\in \mathcal{E}$ , $\mathbb{A}\stackrel{e}{\vDash} \Lambda$ .

# Proof of Lemma B.7.

This is a direct consequence of Proposition C.18.

Lemma B.8. The decision problem, AGENTSGENERATE, Given $(e, \Lambda_{\mathrm{B}}, \Lambda)$ , output True iff $\Lambda_{\mathrm{B}} \stackrel{e}{\vDash} \Lambda$ , is undecidable.

# Proof of Lemma B.8.

We proceed as is typical of such results, by reducing the Halting Problem to AGENTSGENERATE.

In particular, let $m$ be a fixed but arbitrary Turing Machine, and $w$ be a fixed but arbitrary input to be given to machine $m$ . Then, HALT defines the decision problem that outputs True iff $m$ halts on input $w$ .

We construct an oracle for AGENTSGENERATE that can decide HALT as follows. Let $(\mathcal{A},\mathcal{O})$ be an interface where the observation space is comprised of all configurations of machine $m$ . Then, we consider a deterministic environment $e$ that simply produces the next configuration of $m$ when run on input $w$ , based on the current tape contents, the state of $m$ , and the location of the tape head. Note that all three of these elements are contained in a Turing Machine's configuration, and that a single configuration indicates whether the Turing Machine is in a halting state or not. Now, let the action space $\mathcal{A}$ consist of two actions, $\{a_{\text{no-op}},a_{\text{halt}}\}$ . On execution of $a_{\text{no-op}}$ , the environment moves to the next configuration. On execution of $a_{\text{halt}}$ , the machine halts. That is, we restrict ourselves to the singleton agent set, $\Lambda$ , containing the agent $\lambda^{\circ}$ that outputs $a_{\text{halt}}$ directly following the machine entering a halting configuration, and $a_{\text{no-op}}$ otherwise:

$$
\lambda^ {\circ}: h a o \mapsto \left\{ \begin{array}{l l} a _ {\text {halt}} & o \text { is a halting configuration}, \\ a _ {\text {no-op}} & \text {otherwise},
\end{array} \right., \qquad \Lambda = \{\lambda^ {\circ} \}.
$$

Using these ingredients, we take any instance of HALT, $(m,w)$ , and consider the singleton agent basis $\Lambda_{\mathrm{B}} = \{\lambda_{\text{no-op}}: h \mapsto a_{\text{no-op}}\}$ .

We make one query to our AGENTSGENERATE oracle, and ask: $\Lambda_{\mathrm{B}} \stackrel{e}{\vDash} \Lambda$ . If it is True, then the histories realizable by the $(\lambda^{\circ}, e)$ pair ensure that the single agent in $\Lambda$ never emits the $a_{\text{halt}}$ action, and thus, $m$ does not halt on $w$ . If it is False, then there are realizable histories in $e$ in which $m$ halts on $w$ . We thus use the oracle's response directly to decide the given instance of HALT.

# B.2.3 Theorem 4.3: Properties of Reaches

We find many similar properties hold for reaches.

Theorem 4.3. The following properties hold of the reaches operator:

1. Sometimes reaches $(\stackrel{e}{\rightsquigarrow})$ and never reaches $(\stackrel{e}{\not\rightsquigarrow})$ are not transitive.
2. Sometimes reaches is not commutative: there exists a pair $(\Lambda^1, \Lambda^2)$ and $e \in \mathcal{E}$ such that $\forall_{\lambda^1 \in \Lambda^1} \lambda^1 \stackrel{e}{\rightsquigarrow} \Lambda^2$ , but $\exists_{\lambda^2 \in \Lambda^2} \lambda^2 \stackrel{e}{\not\rightsquigarrow} \Lambda^1$ .
3. For all pairs $(\Lambda, e)$ , if $\lambda \in \Lambda$ , then $\lambda \stackrel{e}{\rightsquigarrow} \Lambda$ .
4. Every agent satisfies $\lambda \stackrel{e}{\rightsquigarrow} \mathbb{A}$ in every environment.
5. The decision problem, Given $(e,\lambda ,\Lambda)$ , output True iff $\lambda \stackrel{e}{\rightsquigarrow} \Lambda$ , is undecidable.

Again, we prove this result through five lemmas that correspond to each of the above properties.

Many of these properties resemble those in Theorem 4.2. For instance, point (5.) shows that deciding whether a given agent sometimes reaches a basis in an environment is undecidable.
We anticipate that the majority of decision problems related to determining properties of arbitrary agent sets will be undecidable, though it is still worth making these arguments carefully. Moreover, there may be interesting special cases in which these decision problems are decidable (and perhaps, efficiently so). Identifying these special cases and their corresponding efficient algorithms is another interesting direction for future work.

Lemma B.9. $\stackrel{e}{\rightsquigarrow}$ and $\stackrel{e}{\not\rightsquigarrow}$ are not transitive.

# Proof of Lemma B.9.

We construct two counterexamples, one for each of "sometimes reaches" $(\stackrel{e}{\rightsquigarrow})$ and "never reaches" $(\stackrel{e}{\not\rightsquigarrow})$ .

Counterexample: Sometimes Reaches. To do so, we begin with a tuple $(e,\Lambda^1,\Lambda^2,\Lambda^3)$ such that both

$$
\forall_ {\lambda^ {1} \in \Lambda^ {1}} \lambda^ {1} \stackrel {e} {\rightsquigarrow} \Lambda^ {2}, \quad \forall_ {\lambda^ {2} \in \Lambda^ {2}} \lambda^ {2} \stackrel {e} {\rightsquigarrow} \Lambda^ {3}.
$$

We will show that there is an agent, $\overline{\lambda}^1 \in \Lambda^1$ , such that $\overline{\lambda}^1 \stackrel{e}{\not\rightsquigarrow} \Lambda^3$ , thus illustrating that sometimes reaches is not guaranteed to be transitive. The basic idea is that sometimes reaches only requires an agent to stop its search on one realizable history. So, $\lambda^1 \stackrel{e}{\rightsquigarrow} \Lambda^2$ might happen on some history $h$ , but each $\lambda^2 \in \Lambda^2$ might only reach $\Lambda^3$ on an entirely different history. As a result, reaching $\Lambda^2$ is not enough to ensure the agent also reaches $\Lambda^3$ .

In more detail, the agent sets of the counterexample are as follows. Let $\mathcal{A} = \{a_1, a_2\}$ and $\mathcal{O} = \{o_1, o_2\}$ . Let $\Lambda^2$ be all agents that, after ten timesteps, always take $a_2$ .
$\overline{\lambda}^1$ is simple: it always takes $a_1$ , except on one realizable history, $h^\circ$ (and all of the realizable successors of $h^\circ$ , $\mathcal{H}_{h^\circ}^{\overline{\lambda}^1,e}$ ), where it switches to taking $a_2$ after ten timesteps. Clearly $\overline{\lambda}^1 \stackrel{e}{\rightsquigarrow} \Lambda^2$ , since after ten timesteps, we know there will be some $\lambda^2$ such that $\overline{\lambda}^1(h^\circ h') = \lambda^2(h^\circ h')$ for all realizable history suffixes $h'$ . Now, by assumption, we know that $\lambda^2 \stackrel{e}{\rightsquigarrow} \Lambda^3$ . This ensures there is a single realizable history $h$ such that there is a $\lambda^3$ where $\lambda^2(hh') = \lambda^3(hh')$ for any realizable suffix $h'$ . To finish the counterexample, we simply note that this realizable $h$ can be different from $h^\circ$ and all of its successors. For example, $h^\circ$ might be the history containing only $o_1$ for the first ten timesteps, while $h$ could be the history containing only $o_2$ for the first ten timesteps. Thus, this $\overline{\lambda}^1$ never reaches $\Lambda^3$ , and we conclude the counterexample.

Counterexample: Never Reaches. The instance for never reaches is simple: Let $\mathcal{A} = \{a_1, a_2, a_3\}$ , and $\Lambda^1 = \Lambda^3$ . Suppose all agents in $\Lambda^1$ (and thus $\Lambda^3$ ) only choose actions $a_1$ and $a_3$ . Let $\Lambda^2$ be a singleton, $\Lambda^2 = \{\lambda^2\}$ such that $\lambda^2 : h \mapsto a_2$ . Clearly, every $\lambda^1 \in \Lambda^1$ will never reach $\Lambda^2$ , since none of them ever choose $a_2$ . Similarly, $\lambda^2$ will never reach $\Lambda^3$ , since no agents in $\Lambda^3$ choose $a_2$ . However, by Proposition C.15 and the assumption that $\Lambda^1 = \Lambda^3$ , we know $\forall_{\lambda^1 \in \Lambda^1} \lambda^1 \stackrel{e}{\rightsquigarrow} \Lambda^3$ . This directly violates transitivity.

This completes the argument for both cases, and we conclude.
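The never-reaches counterexample above can be checked mechanically in a finite setting. The sketch below truncates the infinite-history definition of sometimes reaches at a fixed horizon, an assumption made purely for illustration; the encodings and names are hypothetical:

```python
from itertools import product

# Deterministic toy environment with one observation "o"; we truncate
# the infinite-history definition of reaches at horizon H (assumption).
H = 4

def histories(max_len=H):
    return [h for n in range(max_len + 1) for h in product(["o"], repeat=n)]

def suffixes(h, max_len=H):
    # All truncated continuations of h, including h itself.
    return [h + s for s in histories(max_len - len(h))]

def sometimes_reaches(lam, Lambda):
    """lam ~> Lambda: on some history, lam agrees with one member of
    Lambda on that history and all of its (truncated) continuations."""
    return any(
        all(lam(s) == lam2(s) for s in suffixes(h))
        for h in histories()
        for lam2 in Lambda
    )

# Lemma B.9's never-reaches instance: Lambda1 = Lambda3 plays only
# a1 or a3; Lambda2 is the singleton that always plays a2.
lam_a1 = lambda h: "a1"
lam_a3 = lambda h: "a3"
lam_a2 = lambda h: "a2"
Lambda13, Lambda2 = [lam_a1, lam_a3], [lam_a2]

print(sometimes_reaches(lam_a1, Lambda2))   # False: a1 never matches a2
print(sometimes_reaches(lam_a2, Lambda13))  # False
print(sometimes_reaches(lam_a1, Lambda13))  # True: membership (Lemma B.11)
```

The last check illustrates Lemma B.11: membership alone suffices for sometimes reaches, which is exactly what breaks transitivity of never reaches.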
Lemma B.10. Sometimes reaches is not commutative: there exists a pair $(\Lambda^1, \Lambda^2)$ and $e \in \mathcal{E}$ such that $\forall_{\lambda^1 \in \Lambda^1} \lambda^1 \stackrel{e}{\rightsquigarrow} \Lambda^2$ , but $\exists_{\lambda^2 \in \Lambda^2} \lambda^2 \stackrel{e}{\not\rightsquigarrow} \Lambda^1$ .

# Proof of Lemma B.10.

The result holds as a straightforward consequence of the following counterexample. Consider the pair of agent sets

$$
\Lambda^ {1} = \{\lambda_ {i}: h \mapsto a _ {1} \}, \qquad \Lambda^ {2} = \{\lambda_ {i}: h \mapsto a _ {1}, \lambda_ {j}: h \mapsto a _ {2} \}.
$$

Note that since $\lambda_{i}$ is in both sets, and $\Lambda^1$ is a singleton, we know that $\lambda_{i} \stackrel{e}{\rightsquigarrow} \Lambda^2$ in any environment by Lemma B.11. But, clearly $\lambda_{j}$ never reaches $\Lambda^1$ , since no agent in $\Lambda^1$ ever chooses $a_2$ .

Lemma B.11. For all pairs $(\Lambda, e)$ , if $\lambda \in \Lambda$ , then $\lambda \stackrel{e}{\rightsquigarrow} \Lambda$ .

# Proof of Lemma B.11.

The proposition is straightforward, as any $\lambda \in \Lambda$ will be equivalent to itself in behavior for all histories.

Lemma B.12. Every agent satisfies $\lambda \stackrel{e}{\rightsquigarrow} \mathbb{A}$ in every environment.

# Proof of Lemma B.12.

This is again a direct consequence of Proposition C.18.

Lemma B.13. The decision problem, AGENTREACHES, Given $(e,\lambda ,\Lambda)$ , output True iff $\lambda \stackrel{e}{\rightsquigarrow} \Lambda$ , is undecidable.

# Proof of Lemma B.13.

We again proceed by reducing the Halting Problem to AGENTREACHES.

In particular, let $m$ be a fixed but arbitrary Turing Machine, and $w$ be a fixed but arbitrary input to be given to machine $m$ . Then, HALT defines the decision problem that outputs True iff $m$ halts on input $w$ .
We construct an oracle for AGENTREACHES that can decide HALT as follows. Consider the same observation space used in the proof of Lemma B.8: Let $\mathcal{O}$ be comprised of all configurations of machine $m$ . Then, sequences of observations are simply the evolution of different Turing Machines processing possible inputs. We consider an action space, $\mathcal{A} = \{a_{\text{halted}}, a_{\text{not-yet}}\}$ , where agents simply report whether the history so far contains a halting configuration.

Then, we consider a deterministic environment $e$ that simply produces the next configuration of $m$ when run on input $w$ , based on the current tape contents, the state of $m$ , and the location of the tape head. Note again that all three of these elements are contained in a Turing Machine's configuration.

Using these ingredients, we take any instance of HALT, $(m, w)$ , and build the singleton agent set $\Lambda_{\mathrm{B}}$ containing only the agent $\lambda_{\text{halted}}: h \mapsto a_{\text{halted}}$ that always reports the machine as having halted. We then consider the agent $\lambda$ that outputs $a_{\text{not-yet}}$ until $m$ halts, at which point the agent switches to $a_{\text{halted}}$ .

We make one query to our AGENTREACHES oracle, and ask: $\lambda \stackrel{e}{\rightsquigarrow} \Lambda_{\mathrm{B}}$ . If it is True, then the branching agent eventually becomes equivalent to $\lambda_{\text{halted}}$ in that they both indefinitely output $a_{\text{halted}}$ on at least one realizable history. Since $e$ is deterministic, we know this equivalence holds across all histories. If the query reports False, then there is no future in $e$ in which $m$ halts on $w$ , as otherwise the agent would become equivalent to $\lambda_{\text{halted}}$ . We thus use the oracle's response directly to decide the given instance of HALT.
# C Additional Analysis

Finally, we present a variety of additional results about agents and the generates and reaches operators.

# C.1 Additional Analysis: Generates

We first highlight simple properties of the generates operator. Many of our results build around the notion of uniform generation, a variant of the generates operator in which a basis generates an agent set in every environment. We define this operator precisely as follows.

Definition C.1. Let $\Sigma$ be a set of learning rules over some basis $\Lambda_{\mathrm{B}}$ . We say that a set $\Lambda$ is uniformly $\Sigma$ -generated by $\Lambda_{\mathrm{B}}$ , denoted $\Lambda_{\mathrm{B}} \models_{\Sigma} \Lambda$ , if and only if

$$
\forall_ {\lambda \in \Lambda} \exists_ {\sigma \in \Sigma} \forall_ {h \in \mathcal {H}} \lambda (h) = \sigma (h) (h). \tag {C.1}
$$

Definition C.2. We say a basis $\Lambda_{\mathrm{B}}$ uniformly generates $\Lambda$ , denoted $\Lambda_{\mathrm{B}} \models \Lambda$ , if and only if

$$
\exists_ {\Sigma \subseteq \Xi} \Lambda_ {\mathrm {B}} \models_ {\Sigma} \Lambda . \tag {C.2}
$$

We will first show that uniform generation entails generation in a particular environment. As a consequence, when we prove that certain properties hold of uniform generation, we can typically also conclude that the properties hold for generation as well, though there is some subtlety as to when exactly this implication will allow results about $\models$ to apply directly to $\stackrel{e}{\vDash}$ .

Proposition C.1. For any $(\Lambda_{\mathrm{B}},\Lambda)$ pair, if $\Lambda_{\mathrm{B}}\models \Lambda$ , then for all $e\in \mathcal{E}$ , $\Lambda_{\mathrm{B}}\stackrel{e}{\vDash} \Lambda$ .

# Proof of Proposition C.1.

Recall that in the definition of uniform generation, $\Lambda_{\mathrm{B}}\models \Lambda$ , we require,

$$
\exists_ {\Sigma \subseteq \Xi} \forall_ {\lambda \in \Lambda} \exists_ {\sigma \in \Sigma} \forall_ {h \in \mathcal {H}} \lambda (h) = \sigma (h) (h).
\tag {C.3}
$$

Now, contrast this with generates with respect to a specific environment $e$ ,

$$
\exists_ {\Sigma \subseteq \Xi} \forall_ {\lambda \in \Lambda} \exists_ {\sigma \in \Sigma} \forall_ {h \in \bar {\mathcal {H}}} \lambda (h) = \sigma (h) (h). \tag {C.4}
$$

The only difference in the definitions is that the set of histories quantified over is $\mathcal{H}$ in the former, and $\bar{\mathcal{H}} = \bar{\mathcal{H}}^{\lambda ,e}$ in the latter.

Since $\bar{\mathcal{H}}\subseteq \mathcal{H}$ for any choice of environment $e$ , we can conclude that when Equation C.3 holds, Equation C.4 holds as well. Therefore, $\Lambda_{\mathrm{B}}\models \Lambda \Rightarrow \Lambda_{\mathrm{B}}\stackrel{e}{\vDash} \Lambda$ for any $e$ .

We next show that the subset relation implies generation.

Proposition C.2. Any pair of agent sets $(\Lambda_{\mathrm{small}},\Lambda_{\mathrm{big}})$ such that $\Lambda_{\mathrm{small}}\subseteq \Lambda_{\mathrm{big}}$ satisfies

$$
\Lambda_ {\mathrm {b i g}} \models \Lambda_ {\mathrm {s m a l l}}. \tag {C.5}
$$

# Proof of Proposition C.2.

The result follows from the combination of two facts. First, all agent sets generate themselves. That is, for arbitrary $\Lambda$ , we know that $\Lambda \models \Lambda$ , since the trivial set of learning rules,

$$
\Sigma_ {\mathrm {t r}} = \left\{\sigma_ {i}: h \mapsto \lambda_ {i}, \forall_ {\lambda_ {i} \in \Lambda} \right\}, \tag {C.6}
$$

that never switches between agents is sufficient to generate the agent set.

Second, observe that removing an agent from the generated set has no effect on the generates operator. That is, let $\Lambda' = \Lambda \setminus \{\lambda\}$ , for fixed but arbitrary $\lambda \in \Lambda$ . We see that $\Lambda \models \Lambda'$ , since $\Sigma_{\mathrm{tr}}$ is sufficient to generate $\Lambda'$ , too. By inducting over all removals of agents from $\Lambda$ , we reach our conclusion.
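The trivial learning rules $\Sigma_{\mathrm{tr}}$ of Equation C.6 can be made concrete in a small sketch. We assume a finite toy setting with histories as tuples; the helper names are our own illustrative choices:

```python
# Sketch of Proposition C.2 (assumption: finite toy setting, with
# histories encoded as tuples of observations).

def make_trivial_rules(Lambda_small):
    """Sigma_tr: one rule per agent; each rule ignores the history and
    always returns the same agent, i.e. it never switches."""
    return [lambda h, lam=lam: lam for lam in Lambda_small]

def sigma_generates(rules, targets, histories):
    """Definition C.1 specialised: each target agent must equal
    sigma(h)(h) everywhere, for some rule sigma in the given set."""
    return all(
        any(all(lam(h) == sigma(h)(h) for h in histories) for sigma in rules)
        for lam in targets
    )

histories = [(), ("o",), ("o", "o")]
lam0 = lambda h: "a0"
lam1 = lambda h: "a1"
Lambda_big = [lam0, lam1]       # the larger set; its elements are base agents
Lambda_small = [lam0]           # a subset of Lambda_big

rules = make_trivial_rules(Lambda_small)
print(sigma_generates(rules, Lambda_small, histories))  # True
```

The `lam=lam` default argument pins each agent to its rule at definition time, matching the "never switches" behavior of $\sigma_i$ in Equation C.6.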
Next, we establish properties about the sets of learning rules that correspond to the generates operator.

Proposition C.3. For any $(\Lambda_{\mathrm{B}},\Sigma ,\Lambda)$ such that $\Lambda_{\mathrm{B}}\models_{\Sigma}\Lambda$ , it holds that

$$
| \Lambda | \leq | \Sigma |. \tag {C.7}
$$

# Proof of Proposition C.3.

We proceed toward contradiction, and assume $|\Lambda| > |\Sigma|$ . Then, there is at least one learning rule $\sigma \in \Sigma$ that corresponds to two or more distinct agents in $\Lambda$ . Call this element $\sigma^{\circ}$ , and without loss of generality let $\lambda^1$ and $\lambda^2$ be two distinct agents that are each generated by $\sigma^{\circ}$ in the sense that

$$
\lambda^ {1} (h) = \sigma^ {\circ} (h) (h), \quad \lambda^ {2} (h) = \sigma^ {\circ} (h) (h), \tag {C.8}
$$

for every $h \in \mathcal{H}$ . But, by the distinctness of $\lambda^1$ and $\lambda^2$ , there must exist a history $h$ in which $\lambda^1(h) \neq \lambda^2(h)$ . We now arrive at a contradiction, as such a history cannot exist: By Equation C.8, we know that $\lambda^1(h) = \sigma^\circ(h)(h) = \lambda^2(h)$ for all $h$ .

We see that the set of all learning rules, $\Xi$ , is the strongest in the following sense.

Proposition C.4. For any basis $\Lambda_{\mathrm{B}}$ and agent set $\Lambda$ , exactly one of the two following properties holds:

1. The agent basis $\Lambda_{\mathrm{B}}$ uniformly generates $\Lambda$ under the set of all learning rules: $\Lambda_{\mathrm{B}} \models_{\Xi} \Lambda$ .
2. There is no set of learning rules for which the basis $\Sigma$ -uniformly generates the agent set: $\neg \exists_{\Sigma \subseteq \Xi} \Lambda_{\mathrm{B}} \models_{\Sigma} \Lambda$ .

# Proof of Proposition C.4.

The proof follows from the law of excluded middle. That is, for any set of learning rules $\Sigma$ , either it generates $\Lambda$ or it does not. If it does generate $\Lambda$ , then by Lemma B.6 so does $\Xi$ .
By consequence, if $\Xi$ does not generate $\Lambda$ , neither does any of its subsets.

Furthermore, uniform generation is also transitive.

Theorem C.5. Uniform generates is transitive: For any triple $(\Lambda^1, \Lambda^2, \Lambda^3)$ , if $\Lambda^1 \models \Lambda^2$ and $\Lambda^2 \models \Lambda^3$ , then $\Lambda^1 \models \Lambda^3$ .

# Proof of Theorem C.5.

Assume $\Lambda^1 \models \Lambda^2$ and $\Lambda^2 \models \Lambda^3$ . Then, by Proposition C.4 and the definition of the uniform generates operator, we know that

$$
\forall_ {\lambda^ {2} \in \Lambda^ {2}} \exists_ {\sigma^ {1} \in \Sigma^ {1}} \forall_ {h \in \mathcal {H}} \lambda^ {2} (h) = \sigma^ {1} (h) (h), \tag {C.9}
$$

$$
\forall_ {\lambda^ {3} \in \Lambda^ {3}} \exists_ {\sigma^ {2} \in \Sigma^ {2}} \forall_ {h \in \mathcal {H}} \lambda^ {3} (h) = \sigma^ {2} (h) (h), \tag {C.10}
$$

where $\Sigma^1$ and $\Sigma^2$ express the set of all learning rules over $\Lambda^1$ and $\Lambda^2$ respectively. By definition of a learning rule, $\sigma$ , we rewrite the above as follows,

$$
\forall_ {\lambda^ {2} \in \Lambda^ {2}} \forall_ {h \in \mathcal {H}} \exists_ {\lambda^ {1} \in \Lambda^ {1}} \lambda^ {2} (h) = \lambda^ {1} (h), \tag {C.11}
$$

$$
\forall_ {\lambda^ {3} \in \Lambda^ {3}} \forall_ {h \in \mathcal {H}} \exists_ {\lambda^ {2} \in \Lambda^ {2}} \lambda^ {3} (h) = \lambda^ {2} (h). \tag {C.12}
$$

Then, consider a fixed but arbitrary $\lambda^3\in \Lambda^3$ . We construct a learning rule over $\Lambda^1$ , $\sigma^1:\mathcal{H}\to \Lambda^1$ , that induces an equivalent agent as follows. For each history $h\in \mathcal{H}$ , by Equation C.12 we know that there is a $\lambda^2$ such that $\lambda^3 (h) = \lambda^2 (h)$ , and by Equation C.11, there is a $\lambda^1$ such that $\lambda^2 (h) = \lambda^1 (h)$ . Then, set $\sigma^1:h\mapsto \lambda^1$ such that $\lambda^1 (h) = \lambda^2 (h) = \lambda^3 (h)$ .
Since $h$ and $\lambda^3$ were chosen arbitrarily, we conclude that

$$
\forall_ {\lambda^ {3} \in \Lambda^ {3}} \forall_ {h \in \mathcal {H}} \exists_ {\lambda^ {1} \in \Lambda^ {1}} \lambda^ {3} (h) = \lambda^ {1} (h).
$$

But, by the definition of $\Sigma^1$ , this means there exists a learning rule such that

$$
\forall_ {\lambda^ {3} \in \Lambda^ {3}} \exists_ {\sigma^ {1} \in \Sigma^ {1}} \forall_ {h \in \mathcal {H}} \lambda^ {3} (h) = \sigma^ {1} (h) (h).
$$

This is exactly the definition of $\Sigma^1$ -uniform generation, and by Proposition C.4, we conclude $\Lambda^1 \models \Lambda^3$ .

Next, we show that a singleton basis only generates itself.

Proposition C.6. Any singleton basis, $\Lambda_{\mathrm{B}} = \{\lambda \}$ , only uniformly generates itself.

# Proof of Proposition C.6.

Note that generates requires switching between base agents. With only a single agent, there cannot be any switching, and thus, the only agent that can be described as switching amongst the elements of the singleton set $\Lambda_{\mathrm{B}} = \{\lambda\}$ is $\lambda$ itself.

# C.1.1 Rank and Minimal Bases

As discussed in the paper, one natural reaction to the concept of an agent basis is to ask how we can justify different choices of a basis. And, if we cannot, then perhaps the concept of an agent basis is disruptive, rather than illuminating. In the main text, we suggest that in many situations, the choice of basis is made by the constraints imposed by the problem, such as the available memory. However, there are some objective properties of different bases that can help us to evaluate possible choices of a suitable basis. For instance, some bases are minimal in the sense that they cannot be made smaller while still retaining the same expressive power (that is, while generating the same agent sets). Identifying such minimal sets may be useful, as it is likely that there is good reason to consider only the most compressed agent bases.
To make these intuitions concrete, we introduce the rank of an agent set.

Definition C.3. The rank of an agent set, $\mathrm{rank}(\Lambda)$ , is the size of the smallest agent basis that uniformly generates it:

$$
\operatorname {r a n k} (\Lambda) = \min _ {\Lambda_ {\mathrm {B}} \subseteq \mathbb {A}} | \Lambda_ {\mathrm {B}} | \quad \mathrm {s.t.} \quad \Lambda_ {\mathrm {B}} \models \Lambda . \tag {C.13}
$$

For example, the agent set,

$$
\Lambda = \left\{\lambda^ {0}: h \mapsto a _ {0}, \quad \lambda^ {1}: h \mapsto a _ {1}, \quad \lambda^ {2}: h \mapsto \left\{ \begin{array}{l l} a _ {0} & | h | \bmod 2 = 0, \\ a _ {1} & | h | \bmod 2 = 1 \end{array} \right. \right\}, \tag {C.14}
$$

has $\mathrm{rank}(\Lambda) = 2$ , since the basis,

$$
\Lambda_ {\mathrm {B}} = \left\{\lambda_ {\mathrm {B}} ^ {0}: h \mapsto a _ {0}, \lambda_ {\mathrm {B}} ^ {1}: h \mapsto a _ {1} \right\},
$$

uniformly generates $\Lambda$ , and there is no size-one basis that uniformly generates $\Lambda$ by Proposition C.6.

Using the notion of an agent set's rank, we now introduce the concept of a minimal basis. We suggest that minimal bases are particularly important, as they contain no redundancy with respect to their expressive power. Concretely, we define a minimal basis in two slightly different ways depending on whether the basis has finite or infinite rank. In the finite case, we say a basis is minimal if there is no basis of lower rank that generates it.

Definition C.4. An agent basis $\Lambda_{\mathrm{B}}$ with finite rank is said to be minimal just when there is no smaller basis that generates it,

$$
\forall_ {\Lambda_ {\mathrm {B}} ^ {\prime} \subseteq \mathbb {A}} \Lambda_ {\mathrm {B}} ^ {\prime} \models \Lambda_ {\mathrm {B}} \Rightarrow \operatorname {r a n k} \left(\Lambda_ {\mathrm {B}} ^ {\prime}\right) \geq \operatorname {r a n k} \left(\Lambda_ {\mathrm {B}}\right).
\tag {C.15}
$$

In the infinite case, as all infinite rank bases will have the same effective size, we instead consider a notion of minimality based on whether any elements can be removed from the basis without changing its expressive power.

Definition C.5. An agent basis $\Lambda_{\mathrm{B}}$ with infinite rank is said to be minimal just when no proper subset of $\Lambda_{\mathrm{B}}$ uniformly generates $\Lambda_{\mathrm{B}}$ :

$$
\forall_ {\Lambda_ {\mathrm {B}} ^ {\prime} \subseteq \Lambda_ {\mathrm {B}}} \Lambda_ {\mathrm {B}} ^ {\prime} \models \Lambda_ {\mathrm {B}} \Rightarrow \Lambda_ {\mathrm {B}} ^ {\prime} = \Lambda_ {\mathrm {B}}. \tag {C.16}
$$

Notably, this way of looking at minimal bases will also apply to finite rank agent bases as a direct consequence of the definition of a minimal finite rank basis. However, we still provide both definitions, as a finite rank basis may not contain a subset that generates it, but there may exist a lower rank basis that generates it.

Corollary C.7. As a corollary of Proposition C.2 and Definition C.4, for any minimal agent basis $\Lambda_{\mathrm{B}}$ , there is no proper subset of $\Lambda_{\mathrm{B}}$ that generates $\Lambda_{\mathrm{B}}$ .

Regardless of whether an agent basis has finite or infinite rank, we say the basis is a minimal basis of an agent set $\Lambda$ just when the basis uniformly generates $\Lambda$ and the basis is minimal.

Definition C.6. For any $\Lambda$ , a minimal basis of $\Lambda$ is any basis $\Lambda_{\mathrm{B}}$ that is both (1) minimal, and (2) $\Lambda_{\mathrm{B}} \models \Lambda$ .

A natural question arises as to whether the minimal basis of any agent set $\Lambda$ is unique. We answer this question in the negative.

Proposition C.8. The minimal basis of a set of agents is not necessarily unique.

# Proof of Proposition C.8.

To prove the claim, we construct an instance of an agent set with two distinct minimal bases. Let $\mathcal{A} = \{a_0, a_1\}$ , and $\mathcal{O} = \{o_0\}$ .
We consider the agent set containing four agents. The first two map every history to $a_0$ and $a_1$ , respectively, while the second two alternate between $a_0$ and $a_1$ depending on whether the history is of odd or even length:

$$
\Lambda = \left\{\lambda^ {0}: h \mapsto a _ {0}, \quad \lambda^ {1}: h \mapsto a _ {1}, \quad \lambda^ {2}: h \mapsto \left\{ \begin{array}{l l} a _ {0} & | h | \bmod 2 = 0, \\ a _ {1} & | h | \bmod 2 = 1, \end{array} \right. \quad \lambda^ {3}: h \mapsto \left\{ \begin{array}{l l} a _ {0} & | h | \bmod 2 = 1, \\ a _ {1} & | h | \bmod 2 = 0 \end{array} \right. \right\}. \tag {C.17}
$$

Note that there are two distinct subsets that each uniformly generate $\Lambda$ :

$$
\Lambda_ {\mathrm {B}} ^ {0, 1} = \left\{\lambda^ {0}, \lambda^ {1} \right\}, \quad \Lambda_ {\mathrm {B}} ^ {2, 3} = \left\{\lambda^ {2}, \lambda^ {3} \right\}. \tag {C.18}
$$

Next, notice that there cannot be a singleton basis by Proposition C.6, and thus, both $\Lambda_{\mathrm{B}}^{0,1}$ and $\Lambda_{\mathrm{B}}^{2,3}$ satisfy (1) $|\Lambda_{\mathrm{B}}^{0,1}| = |\Lambda_{\mathrm{B}}^{2,3}| = \mathrm{rank}(\Lambda)$ , and (2) both $\Lambda_{\mathrm{B}}^{0,1} \models \Lambda$ and $\Lambda_{\mathrm{B}}^{2,3} \models \Lambda$ .

Beyond the lack of redundancy of a basis, we may also be interested in their expressive power. For instance, if we compare two minimal bases, $\Lambda_{\mathrm{B}}^{1}$ and $\Lambda_{\mathrm{B}}^{2}$ , how might we justify which to use? To address this question, we consider another desirable property of a basis: universality.

Definition C.7. An agent basis $\Lambda_{\mathrm{B}}$ is universal if $\Lambda_{\mathrm{B}} \models \mathbb{A}$ .

Clearly, it might be desirable to work with a universal basis, as doing so ensures that the set of agents we consider in our design space is as rich as possible.
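The four-agent example of Proposition C.8 is small enough to verify by brute force. The sketch below uses a toy encoding with one dummy observation, so only $|h|$ matters; all names are illustrative. It confirms that both two-element subsets generate $\Lambda$ pointwise while no singleton does:

```python
from itertools import product

# Histories up to length 3 over a single dummy observation "o"
# (assumption: only |h| matters, matching the example of Prop. C.8).
histories = [h for n in range(4) for h in product(["o"], repeat=n)]

lam0 = lambda h: "a0"                               # always a0
lam1 = lambda h: "a1"                               # always a1
lam2 = lambda h: "a0" if len(h) % 2 == 0 else "a1"  # alternates, even -> a0
lam3 = lambda h: "a0" if len(h) % 2 == 1 else "a1"  # alternates, odd  -> a0
Lambda = [lam0, lam1, lam2, lam3]

def generates(basis, targets):
    # Pointwise coverage suffices because all learning rules are allowed.
    return all(
        any(b(h) == lam(h) for b in basis)
        for lam in targets
        for h in histories
    )

# Both two-element bases generate Lambda; no singleton does (Prop. C.6).
assert generates([lam0, lam1], Lambda)
assert generates([lam2, lam3], Lambda)
assert not any(generates([b], Lambda) for b in Lambda)
print("rank(Lambda) == 2, witnessed by two distinct minimal bases")
```

At every history one of the constant agents plays $a_0$ and the other $a_1$, and likewise for the alternating pair, which is exactly why both subsets achieve pointwise coverage.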
We next show that there is at least one natural basis that is both minimal and universal.

Proposition C.9. The basis,

$$
\Lambda_ {\mathrm {B}} ^ {\circ} = \left\{\lambda : \mathcal {O} \rightarrow \Delta (\mathcal {A}) \right\}, \tag {C.19}
$$

is a minimal universal basis:

1. $\Lambda_{\mathrm{B}}^{\circ} \models \mathbb{A}$ : The basis uniformly generates the set of all agents.
2. $\Lambda_{\mathrm{B}}^{\circ}$ is minimal.

Proof of Proposition C.9.

We prove each property separately.

1. $\Lambda_{\mathrm{B}}^{\circ} \models \mathbb{A}$ .

First, we show that the basis is universal: $\Lambda_{\mathrm{B}}^{\circ} \models \mathbb{A}$ . Recall that this amounts to showing that,

$$
\forall_ {\lambda \in \mathbb {A}} \forall_ {h \in \mathcal {H}} \exists_ {\lambda^ {\prime} \in \Lambda_ {\mathrm {B}} ^ {\circ}} \lambda (h) = \lambda^ {\prime} (h). \tag {C.20}
$$

Let $\lambda \in \mathbb{A}$ and $h\in \mathcal{H}$ be fixed but arbitrary. Now, let us label the action distribution produced by $\lambda (h)$ as $p_{\lambda (h)}$ . Let $o$ refer to the last observation contained in $h$ (or $\emptyset$ if $h = h_0 = \emptyset$ ). Now, construct the agent $\lambda_{\mathrm{B}}^{\circ}:o\mapsto p_{\lambda (h)}$ . By construction of $\Lambda_{\mathrm{B}}^{\circ}$ , this agent is guaranteed to be a member of $\Lambda_{\mathrm{B}}^{\circ}$ , and furthermore, we know that $\lambda_{\mathrm{B}}^{\circ}$ produces the same output as $\lambda$ on $h$ . Since both $\lambda$ and $h$ were chosen arbitrarily, the construction will work for any choice of $\lambda$ and $h$ , and we conclude that at every history, there exists a basis agent $\lambda_{\mathrm{B}}^{\circ}\in \Lambda_{\mathrm{B}}^{\circ}$ that produces the same probability distribution over actions as any given agent. Thus, the first property holds.

2. $\Lambda_{\mathrm{B}}^{\circ}$ is minimal.

Second, we show that $\Lambda_{\mathrm{B}}^{\circ}$ is a minimal basis of $\mathbb{A}$ .
Recall that since $\mathrm{rank}(\Lambda_{\mathrm{B}}^{\circ}) = \infty$ , the definition of a minimal basis means that:

$$
\forall_ {\Lambda_ {\mathrm {B}} \subseteq \Lambda_ {\mathrm {B}} ^ {\circ}} \Lambda_ {\mathrm {B}} \vDash \mathbb {A} \Rightarrow \Lambda_ {\mathrm {B}} = \Lambda_ {\mathrm {B}} ^ {\circ}. \tag {C.21}
$$

To do so, fix an arbitrary proper subset $\Lambda_{\mathrm{B}} \subsetneq \Lambda_{\mathrm{B}}^{\circ}$ . Notice that since $\Lambda_{\mathrm{B}}$ is a proper subset, there exists a non-empty set $\overline{\Lambda_{\mathrm{B}}}$ such that,

$$
\Lambda_ {\mathrm {B}} \cup \overline {{\Lambda_ {\mathrm {B}}}} = \Lambda_ {\mathrm {B}} ^ {\circ}.
$$

Now, we show that $\Lambda_{\mathrm{B}}$ cannot uniformly generate $\mathbb{A}$ by constructing an agent from $\overline{\Lambda_{\mathrm{B}}}$ . In particular, consider any element of $\overline{\Lambda_{\mathrm{B}}}$ , which, by construction of $\Lambda_{\mathrm{B}}^{\circ}$ , is some mapping from $\mathcal{O}$ to a choice of probability distribution over $\mathcal{A}$ . Let us refer to this agent's output probability distribution over actions as $\overline{p}$ . Notice that there cannot exist an agent in $\Lambda_{\mathrm{B}}$ that chooses $\overline{p}$ , otherwise $\Lambda_{\mathrm{B}}$ would not be a proper subset of $\Lambda_{\mathrm{B}}^{\circ}$ . Notice further that in the set of all agents, there are infinitely many agents that output $\overline{p}$ in at least one history. We conclude that $\Lambda_{\mathrm{B}}$ cannot uniformly generate $\mathbb{A}$ , as it does not contain any base element that produces $\overline{p}$ . The subset $\Lambda_{\mathrm{B}}$ was chosen arbitrarily, and thus the claim holds for any proper subset of $\Lambda_{\mathrm{B}}^{\circ}$ , and we conclude.

This completes the proof of both statements.

Corollary C.10.
As a direct consequence of Proposition C.9, every universal basis has infinite rank.

# C.1.2 Orthogonal and Parallel Agent Sets

Drawing inspiration from vector spaces, we introduce notions of orthogonal and parallel agent bases according to the agent sets they generate.

Definition C.8. A pair of agent bases $(\Lambda_{\mathrm{B}}^{1},\Lambda_{\mathrm{B}}^{2})$ are orthogonal if every pair of agent sets $(\Lambda^1,\Lambda^2)$ they respectively uniformly generate,

$$
\Lambda_ {\mathrm {B}} ^ {1} \vDash \Lambda^ {1}, \quad \Lambda_ {\mathrm {B}} ^ {2} \vDash \Lambda^ {2}, \tag {C.22}
$$

satisfies

$$
\Lambda^ {1} \cap \Lambda^ {2} = \emptyset . \tag {C.23}
$$

Naturally, this definition can be modified to account for environment-relative generation, or to be defined with respect to a particular set of learning rules, in which case two bases are orthogonal with respect to the learning rule set just when they generate different agent sets under the given learning rules. As with the variants of the two operators, we believe the details of such formalisms are easy to produce.

A few properties hold of any pair of orthogonal bases.

Proposition C.11. If two bases $\Lambda_{\mathrm{B}}^{1},\Lambda_{\mathrm{B}}^{2}$ are orthogonal, then the following properties hold:

1. $\Lambda_{\mathrm{B}}^{1}\cap \Lambda_{\mathrm{B}}^{2} = \emptyset$
2. Neither $\Lambda_{\mathrm{B}}^{1}$ nor $\Lambda_{\mathrm{B}}^{2}$ is universal.

Proof of Proposition C.11.

We prove each property independently.

1. $\Lambda_{\mathrm{B}}^{1} \cap \Lambda_{\mathrm{B}}^{2} = \emptyset$ .

We proceed toward contradiction. That is, suppose that both $\Lambda_{\mathrm{B}}^{1}$ is orthogonal to $\Lambda_{\mathrm{B}}^{2}$ , and that $\Lambda_{\mathrm{B}}^{1} \cap \Lambda_{\mathrm{B}}^{2} \neq \emptyset$ . Then, by the latter property, there is at least one agent that is an element of both bases. Call this agent $\lambda_{\mathrm{B}}^{\circ}$ .
It follows that the set $\Lambda_{\mathrm{B}}^{\circ} = \{\lambda_{\mathrm{B}}^{\circ}\}$ is a subset of both $\Lambda_{\mathrm{B}}^{1}$ and $\Lambda_{\mathrm{B}}^{2}$ . By Proposition C.2, it follows that $\Lambda_{\mathrm{B}}^{1} \models \Lambda_{\mathrm{B}}^{\circ}$ and $\Lambda_{\mathrm{B}}^{2} \models \Lambda_{\mathrm{B}}^{\circ}$ . But this contradicts the fact that $\Lambda_{\mathrm{B}}^{1}$ is orthogonal to $\Lambda_{\mathrm{B}}^{2}$ , and so we conclude.

2. Neither $\Lambda_{\mathrm{B}}^{1}$ nor $\Lambda_{\mathrm{B}}^{2}$ is universal.

We again proceed toward contradiction. Suppose without loss of generality that $\Lambda_{\mathrm{B}}^{1}$ is universal. Then, we know $\Lambda_{\mathrm{B}}^{1} \models \mathbb{A}$ . Now, we consider two cases: either $\Lambda_{\mathrm{B}}^{2}$ generates some non-empty set, $\Lambda^{2}$ , or it does not generate any sets. If it generates a set $\Lambda^{2}$ , then we arrive at a contradiction as $\Lambda^{2} \cap \mathbb{A} \neq \emptyset$ , which violates the definition of orthogonal bases. If it does not generate a set, this violates the definition of a basis, as any basis is by construction non-empty, and we know that containing even a single element is sufficient to generate at least one agent set by Proposition C.6. Therefore, in either of the two cases, we arrive at a contradiction, and thus conclude the argument.

This concludes the proof of each statement.

Corollary C.12. For any non-universal agent basis $\Lambda_{\mathrm{B}}$ , there exists an orthogonal agent basis, $\Lambda_{\mathrm{B}}^{\dagger}$ .

Conversely, two agent bases $\Lambda_{\mathrm{B}}^{1},\Lambda_{\mathrm{B}}^{2}$ are parallel just when they generate the same agent sets.

Definition C.9.
A pair of agent bases $(\Lambda_{\mathrm{B}}^{1},\Lambda_{\mathrm{B}}^{2})$ are parallel if for every agent set $\Lambda$ , $\Lambda_{\mathrm{B}}^{1} \models \Lambda$ if and only if $\Lambda_{\mathrm{B}}^{2} \models \Lambda$ . + +Proposition C.13. If two bases $\Lambda_{\mathrm{B}}^{1},\Lambda_{\mathrm{B}}^{2}$ are parallel, then the following properties hold: + +1. Both $\Lambda_{\mathrm{B}}^{1} \models \Lambda_{\mathrm{B}}^{2}$ and $\Lambda_{\mathrm{B}}^{2} \models \Lambda_{\mathrm{B}}^{1}$ . +2. $\mathrm{rank}(\Lambda_{\mathrm{B}}^{1}) = \mathrm{rank}(\Lambda_{\mathrm{B}}^{2})$ +3. $\Lambda_{\mathrm{B}}^{1}$ is universal if and only if $\Lambda_{\mathrm{B}}^{2}$ is universal. + +# Proof of Proposition C.13. + +We prove each property separately. + +1. Both $\Lambda_{\mathrm{B}}^{1} \models \Lambda_{\mathrm{B}}^{2}$ and $\Lambda_{\mathrm{B}}^{2} \models \Lambda_{\mathrm{B}}^{1}$ . + +The claim follows directly from the definition of parallel bases. An agent set $\Lambda$ is uniformly generated by $\Lambda_{\mathrm{B}}^{1}$ if and only if it is uniformly generated by $\Lambda_{\mathrm{B}}^{2}$ . Since by Proposition C.2 we know both $\Lambda_{\mathrm{B}}^{1} \models \Lambda_{\mathrm{B}}^{1}$ and $\Lambda_{\mathrm{B}}^{2} \models \Lambda_{\mathrm{B}}^{2}$ , we conclude that both $\Lambda_{\mathrm{B}}^{1} \models \Lambda_{\mathrm{B}}^{2}$ and $\Lambda_{\mathrm{B}}^{2} \models \Lambda_{\mathrm{B}}^{1}$ . + +2. $\mathrm{rank}(\Lambda_{\mathrm{B}}^{1}) = \mathrm{rank}(\Lambda_{\mathrm{B}}^{2}).$ + +Recall that the definition of rank refers to the size of the smallest basis that uniformly generates it, + +$$ +\operatorname {r a n k} (\Lambda) = \min _ {\Lambda_ {B} \subset \mathbb {A}} | \Lambda_ {B} | \qquad \text {s . t .} \qquad \Lambda_ {B} \models \Lambda . +$$ + +Now, note that by property (1.) of the proposition, both sets uniformly generate each other. 
Therefore, we know that

$$
\operatorname {r a n k} \left(\Lambda_ {\mathrm {B}} ^ {1}\right) \leq \min \left\{\left| \Lambda_ {\mathrm {B}} ^ {1} \right|, \left| \Lambda_ {\mathrm {B}} ^ {2} \right| \right\}, \quad \operatorname {r a n k} \left(\Lambda_ {\mathrm {B}} ^ {2}\right) \leq \min \left\{\left| \Lambda_ {\mathrm {B}} ^ {1} \right|, \left| \Lambda_ {\mathrm {B}} ^ {2} \right| \right\},
$$

since the smallest set that generates each basis is no larger than the basis itself, or the other basis. Moreover, because uniform generation is transitive, any basis that uniformly generates $\Lambda_{\mathrm{B}}^{1}$ also uniformly generates $\Lambda_{\mathrm{B}}^{2}$ and vice versa, so $\operatorname{rank}(\Lambda_{\mathrm{B}}^{1}) \leq \operatorname{rank}(\Lambda_{\mathrm{B}}^{2})$ and $\operatorname{rank}(\Lambda_{\mathrm{B}}^{2}) \leq \operatorname{rank}(\Lambda_{\mathrm{B}}^{1})$ , and the two ranks are equal.

3. $\Lambda_{\mathrm{B}}^{1}$ is universal if and only if $\Lambda_{\mathrm{B}}^{2}$ is universal.

The claim again follows by combining the definitions of universality and parallel: If $\Lambda_{\mathrm{B}}^{1}$ is universal, then by definition of parallel bases, $\Lambda_{\mathrm{B}}^{2}$ must uniformly generate all the same agent sets including $\mathbb{A}$ , and therefore $\Lambda_{\mathrm{B}}^{2}$ is universal, too. Now, if $\Lambda_{\mathrm{B}}^{1}$ is not universal, then it does not uniformly generate $\mathbb{A}$ . By the definition of parallel bases, we conclude that $\Lambda_{\mathrm{B}}^{2}$ does not uniformly generate $\mathbb{A}$ either. Both directions hold for each labeling of the two bases without loss of generality, and we conclude.

This completes the argument for each property, and we conclude.

# C.2 Analysis: Reaches

We now establish other properties of the reaches operator. Several of these results are based on a third modality of the reaches operator: always reaches, in which an agent eventually reaches an agent basis in all histories realizable in a given environment. We define this precisely as follows.

Definition C.10.
We say agent $\lambda \in \mathbb{A}$ always reaches $\Lambda_{\mathrm{B}}$ , denoted $\lambda \overset{\square}{\rightsquigarrow} \Lambda_{\mathrm{B}}$ , if and only if

$$
\forall_ {h \in \bar {\mathcal {H}}} \exists_ {t \in \mathbb {N} _ {0}} \forall_ {h ^ {\circ} \in \bar {\mathcal {H}} _ {h} ^ {t: \infty}} \exists_ {\lambda_ {\mathrm {B}} \in \Lambda_ {\mathrm {B}}} \forall_ {h ^ {\prime} \in \bar {\mathcal {H}} _ {h}} \lambda (h h ^ {\circ} h ^ {\prime}) = \lambda_ {\mathrm {B}} (h h ^ {\circ} h ^ {\prime}). \tag {C.24}
$$

The nested quantifiers allow the agent to become equivalent to different base behaviors depending on the evolution of the interaction stream. For example, in an environment that flips a coin to determine whether $a_{\mathrm{heads}}$ or $a_{\mathrm{tails}}$ is optimal, the agent $\lambda$ might output $a_{\mathrm{heads}}$ indefinitely if the coin is heads, but $a_{\mathrm{tails}}$ otherwise. In this case, such an agent will still always reach the basis $\Lambda_{\mathrm{B}} = \{\lambda_{\mathrm{B}}^{1}: h \mapsto a_{\mathrm{heads}}, \lambda_{\mathrm{B}}^{2}: h \mapsto a_{\mathrm{tails}}\}$ . Notice that we here make use of the notation $\bar{\mathcal{H}}_h^{t:\infty}$ , which refers to all history suffixes of length $t$ or greater, defined precisely as

$$
\bar {\mathcal {H}} _ {h} ^ {t: \infty} = \left\{h ^ {\prime} \in \bar {\mathcal {H}} _ {h}: | h ^ {\prime} | \geq t \right\}. \tag {C.25}
$$

We first show that the always reaches operator implies sometimes reaches.

Proposition C.14. If $\lambda \overset{\square}{\rightsquigarrow} \Lambda$ , then $\lambda \rightsquigarrow \Lambda$ .

# Proof of Proposition C.14.

Assume $\lambda \overset{\square}{\rightsquigarrow} \Lambda$ .
That is, expanding the definition of always reaches, we assume

$$
\forall_ {h \in \bar {\mathcal {H}}} \exists_ {t \in \mathbb {N} _ {0}} \forall_ {h ^ {\circ} \in \bar {\mathcal {H}} _ {h} ^ {t: \infty}} \exists_ {\lambda_ {\mathrm {B}} \in \Lambda_ {\mathrm {B}}} \forall_ {h ^ {\prime} \in \bar {\mathcal {H}} _ {h}} \lambda (h h ^ {\circ} h ^ {\prime}) = \lambda_ {\mathrm {B}} (h h ^ {\circ} h ^ {\prime}). \tag {C.26}
$$

Further recall that the definition of can reach, $\lambda \rightsquigarrow \Lambda_{\mathrm{B}}$ , is as follows:

$$
\exists_ {h \in \bar {\mathcal {H}}} \exists_ {\lambda_ {\mathrm {B}} \in \Lambda_ {\mathrm {B}}} \forall_ {h ^ {\prime} \in \bar {\mathcal {H}} _ {h}} \lambda (h h ^ {\prime}) = \lambda_ {\mathrm {B}} (h h ^ {\prime}). \tag {C.27}
$$

Then, the claim follows quite naturally: pick any realizable history $h \in \bar{\mathcal{H}}$ . By our initial assumption that $\lambda \overset{\square}{\rightsquigarrow} \Lambda$ , it follows (by Equation C.26) that there is a time $t$ and a realizable history suffix $h^{\circ}$ for which

$$
\exists_ {\lambda_ {\mathrm {B}} \in \Lambda_ {\mathrm {B}}} \forall_ {h ^ {\prime} \in \bar {\mathcal {H}} _ {h}} \lambda (h h ^ {\circ} h ^ {\prime}) = \lambda_ {\mathrm {B}} (h h ^ {\circ} h ^ {\prime}).
$$

Since $h^{\circ} \in \bar{\mathcal{H}}_h$ , we know $hh^{\circ}$ is a realizable history. Therefore, there exists a realizable history, $h^{*} = hh^{\circ}$ , for which $\exists_{\lambda_{\mathrm{B}} \in \Lambda_{\mathrm{B}}} \forall_{h' \in \bar{\mathcal{H}}_{h^{*}}} \lambda(h^{*}h') = \lambda_{\mathrm{B}}(h^{*}h')$ holds. But this is exactly the definition of can reach, and therefore, we conclude the argument.

Next, we highlight the fact that every agent in a basis also reaches that basis.

Proposition C.15. For any agent set $\Lambda$ , it holds that $\lambda \overset{\square}{\rightsquigarrow} \Lambda$ for every $\lambda \in \Lambda$ .

Proof of Proposition C.15.
The proposition is straightforward, as any $\lambda \in \Lambda$ will be equivalent to itself in behavior for all histories.

Corollary C.16. As a corollary of Proposition C.15, any pair of agent sets $(\Lambda_{\mathrm{small}}, \Lambda_{\mathrm{big}})$ where $\Lambda_{\mathrm{small}} \subseteq \Lambda_{\mathrm{big}}$ , satisfies

$$
\forall_ {\lambda \in \Lambda_ {\mathrm {small}}} \lambda \overset{\square}{\rightsquigarrow} \Lambda_ {\mathrm {big}}. \tag {C.28}
$$

We further show that, unlike sometimes and never reaches, always reaches is transitive.

Proposition C.17. Always reaches is transitive.

# Proof of Proposition C.17.

We proceed by assuming that both $\forall_{\lambda^1 \in \Lambda^1} \lambda^1 \overset{\square}{\rightsquigarrow} \Lambda^2$ and $\forall_{\lambda^2 \in \Lambda^2} \lambda^2 \overset{\square}{\rightsquigarrow} \Lambda^3$ , and show that it must follow that $\forall_{\lambda^1 \in \Lambda^1} \lambda^1 \overset{\square}{\rightsquigarrow} \Lambda^3$ . To do so, pick a fixed but arbitrary $\lambda^1 \in \Lambda^1$ , and expand $\lambda^1 \overset{\square}{\rightsquigarrow} \Lambda^2$ as

$$
\forall_ {h \in \bar {\mathcal {H}}} \exists_ {t \in \mathbb {N} _ {0}} \forall_ {h ^ {\circ} \in \bar {\mathcal {H}} _ {h} ^ {t: \infty}} \exists_ {\lambda^ {2} \in \Lambda^ {2}} \forall_ {h ^ {\prime} \in \bar {\mathcal {H}} _ {h}} \lambda^ {1} (h h ^ {\circ} h ^ {\prime}) = \lambda^ {2} (h h ^ {\circ} h ^ {\prime}).
$$

Now, consider any realizable history $hh^\circ h'$ : we know that the corresponding $\lambda^2$ that produces the same action distribution as $\lambda^1$ also satisfies $\lambda^2 \overset{\square}{\rightsquigarrow} \Lambda^3$ . Thus, there must exist some time $\bar{t}$ at which any realizable history $\bar{h} \bar{h}^\circ$ will satisfy $\exists_{\lambda^3 \in \Lambda^3} \forall_{\bar{h}' \in \bar{\mathcal{H}}_{\bar{h}}} \lambda^2 (\bar{h} \bar{h}^\circ \bar{h}') = \lambda^3 (\bar{h} \bar{h}^\circ \bar{h}')$ .
But then there exists a time $\bar{t}$ that ensures every $\lambda^2 \in \Lambda^2$ will have a corresponding $\lambda^3 \in \Lambda^3$ with the same action distribution at all subsequent realizable histories.

Therefore,

$$
\forall_ {h \in \bar {\mathcal {H}}} \exists_ {t ^ {\prime} \in \mathbb {N} _ {0}} \forall_ {h ^ {\circ} \in \bar {\mathcal {H}} _ {h} ^ {t ^ {\prime}: \infty}} \exists_ {\lambda^ {2} \in \Lambda^ {2}} \forall_ {h ^ {\prime} \in \bar {\mathcal {H}} _ {h}} \lambda^ {1} (h h ^ {\circ} h ^ {\prime}) = \underbrace {\lambda^ {2} (h h ^ {\circ} h ^ {\prime})} _ {\exists_ {\lambda^ {3} \in \Lambda^ {3}} = \lambda^ {3} (h h ^ {\circ} h ^ {\prime})}.
$$

Thus, rewriting,

$$
\forall_ {h \in \bar {\mathcal {H}}} \exists_ {t ^ {\prime} \in \mathbb {N} _ {0}} \forall_ {h ^ {\circ} \in \bar {\mathcal {H}} _ {h} ^ {t ^ {\prime}: \infty}} \exists_ {\lambda^ {3} \in \Lambda^ {3}} \forall_ {h ^ {\prime} \in \bar {\mathcal {H}} _ {h}} \lambda^ {1} (h h ^ {\circ} h ^ {\prime}) = \lambda^ {3} (h h ^ {\circ} h ^ {\prime}).
$$

But this is precisely the definition of always reaches, and thus we conclude.

Next, we show two basic properties of the set of all agents: it uniformly generates all agent sets, and it is always reached by all agents.

Proposition C.18. For any $e$ , the set of all agents $\mathbb{A}$ (i) uniformly generates all other agent sets, and (ii) is always reached by all agents:

$$
(i)\ \forall_ {\Lambda \subseteq \mathbb {A}}\ \mathbb {A} \vDash \Lambda, \qquad (ii)\ \forall_ {\lambda \in \mathbb {A}}\ \lambda \overset{\square}{\rightsquigarrow} \mathbb {A}. \tag {C.29}
$$

Proof of Proposition C.18.

$(i)$. $\forall_{\Lambda \subseteq \mathbb{A}} \mathbb{A} \vDash \Lambda$ .

The property holds as a straightforward consequence of Proposition C.2: since any set $\Lambda$ is a subset of $\mathbb{A}$ , it follows that $\mathbb{A} \models \Lambda$ .

$(ii)$. $\forall_{\lambda \in \mathbb{A}} \lambda \overset{\square}{\rightsquigarrow} \mathbb{A}$ .

The property holds as a straightforward consequence of Proposition C.15: since every agent satisfies $\lambda \in \mathbb{A}$ , it follows that $\lambda \overset{\square}{\rightsquigarrow} \mathbb{A}$ .

This concludes the argument of both statements.

# C.3 Figure: Set Relations in CRL

Finally, in Figure 3 we present a visual depicting the set relations in CRL between an agent basis $\Lambda_{\mathrm{B}}$ , an agent set it generates $\Lambda$ , and the three agent sets corresponding to those agents in $\Lambda$ that (i) sometimes, (ii) never, or (iii) always reach the basis. First, we highlight that we visualize $\Lambda_{\mathrm{B}}$ as a subset of $\Lambda$ since we define $\Lambda_{\mathrm{B}} \subset \Lambda$ in CRL (Definition 4.2). However, there can exist triples $(\Lambda_{\mathrm{B}}, \Lambda, e)$ such that $\Lambda_{\mathrm{B}} \vDash \Lambda$ , but $\Lambda_{\mathrm{B}}$ is not a subset of $\Lambda$ . Such cases are slightly peculiar, since it means that the basis contains agents that cannot be expressed by the agent set $\Lambda$ . Such cases are not in line with our definition of CRL, so we instead opt to visualize $\Lambda_{\mathrm{B}}$ as a subset of $\Lambda$ . Next, notice that the basis is a subset of both the agents that always reach the basis and the agents that sometimes reach the basis—this follows directly from the combination of Proposition C.14 and point (3.) of Theorem 4.3.
By similar reasoning from Proposition C.14, we know that the set of agents that always reaches $\Lambda_{\mathrm{B}}$ is a subset of the agents that sometimes reach the basis. Further, since sometimes and never reaches are negations of one another (Remark 3.2), observe that the two sets are disjoint, and together comprise the entirety of $\Lambda$ . Lastly, we know that the set of optimal agents, $\Lambda^{*}$ , contains only agents that never reach the basis, and thus the set $\Lambda^{*}$ is disjoint from $\Lambda_{\mathrm{B}}$ and the set of agents that sometimes reach $\Lambda_{\mathrm{B}}$ .

![](images/e8a41257889ec94bb9684e9a92f4ae789b281e5338d9c47bf5ca9f72e04d3e39.jpg)
Figure 3: A depiction of the division of a set of agents $\Lambda$ relative to a basis $\Lambda_{\mathrm{B}}$ through the reaches operator in CRL.
# A Diffusion-Model of Joint Interactive Navigation
Matthew Niedoba $^{1,2}$ J. Wilder Lavington $^{1,2}$ Yunpeng Liu $^{1,2}$ Vasileios Lioutas $^{1,2}$

Justice Sefas $^{1,2}$ Xiaoxuan Liang $^{1,2}$ Dylan Green $^{1,2}$ Setareh Dabiri $^{2}$

Berend Zwartsenberg $^{2}$ Adam Scibior $^{1,2}$ Frank Wood $^{1,2}$

$^{1}$ University of British Columbia, $^{2}$ Inverted AI

mniedoba@cs.ubc.ca

# Abstract

Simulation of autonomous vehicle systems requires that simulated traffic participants exhibit diverse and realistic behaviors. The use of prerecorded real-world traffic scenarios in simulation ensures realism but the rarity of safety-critical events makes large scale collection of driving scenarios expensive. In this paper, we present DJINN – a diffusion-based method of generating traffic scenarios. Our approach jointly diffuses the trajectories of all agents, conditioned on a flexible set of state observations from the past, present, or future. On popular trajectory forecasting datasets, we report state of the art performance on joint trajectory metrics. In addition, we demonstrate how DJINN flexibly enables direct test-time sampling from a variety of valuable conditional distributions including goal-based sampling, behavior-class sampling, and scenario editing.

# 1 Introduction

Accurate simulations are critical to the development of autonomous vehicles (AVs) because they facilitate the safe testing of complex driving systems [15]. One of the most popular methods of simulation is virtual replay [46], in which the performance of autonomous systems is evaluated by replaying previously recorded traffic scenarios. Although virtual replay is a valuable tool for AV testing, recording diverse scenarios is expensive and time consuming, as safety-critical traffic behaviors are rare [17]. Methods for producing synthetic traffic scenarios of specific driving behaviors are therefore essential to accelerate AV development and simulation quality.
Producing these synthetic traffic scenarios involves generating the joint future motion of all the agents in a scene, a task which is closely related to the problem of trajectory forecasting. Due to the complexity of learning a fully autonomous end-to-end vehicle controller, researchers often opt to split the problem into three main tasks [52]: perception, trajectory forecasting, and planning. In trajectory forecasting, the future positions of all agents are predicted up to a specified future time based on the agent histories and the road information. Due to the utility of trajectory forecasting models in autonomous vehicle systems along with the availability of standard datasets and benchmarks to measure progress [4, 53], a variety of effective trajectory forecasting methods are now available. Unfortunately, most methods produce deterministic sets of trajectory forecasts per-agent [47, 9] which are difficult to combine to produce realistic joint traffic scenes [30].

Generative models of driving behavior have been proposed as an alternative to deterministic trajectory forecasting methods for traffic scene generation [40, 46]. These models re-frame trajectory forecasting as modeling the joint distribution of future agent states conditioned on past observations and map context. However, given that the distribution of traffic scenes in motion forecasting datasets is similar to real-world driving, modeling the data distribution does not ensure that models will generate rare, safety-critical events.

To alleviate these issues we propose DJINN, a model which generatively produces joint traffic scenarios with flexible conditioning. DJINN is a diffusion model over the joint states of all agents in the scene. Similar to [30], our model is conditioned on a flexible set of agent states. By modifying the conditioning set at test-time, DJINN is able to draw traffic scenarios from a variety of conditional distributions of interest.
These distributions include sampling scenes conditioned on specific goal states or upsampling trajectories from sparse waypoints. Additionally, the joint diffusion structure of DJINN enables test-time diffusion guidance. Utilizing these methods enables further control over the conditioning of traffic scenes based on behavior modes, agent states, or scene editing. + +We evaluate the quality of sampled trajectories with both joint and ego-only motion forecasting on the Argoverse [4] and INTERACTION [53] datasets. We report excellent ego-only motion forecasting and outperform Scene Transformer on joint motion forecasting metrics. We further demonstrate both DJINN's flexibility and compatibility with various forms of test-time diffusion guidance by generating goal-directed samples, examples of cut-in driving behaviors, and editing replay logs. + +# 2 Related Work + +Trajectory Forecasting: A wide variety of methods have been proposed to address the problem of trajectory forecasting. Two attributes which divide this area of work are the output representation type and the agents for which predictions are made. The most common class of models deterministically predict the distribution of ego agent trajectories using a weighted trajectory set either with or without uncertainties. Due to the applicability of this representation as the input for real-time self-driving planners, there are numerous prior methods of this type. Some approaches rasterize the scene into a birdview image and use CNNs to predict a discrete set of future trajectories for the ego agent [5, 3, 32]. The convolutional architecture of these methods captures local information around the agent well, but the birdview image size and resolution limit the ability to capture high speed and long-range interactions. To address these challenges, other prior approaches encode agent states directly either by using RNNs [47, 39, 27], polyline encoders [7, 9] or 1D convolutions [24]. 
Agent features can be combined with roadgraph information in a variety of ways including graph convolutional networks [24, 1] or attention [29, 27].

To control the distribution of predicted trajectories, several methods have utilized mode or goal conditioning. One approach is to directly predict several goal targets before regressing trajectories to those targets [54, 9, 51, 8]. An alternate approach is to condition on trajectory prototypes [3] or latent embeddings [47].

Predicting joint traffic scenes using per-agent marginal trajectory sets is challenging due to the exponential growth of trajectory combinations. Recent approaches aim to rectify this by producing joint weighted sets of trajectories for all agents in a scene. M2I [45] generates joint trajectory sets by producing "reactor" trajectories which are conditioned on marginal "influencer" trajectories. Scene Transformer [30], which uses a similar backbone architecture to our method, uses a transformer [48] network to jointly produce trajectory sets for all agents in the scene.

As an alternative to deterministic predictions, multiple methods propose generative models of agent trajectories. A variety of generative model classes have been employed including Normalizing Flows [36], GANs [10, 38] or CVRNNs [40, 46]. Joint generative behavior models can either produce entire scenarios in one shot [10, 38, 36], or produce scenarios by autoregressively "rolling-out" agent trajectories [40, 46].

Diffusion Models: Diffusion models, proposed by Sohl-Dickstein et al. [41] and improved by Ho et al. [12], are a class of generative models which approximate the data distribution by reversing a forward process which gradually adds noise to the data. The schedule of noise addition can be a discrete process or can be represented by a continuous-time differential equation [44, 21].
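As an illustrative aside, the forward corruption process described above can be sketched in a few lines. This is not the EDM training objective; the data and the log-spaced noise levels below are placeholders chosen for illustration only:

```python
import numpy as np

# Illustrative sketch of a diffusion forward process: data is corrupted
# with Gaussian noise at increasing scales sigma; a model would then be
# trained to reverse this corruption. The sigma range is a placeholder.
rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 2))                  # stand-in for clean data
sigmas = np.geomspace(1e-3, 80.0, num=10)     # increasing noise levels

noised = [x0 + s * rng.normal(size=x0.shape) for s in sigmas]
# At the largest sigma, the sample is dominated by noise rather than data.
assert np.std(noised[-1]) > 10 * np.std(noised[0])
```

The reverse-time model (whether a discrete Markov chain or a continuous-time SDE/ODE solver) is what the cited works differ on; this snippet only shows the corruption direction.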
We utilize the diffusion parameterization introduced in EDM [21] in our work for its excellent performance and separation of training and sampling procedures.

This class of models has shown excellent sample quality in a variety of domains including images [12, 6], video [11, 14] and audio [22]. In addition, diffusion models can be adapted at test-time through various conditioning mechanisms. Classifier [6] and classifier-free guidance [13] have enabled powerful conditional generative models such as text-conditional image models [34, 37], while editing techniques [26, 31] have enabled iterative refinement of generated samples.

![](images/897f9c2bf80701db770d5a52722a78e8834add712d399de282aa16058327784c.jpg)
Figure 1: Top: Five example observation masks $\mathcal{O}$ demonstrating potential conditioning inputs to DJINN. Each element of each mask corresponds to the boolean value of $\mathcal{O}$ for that agent state. Individual agents are shown in rows, with timesteps as columns. Bottom: Generated traffic scenes corresponding to the type of observation masks above.

One recent application of diffusion models is planning. Diffuser [20] uses diffusion models to generate trajectories for offline reinforcement learning tasks. They condition their samples using classifier guidance to achieve high rewards and satisfy constraints. Trace and Pace [35] utilizes diffusion planning for guided pedestrian motion planning. In the vehicle planning domain, Controllable Traffic Generation (CTG) [55] builds on Diffuser, using diffusion models to generate trajectories which satisfy road rule constraints. Like CTG, our method also models the future trajectories of road users using diffusion models. However, our approach differs from CTG in both its output and its methods of conditioning. In CTG, marginal per-agent trajectory samples are combined into a joint scene representation by "rolling-out" a portion of each agent's trajectory before drawing new samples per-agent.
By contrast, DJINN models the full joint distribution of agent trajectories in one shot, with no re-planning or roll-outs required. The authors of CTG condition their model exclusively on the past states of other agents and the map, and use classifier guidance to condition their samples to follow road rules. In our method, we demonstrate conditioning on scene semantics via classifier guidance as well as conditioning on arbitrary state observations, including the past or future states of each agent, and control the strength of conditioning using classifier-free guidance as demonstrated in Fig. 1.

# 3 Background

# 3.1 Problem Formulation

Our work considers traffic scenarios consisting of $A$ agents across $T$ discrete timesteps driving on a roadway described by a set of roadgraph features $\mathbf{M}$. Each agent $a\in \{1,\dots ,A\}$ in the scene at time $t\in \{1,\ldots ,T\}$ is represented by a state $\mathbf{s}_t^a = \{x_t^a,y_t^a,\theta_t^a\}$ consisting of its 2D position $(x_{t}^{a},y_{t}^{a})$ and heading $\theta_{t}^{a}$. The joint representation of the scene $\mathbf{x}$ is the combination of all agents across all timesteps, $\mathbf{x} = \{\mathbf{s}_t^a \mid a\in \{1,\dots ,A\},\, t\in \{1,\dots ,T\}\} \in \mathbb{R}^{A\times T\times 3}$. We assume scenes are distributed according to an unknown distribution $p_{data}(\mathbf{x})$.

We introduce a model which is conditioned on the map features $\mathbf{M}$ and can moreover be flexibly conditioned on an arbitrary set of observed agent states. For the latter purpose, we consider a boolean variable $\mathcal{O} \in \{0,1\}^{A \times T}$. We say that a state in the scene is observed if $\mathcal{O}_t^a = 1$. Using $\mathcal{O}$, we partition the scene into two components. The observed portion of the scene is defined as $\mathbf{x}_{obs} = \{\mathbf{s}_t^a \mid \mathbf{s}_t^a \in \mathbf{x}, \mathcal{O}_t^a = 1\}$ while the unobserved, latent portion is $\mathbf{x}_{lat} = \mathbf{x} \setminus \mathbf{x}_{obs}$.
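As a concrete illustration (ours, not from the paper), this partition can be expressed with a boolean mask over an $[A, T, 3]$ scene tensor; the sizes and the "first three timesteps observed" pattern below are arbitrary choices:

```python
import numpy as np

# Toy sketch of the scene partition: A agents, T timesteps, 3 state features
# (x, y, heading). Sizes and mask pattern are illustrative, not from the paper.
A, T = 4, 10
rng = np.random.default_rng(0)
x = rng.normal(size=(A, T, 3))      # joint scene tensor x

O = np.zeros((A, T), dtype=bool)    # observation mask O
O[:, :3] = True                     # e.g. condition on each agent's first 3 states

x_obs = x[O]                        # observed states, shape [A*3, 3]
x_lat = x[~O]                       # latent states to generate, shape [A*(T-3), 3]

assert x_obs.shape == (A * 3, 3)
assert x_lat.shape == (A * (T - 3), 3)
```

Changing `O` (e.g. to mark only one agent's future as latent) selects a different forecasting task without changing the model interface.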
Figure 1 shows five choices for $\mathcal{O}$ and their corresponding tasks. Our ultimate goal is to learn a conditional distribution over the set of all latent agent states $\mathbf{x}_{lat}$ given the observed states $\mathbf{x}_{obs}$ and the map $\mathbf{M}$, by modelling $p(\mathbf{x}_{lat} | \mathbf{x}_{obs}, \mathbf{M})$. Using this probabilistic framework, we can represent conditional distributions corresponding to various trajectory forecasting tasks by modifying the observation mask $\mathcal{O}$ and the corresponding conditioning set $\mathbf{x}_{obs}$.

# 3.2 Diffusion Models

Diffusion models [41, 12] are a powerful class of generative models built upon a diffusion process which iteratively adds noise to the data. In the continuous time formulation of this process [44, 21], this iterative addition is described by a stochastic differential equation (SDE)

$$
d \mathbf{x}_{\tau} = \mu(\mathbf{x}_{\tau}, \tau)\, d\tau + \sigma(\tau)\, d\mathbf{w}. \tag{1}
$$

Here, $\tau \in [0,\tau_{max}]$ where $\tau_{max}$ is a fixed, large constant, $\mu(\mathbf{x}_{\tau},\tau)$ is the drift function and $\sigma(\tau)$ is the diffusion coefficient which scales standard Brownian motion $\mathbf{w}$. Note that our work has two notions of time. Throughout, we use $t$ to denote the "scenario time" and $\tau$ to represent "diffusion time". We express the marginal distribution of $\mathbf{x}_{\tau}$ at diffusion time $\tau$ as $p(\mathbf{x}_{\tau})$, with $p(\mathbf{x}_0)$ corresponding to the data distribution $p_{data}(\mathbf{x})$. Typically, $\mu(\mathbf{x}_{\tau},\tau)$, $\sigma(\tau)$, and $\tau_{max}$ are chosen such that the conditional density $p(\mathbf{x}_{\tau}|\mathbf{x}_0)$ is available in closed form and that $p(\mathbf{x}_{\tau_{max}})$ approximates a tractable Gaussian distribution $\pi(\mathbf{x})$.
Notably, for every diffusion SDE, there exists a corresponding probability flow (PF) ordinary differential equation (ODE) [44] whose marginal probability densities $p(\mathbf{x}_{\tau})$ match the densities of Eq. (1)

$$
d \mathbf{x}_{\tau} = \left[ \mu\left(\mathbf{x}_{\tau}, \tau\right) - \frac{1}{2} \sigma(\tau)^{2} \nabla_{\mathbf{x}} \log p\left(\mathbf{x}_{\tau}\right) \right] d\tau. \tag{2}
$$

Using the PF ODE, samples are generated from a diffusion model by integrating Eq. (2) from $\tau = \tau_{max}$ to $\tau = 0$ with initial condition $\mathbf{x}_{\tau_{max}} \sim \pi(\mathbf{x}_{\tau_{max}})$ using an ODE solver. Typically, integration is stopped at some small value $\epsilon$ for numerical stability. Solving this initial value problem requires evaluation of the score function $\nabla_{\mathbf{x}_{\tau}} \log p(\mathbf{x}_{\tau})$. Since $p(\mathbf{x}_{\tau})$ is not known in closed form, diffusion models learn an approximation of the score function $\mathbf{s}_{\theta}(\mathbf{x}_{\tau}, \tau) \approx \nabla_{\mathbf{x}_{\tau}} \log p(\mathbf{x}_{\tau})$ via score matching [16, 43, 44].

A useful property of diffusion models is the ability to model conditional distributions $p(\mathbf{x}_0|y)$ at test-time using guidance. Given some conditional information $y$, the key idea of guidance is to replace the score function in the PF ODE with an approximate conditional score function $\nabla_{\mathbf{x}_{\tau}}\log p(\mathbf{x}_{\tau}|y)$.

By using the gradient of a pretrained classifier $p_{\phi}(y|\mathbf{x}_{\tau})$, classifier guidance [6] approximates the conditional score function through a linear combination of the unconditional score function and the classifier gradient.
The parameter $\alpha$ controls the strength of the guidance

$$
\nabla_{\mathbf{x}_{\tau}} \log p(\mathbf{x}_{\tau} | y) \approx \mathbf{s}_{\theta}(\mathbf{x}_{\tau}, \tau) + \alpha \nabla_{\mathbf{x}_{\tau}} \log p_{\phi}(y | \mathbf{x}_{\tau}). \tag{3}
$$

One major drawback of classifier guidance is the need to train an external classifier. Instead, classifier-free guidance [13] utilizes a conditional score network $\mathbf{s}_{\theta}(\mathbf{x}_{\tau},\tau ,y)$. Then, a weighted average of the conditional and unconditional scores is used to estimate the conditional score function.

$$
\nabla_{\mathbf{x}_{\tau}} \log p(\mathbf{x}_{\tau} | y) \approx \lambda \mathbf{s}_{\theta}(\mathbf{x}_{\tau}, \tau, y) + (1 - \lambda) \mathbf{s}_{\theta}(\mathbf{x}_{\tau}, \tau). \tag{4}
$$

Here $\lambda$ is a scalar parameter which controls the strength of the guidance. In both cases, the approximate conditional score can be substituted into Eq. (2) to draw conditional samples from $p(\mathbf{x}_0|y)$.

# 4 DJINN

Our approach models the joint distribution of agent states $p(\mathbf{x}_{lat}|\mathbf{x}_{obs},\mathbf{M})$ conditioned on a set of observed states and the map context. For this purpose, we employ a diffusion model which diffuses directly over $\mathbf{x}_{lat}$ – the unobserved states of each agent in the scene for $t \in \{1,\dots ,T\}$. An important aspect of our method is the choice of observation mask $\mathcal{O}$ and observation set $\mathbf{x}_{obs}$ on which we condition. For this purpose we introduce a distribution over observation masks $p(\mathcal{O})$ which controls the tasks on which we train our model.

In the design of our diffusion process, we follow the choices from EDM [21], setting $\mu (\mathbf{x}_{lat,\tau},\tau) = \mathbf{0}$ and $\sigma (\tau) = \sqrt{2\tau}$ from Eq. (2).
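With these choices, Eq. (2) reduces to $d\mathbf{x}_{\tau} = -\tau \nabla_{\mathbf{x}_{\tau}} \log p(\mathbf{x}_{\tau})\, d\tau$. As a toy illustration (ours, not the paper's model), integrating this ODE for 1-D Gaussian data, where the score is known in closed form, recovers the data distribution:

```python
import numpy as np

# Toy probability-flow ODE sampler. Data distribution: N(0, s^2). With mu = 0
# and sigma(tau) = sqrt(2*tau), the marginal is p(x_tau) = N(0, s^2 + tau^2),
# so the exact score is -x / (s^2 + tau^2). All numbers here are illustrative.
s = 1.0
tau_max, eps, n_steps = 80.0, 1e-3, 1000
taus = np.linspace(tau_max, eps, n_steps)

rng = np.random.default_rng(0)
x = rng.normal(scale=tau_max, size=20_000)   # x_{tau_max} ~ pi(x) = N(0, tau_max^2)

for i in range(n_steps - 1):
    tau, d_tau = taus[i], taus[i + 1] - taus[i]   # d_tau < 0: reverse time
    score = -x / (s**2 + tau**2)                  # closed-form score of p(x_tau)
    x = x + (-tau * score) * d_tau                # Euler step of the PF ODE

print(float(x.std()))  # ≈ s: the samples now follow the data distribution
```

A learned score network and a higher-order solver (Heun, as used below) replace the closed-form score and Euler steps in practice.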
We also utilize their score function parameterization

$$
\nabla_{\mathbf{x}_{lat,\tau}} \log p(\mathbf{x}_{lat,\tau} | \mathbf{x}_{obs}, \mathbf{M}, \mathbf{c}) = \frac{D_{\theta}(\mathbf{x}_{lat,\tau}, \mathbf{x}_{obs}, \mathbf{M}, \mathbf{c}, \tau) - \mathbf{x}_{lat,\tau}}{\tau^{2}}. \tag{5}
$$

Here $D_{\theta}$ is a neural network which approximates the latent portion of the noise-free data $\mathbf{x}_{lat,0}$. In addition to $\mathbf{x}_{lat,\tau}$ and $\tau$, in our work $D_{\theta}$ also receives the map context $\mathbf{M}$, the clean observed states $\mathbf{x}_{obs}$, and $\mathbf{c}$, a collection of unmodelled agent features per observed agent timestep such as velocity, vehicle size, or agent type. We train our network on a modification of the objective from EDM [21]

$$
\mathbb{E}_{\mathbf{x}_{0}, \tau, \mathcal{O}, \mathbf{x}_{lat,\tau}} \| D_{\theta}\left(\mathbf{x}_{lat,\tau}, \mathbf{x}_{obs}, \mathbf{M}, \mathbf{c}, \tau\right) - \mathbf{x}_{lat,0} \|_{2}^{2}. \tag{6}
$$

Here, $\mathbf{x}_0\sim p_{data}(\mathbf{x})$, $\mathbf{x}_{\tau}\sim p(\mathbf{x}_{\tau}|\mathbf{x}_0) = \mathcal{N}(\mathbf{x}_0,\tau^2\mathbf{I})$, and $\mathcal{O}\sim p(\mathcal{O})$. We compute our loss over $\tau \sim p_{train}$, a log-normal distribution which controls the variance of the noise added to the data. We set the mean and variance of $p_{train}$ according to [21].

We use the Heun $2^{\mathrm{nd}}$ order sampler from [21] to sample traffic scenarios with no changes to the reported hyperparameters. Empirically, we found that deterministic sampling, corresponding to integrating the PF ODE, leads to higher quality samples than using an SDE solver. Unless otherwise noted, all samples are produced using 50 iterations of the ODE solver, which produces the highest quality samples as measured by ego and joint minADE and minFDE.
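For intuition, a single training step of the objective in Eq. (6) can be sketched as follows. The denoiser is a placeholder for the transformer $D_{\theta}$, and the log-normal parameters `P_mean, P_std = -1.2, 1.2` are assumed EDM defaults; neither is specified in this excerpt:

```python
import numpy as np

# One-sample sketch of the training objective in Eq. (6). The denoiser below
# stands in for the learned network D_theta; P_mean and P_std are assumed
# EDM defaults, not values given in the paper.
rng = np.random.default_rng(0)
P_mean, P_std = -1.2, 1.2

def D_theta(x_lat_tau, tau):
    # For x_0 ~ N(0, I), the optimal denoiser is E[x_0 | x_tau] = x_tau / (1 + tau^2);
    # used here as a stand-in for the transformer predicting clean latent states.
    return x_lat_tau / (1.0 + tau**2)

x_lat_0 = rng.normal(size=(4, 10, 3))                # clean latent agent states
tau = float(np.exp(P_mean + P_std * rng.normal()))   # tau ~ p_train (log-normal)
noise = rng.normal(size=x_lat_0.shape)
x_lat_tau = x_lat_0 + tau * noise                    # p(x_tau | x_0) = N(x_0, tau^2 I)

loss = float(np.mean((D_theta(x_lat_tau, tau) - x_lat_0) ** 2))
```

In the real model, the observed states, map, and agent features would also be passed to `D_theta`, and the loss would be averaged over a minibatch with EDM's noise-dependent weighting.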
Input Representation An important choice for trajectory forecasting models is the reference frame for the agent states. In our work, the diffused agent states and observations $\mathbf{x}_{obs}$ are centered around an "ego agent," which is often specified in trajectory forecasting datasets as the primary agent of interest. We transform $\mathbf{x}_0$ such that the scene is centered on the last observed position of this arbitrary "ego agent" and rotated so the last observed heading of the ego agent is zero. We scale the positions and headings of all agents in each ego-transformed scene to a standard deviation of 0.5.

We represent the map $\mathbf{M}$ as an unordered collection of polylines representing the center of each lane. Polylines are comprised of a fixed number of 2D points. We split longer polylines into multiple segments and pad shorter polylines to the fixed length. Each point has a boolean variable indicating whether the element is padding. Polyline points are represented in the same reference frame as the agent states and are scaled by the same amount as the agent position features.

Model Architecture Our score estimator network $D_{\theta}$ is parameterized by a transformer-based architecture similar to [30]. The network operates on a fixed $[A, T, F]$ shaped feature tensor composed of one $F$ dimensional feature vector per agent timestep. We use sinusoidal positional embeddings [48] to produce initial feature tensors. Noisy and observed agent states $\mathbf{x}_{\tau}$, $\mathbf{x}_{obs}$, the time indices $t = \{1, \dots, T\}$, and diffusion step $\tau$ are all embedded into $F$ dimensional embeddings. $\mathbf{x}_{lat,\tau}$ and $\mathbf{x}_{obs}$ are padded with zeros for observed and latent states respectively prior to embedding. A shared MLP projects the concatenated positional embeddings into an $F$ dimensional vector for each agent.

The main trunk of the network is comprised of a series of transformer layers [48].
Attention between all pairs of feature vectors is factorized into alternating time and agent transformer layers. In time transformer layers, self-attention is performed per-agent across each timestep of that agent's trajectory, allowing for temporal consistency along a trajectory. In agent transformer layers, self-attention is computed across all agents at a given time, updating each agent's features with information about the other agents at that time. We encode the map information $\mathbf{M}$ with a shared MLP that consumes flattened per-point and per-lane features to produce a fixed size embedding per lane. Cross attention between the collection of lane embeddings and agent states incorporates map information into the agent state features. Our network is comprised of 15 total transformer layers with a fixed feature dimension of 256. We use an MLP decoder after the final transformer layer to produce our estimate of $\mathbf{x}_{lat,0}$ . A full representation of our architecture is available in Appendix A. + +# 5 Guidance for Conditional Scene Generation + +So far, we have outlined our method for generating joint traffic scenes using DJINN. Next, we describe how the diffusion nature of DJINN enables fine-grained control over the generation and modification of driving scenarios. + +Table 1: Ego-only motion forecasting performance on Argoverse and INTERACTION datasets. minADE and minFDE metrics on both datasets indicate that DJINN produces ego samples which closely match the distribution of ego agent trajectories. + +(a) Argoverse test set + +
| Method | minADE$_6$ | minFDE$_6$ |
| --- | --- | --- |
| Jean [27] | 0.98 | 1.42 |
| mmTransformer [25] | 0.87 | 1.34 |
| DenseTNT [9] | 0.88 | 1.28 |
| MultiPath++ [47] | 0.79 | 1.214 |
| DCMS [50] | 0.77 | 1.14 |
| SceneTransformer [30] | 0.80 | 1.23 |
| Ours | 1.02 | 1.65 |
+ +(b) INTERACTION validation set + +
| Method | minADE$_6$ | minFDE$_6$ |
| --- | --- | --- |
| DESIRE [23] | 0.32 | 0.88 |
| TNT [54] | 0.21 | 0.67 |
| ReCoG [28] | 0.19 | 0.65 |
| ITRA [40] | 0.17 | 0.49 |
| StarNet [19] | 0.13 | 0.38 |
| SAN [18] | 0.10 | 0.29 |
| Ours | 0.14 | 0.39 |
+ +Table 2: Ego-only and joint metrics comparing DJINN to a jointly trained Scene Transformer model on the Argoverse validation set. DJINN produces better joint samples than SceneTransformer when measured by minSceneADE and minSceneFDE. + +
| Method | minADE$_6$ | minFDE$_6$ | minSceneADE$_6$ | minSceneFDE$_6$ |
| --- | --- | --- | --- | --- |
| Scene Transformer (Joint) | 0.848 | 1.398 | 1.019 | 1.835 |
| Ours | 0.871 | 1.409 | 0.895 | 1.758 |
+ +# 5.1 Classifier-free Guidance + +In Scene Transformer [30], a masked sequence modelling framework is introduced for goal-directed and agent-reactive scene predictions. One limitation of this approach is that conditioning is performed on precise agent states while future agent states or goals are usually uncertain. We mitigate this limitation through the use of classifier-free guidance. + +We assume access to a set of precise observations $\mathbf{x}_{obs}$ , and some set of additional agent states $\mathbf{x}_{cond}$ on which we wish to condition our sample. For instance, $\mathbf{x}_{cond}$ may include agent goals upon which we wish to condition. Let $\mathbf{x}_{obs}^{\prime} = \{\mathbf{x}_{obs} \cup \mathbf{x}_{cond}\}$ . Based on Eq. (4), the conditional score is through a weighted average of the score estimate conditioned on $\mathbf{x}_{obs}$ and the estimated conditioned on $\mathbf{x}_{obs}^{\prime}$ + +$$ +\begin{array}{l} \nabla_ {\mathbf {x} _ {l a t, \tau}} \log p (\mathbf {x} _ {l a t, \tau} | \mathbf {x} _ {o b s} ^ {\prime}) \approx \lambda \frac {D _ {\theta} (\mathbf {x} _ {l a t , \tau} , \mathbf {x} _ {o b s} ^ {\prime} , \mathbf {M} , \mathbf {c} , \tau) - \mathbf {x} _ {l a t , \tau}}{\tau^ {2}} \\ + (1 - \lambda) \frac {D _ {\theta} (\mathbf {x} _ {l a t , \tau} , \mathbf {x} _ {o b s} , \mathbf {M} , \mathbf {c} , \tau) - \mathbf {x} _ {l a t , \tau}}{\tau^ {2}}. \tag {7} \\ \end{array} +$$ + +To facilitate classifier-free conditioning, we train DJINN on a $p(\mathcal{O})$ representing varied conditioning tasks. These tasks include conditioning on agent history, agent goals, windows of agent states, and random agent states. A full overview of our task distribution is given in Appendix B. + +# 5.2 Classifier Guidance + +Many driving behaviors of individual or multiple agents can be categorized by a class $y$ based on their geometry, inter-agent interactions or map context. 
Examples of classes include driving maneuvers such as left turns, multi-agent behaviors such as yielding to another agent, or constraints such as trajectories which follow the speed limit. DJINN uses classifier guidance to condition scenes on these behavior classes. Given a set of example scenes corresponding to a behavior class $y$, we train a classifier to model $p_{\phi}(y|\mathbf{x})$. Using Eq. (3), we approximate the conditional score for conditional sampling. Importantly, due to the joint nature of our representation, classifiers for per-agent, multi-agent or whole-scene behaviors can all be used to condition sampled traffic scenes.

# 5.3 Scenario Editing

One benefit of sampling traffic scenes at once instead of autoregressively is the ability to edit generated or recorded traffic scenarios through stochastic differential editing [26]. Given a traffic scene $\mathbf{x}$, a user can manually modify the trajectories in the scene to produce a "guide" scene $\mathbf{x}'$ which approximates the desired trajectories in the scene. The guide scene is used to condition the start of a truncated reverse diffusion process by sampling $\mathbf{x}_{\tau_{edit}} \sim \mathcal{N}(\mathbf{x}', \tau_{edit}\mathbf{I})$, where $\tau_{edit}$ is an intermediate time in the diffusion process between 0 and $\tau_{max}$.

![](images/e4be4cf29032158b333c7553d5957c6887cb0b8c0293826659771f53f2349457.jpg)

![](images/a3315400b5dc0fbc74042ca445fe8ccb6040a84e89ba8782bf782207c621bb2c.jpg)

![](images/57c44410ff999fe76cfc9c6e9206e6ed3e1b4d930f6019b34302523c6cff8d3d.jpg)
Figure 2: The effect of classifier-free guidance weight on the spread of trajectories for goal-conditioned sampling (legend: observed, generated, $\star$ goal). Samples drawn from the INTERACTION validation set conditioned using classifier-free guidance on a goal state (star). As the guidance weight increases, deviation from the goals decreases.
Then, the edited scene is produced by integrating the PF ODE using the same ODE solver, starting from the initial condition $\mathbf{x}_{\tau_{edit}}$ at time $\tau_{edit}$. Through stochastic differential editing, the guide scene is translated into a realistic traffic scene with agent trajectories which approximate the guide trajectories. We empirically find $\tau_{edit} = 0.8$ to be a good trade-off between generating realistic trajectory scenes and maintaining the information of the guide scene.

# 6 Experiments

# 6.1 Motion Forecasting Performance

To measure the quality of the samples from DJINN, we evaluate our method on two popular motion prediction datasets, choosing $\mathcal{O}$ during training to match each dataset. For the INTERACTION dataset [53] scenes, we observe the state of all agents over the first second of the scene and generate the next three seconds. On the Argoverse dataset [4], our model observes agent states over the first two seconds of the scene and generates the next three seconds. Training hyperparameters for both models are found in Appendix A.

We note that both INTERACTION and Argoverse metrics measure an ego-only trajectory set using minADE and minFDE over 6 trajectories. Since DJINN produces stochastic samples of entire traffic scenes, a set of 6 random trajectories may not cover all future trajectory modes. To alleviate this, we draw a collection of 60 samples for each scenario and fit a 6-component Gaussian mixture model with diagonal covariances using EM, in a method similar to [47]. We use the means of the mixture components as the final DJINN prediction for motion forecasting benchmarks.

We present DJINN's performance on motion forecasting in Table 1, with Argoverse results in Table 1a and INTERACTION results in Table 1b. On INTERACTION, DJINN generates excellent ego vehicle trajectories, with similar minFDE and minADE to state-of-the-art methods on this dataset.
On the Argoverse test set we produce competitive metrics, although our results lag slightly behind top motion forecasting methods. We hypothesize that our lower performance on Argoverse is due to the lower quality agent tracks in this dataset when compared to INTERACTION.

We further analyze the joint motion forecasting performance of DJINN. To this end, we measure the Scene minADE and minFDE proposed by [2], which measure joint motion forecasting performance over a collection of traffic scenes. We compare DJINN against a reproduction of Scene Transformer trained for joint motion forecasting, using their reported hyperparameters. Ego-only and Scene motion forecasting performance is shown in Table 2. Although Scene Transformer predicts slightly better ego vehicle trajectories, we demonstrate that DJINN has superior joint motion forecasting capabilities.

# 6.2 State-conditioned Traffic Scene Generation

While DJINN is able to draw samples for motion forecasting benchmarks by conditioning on past observations of the scene, a key benefit of our approach is the ability to flexibly condition at test-time based on arbitrary agent states. We illustrate this test-time conditioning in Fig. 1 by generating samples from five conditional distributions which correspond to use-cases for our model.

![](images/32cfa19fcfc78859492a435755299a72a51e96ae066a97205d2f8b873ff5bbbb.jpg)
Figure 3: Examples of synthetic cut-in behaviors generated using classifier guidance. Samples are generated from the INTERACTION validation set conditioned on the first 10 agent states. Applying classifier guidance causes the other agent (green) to cut in front of the ego agent (purple). We generate trajectories for all agents in the scene, but other agent trajectories have been omitted for clarity.

Specifying exact agent states on which to condition can be challenging. One approach is to utilize the states of a prerecorded trajectory to produce conditioning inputs.
However, if one wishes to generate a trajectory which deviates from a recorded trajectory, there is uncertainty about the exact states on which to condition. In Fig. 2, we demonstrate how classifier-free guidance can be utilized to handle user uncertainty in conditioning agent states. In this example, we set the observation set $\mathbf{x}_{obs}$ to the first ten states of each agent's recorded trajectory. Further, we create a conditional observation set $\mathbf{x}_{obs}^{\prime}$ by augmenting $\mathbf{x}_{obs}$ with a goal state for each agent drawn from a normal distribution centered on the ground-truth final position of each agent, with $1\mathrm{m}$ variance. We sample traffic scenes with varying levels of classifier-free guidance strength, drawing two conclusions. First, DJINN is robust to goals which do not match the recorded final agent states. Second, the classifier-free guidance weight controls the emphasis of the goal conditioning, resulting in trajectory samples which cluster more tightly around the specified goal as the guidance strength is increased. With low guidance weight, the samples are diverse and do not closely match the specified goal position. As the weight increases, the spread of the trajectory distribution tightens, especially for fast, longer trajectories. These properties give users finer control over the distribution of traffic scenes when there is uncertainty over the conditioning states.

# 6.3 Conditional Generation from Behavior Classes

We now continue to demonstrate the flexibility of our approach by considering test-time conditioning of our model on specific driving behaviors through classifier guidance. Specifically, we highlight the ability to condition DJINN on the behavior class of cut-in trajectories by conditioning our INTERACTION-trained model with a cut-in classifier.

A "cut-in" occurs when one vehicle merges into the path of another, often requiring intervention by the cut-off driver.
We selected this behavior to demonstrate how classifier guidance can be used with our joint representation to sample scenes conditioned on the behavior of multiple agents. We condition DJINN trained on INTERACTION using a simple cut-in classifier. To train the classifier, we first mined a dataset of cut-in trajectory pairs from the "DR_CHN_Merging_ZS" location - a highway driving scene with some cut-in examples. Each trajectory pair is comprised of an "ego" and an "other" agent. We define a positive cut-in as a case where the future state of the other agent at time $t_{\text{other}}$ overlaps with a future state of the ego agent at time $t_{\text{ego}}$ such that $t_{\text{ego}} - 3s < t_{\text{other}} < t_{\text{ego}}$. Further, we filter cases where the initial state of the other agent overlaps with any part of the ego trajectory, to eliminate lane-following cases. We label a negative cut-in case as any other pair of trajectories in which the minimum distance between any pair of ego and other states is less than $5\mathrm{m}$.

![](images/3d2b3620185ef82a9a4b96b1436046ea59a7da6ae0f50219cb159bef8d8d2a98.jpg)
Figure 4: Two scenario fine-tuning examples (one per row) based on Argoverse validation set scenarios. Left: original scene with ground-truth trajectories shown for two interacting vehicles, with vehicle positions at the same time index for all agents. Middle: a manual edit of one agent's trajectory in each scene. One (top) replaces a right turn with a forward continuation; the other (bottom) shifts a trajectory back in space to cause a complex interaction to occur near the end of the trajectory. Right: the resulting stochastic differential edit of the original scenario. Both rows of the last column illustrate joint reactivity to the new trajectories arising from the edit; in the top row the left-turning vehicle yields, and in the bottom row both trajectories shift to avoid collision.
Using these heuristics, we collect a dataset of 2013 positive and 296751 negative examples. We trained a two-layer MLP classifier with 128 dimensions per hidden layer. The classifier takes as input the diffused trajectories of each agent, the validity of each timestep, and the diffusion time $\tau$. Using this classifier, we generate synthetic cut-in scenarios via Eq. (3). Examples of our synthetic cut-in scenarios are found in Fig. 3. The generated scenarios clearly demonstrate that our model can be conditioned to create synthetic cut-in behaviors. These synthetic examples provide evidence that, given a collection of trajectories exemplifying a behavior mode, or a heuristic which can be used to generate example trajectories, DJINN can be conditioned to generate synthetic examples representing that behavior mode. This finding further expands the flexibility of our model to generate trajectory samples from valuable conditional distributions.

# 6.4 Scenario Fine-Tuning

We exhibit another method of controlling the traffic scenarios generated with DJINN through fine-tuning. Since DJINN diffuses entire traffic scenes without iterative replanning, we are able to use stochastic differential editing to modify the sampled scenes. Given a recorded or sampled traffic scene, stochastic differential editing can be used to fine-tune the scene through the use of a manually specified guide. In Fig. 4, we demonstrate how DJINN can fine-tune existing scenarios to produce new scenarios with realistic trajectories but complex interactions. Using two recorded validation set scenes from Argoverse, we aim to edit the scenes to generate more interactive trajectories between the agents. For this purpose, we generate a guide scene $\mathbf{x}_{guide}$ by manually adjusting the trajectories in each scene so that the future paths of two of the agents will intersect.
Through stochastic differential editing, we show that DJINN is able to produce realistic driving scenes which shift the guide scene trajectories to maintain their interactivity but avoid collisions between agents.

# 7 Conclusions

In this work, we present DJINN – a diffusion model of joint traffic scenes. By diffusing in a joint agent state representation, DJINN can be adapted at test time to a variety of modeling tasks through guidance methods and scenario editing. The power of this scenario generation model opens exciting possibilities. Future research may expand the variety of guidance classifiers, such as utilizing the classifiers proposed in [55] for traffic-rule constraint satisfaction. Another promising avenue of research is scaling DJINN for faster scenario generation. Although flexible, the diffusion structure of DJINN makes scenario generation relatively slow due to the iterative estimation of the score function. Distillation techniques such as consistency models [42] may be helpful in this regard to reduce the number of score estimates required per sample. Future work may also consider scaling the length and agent count in generated scenarios to improve the complexity of behaviors which can be generated. Other areas of future work include using DJINN in a model predictive control setting (hinted at in the predictive mask of Fig. 1) in which an ego action is scored using statistics of ego-action conditioned joint trajectories from DJINN.

# Acknowledgements

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada CIFAR AI Chairs Program, Inverted AI, MITACS, the Department of Energy through Lawrence Berkeley National Laboratory, and Google. This research was enabled in part by technical support and computational resources provided by the Digital Research Alliance of Canada.
This includes Compute Canada (alliancecan.ca), the Advanced Research Computing at the University of British Columbia (arc.ubc.ca), Amazon, and Oracle.

# References

[1] Sergio Casas, Cole Gulino, Renjie Liao, and Raquel Urtasun. Spagnn: Spatially-aware graph neural networks for relational behavior forecasting from sensor data. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 9491-9497. IEEE, 2020.
[2] Sergio Casas, Cole Gulino, Simon Suo, Katie Luo, Renjie Liao, and Raquel Urtasun. Implicit latent variable model for scene-consistent motion forecasting. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIII 16, pages 624-641. Springer, 2020.
[3] Yuning Chai, Benjamin Sapp, Mayank Bansal, and Dragomir Anguelov. Multipath: Multiple probabilistic anchor trajectory hypotheses for behavior prediction. In Conference on Robot Learning, pages 86-99. PMLR, 2020.
[4] Ming-Fang Chang, John Lambert, Patsorn Sangkloy, Jagjeet Singh, Slawomir Bak, Andrew Hartnett, De Wang, Peter Carr, Simon Lucey, Deva Ramanan, et al. Argoverse: 3d tracking and forecasting with rich maps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8748-8757, 2019.
[5] Henggang Cui, Vladan Radosavljevic, Fang-Chieh Chou, Tsung-Han Lin, Thi Nguyen, Tzu-Kuo Huang, Jeff Schneider, and Nemanja Djuric. Multimodal trajectory predictions for autonomous driving using deep convolutional networks. In 2019 International Conference on Robotics and Automation (ICRA), pages 2090-2096. IEEE, 2019.
[6] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021.
[7] Jiyang Gao, Chen Sun, Hang Zhao, Yi Shen, Dragomir Anguelov, Congcong Li, and Cordelia Schmid. Vectornet: Encoding hd maps and agent dynamics from vectorized representation.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11525-11533, 2020. +[8] Thomas Gilles, Stefano Sabatini, Dzmitry Tsishkou, Bogdan Stanciulescu, and Fabien Moutarde. Gohome: Graph-oriented heatmap output for future motion estimation. In 2022 International Conference on Robotics and Automation (ICRA), pages 9107-9114. IEEE, 2022. +[9] Junru Gu, Chen Sun, and Hang Zhao. Densetnt: End-to-end trajectory prediction from dense goal sets. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15303-15312, 2021. +[10] Agrim Gupta, Justin Johnson, Li Fei-Fei, Silvio Savarese, and Alexandre Alahi. Social gan: Socially acceptable trajectories with generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2255–2264, 2018. +[11] William Harvey, Saeid Naderiparizi, Vaden Masrani, Christian Dietrich Weilbach, and Frank Wood. Flexible diffusion modeling of long videos. In Advances in Neural Information Processing Systems, 2022. +[12] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020. + +[13] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021. +[14] Jonathan Ho, Tim Salimans, Alexey A. Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. Video diffusion models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. +[15] WuLing Huang, Kunfeng Wang, Yisheng Lv, and FengHua Zhu. Autonomous vehicles testing methods review. In 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), pages 163-168. IEEE, 2016. +[16] Aapo Hyvarinen and Peter Dayan. Estimation of non-normalized statistical models by score matching. 
Journal of Machine Learning Research, 6(4), 2005. +[17] Ashesh Jain, Luca Del Pero, Hugo Grimmett, and Peter Ondruska. Autonomy 2.0: Why is self-driving always 5 years away? arXiv preprint arXiv:2107.08142, 2021. +[18] Faris Janjos, Maxim Dolgov, Muhamed Kurić, Yinzhe Shen, and J Marius Zöllner. San: Scene anchor networks for joint action-space prediction. In 2022 IEEE Intelligent Vehicles Symposium (IV), pages 1751-1756. IEEE, 2022. +[19] Faris Janjos, Maxim Dolgov, and J Marius Zöllner. Starnet: Joint action-space prediction with star graphs and implicit global frame self-attention. arXiv preprint arXiv:2111.13566, 2021. +[20] Michael Janner, Yilun Du, Joshua Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. In International Conference on Machine Learning, 2022. +[21] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. In Advances in Neural Information Processing Systems, 2022. +[22] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatile diffusion model for audio synthesis. In International Conference on Learning Representations, 2021. +[23] Namhoon Lee, Wongun Choi, Paul Vernaza, Christopher B Choy, Philip HS Torr, and Manmohan Chandraker. Desire: Distant future prediction in dynamic scenes with interacting agents. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 336-345, 2017. +[24] Ming Liang, Bin Yang, Rui Hu, Yun Chen, Renjie Liao, Song Feng, and Raquel Urtasun. Learning lane graph representations for motion forecasting. In European Conference on Computer Vision, pages 541-556. Springer, 2020. +[25] Yicheng Liu, Jinghuai Zhang, Liangji Fang, Qinhong Jiang, and Bolei Zhou. Multimodal motion prediction with stacked transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7577-7586, 2021. 
+[26] Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Guided image synthesis and editing with stochastic differential equations. In International Conference on Learning Representations, 2021. +[27] Jean Mercat, Thomas Gilles, Nicole El Zoghby, Guillaume Sandou, Dominique Beauvois, and Guillermo Pita Gil. Multi-head attention for multi-modal joint vehicle motion forecasting. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 9638-9644. IEEE, 2020. +[28] Xiaoyu Mo, Yang Xing, and Chen Lv. Recog: A deep learning framework with heterogeneous graph for interaction-aware trajectory prediction. arXiv preprint arXiv:2012.05032, 2020. +[29] Nigamaa Nayakanti, Rami Al-Rfou, Aurick Zhou, Kratarth Goel, Khaled S Refaat, and Benjamin Sapp. Wayformer: Motion forecasting via simple & efficient attention networks. arXiv preprint arXiv:2207.05844, 2022. +[30] Jiquan Ngiam, Vijay Vasudevan, Benjamin Caine, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David J Weiss, Ben Sapp, Zhifeng Chen, and Jonathon Shlens. Scene transformer: A unified architecture for predicting future trajectories of multiple agents. In International Conference on Learning Representations, 2022. +[31] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. arXiv preprint arXiv:2302.03027, 2023. +[32] Tung Phan-Minh, Elena Corina Grigore, Freddy A Boulton, Oscar Beijbom, and Eric M Wolff. Covernet: Multimodal behavior prediction using trajectory sets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14074-14083, 2020. +[33] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652-660, 2017.
[34] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
[35] Davis Rempe, Zhengyi Luo, Xue Bin Peng, Ye Yuan, Kris Kitani, Karsten Kreis, Sanja Fidler, and Or Litany. Trace and pace: Controllable pedestrian animation via guided trajectory diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13756-13766, 2023.
[36] Nicholas Rhinehart, Rowan McAllister, Kris Kitani, and Sergey Levine. Precog: Prediction conditioned on goals in visual multi-agent settings. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2821-2830, 2019.
[37] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022.
[38] Amir Sadeghian, Vineet Kosaraju, Ali Sadeghian, Noriaki Hirose, Hamid Rezatofighi, and Silvio Savarese. Sophie: An attentive gan for predicting paths compliant to social and physical constraints. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1349-1358, 2019.
[39] Tim Salzmann, Boris Ivanovic, Punarjay Chakravarty, and Marco Pavone. Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVIII 16, pages 683-700. Springer, 2020.
[40] Adam Scibior, Vasileios Lioutas, Daniele Reda, Peyman Bateni, and Frank Wood. Imagining the road ahead: Multi-agent trajectory prediction via differentiable simulation. In 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), pages 720-725.
IEEE, 2021. +[41] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256-2265. PMLR, 2015. +[42] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. arXiv preprint arXiv:2303.01469, 2023. +[43] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32, 2019. +[44] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. +[45] Qiao Sun, Xin Huang, Junru Gu, Brian C Williams, and Hang Zhao. M2i: From factored marginal trajectory prediction to interactive prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6543-6552, 2022. +[46] Simon Suo, Sebastian Regalado, Sergio Casas, and Raquel Urtasun. Trafficsim: Learning to simulate realistic multi-agent behaviors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10400-10409, 2021. +[47] Balakrishnan Varadarajan, Ahmed Hefny, Avikalp Srivastava, Khaled S Refaat, Nigamaa Nayakanti, Andre Cornman, Kan Chen, Bertrand Douillard, Chi Pang Lam, Dragomir Anguelov, et al. Multipath++: Efficient information fusion and trajectory aggregation for behavior prediction. In 2022 International Conference on Robotics and Automation (ICRA), pages 7814-7821. IEEE, 2022. +[48] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 
+[49] Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the transformer architecture. In International Conference on Machine Learning, pages 10524-10533. PMLR, 2020. +[50] Maosheng Ye, Jiamiao Xu, Xunnong Xu, Tongyi Cao, and Qifeng Chen. Dcms: Motion forecasting with dual consistency and multi-pseudo-target supervision. arXiv preprint arXiv:2204.05859, 2022. +[51] Wenyuan Zeng, Ming Liang, Renjie Liao, and Raquel Urtasun. Lanercnn: Distributed representations for graph-centric motion forecasting. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 532-539. IEEE, 2021. +[52] Wenyuan Zeng, Wenjie Luo, Simon Suo, Abbas Sadat, Bin Yang, Sergio Casas, and Raquel Urtasun. End-to-end interpretable neural motion planner. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8660-8669, 2019. +[53] Wei Zhan, Liting Sun, Di Wang, Haojie Shi, Aubrey Clausse, Maximilian Naumann, Julius Kummerle, Hendrik Konigshof, Christoph Stiller, Arnaud de La Fortelle, et al. Interaction dataset: An international, adversarial and cooperative motion dataset in interactive driving scenarios with semantic maps. arXiv preprint arXiv:1910.03088, 2019. +[54] Hang Zhao, Jiyang Gao, Tian Lan, Chen Sun, Ben Sapp, Balakrishnan Varadarajan, Yue Shen, Yi Shen, Yuning Chai, Cordelia Schmid, et al. Tnt: Target-driven trajectory prediction. In Conference on Robot Learning, pages 895-904. PMLR, 2021. +[55] Ziyuan Zhong, Davis Rempe, Danfei Xu, Yuxiao Chen, Sushant Veer, Tong Che, Baishakhi Ray, and Marco Pavone. Guided conditional diffusion for controllable traffic simulation. arXiv preprint arXiv:2210.17366, 2022. 
+ +# Appendix + +# A Model Details + +# A.1 Preconditioning + +Following EDM [21], we precondition $D_{\theta}$ by combining $\mathbf{x}_{lat,\tau}$ and the output of our network $F_{\theta}$ using scaling factors + +$$ +D _ {\theta} \left(\mathbf {x} _ {l a t, \tau}, \mathbf {x} _ {o b s}, \mathbf {M}, \mathbf {c}, \tau\right) = c _ {s k i p} (\tau) \mathbf {x} _ {l a t, \tau} + c _ {o u t} (\tau) F _ {\theta} \left(c _ {i n} \mathbf {x} _ {l a t, \tau}, \mathbf {x} _ {o b s}, \mathbf {M}, \mathbf {c}, c _ {n o i s e} (\tau)\right). \tag {8} +$$ + +We use the scaling values reported in [21] without modification but report them in Table 3 for convenience. + +Table 3: Scaling Functions for Preconditioning + +
| Scaling Factor | Function |
| --- | --- |
| $c_{skip}$ | $\sigma_{data}^{2} / (\tau^{2} + \sigma_{data}^{2})$ |
| $c_{out}$ | $\tau \cdot \sigma_{data} / \sqrt{\tau^{2} + \sigma_{data}^{2}}$ |
| $c_{in}$ | $1 / \sqrt{\tau^{2} + \sigma_{data}^{2}}$ |
| $c_{noise}$ | $\frac{1}{4} \ln(\tau)$ |
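In code, Eq. (8) together with these scaling functions amounts to the following minimal sketch (function and variable names are ours; the paper scales features so that $\sigma_{data} = 0.5$):

```python
import math

def edm_scalings(tau, sigma_data=0.5):
    """Preconditioning scalings from Table 3 (EDM, Karras et al. [21])."""
    denom = tau ** 2 + sigma_data ** 2
    c_skip = sigma_data ** 2 / denom
    c_out = tau * sigma_data / math.sqrt(denom)
    c_in = 1.0 / math.sqrt(denom)
    c_noise = 0.25 * math.log(tau)
    return c_skip, c_out, c_in, c_noise

def precondition(x_lat_tau, f_theta_out, tau):
    """Eq. (8): combine the noisy input with the raw network output F_theta."""
    c_skip, c_out, _, _ = edm_scalings(tau)
    return c_skip * x_lat_tau + c_out * f_theta_out
```

Note that as $\tau \to 0$, $c_{skip} \to 1$ and $c_{out} \to 0$, so the denoiser output approaches the (already clean) input.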
Here, $\sigma_{data}$ is the standard deviation of the diffusion features. We scale the positions and headings of the agent so that $\sigma_{data}$ is 0.5 for all diffusion features.

# A.2 Architecture

DJINN utilizes a transformer based architecture for $F_{\theta}$ . An overview of the model structure is shown in Fig. 5.

# Feature Encoding

We encode the observed agent states $\mathbf{x}_{obs}$ , the noisy latent states $\mathbf{x}_{lat,\tau}$ , the scenario time $t$ and the diffusion time $\tau$ using sinusoidal positional encoding [48]. We represent the scenario time as an integer index increasing from 0 to $T$ with 0 corresponding to the earliest agent states. For each of the encoded features, we produce a 256-dimensional encoding vector. Important hyperparameters for sinusoidal positional embeddings are the maximum and minimum encoding periods, which we report in Table 4.

Table 4: Maximum and minimum positional encoding periods for DJINN input features
| Feature | Minimum Period | Maximum Period |
| --- | --- | --- |
| $\mathbf{x}_{obs}$, $\mathbf{x}_{lat,\tau}$ | 0.01 | 10 |
| $t$ | 1 | 100 |
| $\tau$ | 0.1 | 10,000 |
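The exact frequency spacing is not stated in the paper; a common sketch, assuming periods spaced geometrically between the minimum and maximum values of Table 4, is:

```python
import numpy as np

def sinusoidal_encoding(value, dim=256, min_period=0.01, max_period=10.0):
    """Sinusoidal positional encoding [48].
    Geometric spacing of periods between min_period and max_period
    is our assumption; the paper only reports the period bounds."""
    half = dim // 2
    ratios = np.arange(half) / max(half - 1, 1)
    periods = min_period * (max_period / min_period) ** ratios
    angles = 2.0 * np.pi * value / periods
    return np.concatenate([np.sin(angles), np.cos(angles)])
```

Each encoded feature would use its own period range, e.g. `min_period=0.1, max_period=10_000` for the diffusion time $\tau$.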
The concatenation of the positional encodings with additional agent state features $\mathbf{c}$ is fed through an MLP to form the input to the main transformer network. The additional agent state features consist of the agent velocity, the observed mask, and the agent size, which is available for INTERACTION only. The input MLP is shared across all agent states and contains two linear layers with hidden dimension 256 and ReLU non-linearities.

# Roadgraph Encoding

DJINN is conditioned on the geometry of the roadgraph through a collection of lane center polylines. Each polyline is comprised of an ordered series of 2D points which represent the approximate center of each driving lane. We fix the length of each polyline to 10 points. We split polylines longer than this threshold into approximately equal segments, and pad shorter polylines with zeros. We utilize a boolean feature to indicate which polyline points are padded. Unlike [30], we do not use a PointNet [33] to encode the roadgraph polylines. Instead, we encode the polylines into a 256-dimensional vector per polyline using a simple MLP. To generate the input to this MLP, we concatenate the position

![](images/6e5e15fc9c2fedfc767f55a6aef2872254a6d43b6af8cf910b3c661e96450403.jpg)
Figure 5: An overview of the DJINN architecture. The main network structure is comprised of time, agent and roadgraph attention layers. Features are encoded using positional encodings and MLPs. The output of the network is an estimate of the de-noised agent states.

and padding mask for each polyline, along with any additional per-polyline features present in the dataset. For both datasets, the MLP is comprised of four linear layers, with a hidden dimensionality of 256 and ReLU non-linearities.

# Transformer Network

The main transformer backbone of DJINN is comprised of 15 transformer layers which perform self-attention over the time and agent dimensions, and cross-attention with the roadgraph encodings.
We utilize the same transformer layers as those proposed in [30], but modify the number of layers and their ordering. Specifically, we include more time transformer layers, as we found this produced smoother trajectories. All attention layers consume and produce 256-dimensional features per agent state. We use four heads for each attention operation, and a 1024-dimensional hidden state in the feed-forward network. In the transformer layers, we use the pre-layernorm structure described in [49].

Due to batching and agents which are not tracked for the duration of the traffic scene, there is padding present in the agent feature tensor. The transformer layers account for padding in the scene by modifying the attention masks so that padded agent states are not attended to.

# Output MLP

We use a two-layer MLP with hidden dimension 256 to produce the final output for $F_{\theta}$ . We produce a three-dimensional vector per agent state for INTERACTION and a two-dimensional vector for Argoverse, since headings are not provided in that dataset.

# A.3 Training details

We train DJINN on two A100 GPUs for 150 epochs. We utilize the Adam optimizer with a learning rate of 3E-4 and default values for $\beta_{1}$ and $\beta_{2}$ . We use a linear learning rate ramp-up, scaling from 0 to 3E-4 over 0.1 epochs. We set the batch size to 32. We clip gradients to a maximum norm of 5. Training takes approximately 6 days to complete from scratch.

# B Observation Distribution

We train DJINN over a variety of observation masks $\mathcal{O}$ by randomly drawing masks from a training distribution $p(\mathcal{O})$ . Table 5 outlines this training task distribution. We refer to the length of agent state history observation for each dataset as $t_{obs}$ and the total number of timesteps as $T$ . $\mathcal{U}$ indicates a uniform distribution over integers.

Table 5: Task distribution for training DJINN.
The training observation mask $\mathcal{O}$ is sampled from this distribution with probabilities given in the rightmost column. + +
| Task | Description | Probability |
| --- | --- | --- |
| Predictive | Observe states where $t \in [0, t_{obs}]$. | 50% |
| Goal-Conditioned | Observe states where $t \in [0, t_{obs}]$ and the final state of 3 random agents. | 25% |
| Agent-Conditioned | Observe states where $t \in [0, t_{obs}]$ and the entire trajectory of 3 random agents. | 10% |
| Ego-Conditioned | Observe states where $t \in [0, t_{obs}]$ and the entire ego-agent trajectory. | 10% |
| Windowed | Observe states where $t \in [0, t_{start}]$ and $t \in (t_{start} + t_{obs}, T]$ where $t_{start} \sim \mathcal{U}(0, t_{obs})$. | 5% |
| Upsampling | Observe every $t_{obs}/T$ states, starting from $t_{start} \sim \mathcal{U}(0, t_{obs}/T)$. | 5% |
| Imputation | Randomly sample observing each state with probability $t_{obs}/T$. | 5% |
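Drawing a training task from this distribution can be sketched as follows; the task identifiers and the predictive-mask helper are our own illustrations, and `random.choices` normalizes the listed weights:

```python
import random

# Sampling weights as listed in Table 5 (random.choices normalizes them).
TASK_WEIGHTS = {
    "predictive": 50,
    "goal_conditioned": 25,
    "agent_conditioned": 10,
    "ego_conditioned": 10,
    "windowed": 5,
    "upsampling": 5,
    "imputation": 5,
}

def sample_task(rng):
    """Draw one training task name according to the Table 5 weights."""
    names = list(TASK_WEIGHTS)
    return rng.choices(names, weights=list(TASK_WEIGHTS.values()), k=1)[0]

def predictive_mask(T, t_obs, num_agents):
    """'Predictive' task: observe states where t is in [0, t_obs], per agent."""
    return [[t <= t_obs for t in range(T)] for _ in range(num_agents)]
```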
# C Additional Qualitative Results

Fig. 6 shows additional qualitative samples from DJINN on the INTERACTION dataset for a subset of the observation masks outlined in Fig. 1. Each row in the figure corresponds to a different task, and each element of the row is a sampled traffic scene.

# D Additional Quantitative Results

# D.1 Effect of Observation Distribution

To enable test-time conditioning through classifier-free guidance as outlined in section 6.2, we train DJINN on the observation distribution described in Appendix B. To quantify the effect of training on this distribution, we compare the sample quality of a DJINN model trained on the full observation distribution to one which is trained exclusively on the "Predictive Task." Table 6 shows the impact of the observation distribution as measured by trajectory forecasting metrics on samples drawn from INTERACTION dataset scenes.

Table 6 demonstrates that training on the full mixture of observation masks somewhat reduces the predictive performance of DJINN when compared to the model trained exclusively on the predictive task. However, the diversity of trajectories measured using MFD [40] increases when training on the more diverse distribution.

![](images/817089993817e3f11feba081aa87e798f318cd0f7caaf0b2ec888d103a3fc948.jpg)
Figure 6: Additional generated traffic scenes from the INTERACTION validation set. Each row demonstrates samples generated using a different observation mask.

Table 6: Comparison of trajectory forecasting performance for models trained with varying observation distributions. Trajectory metrics are measured using 6 samples per scene on the INTERACTION validation set.
| Observations | minADE | minFDE | Scene minADE | Scene minFDE | MFD |
| --- | --- | --- | --- | --- | --- |
| Predictive | 0.21 | 0.49 | 0.35 | 0.91 | 2.33 |
| Mixture | 0.26 | 0.63 | 0.45 | 1.17 | 3.11 |
# D.2 Effect of Reduced Sampling Steps

The continuous time training procedure of DJINN enables test-time variation in the number of sampling steps. Table 7 outlines the effect of reducing the number of sampling steps from the 50 steps which are used in all other experiments.

Table 7 shows that reducing the number of sampling steps down to 20 steps results in only modest trajectory forecasting performance reductions across all metrics. Using 10 steps severely impacts the quality of sampled scenes across all metrics. As sampling time scales linearly with the number of sampling steps, reducing the number of sampling steps allows for a performance/runtime tradeoff.

Table 7: Trajectory forecasting performance versus the number of timesteps used in the diffusion sampling procedure. Trajectory forecasting performance is measured using 6 samples per scene on the INTERACTION validation set.
| Diffusion Steps | minADE | minFDE | Scene minADE | Scene minFDE |
| --- | --- | --- | --- | --- |
| 10 | 0.28 | 0.64 | 0.45 | 1.135 |
| 20 | 0.22 | 0.51 | 0.37 | 0.95 |
| 30 | 0.22 | 0.50 | 0.36 | 0.92 |
| 40 | 0.21 | 0.50 | 0.35 | 0.92 |
| 50 | 0.21 | 0.49 | 0.35 | 0.92 |
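Varying the number of sampling steps amounts to re-discretizing the noise schedule. DJINN builds on EDM [21], whose standard discretization is sketched below; the default values of $\sigma_{min}$, $\sigma_{max}$, and $\rho$ here are EDM's, not values reported in this paper:

```python
def edm_sigma_schedule(n_steps, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Decreasing noise levels sigma_0 > ... > sigma_{n-1} (Karras et al. [21]).
    sigma_min/sigma_max/rho are EDM defaults, assumed here for illustration."""
    if n_steps == 1:
        return [sigma_max]
    a, b = sigma_max ** (1 / rho), sigma_min ** (1 / rho)
    return [(a + i / (n_steps - 1) * (b - a)) ** rho for i in range(n_steps)]
```

Since each step requires one score-function evaluation, halving `n_steps` roughly halves sampling time, matching the linear runtime scaling noted in D.2.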
+ +# D.3 Model Runtime + +We compare DJINN's runtime to SceneTransformer [30], varying the input size as measured by the number of agents in the scene. Runtimes are measured across 1000 samples on a GeForce RTX 2070 Mobile GPU. + +Table 8: Average scenario generation time for DJINN and Scene Transformer across varying scene sizes. + +
| Agent Count | Scene Transformer | DJINN - 50 Steps | DJINN - 25 Steps |
| --- | --- | --- | --- |
| 8 | 0.0126s | 0.574s | 1.15s |
| 16 | 0.0140s | 0.611s | 1.24s |
| 32 | 0.017s | 0.844s | 1.69s |
| 64 | 0.026s | 1.40s | 2.89s |
\ No newline at end of file diff --git a/adiffusionmodelofjointinteractivenavigation/images.zip b/adiffusionmodelofjointinteractivenavigation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ceef40b4cc1c5dfeea022806f235d3b9ae7ffbf7 --- /dev/null +++ b/adiffusionmodelofjointinteractivenavigation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b970583798652d13f34abbf93420df6a46ce51bdfbd140444ff9ebe759c95e21 +size 724636 diff --git a/adiffusionmodelofjointinteractivenavigation/layout.json b/adiffusionmodelofjointinteractivenavigation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f40d9c9d9549fac03a00e63e11141b0cbe8cb406 --- /dev/null +++ b/adiffusionmodelofjointinteractivenavigation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0413946903e2fbd3c5f521ce632174059334b9cf247fea6411e75613706bc325 +size 491186 diff --git a/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/921d5112-f478-4bc0-97ce-7f05272addcd_content_list.json b/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/921d5112-f478-4bc0-97ce-7f05272addcd_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..14f100df09c129e1d78d30922e230fac57c1434c --- /dev/null +++ b/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/921d5112-f478-4bc0-97ce-7f05272addcd_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a60556644eccae79086a1ff706acfcff2974661fce71c9c1010348ea4480c400 +size 105015 diff --git a/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/921d5112-f478-4bc0-97ce-7f05272addcd_model.json 
b/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/921d5112-f478-4bc0-97ce-7f05272addcd_model.json new file mode 100644 index 0000000000000000000000000000000000000000..67923cb0bfea80092c6c71a6b1f44c9359265b60 --- /dev/null +++ b/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/921d5112-f478-4bc0-97ce-7f05272addcd_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9b0da111777030f8084c13fa845b7052f0e3247a13248ecdccd930af4e53133 +size 134853 diff --git a/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/921d5112-f478-4bc0-97ce-7f05272addcd_origin.pdf b/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/921d5112-f478-4bc0-97ce-7f05272addcd_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8304a7b5dd43013fa908e0347b63575dbc74b5a2 --- /dev/null +++ b/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/921d5112-f478-4bc0-97ce-7f05272addcd_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:daebe5f848db362e092bc0e244ab0919da0bb08793ce63cda49f21dc23b6d78c +size 6728881 diff --git a/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/full.md b/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e5599fa72aaa5eed0d81f5aea1e8ead51b27d6f7 --- /dev/null +++ b/adualstreamneuralnetworkexplainsthefunctionalsegregationofdorsalandventralvisualpathwaysinhumanbrains/full.md @@ -0,0 +1,408 @@ +# A Dual-Stream Neural Network Explains the Functional Segregation of Dorsal and Ventral Visual Pathways in Human Brains + +Minkyu Choi $^{1}$ , Kuan Han $^{1}$ , Xiaokai Wang 
$^{2}$ , Yizhen Zhang $^{1,3}$ , and Zhongming Liu $^{1,2}$ + +$^{1}$ Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 + +$^{2}$ Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109 + +$^{3}$ Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143 + +{cminkyu, kuanhan, xiaokaiw, zhyz, zmliu}@umich.edu + +# Abstract + +The human visual system uses two parallel pathways for spatial processing and object recognition. In contrast, computer vision systems tend to use a single feedforward pathway, rendering them less robust, adaptive, or efficient than human vision. To bridge this gap, we developed a dual-stream vision model inspired by the human eyes and brain. At the input level, the model samples two complementary visual patterns to mimic how the human eyes use magnocellular and parvocellular retinal ganglion cells to separate retinal inputs to the brain. At the backend, the model processes the separate input patterns through two branches of convolutional neural networks (CNN) to mimic how the human brain uses the dorsal and ventral cortical pathways for parallel visual processing. The first branch (WhereCNN) samples a global view to learn spatial attention and control eye movements. The second branch (WhatCNN) samples a local view to represent the object around the fixation. Over time, the two branches interact recurrently to build a scene representation from moving fixations. We compared this model with the human brains processing the same movie and evaluated their functional alignment by linear transformation. The WhereCNN and WhatCNN branches were found to differentially match the dorsal and ventral pathways of the visual cortex, respectively, primarily due to their different learning objectives, rather than their distinctions in retinal sampling or sensitivity to attention-driven eye movements. 
These model-based results lead us to speculate that the distinct responses and representations of the ventral and dorsal streams are more influenced by their distinct goals in visual attention and object recognition than by their specific bias or selectivity in retinal inputs. This dual-stream model takes a further step in brain-inspired computer vision, enabling parallel neural networks to actively explore and understand the visual surroundings. + +# 1 Introduction + +The human visual system comprises two parallel and segregated streams of neural networks: the "where" stream and the "what" stream [1]. The "where" stream originates from magnocellular retinal ganglion cells and extends along the dorsal visual cortex. The "what" stream originates from parvocellular retinal ganglion cells and extends along the ventral visual cortex [2]. The two streams exhibit selective responses to different aspects of visual stimuli [3]. The "where" stream is tuned to coarse but fast information from a wide view, while the "what" stream is selective to fine but slow + +![](images/a7ca312de15e3c61903b9ebccf026adf3f08957db6dcbdea3a48b50af8c8b405.jpg) +Figure 1: Brain-inspired dual-stream vision model. The top illustrates the subcortical (dashed arrows) and cortical (solid arrows) pathways for parallel visual processing in the brain. Given a scene (e.g., "two foxes on the lawn"), the retina samples incoming light relative to the fixation of the eyes (shown as the cross). Magnocellular (orange) and parvocellular (blue) retinal ganglion cells encode complementary visual information into two sets of retinal inputs relayed onto separate layers in the lateral geniculate nuclei (LGN) and further onto different neurons in the primary visual cortex (V1). Within V1, the relative ratio of magnocellular vs. parvocellular projections is higher for the periphery and lower for the fovea. 
Beyond V1, the magnocellular pathway continues along the dorsal visual cortex towards the intraparietal areas and further onto the frontal eye field (FEF) for oculomotor control, while the parvocellular pathway continues along the ventral visual cortex towards the inferior temporal cortex and further onto the superior temporal areas for semantic cognition. The bottom illustrates our model architecture including WhereCNN and WhatCNN. The model's frontend mimics the human retina and generates two separate input patterns relative to the fixation. One pattern is wider but coarser while the other is narrower but finer, providing the respective inputs to WhereCNN and WhatCNN. With the wide-view input, WhereCNN generates a probability map of saliency from which the next fixation is sampled. With a narrow-view input, WhatCNN generates an object representation per each fixation and constructs a scene representation recurrently from multiple fixations. + +information from a narrow view [2, 4]. The two streams are thought to serve different purposes. The "where" stream zooms out for spatial analysis [5], visual attention [6, 7, 8], and guiding actions [9] such as eye movements [10], while the "what" stream zooms in to recognize the object around the fixation [11]. While being largely parallel, the two streams interact with each other [12]. In one way of their interaction, the "where" stream decides where to look next and guides the "what" stream to focus on a salient location for visual perception. As the eyes move around the visual environment, the interaction between the "where" and "what" streams builds a scene representation by accumulating object representations over time and space. This dual-stream architecture allows the brain to efficiently process visual information and support dynamic visual behaviors [13]. 
In contrast, computer vision systems tend to use a single stream of feedforward processing, acting as passive observers that sample visual information all at once with fixed and uniform patterns [14, 15, 16]. Compared to human vision, this processing is less robust, especially given adversarial attacks [17, 18]; it is less efficient since it samples visual information equally regardless of salience or nuisance [19]; it is less adaptive, lacking spatial attention for active sensing [20, 21]. These distinctions define a major gap between human and computer vision. Many visual tasks that are straightforward for humans are still challenging for machines [22, 23]. Therefore, computer vision may benefit from taking further inspiration from the brain by using a dual-stream architecture to learn adaptive and robust visual behaviors.

To gain insights into the computational mechanisms of human vision, researchers have developed image-computable models by utilizing goal-driven deep neural networks that simulate human perceptual behavior. In particular, convolutional neural networks (CNNs) are leading models of visual perception, capturing the hierarchical processing by the brain's ventral visual stream [24, 25, 26, 27, 28]. Previous models of this nature commonly utilize CNNs trained through supervised learning [24, 27, 25, 29, 26, 30], adversarial training [31, 32], unsupervised learning [33, 34], or self-supervised learning [35, 36, 37]. However, models of the dorsal stream remain relatively under-explored, despite a few studies [38, 39, 40, 41]. Existing testing of these models has primarily focused on static images presented briefly to the fovea, thus limiting their assessment to a narrow range of visual behaviors and processes [42].
A more comprehensive approach is needed to develop models that incorporate both dorsal and ventral stream processing and to assess those models against brain responses when humans engage both the dorsal and ventral streams to freely explore complex and dynamic visual environments, which may be simulated in experimental settings [43]. + +To meet this need, we have developed a dual-stream model to mimic the parallel ventral and dorsal streams in the human brain [1, 2, 3, 9]. The model includes two branches of convolutional neural networks: WhereCNN and WhatCNN, which share the same architecture but receive distinct visual inputs and generate different outputs. WhereCNN samples a wide view to learn spatial attention and where to direct the subsequent gaze, while WhatCNN samples a narrow view to learn object representations. By taking multiple gazes at a given scene, the model sequentially samples the salient locations and progressively constructs a scene representation over both space and time. To evaluate this dual-stream model as a model of the human visual system, we have tested its ability to reproduce human gaze behavior and predict functional brain scans from humans watching a movie with unconstrained eye movements. Our hypothesis is that the model's WhereCNN and WhatCNN branches can effectively predict the brain responses along the brain's dorsal and ventral visual pathways, respectively. In addition, we have also conducted experiments to evaluate the underlying factors contributing to the functional segregation of the brain's dorsal and ventral visual streams. Of particular interest were the relative contributions of retinal sampling, spatial attention, and attention-guided eye movement in shaping the function of the dorsal stream and its interplay with the ventral stream during dynamic natural vision. 
+ +# 2 Related Works + +# 2.1 Dorsal-stream vision + +Image-computable models of the brain's dorsal stream have been relatively limited compared to models of the ventral stream. Previous work has attempted to model the dorsal stream by training deep neural networks to detect motion [38] or classify actions [40] using video inputs. However, these models do not fully capture the neuroscientific understanding that the dorsal stream is involved in locating objects and guiding actions, leading to its designation as the "where" or "how" visual pathway. More recent work by Mineault et al. focused on training a dorsal-stream model to emulate human head movements during visual exploration [39]. Additionally, Bakhtiari et al. utilized predictive learning to train parallel pathways and observed the ventral-like and dorsal-like representations as an emergent consequence of structural segregation [41]. However, no prior work has explored neural network models that emulate how the dorsal stream learns spatial attention and guides eye movements for visual navigation. + +# 2.2 Spatial attention and eye movement + +Prior research in the field of computer vision has attempted to train models to attend to and selectively focus on salient objects within a scene [21, 44, 45], rather than processing the entire scene as a whole. This approach aligns with the brain's mechanism of spatial attention, where the dorsal stream acts as a global navigator, and the ventral stream functions as a local perceiver. In line with this mechanism, previous studies have employed dual-stream neural networks that process global and local features in parallel, aiming to achieve enhanced computational efficiency as a unified system [46, 44, 47, 48, 49]. + +However, these models do not fully replicate the way human eyes sample visual inputs during active exploration of the scene and thus still fall short in biological relevance. 

# 2.3 Foveated vision and retinal transformation

The human retina functions as a sophisticated camera that intelligently samples and transmits visual information. It exhibits the highest visual acuity in the central region of the visual field, a phenomenon referred to as foveated vision [50, 51, 52]. In contrast, peripheral vision has lower spatial acuity but higher temporal sensitivity, making it better suited for detecting motion. These properties of retinal sampling are potentially useful for training neural networks to enhance performance [53, 54, 55, 56] or robustness [57, 58, 19], or to augment data and synthesize images [59]. The retina also transmits information to the brain using distinct types of cells. The magnocellular and parvocellular retinal ganglion cells have different distributions and selectivity, and relay information through largely separate pathways. Taken together, the retina transforms visual information into segregated inputs for parallel visual processing. This biological mechanism has not been systematically investigated.

Unlike the above prior works, our work combines multiple biologically inspired mechanisms into an integral learnable model. It uses a frontend inspired by the human retina and applies complementary retinal sampling and transformation. It uses two parallel pathways inspired by the human dorsal and ventral streams. It uses spatial attention and object recognition as the distinct learning objectives for training the two pathways. It further uses attention to move fixations and thus allows the two pathways to interact for active sensing. Although these mechanisms have been explored separately in prior studies, their combination is novel and motivates our work to build and test such a model against the human brain and behavior in a naturalistic condition of freely watching a movie.

# 3 Methods

In our model, WhereCNN and WhatCNN serve distinct functions in processing objects within a scene.
WhereCNN identifies the spatial location of an object, determining "where" it is situated, while WhatCNN focuses on recognizing the identity of the object, determining "what" it is. When multiple objects are present in a scene, WhereCNN learns spatial attention and "how" to sequentially locate and fixate on each object. This allows the model to selectively attend to different objects in a sequence, mirroring the dynamic nature of human eye movements during visual exploration. Fig.1 illustrates and describes the human visual system that inspires us to design our model.

# 3.1 Model design and training

Akin to human eyes [60, 61], our model uses retinal transformation [62] to generate separate inputs to WhereCNN and WhatCNN. For both, the retinal input consists of $64 \times 64$ samples non-uniformly distributed around the fixation. When a point in the retinal image and its corresponding point in the visual world are described by the radial distance and the polar angle with respect to the fixation, their polar angles are the same while their radial distances are related by Eq. 1.

$$
r = g(r') = \frac{b}{\sqrt{\pi}} \cdot \frac{1 - \exp\left(\ln(a)\, r' / 2\right)}{1 - \exp\left(\ln(a) / 2\right)} \tag{1}
$$

where $r'$ and $r$ are the radial distances in the retinal and original images, respectively, $b$ is a constant that ensures $r_{\mathrm{max}} / g(r_{\mathrm{max}}') = 1$, and $a$ controls the degree of center-concentration. Given a larger $a$, more retinal samples fall closer to the fovea relative to the periphery. We set $a = 15$ for WhatCNN and $a = 2.5$ for WhereCNN. In this setting, WhereCNN is more selective to global features, while WhatCNN is more selective to local features, mirroring the sampling bias of magnocellular and parvocellular retinal ganglion cells, as illustrated in Fig.1.

Both WhereCNN and WhatCNN use similar backbone architectures. The backbone consists of four blocks of convolutional layers.
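For concreteness, the radial mapping of Eq. 1 can be evaluated in a few lines of Python. This is a minimal sketch of our own, not the authors' code: the function name `radial_map`, the normalization of $r'$ to $[0, 1]$, and the default $b = \sqrt{\pi}$ (which makes the outermost sample map to $r = 1$) are assumptions for illustration.

```python
import math

def radial_map(r_prime: float, a: float, b: float = math.sqrt(math.pi)) -> float:
    """Eq. 1: map a normalized retinal radius r' in [0, 1] to the radius r
    in the original image; a larger `a` packs more samples near the fovea."""
    num = 1.0 - math.exp(math.log(a) * r_prime / 2.0)
    den = 1.0 - math.exp(math.log(a) / 2.0)
    return (b / math.sqrt(math.pi)) * num / den

# The fovea (r' = 0) maps to the image center, and with b = sqrt(pi) the
# outermost sample (r' = 1) maps to r = 1.  A uniform grid in r' is denser
# near the fovea in the original image when `a` is large, so
# radial_map(0.1, a=15) < radial_map(0.1, a=2.5), mirroring the WhatCNN
# (a = 15, foveal) vs. WhereCNN (a = 2.5, peripheral) sampling bias.
```

In this sketch, the $64 \times 64$ retinal samples would sit on a uniform polar grid in $(r', \theta)$ and be pulled from the original image at $(g(r'), \theta)$.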
Each block includes two Conv2D layers (kernel size $3 \times 3$) followed by ReLU and BatchNorm. Applying $2 \times 2$ MaxPool between adjacent blocks progressively reduces the spatial dimension. The feature dimensions following the four blocks are 64, 128, 256, and 512, respectively. Atop this backbone CNN, both WhereCNN and WhatCNN use additional components to support different goals. For WhereCNN, the feature maps from the 3rd and 4th convolutional blocks are resized to $16 \times 16$ and concatenated, providing the input to an additional convolutional block. Its output feature map is subject to SoftMax to generate a probability map of visual saliency. By sampling according to the saliency probabilities, WhereCNN selects a single location for the next fixation. To prevent future fixations from revisiting previously attended areas, inhibition of return (IOR) [63] is applied. IOR keeps a record of previously visited locations, as defined in Eq. 2.

$$
\mathbf{IOR}(t) = \operatorname{ReLU}\left(\mathbf{1} - \sum_{\tau = 1}^{t} G(\boldsymbol{\mu} = \boldsymbol{l}_{\tau}, \boldsymbol{\Sigma} = \sigma^{2}\boldsymbol{I})\right) \tag{2}
$$

where $G(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is a 2D Gaussian function centered at $\boldsymbol{l}_{\tau}$ (the prior fixation at the $\tau$-th step) with standard deviation $\sigma$. Its values are normalized so that its maximum equals 1. By applying the IOR map to the predicted saliency map through element-wise multiplication, future fixations do not revisit areas already explored.

For WhatCNN, the output feature map from the 4th convolutional block, after global average pooling, is given as the input to an additional layer of Gated Recurrent Units (GRU) [64], which recurrently updates the representation across a sequence of fixations; a subsequent fully-connected layer then produces a cumulative representation.

We first pre-train each stream separately and then fine-tune them together through three stages.

Stage 1 - WhereCNN.
We first train WhereCNN for object recognition using ILSVRC2012 [65] and then fine-tune it to generate saliency maps matching human attention using the SALICON dataset [66]. In this stage, we use random fixations to generate the retinal inputs to WhereCNN. The Adam optimizer [67] ($\mathrm{lr} = 0.002$, $\beta_{1} = 0.9$, $\beta_{2} = 0.99$) is used with 25 epochs for SALICON training. At this stage, WhereCNN learns spatial attention from humans.

Stage 2 - WhatCNN. We first train WhatCNN for single-object recognition using ILSVRC2012 [65] and then fine-tune it for multi-object recognition using MSCOCO [68]. In this stage, we use the pre-trained WhereCNN to generate a sequence of eight fixations and accordingly apply the retinal transformation to generate a sequence of retinal inputs to WhatCNN for recurrent object recognition. Note that training in Stage 2 is confined to WhatCNN, leaving WhereCNN as pre-trained in Stage 1. The Adam optimizer $(\mathrm{lr} = 0.002, \beta_{1} = 0.9, \beta_{2} = 0.99)$ is used with 40 epochs for MSCOCO training.

Stage 3 - WhereCNN & WhatCNN. Lastly, we combine the two learning objectives with equal weights to train WhereCNN and WhatCNN jointly and end-to-end using eight fixations. The Adam optimizer $(\mathrm{lr} = 0.0002, \beta_{1} = 0.9, \beta_{2} = 0.99)$ is used with 25 epochs for training. In this stage, SALICON, which contains labels for both saliency prediction and object recognition, is used for training. More details can be found in Appendix B1.

# 3.2 Model evaluation with human gaze behavior and fMRI responses

We use two criteria to evaluate how well a model matches the brain given naturalistic and dynamic visual stimuli. First, the model should generate human-like visual behaviors, such as visual perception and gaze behavior.
Second, the model's internal responses to the stimuli should predict the brain's responses to the same stimuli through linear projection, implemented as linear encoding models [69]. For our dual-stream model, we hypothesize that WhereCNN better predicts dorsal-stream voxels and WhatCNN better predicts ventral-stream voxels.

For this purpose, we use a publicly available fMRI dataset from a prior study [70], in which a total of 11 human subjects (4 females) were instructed to watch the movie Raiders of the Lost Ark (115 minutes) with unconstrained eye movements. The movie was displayed on an LCD projector with a visual angle of $17^{\circ} \times 22.7^{\circ}$. Whole-brain fMRI data was acquired in a 3-T MRI system with a gradient-recalled echo planar imaging sequence (TR/TE = 2.5s/35ms, flip angle = $90^{\circ}$, nominal resolution = 3mm × 3mm × 3mm). We preprocess the data using the minimal preprocessing pipeline released by the Human Connectome Project (HCP) [71].

We test how well the model can predict the voxel-wise fMRI response to the movie stimuli through a learnable linear projection of artificial units in the model. To evaluate whether and how the two branches in the model differentially predict the two streams in the brain, we define two encoding models for each voxel: one based on WhereCNN and the other based on WhatCNN. We train and test the encoding models with data from different segments of the movie. To avoid overfitting, we apply dimension reduction to the internal responses in either WhereCNN or WhatCNN by applying principal component analysis (PCA) first to each layer and then to all layers while retaining $99\%$ of the variance [30, 33]. We further convolve the resulting principal components with a canonical hemodynamic response function (HRF) that peaks at 5 seconds and down-sample them to match the sampling rate of fMRI, generating the linear regressors used in the encoding model.
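The core of this voxel-wise encoding analysis, regularized linear regression from model features to fMRI responses scored by correlation, can be sketched as follows. This is a schematic of our own (the function names, the closed-form ridge solver, and the synthetic shapes are assumptions, not the study's code); in the actual analysis, the rows of `X` would be the HRF-convolved, down-sampled principal components of WhereCNN or WhatCNN activations.

```python
import numpy as np

def fit_ridge(X: np.ndarray, y: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """L2-regularized least squares: w = (X'X + alpha*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def encoding_accuracy(X_tr, y_tr, X_te, y_te, alpha: float = 1.0) -> float:
    """Fit a per-voxel encoding model on training data and return the
    Pearson correlation between predicted and measured test responses."""
    w = fit_ridge(X_tr, y_tr, alpha)
    return float(np.corrcoef(X_te @ w, y_te)[0, 1])
```

Here `X` is a (time × feature) regressor matrix and `y` a single voxel's fMRI time series; repeating the fit per voxel yields one accuracy map per branch.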
Using the training data ($81\%$ of the total data), we estimate the encoding parameters using L2-regularized least squares estimation. Using the held-out testing data ($19\%$), we test the encoding models for their ability to predict the fMRI responses observed at each voxel and measure the accuracy of prediction as the correlation between the predicted and measured fMRI responses, denoted as $r_{\text{where}}$ and $r_{\text{what}}$ for the encoding models based on WhereCNN and WhatCNN, respectively. We test the significance of the prediction using a block permutation test [72] with a block size of 20 seconds and 100,000 permutations and apply the false discovery rate (FDR) correction $(p < 0.05)$. We further differentiate the relative roles of the model's WhereCNN and WhatCNN branches in predicting the brain's dorsal and ventral streams for single voxels as well as regions of interest. For this, we define a relative performance (Eq. 3).

$$
p_{\text{where}} = \frac{r_{\text{where}}^{2}}{r_{\text{where}}^{2} + r_{\text{what}}^{2}} \tag{3}
$$

In the range from 0 to 1, $p_{\text{where}} > 0.5$ indicates better predictive performance by WhereCNN, while $p_{\text{where}} < 0.5$ indicates better predictive performance by WhatCNN.

# 3.3 Alternative models and control experiments

By design, the WhereCNN and WhatCNN branches within our model exhibit two key distinctions. WhereCNN is specifically trained to learn spatial attention by utilizing wider views, while WhatCNN focuses on object recognition through the use of local views. To explore the impact of input views and learning objectives on the model's capacity to predict brain responses, we introduce two modified control streams: ControlCNN-a and ControlCNN-b, designed as hybrid variants that mix input views and learning objectives. ControlCNN-a receives a narrower view as its input and is trained for saliency prediction.
Conversely, ControlCNN-b receives a broader view and is trained primarily for object recognition. These variants let us examine the respective influences of input view and learning objective. By combining WhereCNN or WhatCNN with ControlCNN-a or ControlCNN-b, we create four alternative dual-stream models (illustrated in Fig.4) and examine their abilities to explain the functional segregation of the brain's dorsal and ventral streams.

# 4 Results

# 4.1 WhereCNN learns attention and WhatCNN learns perception

The WhereCNN and WhatCNN branches in our model are specifically designed to fulfill different objectives: predicting human visual saliency and recognizing visual objects, respectively. In Fig.2, we present examples comparing human attention with the model's attention based on SALICON's validation set. WhereCNN can successfully identify salient locations where humans are more likely to direct their gaze. Additionally, WhereCNN can mimic human saccadic eye movements by generating a sequence of fixations that navigate the model's attention to those salient locations. In contrast, WhatCNN can recognize either single or multiple objects (macro F1 score on MSCOCO's validation set: 61.0).

# 4.2 WhereCNN and WhatCNN match dorsal and ventral visual streams

Using linear encoding models, we apply the WhereCNN and WhatCNN branches to predict fMRI responses while the model and the brain process identical movie stimuli. Together, these two branches can predict responses across a wide range of cortical locations involved in visual processing. However, they exhibit distinct predictive power in relation to the dorsal and ventral streams. Generally, the WhereCNN branch exhibits superior predictive performance for the

![](images/2c377fd6a17e45871d1271f87514b6b9bca7dd7f8f7367a885586420acb60a03.jpg)
WhereCNN learns human attention to mimic human gazes
Figure 2: Saliency prediction.
Given an image (1st row), WhereCNN generates a saliency map (3rd row) similar to the map of human attention (2nd row). Sampling this saliency map generates a sequence of fixations (as red/orange/blue circles in the order of time in the 4th row) similar to human saccadic eye movements (not shown).

![](images/c2942df15fc9a885e82be9bfdb9accd62dc830bd4d1a30b33b7ef933661eb641.jpg)
(a) Relative contributions of WhereCNN and WhatCNN to prediction of brain responses

![](images/7f62060b2523ced5032ba2071f1d7cbb4d91edb26361c9d6d3c1a4ba54e0ef93.jpg)
(b) Brain response predictability by WhereCNN vs WhatCNN
Figure 3: Differential encoding of the dorsal and ventral streams. (a) Relative contributions of WhereCNN and WhatCNN to the prediction of the fMRI response observed at each cortical location. Color highlights the locations significantly predictable by the model (FDR<0.05, block permutation test). The color itself indicates the degree by which WhereCNN is more predictive than WhatCNN (warm tones) or the opposite (cool tones). Visual areas are delineated and labeled based on a brain atlas [73]. Panel (b) plots the predictive performance by WhereCNN (y-axis) against that by WhatCNN (x-axis) and shows a clear separation of voxels (left panel) or ROIs (right panel) along the dorsal stream (red) vs. ventral stream (blue) relative to the dashed line of equal predictability. See Appendix A for the full ROI labels.
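The relative performance of Eq. 3, which determines the warm/cool coloring in Fig. 3, is a one-liner; the function name is our own illustration.

```python
def relative_performance(r_where: float, r_what: float) -> float:
    """Eq. 3: p_where = r_where^2 / (r_where^2 + r_what^2).
    Values above 0.5 mean WhereCNN predicts the voxel better (warm tones
    in Fig. 3); values below 0.5 favor WhatCNN (cool tones)."""
    return r_where**2 / (r_where**2 + r_what**2)
```

A voxel predicted equally well by both branches lands exactly at 0.5, i.e. on the dashed line of equal predictability in Fig. 3b.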

![](images/46a147c89bf68f2037138d69f69ffe2b9232a5439a06cab45aab0c54aee306c3.jpg)

![](images/5c80a9ba851f0f0d5ba7ac3ae65805c34771d513f7465729159d95731647cfe7.jpg)
Combination 1

![](images/3923873b9cb871fdf9b35eaba995e09b56a371a06a57ae007dd5677717e7bc11.jpg)
Combination 2

![](images/f7705b8b426bb1410282b49f58196b89373d69e4d05d1f7e325f9fca4e570dd9.jpg)
Combination 3

![](images/30c84dd53ddf06176cb51bed9f17c715480bd1902b618ecca10c1b4c2cd6ff26.jpg)
Combination 4

![](images/d036d9678da5bd11a586da190d7c09e13ef74e1bed64d8827ee65f1ef3f55196.jpg)
(a) Alternative inputs and objectives for WhereCNN and WhatCNN
Combination 5
(Proposed Model)

![](images/b5541ca264673ed1520b3ddae8bb55330f92c92fd7998106e0096a744e00108d.jpg)
(b) Brain response predictability by WhereCNN vs. WhatCNN

![](images/ad4c25679389a38f4bce17c84e1598e7544a3439e59caca44b90f5eb363c86d7.jpg)
Figure 4: Contributing factors of the dorsal-ventral functional segregation. (a) Alternative designs of the two-stream model for investigating the contributing factors of the functional segregation of two streams in the proposed model. In addition to WhereCNN and WhatCNN in the proposed model, we included ControlCNN-a (narrow input field of view for predicting location) and ControlCNN-b (wide input field of view for predicting perception) for an ablation study. (b) The predictive performance and functional segregation of the two streams are plotted for the dorsal (red squares) vs. ventral (blue triangles) ROIs for each of the alternative models in (a), respectively. The dashed diagonal lines represent equal predictive abilities of ventral and dorsal ROIs. (c) Quantitative evaluation of the functional segregation of the dorsal and ventral ROIs relative to the dashed line of equal predictability. Separability measures how far the predictions are away from the dashed line.
Assuming the coordinates of a given ROI are $(x, y)$, separability is calculated as the average of $|x - y|$ over all ROIs.

![](images/0232ef5a46071514f104f5e404b31f99f02a83fbcb6b19868aa64df65df86f8c.jpg)

![](images/cd5d616f88d43ba0b5fc2d427623a959e3d5029389afd76468e7162558136531.jpg)

![](images/d7cf844bb15ea2fdcc66ca454d5b7b89edf4a751a1cf85da2a1ff73819f36b75.jpg)

![](images/c0ecc60141335b367fef473d9f8d42d365c18a6365f4beb3a7b1b2aef2f4275f.jpg)
(c) Effect Summary

dorsal stream, while the WhatCNN branch performs better in predicting responses within the ventral stream (Fig.3). For the early visual areas (V1, V2, V3), WhereCNN better predicts the peripheral representations, while WhatCNN better predicts the foveal representations.

# 4.3 Factors underlying the functional segregation of the dorsal and ventral streams

We further investigate the underlying factors contributing to the model's ability to explain the functional segregation of the brain's dorsal and ventral visual streams (as depicted in Fig.3). Specifically, we examine the input sampling pattern and output learning objective, both of which are distinct for the WhereCNN and WhatCNN branches in our dual-stream model.

To investigate the contributing factors of the functional segregation in predicting human ventral and dorsal streams, we introduce four variations of the proposed model, where the two branches either share their inputs or have the same learning objectives, and compare their abilities to account for the functional segregation of the dorsal and ventral visual streams (as shown in Fig. 4). When the two branches solely differ in their input sampling, they are unable to explain the dorsal-ventral segregation (combinations 1 and 2). However, when the two branches exclusively differ in their learning objectives, the functional segregation is better explained (combinations 3 and 4).
Moreover, when the two branches differ in both input sampling and learning objectives (combination 5), as utilized in our proposed model, the functional segregation is even more pronounced. These ablation experiments suggest that the distinct learning objectives of the brain's dorsal and ventral streams are the primary factor underlying their functional segregation.

# 4.4 Dual-stream: a better brain model than single-stream

We also compare our dual-stream model with single-stream alternatives. One of these alternatives is a baseline CNN that shares the same backbone architecture as a single branch in our dual-stream model. However, this baseline CNN is trained with original ($224 \times 224$) images to recognize objects in ImageNet [65] and MS-COCO [68]. Thus, it serves as a direct comparison with either the WhereCNN or WhatCNN branch in our model. In addition, we also include AlexNet [14], ResNet18,

![](images/1af2a00e1cb5baaa1649b330ec272d4e734805701885a367c7c14a6a571befcd.jpg)
Comparisons to feed-forward CNNs

![](images/c18d47cc3d03bcd19ea0e943779d29e1ddd88089dcb93256ca3bd7fe5979d979.jpg)
Figure 5: WhatCNN (left) or WhereCNN (right) vs. alternative single-stream CNNs. The boxplot shows the encoding performance of different models for ventral or dorsal visual areas. Each dot within the box plot signifies the average prediction accuracy $r$ within a given ROI in the ventral or dorsal region. Asterisk (*) represents a significant difference by the Wilcoxon signed-rank test $(\alpha = 0.05)$.

![](images/d99e726a95ee5913c7441fb2b542f9334f1439586e4929f2525e6819fe540b7d.jpg)
Effect of Fixations (Learned vs Random)
Figure 6: Effects of attention-driven eye movements. The use of attention to determine fixations vs. the use of random fixations is evaluated in terms of the resulting difference in the encoding performance by WhereCNN (left) and WhatCNN (right), denoted and color-coded as $\Delta r_{\text{where}}$ and $\Delta r_{\text{what}}$, respectively.
Voxels displayed in warm colors indicate that the predictions are more accurate with learned fixations, whereas those in cool colors signify better predictions with random fixations.

![](images/b47ae2bf540b0de01368f39f16077c4bd0d2f8a90645ac75f9ee8fbbda4231e7.jpg)
Prediction By WhatCNN

and ResNet34 [15] as additional alternatives, which have been previously evaluated in relation to brain responses [30, 74]. We compare these single-stream alternatives with either branch in our model in terms of their ability to predict brain responses within dorsal or ventral visual areas (Fig.5). Despite their use of the same backbone architecture, the baseline underperforms WhatCNN in predicting responses in ventral visual areas, and underperforms WhereCNN in predicting responses in dorsal visual areas. This result suggests that the interactive and parallel nature of the dual-stream model renders each stream more akin to the functioning of the human brain, surpassing the performance of an isolated single stream. Moreover, WhatCNN or WhereCNN also performs better than AlexNet and comparably to ResNet18 and ResNet34, which are deeper than the architecture of our model.

# 4.5 Attention-driven eye movements improve encoding

Similar to human gaze behavior towards salient objects, our model learns spatial attention to guide fixations for parallel visual processing. In this study, we investigate whether and how the model's ability to predict brain responses depends on its utilization of attention-driven fixations. To examine this, we conduct experiments where the model is allowed to use either attention-driven fixations or random fixations to collect retinal samples, and we evaluate how this choice impacts the model's capability to predict brain responses by calculating $\Delta r = r^{\text{learned}} - r^{\text{random}}$ for all voxels, where $r^{\text{learned}}$ and $r^{\text{random}}$ are the encoding performances of each voxel using learned or random fixations, respectively.
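This learned-vs-random comparison reduces to a per-voxel difference map; a minimal sketch (the function names and the summary statistic are our own, not the study's code):

```python
import numpy as np

def fixation_effect(r_learned, r_random) -> np.ndarray:
    """Per-voxel delta_r = r_learned - r_random.  Positive entries (warm
    colors in Fig. 6) mean attention-driven fixations predict the voxel
    better; negative entries favor random fixations."""
    return np.asarray(r_learned, dtype=float) - np.asarray(r_random, dtype=float)

def fraction_improved(r_learned, r_random) -> float:
    """Fraction of voxels better predicted with learned fixations."""
    return float(np.mean(fixation_effect(r_learned, r_random) > 0))
```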
As depicted in Fig.6, employing attention-driven fixations leads to higher encoding accuracy by both WhereCNN and WhatCNN compared to the use of random fixations for a majority of visual cortical locations within both the dorsal and ventral streams.

# 5 Discussion

In summary, we introduce a new dual-stream neural network that incorporates the brain's mechanisms for parallel visual processing. The defining features of our model include 1) using retinal transformation to separate complementary inputs to each stream, 2) using different learning objectives to train each stream to learn either spatial attention or object recognition, and 3) controlling sequential fixations for active and interactive visual sensing and processing. We demonstrate that the combination of these features renders the model more akin to the human brain and better predictive of brain responses in humans freely engaged in naturalistic visual environments. Importantly, the two streams in our model differentially explain the two streams in the brain, contributing to a computational understanding of how and why the brain exhibits and organizes distinct responses and processes along the structurally segregated dorsal and ventral visual pathways. Our findings suggest that the primary factor contributing to the dorsal-ventral functional segregation is the different goals of the dorsal and ventral pathways. That is, the dorsal pathway learns spatial attention to control eye movements [75, 7, 76, 77], while the ventral stream learns object recognition.

Although our model demonstrates initial steps toward modeling parallel visual processing in the brain, it has limitations that remain to be addressed in future studies. One limitation is that the model uses different spatial sampling to generate the retinal inputs to the two streams but does not consider the different temporal sampling that makes the dorsal stream more sensitive to motion than the ventral stream [38, 40, 39, 41].
Another limitation is that the interaction between the two streams is limited to the common fixation that determines the complementary retinal input to each stream. Although attention-driven eye movement is an important aspect of human visual behavior shaping brain responses for both dorsal and ventral streams, the two streams also interact and exchange information at higher levels. The precise mechanisms for dorsal-ventral interactions remain unclear but may be important to understanding human vision or improving brain-inspired computer vision.

# 6 Acknowledgements

This research is supported by the Collaborative Research in Computational Neuroscience (CRCNS) program from the National Science Foundation (Award#: IIS 2112773) and the University of Michigan.

# References

[1] Mortimer Mishkin, Leslie G Ungerleider, and Kathleen A Macko. Object vision and spatial vision: two cortical pathways. Trends in neurosciences, 6:414-417, 1983.
[2] William H Merigan and John HR Maunsell. How parallel are the primate visual pathways? Annual review of neuroscience, 16(1):369-402, 1993.
[3] Jonathan J Nassi and Edward M Callaway. Parallel processing strategies of the primate visual system. Nature reviews neuroscience, 10(5):360-372, 2009.
[4] Margaret Livingstone and David Hubel. Segregation of form, color, movement, and depth: anatomy, physiology, and perception. Science, 240(4853):740-749, 1988.
[5] James V Haxby, Cheryl L Grady, Barry Horwitz, Leslie G Ungerleider, Mortimer Mishkin, Richard E Carson, Peter Herscovitch, Mark B Schapiro, and Stanley I Rapoport. Dissociation of object and spatial visual processing pathways in human extrastriate cortex. Proceedings of the National Academy of Sciences, 88(5):1621-1625, 1991.
[6] Giacomo Rizzolatti and Massimo Matelli. Two different streams form the dorsal visual system: anatomy and functions. Experimental brain research, 153:146-157, 2003.
[7] Maurizio Corbetta and Gordon L Shulman.
Control of goal-directed and stimulus-driven attention in the brain. Nature reviews neuroscience, 3(3):201-215, 2002. +[8] Gustavo Deco and Edmund T Rolls. A neurodynamical cortical model of visual attention and invariant object recognition. Vision research, 44(6):621-642, 2004. +[9] Melvyn A Goodale and A David Milner. Separate visual pathways for perception and action. Trends in neurosciences, 15(1):20-25, 1992. + +[10] Carol L Colby and Michael E Goldberg. Space and attention in parietal cortex. Annual review of neuroscience, 22(1):319-349, 1999. +[11] Keiji Tanaka. Inferotemporal cortex and object vision. Annual review of neuroscience, 19(1):109-139, 1996. +[12] A David Milner. How do the two visual streams interact with each other? Experimental brain research, 235(5):1297-1308, 2017. +[13] Laurent Itti and Christof Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision research, 40(10-12):1489-1506, 2000. +[14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84-90, 2017. +[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. +[16] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. +[17] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. +[18] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. 
+[19] Minkyu Choi, Yizhen Zhang, Kuan Han, Xiaokai Wang, and Zhongming Liu. Human eyes inspired recurrent neural networks are more robust against adversarial noises. arXiv preprint arXiv:2206.07282, 2022.
+[20] Laurent Itti, Christof Koch, and Ernst Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on pattern analysis and machine intelligence, 20(11):1254-1259, 1998.
+[21] Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. Advances in neural information processing systems, 27, 2014.
+[22] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. Behavioral and brain sciences, 40:e253, 2017.
+[23] Ernest Davis and Gary Marcus. Commonsense reasoning and commonsense knowledge in artificial intelligence. Communications of the ACM, 58(9):92-103, 2015.
+[24] Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the national academy of sciences, 111(23):8619–8624, 2014.
+[25] Umut Güçlü and Marcel AJ van Gerven. Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. Journal of Neuroscience, 35(27):10005-10014, 2015.
+[26] Michael Eickenberg, Alexandre Gramfort, Gaël Varoquaux, and Bertrand Thirion. Seeing it all: Convolutional network layers map the function of the human visual system. NeuroImage, 152:184-194, 2017.
+[27] Seyed-Mahdi Khaligh-Razavi and Nikolaus Kriegeskorte. Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS computational biology, 10(11):e1003915, 2014.

[28] Daniel LK Yamins and James J DiCarlo. Using goal-driven deep learning models to understand sensory cortex. Nature neuroscience, 19(3):356-365, 2016.
+[29] Radoslaw Martin Cichy, Aditya Khosla, Dimitrios Pantazis, Antonio Torralba, and Aude Oliva. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. Scientific reports, 6(1):27755, 2016. +[30] Haiguang Wen, Junxing Shi, Yizhen Zhang, Kun-Han Lu, Jiayue Cao, and Zhongming Liu. Neural encoding and decoding with deep learning for dynamic natural vision. *Cerebral cortex*, 28(12):4136-4160, 2018. +[31] Joel Dapello, Tiago Marques, Martin Schrimpf, Franziska Geiger, David Cox, and James J DiCarlo. Simulating a primary visual cortex at the front of cnns improves robustness to image perturbations. Advances in Neural Information Processing Systems, 33:13073-13087, 2020. +[32] William Berrios and Arturo Deza. Joint rotational invariance and adversarial training of a dual-stream transformer yields state of the art brain-score for area v4. arXiv preprint arXiv:2203.06649, 2022. +[33] Kuan Han, Haiguang Wen, Junxing Shi, Kun-Han Lu, Yizhen Zhang, Di Fu, and Zhongming Liu. Variational autoencoder: An unsupervised model for encoding and decoding fmri activity in visual cortex. NeuroImage, 198:125-136, 2019. +[34] Minkyu Choi and Jun Tani. Predictive coding for dynamic visual processing: development of functional hierarchy in a multiple spatiotemporal scales rnn model. Neural computation, 30(1):237-270, 2018. +[35] Chengxu Zhuang, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C Frank, James J DiCarlo, and Daniel LK Yamins. Unsupervised neural network models of the ventral visual stream. Proceedings of the National Academy of Sciences, 118(3):e2014196118, 2021. +[36] Aria Yuan Wang, Kendrick Kay, Thomas Naselaris, Michael J Tarr, and Leila Wehbe. Incorporating natural language into vision models improves prediction and understanding of higher visual cortex. BioRxiv, pages 2022-09, 2022. +[37] Talia Konkle and George Alvarez. 
Deepnets do not need category supervision to predict visual system responses to objects. Journal of Vision, 20(11):498-498, 2020. +[38] Reuben Rideaux and Andrew E Welchman. But still it moves: static image statistics underlie how we see motion. Journal of Neuroscience, 40(12):2538-2552, 2020. +[39] Patrick Mineault, Shahab Bakhtiari, Blake Richards, and Christopher Pack. Your head is there to move you around: Goal-driven models of the primate dorsal pathway. Advances in Neural Information Processing Systems, 34:28757-28771, 2021. +[40] Umut Güçlü and Marcel AJ van Gerven. Increasingly complex representations of natural movies across the dorsal stream are shared between subjects. NeuroImage, 145:329-336, 2017. +[41] Shahab Bakhtiari, Patrick Mineault, Timothy Lillicrap, Christopher Pack, and Blake Richards. The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning. Advances in Neural Information Processing Systems, 34:25164-25178, 2021. +[42] Thomas Serre. Deep learning: the good, the bad, and the ugly. Annual review of vision science, 5:399-426, 2019. +[43] Hyun-Chul Kim, Sangsoo Jin, Sungman Jo, and Jong-Hwan Lee. A naturalistic viewing paradigm using 360 panoramic video clips and real-time field-of-view changes with eye-gaze tracking. NeuroImage, 216:116617, 2020. +[44] Pierre Sermanet, Andrea Frome, and Esteban Real. Attention for fine-grained categorization. arXiv preprint arXiv:1412.7054, 2014. + +[45] Jianlong Fu, Heliang Zheng, and Tao Mei. Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4438-4446, 2017. +[46] Carlos Esteves, Christine Allen-Blanchette, Xiaowei Zhou, and Kostas Daniilidis. Polar transformer networks. arXiv preprint arXiv:1709.01889, 2017. +[47] Yulin Wang, Kangchen Lv, Rui Huang, Shiji Song, Le Yang, and Gao Huang. 
Glance and focus: a dynamic approach to reducing spatial redundancy in image classification. Advances in Neural Information Processing Systems, 33:2432-2444, 2020.
+[48] Yiyou Guo, Jinsheng Ji, Xiankai Lu, Hong Huo, Tao Fang, and Deren Li. Global-local attention network for aerial scene classification. IEEE Access, 7:67200-67212, 2019.
+[49] Kevin Wu, Eric Wu, and Gabriel Kreiman. Learning scene gist with convolutional neural networks to improve object recognition. In 2018 52nd Annual Conference on Information Sciences and Systems (CISS), pages 1-6. IEEE, 2018.
+[50] AM Derrington and P Lennie. Spatial and temporal contrast sensitivities of neurones in lateral geniculate nucleus of macaque. The Journal of physiology, 357(1):219-240, 1984.
+[51] Michael Connolly and David Van Essen. The representation of the visual field in parvicellular and magnocellular layers of the lateral geniculate nucleus in the macaque monkey. Journal of Comparative Neurology, 226(4):544-564, 1984.
+[52] Christine A Curcio, Kenneth R Sloan, Robert E Kalina, and Anita E Hendrickson. Human photoreceptor topography. Journal of comparative neurology, 292(4):497-523, 1990.
+[53] Juhong Min, Yucheng Zhao, Chong Luo, and Minsu Cho. Peripheral vision transformer. arXiv preprint arXiv:2206.06801, 2022.
+[54] Chittesh Thavamani, Mengtian Li, Nicolas Cebron, and Deva Ramanan. Fovea: Foveated image magnification for autonomous navigation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 15539-15548, 2021.
+[55] Aditya Jonnalagadda, William Yang Wang, BS Manjunath, and Miguel P Eckstein. Foveater: Foveated transformer for image classification. arXiv preprint arXiv:2105.14173, 2021.
+[56] Emre Akbas and Miguel P Eckstein. Object detection through search with a foveated visual system. PLoS computational biology, 13(10):e1005743, 2017.
+[57] Anne Harrington and Arturo Deza. Finding biological plausibility for adversarially robust features via metameric tasks.
In SVRHM 2021 Workshop@ NeurIPS, 2021.
+[58] Manish Reddy Vuyyuru, Andrzej Banburski, Nishka Pant, and Tomaso Poggio. Biologically inspired mechanisms for adversarial robustness. Advances in Neural Information Processing Systems, 33:2135-2146, 2020.
+[59] Binxu Wang, David Mayo, Arturo Deza, Andrei Barbu, and Colin Conwell. On the use of cortical magnification and saccades as biological proxies for data augmentation. arXiv preprint arXiv:2112.07173, 2021.
+[60] Alyssa A Brewer, William A Press, Nikos K Logothetis, and Brian A Wandell. Visual areas in macaque cortex measured using functional magnetic resonance imaging. Journal of Neuroscience, 22(23):10416-10426, 2002.
+[61] Ricardo Gattass and Charles G Gross. Visual topography of striate projection zone (mt) in posterior superior temporal sulcus of the macaque. Journal of neurophysiology, 46(3):621-638, 1981.
+[62] Pouya Bashivan, Kohitij Kar, and James J DiCarlo. Neural population control via deep image synthesis. Science, 364(6439):eaav9436, 2019.
+[63] Laurent Itti and Christof Koch. Computational modelling of visual attention. Nature reviews neuroscience, 2(3):194-203, 2001.

[64] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
+[65] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115:211-252, 2015.
+[66] Ming Jiang, Shengsheng Huang, Juanyong Duan, and Qi Zhao. Salicon: Saliency in context. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1072-1080, 2015.
+[67] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+[68] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014.
+[69] Thomas Naselaris, Kendrick N Kay, Shinji Nishimoto, and Jack L Gallant. Encoding and decoding in fmri. Neuroimage, 56(2):400-410, 2011.
+[70] James V Haxby, J Swaroop Guntupalli, Andrew C Connolly, Yaroslav O Halchenko, Bryan R Conroy, M Ida Gobbini, Michael Hanke, and Peter J Ramadge. A common, high-dimensional model of the representational space in human ventral temporal cortex. Neuron, 72(2):404-416, 2011.
+[71] Matthew F Glasser, Stamatios N Sotiropoulos, J Anthony Wilson, Timothy S Coalson, Bruce Fischl, Jesper L Andersson, Junqian Xu, Saad Jbabdi, Matthew Webster, Jonathan R Polimeni, et al. The minimal preprocessing pipelines for the human connectome project. Neuroimage, 80:105-124, 2013.
+[72] Daniela Adolf, Snezhana Weston, Sebastian Baecke, Michael Luchtmann, Johannes Bernarding, and Siegfried Kropf. Increasing the reliability of data analysis of functional magnetic resonance imaging by applying a new blockwise permutation method. Frontiers in neuroinformatics, 8:72, 2014.
+[73] Matthew F Glasser, Timothy S Coalson, Emma C Robinson, Carl D Hacker, John Harwell, Essa Yacoub, Kamil Ugurbil, Jesper Andersson, Christian F Beckmann, Mark Jenkinson, et al. A multi-modal parcellation of human cerebral cortex. Nature, 536(7615):171-178, 2016.
+[74] Haiguang Wen, Junxing Shi, Wei Chen, and Zhongming Liu. Deep residual network predicts cortical representation and organization of visual features for rapid categorization. Scientific reports, 8(1):3752, 2018.
+[75] James W Bisley and Michael E Goldberg. Attention, intention, and priority in the parietal lobe. Annual review of neuroscience, 33:1-21, 2010.
+[76] Steven Yantis and John T Serences. Cortical mechanisms of space-based and object-based attentional control. Current opinion in neurobiology, 13(2):187-193, 2003.
+[77] John HR Maunsell and Stefan Treue. Feature-based attention in visual cortex. Trends in neurosciences, 29(6):317-322, 2006.

# Appendix: A Dual-Stream Neural Network Explains the Functional Segregation of Dorsal and Ventral Visual Pathways in Human Brains

Minkyu Choi $^{1}$ , Kuan Han $^{1}$ , Xiaokai Wang $^{2}$ , Yizhen Zhang $^{1,3}$ , and Zhongming Liu $^{1,2}$

$^{1}$ Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109

$^{2}$ Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109

$^{3}$ Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143

{cminkyu, kuanhan, xiaokaiw, zhyz, zmliu}@umich.edu

# A Regions of Interest

In our study, we delineated our regions of interest (ROIs) into two primary segments: 1) the ventral visual stream and object recognition-related regions and 2) the dorsal visual stream and overt attention-related regions. This approach followed the parcellations proposed by [1]. For the dorsal visual stream, the ROIs include V3A, V3B, V6, V6A, and V7. Within the parietal cortex, visuo-spatial information and overt attention are processed by the intraparietal sulcus (IPS) and the superior parietal lobule (SPL) [2, 3, 4, 5, 6]. The IPS encompasses V7, IPS1, IP0, IP1, and IP2; whereas the SPL consists of the lateral intraparietal cortex (LIPv, LIPd), ventral intraparietal complex (VIP), anterior intraparietal (AIP), medial intraparietal area (MIP), 7PC, 7AL, 7Am, 7PL, and 7Pm. We also included the frontal eye field (FEF), which is acknowledged for controlling eye movements [7, 8, 9, 10].
In contrast, the ROIs associated with object recognition and the ventral visual stream encompassed V8, the posterior inferotemporal (PIT) complex, the fusiform face complex (FFC), and ventromedial visual (VMV) areas 1, 2, 3, along with the lateral occipital area (LO). In addition, we included the superior temporal sulcus (STS), which is recognized for processing multimodal signals, including auditory and visual cues [11, 12, 13]. Fig. S1 displays the full set of region labels, corresponding to Fig.3(a) from the main text. Among the parcellations of [1], regions containing voxels significantly predicted by either the WhereCNN or the WhatCNN are presented in Fig. S1.

![](images/8495c788e9774d65793db1d38a4f70a5d3277583b4da95d70f66ffe612203248.jpg)
Figure S1: Region labels. Regions including significant voxels from Fig.3(a) in the main text are presented.

# B Training Details

The backbone Convolutional Neural Networks (CNNs) of both the WhereCNN and WhatCNN share the same architecture, consisting of four blocks of convolutional operations. Situated atop the backbone CNN, the WhereCNN and WhatCNN possess additional layers tailored to their specific objectives: the WhereCNN features two convolutional layers that produce 2D saliency maps, whereas the WhatCNN includes a Gated Recurrent Unit (GRU) layer followed by a fully connected layer for object classification.

During the pre-training of the backbone CNN, a global average pooling and a fully connected layer are integrated atop the backbone CNN, serving as a classifier. Upon completion of the pre-training process, the classifier is detached, allowing the pre-trained backbone CNN to be incorporated as a component of the WhereCNN or WhatCNN.

As detailed in Section 3.1 of the main text, our model underwent a three-stage training process. In this section, we elaborate on the specifics of the pre-training phase.
+

Stage 1 - WhereCNN The backbone architecture of the WhereCNN was pre-trained on ILSVRC2012 [14] for an image classification task over 120 epochs. A batch size of 1,024 was employed, along with the Adam optimizer [15] $(\mathrm{lr} = 0.001, \beta_{1} = 0.9, \beta_{2} = 0.99)$ . During pre-training, fixations for the retinal transformation were randomly generated across the image area. Once the backbone architecture had been pre-trained, we detached the classifier and initialized the WhereCNN using the model parameters obtained from the pre-training stage. We then performed SALICON training, as described in Section 3.1 of the main text.

Stage 2 - WhatCNN In a process mirroring Stage 1, the backbone of the WhatCNN was also pre-trained on ILSVRC2012 [14] for an image classification task over 120 epochs, utilizing random fixations and the Adam optimizer $(\mathrm{lr} = 0.001, \beta_{1} = 0.9, \beta_{2} = 0.99)$ . After pre-training the backbone CNN, we initialized the WhatCNN using the weights of the pre-trained backbone CNN.

Subsequently, the WhatCNN, initialized with the pre-trained weights as a whole, was trained on ILSVRC2012 [14] for object recognition over 55 epochs, using four randomly generated fixations per image and, again, the Adam optimizer $(\mathrm{lr} = 0.001, \beta_{1} = 0.9, \beta_{2} = 0.99)$ . After this stage, we conducted a fine-tuning process using the learned fixations from the WhereCNN. In this stage, the WhereCNN, after the pre-training in Stage 1, was incorporated to guide the WhatCNN's fixations. However, only the WhatCNN was optimized, while the WhereCNN remained unchanged. This fine-tuning with learned fixations used four fixations per image, with the Adam optimizer $(\mathrm{lr} = 0.0001, \beta_{1} = 0.9, \beta_{2} = 0.99)$ over 25 epochs. Finally, the WhatCNN underwent further training on MSCOCO, as described in Section 3.1 of the main text.
+

Stage 3 - WhereCNN & WhatCNN During this stage, both the WhereCNN and WhatCNN, trained in the previous stages, were used to initialize model weights, followed by further end-to-end training, leveraging the stream-specific objectives (object recognition and saliency prediction, respectively). As the training requires labels for both tasks, the model was trained using images in the SALICON dataset, which contain labels for both saliency prediction and object recognition.

The model samples fixations from the predicted saliency maps of the WhereCNN. As this sampling process is non-differentiable, the gradients from object recognition cannot optimize the weights of the WhereCNN. To tackle this issue, we utilized REINFORCE [16] to approximate the gradient for the WhereCNN. At time $t$ , a fixation $l_{t}$ is generated by the WhereCNN, based on which the WhatCNN makes a class prediction $p_t$ . Then, in the context of REINFORCE, the reward $r_t$ of choosing $l_{t}$ as the fixation is calculated as the reduction in classification loss relative to the previous time step: $r_t = CE(p_{t - 1},\mathrm{label}_c) - CE(p_t,\mathrm{label}_c)$ , where $CE$ is the cross-entropy loss and $\mathrm{label}_c$ is the class label. The goal of REINFORCE is to maximize the discounted sum of rewards, $R = \sum_{t = 1}^{T}\gamma^{t - 1}r_{t}$ , where $\gamma \in (0,1)$ is the discount factor, set to 0.8.

In this stage, we strove to minimize the object recognition and saliency prediction losses while maximizing the discounted sum of rewards. As indicated in Section 3.1 of the main text, we utilized the Adam optimizer $(\mathrm{lr} = 0.0002, \beta_{1} = 0.9, \beta_{2} = 0.99)$ for 25 epochs for this training stage.

For All Stages All training stages were conducted using four NVIDIA A40 GPUs. All code was written in PyTorch 1.9.1.
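As a concrete illustration, the Stage 3 reward and return computation can be sketched as follows. This is our own minimal sketch, not the paper's released code; the cross-entropy loss values in the example are made up for illustration.

```python
import numpy as np

def fixation_rewards(ce_losses):
    """r_t = CE(p_{t-1}, label_c) - CE(p_t, label_c): the drop in
    classification loss attributable to the t-th fixation."""
    ce = np.asarray(ce_losses, dtype=float)
    return ce[:-1] - ce[1:]

def discounted_return(rewards, gamma=0.8):
    """R = sum_t gamma^(t-1) r_t, the return maximized by REINFORCE."""
    rewards = np.asarray(rewards, dtype=float)
    discounts = gamma ** np.arange(len(rewards))
    return float(np.dot(discounts, rewards))

# Illustrative CE losses before any fixation and after each of four
# fixations; the loss shrinks as evidence accumulates across fixations.
ce = [2.0, 1.5, 1.2, 1.1, 1.05]
r = fixation_rewards(ce)             # ≈ [0.5, 0.3, 0.1, 0.05]
R = discounted_return(r, gamma=0.8)  # 0.5 + 0.8*0.3 + 0.64*0.1 + 0.512*0.05
```

A fixation is rewarded only to the extent that it reduces the classification loss, so fixations that do not help recognition contribute nothing to the return.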
+

# C Saliency Maps and Inhibition of Return

Once the saliency maps were generated by the WhereCNN, inhibition of return (IOR) was used to prohibit future fixations from re-visiting image areas that had already been explored. This process is illustrated in Fig. S2.

![](images/9739515166643645f7cf14725d2c3562b72a8a069ce3bd0bf1062fc9834a4e8f.jpg)
Figure S2: Process of determining the next fixation point given the current fixation. A saliency map, generated by WhereCNN, is multiplied element-wise (indicated by *) with the inhibition of return (IOR) to prevent future fixations from reverting to previous positions. In the IOR, white and black colors correspond to values of 1 and 0, respectively.

In the process of determining the next fixation, the WhereCNN generates a saliency map based on the current fixation. The location of this subsequent fixation is guided by the saliency map's probabilistic distribution. However, it is important to note that if the current fixation point possesses a high probability, subsequent fixations are likely to occur in proximity to the present fixation.

To ensure a more dynamic and comprehensive exploration of the visual field, we employed the principle of Inhibition of Return (IOR), detailed in Eq.2 of the main text and presented again here in Eq.4.

$$
\mathbf{IOR}(t) = \operatorname{ReLU}\left(\mathbf{1} - \sum_{\tau = 1}^{t} G\left(\boldsymbol{\mu} = \boldsymbol{l}_{\tau}, \boldsymbol{\Sigma} = \sigma^{2}\boldsymbol{I}\right)\right) \tag{4}
$$

where $G(\pmb{\mu}, \pmb{\Sigma})$ is a 2D Gaussian function centered at $\boldsymbol{l}_{\tau}$ (the $\tau$-th prior fixation) with standard deviation $\sigma$ . The Inhibition of Return (IOR) map is initially created at a resolution of $224 \times 224$ with $\sigma = 25$ , and subsequently resized to align with the dimensions of the saliency map.
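Eq. 4 and the subsequent element-wise suppression of the saliency map admit a direct implementation. The sketch below is our own illustration, not the released code; the $224 \times 224$ resolution and $\sigma = 25$ follow the text, while the random seed and the unit-peak Gaussian normalization are our assumptions.

```python
import numpy as np

def gaussian_map(shape, center, sigma):
    """2D Gaussian G(mu=center, Sigma=sigma^2 I) with peak value 1."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def inhibition_of_return(shape, fixations, sigma=25.0):
    """IOR(t) = ReLU(1 - sum_tau G(mu=l_tau, Sigma=sigma^2 I)), Eq. 4."""
    acc = np.zeros(shape)
    for fix in fixations:
        acc += gaussian_map(shape, fix, sigma)
    return np.maximum(1.0 - acc, 0.0)  # ReLU clamps values into [0, 1]

def sample_next_fixation(saliency, fixations, sigma=25.0, rng=None):
    """Suppress visited regions, then sample from the adjusted map."""
    rng = np.random.default_rng(0) if rng is None else rng
    ior = inhibition_of_return(saliency.shape, fixations, sigma)
    adjusted = saliency * ior                    # element-wise suppression
    probs = adjusted.ravel() / adjusted.sum()    # treat map as a distribution
    return np.unravel_index(rng.choice(probs.size, p=probs), saliency.shape)
```

With a uniform saliency map and one prior fixation at the center, the IOR map is exactly 0 at the center and close to 1 near the image borders, so the next fixation is pushed away from the already-explored region.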
IOR serves to decrease the saliency of previously attended areas, thereby preventing the model from repetitively focusing on these regions. This mechanism is informed by the model's entire prior fixation history. The IOR map is designed such that it assigns lower values (approaching 0.0) in the vicinity of prior fixation points, and higher values (up to 1.0) in regions further away. Thus, when the IOR map is element-wise multiplied with the saliency map, it effectively reduces the saliency values in areas already explored.

Following the application of IOR, the next fixation point is sampled from the probabilistic distribution defined by the adjusted saliency map. This strategy encourages more diverse fixations and facilitates a broader and more comprehensive understanding of the scene.

# D WhereCNN's Saliency Maps and Fixation Points

The original images are presented in Cartesian coordinates. Once the retinal transformation is applied to these images, the resultant retinal images adopt retinal coordinates, as detailed in Eq.1 of the main text. Since the inputs to the WhereCNN operate in retinal coordinates, it naturally follows that the output saliency maps mirror this coordinate system. To visualize these within this paper, we utilize the inverse function of Eq.1, thereby transforming the saliency maps from retinal back to Cartesian coordinates.

In preparation for our model's processing of the movie *Raiders of the Lost Ark*, we reduce the frame rate to 6 frames per second (fps). This adjustment helps mitigate the computational and memory costs associated with handling the extracted features. As the model engages with the movie, a single fixation point is established for each frame. Importantly, the Inhibition of Return (IOR) mechanism is not invoked during the model's interaction with the movie. Fig.
S3 showcases saliency maps and fixation points derived from segments of the movie *Raiders of the Lost Ark*. Frames situated on the same horizontal axis are selected at a rate of 1 fps.

![](images/2b7055f1dd42009d20a0094638b6c15d4adc648fe2bdb5f46ff8886ff8b10462.jpg)

![](images/fa7d42aa1764d5c5c132d4eca603ae6e4af08dfefcf98df8fc1aa5aab8eb558c.jpg)

![](images/90d72e6432a01de2950a9504519624a0c021b4d870419e3367572ff28e469780.jpg)

![](images/15a1dae7c4110e19a869f44f5e94c99f1243e560d43bcea3b2e84f41f3540aba.jpg)

![](images/9e28b938cee955d92f72d6a3d13c9cf941a6e2224d070eb9aa4925b9f64e9692.jpg)
Figure S3: Given the movie frames (1st row), the WhereCNN generates saliency maps (2nd row) and fixations (3rd row). The red marker in the 3rd row presents the fixation point.

# E Investigating Layer-wise Correspondence to Visual Cortex

In the main text, the features from all layers of each stream are used to predict voxel activities (noted as Stream-wise encoding). Alternatively, the features from each individual layer, rather than the concatenation of all layers, can be used to predict voxel activities (noted as Layer-wise encoding). In this way, the hierarchical correspondence between each layer of the model and the ROIs of the visual system can be observed.

With the layer-wise encoding scheme, we predicted fMRI responses using features from each layer in the WhereCNN and WhatCNN. Fig. S4 associates each voxel with the (color-coded) layer most predictive of that voxel, for either (a) the WhatCNN or (b) the WhereCNN. Fig. S4 (a) shows that the lower layers of the WhatCNN better predict earlier visual areas such as V1/V2, whereas the higher layers of the WhatCNN better predict higher-order visual areas such as LO and PIT, consistent with prior studies [17, 18]. The results with the WhereCNN show different patterns, as shown in Fig. S4 (b).
Within early visual areas, the lower layers of the WhereCNN better predict foveal representations, whereas the higher layers better predict peripheral representations.

![](images/d57f1f83af842d28a10e6638653448f6ac3eb2b0678acf8a266bb900144e7072.jpg)
Layer-wise Voxel Predictions

![](images/46e753d0b98a29c2725259faf032e3075106fc4ba214ea864c7e148c04d1a150.jpg)
Figure S4: Each voxel is predicted by the features from a single layer from (a) WhatCNN and (b) WhereCNN. Layer indexes are color-coded so that the layer best predicting each voxel is presented.

# F Implications for Computer Vision

In the current study, we demonstrated that biologically plausible components (two streams, retinal sampling, and eye movements) can be used to build a better model of the human visual cortex in a naturalistic viewing condition. At the same time, the components we considered in this study may also benefit computer vision applications.

1) Efficiency. Unlike conventional CNNs that process entire images, our dual-stream model allows serial processing. It concentrates processing power on key image regions through attention-directed fixations. This serial processing may significantly lower memory and computational overhead, because resources are allocated only to the crucial image regions. It is plausible that such efficiency underpins the brain's adoption of dual-stream processing due to biological constraints on energy use.

2) Adaptability. The dual streams of our model offer complementary lenses for visual exploration and perception in real-world environments. One stream provides a broad yet rough overview of the environment. The other gathers detailed observations with precision. Their synergistic interaction may facilitate adaptive behaviors for tasks like visual search and object detection in complex, cluttered scenes.
Moreover, the distinct functions of the parallel streams offer combinatorial flexibility when leveraged together, potentially enhancing the model's overall capability to adapt to diverse visual challenges, including potential applications in robotics.

However, leveraging such potential benefits within the scope of the current study faces challenges. First, mainstream datasets like ImageNet and MS-COCO offer a narrow view and lack the high-resolution detail our model thrives on. Moreover, these datasets often focus on large, central objects, limiting the fixation-driven adaptability that benefits our model's object recognition. A better benchmark for our model would be high-resolution panoramic images or synthetic virtual-reality environments that accommodate unconstrained fixation behavior. In such settings, the efficiency and adaptability of our model should be more evident.

# References

[1] Matthew F Glasser, Timothy S Coalson, Emma C Robinson, Carl D Hacker, John Harwell, Essa Yacoub, Kamil Ugurbil, Jesper Andersson, Christian F Beckmann, Mark Jenkinson, et al. A multi-modal parcellation of human cerebral cortex. Nature, 536(7615):171-178, 2016.
+[2] Jacqueline P Gottlieb, Makoto Kusunoki, and Michael E Goldberg. The representation of visual salience in monkey parietal cortex. Nature, 391(6666):481-484, 1998.
+[3] Anna E Ipata, Angela L Gee, Jacqueline Gottlieb, James W Bisley, and Michael E Goldberg. Lip responses to a popout stimulus are reduced if it is overtly ignored. Nature neuroscience, 9(8):1071-1076, 2006.
+[4] Makoto Kusunoki, Jacqueline Gottlieb, and Michael E Goldberg. The lateral intraparietal area as a salience map: the representation of abrupt onset, stimulus motion, and task relevance. Vision research, 40(10-12):1459-1468, 2000.
+[5] James W Bisley, Koorosh Mirpour, Fabrice Arcizet, and Wei S Ong. The role of the lateral intraparietal area in orienting attention and its implications for visual search.
European Journal of Neuroscience, 33(11):1982-1990, 2011.
+[6] James W Bisley and Michael E Goldberg. Attention, intention, and priority in the parietal lobe. Annual review of neuroscience, 33:1-21, 2010.
+[7] Hugo L Fernandes, Ian H Stevenson, Adam N Phillips, Mark A Segraves, and Konrad P Kording. Saliency and saccade encoding in the frontal eye field during natural scene search. Cerebral Cortex, 24(12):3232-3245, 2014.
+[8] Kirk G Thompson and Narcisse P Bichot. A visual salience map in the primate frontal eye field. Progress in brain research, 147:249-262, 2005.
+[9] Charles J Bruce, Michael E Goldberg, M Catherine Bushnell, and Gregory B Stanton. Primate frontal eye fields. II. Physiological and anatomical correlates of electrically evoked eye movements. Journal of neurophysiology, 54(3):714-734, 1985.
+[10] David A Robinson and Albert F Fuchs. Eye movements evoked by stimulation of frontal eye fields. Journal of neurophysiology, 32(5):637-648, 1969.
+[11] Jon Driver and Toemme Noesselt. Multisensory interplay reveals crossmodal influences on 'sensory-specific' brain regions, neural responses, and judgments. Neuron, 57(1):11-23, 2008.
+[12] Gemma A Calvert. Crossmodal processing in the human brain: insights from functional neuroimaging studies. Cerebral cortex, 11(12):1110-1123, 2001.

[13] Michael S Beauchamp. Statistical criteria in fmri studies of multisensory integration. Neuroinformatics, 3:93-113, 2005.
+[14] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115:211-252, 2015.
+[15] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+[16] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992.
[17] Santiago A Cadena, George H Denfield, Edgar Y Walker, Leon A Gatys, Andreas S Tolias, Matthias Bethge, and Alexander S Ecker. Deep convolutional models improve predictions of macaque V1 responses to natural images. PLoS Computational Biology, 15(4):e1006897, 2019.

[18] Haiguang Wen, Junxing Shi, Yizhen Zhang, Kun-Han Lu, Jiayue Cao, and Zhongming Liu. Neural encoding and decoding with deep learning for dynamic natural vision. Cerebral Cortex, 28(12):4136-4160, 2018.
# A Dynamical System View of Langevin-Based Non-Convex Sampling

Mohammad Reza Karimi*

ETH Zürich

mkarimi@inf.ethz.ch

Ya-Ping Hsieh*

ETH Zürich

yaping.hsieh@inf.ethz.ch

Andreas Krause

ETH Zürich

krausea@ethz.ch

# Abstract

Non-convex sampling is a key challenge in machine learning, central to non-convex optimization in deep learning as well as to approximate probabilistic inference. Despite its significance, important theoretical challenges remain: existing guarantees typically cover only the averaged iterates rather than the last iterates, and little is known beyond the elementary scheme of stochastic gradient Langevin dynamics. To address these issues, we develop a novel framework that lifts both limitations by harnessing several tools from the theory of dynamical systems. Our key result is that, for a large class of state-of-the-art sampling schemes, their last-iterate convergence in Wasserstein distances can be reduced to the study of their continuous-time counterparts, which is much better understood. Coupled with standard assumptions of MCMC sampling, our theory immediately yields the last-iterate Wasserstein convergence of many advanced sampling schemes such as mirror Langevin, proximal, randomized mid-point, and Runge-Kutta methods.

# 1 Introduction

Many modern learning tasks involve sampling from a high-dimensional density $\pi \propto e^{-f}$, where $f$ is a non-convex potential representing, for instance, the loss function of a deep neural network. To this end, an approach that has found wide success is to discretize the continuous-time Langevin diffusion

$$
\mathrm{d}L_t = -\nabla f(L_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t, \tag{LD}
$$

where $B_t$ is a Brownian motion [57].
The idea behind this approach is that, since $\pi$ is the stationary distribution of (LD), one can expect a similar behavior for discretizations of (LD). Such a framework has inspired numerous sampling schemes with per-iteration costs as cheap as stochastic gradient descent, which are particularly suitable for large-scale approximate probabilistic inference and Bayesian learning [2, 54, 57]. Moreover, several works have noticed that these Langevin-based schemes provide deep insights about minimizing $f$ using stochastic oracles [22, 48], which serves as an important step toward explaining the empirical success of training deep neural networks. + +The convergence of Langevin-based non-convex sampling has therefore attracted significant interest from both practitioners and theoreticians, whose intense study has led to a plethora of new guarantees; see related work for details. Despite such impressive progress, several challenges remain for the fully non-convex setup: + +- The convergence is typically given on the averaged iterates instead of the more natural last iterates [4, 54]. This is especially problematic from the perspective of understanding the minimization of $f$ , as in practice, the last iterates of an optimization algorithm play the most pivotal role for downstream tasks. + +- An additional notable drawback of the current theory is its predominant focus on the basic Euler-Maruyama discretization of (LD) (see, e.g., [4, 20, 54]). As a result, the convergence analysis of more advanced sampling schemes remains largely unexplored in the fully non-convex regime [1, 24, 25, 34, 53, 61]. + +$\S$ Contributions and Approaches. To overcome the aforementioned challenges, our main contribution, from a high level, can be succinctly summarized as: + +Under mild assumptions, we prove that the iterates of a broad range of Langevin-based sampling schemes converge to the continuous-time (LD) in Wasserstein distance. 
$(\star)$ + +Combining $(\star)$ with classical results on Langevin diffusion [45] immediately yields the last-iterate convergence in Wasserstein distances for a wide spectrum of sampling schemes, thus resolving all the challenges mentioned above. To illustrate this point, we state a simple version of our main result. + +Theorem (Informal). Suppose we discretize (LD) as + +$$ +x _ {k + 1} = x _ {k} - \gamma_ {k + 1} (\nabla f (x _ {k}) + \text {n o i s e} + \text {b i a s}) + \sqrt {2 \gamma_ {k + 1}} \xi_ {k + 1} +$$ + +with step-sizes $\{\gamma_k\}_{k\in \mathbb{N}}$ and i.i.d. standard Gaussians $\{\xi_k\}_{k\in \mathbb{N}}$ . Then, under an easy-to-verify condition on the bias (see (5) in Assumption 3), $\{x_{k}\}_{k\in \mathbb{N}}$ converges in Wasserstein distance to $\pi$ . In addition, these conditions are satisfied by many advanced sampling schemes. + +This result is achieved via a new dynamical perspective to study Langevin-based sampling. More specifically, + +1. We introduce the Picard process, which is the sampling analogue of Picard's method of successive approximations for solving ODEs [17]. Contrary to most existing analyses, the Picard process allows us to completely bypass the use of relative entropy, which is the culprit for the appearance of averaged iterates [20]. +2. Using the Picard process, we will prove that the iterates of various Langevin-based schemes generate a so-called Wasserstein asymptotic pseudotrajectory (WAPT) for the continuous-time (LD). The main motivation for considering WAPT is to connect Langevin-based schemes to the dynamical system theory of Benaim and Hirsch [7], which works for metric spaces and is last-iterate by design, and therefore particularly suitable for our purpose. +3. 
Finally, under standard stability assumptions in the literature [39, 51], we show how a tandem of our WAPT result and dynamical system theory yields the desirable convergence of various existing schemes, as well as motivates more efficient algorithms that enjoy the same rigorous guarantees. + +§ Related work. There is a vast literature on structured non-convex sampling, where one imposes extra assumptions on the target density. Under these conditions, one can derive non-asymptotic rates for Langevin-based schemes [13, 15, 33, 35, 36, 41, 48, 56, 60, 63]. Our work is orthogonal to these works as we study generic non-convex sampling, an NP-hard problem whose convergence is asymptotic at best. + +Most relevant to our paper are the works [4, 8, 20, 29, 54], which study the asymptotic convergence of Langevin-based schemes under minimal regularity assumptions on $f$ . Compared to their results, our guarantees either improve upon existing ones or are incomparable; see Section 5.4 for a more detailed comparison. + +# 2 The Langevin-Robbins-Monro Template + +We consider the following general template for sampling algorithms: Starting from an initial point, the iterates $\{x_{k}\}_{k\in \mathbb{N}}$ follow the recursion + +$$ +x _ {k + 1} = x _ {k} - \gamma_ {k + 1} \left\{v \left(x _ {k}\right) + Z _ {k + 1} \right\} + \sqrt {2 \gamma_ {k + 1}} \sigma \left(x _ {k}\right) \xi_ {k + 1}, \tag {LRM} +$$ + +where $\gamma_{k}$ 's are step sizes, $v$ is a vector field, $Z_{k}$ 's are (random or deterministic) perturbations, $\sigma$ is the state-dependent diffusion matrix, and $\xi_{k}$ 's are i.i.d. standard Gaussian random variables. In the sequel, we will further decompose the perturbation as $Z_{k} = U_{k} + b_{k}$ , where $U_{k}$ is the (zero-mean) + +noise and $b_{k}$ is the bias. We call this recursion the Langevin-Robbins-Monro (LRM) template, as it is reminiscent of the Robbins-Monro template for stochastic approximation [50]. 
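To fix ideas, here is a minimal numerical sketch, ours and purely illustrative, of the simplest setting of the informal theorem from the introduction: $v = -\nabla f$, $\sigma \equiv 1$, and zero noise and bias, in which case the recursion reduces to the unadjusted Langevin scheme $x_{k+1} = x_k - \gamma_{k+1}\nabla f(x_k) + \sqrt{2\gamma_{k+1}}\,\xi_{k+1}$. The double-well potential and the step-size schedule are our toy choices; the `noise` and `bias` arguments mark where the perturbations $Z_{k+1} = U_{k+1} + b_{k+1}$ of the later examples would enter.

```python
import numpy as np

# One step of the discretization from the informal theorem:
#   x_{k+1} = x_k - gamma * (grad f(x_k) + noise + bias) + sqrt(2*gamma) * xi.
# The zero-mean `noise` and the `bias` are the hooks where the later examples
# (mid-point, Runge-Kutta, ...) plug in their perturbations Z = U + b.
def lrm_step(x, grad_f, gamma, rng, noise=0.0, bias=0.0):
    xi = rng.standard_normal(x.shape)
    return x - gamma * (grad_f(x) + noise + bias) + np.sqrt(2.0 * gamma) * xi

# Toy non-convex potential f(x) = (x^2 - 1)^2 / 4 (a double well), zero
# perturbations, and step sizes gamma_k ~ (sqrt(k) log k)^{-1}, which satisfy
# the Robbins-Monro summability conditions of Assumption 2.
grad_f = lambda x: x * (x**2 - 1.0)
rng = np.random.default_rng(0)
x = np.array([2.0])
for k in range(2000):
    gamma = 0.1 / (np.sqrt(k + 1.0) * np.log(k + 2.0))
    x = lrm_step(x, grad_f, gamma, rng)
```

With shrinking step sizes the iterate stays in a bounded region around the two wells at $\pm 1$, in line with the stability discussion of Section 5.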
+ +The generality of the LRM template allows us to capture many existing algorithms and suggests ways to design new ones. For illustration purposes, we showcase instances of (LRM) with the following examples. Other examples (SGLD and proximal) are provided in Appendix A. In the first three examples, the vector field $v$ in (LRM) is $-\nabla f$ and $\sigma \equiv 1$ . + +Example 1. The Randomized Mid-Point Method [24, 53] is an alternative discretization scheme to Euler-Maruyama and has been proposed for both overdamped and underdamped Langevin diffusion. For the overdamped case, its iterates are + +$$ +x _ {k + 1 / 2} = x _ {k} - \gamma_ {k + 1} \alpha_ {k + 1} \widetilde {\nabla} f (x _ {k}) + \sqrt {2 \gamma_ {k + 1} \alpha_ {k + 1}} \xi_ {k + 1} ^ {\prime}, \tag {RMM} +$$ + +$$ +x _ {k + 1} = x _ {k} - \gamma_ {k + 1} \bar {\nabla} f \left(x _ {k + 1 / 2}\right) + \sqrt {2 \gamma_ {k + 1}} \xi_ {k + 1}, +$$ + +where $\{\alpha_k\}$ are i.i.d. and uniformly distributed in $[0,1]$ , $\xi_k,\xi_k'$ are standard Gaussian random variables with cross-variance $\sqrt{\alpha_k} I$ , and $\widetilde{\nabla} f$ is a noisy evaluation of $\nabla f$ . To cast (RMM) in the LRM template, we set $U_{k + 1}\coloneqq \widetilde{\nabla} f(x_{k + 1 / 2}) - \nabla f(x_{k + 1 / 2})$ and $b_{k + 1}\coloneqq \nabla f(x_{k + 1 / 2}) - \nabla f(x_k)$ . + +Example 2. Inspecting the update rule of (RMM), we see that it requires two gradient oracle calls at each iteration. 
Inspired by the optimistic gradient methods in optimization and online learning [16, 47, 49], we propose to "recycle" past gradients:

$$
x_{k+1/2} = x_k - \gamma_{k+1} \alpha_{k+1} \widetilde{\nabla} f\left(x_{k-1/2}\right) + \sqrt{2 \gamma_{k+1} \alpha_{k+1}}\, \xi_{k+1}^{\prime}, \tag{ORMM}
$$

$$
x_{k+1} = x_k - \gamma_{k+1} \widetilde{\nabla} f\left(x_{k+1/2}\right) + \sqrt{2 \gamma_{k+1}}\, \xi_{k+1},
$$

where $\{\alpha_k\}, \xi_k, \xi_k'$, and $\widetilde{\nabla} f$ are the same as in (RMM). This is again an LRM scheme with $U_{k+1} \coloneqq \widetilde{\nabla} f(x_{k+1/2}) - \nabla f(x_{k+1/2})$ and $b_{k+1} \coloneqq \nabla f(x_{k+1/2}) - \nabla f(x_k)$.

Notice that (ORMM) requires only one fresh gradient oracle call per iteration, thereby halving the per-iteration cost of (RMM). To our knowledge, the scheme (ORMM) is new.

Example 3. In addition to the simple (stochastic) Euler-Maruyama discretization in (SGLD), there exists a class of more sophisticated discretization methods of (LD) known as higher-order integrators. The Stochastic Runge-Kutta method [34] is an example of an order 1.5 integrator, with iterates

$$
h_1 = x_k + \sqrt{2 \gamma_{k+1}}\left(c_1 \xi_{k+1} + c_2 \xi_{k+1}^{\prime}\right),
$$

$$
h_2 = x_k - \gamma_{k+1} \widetilde{\nabla} f(x_k) + \sqrt{2 \gamma_{k+1}}\left(c_3 \xi_{k+1} + c_2 \xi_{k+1}^{\prime}\right),
$$

$$
x_{k+1} = x_k - \frac{\gamma_{k+1}}{2}\left(\widetilde{\nabla} f\left(h_1\right) + \widetilde{\nabla} f\left(h_2\right)\right) + \sqrt{2 \gamma_{k+1}}\, \xi_{k+1},
$$

where $\xi_{k+1}$ and $\xi_{k+1}^{\prime}$ are independent standard Gaussian random variables, and $c_1, c_2, c_3$ are suitably chosen integrator constants.
This algorithm is an LRM scheme with $U_{k+1} \coloneqq \frac{1}{2}\left(\widetilde{\nabla} f(h_1) - \nabla f(h_1)\right) + \frac{1}{2}\left(\widetilde{\nabla} f(h_2) - \nabla f(h_2)\right)$ and $b_{k+1} \coloneqq \frac{1}{2}\left(\nabla f(h_1) + \nabla f(h_2)\right) - \nabla f(x_k)$.

Example 4. The Mirror Langevin algorithm [1, 25, 61], which is the sampling analogue of the celebrated mirror descent scheme in optimization [5, 43], uses a strongly convex function $\phi$ to adapt to a favorable local geometry. In the dual space (i.e., the image of $\nabla \phi$), its iterates follow

$$
x_{k+1} = x_k - \gamma_{k+1} \nabla f\left(\nabla \phi^*\left(x_k\right)\right) + \sqrt{2 \gamma_{k+1}}\left(\nabla^2 \phi^*\left(x_k\right)^{-1}\right)^{1/2} \xi_{k+1}, \tag{ML}
$$

where $\phi^*$ is the Fenchel dual of $\phi$ [52]. In our framework, (ML) fits into (LRM) by taking $v = -\nabla f \circ \nabla \phi^*$ and $\sigma = (\nabla^2 \phi^*)^{-1/2}$. Additionally, one can also consider a stochastic version of (ML) with noisy evaluations of $\nabla f$.

# 3 Technique Overview: A Dynamical System Perspective

The goal of our paper is to provide last-iterate guarantees for the general LRM schemes introduced in Section 2. There are two equivalent and commonly considered ways of characterizing the dynamics of the iterates of an LRM scheme. The first is to view the iterates $\{x_k\}_{k\in\mathbb{N}}$ as a random trajectory in $\mathbb{R}^d$, which is perhaps the most natural way of describing a sampling algorithm. The second is to view the distributions $\{\rho_k\}_{k\in\mathbb{N}}$ of $\{x_k\}_{k\in\mathbb{N}}$ as a deterministic trajectory in the Wasserstein space.

With these two characterizations in mind, in this section we devise a new framework based on dynamical systems theory and present its high-level ideas.
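Before turning to the analysis, Example 4 can be made concrete with a toy mirror map. The sketch below is ours and purely illustrative: it takes the quadratic mirror map $\phi(x) = \tfrac{1}{2} x^\top \mathrm{diag}(a)\, x$, so that $\nabla\phi^*$, $\nabla^2\phi^*$, and the noise scaling in (ML) are all available in closed form, and targets the standard Gaussian $\pi = \mathcal{N}(0, I)$.

```python
import numpy as np

# Toy instance of the (ML) update. With the quadratic mirror map
# phi(x) = 0.5 * x^T diag(a) x (our illustrative choice):
#   grad phi*(y) = y / a,   hess phi*(y) = diag(1/a),
#   (hess phi*(y)^{-1})^{1/2} = diag(sqrt(a)),
# so every ingredient of (ML) is explicit. The target is pi = N(0, I),
# i.e., f(z) = ||z||^2 / 2 and grad f(z) = z.
a = np.array([1.0, 4.0])        # curvatures of the mirror map
rng = np.random.default_rng(1)
x = np.zeros(2)                 # dual-space iterate x_k
gamma = 0.05
primal = []
for k in range(20000):
    z = x / a                   # primal point grad phi*(x_k)
    xi = rng.standard_normal(2)
    x = x - gamma * z + np.sqrt(2.0 * gamma) * np.sqrt(a) * xi
    primal.append(x / a)        # primal iterates approximately follow pi
primal = np.asarray(primal)[5000:]
```

For this linear toy case one can check by hand that the stationary per-coordinate variance of the primal iterates is $2a/(2a - \gamma) \approx 1$, so comparing empirical moments against those of $\pi$ gives a quick sanity check of the update.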
+ +To understand our novelty, it is important to contrast our framework to the existing Wasserstein viewpoint towards Langevin-based sampling algorithms. Following the seminal work of Otto [44], one can view a sampling algorithm as the discretization of a class of well-studied dynamical systems—gradient flows. This viewpoint suggests using Lyapunov arguments, which has become the predominant approach in much prior work. + +Despite its appealing nature, in the rest of this section, we will argue that Lyapunov analysis of gradient flows is in fact not suited for studying generic non-convex sampling. In particular, we will show how our new framework is motivated to overcome the several important limitations of gradient flow analysis. Finally, we give a high-level overview of the techniques used in our paper. + +§ Langevin Diffusion as Gradient Flows. We denote by $\rho_t$ the probability density of $L_t$ in (LD), and consider the continuous curve $t \mapsto \rho_t$ in the Wasserstein space $\mathbb{W}_2$ . In their seminal works, Jordan et al. [27] and Otto [44] discover that this curve is the (exact) gradient flow of the relative entropy functional; that is, defining the functional $F: \rho \mapsto D_{\mathrm{KL}}(\rho \| e^{-f})$ , one has $\partial_t \rho_t = -\operatorname{grad} F(\rho_t)$ , where "grad" is the gradient in the Wasserstein sense. This gradient flow viewpoint of (LD) thus provides a clear link between sampling in $\mathbb{R}^d$ and optimization in $\mathbb{W}_2$ . Indeed, this suggests that the relative entropy is a natural choice for the Lyapunov function of the discrete-time sampling algorithm, which is a prominent approach for analyzing sampling algorithms in recent years [4, 21, 58]. + +Although the gradient flow viewpoint has led to a sequence of breakthroughs, it has a few important shortcomings: + +(a) The usual Lyapunov-type analysis for sampling algorithms focuses on bounding the change in relative entropy across iterations. 
This is extremely challenging when one considers more advanced sampling algorithms, as one has to understand the effect of the algorithm's additive bias and noise on the change of relative entropy. Crucially, this makes the Lyapunov analysis applicable only to the simple Euler-Maruyama discretization of (LD), i.e., $x_{k+1} = x_k - \gamma_{k+1}\nabla f(x_k) + \sqrt{2\gamma_{k+1}}\xi_{k+1}$, and fails to capture more advanced and biased sampling schemes such as Examples 1-4. Even for the simple (SGLD), the presence of stochastic gradients significantly complicates the Lyapunov analysis and requires extra assumptions such as convexity [21] or a uniform spectral gap [48].

(b) This gradient flow-based analysis often requires an extra averaging step to decrease the relative entropy (see, e.g., [4]). This is the main reason why many existing works provide guarantees only on the averaged iterates $(\bar{\rho}_k \coloneqq \frac{1}{k}\sum_{i=1}^k \rho_i)$ instead of the last ones $(\rho_k)$.

In this paper, we overcome these limitations by introducing a new perspective, whose two ingredients are as follows.

$\S$ Wasserstein Asymptotic Pseudotrajectories. A notion that will play a pivotal role in our analysis is the Wasserstein asymptotic pseudotrajectory (WAPT), which is a measure of "asymptotic closeness" in the Wasserstein sense, originally defined by Benaim and Hirsch [7] for metric spaces:

Definition 1 (Wasserstein asymptotic pseudotrajectory). We say the stochastic process $(X_t)_{t\geq 0}$ is a Wasserstein asymptotic pseudotrajectory (WAPT) of the SDE

$$
\mathrm{d}\Phi_t = v(\Phi_t)\,\mathrm{d}t + \sigma(\Phi_t)\,\mathrm{d}B_t \tag{SDE}
$$

if, for all $T > 0$,

$$
\lim_{t \rightarrow \infty} \sup_{0 \leq s \leq T} W_2\left(X_{t+s}, \Phi_s^{(t)}\right) = 0.
\tag{1}
$$

Here, $\Phi_s^{(t)}$ is the solution of the SDE at time $s$ initialized at $X_t$, and $W_2$ is the 2-Wasserstein distance.

Despite the seemingly convoluted definition, a WAPT can be intuitively understood as follows: Let $\{x_k\}_{k\in\mathbb{N}}$ be the iterates of a sampling scheme. Then, (1) simply posits that for sufficiently large $m$, one cannot distinguish the "tail" iterates $\{x_k\}_{k\geq m}$ from the SDE solution starting at $x_m$, up to an arbitrarily small error measured in the Wasserstein distance. Since we are only interested in the asymptotic behavior of $x_k$, these controls on the tail iterates will suffice to conclude the last-iterate convergence.

![](images/abcc683036929e988dcc4b4b03c616390d9869ba03762708e9405bff43883764.jpg)
Figure 1: High-level overview of the two components of the dynamical perspective.

Importantly, from the perspective of WAPT, the Langevin diffusion (LD) (or more generally, $\Phi_s^{(t)}$) is simply viewed as a generic dynamical system and not as a gradient flow. In particular, relative entropy will play no role throughout our analysis, thereby resolving issue (b).

§ Langevin-Robbins-Monro Schemes. We have seen that the LRM template in Section 2 is capable of capturing a broad range of existing and new algorithms in a unified way. To resolve the remaining issue (a), we further rely on the LRM template: to prove that (LRM) generates a WAPT of the corresponding SDE, we show that the key condition (1) in the definition of a WAPT can be reduced to checking an easy-to-verify bound on the perturbation terms $Z_k$.

To achieve this, the most important step in our proof, which distinguishes our analysis from all existing works in non-convex sampling, is the construction of the so-called Picard process, the natural generalization of Picard's method of successive approximations [17] from ordinary differential equations to stochastic differential equations.
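The ODE ancestor of this construction is easy to demonstrate numerically. The sketch below, ours and purely illustrative, runs Picard's successive approximations $Y_{n+1}(t) = x_0 + \int_0^t v(Y_n(u))\,\mathrm{d}u$ for the toy vector field $v(y) = -y$, whose exact solution is $e^{-t}$; the Picard process (6) plays the same role for the SDE, with an additional Itô integral against the shared Brownian motion.

```python
import numpy as np

# Picard's successive approximations for the ODE x'(t) = v(x(t)), x(0) = x0:
#   Y_{n+1}(t) = x0 + int_0^t v(Y_n(u)) du.
# Toy vector field v(y) = -y, whose exact solution is exp(-t).
v = lambda y: -y
t = np.linspace(0.0, 1.0, 2001)
h = t[1] - t[0]
x0 = 1.0

y = np.full_like(t, x0)          # Y_0: the constant initial guess
for n in range(30):              # one sweep = one Picard iteration
    g = v(y)
    # cumulative trapezoidal integral of v(Y_n) from 0 to each grid point t
    integral = np.concatenate(([0.0], np.cumsum(h * (g[1:] + g[:-1]) / 2.0)))
    y = x0 + integral

err = np.max(np.abs(y - np.exp(-t)))   # sup-norm error vs. the exact solution
```

Because the Picard map is a contraction on a bounded horizon, thirty sweeps drive the sup-norm error down to the quadrature resolution of the grid.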
In the stochastic approximation literature, similar techniques have been successfully applied to study optimization and games in various settings, such as on Riemannian or primal-dual spaces [26, 28, 37]. The application to sampling has also been explored by Bubeck et al. [11] and Chau et al. [12] in different contexts. What distinguishes our work from the existing literature is that we generalize the Picard process to encompass a vastly wider class of algorithms, namely the LRM schemes. Moreover, the integration of the Picard process with the theory of WAPTs plays a pivotal role in our analysis, and both of these aspects are original contributions.

$\S$ Framework overview. To conclude, for proving last-iterate convergence, we proceed as follows:

1. For a given LRM scheme $\{x_k\}_{k\in\mathbb{N}}$, we first construct a continuous-time trajectory $(X_t)_{t\geq 0}$ by interpolating the iterates (see (3)).
2. We prove that $(X_t)$ constitutes a WAPT of the SDE (see Theorem 1). This step relies heavily on the construction of the aforementioned Picard process.
3. By invoking the dynamical system theory of Benaim and Hirsch [7], the convergence of LRM schemes reduces to simply checking the stability condition (Theorem 2). In the Wasserstein space, this condition translates into boundedness of the second moments of the iterates $\{x_k\}$, for which there is a plethora of approaches; we present two such methods in Section 5.

Fig. 1 depicts a high-level overview of the ingredients needed in our framework and their corresponding theorems.

# 4 The Dynamics of Langevin-Robbins-Monro Schemes

In this section, we view (LRM) as a noisy and biased discretization of (LD).
To make this analogy precise, let $(B_t)_{t\geq 0}$ be a Brownian motion defined on a filtered probability space with filtration $(\mathcal{F}_t)_{t \geq 0}$, and define $\tau_k = \sum_{n=1}^k \gamma_n$ to be the effective time that has elapsed at iteration $k$. Using the Brownian motion, we can rewrite (LRM) as

$$
x_{k+1} = x_k - \gamma_{k+1}\left\{v(x_k) + Z_{k+1}\right\} + \sigma(x_k)\left(B_{\tau_{k+1}} - B_{\tau_k}\right), \tag{2}
$$

assuming that the filtration satisfies $Z_k \in \mathcal{F}_{\tau_k}$. The (continuous-time) interpolation $(X_t)_{t\geq 0}$ of $\{x_k\}_{k\in\mathbb{N}}$ is then defined as the adapted process

$$
X_t = x_k - (t - \tau_k)\left\{v(x_k) + \mathbb{E}\left[Z_{k+1} \mid \mathcal{F}_t\right]\right\} + \sigma(x_k)\left(B_t - B_{\tau_k}\right), \quad \text{for } t \in [\tau_k, \tau_{k+1}]. \tag{3}
$$

In addition, for a fixed $t$, consider the Brownian motion $(B_s^{(t)})_{s\geq 0}$, where $B_s^{(t)} \coloneqq B_{t+s} - B_t$, and define the Langevin flow $(\Phi_s^{(t)})_{s\geq 0}$ as the (strong) solution of (SDE) initialized at $X_t$. It is important to note that $\Phi^{(t)}$ and $X$ are synchronously coupled by sharing the same Brownian motion.

# 4.1 Technical Assumptions and Requirements

We now introduce the basic technical assumptions and discuss their generality.

Assumption 1. The vector field $v$ is $L$-Lipschitz and satisfies $\langle x, v(x) \rangle \leq C_v (1 + \|x\|)$ for some $C_v > 0$. Moreover, $\sigma$ is $L$-Lipschitz and is bounded in the Hilbert-Schmidt norm.

Lipschitzness of $v$ is a standard assumption and is also required to ensure the existence of a unique strong solution of (SDE).
The second assumption on the vector field is exceedingly weak and, when $v = -\nabla f$, is satisfied even for distributions without moments. The assumptions on the diffusion coefficient $\sigma$ are trivially satisfied when $\sigma \equiv 1$, and we show that they hold for practical schemes such as Example 4.

Assumption 2. The Robbins-Monro summability conditions hold: $\sum_{k=1}^{\infty} \gamma_k = \infty$ and $\sum_{k=1}^{\infty} \gamma_k^2 < \infty$. Moreover, for some constant $P$ to be defined in (20), we have

$$
\gamma_{k+1}/\gamma_k + P \gamma_k \gamma_{k+1} < 1 - \gamma_k, \quad \forall k. \tag{4}
$$

The Robbins-Monro step size conditions are standard in the non-convex sampling literature [4, 20, 29, 30]. As for (4), it can be verified that this condition is satisfied even for slowly decreasing step sizes such as $\gamma_k \propto (\sqrt{k}\log k)^{-1}$, and hence it is not restrictive.

Assumption 3. The noises $\{U_k\}_{k \in \mathbb{N}}$ form a martingale difference sequence, i.e., $\mathbb{E}[U_{k+1} \mid \mathcal{F}_{\tau_k}] = 0$, and have uniformly bounded second moments. In addition, the bias terms satisfy

$$
\mathbb{E}\left[\|b_{k+1}\|^2 \mid \mathcal{F}_{\tau_k}\right] = \mathcal{O}\left(\gamma_{k+1}^2 \|v(x_k)\|^2 + \gamma_{k+1}\right). \tag{5}
$$

A martingale difference sequence is more general than an i.i.d. sequence, allowing the noise to be state-dependent. The bias condition (5) simply states that the bias shall not overpower the signal $v(x_k)$ and, as we show later, is satisfied by all our examples.

# 4.2 From Discrete to Continuous: LRM Schemes and WAPTs

We are now in a position to state our main theorems. Our first result below establishes a precise link between the discrete-time (LRM) and the continuous-time (SDE).

Theorem 1. Under Assumptions 1-3, the interpolation (3) of an LRM scheme is a Wasserstein asymptotic pseudotrajectory of (SDE).
+ +§ Sketch of the Proof for Theorem 1. The proof of this theorem is heavily based on the notion of the Picard process and iterate moment bounds. The complete proof can be found in Appendix C. + +§ Step 1: The Picard Process. For a fixed $t > 0$ , recall the construction of the interpolation (3) and the Langevin flow. Central to our analysis is the Picard process, defined as + +$$ +Y _ {s} ^ {(t)} = X _ {t} + \int_ {0} ^ {s} v \left(X _ {t + u}\right) d u + \int_ {0} ^ {s} \sigma \left(X _ {t + u}\right) d B _ {u} ^ {(t)}. \tag {6} +$$ + +The Picard process is adapted and is (synchronously) coupled with the Langevin flow and the interpolation. We think of the Picard process as one step of the Picard iteration for successive approximations to solve ODEs. This means, intuitively, that its trajectory should be close to the original interpolation, as well as to that of the Langevin flow, playing the role of a "bridge". + +Fix $T > 0$ . For $s \in [0, T]$ , we decompose the distance between the interpolation $X_{t}$ in (3) and the Langevin flow as + +$$ +\frac {1}{2} \left\| X _ {t + s} - \Phi_ {s} ^ {(t)} \right\| ^ {2} \leq \left\| Y _ {s} ^ {(t)} - \Phi_ {s} ^ {(t)} \right\| ^ {2} + \left\| X _ {t + s} - Y _ {s} ^ {(t)} \right\| ^ {2}. \tag {7} +$$ + +We now bound each term of the decomposition. By synchronous coupling of the processes, Lipschitzness of $v$ , and Itô isometry, Lemma 3 bounds the first term as + +$$ +\left\| Y _ {s} ^ {(t)} - \Phi_ {s} ^ {(t)} \right\| ^ {2} \leq 2 (T + 1) L ^ {2} \int_ {0} ^ {s} \left\| \Phi_ {u} ^ {(t)} - X _ {t + u} \right\| ^ {2} d u. \tag {8} +$$ + +This will be suitable for later use of Gronwall's lemma. + +§ Step 2: Accumulated Noise and Bias. For the rest of the proof, we need some extra notation. Define $m(t) \coloneqq \sup \{k \geq 0 : \tau_k \leq t\}$ and the piecewise-constant process $\overline{X}_t \coloneqq x_{m(t)}$ . 
Going back to the second term of (7), observe that + +$$ +X _ {t + s} - Y _ {s} ^ {(t)} = \int_ {t} ^ {t + s} v \left(\bar {X} _ {u}\right) - v \left(X _ {u}\right) d u + \int_ {0} ^ {s} \sigma \left(\bar {X} _ {t + u}\right) - \sigma \left(X _ {t + u}\right) d B _ {u} ^ {(t)} - \Delta_ {Z} (t, s), \tag {9} +$$ + +where $\Delta_Z(t,s)$ is the accumulated noise and bias from time $t$ to time $t + s$ . It is expected that $\| \Delta_Z(t,s)\|$ eventually becomes negligible, since the step size becomes small. The next lemma confirms this intuition. + +Lemma 1. Suppose Assumptions 1-3 hold. Then, for any fixed $T > 0$ we have + +$$ +\lim_{t\to \infty}\sup_{0\leq s\leq T}\mathbb{E}\| \Delta_{Z}(t,s)\|^{2} = 0. +$$ + +§ Step 3: Gradient Moment Bounds. Based on (9) and Lemma 1, bounding the distance between the Picard process and the interpolation essentially reduces to bounding how much the discrete algorithm "moves" during one iteration in expectation. This, in turn, depends on how large the moments of $\| v(x_k) \|$ grow per iteration, which is controlled by the following lemma: + +Lemma 2. Let $\{x_{k}\}_{k\in \mathbb{N}}$ be the iterates of (LRM) and suppose Assumptions 1-3 hold. Then, $\mathbb{E}\| x_k\|^2 = O(1 / \gamma_{k + 1})$ . This in turn implies $\mathbb{E}\| v(x_k)\|^2 = O(1 / \gamma_{k + 1})$ and $\mathbb{E}\| b_{k + 1}\|^2 = O(\gamma_{k + 1})$ . + +Using this lemma and Lemma 1 we can obtain $A_{t} \coloneqq \sup_{0 \leq s \leq T} \mathbb{E}\|X_{t+s} - Y_{s}^{(t)}\|^{2} \to 0$ as $t \to \infty$ , which shows that the Picard process gets arbitrarily close to the interpolation as $t \to \infty$ . + +§ Step 4: Concluding the Proof. Let us go back to the decomposition (7). Taking expectation and using (8) and Gronwall's lemma, we obtain $\mathbb{E}\left[\| X_{t + s} - \Phi_s^{(t)}\|^2\right] \leq 4A_t\exp (T^2 L^2)$ , Thus, + +$$ +\lim _ {t \to \infty} \sup _ {s \in [ 0, T ]} \mathbb {E} \left[ \| X _ {t + s} - \Phi_ {s} ^ {(t)} \| ^ {2} \right] = 0. 
+$$ + +As we coupled $X_{t+s}$ and $\Phi_s^{(t)}$ in a specific way (via synchronizing the Brownian motions), we directly get an upper bound on the Wasserstein distance. + +# 5 Last-Iterate Convergence of Sampling Schemes + +In this section we focus on last-iterate convergence of LRM schemes in Wasserstein space. We first explore the interplay between the convergence of WAPTs and stability. We then show that the existing stability results for simple Euler-Maruyama discretization of the Langevin diffusion can be extended, with little to no extra assumptions, to the class of LRM schemes in Section 2. This in turn readily implies the last-iterate convergence of a wide class of LRM schemes. + +# 5.1 From WAPTs to Convergence in $\mathbb{W}_2$ + +Since convergence of the distribution of $x_{k}$ to $\pi$ in Wasserstein distance implies convergence of the second moments of $x_{k}$ to that of $\pi$ [3], convergence in the Wasserstein space should at least require: + +$$ +\sup _ {k \in \mathbb {N}} \mathbb {E} \| x _ {k} \| ^ {2} < \infty . \tag {10} +$$ + +It turns out that, for WAPTs, the exceedingly weak necessary condition (10) is also sufficient: + +Theorem 2. Let $(X_{t})$ be a Wasserstein asymptotic pseudotrajectory of the Langevin diffusion (LD) generated by an LRM scheme $\{x_{k}\}$ via (3). Then $W_{2}(x_{k},\pi)\to 0$ if and only if (10) holds. + +Proof. The proof relies on the structure of compact sets in the Wasserstein space and limit-set theorems for dynamical systems [7]. Specifically, the closure of bounded subsets of $\mathbb{W}_2$ is compact [3], so condition (10) implies that $(\mathrm{law}(X_t))_{t\geq 0}$ is pre-compact in $\mathbb{W}_2$ . Moreover, Assumption 1 implies that the Langevin flow is globally integrable. 
Thus, $(\mathrm{law}(X_t))_{t\geq 0}$ is a pre-compact WAPT of a globally integrable flow, and we can apply the limit-set theorem for metric spaces [7, Theorem 0.1] to conclude that the limit set of $(\mathrm{law}(X_t))_t$ is an internally chain transitive (ICT) set.

Next, we show that for the case of the Langevin flow, the only ICT set is $\{\pi\}$, implying the desired convergence of our theorem. To see this, define $V(\cdot) = D_{\mathrm{KL}}(\cdot \,\|\, \pi)$. It can be observed that $V$ is a Lyapunov function for (LD), whose value is strictly decreasing along the flow (as the time derivative of $V$ along the flow is the negative of the relative Fisher information, which is strictly positive for all measures other than $\pi$). Thus, all requirements of [6, Prop. 6.4] are satisfied, showing that the only point in the ICT set is $\pi$. This also shows the uniqueness of the stationary distribution of (LD).

Remark. From the proof of Theorem 1, we observe that the supremum of the Wasserstein distance between $(X_u)_{u\in[t,t+T]}$ and $(\Phi_s^{(t)})_{s\in[0,T]}$ typically scales exponentially with $T$, which is common for weak approximation errors in the literature; see [40]. Despite the exponential dependence on $T$, the convergence of the last iterate is assured by Theorem 2 without the need for uniform control in $T$. This is primarily attributed to the adoption of a dynamical system viewpoint and the application of the corresponding tools, effectively harnessing the paradigm established by Benaim and Hirsch.

Theorems 1-2 in tandem thus show that, as long as an LRM scheme satisfies Assumptions 1-3 and the moment condition (10), the desirable last-iterate convergence in $\mathbb{W}_2$ is immediately attained. Therefore, in the rest of this section, we turn our focus to establishing (10) for LRM schemes.
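The stability condition (10) is also easy to probe empirically. The sketch below, ours and illustrative only, runs the plain Euler-Maruyama discretization on the non-convex potential $f(x) = \|x\|^2/2 + \cos(\|x\|)$, which satisfies $\langle x, -\nabla f(x)\rangle \leq -\|x\|^2/2 + 1/2$ (dissipativity in the sense of Assumption 4 with $v = -\nabla f$), and tracks an empirical estimate of $\mathbb{E}\|x_k\|^2$ across many parallel chains.

```python
import numpy as np

# Empirical check of the stability condition (10) on the dissipative but
# non-convex potential f(x) = ||x||^2/2 + cos(||x||), for which
#   grad f(x) = x * (1 - sin(||x||)/||x||).
def grad_f(x):
    r = np.linalg.norm(x, axis=-1, keepdims=True)
    return x - np.sin(r) * x / np.maximum(r, 1e-12)

rng = np.random.default_rng(2)
n, d = 512, 2                         # 512 independent chains in dimension 2
x = rng.standard_normal((n, d)) * 5.0 # deliberately far initialization
second_moments = []
for k in range(3000):
    gamma = 0.5 / np.sqrt(k + 10)
    xi = rng.standard_normal((n, d))
    x = x - gamma * grad_f(x) + np.sqrt(2.0 * gamma) * xi
    second_moments.append(np.mean(np.sum(x**2, axis=1)))
```

The estimated second moment decays from the far initialization and then plateaus at a bounded level, consistent with the sup-over-$k$ bound that Theorem 3 guarantees under dissipativity.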
+ +# 5.2 Bounded Moments of LRM Schemes + +There is a long history of study on conditions that ensure (10) for iterative algorithms, which has culminated in the so-called dissipativity properties. We consider two such examples below. + +Assumption 4 (Dissipativity). There exist constants $\alpha >0$ and $\beta \geq 0$ such that + +$$ +\langle x, v (x) \rangle \leq - \alpha \| x \| ^ {2} + \beta , \quad \forall x \in \mathbb {R} ^ {d}. +$$ + +Under Assumption 4, it is classical that (10) holds for the simple Euler-Maruyama discretization of (LD) with deterministic or stochastic gradient oracles [23, 29, 30, 38, 48, 51, 54]. These studies, however, cannot handle non-zero bias, which, as seen in Examples 1-3, is crucial for incorporating more advanced sampling schemes. + +To this end, our next result shows that for a wide class of LRM schemes, the stability (10) essentially comes for free under Assumption 4. The proof is provided in Appendix D. + +Theorem 3. Let $v$ be a vector field satisfying Assumptions 1 and 4 and $\sigma$ be a diffusion coefficient satisfying Assumption 1, and let $\{x_{k}\}$ be an LRM scheme. Assume that $\lim_{k\to \infty}\gamma_k = 0$ , $\sup_k\mathbb{E}\| U_k\|^2 < \infty$ , and the bias satisfies (5). Then, the stability condition (10) holds for $\{x_{k}\}$ . + +A weaker notion of dissipativity that has been studied in the literature is: + +Assumption 5 (Weak dissipativity). There exist constants $\alpha >0$ , $\kappa \in (0,1]$ , and $\beta \geq 0$ such that + +$$ +\langle x, v (x) \rangle \leq - \alpha \| x \| ^ {1 + \kappa} + \beta , \quad \forall x \in \mathbb {R} ^ {d}. +$$ + +
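As a quick sanity check (ours, not from the paper), the potential $f(x) = \|x\|^{3/2}$ illustrates the gap between the two assumptions: $\langle x, v(x)\rangle = -\tfrac{3}{2}\|x\|^{3/2}$, so Assumption 5 holds with $\alpha = 3/2$, $\kappa = 1/2$, $\beta = 0$, whereas no quadratic bound of the form in Assumption 4 can hold far from the origin. A short numerical verification:

```python
import numpy as np

rng = np.random.default_rng(0)

# v(x) = -grad f(x) for f(x) = ||x||^{3/2}; f grows subquadratically,
# so v satisfies weak dissipativity (Assumption 5, kappa = 1/2)
# but not the quadratic version (Assumption 4).
def v(x):
    r = np.linalg.norm(x)
    return -1.5 * r**0.5 * (x / r) if r > 0 else np.zeros_like(x)

alpha, kappa, beta = 1.5, 0.5, 0.0
for _ in range(1000):
    x = 100.0 * rng.standard_normal(3)        # points of widely varying norm
    r = np.linalg.norm(x)
    lhs = float(np.dot(x, v(x)))              # equals -1.5 * r**1.5 analytically
    assert lhs <= -alpha * r**(1 + kappa) + beta + 1e-6 * (1 + r**1.5)
```

(Here $f$ is not differentiable at the origin; the example is only meant to illustrate the inequality away from $0$.)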
|  | Noise | Bias | Last-iterate |
| :-- | :-: | :-: | :-: |
| Lamberton and Pages [29], Lemaire [30] | ✗ | ✗ | ✗ |
| Teh et al. [54] | ✓ | ✗ | ✗ |
| Benaïm et al. [8] | ✗ | ✗ | ✓ |
| Durmus and Moulines [20] | ✗ | ✗ | ✓ |
| Balasubramanian et al. [4] | ✗ | ✗ | ✗ |
| This work | ✓ | ✓ | ✓ |
Table 1: Comparison to existing works on convergence of LRM schemes. All methods, except for [4], require bounded second moments of the iterates.

When $\kappa = 1$, Assumption 5 reduces to Assumption 4. As opposed to Assumption 4, which requires quadratic growth of $f$ outside a compact set (when $v = -\nabla f$), Assumption 5 only entails superlinear growth and is therefore considerably weaker.

For the Euler-Maruyama discretization of (LD) with deterministic gradients, [20] prove that Assumption 5 is sufficient to guarantee bounded moments of the iterates. As for a generic LRM scheme, we consider the following general condition on the bias terms, which will suffice to cover all our examples in Section 2: For some constant $c$,

$$
\left\| b _ {k + 1} \right\| ^ {2} \leq c \left(\gamma_ {k + 1} ^ {2} \| v (x _ {k}) \| ^ {2} + \gamma_ {k + 1} ^ {2} \| U _ {k + 1} ^ {\prime} \| ^ {2} + \gamma_ {k + 1} \| \xi_ {k + 1} ^ {\prime} \| ^ {2} + \gamma_ {k + 1} \| \xi_ {k + 1} \| ^ {2}\right), \tag {11}
$$

where $U_{k+1}^{\prime}$ is an extra noise term, and $\xi_{k+1}^{\prime}$ is a standard Gaussian independent of the other noises and of $\xi_k$. The price to pay for the weaker Assumption 5, however, is that we need to assume sub-Gaussianity of the noise. For a proof, see Appendix D.

Theorem 4. Let $\pi \propto e^{-f}$ be the target distribution, where $v = -\nabla f$ satisfies Assumptions 1 and 5, and let $\{x_{k}\}$ be an LRM scheme. Assume that $\lim_{k\to \infty}\gamma_k = 0$, the noises $U_{k}$ and $U_{k}^{\prime}$ are sub-Gaussian, and the bias term of $\{x_{k}\}$ satisfies (11). Then, (10) holds for $\{x_{k}\}$ when (i) $\sigma \equiv 1$, or (ii) $f$ is Lipschitz and the LRM scheme follows the Mirror Langevin algorithm (Example 4).

# 5.3 Examples of Convergent LRM Schemes

We now illustrate the use of Theorems 1-4 on our examples in Section 2.

Proposition 1.
Under Assumption 1 and noise with uniformly bounded second moments, the following holds for Examples 1-6: (i) the bias has the form (11) and satisfies (5); (ii) as a result, under Assumptions 2 and 3, Examples 1-6 produce iterates that generate a WAPT of (SDE); (iii) under the additional conditions of Theorem 3 or Theorem 4, Examples 1-6 enjoy last-iterate convergence to the target distribution in Wasserstein distance.

# 5.4 Comparison to Existing Work

We now give a more detailed comparison of our results to the existing literature; a summary is given in Table 1, and an additional comparison with prior works can be found in Appendix B.

$\S$ Guarantees for LRM Schemes. Lamberton and Pages [29] and Lemaire [30] study the simple Euler-Maruyama discretization of (LD) with deterministic gradients (i.e., $U_{k} = b_{k} = 0$) and establish the weak convergence of the average iterates under a moment condition that is slightly weaker than (10). Their analysis is further extended by [54] to incorporate stochastic gradients. Later, the last-iterate convergence of the simple Euler-Maruyama discretization of (LD) was studied by [20], who prove convergence in the total variation distance under Assumption 5. Another work in a setting similar to that of [20] is [8], where the convergence criterion is given in an integral probability metric (IPM) [42] of the form $d_{\mathcal{B}}(\mu ,\nu)\coloneqq \sup_{\varphi \in \mathcal{B}}|\mathbb{E}_{\mu}\varphi -\mathbb{E}_{\nu}\varphi |$ for a certain class of test functions $\mathcal{B}$; this metric is known to imply weak convergence, but not convergence in total variation or Wasserstein distances.

Compared to these results, our guarantees possess the following desirable features:

- The convergence is always on the last iterates instead of the average iterates.
- As we tolerate biased algorithms, the class of LRM schemes we consider is significantly more general than the ones in existing work.
Finally, we note that our results are incomparable to the recent work of Balasubramanian et al. [4], who derive the same result as in [29, 30], i.e., average-iterate, weak convergence for the deterministic Euler-Maruyama discretization. A remarkable feature of the analysis in [4] is that it does not require any bounded moments; in particular, their bounds can be applied to target distributions with unbounded variance. However, the downside of [4] is that, in the presence of $U_{k}$ and $b_{k}$, their analysis produces a bound that does not vanish as $k\to \infty$; see [4, Theorem 15]. In contrast, our framework tolerates quite general $U_{k}$ and $b_{k}$ and gives stronger guarantees ($\mathbb{W}_2$ convergence vs. weak convergence; last-iterate vs. average-iterate).

$\S$ On Analysis Techniques. While, to our knowledge, our framework is significantly different from previous works on sampling, we acknowledge that the idea of creating an auxiliary process between the iterates and the continuous-time flow is not entirely new and has been touched upon in the literature, e.g., [10, 12]. That being said, our specific approach of building the Picard process and extending it to a wide array of algorithms, i.e., Langevin-Robbins-Monro schemes, plays a pivotal role in our analysis. Moreover, the integration of the Picard process with the theory of asymptotic pseudotrajectories offers dual benefits to our study, and we view these as our unique contributions to this area of research.

Furthermore, the novel Picard process gives a significant advantage in all of our results. The work of [8] also hinges on ideas from dynamical system theory. Yet, missing the critical step of the Picard process has seemingly resulted in much weaker findings compared to our work. This observation is not meant as a critique; rather, it highlights the potency of the method we have integrated into our study.
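As a concrete illustration of the bias tolerance discussed above, the following self-contained sketch (ours, not from the paper) runs the implicit proximal Langevin step of Example 6 (Appendix A) on the Gaussian target $f(x) = \|x\|^2/2$. For this quadratic potential the implicit equation $x_{k+1} = x_k - \gamma \nabla f(x_{k+1}) + \sqrt{2\gamma}\,\xi_{k+1}$ has the closed form $x_{k+1} = (x_k + \sqrt{2\gamma}\,\xi_{k+1})/(1+\gamma)$ (a simplification specific to this example), and one can check numerically that the bias $b_{k+1} = \nabla f(x_{k+1}) - \nabla f(x_k)$ stays within a bound of the form (11):

```python
import numpy as np

rng = np.random.default_rng(1)

# Target pi = N(0, I): f(x) = ||x||^2 / 2, grad f(x) = x, v(x) = -x.
# Implicit (proximal) step solved in closed form for this quadratic f.
d = 3
x = rng.standard_normal(d)
bias_over_step = []
for k in range(1, 5001):
    gamma = 1.0 / k**0.7
    xi = rng.standard_normal(d)
    x_next = (x + np.sqrt(2 * gamma) * xi) / (1 + gamma)
    b = x_next - x                 # b_{k+1} = grad f(x_{k+1}) - grad f(x_k)
    # Ratio ||b||^2 / (gamma^2 ||v(x)||^2 + gamma ||xi||^2): bounded as in (11)
    denom = gamma**2 * np.dot(x, x) + gamma * np.dot(xi, xi)
    bias_over_step.append(float(np.dot(b, b) / denom))
    x = x_next

assert max(bias_over_step) < 10.0  # in fact the ratio is at most 4 for this example
```

The step-size schedule is an arbitrary Robbins-Monro choice; the point is only that the observed ratio is bounded by a constant, as (11) requires.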
+ +# 6 Concluding Remarks + +In this paper, we provided a new, unified framework for analyzing a wide range of sampling schemes, thus laying the theoretical ground for using them in practice, as well as motivating new and more efficient sampling algorithms that enjoy rigorous guarantees. We built on the ideas from dynamical system theory, and gave a rather complete picture of the asymptotic behavior of many first-order sampling algorithms. In short, our results help with the following: + +- Validating existing methods: Methods like mirror Langevin and randomized mid-point currently lack even asymptotic guarantees in fully non-convex scenarios, such as sampling from neural network-defined distributions. Our work fills this gap by offering the first rigorous justification for these schemes, supporting practitioners in utilizing these methods confidently. +- Facilitating new algorithm design: Our work motivates novel sampling methods through a straightforward verification of Assumptions 1-3. An illustrative instance involves the randomized mid-point method and Runge-Kutta integrators, wherein a substantial $50\%$ reduction in computation per iteration can be achieved without compromising convergence by simply recycling past gradients, shown in Example 2. The balance between the benefits of saving gradient oracles and potential drawbacks remains an open question, necessitating case-by-case practical evaluation. Nevertheless, our theory provides a flexible algorithmic design template that extends beyond the current literature's scope. + +While our WAPT result holds under very mild conditions, a severe limitation of our current framework is that it only applies to Langevin-based algorithms, whereas there exist numerous practical sampling schemes, such as Metropolis-Hastings, that are not immediately linked to (LD). 
We believe that this restriction arises as an artifact of our analysis, as the WAPT framework can in principle be applied equally well to any continuous-time dynamics. Lifting this constraint is an interesting direction for future work.

# Acknowledgments and Disclosure of Funding

This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program grant agreement No 815943. YPH acknowledges funding through an ETH Foundations of Data Science (ETH-FDS) postdoctoral fellowship.

# References

[1] Kwangjun Ahn and Sinho Chewi. Efficient constrained sampling via the mirror-Langevin algorithm. Advances in Neural Information Processing Systems, 34:28405-28418, 2021.
[2] Sungjin Ahn, Anoop Korattikara, and Max Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. In Proceedings of the 29th International Conference on Machine Learning, pages 1771-1778, 2012.
[3] Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media, 2005.
[4] Krishna Balasubramanian, Sinho Chewi, Murat A Erdogdu, Adil Salim, and Shunshi Zhang. Towards a theory of non-log-concave sampling: first-order stationarity guarantees for Langevin Monte Carlo. In Conference on Learning Theory, pages 2896-2923. PMLR, 2022.
[5] Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167-175, 2003.
[6] Michel Benaïm. Dynamics of stochastic approximation algorithms. In *Séminaire de probabilités XXXIII*, pages 1–68. Springer, 1999.
[7] Michel Benaïm and Morris W Hirsch. Asymptotic pseudotrajectories and chain recurrent flows, with applications. Journal of Dynamics and Differential Equations, 8(1):141-176, 1996.
[8] Michel Benaïm, Florian Bouguet, and Bertrand Cloez.
Ergodicity of inhomogeneous Markov chains through asymptotic pseudotrajectories. The Annals of Applied Probability, 27(5):3004-3049, 2017.
[9] Espen Bernton. Langevin Monte Carlo and JKO splitting. In Conference on Learning Theory, pages 1777-1798. PMLR, 2018.
[10] Sébastien Bubeck, Ronen Eldan, and Joseph Lehec. Sampling from a log-concave distribution with projected Langevin Monte Carlo. arXiv preprint arXiv:1507.02564, 2015.
[11] Sébastien Bubeck, Ronen Eldan, and Joseph Lehec. Sampling from a log-concave distribution with projected Langevin Monte Carlo. Discrete & Computational Geometry, 59:757-783, 2018.
[12] Ngoc Huy Chau, Éric Moulines, Miklos Rásonyi, Sotirios Sabanis, and Ying Zhang. On stochastic gradient Langevin dynamics with dependent data streams: the fully non-convex case, 2021.
[13] Xiang Cheng, Niladri S Chatterji, Yasin Abbasi-Yadkori, Peter L Bartlett, and Michael I Jordan. Sharp convergence rates for Langevin dynamics in the nonconvex setting. arXiv preprint arXiv:1805.01648, 2018.
[14] Sinho Chewi. Log-concave sampling, 2023.
[15] Sinho Chewi, Murat A Erdogdu, Mufan Bill Li, Ruoqi Shen, and Matthew Zhang. Analysis of Langevin Monte Carlo from Poincaré to log-Sobolev. arXiv preprint arXiv:2112.12662, 2021.
[16] Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In Conference on Learning Theory, pages 6-1. JMLR Workshop and Conference Proceedings, 2012.
[17] Earl A Coddington and Norman Levinson. Theory of ordinary differential equations. Tata McGraw-Hill Education, 1955.

[18] Arnak S Dalalyan and Avetik G Karagulyan. User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. arXiv preprint arXiv:1710.00095, 2017.
[19] Arnak S. Dalalyan and Lionel Riou-Durand. On sampling from a log-concave density using kinetic Langevin diffusions. Bernoulli, 26(3):1956 - 1988, 2020. doi: 10.3150/19-BEJ1178.
URL https://doi.org/10.3150/19-BEJ1178.
[20] Alain Durmus and Eric Moulines. Nonasymptotic convergence analysis for the unadjusted Langevin algorithm. The Annals of Applied Probability, 27(3):1551-1587, 2017.
[21] Alain Durmus, Szymon Majewski, and Błażej Miasojedow. Analysis of Langevin Monte Carlo via convex optimization. The Journal of Machine Learning Research, 20(1):2666-2711, 2019.
[22] Murat A Erdogdu, Lester Mackey, and Ohad Shamir. Global non-convex optimization with discretized diffusions. Advances in Neural Information Processing Systems, 31, 2018.
[23] J Hale. Asymptotic behavior of dissipative systems. American Mathematical Society, 1988.
[24] Ye He, Krishnakumar Balasubramanian, and Murat A Erdogdu. On the ergodicity, bias and asymptotic normality of randomized midpoint sampling method. Advances in Neural Information Processing Systems, 33:7366-7376, 2020.
[25] Ya-Ping Hsieh, Ali Kavis, Paul Rolland, and Volkan Cevher. Mirrored Langevin dynamics. Advances in Neural Information Processing Systems, 31, 2018.
[26] Ya-Ping Hsieh, Panayotis Mertikopoulos, and Volkan Cevher. The limits of min-max optimization algorithms: Convergence to spurious non-critical sets. In International Conference on Machine Learning, pages 4337-4348. PMLR, 2021.
[27] Richard Jordan, David Kinderlehrer, and Felix Otto. The variational formulation of the Fokker-Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1-17, 1998.
[28] Mohammad Reza Karimi, Ya-Ping Hsieh, Panayotis Mertikopoulos, and Andreas Krause. The dynamics of Riemannian Robbins-Monro algorithms. arXiv preprint arXiv:2206.06795, 2022.
[29] Damien Lamberton and Gilles Pages. Recursive computation of the invariant distribution of a diffusion. Bernoulli, pages 367-405, 2002.
[30] Vincent Lemaire. Estimation récursive de la mesure invariante d'un processus de diffusion. PhD thesis, Université de Marne la Vallée, 2005.
[31] Ruilin Li, Molei Tao, Santosh S. Vempala, and Andre Wibisono.
The mirror langevin algorithm converges with vanishing bias, 2021. +[32] Ruilin Li, Hongyuan Zha, and Molei Tao. Sqrt(d) dimension dependence of langevin monte carlo. In The International Conference on Learning Representations, 2022. +[33] Xuechen Li, Yi Wu, Lester Mackey, and Murat A Erdogdu. Stochastic runge-kutta accelerates Langevin monte carlo and beyond. Advances in neural information processing systems, 32, 2019. +[34] Xuechen Li, Denny Wu, Lester Mackey, and Murat A. Erdogdu. Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond, February 2020. +[35] Yi-An Ma, Niladri S Chatterji, Xiang Cheng, Nicolas Flammarion, Peter L Bartlett, and Michael I Jordan. Is there an analog of nesterov acceleration for gradient-based mcmc? Bernoulli, 27(3):1942-1992, 2021. +[36] Mateusz B Majka, Aleksandar Mijatović, and Łukasz Szpruch. Nonasymptotic bounds for sampling algorithms without log-concavity. The Annals of Applied Probability, 30(4):1534-1581, 2020. +[37] Panayotis Mertikopoulos, Ya-Ping Hsieh, and Volkan Cevher. Learning in games from a stochastic approximation viewpoint. arXiv preprint arXiv:2206.03922, 2022. + +[38] Sean P Meyn and Richard L Tweedie. Stability of markovian processes iii: Foster-lyapunov criteria for continuous-time processes. Advances in Applied Probability, 25(3):518-548, 1993. +[39] Sean P Meyn and Richard L Tweedie. Markov chains and stochastic stability. Springer Science & Business Media, 2012. +[40] Grigori N Milstein and Michael V Tretyakov. Stochastic numerics for mathematical physics, volume 39. Springer, 2004. +[41] Wenlong Mou, Nicolas Flammarion, Martin J Wainwright, and Peter L Bartlett. Improved bounds for discretization of Langevin diffusions: Near-optimal rates without convexity. Bernoulli, 28(3):1577-1601, 2022. +[42] Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429-443, 1997. +[43] AS Nemirovsky and DB Yudin. 
Problem Complexity and Method Efficiency in Optimization. J. Wiley & Sons, New York, 1983.
[44] Felix Otto. The geometry of dissipative evolution equations: the porous medium equation. Communications in Partial Differential Equations, 26, 2001.
[45] Grigorios A Pavliotis. Stochastic processes and applications: diffusion processes, the Fokker-Planck and Langevin equations, volume 60. Springer, 2014.
[46] Marcelo Pereyra. Proximal Markov chain Monte Carlo algorithms. Statistics and Computing, 26(4):745-760, 2016.
[47] Leonid Denisovich Popov. A modification of the Arrow-Hurwicz method for search of saddle points. Mathematical Notes of the Academy of Sciences of the USSR, 28(5):845-848, 1980.
[48] Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky. Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis. In Conference on Learning Theory, pages 1674–1703. PMLR, 2017.
[49] Sasha Rakhlin and Karthik Sridharan. Optimization, learning, and games with predictable sequences. Advances in Neural Information Processing Systems, 26, 2013.
[50] Herbert Robbins and Sutton Monro. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3):400 - 407, 1951. doi: 10.1214/aoms/1177729586. URL https://doi.org/10.1214/aoms/1177729586.
[51] Gareth O Roberts and Richard L Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, pages 341-363, 1996.
[52] Ralph Tyrell Rockafellar. Convex analysis. Princeton University Press, 2015.
[53] Ruoqi Shen and Yin Tat Lee. The randomized midpoint method for log-concave sampling. Advances in Neural Information Processing Systems, 32, 2019.
[54] Yee Whye Teh, Alexandre H Thiery, and Sebastian J Vollmer. Consistency and fluctuations for stochastic gradient Langevin dynamics. Journal of Machine Learning Research, 17, 2016.
[55] Belinda Tzen, Anant Raj, Maxim Raginsky, and Francis Bach.
Variational principles for mirror descent and mirror langevin dynamics, 2023. +[56] Santosh Vempala and Andre Wibisono. Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices. Advances in neural information processing systems, 32, 2019. +[57] Max Welling and Yee Whye Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In ICML, page 8, 2011. +[58] Andre Wibisono. Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. In Conference on Learning Theory, pages 2093-3027. PMLR, 2018. + +[59] Andre Wibisono. Proximal Langevin algorithm: Rapid convergence under isoperimetry. arXiv preprint arXiv:1911.01469, 2019. +[60] Pan Xu, Jinghui Chen, Difan Zou, and Quanquan Gu. Global convergence of Langevin dynamics based algorithms for nonconvex optimization. Advances in Neural Information Processing Systems, 31, 2018. +[61] Kelvin Shuangjian Zhang, Gabriel Peyré, Jalal Fadili, and Marcelo Pereyra. Wasserstein control of mirror langevin monte carlo. In Conference on Learning Theory, pages 3814-3841. PMLR, 2020. +[62] Kelvin Shuangjian Zhang, Gabriel Peyré, Jalal Fadili, and Marcelo Pereyra. Wasserstein control of mirror langevin monte carlo. In Jacob Abernethy and Shivani Agarwal, editors, Proceedings of Thirty Third Conference on Learning Theory, volume 125 of Proceedings of Machine Learning Research, pages 3814-3841. PMLR, 09-12 Jul 2020. URL https://proceedings.mlr.press/v125/zhang20a.html. +[63] Difan Zou, Pan Xu, and Quanquan Gu. Sampling from non-log-concave distributions via variance-reduced gradient Langevin dynamics. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 2936-2945. PMLR, 2019. + +# A Further Examples of LRM Schemes + +Example 5. 
The classic Stochastic Gradient Langevin Dynamics [57] iterates as + +$$ +x _ {k + 1} = x _ {k} - \gamma_ {k + 1} \widetilde {\nabla} f (x _ {k}) + \sqrt {2 \gamma_ {k + 1}} \xi_ {k + 1}, \tag {SGLD} +$$ + +where $\widetilde{\nabla} f$ is the gradient of the negative log-likelihood of a random batch of the data. (SGLD) fits the LRM template by setting $U_{k + 1}\coloneqq \widetilde{\nabla} f(x_k) - \nabla f(x_k)$ , and $b_{k + 1}\coloneqq 0$ . + +Example 6. The Proximal Langevin Algorithm [9, 46, 59] is defined via + +$$ +x _ {k + 1} = x _ {k} - \gamma_ {k + 1} \nabla f \left(x _ {k + 1}\right) + \sqrt {2 \gamma_ {k + 1}} \xi_ {k + 1}. \tag {PLA} +$$ + +This algorithm is implicit, and it is assumed that one can solve (PLA) for $x_{k + 1}$ . By setting $b_{k + 1} \coloneqq \nabla f(x_{k + 1}) - \nabla f(x_k)$ and $U_{k + 1} \coloneqq 0$ , we see that this algorithm also follows the LRM template. + +# B Additional Related Work + +Our paper studies the behavior of a wide range of Langevin-based sampling algorithms proposed in the literature in the asymptotic setting under minimal assumptions. This allows us to give last-iterate guarantees in Wasserstein distance. As stressed in Section 1, our goal is not to provide non-asymptotic rates in this general setting as the problem is inherently NP-Hard. However, given more assumptions and structures on the potential $f$ , there is a plethora of works which prove convergence rates for the last iterates in Wasserstein distance. In this appendix, we provide additional background for these works and the methods used in the literature. + +A powerful framework for quantifying the global discretization error of a numerical algorithm is the mean-square analysis framework [40]. This framework furnishes a general recipe for controlling short and long-term integration errors. 
For sampling, this framework has been applied to prove convergence rates for Langevin Monte Carlo (the Euler-Maruyama discretization of (LD)) in the strongly-convex setting [32, 34]. Similar to our work, the convergence obtained in these works is last-iterate and in Wasserstein distance. One of the essential ingredients in the latter work is the contraction property of the SDE, which is ensured by the strong convexity assumption. This, in turn, implies strong non-asymptotic convergence guarantees.

It is an interesting future direction to study the combination of the mean-square analysis with the Picard process, and its applicability to more sophisticated algorithms (such as LRM schemes with bias and noise), as well as to non-convex potentials.

As explained in Section 3, one of the main themes in proving error bounds for sampling is the natural relation between sampling and optimization in the Wasserstein space. This point of view, when applied to strongly-convex potentials, has produced numerous non-asymptotic guarantees; see [14, 18] for a recent account and the references therein. Note that strong convexity is crucial for the analysis used in the aforementioned work. Moreover, the error bounds for biased and noisy discretizations do not decrease with the step size or iteration count; see [18, Theorem 4, Eqn. (14)]. This means that while the bound is non-asymptotic, it does not automatically result in asymptotic convergence. Finally, we stress that these approaches are orthogonal to our techniques: we view a sampling algorithm as a (noisy and biased) discretization of a dynamical system (and not necessarily a gradient flow), and use tools from dynamical system theory to provide asymptotic convergence results.

# C Proofs for Section 4

# C.1 Proof of Theorem 1

In this appendix, we present the detailed proof of Theorem 1.
Recall that we interpolate the iterates of the LRM scheme $\{x_{k}\}$ as

$$
X _ {t} = x _ {k} + \left(t - \tau_ {k}\right) \left\{v \left(x _ {k}\right) + \mathbb {E} \left[ Z _ {k + 1} \mid \mathcal {F} _ {t} \right] \right\} + \sigma \left(x _ {k}\right) \left(B _ {t} - B _ {\tau_ {k}}\right). \tag {3}
$$

Moreover, for a fixed $t > 0$, we considered the Brownian motion $B_{s}^{(t)} = B_{t + s} - B_{t}$, and constructed two important processes: the Langevin flow defined via

$$
\mathrm {d} \Phi_ {s} ^ {(t)} = v \left(\Phi_ {s} ^ {(t)}\right) \mathrm {d} s + \sigma \left(\Phi_ {s} ^ {(t)}\right) \mathrm {d} B _ {s} ^ {(t)}, \quad \Phi_ {0} ^ {(t)} = X _ {t}, \tag {12}
$$

and the Picard process, constructed as

$$
Y _ {s} ^ {(t)} = X _ {t} + \int_ {0} ^ {s} v \left(X _ {t + u}\right) \mathrm {d} u + \int_ {0} ^ {s} \sigma \left(X _ {t + u}\right) \mathrm {d} B _ {u} ^ {(t)}. \tag {6}
$$

Let us fix $T > 0$, and for $s \in [0, T]$ decompose the distance between the interpolation and the Langevin flow as

$$
\frac {1}{2} \left\| X _ {t + s} - \Phi_ {s} ^ {(t)} \right\| ^ {2} \leq \left\| Y _ {s} ^ {(t)} - \Phi_ {s} ^ {(t)} \right\| ^ {2} + \left\| X _ {t + s} - Y _ {s} ^ {(t)} \right\| ^ {2}, \tag {7}
$$

where we have used $\| a + b\| ^2\leq 2\| a\| ^2 +2\| b\| ^2$. We now bound each term of this decomposition in expectation. Notice that, due to the synchronous coupling of the processes, the Brownian motion cancels out in the differences.

The first term controls how close the Picard process is to the Langevin flow, and is bounded in the following lemma.

Lemma 3. For fixed $t, T > 0$ and $0 \leq s \leq T$, the distance between the Picard process and the Langevin flow is bounded as

$$
\mathbb {E} \| Y _ {s} ^ {(t)} - \Phi_ {s} ^ {(t)} \| ^ {2} \leq 2 (T + 1) L ^ {2} \int_ {0} ^ {s} \mathbb {E} \| \Phi_ {u} ^ {(t)} - X _ {t + u} \| ^ {2} \mathrm {d} u.
$$

Proof.
By the auxiliary Lemma 4 below, Lipschitzness of $v, \sigma$, Itô isometry (see, e.g., [62]) and $s \leq T$, we have

$$
\begin{array}{l} \mathbb {E} \| Y _ {s} ^ {(t)} - \Phi_ {s} ^ {(t)} \| ^ {2} = \mathbb {E} \left\| \int_ {0} ^ {s} v (\Phi_ {u} ^ {(t)}) - v (X _ {t + u}) \mathrm {d} u + \int_ {0} ^ {s} \sigma (\Phi_ {u} ^ {(t)}) - \sigma (X _ {t + u}) \mathrm {d} B _ {u} ^ {(t)} \right\| ^ {2} \\ \leq 2 s \int_ {0} ^ {s} \mathbb {E} \left\| v \left(\Phi_ {u} ^ {(t)}\right) - v \left(X _ {t + u}\right) \right\| ^ {2} d u + 2 \mathbb {E} \int_ {0} ^ {s} \left\| \sigma \left(X _ {t + u}\right) - \sigma \left(\Phi_ {u} ^ {(t)}\right) \right\| _ {F} ^ {2} d u \\ \leq 2 (T + 1) L ^ {2} \int_ {0} ^ {s} \mathbb {E} \left\| \Phi_ {u} ^ {(t)} - X _ {t + u} \right\| ^ {2} d u. \\ \end{array}
$$

For the rest of the proof, we need to define the continuous-time piecewise-constant processes $\overline{X}(\tau_k + s) = x_k$, $\overline{\gamma}(\tau_k + s) = \gamma_{k+1}$, $\overline{Z}(\tau_k + s) = Z_{k+1}$, and $Z(\tau_k + s) = \mathbb{E}[Z_{k+1} | \mathcal{F}_{\tau_k + s}]$, for $0 \leq s < \gamma_{k+1}$. Also, let $m(t) = \sup \{k \geq 0 : \tau_k \leq t\}$ so that $\tau_{m(t)} \leq t < \tau_{m(t) + 1}$.

To bound the second term in (7), observe that

$$
\begin{array}{l} X _ {t + s} - Y _ {s} ^ {(t)} = \int_ {t} ^ {t + s} v (\bar {X} (u)) d u - \int_ {0} ^ {s} v (X _ {t + u}) d u \\ + \int_ {t} ^ {t + s} \sigma (\bar {X} (u)) d B _ {u} - \int_ {0} ^ {s} \sigma (X _ {t + u}) d B _ {u} ^ {(t)} \\ + \Delta_ {Z} (t, s), \\ \end{array}
$$

where $\Delta_Z(t,s)$ plays the role of the accumulated noise and bias from time $t$ to $t + s$, and is defined as

$$
\Delta_ {Z} (t, s) := \sum_ {i = n} ^ {k - 1} \gamma_ {i + 1} Z _ {i + 1} + (t + s - \tau_ {k}) \mathbb {E} \left[ Z _ {k + 1} \mid \mathcal {F} _ {t + s} \right] - (t - \tau_ {n}) \mathbb {E} \left[ Z _ {n + 1} \mid \mathcal {F} _ {t} \right], \tag {13}
$$

with $k = m(t + s)$ and $n = m(t)$.
We therefore have + +$$ +\begin{array}{l} \mathbb {E} \left\| X _ {t + s} - Y _ {s} ^ {(t)} \right\| ^ {2} \leq 3 \mathbb {E} \left\| \int_ {t} ^ {t + s} v (X _ {u}) - v (\overline {{X}} (u)) \mathrm {d} u \right\| ^ {2} \\ + 3 \mathbb {E} \left\| \int_ {t} ^ {t + s} \sigma \left(X _ {u}\right) - \sigma (\bar {X} (u)) \mathrm {d} B _ {u} \right\| ^ {2} + 3 \mathbb {E} \| \Delta_ {Z} (t, s) \| ^ {2} \\ \leq 3 s \int_ {t} ^ {t + s} \mathbb {E} \left\| v \left(X _ {u}\right) - v (\bar {X} (u)) \right\| ^ {2} d u \\ + 3 \mathbb {E} \int_ {t} ^ {t + s} \left\| \sigma (X _ {u}) - \sigma (\overline {{X}} (u)) \right\| _ {F} ^ {2} d u + 3 \mathbb {E} \| \Delta_ {Z} (t, s) \| ^ {2} \\ \leq 3 (s + 1) L ^ {2} \int_ {t} ^ {t + s} \mathbb {E} \| X _ {u} - \bar {X} (u) \| ^ {2} \mathrm {d} u + 3 \mathbb {E} \| \Delta_ {Z} (t, s) \| ^ {2}. \tag {14} \\ \end{array} +$$ + +For bounding the term inside the integral, we have + +$$ +\begin{array}{l} \mathbb {E} \| X _ {u} - \bar {X} (u) \| ^ {2} = \mathbb {E} \| (u - \tau_ {m (u)}) \{v (\bar {X} (u)) + Z (u) \} + \sigma (\bar {X} (u)) \left(B _ {u} - B _ {\tau_ {m (u)}}\right) \| ^ {2} \\ \leq 4 \bar {\gamma} (u) ^ {2} \left(\mathbb {E} \| v (\bar {X} (u)) \| ^ {2} + \mathbb {E} \| Z (u) \| ^ {2}\right) + 2 \bar {\gamma} (u) \mathbb {E} \operatorname {t r} \left(\sigma (\bar {X} (u)) ^ {\top} \sigma (\bar {X} (u))\right). 
\\ \end{array} +$$ + +We have used the fact that + +$$ +\begin{array}{l} \mathbb {E} \| \sigma (\bar {X} (u)) (B _ {u} - B _ {\tau_ {m (u)}}) \| ^ {2} = \mathbb {E} \left((B _ {u} - B _ {\tau_ {m (u)}}) ^ {\top} \sigma (\bar {X} (u)) ^ {\top} \sigma (\bar {X} (u)) (B _ {u} - B _ {\tau_ {m (u)}})\right) \\ = \mathbb {E} \operatorname {t r} \left(\sigma (\bar {X} (u)) ^ {\top} \sigma (\bar {X} (u)) \left(B _ {u} - B _ {\tau_ {m (u)}}\right) \left(B _ {u} - B _ {\tau_ {m (u)}}\right) ^ {\top}\right) \\ = \mathbb {E} \left[ \mathbb {E} \left[ \operatorname {t r} \left(\sigma (\bar {X} (u)) ^ {\top} \sigma (\bar {X} (u)) \left(B _ {u} - B _ {\tau_ {m (u)}}\right) \left(B _ {u} - B _ {\tau_ {m (u)}}\right) ^ {\top}\right) \mid \mathscr {F} _ {\tau_ {m (u)}} \right] \right] \\ = (u - \tau_ {m (u)}) \mathbb {E} \left[ \operatorname {t r} \left(\sigma (\bar {X} (u)) ^ {\top} \sigma (\bar {X} (u))\right) \right] \\ \end{array} +$$ + +Notice that since conditional expectation is a projection in $L^2$ , we have $\mathbb{E}\| Z(u)\|^2 \leq \mathbb{E}\|\overline{Z}(u)\|^2$ . Using this fact, along with boundedness of $\sigma(\cdot)$ by $C_\sigma$ , and Lemma 2 we get + +$$ +\begin{array}{l} \mathbb {E} \left[ \| X _ {u} - \bar {X} (u) \| ^ {2} \right] \leq 4 \bar {\gamma} (u) ^ {2} \left(\mathbb {E} \| v (\bar {X} (u)) \| ^ {2} + \mathbb {E} \| \bar {Z} (u) \| ^ {2}\right) + 2 \bar {\gamma} (u) \mathbb {E} \operatorname {t r} \left(\sigma (\bar {X} (u)) ^ {\top} \sigma (\bar {X} (u))\right) \\ \leq 4 \bar {\gamma} (u) ^ {2} \mathbb {E} \| v (\bar {X} (u)) \| ^ {2} + 8 \bar {\gamma} (u) ^ {2} \sigma^ {2} + 4 \bar {\gamma} (u) ^ {2} \mathcal {O} (\bar {\gamma} (u)) + 2 C _ {\sigma} \bar {\gamma} (u) \leq C \bar {\gamma} (u), \\ \end{array} +$$ + +for some constant $C > 0$ . 
Plugging this estimate into (14) yields

$$
\begin{array}{l} \mathbb {E} \left[ \left\| X _ {t + s} - Y _ {s} ^ {(t)} \right\| ^ {2} \right] \leq 3 (s + 1) L ^ {2} C \int_ {t} ^ {t + s} \bar {\gamma} (u) d u + 3 \mathbb {E} \| \Delta_ {Z} (t, s) \| ^ {2} \\ \leq 3(s + 1)sL^{2}C\sup_{u\in [t,t + s]}\overline{\gamma} (u) + 3\mathbb{E}\| \Delta_{Z}(t,s)\|^{2} \\ \leq 3 (T + 1) ^ {2} L ^ {2} C \sup _ {u \in [ t, t + T ]} \overline {{\gamma}} (u) + 3 \sup _ {u \in [ 0, T ]} \mathbb {E} \| \Delta_ {Z} (t, u) \| ^ {2}. \\ \end{array}
$$

Taking the supremum over $s \in [0, T]$, noticing that the right-hand side is independent of $s$ and that $\gamma_k \to 0$, and invoking Lemma 1, we obtain

$$
\begin{array}{l} A _ {t} := \sup _ {0 \leq s \leq T} \mathbb {E} \left[ \| X _ {t + s} - Y _ {s} ^ {(t)} \| ^ {2} \right] \tag {15} \\ \leq 3 (T + 1) ^ {2} L ^ {2} C \sup _ {t \leq u \leq t + T} \overline {{\gamma}} (u) + 3 \sup _ {0 \leq u \leq T} \mathbb {E} \left[ \| \Delta_ {Z} (t, u) \| ^ {2} \right] \\ \rightarrow 0 \quad \text {as } t \rightarrow \infty , \\ \end{array}
$$

showing that the Picard process gets arbitrarily close to the original interpolation as $t \to \infty$.

Let us return to the decomposition (7). By taking expectation and using (8) and (15) we obtain

$$
\begin{array}{l} \mathbb {E} \left[ \| X _ {t + s} - \Phi_ {s} ^ {(t)} \| ^ {2} \right] \leq 2 (T + 1) L ^ {2} \int_ {0} ^ {s} \mathbb {E} \left[ \| X _ {t + u} - \Phi_ {u} ^ {(t)} \| ^ {2} \right] \mathrm {d} u + 2 A _ {t} \\ \leq 2 A _ {t} \exp (s (T + 1) L ^ {2}) \\ \leq 2 A _ {t} \exp \left(\left(T + 1\right) ^ {2} L ^ {2}\right), \\ \end{array}
$$

where in the last line we have used the Gronwall lemma. Thus,

$$
\lim _ {t \to \infty} \sup _ {s \in [ 0, T ]} \mathbb {E} \left[ \| X _ {t + s} - \Phi_ {s} ^ {(t)} \| ^ {2} \right] = 0.
+$$ + +Recall that the Wasserstein distance between $X_{t+s}$ and $\Phi_s^{(t)}$ is the infimum over all possible couplings between them, having the correct marginals. As $\Phi_s^{(t)}$ has the same marginal as the Langevin diffusion started from $X_t$ at time $s$ , and the synchronous coupling of the interpolation and the Langevin flow produces a specific coupling between them, we directly get + +$$ +W _ {2} \left(X _ {t + s}, \Phi_ {s} ^ {(t)}\right) \leq \mathbb {E} \left[ \left\| X _ {t + s} - \Phi_ {s} ^ {(t)} \right\| ^ {2} \right] ^ {\frac {1}{2}}, +$$ + +which implies + +$$ +\lim_{t\to \infty}\sup_{s\in [0,T]}W_{2}(X_{t + s},\Phi_{s}^{(t)}) = 0, +$$ + +as desired. + +# C.2 Auxiliary Lemmas + +Lemma 1. Suppose Assumptions 1-3 hold. Then, for any fixed $T > 0$ we have + +$$ +\lim_{t\to \infty}\sup_{0\leq s\leq T}\mathbb{E}\| \Delta_{Z}(t,s)\|^{2} = 0. +$$ + +Proof. Define $\Delta_b$ and $\Delta_U$ the same way as in (13). By Cauchy-Schwarz we have + +$$ +\begin{array}{l} \left\| \Delta_ {b} (t, s) \right\| ^ {2} \\ \leq \left(\sum_ {i = n} ^ {k - 1} \gamma_ {i + 1} \| b _ {i + 1} \| + (t + s - \tau_ {k}) \| \mathbb {E} [ b _ {k + 1} | \mathcal {F} _ {t + s} ] \| + (t - \tau_ {n}) \| \mathbb {E} [ b _ {n + 1} | \mathcal {F} _ {t} ] \|\right) ^ {2} \\ \leq \left(2 \gamma_ {n + 1} + s\right) \left(\sum_ {i = n} ^ {k - 1} \gamma_ {i + 1} \| b _ {i + 1} \| ^ {2} + (t + s - \tau_ {k}) \| \mathbb {E} \left[ b _ {k + 1} \mid \mathcal {F} _ {t + s} \right] \| ^ {2} + (t - \tau_ {n}) \| \mathbb {E} \left[ b _ {n + 1} \mid \mathcal {F} _ {t} \right] \| ^ {2}\right), \\ \end{array} +$$ + +where the last inequality comes from $\sum_{i=n}^{k-1} \gamma_{i+1} \leq s, t+s-\tau_k \leq \gamma_{k+1}, t-\tau_n \leq \gamma_{n+1}$ , and $\gamma_{k+1} \leq \gamma_{n+1}$ . 
+ +Noticing that conditional expectation is a contraction in $L^2$ and letting $k' = m(t + T)$ , we get + +$$ +\sup _ {0 \leq s \leq T} \mathbb {E} \left[ \| \Delta_ {b} (t, s) \| ^ {2} \right] \leq (2 + T) \left(\sum_ {i = n} ^ {k ^ {\prime} - 1} \gamma_ {i + 1} \mathbb {E} \| b _ {i + 1} \| ^ {2} + \sup _ {n \leq j \leq k ^ {\prime} + 1} \gamma_ {j + 1} \mathbb {E} \| b _ {j + 1} \| ^ {2} + \gamma_ {n + 1} \mathbb {E} \| b _ {n + 1} \| ^ {2}\right) +$$ + +Now, invoking Lemma 2 yields + +$$ +\begin{array}{l} \sup _ {0 \leq s \leq T} \mathbb {E} \left[ \| \Delta_ {b} (t, s) \| ^ {2} \right] \leq C (2 + T) \left(\sum_ {i = n} ^ {k ^ {\prime} - 1} \gamma_ {i + 1} ^ {2} + \sup _ {n \leq j \leq k ^ {\prime} + 1} \gamma_ {j + 1} ^ {2} + \gamma_ {n + 1} ^ {2}\right) \\ \leq C (2 + T) \left(\sum_ {i = n} ^ {k ^ {\prime} - 1} \gamma_ {i + 1} ^ {2} + 2 \gamma_ {n + 1} ^ {2}\right) \\ \leq C(2 + T)(T + 2\gamma_{n + 1})\sup_{0\leq s\leq T}\overline{\gamma} (t + s). \\ \end{array} +$$ + +As $t\to \infty$ , the last quantity vanishes, since $\gamma_{n}\rightarrow 0$ + +For the noise we have + +$$ +\begin{array}{l} \| \Delta_ {U} (t, s) \| ^ {2} \leq 2 \left\| \sum_ {i = n} ^ {k - 1} \gamma_ {i + 1} U _ {i + 1} \right\| ^ {2} + 4 \| (t + s - \tau_ {k}) \mathbb {E} [ U _ {k + 1} | \mathcal {F} _ {t + s} ] \| ^ {2} + 4 \| (t - \tau_ {n}) \mathbb {E} [ U _ {n + 1} | \mathcal {F} _ {t} ] \| ^ {2} \\ \leq 2 \left\| \sum_ {i = n} ^ {k - 1} \gamma_ {i + 1} U _ {i + 1} \right\| ^ {2} + 4 \gamma_ {k + 1} ^ {2} \| U _ {k + 1} \| ^ {2} + 4 \gamma_ {n + 1} ^ {2} \| U _ {n + 1} \| ^ {2}. \\ \end{array} +$$ + +Taking expectations and then sup, we get + +$$ +\sup _ {0 \leq s \leq T} \mathbb {E} \left[ \| \Delta_ {U} (t, s) \| ^ {2} \right] \leq 2 \sup _ {n + 1 \leq k \leq m (t + T)} \mathbb {E} \left\| \sum_ {i = n} ^ {k - 1} \gamma_ {i + 1} U _ {i + 1} \right\| ^ {2} + 4 \gamma_ {k + 1} ^ {2} \sigma^ {2} + 4 \gamma_ {n + 1} ^ {2} \sigma^ {2}. 
+$$ + +Since $\{U_i\}$ is a martingale difference sequence, we have that $\left\{\sum_{i=n}^{k-1} \gamma_{i+1} U_{i+1}\right\}_{k > n}$ is a martingale. Thus, by the boundedness of the second moments of $U_i$ , we get + +$$ +\mathbb {E} \left\| \sum_ {i = n} ^ {k - 1} \gamma_ {i + 1} U _ {i + 1} \right\| ^ {2} = \sum_ {i = n} ^ {k - 1} \gamma_ {i + 1} ^ {2} \mathbb {E} \| U _ {i + 1} \| ^ {2} \leq \sigma^ {2} \sum_ {i = n} ^ {k - 1} \gamma_ {i + 1} ^ {2}. +$$ + +Hence, + +$$ +\lim _ {n \rightarrow \infty} \sup \left\{\mathbb {E} \| \sum_ {i = n} ^ {k - 1} \gamma_ {i + 1} U _ {i + 1} \| ^ {2}: n < k \leq m (\tau_ {n} + T) \right\} \leq \lim _ {n \rightarrow \infty} \sigma^ {2} \sum_ {i = n} ^ {\infty} \gamma_ {i + 1} ^ {2} = 0. +$$ + +Lemma 2. Let $\{x_{k}\}_{k\in \mathbb{N}}$ be the iterates of (LRM) and suppose Assumptions 1-3 hold. Then, $\mathbb{E}\| x_k\|^2 = O(1 / \gamma_{k + 1})$ . This in turn implies $\mathbb{E}\| v(x_k)\|^2 = O(1 / \gamma_{k + 1})$ and $\mathbb{E}\| b_{k + 1}\|^2 = O(\gamma_{k + 1})$ . + +Proof. Without loss of generality, suppose $v$ has a stationary point at 0. We repeatedly use the fact that $\mathbb{E}\|v(x_k)\|^2 \leq L^2\mathbb{E}\|x_k\|^2$ . Moreover, by Assumption 1 we have $\langle v(x), x \rangle \leq C_v(\|x\| + 1)$ , and $\| \sigma(x)\|_F^2 \leq C_\sigma$ . 
+ +Define $a_{k}\coloneqq \mathbb{E}\| x_{k}\|^{2}$ .We have + +$$ +\begin{array}{l} a _ {k + 1} - a _ {k} = \gamma_ {k + 1} ^ {2} \mathbb {E} \| v (x _ {k}) + Z _ {k + 1} \| ^ {2} + \gamma_ {k + 1} \mathbb {E} \| \sigma (x _ {k}) \xi_ {k + 1} \| ^ {2} + 2 \gamma_ {k + 1} \mathbb {E} \langle x _ {k}, v (x _ {k}) + Z _ {k + 1} \rangle \\ + 2 \gamma_ {k + 1} ^ {1 / 2} \mathbb {E} \langle x _ {k}, \sigma (x _ {k}) \xi_ {k + 1} \rangle + 2 \gamma_ {k + 1} ^ {3 / 2} \mathbb {E} \langle v (x _ {k}) + Z _ {k + 1}, \sigma (x _ {k}) \xi_ {k + 1} \rangle \\ \leq 2 L ^ {2} \gamma_ {k + 1} ^ {2} a _ {k} + 2 \gamma_ {k + 1} ^ {2} \mathbb {E} \| Z _ {k + 1} \| ^ {2} + \gamma_ {k + 1} C _ {\sigma} + 2 \gamma_ {k + 1} C _ {v} (\sqrt {a _ {k}} + 1) + 2 \gamma_ {k + 1} \sqrt {a _ {k}} \sqrt {\mathbb {E} \| Z _ {k + 1} \| ^ {2}} \\ + 2 \gamma_ {k + 1} ^ {3 / 2} \sqrt {C _ {\sigma}} \sqrt {\mathbb {E} \| Z _ {k + 1} \| ^ {2}} \tag {16} \\ \end{array} +$$ + +By Assumption 3, there is some $C_b > 0$ such that $\mathbb{E}\| b_{k + 1}\| ^2\leq C_b(\gamma_{k + 1}^2 a_k + \gamma_{k + 1})$ , and we have + +$$ +\mathbb {E} \| Z _ {k + 1} \| ^ {2} \leq 2 \mathbb {E} \| b _ {k + 1} \| ^ {2} + 2 \mathbb {E} \| U _ {k + 1} \| ^ {2} \leq 2 C _ {b} \left(\gamma_ {k + 1} ^ {2} a _ {k} + \gamma_ {k + 1}\right) + 2 \sigma^ {2}. \tag {17} +$$ + +Moreover, as $\sqrt{p + q} \leq \sqrt{p} + \sqrt{q}$ , we have + +$$ +\sqrt {\mathbb {E} \left\| Z _ {k + 1} \right\| ^ {2}} \leq \sqrt {2 C _ {b}} \left(\gamma_ {k + 1} \sqrt {a _ {k}} + \sqrt {\gamma_ {k + 1}}\right) + \sqrt {2} \sigma . 
\tag {18} +$$ + +Plugging the bounds from (17) and (18) into (16) gives + +$$ +\begin{array}{l} a _ {k + 1} - a _ {k} \leq 2 L ^ {2} \gamma_ {k + 1} ^ {2} a _ {k} + 4 C _ {b} \gamma_ {k + 1} ^ {4} a _ {k} + 4 C _ {b} \gamma_ {k + 1} ^ {3} + 4 \gamma_ {k + 1} ^ {2} \sigma^ {2} \\ + \gamma_ {k + 1} C _ {\sigma} + 2 \gamma_ {k + 1} C _ {v} \sqrt {a _ {k}} + 2 \gamma_ {k + 1} C _ {v} \\ + 2 \sqrt {2 C _ {b}} \gamma_ {k + 1} ^ {2} a _ {k} + 2 \sqrt {2 C _ {b}} \gamma_ {k + 1} ^ {3 / 2} \sqrt {a _ {k}} + 2 \sqrt {2} \sigma \gamma_ {k + 1} \sqrt {a _ {k}} \tag {19} \\ + 2 \sqrt {2 C _ {b} C _ {\sigma}} \gamma_ {k + 1} ^ {5 / 2} \sqrt {a _ {k}} + 2 \sqrt {2 C _ {b} C _ {\sigma}} \gamma_ {k + 1} ^ {2} + 2 \gamma_ {k + 1} ^ {3 / 2} \sqrt {2 C _ {\sigma}} \sigma \\ =: P \gamma_ {k + 1} ^ {2} a _ {k} + Q \gamma_ {k + 1} \sqrt {a _ {k}} + R \gamma_ {k + 1}, \\ \end{array} +$$ + +where + +$$ +P = 2 L ^ {2} + 4 C _ {b} \gamma_ {k + 1} ^ {2} + 2 \sqrt {2 C _ {b}} +$$ + +$$ +Q = 2 C _ {v} + 2 \sqrt {2 C _ {b}} \sqrt {\gamma_ {k + 1}} + 2 \sqrt {2} \sigma + 2 \sqrt {2 C _ {b}} \gamma_ {k + 1} + 2 \sqrt {2 C _ {b} C _ {\sigma}} \gamma_ {k + 1} ^ {3 / 2} +$$ + +$$ +R = 4 C _ {b} \gamma_ {k + 1} ^ {2} + 4 \gamma_ {k + 1} \sigma^ {2} + C _ {\sigma} + 2 C _ {v} + 2 \sqrt {2 C _ {b} C _ {\sigma}} \gamma_ {k + 1} + 2 \gamma_ {k + 1} ^ {1 / 2} \sqrt {2 C _ {\sigma}} \sigma . +$$ + +The exact values of $P$ , $Q$ , and $R$ are irrelevant, and we only need upper bounds for them. Assuming that $\gamma_{k+1} < 1$ for all $k$ , we replace the three quantities by + +$$ +P = 2 L ^ {2} + 4 C _ {b} + 2 \sqrt {2 C _ {b}} +$$ + +$$ +Q = 2 C _ {v} + 2 \sqrt {2 C _ {b}} + 2 \sqrt {2} \sigma + 2 \sqrt {2 C _ {b}} + 2 \sqrt {2 C _ {b} C _ {\sigma}} \tag {20} +$$ + +$$ +R = 4 C _ {b} + 4 \sigma^ {2} + C _ {\sigma} + 2 C _ {v} + 2 \sqrt {2 C _ {b} C _ {\sigma}} + 2 \sqrt {2 C _ {\sigma}} \sigma . +$$ + +Now, define $h_k = \gamma_{k+1}^2 a_k$ . 
The recursion (19) in terms of $h_k$ becomes

$$
h _ {k + 1} \leq h _ {k} \left(1 + P \gamma_ {k + 1} ^ {2}\right) \frac {\gamma_ {k + 2} ^ {2}}{\gamma_ {k + 1} ^ {2}} + \sqrt {h _ {k}} Q \gamma_ {k + 2} ^ {2} + R \gamma_ {k + 1} \gamma_ {k + 2} ^ {2}.
$$

We now prove by induction that there exists some $M > 0$ such that $h_k \leq M\gamma_{k+1}$. Suppose this holds for $k$; we prove it for $k+1$. Using the induction hypothesis we get

$$
\begin{array}{l} h _ {k + 1} \leq M \gamma_ {k + 1} (1 + P \gamma_ {k + 1} ^ {2}) \frac {\gamma_ {k + 2} ^ {2}}{\gamma_ {k + 1} ^ {2}} + \sqrt {M \gamma_ {k + 1}} Q \gamma_ {k + 2} ^ {2} + R \gamma_ {k + 1} \gamma_ {k + 2} ^ {2} \\ = M (1 + P \gamma_ {k + 1} ^ {2}) \frac {\gamma_ {k + 2} ^ {2}}{\gamma_ {k + 1}} + \sqrt {M} Q \sqrt {\gamma_ {k + 1}} \gamma_ {k + 2} ^ {2} + R \gamma_ {k + 1} \gamma_ {k + 2} ^ {2} \\ \end{array}
$$

For the last expression to be at most $M\gamma_{k + 2}$, we have to verify

$$
M \left(1 + P \gamma_ {k + 1} ^ {2}\right) \frac {\gamma_ {k + 2}}{\gamma_ {k + 1}} + \sqrt {M} Q \sqrt {\gamma_ {k + 1}} \gamma_ {k + 2} + R \gamma_ {k + 1} \gamma_ {k + 2} \leq M
$$

or equivalently,

$$
M \left(\frac {\gamma_ {k + 2}}{\gamma_ {k + 1}} + P \gamma_ {k + 1} \gamma_ {k + 2} - 1\right) + \sqrt {M} Q \sqrt {\gamma_ {k + 1}} \gamma_ {k + 2} + R \gamma_ {k + 1} \gamma_ {k + 2} \leq 0.
$$

This is a quadratic inequality in $\sqrt{M}$; for it to hold, we prove that the leading coefficient is negative and that the larger root is bounded above by some constant not depending on $k$.

Negativity of the leading coefficient is equivalent to

$$
\frac {\gamma_ {k + 2}}{\gamma_ {k + 1}} + P \gamma_ {k + 1} \gamma_ {k + 2} < 1,
$$

which is implied by our assumption on the step size.
The larger root of the equation is

$$
\begin{array}{l} \frac {\left(- 4 \gamma_ {k + 1} ^ {2} \gamma_ {k + 2} ^ {2} P R + \gamma_ {k + 1} \gamma_ {k + 2} \left(\gamma_ {k + 2} Q ^ {2} + 4 R\right) - 4 R \gamma_ {k + 2} ^ {2}\right) ^ {1 / 2} + \sqrt {\gamma_ {k + 1}} \gamma_ {k + 2} Q}{2 \left(1 - \gamma_ {k + 1} \gamma_ {k + 2} P - \gamma_ {k + 2} / \gamma_ {k + 1}\right)} \\ < \frac {\sqrt {\gamma_ {k + 1}} \gamma_ {k + 2} Q + \sqrt {R \gamma_ {k + 1} \gamma_ {k + 2}}}{(1 - \gamma_ {k + 1} \gamma_ {k + 2} P - \gamma_ {k + 2} / \gamma_ {k + 1})} \\ \leq \frac {\sqrt {\gamma_ {k + 1}} \gamma_ {k + 1} Q + \sqrt {R} \gamma_ {k + 1}}{\left(1 - \gamma_ {k + 1} \gamma_ {k + 2} P - \gamma_ {k + 2} / \gamma_ {k + 1}\right)}. \\ \end{array}
$$

By our assumption on the step size that

$$
\frac {\gamma_ {k + 2}}{\gamma_ {k + 1}} + P \gamma_ {k + 1} \gamma_ {k + 2} < 1 - \gamma_ {k + 1},
$$

we get that the larger root is smaller than

$$
\frac {\sqrt {\gamma_ {k + 1}} \gamma_ {k + 1} Q + \sqrt {R} \gamma_ {k + 1}}{\gamma_ {k + 1}} = \sqrt {\gamma_ {k + 1}} Q + \sqrt {R} < Q + \sqrt {R}.
$$

Letting $M \coloneqq Q + \sqrt{R}$ gives the desired result.

The remaining claims of the lemma follow from the Lipschitzness of $v$, Assumption 3, and the first result of the lemma.

Lemma 4. For a vector-valued function $g\in L^{2}(\mathbb{R};\mathbb{R}^{d})$, one has

$$
\left\| \int_ {0} ^ {s} g (u) d u \right\| ^ {2} \leq \left(\int_ {0} ^ {s} \| g (u) \| d u\right) ^ {2} \leq s \int_ {0} ^ {s} \| g (u) \| ^ {2} d u.
$$

# D Proofs for Section 5

# D.1 Proof of Theorem 3

For brevity, let us write $\mathcal{F}_k$ instead of $\mathcal{F}_{\tau_k}$.
Opening up $\| x_{k + 1}\| ^2 = \| x_k + \gamma_{k + 1}\{v(x_k) + Z_{k + 1}\} +$ $\sqrt{\gamma_{k + 1}}\sigma (x_k)\xi_{k + 1}\| ^2$ and ignoring every term that is zero-mean under $\mathbb{E}[\cdot |\mathcal{F}_k]$ , we get + +$$ +\begin{array}{l} \mathbb {E} \left[ \| x _ {k + 1} \| ^ {2} \mid \mathcal {F} _ {k} \right] = \mathbb {E} \left[ \| x _ {k} \| ^ {2} + 2 \gamma_ {k + 1} \langle x _ {k}, v (x _ {k}) + Z _ {k + 1} \rangle \right. \\ + \gamma_ {k + 1} ^ {2} \| v (x _ {k}) + Z _ {k + 1} \| ^ {2} + \gamma_ {k + 1} \| \sigma (x _ {k}) \xi_ {k + 1} \| ^ {2} + 2 \gamma_ {k + 1} ^ {\frac {3}{2}} \langle \sigma (x _ {k}) \xi_ {k + 1}, b _ {k + 1} \rangle \left| \mathcal {F} _ {k} \right] \\ \leq \left\| x _ {k} \right\| ^ {2} + 2 \gamma_ {k + 1} \left(\left\langle x _ {k}, v (x _ {k}) \right\rangle + C _ {\sigma} / 2\right) + 2 \gamma_ {k + 1} ^ {2} \left\| v (x _ {k}) \right\| ^ {2} \\ + \mathbb {E} \left[ 2 \gamma_ {k + 1} ^ {2} \| Z _ {k + 1} \| ^ {2} + 2 \gamma_ {k + 1} \langle x _ {k}, Z _ {k + 1} \rangle + 2 \gamma_ {k + 1} ^ {\frac {3}{2}} \langle \sigma (x _ {k}) \xi_ {k + 1}, b _ {k + 1} \rangle \mid \mathcal {F} _ {k} \right] \\ \leq \left\| x _ {k} \right\| ^ {2} + 2 \gamma_ {k + 1} \left(\left\langle x _ {k}, v \left(x _ {k}\right) \right\rangle + C _ {\sigma} / 2 + \gamma_ {k + 1} ^ {\frac {1}{2}} C _ {\sigma} / 4\right) + 2 \gamma_ {k + 1} ^ {2} \| v \left(x _ {k}\right) \| ^ {2} \tag {21} \\ + \mathbb {E} \left[ 2 \gamma_ {k + 1} ^ {2} \| Z _ {k + 1} \| ^ {2} | \mathcal {F} _ {k} \right] + \gamma_ {k + 1} ^ {\frac {3}{2}} \mathbb {E} \left[ \| b _ {k + 1} \| ^ {2} | \mathcal {F} _ {k} \right] + 2 \mathbb {E} \left[ \gamma_ {k + 1} \langle x _ {k}, b _ {k + 1} \rangle | \mathcal {F} _ {k} \right]. 
\\ \end{array}
$$

Recalling (5) in Assumption 3, we have for some $C > 0$

$$
\mathbb {E} \| Z _ {k + 1} \| ^ {2} \leq 2 \sigma^ {2} + 2 C \left(\gamma_ {k + 1} ^ {2} \mathbb {E} \| v (x _ {k}) \| ^ {2} + \gamma_ {k + 1}\right) \tag {22}
$$

Without loss of generality, assume $\gamma_{k} \leq 1$ and $\mathbb{E}\|x_k\|^2 \geq 1$ (so that $\left(\mathbb{E}\|x_k\|^2\right)^2 \geq \mathbb{E}\|x_k\|^2$) for all $k$. Then, $\|v(x_k)\|^2 \leq L^2\|x_k\|^2$, together with Assumption 4 and the Cauchy-Schwarz inequality on the last term of (21), implies

$$
\begin{array}{l} \mathbb {E} \| x _ {k + 1} \| ^ {2} \leq \mathbb {E} \| x _ {k} \| ^ {2} - 2 \alpha \gamma_ {k + 1} \mathbb {E} \| x _ {k} \| ^ {2} + 2 \gamma_ {k + 1} \left(\beta + C _ {\sigma} + \frac {1}{2} \gamma_ {k + 1} ^ {\frac {1}{2}} C _ {\sigma}\right) + 2 L ^ {2} \gamma_ {k + 1} ^ {2} \mathbb {E} \| x _ {k} \| ^ {2} \\ + 2 \gamma_ {k + 1} ^ {2} \left[ 2 \sigma^ {2} + 2 C \left(L ^ {2} \gamma_ {k + 1} ^ {2} \mathbb {E} \| x _ {k} \| ^ {2} + \gamma_ {k + 1}\right) \right] \\ + \gamma_ {k + 1} ^ {\frac {3}{2}} C \left(L ^ {2} \gamma_ {k + 1} ^ {2} \mathbb {E} \| x _ {k} \| ^ {2} + \gamma_ {k + 1}\right) \\ + 2 \gamma_ {k + 1} \sqrt {C} \sqrt {L ^ {2} \gamma_ {k + 1} ^ {2} \left(\mathbb {E} \| x _ {k} \| ^ {2}\right) ^ {2} + \gamma_ {k + 1} \mathbb {E} \| x _ {k} \| ^ {2}} \\ \leq \mathbb {E} \| x _ {k} \| ^ {2} \left(1 - C _ {1} \gamma_ {k + 1} + C _ {2} \gamma_ {k + 1} ^ {\frac {3}{2}}\right) + C _ {3} \gamma_ {k + 1} \\ \end{array}
$$

for some constants $C_1, C_2, C_3$ depending on $L, C, \sigma, \alpha, \beta$, and $d$. Since $\gamma_k \to 0$, there exist $\tilde{\alpha}, \tilde{\beta} > 0$ and $k_0$ such that, for all $k \geq k_0$,

$$
\mathbb {E} \| x _ {k + 1} \| ^ {2} \leq \mathbb {E} \| x _ {k} \| ^ {2} (1 - \tilde {\alpha} \gamma_ {k + 1}) + \tilde {\beta} \gamma_ {k + 1}, \quad 1 - \tilde {\alpha} \gamma_ {k + 1} > 0.
+$$ + +A simple induction yields + +$$ +\sup _ {k} \mathbb {E} \| x _ {k} \| ^ {2} \leq \max \left\{\frac {\tilde {\beta}}{\tilde {\alpha}}, \mathbb {E} \| x _ {k _ {0}} \| ^ {2} \right\} +$$ + +which concludes the proof. + +# D.2 Proof of Theorem 4 for Constant Diffusion + +Before proceeding, we need a lemma which can be distilled from [20, Proposition 8]: + +Lemma 5. Suppose $\nabla f$ is $L$ -Lipschitz. Fix $x \in \mathbb{R}^d$ and $\gamma > 0$ , let $\tilde{x}^{+} = x - \gamma \nabla f(x) + \sqrt{2\gamma}\xi$ . Then + +$$ +\mathbb {E} \left[ \exp \left(\frac {1}{2} \langle \nabla f (x), \tilde {x} ^ {+} - x \rangle + \frac {L}{4} \| \tilde {x} ^ {+} - x \| ^ {2}\right) \right] \leq (1 - \gamma L) ^ {- d / 2} e ^ {- \frac {\gamma}{4} \| \nabla f (x) \| ^ {2}}. \tag {23} +$$ + +Let $\tilde{x}_{k + 1} := x_k - \gamma_{k + 1}\nabla f(x_k) + \sqrt{2\gamma_{k + 1}}\xi_{k + 1}$ so that $x_{k + 1} - x_k = \tilde{x}_{k + 1} - x_k - \gamma_{k + 1}(U_{k + 1} + b_{k + 1})$ . Conditioned on $x_k, U_{k + 1}, U_{k + 1}', \xi_{k + 1}'$ , and using the $L$ -Lipschitzness of $\nabla f$ , we get + +$$ +\begin{array}{l} e ^ {- \frac {1}{2} f (x _ {k})} \mathbb {E} e ^ {\frac {1}{2} f (x _ {k + 1})} \\ \leq \mathbb {E} \exp \left(\frac {1}{2} \langle \nabla f (x _ {k}), x _ {k + 1} - x _ {k} \rangle + \frac {L}{4} \| x _ {k + 1} - x _ {k} \| ^ {2}\right) (24) \\ \leq \mathbb {E} \exp \left\{\frac {1}{2} \langle \nabla f (x _ {k}), \tilde {x} _ {k + 1} - x _ {k} \rangle - \frac {1}{2} \langle \nabla f (x _ {k}), \gamma_ {k + 1} U _ {k + 1} \rangle \right. (25) \\ \left. - \frac {1}{2} \langle \nabla f (x _ {k}), \gamma_ {k + 1} b _ {k + 1} \rangle + \frac {L}{2} \| \tilde {x} _ {k + 1} - x _ {k} \| ^ {2} + L \gamma_ {k + 1} ^ {2} \| U _ {k + 1} \| ^ {2} + L \gamma_ {k + 1} ^ {2} \| b _ {k + 1} \| ^ {2} \right\}. (26) \\ \end{array} +$$ + +Let $\delta \in (0,1)$ . 
Since

$$
\begin{array}{l} - \frac {1}{2} \langle \nabla f (x _ {k}), \gamma_ {k + 1} U _ {k + 1} \rangle \leq \gamma_ {k + 1} ^ {2 - \delta} \| \nabla f (x _ {k}) \| ^ {2} + \gamma_ {k + 1} ^ {\delta} \| U _ {k + 1} \| ^ {2}, \\ - \frac {1}{2} \langle \nabla f (x _ {k}), \gamma_ {k + 1} b _ {k + 1} \rangle \leq \gamma_ {k + 1} ^ {2} \| \nabla f (x _ {k}) \| ^ {2} + \| b _ {k + 1} \| ^ {2}, \\ \end{array}
$$

we have

$$
\begin{array}{l} e ^ {- \frac {1}{2} f (x _ {k})} \mathbb {E} e ^ {\frac {1}{2} f (x _ {k + 1})} (27) \\ \leq \mathbb {E} \exp \left\{\frac {1}{2} \langle \nabla f (x _ {k}), \tilde {x} _ {k + 1} - x _ {k} \rangle + \frac {L}{2} \| \tilde {x} _ {k + 1} - x _ {k} \| ^ {2} \right. (28) \\ \left. + \left(\gamma_ {k + 1} ^ {2 - \delta} + \gamma_ {k + 1} ^ {2}\right) \| \nabla f (x _ {k}) \| ^ {2} + \left(L \gamma_ {k + 1} ^ {2} + \gamma_ {k + 1} ^ {\delta}\right) \| U _ {k + 1} \| ^ {2} + \left(L \gamma_ {k + 1} ^ {2} + 1\right) \| b _ {k + 1} \| ^ {2} \right\}. (29) \\ \end{array}
$$

Invoking (11) and denoting $c' \triangleq (L\gamma_{k+1}^2 + 1) \cdot c$, we get

$$
e ^ {- \frac {1}{2} f (x _ {k})} \mathbb {E} e ^ {\frac {1}{2} f (x _ {k + 1})} \leq e ^ {A _ {k}} \cdot \mathbb {E} \exp \left\{\frac {1}{2} \langle \nabla f (x _ {k}), \tilde {x} _ {k + 1} - x _ {k} \rangle + \frac {L}{2} \| \tilde {x} _ {k + 1} - x _ {k} \| ^ {2} + c ^ {\prime} \cdot \gamma_ {k + 1} \| \xi_ {k + 1} \| ^ {2} \right\}, \tag {30}
$$

where

$$
\begin{array}{l} A _ {k} \triangleq \left(\gamma_ {k + 1} ^ {2 - \delta} + \gamma_ {k + 1} ^ {2} + c ^ {\prime} \gamma_ {k + 1} ^ {2}\right) \| \nabla f (x _ {k}) \| ^ {2} \\ + \left(L \gamma_ {k + 1} ^ {2} + \gamma_ {k + 1} ^ {\delta}\right) \| U _ {k + 1} \| ^ {2} \tag {31} \\ + c ^ {\prime} \left(\gamma_ {k + 1} ^ {2} \| U _ {k + 1} ^ {\prime} \| ^ {2} + \gamma_ {k + 1} \| \xi_ {k + 1} ^ {\prime} \| ^ {2}\right).
\\ \end{array}
$$

Recalling that $\sqrt{2\gamma_{k + 1}}\xi_{k + 1} = \tilde{x}_{k + 1} - x_k + \gamma_{k + 1}\nabla f(x_k)$, we have $\gamma_{k + 1}\| \xi_{k + 1}\| ^2\leq \| \tilde{x}_{k + 1} - x_k\| ^2 + \gamma_{k + 1}^2\| \nabla f(x_k)\| ^2$, and thus

$$
e ^ {- \frac {1}{2} f (x _ {k})} \mathbb {E} e ^ {\frac {1}{2} f (x _ {k + 1})} \leq e ^ {A _ {k} ^ {\prime}} \cdot \mathbb {E} \exp \left\{\frac {1}{2} \langle \nabla f (x _ {k}), \tilde {x} _ {k + 1} - x _ {k} \rangle + \left(\frac {L}{2} + c ^ {\prime}\right) \| \tilde {x} _ {k + 1} - x _ {k} \| ^ {2} \right\}, \tag {32}
$$

where $A_k' = A_k + c'\gamma_{k+1}^2 \|\nabla f(x_k)\|^2$. Lemma 5 then implies

$$
e ^ {- \frac {1}{2} f (x _ {k})} \mathbb {E} e ^ {\frac {1}{2} f (x _ {k + 1})} \leq e ^ {A _ {k} ^ {\prime \prime}} \cdot \left(1 - \gamma_ {k + 1} L ^ {\prime}\right) ^ {- \frac {d}{2}} \tag {33}
$$

where $A_k^{\prime \prime} = A_k^{\prime} - \frac{\gamma_{k + 1}}{4}\| \nabla f(x_k)\|^2$.

We now take the expectation over $U_{k+1}, U_{k+1}', \xi_{k+1}'$ (in other words, we are now only conditioning on $x_{k}$). Set $\epsilon \triangleq (1 - \gamma_{k+1}L')^{-\frac{1}{2}} - 1 > 0$. Since $U_{k+1}, U_{k+1}', \xi_{k+1}'$ are sub-Gaussian and since $\gamma_{k} \to 0$, for $k$ sufficiently large we have

$$
\begin{array}{l} \mathbb {E} e ^ {A _ {k} ^ {\prime \prime}} \leq (1 + \epsilon) \cdot \exp \left[ \left(- \frac {\gamma_ {k + 1}}{4} + \gamma_ {k + 1} ^ {2 - \delta} + \gamma_ {k + 1} ^ {2} + c ^ {\prime} \gamma_ {k + 1} ^ {2} + c ^ {\prime} \gamma_ {k + 1} ^ {2}\right) \| \nabla f (x _ {k}) \| ^ {2} \right] (34) \\ \leq (1 + \epsilon) \cdot e ^ {- \frac {\gamma_ {k + 1}}{8} \| \nabla f (x _ {k}) \| ^ {2}}.
(35) \\ \end{array}
$$

To summarize, we have shown that, conditioned on $x_{k}$,

$$
e ^ {- \frac {1}{2} f (x _ {k})} \mathbb {E} e ^ {\frac {1}{2} f (x _ {k + 1})} \leq \left(1 - \gamma_ {k + 1} L ^ {\prime}\right) ^ {- \frac {d + 1}{2}} e ^ {- \frac {\gamma_ {k + 1}}{8} \| \nabla f (x _ {k}) \| ^ {2}}. \tag {36}
$$

A simple induction à la [20, Lemma 1 & Proposition 8] then concludes the proof.

# D.3 Proof of Theorem 4 for Mirror Langevin

Here, we give the proof of Theorem 4 for the case of Example 4 and without noise. The proof for the noisy case is the same as in Appendix D.2.

Define

$$
x ^ {+} = x - \gamma \nabla f \circ \nabla \phi^ {*} (x) + \sqrt {2 \gamma} (\nabla^ {2} \phi^ {*} (x) ^ {- 1}) ^ {1 / 2} \xi ,
$$

where $\xi$ is a standard Gaussian random variable. Let $U(x) = f(\nabla \phi^{*}(x))$. For a fixed $x$, we have

$$
\mathbb {E} e ^ {\frac {1}{2} U (x ^ {+}) - \frac {1}{2} U (x)} = \frac {1}{(2 \pi) ^ {d / 2}} \int \exp \left(\frac {1}{2} U (x ^ {+}) - \frac {1}{2} U (x) - \frac {\| \xi \| ^ {2}}{2}\right) d \xi
$$

Notice that we have

$$
\xi = \frac {1}{\sqrt {2 \gamma}} \left(\nabla^ {2} \phi^ {*} (x)\right) ^ {1 / 2} \left(x ^ {+} - x + \gamma \nabla f \circ \nabla \phi^ {*} (x)\right)
$$

which implies

$$
d \xi = (\sqrt {2 \gamma}) ^ {- d} \sqrt {\det \nabla^ {2} \phi^ {*} (x)} d x ^ {+}
$$

Thus, after the change of variables from $\xi$ to $x^{+}$, the integral becomes

$$
\frac {1}{C} \int \exp \left(\frac {1}{2} U (x ^ {+}) - \frac {1}{2} U (x) - \frac {1}{4 \gamma} \| (\nabla^ {2} \phi^ {*} (x)) ^ {1 / 2} (x ^ {+} - x + \gamma \nabla f \circ \nabla \phi^ {*} (x)) \| ^ {2}\right) d x ^ {+} \tag {37}
$$

with $C = (4\pi \gamma)^{d / 2}\sqrt{\operatorname*{det}\nabla^2\phi^*(x)^{-1}}$.
Now we use the smoothness of $f$:

$$
\begin{array}{l} U \left(x ^ {+}\right) - U (x) = f \left(\nabla \phi^ {*} \left(x ^ {+}\right)\right) - f \left(\nabla \phi^ {*} (x)\right) \\ \leq \langle \nabla^ {2} \phi^ {*} (x) \nabla f (\nabla \phi^ {*} (x)), x ^ {+} - x \rangle + \frac {L}{2} \| x ^ {+} - x \| ^ {2} \\ \end{array}
$$

On the other hand, we have

$$
\begin{array}{l} \left\| \left(\nabla^ {2} \phi^ {*} (x)\right) ^ {1 / 2} \left(x ^ {+} - x + \gamma \nabla f \circ \nabla \phi^ {*} (x)\right) \right\| ^ {2} \\ = \| \left(\nabla^ {2} \phi^ {*} (x)\right) ^ {1 / 2} (x ^ {+} - x) \| ^ {2} + \gamma^ {2} \| \left(\nabla^ {2} \phi^ {*} (x)\right) ^ {1 / 2} \nabla f (\nabla \phi^ {*} (x)) \| ^ {2} \\ + 2 \gamma \langle \nabla^ {2} \phi^ {*} (x) \nabla f (\nabla \phi^ {*} (x)), x ^ {+} - x \rangle \\ \end{array}
$$

Notice that in (37), the inner-product terms $\langle \nabla^2\phi^*(x)\nabla f(\nabla \phi^*(x)), x^+ - x\rangle$ cancel out, and what we are left with is

$$
\begin{array}{l} \mathbb {E} e ^ {\frac {1}{2} U (x ^ {+}) - \frac {1}{2} U (x)} \\ \leq \frac {1}{C} \int \exp \left(\frac {L}{4} \| x ^ {+} - x \| ^ {2} - \frac {1}{4 \gamma} \| (\nabla^ {2} \phi^ {*} (x)) ^ {1 / 2} (x ^ {+} - x) \| ^ {2} - \frac {\gamma}{4} \| (\nabla^ {2} \phi^ {*} (x)) ^ {1 / 2} \nabla f (\nabla \phi^ {*} (x)) \| ^ {2}\right) d x ^ {+} \\ \end{array}
$$

As, by our assumption, $\nabla^2\phi^*$ is bounded from above and below, we get the exact form as in Lemma 5. The rest of the proof is the same as in Appendix D.2.

# D.4 Proof of Proposition 1

In this section, we prove that Examples 1-6 satisfy our bias conditions, which, as we have seen in Section 5, implies Proposition 1. For brevity, we write $\mathcal{F}_k$ for $\mathcal{F}_{\tau_k}$.

$\S$ Proof for Example 1.
For the randomized mid-point method, by replacing $\widetilde{\nabla} f(x_k)$ and $\widetilde{\nabla} f(x_{k + 1 / 2})$ with $\nabla f(x_{k}) + U_{k + 1}^{\prime}$ and $\nabla f(x_{k + 1 / 2}) + U_{k + 1}$ respectively, we have

$$
\begin{array}{l} x _ {k + 1 / 2} = x _ {k} - \gamma_ {k + 1} \alpha_ {k + 1} \left\{\nabla f (x _ {k}) + U _ {k + 1} ^ {\prime} \right\} + \sqrt {2 \gamma_ {k + 1} \alpha_ {k + 1}} \xi_ {k + 1} ^ {\prime}, \\ x _ {k + 1} = x _ {k} - \gamma_ {k + 1} \left\{\nabla f \left(x _ {k + 1 / 2}\right) + U _ {k + 1} \right\} + \sqrt {2 \gamma_ {k + 1}} \xi_ {k + 1}, \\ \end{array}
$$

where $\{\alpha_{k}\}$ are i.i.d. and uniformly distributed in $[0, 1]$, $\{U_k\}$ and $\{U_k^{\prime}\}$ are the noises in evaluating $\nabla f$ at the corresponding points, and $\xi_{k},\xi_{k}^{\prime}$ are independent standard Gaussians.

Notice that the Lipschitzness of $\nabla f$, together with the fact that $\alpha_{k} \leq 1$, implies that the bias term $b_{k+1} := \nabla f(x_{k+1/2}) - \nabla f(x_{k})$ satisfies

$$
\begin{array}{l} \mathbb {E} \big [ \| b _ {k + 1} \| ^ {2} | \mathcal {F} _ {k} \big ] \leq L ^ {2} \mathbb {E} \big [ \| x _ {k + 1 / 2} - x _ {k} \| ^ {2} | \mathcal {F} _ {k} \big ] \\ \leq L ^ {2} \left(\gamma_ {k + 1} ^ {2} \mathbb {E} \left[ \| \nabla f (x _ {k}) + U _ {k + 1} ^ {\prime} \| ^ {2} \mid \mathcal {F} _ {k} \right] + 2 \gamma_ {k + 1} d\right) \\ \leq 2 L ^ {2} \gamma_ {k + 1} ^ {2} \| \nabla f (x _ {k}) \| ^ {2} + 2 L ^ {2} \gamma_ {k + 1} ^ {2} \sigma^ {2} + 2 L ^ {2} d \gamma_ {k + 1} \\ = O \left(\gamma_ {k + 1} ^ {2} \| \nabla f (x _ {k}) \| ^ {2} + \gamma_ {k + 1}\right). \\ \end{array}
$$

$\S$ Proof for Example 2.
Recall that the new algorithm, the Optimistic Randomized Mid-Point Method, has the iterates

$$
x _ {k + 1 / 2} = x _ {k} - \gamma_ {k + 1} \alpha_ {k + 1} \widetilde {\nabla} f \left(x _ {k - \frac {1}{2}}\right) + \sqrt {2 \gamma_ {k + 1} \alpha_ {k + 1}} \xi_ {k + 1} ^ {\prime},
$$

$$
x _ {k + 1} = x _ {k} - \gamma_ {k + 1} \widetilde {\nabla} f (x _ {k + 1 / 2}) + \sqrt {2 \gamma_ {k + 1}} \xi_ {k + 1},
$$

where $\{\alpha_k\}$, $\xi_k, \xi_k'$, and $\widetilde{\nabla} f$ are the same as in (RMM), and the noise and bias are $U_{k+1} := \widetilde{\nabla} f(x_{k+1/2}) - \nabla f(x_{k+1/2})$ and $b_{k+1} := \nabla f(x_{k+1/2}) - \nabla f(x_k)$. We have

$$
\begin{array}{l} \mathbb {E} \left[ \left\| b _ {k + 1} \right\| ^ {2} \mid \mathcal {F} _ {k} \right] = \mathbb {E} \left[ \left\| \nabla f \left(x _ {k + 1 / 2}\right) - \nabla f \left(x _ {k}\right) \right\| ^ {2} \mid \mathcal {F} _ {k} \right] \\ \leq L ^ {2} \mathbb {E} \left[ \left\| x _ {k + 1 / 2} - x _ {k} \right\| ^ {2} \mid \mathcal {F} _ {k} \right] \\ = L ^ {2} \mathbb {E} \big [ \| - \gamma_ {k + 1} \alpha_ {k + 1} \widetilde {\nabla} f (x _ {k - \frac {1}{2}}) + \sqrt {2 \gamma_ {k + 1} \alpha_ {k + 1}} \xi_ {k + 1} ^ {\prime} \| ^ {2} \mid \mathcal {F} _ {k} \big ] \\ \leq 2 L ^ {2} \gamma_ {k + 1} ^ {2} \mathbb {E} [ \| \nabla f (x _ {k - \frac {1}{2}}) \| ^ {2} | \mathcal {F} _ {k} ] + 2 L ^ {2} \gamma_ {k + 1} ^ {2} \sigma^ {2} + 4 L ^ {2} d \gamma_ {k + 1}. \\ \end{array}
$$

Similar to the proof for Example 6, notice that $\| \nabla f(x_{k - \frac{1}{2}})\| ^2\leq 2\| \nabla f(x_{k - \frac{1}{2}}) - \nabla f(x_k)\| ^2 + 2\| \nabla f(x_{k})\|^{2}$.
As $\gamma_{k}\to 0$, one can assume that $2L^{2}\gamma_{k + 1}^{2} < \frac{1}{2}$, and we get

$$
\mathbb {E} \big [ \| b _ {k + 1} \| ^ {2} | \mathcal {F} _ {k} \big ] \leq 4 L ^ {2} \gamma_ {k + 1} ^ {2} \| \nabla f (x _ {k}) \| ^ {2} + 4 L ^ {2} \gamma_ {k + 1} ^ {2} \sigma^ {2} + 8 L ^ {2} d \gamma_ {k + 1} = O (\gamma_ {k + 1} ^ {2} \| \nabla f (x _ {k}) \| ^ {2} + \gamma_ {k + 1}),
$$

as desired.

$\S$ Proof for Example 3. The iterates of the stochastic Runge-Kutta Langevin algorithm are as follows:

$$
h _ {1} = x _ {k} + \sqrt {2 \gamma_ {k + 1}} \left[ (1 / 2 + 1 / \sqrt {6}) \xi_ {k + 1} + \xi_ {k + 1} ^ {\prime} / \sqrt {12} \right]
$$

$$
h _ {2} = x _ {k} - \gamma_ {k + 1} \left\{\nabla f (x _ {k}) + U _ {k + 1} ^ {\prime} \right\} + \sqrt {2 \gamma_ {k + 1}} \left[ (1 / 2 - 1 / \sqrt {6}) \xi_ {k + 1} + \xi_ {k + 1} ^ {\prime} / \sqrt {12} \right]
$$

$$
x _ {k + 1} = x _ {k} - \frac {\gamma_ {k + 1}}{2} (\nabla f (h _ {1}) + \nabla f (h _ {2})) + \gamma_ {k + 1} U _ {k + 1} + \sqrt {2 \gamma_ {k + 1}} \xi_ {k + 1},
$$

where $\xi_{k + 1}$ and $\xi_{k + 1}^{\prime}$ are independent standard Gaussian random variables independent of $x_{k}$, and $U_{k + 1}$ and $U_{k + 1}^{\prime}$ are the noises in the evaluation of $\nabla f$.

Observe that

$$
b _ {k + 1} = \frac {1}{2} (\nabla f (h _ {1}) - \nabla f (x _ {k})) + \frac {1}{2} (\nabla f (h _ {2}) - \nabla f (x _ {k})).
$$

We have

$$
\mathbb {E} \big [ \| \nabla f (h _ {1}) - \nabla f (x _ {k}) \| ^ {2} | \mathcal {F} _ {k} \big ] \leq 2 L ^ {2} d (1 / 4 + 1 / 6 + 1 / 12) \gamma_ {k + 1} = O (\gamma_ {k + 1}),
$$

and

$$
\begin{array}{l} \mathbb {E} \left[ \| \nabla f (h _ {2}) - \nabla f (x _ {k}) \| ^ {2} \mid \mathcal {F} _ {k} \right] \leq 2 L ^ {2} \left(\gamma_ {k + 1} ^ {2} \| \nabla f (x _ {k}) \| ^ {2} + 2 \gamma_ {k + 1} ^ {2} \sigma^ {2} + 2 d (1 / 4 - 1 / 6 + 1 / 12) \gamma_ {k + 1}\right) \\ = O (\gamma_ {k + 1} ^ {2} \| \nabla f (x _ {k}) \| ^ {2} + \gamma_ {k + 1}). \\ \end{array}
$$

We thus have

$$
\begin{array}{l} \mathbb {E} \left[ \| b _ {k + 1} \| ^ {2} \mid \mathcal {F} _ {k} \right] \leq \frac {1}{2} \mathbb {E} \left[ \| \nabla f (h _ {1}) - \nabla f (x _ {k}) \| ^ {2} \mid \mathcal {F} _ {k} \right] + \frac {1}{2} \mathbb {E} \left[ \| \nabla f (h _ {2}) - \nabla f (x _ {k}) \| ^ {2} \mid \mathcal {F} _ {k} \right] \\ = O (\gamma_ {k + 1} ^ {2} \left\| \nabla f (x _ {k}) \right\| ^ {2} + \gamma_ {k + 1}), \\ \end{array}
$$

as desired.

$\S$ Proof for Example 4. Suppose $\phi$ is a Legendre function [52] for $\mathbb{R}^d$, and consider the iterates

$$
x _ {k + 1} = x _ {k} - \gamma_ {k + 1} \nabla f \left(\nabla \phi^ {*} \left(x _ {k}\right)\right) + \sqrt {2 \gamma_ {k + 1}} \left(\nabla^ {2} \phi^ {*} \left(x _ {k}\right) ^ {- 1}\right) ^ {1 / 2} \xi_ {k + 1},
$$

where $\phi^{*}$ is the Fenchel dual of $\phi$, that is, $\phi^{*}(x) = \sup_{y\in \mathbb{R}^{d}}(\langle x,y\rangle -\phi (y))$. Also recall that [52]

$$
\nabla \phi (\nabla \phi^ {*} (x)) = x, \quad \nabla^ {2} \phi^ {*} (\nabla \phi (x)) ^ {- 1} = \nabla^ {2} \phi (x), \quad \forall x \in \mathbb {R} ^ {d}.
$$

Let $v = -\nabla f \circ \nabla \phi^{*}$ and $\sigma = (\nabla^2\phi^*)^{-1 / 2}$.
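The mirror Langevin update above is easy to simulate once $\nabla \phi^{*}$ and $\nabla^{2}\phi^{*}$ are available in closed form. Below is a minimal sketch of the iteration; the quadratic mirror map $\phi(x) = \frac{1}{2}x^{\top}Ax$ with diagonal $A \succ 0$ and the Gaussian target $f(x) = \frac{1}{2}\|x\|^{2}$ are assumptions made purely for illustration, since they give $\nabla\phi^{*}(y) = A^{-1}y$ and $(\nabla^{2}\phi^{*})^{-1/2} = \operatorname{diag}(\sqrt{A})$ explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = np.array([1.0, 2.0, 4.0])          # diagonal of the mirror map Hessian (assumed)
grad_f = lambda x: x                   # target f(x) = ||x||^2 / 2 (assumed)
grad_phi_star = lambda y: y / A        # phi*(y) = y^T A^{-1} y / 2, so grad phi*(y) = A^{-1} y
hess_phi_star_inv_sqrt = np.sqrt(A)    # (nabla^2 phi*)^{-1/2} = diag(sqrt(A))

def mirror_langevin_step(x, gamma):
    """x_{k+1} = x_k - gamma grad f(grad phi*(x_k)) + sqrt(2 gamma) (nabla^2 phi*(x_k))^{-1/2} xi."""
    xi = rng.standard_normal(d)
    drift = grad_f(grad_phi_star(x))
    return x - gamma * drift + np.sqrt(2 * gamma) * hess_phi_star_inv_sqrt * xi

x = np.zeros(d)
for k in range(1000):
    x = mirror_langevin_step(x, gamma=0.01)
```

The dual iterate `x` lives in the mirror domain; samples in the primal domain are obtained as `grad_phi_star(x)`.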
First, we mention what our assumptions imply about $f$:

- The Lipschitzness of $v$ corresponds to a similar condition in [31, A2]:

$$
\| \nabla f (x) - \nabla f (y) \| \leq L \| \nabla \phi (x) - \nabla \phi (y) \|
$$

- The Lipschitzness of $\sigma$ in Frobenius norm corresponds to modified self-concordance in [31, A1]:

$$
\| \nabla^ {2} \phi (x) ^ {1 / 2} - \nabla^ {2} \phi (y) ^ {1 / 2} \| _ {F} \leq L \| \nabla \phi (x) - \nabla \phi (y) \|.
$$

- Boundedness of $\sigma$ in Hilbert-Schmidt norm implies

$$
\left\| \nabla^ {2} \phi (x) ^ {- 1 / 2} \right\| _ {F} \leq C _ {\sigma}.
$$

- Dissipativity and weak-dissipativity of $v$ correspond to the conditions below, respectively:

$$
\langle \nabla \phi (x), \nabla f (x) \rangle \geq \alpha \| \nabla \phi (x) \| ^ {2} - \beta , \quad \langle \nabla \phi (x), \nabla f (x) \rangle \geq \alpha \| \nabla \phi (x) \| ^ {1 + \kappa} - \beta .
$$

If $f$ and $\phi$ satisfy the conditions above, then the mirror Langevin algorithm of Example 4 fits into the (LRM) scheme.

Remark. Note that this version of mirror Langevin cannot handle the case where $e^{-f}$ is supported on a compact domain; in that case, the Hessian of $\phi$ has to blow up near the boundary and will not satisfy our boundedness assumption. The version of mirror Langevin we consider in this paper, though, can be thought of as an adaptive conditioning method for densities supported on $\mathbb{R}^d$. This setting has also been studied in the literature; see [55].

$\S$ Proof for Example 6. The iterates of (PLA) follow

$$
x _ {k + 1} = x _ {k} - \gamma_ {k + 1} \nabla f \left(x _ {k + 1}\right) + \sqrt {2 \gamma_ {k + 1}} \xi_ {k + 1}. \tag {PLA}
$$

We mentioned that the bias term is $b_{k + 1} = \nabla f(x_{k + 1}) - \nabla f(x_k)$. Now it remains to prove that it satisfies the conditions (5) and (11).
We have

$$
\begin{array}{l}
\mathbb{E}\left[\|b_{k+1}\|^{2} \mid \mathcal{F}_{k}\right] = \mathbb{E}\left[\|\nabla f(x_{k+1}) - \nabla f(x_{k})\|^{2} \mid \mathcal{F}_{k}\right] \\
\leq L^{2}\mathbb{E}\left[\|x_{k+1} - x_{k}\|^{2} \mid \mathcal{F}_{k}\right] \\
= L^{2}\mathbb{E}\big[\|-\gamma_{k+1}\nabla f(x_{k+1}) + \sqrt{2\gamma_{k+1}}\xi_{k+1}\|^{2} \mid \mathcal{F}_{k}\big] \\
\leq 2L^{2}\gamma_{k+1}^{2}\mathbb{E}\left[\|\nabla f(x_{k+1})\|^{2} \mid \mathcal{F}_{k}\right] + 4L^{2}d\gamma_{k+1}.
\end{array}
$$

Now, notice that $\|\nabla f(x_{k+1})\|^{2} \leq 2\|\nabla f(x_{k+1}) - \nabla f(x_{k})\|^{2} + 2\|\nabla f(x_{k})\|^{2}$. As $\gamma_{k}\to 0$, one can assume that $4L^{2}\gamma_{k+1}^{2} < \frac{1}{2}$, and we get

$$
\mathbb{E}\big[\|b_{k+1}\|^{2} \mid \mathcal{F}_{k}\big] \leq \frac{1}{2}\mathbb{E}\big[\|b_{k+1}\|^{2} \mid \mathcal{F}_{k}\big] + 4L^{2}\gamma_{k+1}^{2}\|\nabla f(x_{k})\|^{2} + 4L^{2}d\gamma_{k+1},
$$

which implies

$$
\mathbb{E}\big[\|b_{k+1}\|^{2} \mid \mathcal{F}_{k}\big] \leq 8L^{2}\gamma_{k+1}^{2}\|\nabla f(x_{k})\|^{2} + 8L^{2}d\gamma_{k+1} = \mathcal{O}(\gamma_{k+1}^{2}\|\nabla f(x_{k})\|^{2} + \gamma_{k+1}),
$$

as desired.
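To make the (PLA) update concrete, here is a small Python sketch (our illustration; the paper contains no code) that solves the implicit equation $x_{k+1} = x_k - \gamma\nabla f(x_{k+1}) + \sqrt{2\gamma}\,\xi_{k+1}$ by fixed-point iteration, which contracts when $\gamma L < 1$ for $L$-smooth $f$. For the quadratic potential $f(x) = \|x\|^2/2$ the update has the closed form $(x_k + \sqrt{2\gamma}\,\xi)/(1+\gamma)$, used below as a sanity check.

```python
import numpy as np

def pla_step(x, grad_f, gamma, xi, n_fixed_point=100):
    # Solve x_next = x - gamma * grad_f(x_next) + sqrt(2*gamma) * xi
    # by fixed-point iteration; the map contracts when gamma * L < 1 for L-smooth f.
    target = x + np.sqrt(2.0 * gamma) * xi  # all terms except the implicit one
    x_next = x.copy()
    for _ in range(n_fixed_point):
        x_next = target - gamma * grad_f(x_next)
    return x_next

rng = np.random.default_rng(0)
x, gamma = np.ones(3), 0.1
xi = rng.standard_normal(3)
x_next = pla_step(x, lambda z: z, gamma, xi)  # f(x) = ||x||^2 / 2, so grad_f is the identity
closed_form = (x + np.sqrt(2.0 * gamma) * xi) / (1.0 + gamma)
assert np.allclose(x_next, closed_form)
```

In practice one would pass the actual potential's gradient; the fixed-point loop is the simplest solver and can be replaced by a Newton step for stiff problems.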
\ No newline at end of file diff --git a/adynamicalsystemviewoflangevinbasednonconvexsampling/images.zip b/adynamicalsystemviewoflangevinbasednonconvexsampling/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..157ad200662575b7db21b455e1bec3bba0ed02f9 --- /dev/null +++ b/adynamicalsystemviewoflangevinbasednonconvexsampling/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c9a797e704b153d223946b114e3ecb8eb8ad64031df661f499093d87110d740b +size 1287991 diff --git a/adynamicalsystemviewoflangevinbasednonconvexsampling/layout.json b/adynamicalsystemviewoflangevinbasednonconvexsampling/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..cce431954cf5ef2f498d853b3608ac1c4c978e0e --- /dev/null +++ b/adynamicalsystemviewoflangevinbasednonconvexsampling/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7aa45476c9014eaee89217de500e1425351535121a6a723039157d0faab97574 +size 1038220 diff --git a/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/bc085ea0-1a8f-4b88-a51c-a9c0a16b5105_content_list.json b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/bc085ea0-1a8f-4b88-a51c-a9c0a16b5105_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..bb143500c998048e81f34c33b200951259361801 --- /dev/null +++ b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/bc085ea0-1a8f-4b88-a51c-a9c0a16b5105_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f715ef2e3ec9e8067d8349c572db4919dcc3fe068fca351cfe7831bd3bc9269c +size 81204 diff --git a/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/bc085ea0-1a8f-4b88-a51c-a9c0a16b5105_model.json b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/bc085ea0-1a8f-4b88-a51c-a9c0a16b5105_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..5815354b5ffcf88e6d9fdb7987ea8a19d01498b6 --- /dev/null +++ b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/bc085ea0-1a8f-4b88-a51c-a9c0a16b5105_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db730fb20137f8cbe66a4405e567ba6b99a5b5d2c8b9df787bc766686010efc2 +size 93430 diff --git a/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/bc085ea0-1a8f-4b88-a51c-a9c0a16b5105_origin.pdf b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/bc085ea0-1a8f-4b88-a51c-a9c0a16b5105_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bf0874f0269a9b1584b11825628ae0f46604019b --- /dev/null +++ b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/bc085ea0-1a8f-4b88-a51c-a9c0a16b5105_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf3817d580c30496b6bd6f9860559b137ce2e154d9c133c2d0d113cdd193cfca +size 455578 diff --git a/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/full.md b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1acb358a540985f96d6d0f7036988b2528f4e3fe --- /dev/null +++ b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/full.md @@ -0,0 +1,305 @@ +# A Fast and Accurate Estimator for Large Scale Linear Model via Data Averaging + +# Rui Wang + +Center for Applied Statistics + +and School of Statistics + +Renmin University of China + +Beijing 100872, China 446100240@qq.com + +# Yanyan Ouyang + +Center for Applied Statistics + +and School of Statistics + +Renmin University of China + +Beijing 100872, China staoyyy@ruc.edu.cn + +# Panpan Yu + +NavInfo + +Beijing 100094, China + +yuponpan@navinfo.com + +# Wangli Xu + +Center for Applied Statistics + +and School of Statistics + +Renmin University of China + +Beijing 100872, China w1xu@ruc.edu.cn + +# 
Abstract

This work is concerned with the estimation problem for the linear model when the sample size is extremely large and the data dimension can vary with the sample size. In this setting, the least square estimator based on the full data is not feasible with limited computational resources. Many existing methods for this problem are based on the sketching technique, which uses the sketched data to perform least square estimation. We derive fine-grained lower bounds of the conditional mean squared error for sketching methods. For sampling methods, our lower bound provides an attainable optimal convergence rate. Our result implies that when the dimension is large, hardly any sampling method can have a faster convergence rate than the uniform sampling method. To achieve better statistical performance, we propose a new sketching method based on data averaging. The proposed method reduces the original data to a few averaged observations. These averaged observations still satisfy the linear model and are used to estimate the regression coefficients. The asymptotic behavior of the proposed estimation procedure is studied. Our theoretical results show that the proposed method can achieve a faster convergence rate than the optimal convergence rate for sampling methods. Theoretical and numerical results show that the proposed estimator has good statistical performance as well as low computational cost.

# 1 Introduction

The linear regression model is one of the simplest and most fundamental models in statistics and machine learning. Suppose one collects independent and identically distributed (i.i.d.) observations $\{Z_i, y_i\}_{i=1}^N$, where $Z_i \in \mathbb{R}^p$ is the vector of predictors and $y_i \in \mathbb{R}$ is the response.
The linear model assumes

$$
y_{i} = \beta_{0} + Z_{i}^{\top}\boldsymbol{\beta}_{1} + \varepsilon_{i}, \quad i = 1, \dots, N, \tag{1}
$$

where $\beta_0\in \mathbb{R}$, $\boldsymbol{\beta}_1\in \mathbb{R}^p$ are the unknown coefficients and $\varepsilon_{1},\ldots,\varepsilon_{N}$ are random variables representing noise. Let $X_{i} = (1, Z_{i}^{\top})^{\top}$, $\boldsymbol{\beta} = (\beta_{0},\boldsymbol{\beta}_{1}^{\top})^{\top}$, $\mathbf{y} = (y_{1},\dots,y_{N})^{\top}$ and $\mathbf{X} = (X_{1},\dots,X_{N})^{\top}$. The classical least square estimator equals $\arg\min_{\boldsymbol{\beta}\in \mathbb{R}^{p + 1}}\|\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\|^2$. If $\mathbf{X}$ has full column rank, the least square estimator equals $(\sum_{i = 1}^{N}X_{i}X_{i}^{\top})^{-1}\sum_{i = 1}^{N}X_{i}y_{i}$, and the direct computation costs $O(Np^{2})$ time. For large scale linear models with $N \gg p$, the computing time $O(Np^2)$ of the exact least square estimator is not negligible. Faster estimators of $\boldsymbol{\beta}$ can largely facilitate practical data analysis pipelines.

Numerous research efforts have been devoted to the estimation problem for the large scale linear model. Much existing work in this area can be understood as matrix sketching methods, which explicitly or implicitly use matrix sketches as surrogates for the original observations to reduce the data size. Specifically, sketching methods solve the sketched least square problem

$$
\min_{\boldsymbol{\beta}\in \mathbb{R}^{p + 1}}\|\mathbf{O}^{\top}\mathbf{y} - \mathbf{O}^{\top}\mathbf{X}\boldsymbol{\beta}\|^{2}, \tag{2}
$$

where $\mathbf{O} \in \mathbb{R}^{N \times n}$ is a sketching matrix with $n \ll N$. The solution to the problem (2) is the least square estimator based on the reduced data $\mathbf{O}^{\top}\mathbf{X} \in \mathbb{R}^{n \times (p + 1)}$ and $\mathbf{O}^{\top}\mathbf{y} \in \mathbb{R}^{n}$.
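As a quick illustration of the sketched problem (2), the Python snippet below (our own toy example with made-up dimensions, not code from the paper) builds a uniform-sampling sketching matrix $\mathbf{O}$ and solves the sketched least squares; with noiseless responses, the sketched solution recovers $\boldsymbol{\beta}$ exactly whenever $\mathbf{O}^{\top}\mathbf{X}$ has full column rank.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, n = 1000, 5, 50
Z = rng.standard_normal((N, p))
X = np.hstack([np.ones((N, 1)), Z])      # prepend the intercept column, X_i = (1, Z_i^T)^T
beta = rng.standard_normal(p + 1)
y = X @ beta                             # noiseless responses for an exact-recovery check

# Uniform sampling as a sketching matrix: each column of O is all zeros
# except for a single 1 marking the sampled row.
idx = rng.choice(N, size=n, replace=False)
O = np.zeros((N, n))
O[idx, np.arange(n)] = 1.0

# Solve the sketched problem (2): min_beta ||O^T y - O^T X beta||^2.
beta_hat, *_ = np.linalg.lstsq(O.T @ X, O.T @ y, rcond=None)
assert np.allclose(beta_hat, beta)       # exact because the responses are noiseless
```

In an implementation one would of course index the rows directly rather than materialize the $N \times n$ matrix $\mathbf{O}$; the dense form above only mirrors the notation of (2).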
Since $n \ll N$, the sketched least square problem can be solved much faster than the full-data least square problem. In this paper, we only consider the case that $\mathbf{O}$ is independent of $\varepsilon_1, \ldots, \varepsilon_N$. That is, $\mathbf{O}$ may rely on $\mathbf{X}$, but not on $\mathbf{y}$. This guarantees that the solution to (2) is an unbiased estimator of $\beta$. Note that sampling methods are special cases of the sketching framework (2). In fact, for a sampling method, each column of $\mathbf{O}$ is a vector whose elements are all 0 except one that equals 1. Sketching methods have been intensively researched from the algorithmic aspect; see Mahoney [2010], Woodruff [2014], Drineas and Mahoney [2016] for reviews. Recently, the statistical aspect of sketching methods has also drawn much attention; see, e.g., Ma et al. [2015], Raskutti and Mahoney [2016], Wang et al. [2017], Dobriban and Liu [2019], Ma et al. [2020], Ahfock et al. [2021].

Probably the simplest sketching method is the uniform sampling method, which randomly selects $n$ observations with equal probability to form the reduced data. Recently, Pilanci and Wainwright [2016] provided a minimax lower bound for the mean squared prediction error of random sketching methods. Theorem 1 of Pilanci and Wainwright [2016] shows that for a large class of random sketching methods, including many existing data-oblivious sketching methods and sampling methods, the convergence rate of the mean squared prediction error cannot be faster than that of the uniform sampling method. Hence it is a nontrivial task to construct a sketching method which has significantly better statistical performance than the uniform sampling method.

Recently, Wang et al. [2019] initiated the study of sampling methods based on extreme values. Motivated by the D-optimal criterion, Wang et al. [2019] proposed the information-based optimal subdata selection (IBOSS) algorithm, which successively selects informative observations based on extreme values of the variables. They showed that for fixed $p$, the estimator produced by the IBOSS algorithm can have a faster convergence rate than the uniform sampling method. Meanwhile, the computation of the IBOSS algorithm can be completed within $O(Np + np^2)$ time, which is of the same order as the uniform sampling method if $n = cN/p$ for some constant $c > 0$. The algorithm of Wang et al. [2019] has now become the building block of some recent methods for large scale problems. For example, Wang [2019] proposed an algorithm which combines the algorithm of Wang et al. [2019] and the divide and conquer strategy. Cheng et al. [2020] extended the algorithm of Wang et al. [2019] to the logistic regression model. Existing asymptotic results for the IBOSS algorithm are obtained in the setting of fixed $n$ and $p$. At present, there is still a lack of theoretical understanding of the behavior of the IBOSS algorithm in the setting of varying $n$ and $p$.

The IBOSS algorithm is a sampling method and is therefore an instance of the sketching framework (2). Interestingly, the IBOSS algorithm can surpass the minimax lower bound of Pilanci and Wainwright [2016]. In fact, a key condition for Theorem 1 of Pilanci and Wainwright [2016] does not hold for the IBOSS algorithm. Thus, the IBOSS algorithm is not restricted by the minimax bound of Pilanci and Wainwright [2016]. This fact is detailed in Section 2. Note that there are many potential sketching methods which are not restricted by the minimax bound of Pilanci and Wainwright [2016]. To give a more comprehensive understanding of the behavior of these sketching methods, we derive fine-grained lower bounds for the conditional mean squared error of the sketched least square estimators produced by (2) in the setting that $Z_{i}$ is a standard normal random vector.
In particular, our result provides a lower bound for any sampling method which may possibly rely on $\mathbf{X}$ but does not rely on $\mathbf{y}$. It turns out that if $p \ll \log (N / n)$, then the optimal lower bound for sampling methods can have a faster convergence rate than the uniform sampling method. On the other hand, if $\log (N / n) \ll p$, no sampling method can largely surpass the uniform sampling method. Furthermore, we derive the asymptotic behavior of the IBOSS algorithm in the setting of varying $n$ and $p$. It turns out that under certain conditions, the IBOSS algorithm can achieve the optimal rate for sampling methods.

Table 1: Theoretical performance of the ideal sampling method (abbreviated as ISM), the IBOSS algorithm and the proposed method when $Z_{1} \sim \mathcal{N}(\mathbf{0}_{p},\mathbf{I}_{p})$. Assume that as $N \to \infty$, $p \to \infty$, $p^3 (\log (p))^4\log (N) / N \to \infty$, $n = O(N^{\epsilon})$ for some $0 < \epsilon < 1 / 2$ and $p = O(n^{1 / 2 - \epsilon^*})$ for some $0 < \epsilon^{*} < 1 / 2$, $\log (N / n) = O(p^2)$. See Theorems 2, 3, 4. The reported computing time is under the assumption that the multiplication of an $m \times n$ matrix and an $n \times p$ matrix costs $O(mnp)$ time, and the inversion of a $p \times p$ matrix costs $O(p^3)$ time.
| Methods | Reduced sample size | $\operatorname{E}\{\Vert\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}\Vert^2 \mid \mathbf{Z}\}$ | Computing time |
| --- | --- | --- | --- |
| ISM | $n$ | $O_P\left(\frac{p^2}{n(p + \log(N/n))}\right)$ | |
| IBOSS | $n$ | $O_P\left(\frac{p^2}{n(p + \log(N/n))}\right)$ | $O(Np + np^2)$ |
| NEW | $2p$ | $(1 + o_P(1))\frac{p^2\sigma_{\varepsilon}^2}{2\log(2p)N}$ | $O(Np + p^3)$ |
For large scale linear models, it is often the case that $\log (N / n)\ll p$. In this case, no sampling method can have significantly better statistical performance than the uniform sampling method. Inspired by this phenomenon, we propose an alternative sketching method which can reduce the full data to just a few observations while the resulting estimator of $\beta$ may have smaller conditional mean squared error than sampling methods. The proposed method is based on data averaging. The main idea is to partition the observations into $2p$ groups such that the averages of $Z_{i}$ within groups are separated. The least square estimator based on the $2p$ averaged observations is used to estimate $\beta$. The computation of the proposed method can be completed within $O(Np + p^3)$ time. Our theoretical results show that the proposed method can have a faster convergence rate than any sampling method with comparable computing time. Also, the proposed method reduces the full data to merely $2p$ averaged observations. These averaged observations also satisfy the linear model (1) and have independent errors. Consequently, it is convenient to further compute other estimators or conduct statistical inference using the reduced data. The good performance of the proposed estimator is also verified by simulation results and a real data example. Table 1 summarizes the theoretical performance of the proposed method and compares it with the ideal sampling method implied by Theorem 2 and the IBOSS algorithm.

The rest of the paper is organized as follows. Section 2 investigates lower bounds for the conditional mean squared error of the sketched least square estimators produced by (2). In Section 3, we propose a data averaging method to estimate $\beta$ and investigate its asymptotic behavior. Section 4 presents the simulation results briefly. Section 5 concludes the paper.
The simulation results, a real data analysis and all proofs are deferred to the Supplementary Material.

We close this section by introducing some notation and assumptions that will be used throughout the paper. For any real number $w$, let $\lfloor w \rfloor$ denote the largest integer not larger than $w$. For any vector $W$, let $\|W\|$ denote the Euclidean norm of $W$. For any matrix $\mathbf{B}$, let $\|\mathbf{B}\|$ and $\|\mathbf{B}\|_F$ denote the operator norm and the Frobenius norm of $\mathbf{B}$, respectively. Moreover, denote by $\mathbf{B}_{\cdot,j}$ the $j$th column of $\mathbf{B}$. If $\mathbf{B}$ is symmetric, denote by $\lambda_i(\mathbf{B})$ the $i$th largest eigenvalue of $\mathbf{B}$. In this paper, the symmetric matrices are equipped with the Loewner partial order. That is, for two symmetric matrices $\mathbf{B}_1$ and $\mathbf{B}_2$, $\mathbf{B}_1 > \mathbf{B}_2$ if and only if $\mathbf{B}_1 - \mathbf{B}_2$ is positive definite. For a positive semidefinite matrix $\mathbf{B}$, let $\mathbf{B}^{1/2}$ denote a positive semidefinite matrix such that $(\mathbf{B}^{1/2})^2 = \mathbf{B}$. For any set $\mathcal{A}$, denote by $\mathcal{A}^\complement$ its complement and by $\operatorname{Card}(\mathcal{A})$ its cardinality. Let $\Phi(x)$ and $\varphi(x)$ denote the distribution function and the density function of the standard normal distribution, respectively. For random variables $\xi \in \mathbb{R}$ and $\eta > 0$, $\xi = o_P(\eta)$ means that $\xi/\eta$ converges to 0 in probability, and $\xi = O_P(\eta)$ means that $\xi/\eta$ is bounded in probability.

Let $N$ denote the size of the full sample and $p$ the dimension of the covariates. Let $\mathbf{Z} = (Z_1,\dots,Z_N)^\top$ be the $N\times p$ matrix of covariates. Denote by $z_{i,j}$ the $j$th element of $Z_{i}$, $i = 1,\ldots,N$, $j = 1,\ldots,p$. Let $z_{(1),j}\leq \dots \leq z_{(N),j}$ denote the order statistics of $\{z_{i,j}\}_{i = 1}^{N}$, $j = 1,\dots,p$.
The following assumption for the data distribution is assumed throughout the paper. + +Assumption 1 Suppose $\{Z_i, y_i\}_{i=1}^N$ are i.i.d. and satisfy the linear model (1), where $\operatorname{E}(\varepsilon_1) = 0$ , $\operatorname{Var}(\varepsilon_1) = \sigma_{\varepsilon}^2$ and $\varepsilon_1$ is independent of $Z_1$ . Suppose $Z_1$ has a density function $f(z_{1,1}, \ldots, z_{1,p})$ with respect to the Lebesgue measure on $\mathbb{R}^p$ , and $\operatorname{E}(Z_1) = \mu$ , $\operatorname{Cov}(Z_1) = \Sigma$ are finite. Suppose $\Sigma = (\sigma_{i,j})_{i,j=1}^p$ is positive definite. Suppose $r > 0$ . As $N \to \infty$ , the dimension $p$ is a function of $N$ , while the distribution of $(Z_1, y_1)$ only relies on $p$ . Finally, assume $\sigma_{\varepsilon}^2$ is a constant which does not depend on $N$ . + +For simplicity, our notations suppress the dependence of $p$ on $N$ , and the dependence of the distribution of $(Z_1, y_1)$ on $p$ . + +# 2 Risk bounds for sketched least square estimators + +Theorem 1 of Pilanci and Wainwright [2016] provides a minimax lower bound for the mean squared prediction error of random sketching methods. Their result implies that under certain conditions, there exists a constant $C > 0$ such that for any estimator $\hat{\beta}$ which only relies on $(\mathbf{O}^{\top}\mathbf{X},\mathbf{O}^{\top}\mathbf{y})$ , + +$$ +\sup _ {\boldsymbol {\beta} \in \mathbb {R} ^ {p}} \mathrm {E} \left\{N ^ {- 1} \left\| \mathbf {X} (\hat {\boldsymbol {\beta}} - \boldsymbol {\beta}) \right\| ^ {2} \mid \mathbf {Z} \right\} \geq \frac {C p}{n} \sigma_ {\varepsilon} ^ {2}. +$$ + +The optimal convergence rate $p / n$ can be achieved by the least square estimator based on $n$ uniformly selected observations. 
A key condition for the above result is that

$$
\left\|\operatorname{E}\left(\mathbf{O}\left(\mathbf{O}^{\top}\mathbf{O}\right)^{-1}\mathbf{O}^{\top} \mid \mathbf{Z}\right)\right\| \leq cn/N \tag{3}
$$

for some constant $c > 0$.

The result of Pilanci and Wainwright [2016] can be applied to general estimators based on $(\mathbf{O}^{\top}\mathbf{X},\mathbf{O}^{\top}\mathbf{y})$. In this paper, however, we focus on the least square estimator based on $(\mathbf{O}^{\top}\mathbf{X},\mathbf{O}^{\top}\mathbf{y})$. Let $\hat{\beta}_{\mathbf{O}}$ denote the solution to the sketched least square problem (2). In this paper, we use the conditional mean squared error $\operatorname{E}\{\|\hat{\beta}_{\mathbf{O}} - \boldsymbol{\beta}\|^{2}\mid \mathbf{Z}\}$ to measure the performance of $\hat{\beta}_{\mathbf{O}}$. The following theorem gives a lower bound for the conditional mean squared error of $\hat{\beta}_{\mathbf{O}}$.

Theorem 1 Suppose Assumption 1 holds, $Z \sim \mathcal{N}(\mathbf{0}_p, \mathbf{I}_p)$, and the sketching matrix $\mathbf{O}$ is an $N \times n$ matrix with full column rank. Assume that $\mathbf{O}$ is independent of $\varepsilon_1, \ldots, \varepsilon_N$ and, with probability 1, $\mathbf{O}^\top \mathbf{X}$ has full column rank. Suppose as $N \to \infty$, $p/N \to 0$. Then as $N \to \infty$,

$$
\operatorname{E}\left\{\|\hat{\boldsymbol{\beta}}_{\mathbf{O}} - \boldsymbol{\beta}\|^{2} \mid \mathbf{Z}\right\} \geq (1 + o_{P}(1))\|\operatorname{E}(\mathbf{O}(\mathbf{O}^{\top}\mathbf{O})^{-1}\mathbf{O}^{\top} \mid \mathbf{Z})\|^{-1}\frac{p + 1}{N}\sigma_{\varepsilon}^{2}.
$$

Theorem 1 gives an explicit characterization of the impact of $\operatorname{E}(\mathbf{O}(\mathbf{O}^{\top}\mathbf{O})^{-1}\mathbf{O}^{\top}\mid\mathbf{Z})$ on the lower bound of $\operatorname{E}\{\|\hat{\beta}_{\mathbf{O}} - \beta\|^{2}\mid\mathbf{Z}\}$.
Pilanci and Wainwright [2016] showed that the condition (3) is satisfied by many classical sketching methods. Under the conditions of Theorem 1, for sketching methods satisfying the condition (3), the convergence rate of $\operatorname{E}\{\|\hat{\beta}_{\mathbf{O}} - \beta\|^{2}\mid\mathbf{Z}\}$ is lower bounded by $p/n$, which is the convergence rate for the uniform sampling method. Thus, in order to achieve a faster convergence rate than the uniform sampling method, the condition (3) should be violated.

Many existing sketching methods operate by sampling the observations. For sampling methods, $\mathbf{O}$ is a column orthogonal matrix and each column of $\mathbf{O}$ has a single nonzero element with value 1. Hence $\mathbf{O}(\mathbf{O}^{\top}\mathbf{O})^{-1}\mathbf{O}^{\top}$ is a diagonal matrix whose diagonal elements are zeros and ones. For the IBOSS algorithm of Wang et al. [2019], the selected observations are completely determined by $\mathbf{X}$ and do not rely on additional randomness. Consequently, $\|\operatorname{E}(\mathbf{O}(\mathbf{O}^{\top}\mathbf{O})^{-1}\mathbf{O}^{\top}\mid \mathbf{Z})\| = \|\mathbf{O}(\mathbf{O}^{\top}\mathbf{O})^{-1}\mathbf{O}^{\top}\| = 1$. In this case, the lower bound provided by Theorem 1 has rate $p/N$, which is too loose. The following theorem gives a tighter lower bound of the mean squared error for sampling methods.

Theorem 2 Suppose Assumption 1 holds, $Z \sim \mathcal{N}(\mathbf{0}_p, \mathbf{I}_p)$, and the sketching matrix $\mathbf{O}$ is an $N \times n$ matrix with full column rank. Assume that $\mathbf{O}$ is independent of $\varepsilon_1, \ldots, \varepsilon_N$ and, with probability 1, $\mathbf{O}^{\top}\mathbf{X}$ has full column rank. Furthermore, suppose $\operatorname{E}(\mathbf{O}(\mathbf{O}^{\top}\mathbf{O})^{-1}\mathbf{O}^{\top} \mid \mathbf{Z}) = \operatorname{diag}(d_1, \ldots, d_N)$.
Let $d_{\max} = \max_{i \in \{1, \ldots, N\}} d_i$. Then

$$
\operatorname{E}\left\{\|\hat{\boldsymbol{\beta}}_{\mathbf{O}} - \boldsymbol{\beta}\|^{2} \mid \mathbf{Z}\right\} \geq \frac{p^{2}}{6n(p + \log(\frac{Nd_{\max}}{n})) + O_{P}(n)}\sigma_{\varepsilon}^{2}.
$$

If the matrix $\operatorname{E}(\mathbf{O}(\mathbf{O}^\top\mathbf{O})^{-1}\mathbf{O}^\top \mid \mathbf{Z}) = \operatorname{diag}(d_1, \ldots, d_N)$ is diagonal, then

$$
d_{\max} = \max_{\boldsymbol{\alpha}\in\mathbb{R}^{N}, \|\boldsymbol{\alpha}\| = 1}\boldsymbol{\alpha}^{\top}\operatorname{E}(\mathbf{O}(\mathbf{O}^{\top}\mathbf{O})^{-1}\mathbf{O}^{\top} \mid \mathbf{Z})\boldsymbol{\alpha} \leq \operatorname{E}\left(\max_{\boldsymbol{\alpha}\in\mathbb{R}^{N}, \|\boldsymbol{\alpha}\| = 1}\boldsymbol{\alpha}^{\top}\mathbf{O}(\mathbf{O}^{\top}\mathbf{O})^{-1}\mathbf{O}^{\top}\boldsymbol{\alpha} \mid \mathbf{Z}\right) = 1.
$$

Under the conditions of Theorem 2, the optimal convergence rate for sampling methods is lower bounded by

$$
\frac{p^{2}}{n\left(p + \log\left(\frac{N}{n}\right)\right)}. \tag{4}
$$

Note that if $p \ll \log (N / n)$, then the rate (4) is faster than that of the uniform sampling method. Theorem 4 in Section 3.2 will show that under certain conditions, the method of Wang et al. [2019] can achieve the optimal rate (4).

It is worth mentioning that Theorems 1 and 2 are obtained under the condition $Z_{i} \sim \mathcal{N}(\mathbf{0}_{p},\mathbf{I}_{p})$. Perhaps these results can be extended to the case that $Z_{i}$ has a general multivariate distribution. However, such results may not be valid if the distribution of $Z_{i}$ has a heavier tail than the normal distribution. In fact, our numerical results imply that a faster convergence rate may be achieved when the distribution of $Z_{i}$ has a heavy tail.
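The regime dependence of the bound can be seen with a back-of-the-envelope computation (illustrative numbers of our choosing): the ratio of the rate (4) to the uniform-sampling rate $p/n$ is $p/(p + \log(N/n))$, so sampling can substantially beat uniform sampling only when $p \ll \log(N/n)$.

```python
import math

def uniform_rate(p, n):
    # Convergence rate p / n of the uniform sampling method.
    return p / n

def optimal_sampling_rate(p, n, N):
    # The lower-bound rate (4): p^2 / (n (p + log(N/n))).
    return p**2 / (n * (p + math.log(N / n)))

N, n = 10**9, 100                      # log(N/n) = log(1e7), roughly 16.1
# Small p: rate (4) is an order of magnitude faster than p/n.
assert optimal_sampling_rate(2, n, N) / uniform_rate(2, n) < 0.2
# Large p: the two rates are of the same order, so there is little to gain.
assert optimal_sampling_rate(100, n, N) / uniform_rate(100, n) > 0.8
```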
# 3 An estimator via data averaging

In this section, we propose a new sketching method which can hopefully achieve good statistical performance with low computational cost. For simplicity, when considering computation time, it is understood that the multiplication of an $m \times n$ matrix and an $n \times p$ matrix costs $O(mnp)$ time, and the inversion of a $p \times p$ matrix costs $O(p^3)$ time. Note that the computation time $O(Np)$ is essential if each observation is accessed at least once, e.g., to be loaded into memory. The sketched least square problem (2) involves $n$ reduced observations and the direct computation costs $O(np^2 + p^3)$ time. The computation time $O(p^3)$ comes from the inversion of a $(p + 1) \times (p + 1)$ matrix, which is essential no matter how $n$ is chosen. Hence the direct computation of any reasonable estimator which uses the information of the full data requires at least $O(Np + p^3)$ time. Thus, we restrict our attention to algorithms that can be completed within $O(Np + p^3)$ time.

To complete the computation within $O(Np + p^3)$ time, one needs to take $n = O(N/p + p)$. Theorem 2 implies that if $n = c_1N/p + c_2p$, where $c_{1}, c_{2} > 0$ are constants, then the optimal convergence rate for sampling methods reduces to $p/n$, which is equal to the convergence rate for the uniform sampling method. Also, for large $N$, the reduced sample size $n = c_1N/p + c_2p$ may still be large. To achieve a faster convergence rate and a better reduction of the data, we consider sketching methods other than sampling methods. This motivates us to propose a new data averaging method.

# 3.1 Methodology

Let $\mathcal{J}_1, \ldots, \mathcal{J}_k \subset \{1, \ldots, N\}$ be $k$ mutually disjoint index sets, each containing $r$ indices, such that $\bigcup_{i=1}^{k} \mathcal{J}_i = \{1, \ldots, N\}$. To use the information of the full data, we assume $N = kr$.
Let $\bar{Z}_j = r^{-1} \sum_{i \in \mathcal{J}_j} Z_i$ and $\bar{y}_j = r^{-1} \sum_{i \in \mathcal{J}_j} y_i$ be the averaged observation within the $j$th index set, $j = 1, \ldots, k$. It can be seen that

$$
\bar{y}_{j} = \beta_{0} + \bar{Z}_{j}^{\top}\boldsymbol{\beta}_{1} + \bar{\varepsilon}_{j},
$$

where $\bar{\varepsilon}_j = r^{-1}\sum_{i\in \mathcal{J}_j}\varepsilon_i$. Suppose that the choice of the index sets $\mathcal{J}_1,\ldots,\mathcal{J}_k$ is based on the covariates $\{Z_i\}_{i = 1}^N$ and does not rely on the responses $\{y_i\}_{i = 1}^N$. Then $\bar{\varepsilon}_1,\dots,\bar{\varepsilon}_k$ are mutually independent and are independent of $\{\bar{Z}_j\}_{j = 1}^k$. Also, $\bar{\varepsilon}_j$ has mean 0 and variance $\sigma_{\varepsilon}^{2}/r$. Thus, the averaged observations also satisfy the linear model, and one can estimate $\beta$ by the least square estimator based on the $k$ reduced observations, $\hat{\beta} = (\sum_{j = 1}^{k}\bar{X}_{j}\bar{X}_{j}^{\top})^{-1}(\sum_{j = 1}^{k}\bar{X}_{j}\bar{y}_{j})$, where $\bar{X}_j = r^{-1}\sum_{i\in \mathcal{J}_j}X_i$, $j = 1,\ldots,k$. We would like to choose $\mathcal{J}_1,\ldots,\mathcal{J}_k$ such that $\hat{\beta}$ is a fast and accurate estimator of $\beta$. Let $\mathbf{H} = \sum_{\ell = 1}^{k}(\bar{Z}_{\ell} - \bar{Z})(\bar{Z}_{\ell} - \bar{Z})^{\top}$ and $\bar{Z} = N^{-1}\sum_{i = 1}^{N}Z_{i}$.
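The estimator $\hat{\beta}$ just defined is easy to check numerically. The Python sketch below (our illustration, using a trivial contiguous partition rather than the optimized groups constructed later in this section) forms the $k$ averaged observations and the least square estimator on them; since averaging preserves the linear model, noiseless responses are recovered exactly for any full-rank partition chosen from the covariates alone.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 1200, 3
k = 2 * p                                   # k groups of r = N / k observations each
Z = rng.standard_normal((N, p))
beta = np.array([1.0, -2.0, 0.5, 3.0])      # (beta_0, beta_1^T)^T
y = beta[0] + Z @ beta[1:]                  # noiseless responses

# Any partition based on Z alone keeps the averages linear in beta;
# a contiguous split is enough to illustrate the algebra.
groups = np.array_split(np.arange(N), k)
Zbar = np.array([Z[g].mean(axis=0) for g in groups])
ybar = np.array([y[g].mean() for g in groups])
Xbar = np.hstack([np.ones((k, 1)), Zbar])   # averaged design, rows Xbar_j^T

# beta_hat = (sum_j Xbar_j Xbar_j^T)^{-1} (sum_j Xbar_j ybar_j)
beta_hat = np.linalg.solve(Xbar.T @ Xbar, Xbar.T @ ybar)
assert np.allclose(beta_hat, beta)
```

With noisy responses the accuracy of $\hat{\beta}$ depends on how the groups are chosen, which is exactly what the criterion developed next optimizes.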
The conditional mean squared error of $\hat{\beta}$ is

$$
\begin{array}{l}
\operatorname{E}(\|\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}\|^{2} \mid \mathbf{Z}) = \frac{k\sigma_{\varepsilon}^{2}}{N}\operatorname{tr}\left\{\left(\begin{array}{cc} k & \sum_{j = 1}^{k}\bar{Z}_{j}^{\top} \\ \sum_{j = 1}^{k}\bar{Z}_{j} & \sum_{j = 1}^{k}\bar{Z}_{j}\bar{Z}_{j}^{\top} \end{array}\right)^{-1}\right\} \\
= \frac{k\sigma_{\varepsilon}^{2}}{N}\operatorname{tr}\left\{\left(\begin{array}{cc} \frac{1}{k} + \bar{Z}^{\top}\mathbf{H}^{-1}\bar{Z} & -\bar{Z}^{\top}\mathbf{H}^{-1} \\ -\mathbf{H}^{-1}\bar{Z} & \mathbf{H}^{-1} \end{array}\right)\right\} \\
= \left(k\left(\operatorname{tr}\left(\mathbf{H}^{-1}\right) + \bar{Z}^{\top}\mathbf{H}^{-1}\bar{Z}\right) + 1\right)\frac{\sigma_{\varepsilon}^{2}}{N}.
\end{array}
$$

In order to achieve good statistical accuracy, we would like to choose the index sets such that $\operatorname{tr}(\mathbf{H}^{-1}) + \bar{Z}^{\top}\mathbf{H}^{-1}\bar{Z}$ is minimized.

First we consider the simplest case of $p = 1$. In this case, the matrix $\mathbf{H}$ reduces to a real number. Since $\beta \in \mathbb{R}^2$, one needs at least two observations to estimate $\beta$. To achieve maximum reduction of the data, we take $k = 2$. Then $\mathbf{H}$ takes its maximum when $\mathcal{J}_1 = \{i \in \{1, \dots, N\} : z_{i,1} \leq z_{(N/2),1}\}$ and $\mathcal{J}_2 = \{i \in \{1, \dots, N\} : z_{i,1} \geq z_{(N/2+1),1}\}$. The least square estimator of $\beta$ based on the averaged observations is $\hat{\beta} = ((\bar{Z}_2\bar{y}_1 - \bar{Z}_1\bar{y}_2)/(\bar{Z}_2 - \bar{Z}_1), (\bar{y}_2 - \bar{y}_1)/(\bar{Z}_2 - \bar{Z}_1))^{\top}$.
The estimator $(\bar{y}_2 - \bar{y}_1)/(\bar{Z}_2 - \bar{Z}_1)$ of $\beta_1$ above was considered in Barton and Casley [1958] as a quick estimate of $\beta_1$; that work only considered the case of $p = 1$. To the best of our knowledge, no previous study has generalized this estimator of Barton and Casley [1958] to the case $p > 1$.

For the general case of $p \geq 1$, the exact minimizer of $\operatorname{E}(\|\hat{\beta} - \beta\|^2 \mid \mathbf{Z})$ may not be easy to obtain. A simpler criterion for choosing the index sets is to maximize the trace $\operatorname{tr}(\mathbf{H}) = \sum_{\ell=1}^{k}\|\bar{Z}_{\ell} - \bar{Z}\|^2$. This problem is equivalent to minimizing $\sum_{\ell=1}^{k}\sum_{i \in \mathcal{J}_{\ell}}\|Z_i - \bar{Z}_{\ell}\|^2$ and is an instance of the balanced $k$-means clustering problem; see Lin et al. [2019] and the references therein. Unfortunately, algorithms for the $k$-means clustering problem are computationally intensive. In fact, for the vanilla $k$-means algorithm, each iteration takes $O(Npk)$ time, which even exceeds the computing time of the least square estimator based on the full data. To achieve a balance between statistical accuracy and computing time, we deal with each variable in turn. We take $k = 2p$ and, for $j = 1, \dots, p$, we determine two index sets, namely $\mathcal{L}_{r,j}$ and $\mathcal{R}_{r,j}$, based on the $j$th variable. Hence the set $\{1, \dots, N\}$ is partitioned into $2p$ index sets $\mathcal{L}_{r,1}, \dots, \mathcal{L}_{r,p}$ and $\mathcal{R}_{r,1}, \dots, \mathcal{R}_{r,p}$, each containing $r = N/(2p)$ indices.
The choice of these index sets is based on the following lower bound on $\mathrm{tr}(\mathbf{H})$:

$$
\begin{aligned} \operatorname{tr}(\mathbf{H}) &= \sum_{j=1}^{p} \sum_{\ell=1}^{p} \left\{ \left( \frac{1}{r} \sum_{i \in \mathcal{L}_{r,\ell}} z_{i,j} - \frac{1}{N} \sum_{i=1}^{N} z_{i,j} \right)^{2} + \left( \frac{1}{r} \sum_{i \in \mathcal{R}_{r,\ell}} z_{i,j} - \frac{1}{N} \sum_{i=1}^{N} z_{i,j} \right)^{2} \right\} \\ &\geq \sum_{j=1}^{p} \Bigg[ \sum_{\ell=1}^{j-1} \left\{ \left( \frac{1}{r} \sum_{i \in \mathcal{L}_{r,\ell}} z_{i,j} - \frac{1}{N} \sum_{i=1}^{N} z_{i,j} \right)^{2} + \left( \frac{1}{r} \sum_{i \in \mathcal{R}_{r,\ell}} z_{i,j} - \frac{1}{N} \sum_{i=1}^{N} z_{i,j} \right)^{2} \right\} \\ &\qquad + \left\{ \left( \max\left( \tilde{z}_{j} - \frac{1}{r} \sum_{i \in \mathcal{L}_{r,j}} z_{i,j},\ 0 \right) \right)^{2} + \left( \max\left( \frac{1}{r} \sum_{i \in \mathcal{R}_{r,j}} z_{i,j} - \tilde{z}_{j},\ 0 \right) \right)^{2} \right\} \Bigg], \end{aligned}
$$

where $\tilde{z}_j = (2r(p - j + 1))^{-1}\sum_{i \notin \bigcup_{\ell=1}^{j-1}(\mathcal{L}_{r,\ell} \cup \mathcal{R}_{r,\ell})} z_{i,j}$. For $j = 1, \ldots, p$, we choose $\mathcal{L}_{r,j}$ and $\mathcal{R}_{r,j}$ to maximize the $j$th term of the above lower bound. Specifically, the first term of the above lower bound is $\{\max(\tilde{z}_1 - \sum_{i \in \mathcal{L}_{r,1}} z_{i,1}/r, 0)\}^2 + \{\max(\sum_{i \in \mathcal{R}_{r,1}} z_{i,1}/r - \tilde{z}_1, 0)\}^2$, which takes its maximum when $\mathcal{L}_{r,1} = \{i \in \{1, \dots, N\} : z_{i,1} \leq \gamma_{1,1}\}$ and $\mathcal{R}_{r,1} = \{i \in \{1, \dots, N\} : z_{i,1} \geq \gamma_{2,1}\}$, where $\gamma_{1,1} = z_{(r),1}$ and $\gamma_{2,1} = z_{(N-r+1),1}$.
After obtaining the index sets $\mathcal{L}_{r,1}, \ldots, \mathcal{L}_{r,j-1}$ and $\mathcal{R}_{r,1}, \ldots, \mathcal{R}_{r,j-1}$, we choose $\mathcal{L}_{r,j}$ and $\mathcal{R}_{r,j}$ to maximize the $j$th term of the above lower bound, which is equivalent to maximizing $\{\max(\tilde{z}_j - \sum_{i \in \mathcal{L}_{r,j}} z_{i,j}/r, 0)\}^2 + \{\max(\sum_{i \in \mathcal{R}_{r,j}} z_{i,j}/r - \tilde{z}_j, 0)\}^2$. Hence we choose $\mathcal{L}_{r,j}$ and $\mathcal{R}_{r,j}$ to be the indices of the remaining observations whose $j$th variable is no larger than $\gamma_{1,j}$ and no less than $\gamma_{2,j}$, respectively, where $\gamma_{1,j}$ and $\gamma_{2,j}$ are the $r$th smallest and the $r$th largest elements of $\{z_{i,j} : i \in \{1, \dots, N\} \setminus (\bigcup_{\ell=1}^{j-1}(\mathcal{L}_{r,\ell} \cup \mathcal{R}_{r,\ell}))\}$, respectively. We average the observations within the groups $\mathcal{L}_{r,1}, \ldots, \mathcal{L}_{r,p}$ and $\mathcal{R}_{r,1}, \ldots, \mathcal{R}_{r,p}$. Finally, we use the least square estimator based on the $2p$ averaged observations to estimate $\beta$. The proposed estimation procedure is summarized in Algorithm 1.
Algorithm 1: Data averaging algorithm

Input: Observations $\{Z_i, y_i\}_{i=1}^N$, covariate dimension $p$
Output: Estimator of $\beta$

$r = \frac{N}{2p}$ is assumed to be an integer
for $j \in \{1, \dots, p\}$ do
- $\gamma_{1,j} \gets$ the $r$th smallest element of $\{z_{i,j} : i \in \{1, \dots, N\} \setminus \left( \bigcup_{\ell=1}^{j-1} (\mathcal{L}_{r,\ell} \cup \mathcal{R}_{r,\ell}) \right)\}$
- $\gamma_{2,j} \gets$ the $r$th largest element of $\{z_{i,j} : i \in \{1, \dots, N\} \setminus \left( \bigcup_{\ell=1}^{j-1} (\mathcal{L}_{r,\ell} \cup \mathcal{R}_{r,\ell}) \right)\}$
- $\mathcal{L}_{r,j} \gets \{i \in \{1, \dots, N\} \setminus \left( \bigcup_{\ell=1}^{j-1} (\mathcal{L}_{r,\ell} \cup \mathcal{R}_{r,\ell}) \right) : z_{i,j} \leq \gamma_{1,j}\}$
- $\mathcal{R}_{r,j} \gets \{i \in \{1, \dots, N\} \setminus \left( \bigcup_{\ell=1}^{j-1} (\mathcal{L}_{r,\ell} \cup \mathcal{R}_{r,\ell}) \right) : z_{i,j} \geq \gamma_{2,j}\}$
- $\bar{Z}_j^L \gets r^{-1} \sum_{i \in \mathcal{L}_{r,j}} Z_i$, $\bar{X}_j^L = (1, \bar{Z}_j^{L\top})^{\top}$, $\bar{y}_j^L \gets r^{-1} \sum_{i \in \mathcal{L}_{r,j}} y_i$
- $\bar{Z}_j^R \gets r^{-1} \sum_{i \in \mathcal{R}_{r,j}} Z_i$, $\bar{X}_j^R = (1, \bar{Z}_j^{R\top})^{\top}$, $\bar{y}_j^R \gets r^{-1} \sum_{i \in \mathcal{R}_{r,j}} y_i$

$\hat{\beta}_{\mathrm{A}} \gets \left( \sum_{j=1}^{p} \bar{X}_j^L \bar{X}_j^{L\top} + \sum_{j=1}^{p} \bar{X}_j^R \bar{X}_j^{R\top} \right)^{-1} \left( \sum_{j=1}^{p} \bar{X}_j^L \bar{y}_j^L + \sum_{j=1}^{p} \bar{X}_j^R \bar{y}_j^R \right)$
return $\hat{\beta}_{\mathrm{A}}$

In Algorithm 1, our strategy for selecting the index sets $\mathcal{L}_{r,j}$ and $\mathcal{R}_{r,j}$ is closely related to the IBOSS algorithm of Wang et al. [2019]. In fact, the index sets in Algorithm 1 are exactly the index sets selected by the IBOSS algorithm with subdata size $n := N$. Of course, for the IBOSS algorithm, taking $n = N$ is unreasonable since the sample size is not reduced.
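A compact NumPy sketch of Algorithm 1 may help fix ideas (the function name and the simulated data are our own; `np.argpartition` stands in for a linear-time order-statistic selection):

```python
import numpy as np

def data_averaging_estimator(Z, y):
    """Sketch of Algorithm 1: reduce N observations to 2p group averages."""
    N, p = Z.shape
    r = N // (2 * p)  # assumed to divide N evenly, as in the paper
    remaining = np.arange(N)
    Xbar, ybar = [], []
    for j in range(p):
        col = Z[remaining, j]
        # Partial sort: positions of the r smallest and r largest values.
        idx = np.argpartition(col, (r - 1, len(col) - r))
        L, R = remaining[idx[:r]], remaining[idx[-r:]]
        for group in (L, R):
            Xbar.append(np.concatenate(([1.0], Z[group].mean(axis=0))))
            ybar.append(y[group].mean())
        remaining = remaining[idx[r:-r]]  # drop the two selected groups
    Xbar, ybar = np.array(Xbar), np.array(ybar)  # the 2p averaged observations
    return np.linalg.solve(Xbar.T @ Xbar, Xbar.T @ ybar)

rng = np.random.default_rng(1)
N, p = 20_000, 5
Z = rng.normal(size=(N, p))
beta = np.ones(p + 1)
y = beta[0] + Z @ beta[1:] + rng.normal(size=N)
print(data_averaging_estimator(Z, y))  # should be close to the all-ones vector
```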
In fact, for the IBOSS algorithm, one needs to take $n = O(N/p + p)$ to complete the computation within $O(Np + p^3)$ time. Thus, the selection procedures of the proposed method and the IBOSS algorithm behave differently. Theorem 4 will show that under certain conditions, IBOSS can achieve the optimal convergence rate (4) among all sampling methods. We shall see that the statistical performance of Algorithm 1 is even better than that of the IBOSS algorithm.

Now we analyze the computing time of Algorithm 1. Note that $\gamma_{1,j}$ and $\gamma_{2,j}$ are order statistics of no more than $N$ elements. It is known that the selection of an order statistic among $m$ elements can be completed within $O(m)$ time even in the worst case; see Paterson [1996]. Hence the computation of $\gamma_{1,1}, \ldots, \gamma_{1,p}$ and $\gamma_{2,1}, \ldots, \gamma_{2,p}$ can be completed within $O(Np)$ time in total. It takes only one scan of the full data to compute the averaged observations, which takes $O(Np)$ time. Finally, the computation of $\hat{\beta}_{\mathrm{A}}$ based on the $2p$ averaged observations can be completed within $O(p^3)$ time. In summary, Algorithm 1 can be completed within $O(Np + p^3)$ time and reduces the full data to merely $2p$ observations.

# 3.2 Asymptotic results

Now we investigate the asymptotic behavior of the conditional mean squared error of $\hat{\beta}_{\mathrm{A}}$. In our asymptotic results, we treat $p$ as a function of $N$, and let $N$ tend to infinity. Let $Z = (z_1, \dots, z_p)^{\top}$ be a random vector which is independent of $\mathbf{Z}$ and $\mathbf{y}$ and has the same distribution as $Z_1$. The following theorem gives the exact limit of $\operatorname{E}\{\|\hat{\beta}_{\mathrm{A}} - \beta\|^2 \mid \mathbf{Z}\}$ when $Z$ is a standard normal random vector.

Theorem 3 Suppose that Assumption 1 holds, $r = N/(2p)$ is an integer, $N > 2p^2$, and $Z \sim \mathcal{N}(\mathbf{0}_p, \mathbf{I}_p)$.
Also suppose that as $N \to \infty$, $p \to \infty$ and $p^3 (\log(p))^4 \log(N) / N \to 0$. Then as $N \to \infty$,

$$
\operatorname{E}\left\{\|\hat{\boldsymbol{\beta}}_{\mathrm{A}} - \boldsymbol{\beta}\|^2 \mid \mathbf{Z}\right\} = (1 + o_P(1)) \frac{p^2 \sigma_{\varepsilon}^2}{2 \log(2p) N}.
$$

Theorem 3 implies that when $Z$ is a standard normal random vector, the conditional mean squared error of $\hat{\beta}_{\mathrm{A}}$ has convergence rate $p^2 / (\log(2p)N)$. On the other hand, for sampling methods with $n = c_1 N/p + c_2 p$ for constants $c_1, c_2 > 0$, so that the computing time is comparable, Theorem 2 implies that the optimal convergence rate of the conditional mean squared error is $p^2/N$. In this view, the convergence rate of the proposed estimator is faster than that of sampling methods as $p \to \infty$.

The proposed algorithm is closely related to the IBOSS algorithm, so we would like to derive the asymptotic behavior of the conditional mean squared error of $\hat{\beta}_{\mathrm{I}}$ as well. Theorem 6(i) of Wang et al. [2019] gives an asymptotic expression for the conditional covariance of $\hat{\beta}_{\mathrm{I}}$ under the assumption that $Z$ is normally distributed. It implies that if $n$ and $p$ are fixed, then as $N \to \infty$,

$$
\operatorname{E}\left\{\|\hat{\boldsymbol{\beta}}_{\mathrm{I}} - \boldsymbol{\beta}\|^2 \mid \mathbf{Z}\right\} = (1 + o_P(1)) \left(\frac{p}{2\log(N)} \operatorname{tr}\left(\boldsymbol{\Sigma}^{-1} \operatorname{diag}(\boldsymbol{\Sigma}) \boldsymbol{\Sigma}^{-1}\right) + 1\right) \frac{\sigma_{\varepsilon}^2}{n}. \tag{5}
$$

Now we derive the fine-grained limiting behavior of $\operatorname{E}\left\{\|\hat{\boldsymbol{\beta}}_{\mathrm{I}} - \boldsymbol{\beta}\|^2 \mid \mathbf{Z}\right\}$ for varying $n$ and $p$.
Let $\rho_{i,j} = \sigma_{i,j}/(\sigma_{i,i}\sigma_{j,j})^{1/2}$ denote the correlation coefficient between $z_i$ and $z_j$, $i, j = 1, \dots, p$. Define

$$
\alpha_N = \frac{p}{p + 2\log(N/r)}, \quad \mathbf{W}_N = \alpha_N \boldsymbol{\Sigma} + (1 - \alpha_N) \boldsymbol{\Sigma} \operatorname{diag}(\boldsymbol{\Sigma})^{-1} \boldsymbol{\Sigma}.
$$

We have the following theorem.

Theorem 4 Suppose that Assumption 1 holds, $Z \sim \mathcal{N}(\mu, \Sigma)$, and $r = n/(2p)$ is an integer; that there exist constants $C_1, C_2, C_3 > 0$ such that $C_1 < \lambda_p(\Sigma) < \lambda_1(\Sigma) < C_2$ and $\|\mu\| < C_3$; that there exists a constant $0 < \rho < 1/\sqrt{2}$ such that $\max_{1 \leq i < j \leq p} |\rho_{i,j}| \leq \rho$; and that there exist $\epsilon_1, \epsilon_2 \in (0,1)$ such that for sufficiently large $N$, $4r \leq N^{\epsilon_1}$, $p \leq N^{\epsilon_2}$ and

$$
\left(1 + 2\epsilon_2\right)^{1/2} |\rho| + \left\{\left(\epsilon_1 + 2\epsilon_2\right)\left(1 - \rho^2\right)\right\}^{1/2} < \left(1 - \epsilon_1\right)^{1/2}. \tag{6}
$$

Furthermore, suppose that as $N \to \infty$,

$$
\frac{r}{N} \to 0 \quad \text{and} \quad \frac{p^2 (\log(n))^4}{\max(n, r\log(N/r))} \to 0. \tag{7}
$$

Then as $N \to \infty$,

$$
\operatorname{E}\left\{\|\hat{\boldsymbol{\beta}}_{\mathrm{I}} - \boldsymbol{\beta}\|^2 \mid \mathbf{Z}\right\} = (1 + o_P(1)) \left(\alpha_N \operatorname{tr}(\mathbf{W}_N^{-1}) + \alpha_N \boldsymbol{\mu}^{\top} \mathbf{W}_N^{-1} \boldsymbol{\mu} + 1\right) \frac{\sigma_{\varepsilon}^2}{n}.
$$

Remark 1 If $n$ and $p$ are fixed, then condition (6) is satisfied for sufficiently large $N$. On the other hand, if $\rho_{i,j} = 0$ for all $1 \leq i < j \leq p$, that is, the variables are independent, then condition (6) becomes $\epsilon_1 + \epsilon_2 < 1/2$.
In this case, condition (6) holds for $n = O(N^{\epsilon})$ for some $0 < \epsilon < 1/2$.

Remark 2 Condition (7) is satisfied if $r/N \to 0$ and $p = O(n^{1/2-\epsilon})$ for some $\epsilon > 0$. Also, condition (7) can be satisfied for arbitrary $n$, $p$ provided $N$ is sufficiently large.

Compared with Theorem 6(i) in Wang et al. [2019], our Theorem 4 gives a more comprehensive characterization of the asymptotics of $\operatorname{Var}(\hat{\beta}_{\mathrm{I}} \mid \mathbf{Z})$. If $\mu = \mathbf{0}_p$ and $\alpha_N \to 0$, then Theorem 4 implies that

$$
\operatorname{E}\left\{\|\hat{\boldsymbol{\beta}}_{\mathrm{I}} - \boldsymbol{\beta}\|^2 \mid \mathbf{Z}\right\} = (1 + o_P(1)) \left(\frac{p}{2\log(N/r)} \operatorname{tr}(\boldsymbol{\Sigma}^{-1} \operatorname{diag}(\boldsymbol{\Sigma}) \boldsymbol{\Sigma}^{-1}) + 1\right) \frac{\sigma_{\varepsilon}^2}{n}. \tag{8}
$$

If we further assume that $\log(r)/\log(N) \to 0$, then the expressions (5) and (8) are equivalent. However, Theorem 4 implies that the expression (8) is not valid if $\alpha_N$ does not converge to 0.

Now we consider the special case that $Z \sim \mathcal{N}(\mathbf{0}_p, \mathbf{I}_p)$. In this case, Theorem 4 implies that if $r/N \to 0$, $n = O(N^{\epsilon})$ for some $0 < \epsilon < 1/2$ and $p = O(n^{1/2-\epsilon^*})$ for some $0 < \epsilon^* < 1/2$, then $\operatorname{E}\{\|\hat{\beta}_{\mathrm{I}} - \beta\|^2 \mid \mathbf{Z}\}$ has convergence rate $(\alpha_N p + 1)/n$. We have

$$
\alpha_N = \frac{p}{p + 2\log(2p) + 2\log(N/n)} = O\left(\frac{p}{p + \log(N/n)}\right).
$$

Hence if $\log(N/n) = O(p^2)$, then $\operatorname{E}\left\{\|\hat{\beta}_{\mathrm{I}} - \beta\|^2 \mid \mathbf{Z}\right\} = O_P(p^2/(n(p + \log(N/n))))$, which matches (4).
In this case, $\hat{\beta}_{\mathrm{I}}$ achieves the optimal rate of sampling methods given by Theorem 2, and hence the optimal rate given by Theorem 2 is tight.

# 4 Simulation results

In this section, we conduct simulations to examine the performance of the proposed estimator $\hat{\beta}_{\mathrm{A}}$. For comparison, the simulations also include the vanilla data averaging algorithm (abbreviated as VDA), where the full data are uniformly divided into $k := 2p$ groups at random and the observations are averaged within groups; the least square estimator based on the uniform sampling method (abbreviated as UNI); the leverage score sampling estimator (abbreviated as LEV) as described in Ma et al. [2015]; the sketched least square estimator based on the subsampled randomized Hadamard transform (abbreviated as SRHT); the estimator $\hat{\beta}_{\mathrm{I}}$ produced by the IBOSS algorithm; and the least square estimator based on the full data (abbreviated as FULL). The methods VDA, UNI, LEV, SRHT and IBOSS are instances of the sketching framework (2). For these methods, we take $n = N/p$. For SRHT, if $N$ is a power of 2, then the sketching matrix is $\mathbf{O} = (\mathbf{PHD})^{\top}$, where $\mathbf{P}$ is an $n \times N$ matrix whose rows are uniformly sampled from the standard basis of $\mathbb{R}^N$, $\mathbf{H}$ is an $N \times N$ Walsh-Hadamard matrix (see, e.g., Dobriban and Liu [2019]), and $\mathbf{D}$ is an $N \times N$ diagonal matrix whose diagonal elements are i.i.d. Rademacher random variables; if $N$ is not a power of 2, we pad the original data with zeros to make $N$ a power of 2. The computation of the proposed method and of the IBOSS estimator relies on certain order statistics. For these two methods, the algorithm SELECT of Floyd and Rivest [1975] is used to select the order statistics. The algorithms are implemented in C++. For fairness, the estimators of $\beta$ are solved by Gaussian elimination for all algorithms.
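For intuition, the SRHT construction described above can be sketched in NumPy (the toy dimensions are chosen by us; a fast Walsh-Hadamard transform would avoid materializing $\mathbf{H}$, and the $\sqrt{N/n}$ rescaling is the usual normalization):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, n = 256, 4, 32  # N is a power of 2, as the SRHT requires

# Walsh-Hadamard matrix built from Kronecker products.
H = np.array([[1.0]])
while H.shape[0] < N:
    H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
H /= np.sqrt(N)  # orthonormal scaling

A = rng.normal(size=(N, d))                  # data matrix to be sketched
signs = rng.choice([-1.0, 1.0], size=N)      # D: i.i.d. Rademacher diagonal
rows = rng.choice(N, size=n, replace=False)  # P: uniform row sampling
sketch = np.sqrt(N / n) * (H @ (signs[:, None] * A))[rows]  # PHD applied to A
print(sketch.shape)  # (32, 4)
```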
The simulations are performed on a 3.30 GHz CPU.

The statistical performance of an estimator $\hat{\beta}$ of $\beta$ is evaluated by the empirical mean squared error over 100 independent replications, defined as $100^{-1} \sum_{i=1}^{100} \|\hat{\beta}^{(i)} - \beta\|^2$, where $\hat{\beta}^{(i)}$ is the estimator in the $i$th replication. In all simulations, the ground truth $\beta$ is a vector with all elements equal to 1. We consider two distributions of $\varepsilon_1$: the normal distribution $\varepsilon_1 \sim \mathcal{N}(0,1)$ and the normalized chi-squared distribution $\varepsilon_1 \sim (\chi^2(1) - 1)/\sqrt{2}$. We consider the following distributions of $Z$.

- Case 1: $\{z_j\}_{j=1}^p$ are i.i.d. with uniform distribution $\mathrm{Uniform}(0,1)$.
- Case 2: $\{z_j\}_{j=1}^p$ are i.i.d. with normal distribution $\mathcal{N}(0,1)$.
- Case 3: $\{z_j\}_{j=1}^p$ are i.i.d. with lognormal distribution, that is, $\log(z_j) \sim \mathcal{N}(0,1)$.
- Case 4: $\{z_j\}_{j=1}^p$ are i.i.d. with Student's $t$ distribution with 3 degrees of freedom ($t_3$).
- Case 5: $Z \sim \mathcal{N}(\mathbf{0}_p, \Sigma)$, where the diagonal elements of $\boldsymbol{\Sigma}$ all equal 1 and the off-diagonal elements all equal 0.5.
- Case 6: $Z$ is distributed as a mixture of $\mathcal{N}(\mu, \Sigma)$ and $\mathcal{N}(-\mu, \Sigma)$, where all entries of $\mu$ equal 1, $\Sigma$ is defined as in Case 5, and the mixing proportions of the two component distributions are both 0.5.

Table 2 and Tables A.1-A.3 in the Supplementary Material list the empirical mean squared errors of the various estimators, where the proposed method is referred to as NEW. Among the implemented methods, VDA, UNI and SRHT are data-oblivious sketching methods, while NEW, LEV and IBOSS are data-aware sketching methods. It can be seen that VDA has the worst performance.
This implies that the selection procedure is necessary for the proposed method. The simulation results show that UNI, SRHT and LEV have similar statistical performance. It can be seen that the proposed estimator achieves a substantial improvement over the competing sketching methods. In particular, the proposed method shows a clear advantage when $p$ is large.

We also evaluate the computing time of the various algorithms. Table 3 lists the computing time for Case 1 with $\varepsilon_1 \sim \mathcal{N}(0,1)$; results for the other settings are similar. It can be seen that the proposed method is slower than VDA and UNI. Compared with VDA and UNI, however, the proposed method has significantly better statistical performance and achieves better data reduction. Compared with IBOSS, the proposed method has comparable computing time but much better statistical performance. In summary, the new estimator performs well in terms of both speed and statistical accuracy.

Table 2: Empirical mean squared errors (multiplied by $10^3$) of various algorithms with $N = 8 \times 10^4$ and $\varepsilon_1 \sim \mathcal{N}(0,1)$.

|        | $p$ | NEW     | VDA     | UNI     | SRHT    | LEV     | IBOSS   | FULL     |
|--------|-----|---------|---------|---------|---------|---------|---------|----------|
| Case 1 | 50  | 182.438 | 16134   | 492.376 | 497.665 | 488.99  | 507.642 | 9.38737  |
|        | 100 | 651.596 | 14751.4 | 2149.19 | 2163    | 2114.55 | 2186.94 | 19.2125  |
|        | 200 | 2469.56 | 15014   | 15398.1 | 15788.1 | 14584.2 | 15159.2 | 38.0159  |
| Case 2 | 50  | 6.95906 | 1039.08 | 33.6247 | 33.5017 | 32.9702 | 27.03   | 0.624944 |
|        | 100 | 21.9735 | 1039.08 | 144.342 | 144.579 | 141.629 | 125.218 | 1.28266  |
|        | 200 | 69.4913 | 1034.62 | 1033.37 | 1021.28 | 1035.9  | 926.762 | 2.54829  |
| Case 3 | 50  | 5.99607 | 1008.38 | 28.2232 | 32.817  | 34.4454 | 13.635  | 0.465752 |
|        | 100 | 11.6466 | 998.573 | 105.39  | 126.361 | 168.775 | 40.3553 | 1.00904  |
|        | 200 | 23.0075 | 866.214 | 920.989 | 1003.37 | 982.843 | 235.444 | 1.95179  |
| Case 4 | 50  | 2.00812 | 345.559 | 12.0477 | 11.2138 | 11.6838 | 2.36205 | 0.213812 |
|        | 100 | 4.66736 | 346.259 | 53.6057 | 49.8285 | 50.3076 | 7.96878 | 0.439798 |
|        | 200 | 10.658  | 345.624 | 372.099 | 346.143 | 344.283 | 37.7542 | 0.856995 |
| Case 5 | 50  | 19.8421 | 2127.66 | 65.1897 | 63.4604 | 61.4904 | 59.3276 | 1.21801  |
|        | 100 | 62.2782 | 1922.42 | 284.473 | 281.714 | 275.586 | 264.896 | 2.4893   |
|        | 200 | 192.085 | 2015.87 | 2022.8  | 2031.46 | 1979.68 | 1973.69 | 5.00354  |
| Case 6 | 50  | 19.5601 | 1987.22 | 62.558  | 65.5263 | 62.6909 | 55.7036 | 1.22561  |
|        | 100 | 60.9593 | 2022.88 | 282.951 | 286.522 | 280.046 | 263.008 | 2.47379  |
|        | 200 | 194.566 | 2007.68 | 1988.91 | 2022.94 | 2000.11 | 1938.25 | 5.09764  |

Table 3: Computing time (in seconds) of various algorithms.
| $N$             | $p$ | NEW     | VDA     | UNI     | SRHT    | LEV     | IBOSS   | FULL    |
|-----------------|-----|---------|---------|---------|---------|---------|---------|---------|
| $8 \times 10^4$   | 50  | 0.05127 | 0.00302 | 0.00204 | 0.06567 | 0.12722 | 0.03279 | 0.05261 |
| $8 \times 10^4$   | 100 | 0.10580 | 0.00635 | 0.00332 | 0.15607 | 0.52488 | 0.07127 | 0.22732 |
| $8 \times 10^4$   | 200 | 0.21261 | 0.01618 | 0.00653 | 0.32740 | 2.14122 | 0.16388 | 0.90082 |
| $6.4 \times 10^5$ | 100 | 1.32952 | 0.05715 | 0.03292 | 1.75414 | 4.51072 | 1.16810 | 1.95327 |
| $6.4 \times 10^5$ | 200 | 2.47827 | 0.10503 | 0.05132 | 3.31281 | 17.8339 | 2.31180 | 7.56397 |
| $6.4 \times 10^5$ | 400 | 4.88818 | 0.25092 | 0.12785 | 6.86445 | 85.0793 | 4.71242 | 36.3755 |
+ +# 5 Conclusion + +In this paper, we presented a new sketching method which is based on data averaging. The computation of the proposed method can be completed within $O(Np + p^3)$ time. We proved that the proposed method can achieve a faster convergence rate than sampling methods. + +In the proposed algorithm, we need to select certain order statistics of variables. This selection procedure is adapted from the IBOSS algorithm and thus allows us to compare the performance of these two methods in a fair manner. In theory, the selection of order statistics can be completed within $O(Np)$ time. However, this procedure may cost a lot of time in practice. It is interesting to investigate other selection procedures for data averaging. Also, this work focuses on the data averaging method for the linear model. It is interesting to apply the data averaging method to regularized linear models and generalized linear models. We leave these topics for possible future research. + +# 6 Supplementary Material + +The Supplementary Material includes additional numerical results, all proofs and codes. + +# Acknowledgments and Disclosure of Funding + +Wangli Xu is the corresponding author of this paper. The majority of this work was done when the first author was a postdoc in Renmin University of China. At the time the paper was accepted, the first author was with Inspur (Beijing) Electronic Information Industry Co., Ltd. This work was supported by Beijing Natural Science Foundation (No Z200001), National Natural Science Foundation of China (No 11971478), and by Public Health & Disease Control and Prevention, Major Innovation & Planning Interdisciplinary Platform for the "Double-First Class" Initiative, Renmin University of China (No. 2023PDPC), and the MOE Project of Key Research Institute of Humanities and Social Sciences (No. 22JJD910001). + +# References + +D. C. Ahfock, W. J. Astle, and S. Richardson. Statistical properties of sketching algorithms. 
Biometrika, 108(2):283-297, July 2021.
D. E. Barton and D. J. Casley. A quick estimate of the regression coefficient. Biometrika, 45(3-4):431-435, 1958.
Q. Cheng, H. Wang, and M. Yang. Information-based optimal subdata selection for big data logistic regression. Journal of Statistical Planning and Inference, 209:112-122, 2020.
E. Dobriban and S. Liu. Asymptotics for sketching in least squares regression. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
P. Drineas and M. W. Mahoney. RandNLA. Communications of the ACM, 59(6):80-90, May 2016.
R. W. Floyd and R. L. Rivest. Algorithm 489: The algorithm SELECT for finding the $i$th smallest of $n$ elements [M1]. Communications of the ACM, 18(3):173, March 1975.
W. Lin, Z. He, and M. Xiao. Balanced clustering: A uniform model and fast algorithm. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 2987-2993. International Joint Conferences on Artificial Intelligence Organization, July 2019.
P. Ma, M. W. Mahoney, and B. Yu. A statistical perspective on algorithmic leveraging. Journal of Machine Learning Research, 16(27):861-911, 2015.
P. Ma, X. Zhang, X. Xing, J. Ma, and M. Mahoney. Asymptotic analysis of sampling estimators for randomized numerical linear algebra algorithms. In Proceedings of the Twenty-Third International Conference on Artificial Intelligence and Statistics, pages 1026-1035, Online, 2020.
M. W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends® in Machine Learning, 3(2):123-224, 2010.
M. Paterson. Progress in selection. In Algorithm Theory — SWAT'96, pages 368-379, 1996.
M. Pilanci and M. J. Wainwright. Iterative Hessian sketch: fast and accurate solution approximation for constrained least-squares. Journal of Machine Learning Research, 17:1-38, 2016.
G. Raskutti and M. W.
Mahoney. A statistical perspective on randomized sketching for ordinary least-squares. Journal of Machine Learning Research, 17:1-31, 2016. +H. Wang. Divide-and-conquer information-based optimal subdata selection algorithm. Journal of Statistical Theory and Practice, 13(3):1-19, 2019. +H. Wang, M. Yang, and J. Stufken. Information-based optimal subdata selection for big data linear regression. Journal of the American Statistical Association, 114(525):393-405, 2019. +J. Wang, J. D. Lee, M. Mahdavi, M. Kolar, and N. Srebro. Sketching meets random projection in the dual: A provable recovery algorithm for big and high-dimensional data. Electronic Journal of Statistics, 11(2):4896-4944, 2017. +D. P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1-2):iv+157, 2014. \ No newline at end of file diff --git a/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/images.zip b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..34db79fedf82c19ccbd5aeeff4db0238b1f926bf --- /dev/null +++ b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5cc3a18377ee8a80b83dc5596c620aaac25d83c08d5a5ede28d17364154e391f +size 395898 diff --git a/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/layout.json b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..166cf5c29b331ced5bad6980629faa248ed4727b --- /dev/null +++ b/afastandaccurateestimatorforlargescalelinearmodelviadataaveraging/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:825f2abe825b8ca7bea7f298d3a0bec7e0eb3bc8cc8f9a23c3f49503473e1efc +size 659660 diff --git 
a/afiniteparticleconvergencerateforsteinvariationalgradientdescent/fb662cd9-375f-4983-b4e7-0ee93d1d9ed4_content_list.json b/afiniteparticleconvergencerateforsteinvariationalgradientdescent/fb662cd9-375f-4983-b4e7-0ee93d1d9ed4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..110f3b441331fa7699adfd751a69dc563e092db1 --- /dev/null +++ b/afiniteparticleconvergencerateforsteinvariationalgradientdescent/fb662cd9-375f-4983-b4e7-0ee93d1d9ed4_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aae0e061ef369fb97e2a441bdc8d2720cde300358e36a31f696625aa013c1e2a +size 116495 diff --git a/afiniteparticleconvergencerateforsteinvariationalgradientdescent/fb662cd9-375f-4983-b4e7-0ee93d1d9ed4_model.json b/afiniteparticleconvergencerateforsteinvariationalgradientdescent/fb662cd9-375f-4983-b4e7-0ee93d1d9ed4_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7d0935a8631fd69224034667eefce6ebdb0529dd --- /dev/null +++ b/afiniteparticleconvergencerateforsteinvariationalgradientdescent/fb662cd9-375f-4983-b4e7-0ee93d1d9ed4_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e401b7149fbdae77ca79b78875a0ef4249b5e5bc91c9a0bb16174c374753bcf2 +size 133140 diff --git a/afiniteparticleconvergencerateforsteinvariationalgradientdescent/fb662cd9-375f-4983-b4e7-0ee93d1d9ed4_origin.pdf b/afiniteparticleconvergencerateforsteinvariationalgradientdescent/fb662cd9-375f-4983-b4e7-0ee93d1d9ed4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d0e2b03adfe5db75b894f4c217aa36e2ad32065e --- /dev/null +++ b/afiniteparticleconvergencerateforsteinvariationalgradientdescent/fb662cd9-375f-4983-b4e7-0ee93d1d9ed4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bbbf40442ef4bdc72d2710d8127744f76077381dc19cb9f9d2497886dc9502bf +size 439292 diff --git a/afiniteparticleconvergencerateforsteinvariationalgradientdescent/full.md 
b/afiniteparticleconvergencerateforsteinvariationalgradientdescent/full.md new file mode 100644 index 0000000000000000000000000000000000000000..47b19f76fce750a613924d143f18264284d3c7e3 --- /dev/null +++ b/afiniteparticleconvergencerateforsteinvariationalgradientdescent/full.md @@ -0,0 +1,658 @@ +# A Finite-Particle Convergence Rate for Stein Variational Gradient Descent + +Jiaxin Shi* + +Stanford University + +Stanford, CA 94305 + +jiaxins@stanford.edu + +Lester Mackey + +Microsoft Research New England + +Cambridge, MA 02474 + +lmackey@microsoft.com + +# Abstract + +We provide the first finite-particle convergence rate for Stein variational gradient descent (SVGD), a popular algorithm for approximating a probability distribution with a collection of particles. Specifically, whenever the target distribution is sub-Gaussian with a Lipschitz score, SVGD with $n$ particles and an appropriate step size sequence drives the kernel Stein discrepancy to zero at an order $1 / \sqrt{\log\log n}$ rate. We suspect that the dependence on $n$ can be improved, and we hope that our explicit, non-asymptotic proof strategy will serve as a template for future refinements. + +# 1 Introduction + +Stein variational gradient descent [SVGD, 18] is an algorithm for approximating a target probability distribution $P$ on $\mathbb{R}^d$ with a collection of $n$ particles. Given an initial particle approximation $\mu_0^n = \frac{1}{n}\sum_{i=1}^{n}\delta_{x_i}$ with locations $x_i \in \mathbb{R}^d$ , SVGD (Algorithm 1) iteratively evolves the particle locations to provide a more faithful approximation of the target $P$ by performing optimization in the space of probability measures. SVGD has demonstrated encouraging results for a wide variety of inferential tasks, including approximate inference [18, 28, 27], generative modeling [26, 13], and reinforcement learning [11, 21]. + +Despite the popularity of SVGD, relatively little is known about its approximation quality. 
A first analysis by Liu [17, Thm. 3.3] showed that continuous SVGD—that is, Algorithm 2 initialized with a continuous distribution $\mu_0^\infty$ in place of the discrete particle approximation $\mu_0^n$—converges to $P$ in kernel Stein discrepancy [KSD, 4, 19, 9]. KSD convergence is also known to imply weak convergence [9, 3, 12, 1] and Wasserstein convergence [14] under various conditions on the target $P$ and the SVGD kernel $k$. Follow-up work by Korba et al. [15], Salim et al. [22], and Sun et al. [24] sharpened the result of Liu with path-independent constants, weaker smoothness conditions, and explicit rates of convergence. In addition, Duncan et al. [6] analyzed the continuous-time limit of continuous SVGD to provide conditions for exponential convergence. However, each of these analyses applies only to continuous SVGD and not to the finite-particle algorithm used in practice.

To bridge this gap, Liu [17, Thm. 3.2] showed that $n$-particle SVGD converges to continuous SVGD in bounded-Lipschitz distance, but only under boundedness assumptions violated by most applications of SVGD. To provide a more broadly applicable proof of convergence, Gorham et al. [10, Thm. 7] showed that $n$-particle SVGD converges to continuous SVGD in 1-Wasserstein distance under assumptions commonly satisfied in SVGD applications. However, both convergence results are asymptotic, providing neither explicit error bounds nor rates of convergence. Korba et al. [15, Prop.
7] explicitly bounded the expected squared Wasserstein distance between $n$-particle and continuous SVGD but only under the assumption of bounded $\nabla \log p$, an assumption that rules out all strongly

Algorithm 1 $n$-particle Stein Variational Gradient Descent [18]: $\mathrm{SVGD}(\mu_0^n, r)$

Input: Target $P$ with density $p$, kernel $k$, step sizes $(\epsilon_s)_{s \geq 0}$, particle approximation $\mu_0^n = \frac{1}{n} \sum_{i=1}^{n} \delta_{x_i}$, rounds $r$

for $s = 0, \dots, r-1$ do

$$
x_i \leftarrow x_i + \epsilon_s \frac{1}{n} \sum_{j=1}^{n} \left[ k(x_j, x_i) \nabla \log p(x_j) + \nabla_x k(x_j, x_i) \right] \quad \text{for } i = 1, \dots, n.
$$

Output: Updated approximation $\mu_r^n = \frac{1}{n}\sum_{i=1}^n \delta_{x_i}$ of the target $P$

# Algorithm 2 Generic Stein Variational Gradient Descent [18]: $\operatorname{SVGD}(\mu_0, r)$

Input: Target $P$ with density $p$, kernel $k$, step sizes $(\epsilon_s)_{s \geq 0}$, approximating measure $\mu_0$, rounds $r$

for $s = 0, \dots, r-1$ do

Let $\mu_{s+1}$ be the distribution of $X^s + \epsilon_s \int \left[ k(x, X^s) \nabla \log p(x) + \nabla_x k(x, X^s) \right] d\mu_s(x)$ for $X^s \sim \mu_s$

Output: Updated approximation $\mu_r$ of the target $P$

log concave or dissipative distributions and all distributions for which the KSD is currently known to control weak convergence [9, 3, 12, 1]. In addition, Korba et al. [15] do not provide a unified bound for the convergence of $n$-particle SVGD to $P$ and ultimately conclude that "the convergence rate for SVGD using $[\mu_0^n]$ remains an open problem." The same open problem was underscored in the later work of Salim et al. [22], who write "an important and difficult open problem in the analysis of SVGD is to characterize its complexity with a finite number of particles."
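A minimal NumPy sketch of one round of the $n$-particle update in Algorithm 1 (the RBF kernel, bandwidth, step size, and Gaussian target are our illustrative choices, not prescribed by the paper):

```python
import numpy as np

def svgd_step(x, score, eps, h=1.0):
    """One n-particle SVGD round with RBF kernel k(a, b) = exp(-|a-b|^2 / (2 h^2))."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]              # diff[i, j] = x_i - x_j
    K = np.exp(-(diff ** 2).sum(-1) / (2.0 * h * h))  # K[i, j] = k(x_j, x_i)
    drift = K @ score(x) / n                          # (1/n) sum_j k(x_j, x_i) s_p(x_j)
    # (1/n) sum_j grad_{x_j} k(x_j, x_i) for the RBF kernel:
    repulsion = (K[..., None] * diff).sum(axis=1) / (h * h * n)
    return x + eps * (drift + repulsion)

# Target P = N(0, I_2), whose score is s_p(x) = -x.
rng = np.random.default_rng(3)
x = rng.normal(loc=3.0, size=(50, 2))  # particles start far from the target
for _ in range(500):
    x = svgd_step(x, lambda z: -z, eps=0.3)
print(x.mean(axis=0))  # should drift toward the target mean (0, 0)
```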
In this work, we derive the first unified convergence bound for finite-particle SVGD to its target. To achieve this, we first bound the 1-Wasserstein discretization error between finite-particle and continuous SVGD under assumptions commonly satisfied in SVGD applications and compatible with KSD weak convergence control (see Theorem 1). We next bound KSD in terms of 1-Wasserstein distance and SVGD moment growth to explicitly control KSD discretization error in Theorem 2. Finally, Theorem 3 combines our results with the established KSD analysis of continuous SVGD to arrive at an explicit KSD error bound for $n$ -particle SVGD.

# 2 Notation and Assumptions

Throughout, we fix a nonnegative step size sequence $(\epsilon_s)_{s\geq 0}$ , a target distribution $P$ in the set $\mathcal{P}_1$ of probability measures on $\mathbb{R}^d$ with integrable first moments, and a reproducing kernel $k$ — a symmetric positive definite function mapping $\mathbb{R}^d\times \mathbb{R}^d$ to $\mathbb{R}$ — with reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ and product RKHS $\mathcal{H}^d\triangleq \bigotimes_{i = 1}^d\mathcal{H}$ [2]. We will use the terms "kernel" and "reproducing kernel" interchangeably. For all $\mu ,\nu \in \mathcal{P}_1$ , we let $\Gamma (\mu ,\nu)$ be the set of all couplings of $\mu$ and $\nu$ , i.e., joint probability distributions $\gamma$ over $\mathbb{R}^d\times \mathbb{R}^d$ with $\mu$ and $\nu$ as the marginal distributions of the first and second variable respectively. We further let $\mu \otimes \nu$ denote the independent coupling, the distribution of $(X,Z)$ when $X$ and $Z$ are drawn independently from $\mu$ and $\nu$ respectively.
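To make the coupling notation concrete: for two equal-size empirical measures, every coupling in $\Gamma(\mu,\nu)$ with uniform marginals corresponds to a doubly stochastic matrix, and the transport cost $\mathbb{E}[\|Z-X\|]$ is minimized at a permutation matching; on $\mathbb{R}$ the minimizer is the monotone (sorted) matching, which yields the 1-Wasserstein distance. A small sketch (the function names are our own):

```python
import numpy as np
from itertools import permutations

def w1_empirical_1d(xs, zs):
    """W1 between (1/n) sum_i delta_{x_i} and (1/n) sum_i delta_{z_i} on the line.

    On R, the optimal coupling matches order statistics, so W1 is the mean
    absolute difference of the sorted samples.
    """
    return float(np.mean(np.abs(np.sort(xs) - np.sort(zs))))

def w1_brute_force(xs, zs):
    """Exhaustive search over permutation couplings (feasible only for tiny n)."""
    zs = np.asarray(zs)
    return min(float(np.mean(np.abs(np.asarray(xs) - zs[list(p)])))
               for p in permutations(range(len(xs))))
```

For five atoms each, the sorted matching and the brute-force search over all 120 permutation couplings agree.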
With this notation in place, we define the 1-Wasserstein distance between $\mu ,\nu \in \mathcal{P}_1$ as $W_{1}(\mu ,\nu)\triangleq \inf_{\gamma \in \Gamma (\mu ,\nu)}\mathbb{E}_{(X,Z)\sim \gamma}[\| Z - X\| _2]$ and introduce the shorthand $m_{\mu ,x^{\star}}\triangleq \mathbb{E}_{\mu}[\| \cdot -x^{\star}\| _2]$ for each $x^{\star}\in \mathbb{R}^{d}$ , $m_{\mu ,P}\triangleq \mathbb{E}_{(X,Z)\sim \mu \otimes P}[\| X - Z\| _2]$ , and $M_{\mu ,P}\triangleq \mathbb{E}_{(X,Z)\sim \mu \otimes P}[\| X - Z\| _2^2 ]$ . We further define the Kullback-Leibler (KL) divergence as $\mathrm{KL}(\mu \| \nu)\triangleq \mathbb{E}_{\mu}[\log (\frac{d\mu}{d\nu})]$ when $\mu$ is absolutely continuous with respect to $\nu$ (denoted by $\mu \ll \nu$ ) and as $\infty$ otherwise.

Our analysis will make use of the following assumptions on the SVGD kernel and target distribution.

Assumption 1 (Lipschitz, mean-zero score function). The target distribution $P \in \mathcal{P}_1$ has a differentiable density $p$ with an $L$ -Lipschitz score function $s_p \triangleq \nabla \log p$ , i.e., $\| s_p(x) - s_p(y) \|_2 \leq L \| x - y \|_2$ for all $x, y \in \mathbb{R}^d$ . Moreover, $\mathbb{E}_P[s_p] = 0$ and $s_p(x^\star) = 0$ for some $x^\star \in \mathbb{R}^d$ .

Assumption 2 (Bounded kernel derivatives). The kernel $k$ is twice differentiable and $\sup_{x,y\in \mathbb{R}^d}\max (|k(x,y)|,\| \nabla_xk(x,y)\| _2,\| \nabla_y\nabla_xk(x,y)\|_{\mathrm{op}},\left\| \nabla_x^2 k(x,y)\right\|_{\mathrm{op}})\leq \kappa_1^2$ for $\kappa_{1} > 0$ . Moreover, for all $i,j\in \{1,2,\ldots ,d\}$ , $\sup_{x\in \mathbb{R}^d}\nabla_{y_i}\nabla_{y_j}\nabla_{x_i}\nabla_{x_j}k(x,y)|_{y = x}\leq \kappa_2^2$ for $\kappa_{2} > 0$ .

Assumption 3 (Decaying kernel derivatives). The kernel $k$ is differentiable and admits a $\gamma \in \mathbb{R}$ such that, for all $x, y \in \mathbb{R}^d$ satisfying $\| x - y \|_2 \geq 1$ ,

$$
\left\| \nabla_{x} k (x, y) \right\|_{2} \leq \gamma / \| x - y \|_{2}.
$$

Assumptions 1, 2, and 3 are commonly invoked and commonly satisfied in the literature. For example, the Lipschitz score assumption is consistent with prior SVGD convergence analyses [17, 15, 22] and, by Gorham and Mackey [8, Prop. 1], the score $s_p$ is mean-zero under the mild integrability condition $\mathbb{E}_{X \sim P}[\| s_p(X)\|_2] < \infty$ . The bounded and decaying derivative assumptions have also been made in prior analyses [15, 9] and, as we detail in Appendix A, are satisfied by the kernels most commonly used in SVGD, like the Gaussian and inverse multiquadric (IMQ) kernels. Notably, in these cases, the bounds $\kappa_1$ and $\kappa_2$ are independent of the dimension $d$ .

To leverage the continuous SVGD convergence rates of Salim et al. [22], we additionally assume that the target $P$ satisfies Talagrand's $T_{1}$ inequality [25, Def. 22.1]. Remarkably, Villani [25, Thm. 22.10] showed that Assumption 4 is equivalent to $P$ being a sub-Gaussian distribution. Hence, this mild assumption holds for all strongly log concave $P$ [23, Def. 2.9], all $P$ satisfying the log Sobolev inequality [25, Thm. 22.17], and all distantly dissipative $P$ for which KSD is known to control weak convergence [9, Def. 4].

Assumption 4 (Talagrand's $T_{1}$ inequality [25, Def. 22.1]). For $P \in \mathcal{P}_1$ , there exists $\lambda > 0$ such that, for all $\mu \in \mathcal{P}_1$ ,

$$
W_{1} (\mu , P) \leq \sqrt{2 \mathrm{KL} (\mu \| P) / \lambda}.
$$

Finally, we make use of the following notation specific to the SVGD algorithm.

Definition 1 (Stein operator). For any differentiable vector-valued function $g: \mathbb{R}^d \to \mathbb{R}^d$ , the Langevin Stein operator [8] for $P$ satisfying Assumption 1 is defined by

$$
\left(\mathcal{T}_{P} g\right) (x) \triangleq \langle s_{p} (x), g (x) \rangle + \nabla \cdot g (x) \quad \text{for all} \quad x \in \mathbb{R}^{d}.
$$

Definition 2 (Vector-valued Stein operator).
For any differentiable function $h: \mathbb{R}^d \to \mathbb{R}$ , the vector-valued Langevin Stein operator [19] for $P$ satisfying Assumption 1 is defined by

$$
\left(\mathcal{A}_{P} h\right) (x) \triangleq s_{p} (x) h (x) + \nabla h (x) \quad \text{for all} \quad x \in \mathbb{R}^{d}.
$$

Definition 3 (SVGD transport map and pushforward). The SVGD transport map [18] for a target $P$ satisfying Assumption 1, a kernel $k$ satisfying Assumption 2, a step size $\epsilon \geq 0$ , and an approximating distribution $\mu \in \mathcal{P}_1$ takes the form

$$
T_{\mu , \epsilon} (x) \triangleq x + \epsilon \mathbb{E}_{X \sim \mu} [ (\mathcal{A}_{P} k (\cdot , x)) (X) ] \quad \text{for all} \quad x \in \mathbb{R}^{d}.
$$

Moreover, the SVGD pushforward $\Phi_{\epsilon}(\mu)$ represents the distribution of $T_{\mu ,\epsilon}(X)$ when $X\sim \mu$ .

Definition 4 (Kernel Stein discrepancy). The Langevin kernel Stein discrepancy [KSD, 4, 19, 9] for $P$ satisfying Assumption 1, $k$ satisfying Assumption 2, and measures $\mu, \nu \in \mathcal{P}_1$ is given by

$$
\operatorname{KSD}_{P} (\mu , \nu) \triangleq \sup_{\| g \|_{\mathcal{H}^{d}} \leq 1} \mathbb{E}_{\mu} [ \mathcal{T}_{P} g ] - \mathbb{E}_{\nu} [ \mathcal{T}_{P} g ].
$$

Notably, the KSD so-defined is symmetric in its two arguments and satisfies the triangle inequality.

Lemma 1 (KSD symmetry and triangle inequality). Under Definition 4, for all $\mu, \nu, \pi \in \mathcal{P}_1$ ,

$$
\mathrm{KSD}_{P} (\mu , \nu) = \mathrm{KSD}_{P} (\nu , \mu) \quad \text{and} \quad \mathrm{KSD}_{P} (\mu , \nu) \leq \mathrm{KSD}_{P} (\mu , \pi) + \mathrm{KSD}_{P} (\pi , \nu).
$$

Proof. Fix any $\mu, \nu, \pi \in \mathcal{P}_1$ .
For symmetry, we note that $g \in \mathcal{H}^d \Leftrightarrow f = -g \in \mathcal{H}^d$ , so

$$
\operatorname{KSD}_{P}(\mu ,\pi) = \sup_{\| g\|_{\mathcal{H}^{d}}\leq 1}\mathbb{E}_{\mu}[\mathcal{T}_{P}g] - \mathbb{E}_{\pi}[\mathcal{T}_{P}g] = \sup_{\| f\|_{\mathcal{H}^{d}}\leq 1}\mathbb{E}_{\pi}[\mathcal{T}_{P}f] - \mathbb{E}_{\mu}[\mathcal{T}_{P}f] = \operatorname{KSD}_{P}(\pi ,\mu).
$$

To establish the triangle inequality, we write

$$
\begin{array}{l} \operatorname{KSD}_{P} (\mu , \nu) = \sup_{\| g \|_{\mathcal{H}^{d}} \leq 1} \mathbb{E}_{\mu} \left[ \mathcal{T}_{P} g \right] - \mathbb{E}_{\pi} \left[ \mathcal{T}_{P} g \right] + \mathbb{E}_{\pi} \left[ \mathcal{T}_{P} g \right] - \mathbb{E}_{\nu} \left[ \mathcal{T}_{P} g \right] \\ \leq \sup_{\| g \|_{\mathcal{H}^{d}} \leq 1} \left(\mathbb{E}_{\mu} \left[ \mathcal{T}_{P} g \right] - \mathbb{E}_{\pi} \left[ \mathcal{T}_{P} g \right]\right) + \sup_{\| h \|_{\mathcal{H}^{d}} \leq 1} \left(\mathbb{E}_{\pi} \left[ \mathcal{T}_{P} h \right] - \mathbb{E}_{\nu} \left[ \mathcal{T}_{P} h \right]\right) \\ \leq \mathrm{KSD}_{P} (\mu , \pi) + \mathrm{KSD}_{P} (\pi , \nu). \\ \end{array}
$$

# 3 Wasserstein Discretization Error of SVGD

Our first main result concerns the discretization error of SVGD and shows that $n$ -particle SVGD remains close to its continuous SVGD limit whenever the step size sum $b_{r-1} = \sum_{s=0}^{r-1} \epsilon_s$ is sufficiently small.

Theorem 1 (Wasserstein discretization error of SVGD). Suppose Assumptions 1, 2, and 3 hold.
For any $\mu_0^n, \mu_0^\infty \in \mathcal{P}_1$ , the outputs $\mu_r^n = \mathrm{SVGD}(\mu_0^n, r)$ and $\mu_r^\infty = \mathrm{SVGD}(\mu_0^\infty, r)$ of Algorithm 2 satisfy + +$$ +\log \left(\frac {W _ {1} \left(\mu_ {r} ^ {n} , \mu_ {r} ^ {\infty}\right)}{W _ {1} \left(\mu_ {0} ^ {n} , \mu_ {0} ^ {\infty}\right)}\right) \leq b _ {r - 1} (A + B \exp \left(C b _ {r - 1}\right)) +$$ + +for $b_{r - 1}\triangleq \sum_{s = 0}^{r - 1}\epsilon_s$ , $A = (c_{1} + c_{2})(1 + m_{P,x^{\star}})$ , $B = c_{1}m_{\mu_{0}^{n},P} + c_{2}m_{\mu_{0}^{\infty},P}$ , $C = \kappa_1^2 (3L + 1)$ , $c_{1} = \max (\kappa_{1}^{2}L,\kappa_{1}^{2})$ , and $c_{2} = \kappa_{1}^{2}(L + 1) + L\max (\gamma ,\kappa_{1}^{2})$ + +We highlight that Theorem 1 applies to any $\mathcal{P}_1$ initialization of SVGD: the initial particles supporting $\mu_0^n$ could, for example, be drawn i.i.d. from a convenient auxiliary distribution $\mu_0^\infty$ or even generated deterministically from some quadrature rule. To marry this result with the continuous SVGD convergence bound of Section 5, we will ultimately require $\mu_0^\infty$ to be a continuous distribution with finite $\mathrm{KL}(\mu_0^\infty \| P)$ . Hence, our primary desideratum for SVGD initialization is that $\mu_0^n$ have small Wasserstein distance to some $\mu_0^\infty$ with $\mathrm{KL}(\mu_0^\infty \| P) < \infty$ . Then, by Theorem 1, the SVGD discretization error $W_{1}(\mu_{r}^{n},\mu_{r}^{\infty})$ will remain small whenever the step size sum is not too large. + +The proof of Theorem 1 in Section 6 relies on two lemmas. The first, due to Gorham et al. [10], shows that the one-step SVGD pushforward map $\Phi_{\epsilon}$ (Definition 3) is pseudo-Lipschitz with respect to the 1-Wasserstein distance whenever the score function $\nabla \log p$ and kernel $k$ fulfill a commonly-satisfied pseudo-Lipschitz condition. 
Here, for any $g:\mathbb{R}^d\to \mathbb{R}^d$ , we define the Lipschitz constant $\mathrm{Lip}(g)\triangleq \sup_{x,z\in \mathbb{R}^d}\| g(x) - g(z)\| _2 / \| x - z\| _2$ . + +Lemma 2 (Wasserstein pseudo-Lipschitzness of SVGD [10, Lem. 12]). For $P$ satisfying Assumption 1, suppose that the following pseudo-Lipschitz bounds hold + +$$ +\operatorname {L i p} \left(s _ {p} (x) k (x, \cdot) + \nabla_ {x} k (x, \cdot)\right) \leq c _ {1} \left(1 + \| x - x ^ {\star} \| _ {2}\right), +$$ + +$$ +\operatorname {L i p} \left(s _ {p} k (\cdot , z) + \nabla_ {x} k (\cdot , z)\right) \leq c _ {2} \left(1 + \| z - x ^ {\star} \| _ {2}\right). +$$ + +for some constants $c_{1}, c_{2} \in \mathbb{R}$ and all $x, z \in \mathbb{R}^{d}$ . Then, for any $\mu, \nu \in \mathcal{P}_{1}$ , + +$$ +W _ {1} \left(\Phi_ {\epsilon} (\mu), \Phi_ {\epsilon} (\nu)\right) \leq W _ {1} (\mu , \nu) (1 + \epsilon c _ {\mu , \nu}), +$$ + +where $\Phi_{\epsilon}$ is the one-step SVGD pushforward (Definition 3) and $c_{\mu,\nu} = c_1(1 + m_{\mu,x^{\star}}) + c_2(1 + m_{\nu,x^{\star}})$ . + +In Section 6, we will show that, under Assumptions 1, 2, and 3, the preconditions of Lemma 2 are fulfilled with $c_{1}$ and $c_{2}$ exactly as in Theorem 1. The second lemma, proved in Section 7, controls the growth of the first and second absolute moments under SVGD. + +Lemma 3 (SVGD moment growth). Suppose Assumptions 1 and 2 hold, and let $C = \kappa_1^2(3L + 1)$ . Then the SVGD output $\mu_r$ of Algorithm 2 with $b_{r-1} \triangleq \sum_{s=0}^{r-1} \epsilon_s$ satisfies + +$$ +\begin{array}{l} m _ {\mu_ {r}, x ^ {*}} - m _ {P, x ^ {*}} \leq m _ {\mu_ {r}, P} \leq m _ {\mu_ {0}, P} \prod_ {s = 0} ^ {r - 1} (1 + \epsilon_ {s} C) \leq m _ {\mu_ {0}, P} \exp \left(C b _ {r - 1}\right), \\ M _ {\mu_ {r}, P} \leq M _ {\mu_ {0}, P} \prod_ {s = 0} ^ {r - 1} (1 + \epsilon_ {s} C) ^ {2} \leq M _ {\mu_ {0}, P} \exp (2 C b _ {r - 1}). 
\\ \end{array}
$$

The key to the proof of Lemma 3 is that we show that the norm of any SVGD update, i.e., $\| T_{\mu ,\epsilon}(x) - x\| _2$ , is controlled by $m_{\mu ,P}$ , the first absolute moment of $\mu$ measured against $P$ . This is mainly due to the Lipschitzness of the score function $s_p$ and our assumptions on the boundedness of the kernel and its derivatives. Then, we can use the result to control the growth of $m_{\mu_r,P}$ across iterations since $m_{\mu_{r + 1},P} = \mathbb{E}_{(X,Z)\sim \mu_r\otimes P}[\| T_{\mu_r,\epsilon_r}(X) - Z\| _2]$ . The same strategy applies to the second absolute moment $M_{\mu ,P}$ . The proof of Theorem 1 then follows directly from Lemma 2 where we plug in the first moment bound of Lemma 3.

# 4 KSD Discretization Error of SVGD

Our next result translates the Wasserstein error bounds of Theorem 1 into KSD error bounds.

Theorem 2 (KSD discretization error of SVGD). Suppose Assumptions 1, 2, and 3 hold. For any $\mu_0^n, \mu_0^\infty \in \mathcal{P}_1$ , the outputs of Algorithm 2, $\mu_r^n = \mathrm{SVGD}(\mu_0^n, r)$ and $\mu_r^\infty = \mathrm{SVGD}(\mu_0^\infty, r)$ , satisfy

$$
\begin{array}{l} \mathrm{KSD}_{P} \left(\mu_{r}^{n}, \mu_{r}^{\infty}\right) \leq \left(\kappa_{1} L + \kappa_{2} d\right) w_{0, n} \exp \left(b_{r - 1} (A + B \exp \left(C b_{r - 1}\right))\right) \\ + \kappa_{1} d^{1 / 4} L \sqrt{2 M_{\mu_{0}^{\infty}, P} w_{0, n}} \exp (b_{r - 1} (2 C + A + B \exp (C b_{r - 1})) / 2) \\ \end{array}
$$

for $w_{0,n} \triangleq W_1(\mu_0^n, \mu_0^\infty)$ and $A, B, C$ defined as in Theorem 1.

Our proof of Theorem 2 relies on the following lemma, proved in Section 8, which shows that the KSD is controlled by the 1-Wasserstein distance.

Lemma 4 (KSD-Wasserstein bound). Suppose Assumptions 1 and 2 hold.
For any $\mu, \nu \in \mathcal{P}_1$ ,

$$
\mathrm{KSD}_{P} (\mu , \nu) \leq \left(\kappa_{1} L + \kappa_{2} d\right) W_{1} (\mu , \nu) + \kappa_{1} d^{1 / 4} L \sqrt{2 M_{\nu , P} W_{1} (\mu , \nu)}.
$$

Lemma 4 is proved in two steps. We first linearize $(\mathcal{T}_Pg)(x)$ in the KSD definition through the Lipschitzness of $s_p$ and the boundedness and Lipschitzness of RKHS functions. Then, we apply a 1-Wasserstein optimal coupling of $(\mu ,\nu)$ to obtain the Wasserstein bound on the right.

Proof of Theorem 2. The result follows directly from Lemma 4, the second moment bound of Lemma 3, and Theorem 1.

# 5 A Finite-particle Convergence Rate for SVGD

To establish our main SVGD convergence result, we combine Theorems 1 and 2 with the following descent lemma for continuous SVGD error due to Salim et al. [22], which shows that continuous SVGD decreases the KL divergence to $P$ and drives the KSD to $P$ to zero.

Lemma 5 (Continuous SVGD descent lemma [22, Thm. 3.2]). Suppose Assumptions 1, 2, and 4 hold, and consider the outputs $\mu_r^\infty = \mathrm{SVGD}(\mu_0^\infty, r)$ and $\mu_{r+1}^\infty = \mathrm{SVGD}(\mu_0^\infty, r+1)$ of Algorithm 2 with $\mu_0^\infty \ll P$ . If $\max_{0 \leq s \leq r} \epsilon_s \leq R_{\alpha,2}$ for some $\alpha > 1$ and

$$
R_{\alpha , p} \triangleq \min \left(\frac{p}{\kappa_{1}^{2} (L + \alpha^{2})}, (\alpha - 1) \left(1 + L m_{\mu_{0}^{\infty}, x^{\star}} + 2 L \sqrt{\frac{2}{\lambda} \mathrm{KL} \left(\mu_{0}^{\infty} \| P\right)}\right)\right) \quad \text{for } p \in \{1, 2 \}, \tag{1}
$$

then

$$
\mathrm{KL} \left(\mu_{r + 1}^{\infty} \| P\right) - \mathrm{KL} \left(\mu_{r}^{\infty} \| P\right) \leq - \epsilon_{r} \left(1 - \frac{\kappa_{1}^{2} (L + \alpha^{2})}{2} \epsilon_{r}\right) \mathrm{KSD}_{P} \left(\mu_{r}^{\infty}, P\right)^{2}.
\tag{2}
$$

By summing the result (2) over $r \in \{0, \dots, t\}$ , we obtain the following corollary.

Corollary 1. Under the assumptions and notation of Lemma 5, suppose $\max_{0\leq r\leq t}\epsilon_r\leq R_{\alpha ,1}$ for some $\alpha >1$ , and let $\pi_r\triangleq \frac{c(\epsilon_r)}{\sum_{s = 0}^t c(\epsilon_s)}$ for $c(\epsilon)\triangleq \epsilon \Big(1 - \frac{\kappa_1^2(L + \alpha^2)}{2}\epsilon \Big).$ Since $\frac{\epsilon}{2}\leq c(\epsilon) < \epsilon$ , we have

$$
\sum_{r = 0}^{t} \pi_{r} \mathrm{KSD}_{P} (\mu_{r}^{\infty}, P)^{2} \leq \frac{1}{\sum_{r = 0}^{t} c (\epsilon_{r})} \mathrm{KL} (\mu_{0}^{\infty} \| P) \leq \frac{2}{\sum_{r = 0}^{t} \epsilon_{r}} \mathrm{KL} (\mu_{0}^{\infty} \| P).
$$

Finally, we arrive at our main result, which bounds the approximation error of $n$ -particle SVGD in terms of the chosen step size sequence and the initial discretization error $W_{1}(\mu_{0}^{n}, \mu_{0}^{\infty})$ .

Theorem 3 (KSD error of finite-particle SVGD). Suppose Assumptions 1, 2, 3, and 4 hold, fix any $\mu_0^\infty \ll P$ and $\mu_0^n\in \mathcal{P}_1$ , and let $w_{0,n}\triangleq W_1(\mu_0^n,\mu_0^\infty)$ .
If $\max_{0\leq r < t}\epsilon_r\leq \epsilon_t\triangleq R_{\alpha ,1}$ for some $\alpha >1$ and $R_{\alpha ,1}$ defined in Lemma 5, then the Algorithm 2 outputs $\mu_r^n = \mathrm{SVGD}(\mu_0^n,r)$ satisfy

$$
\min_{0 \leq r \leq t} \mathrm{KSD}_{P} \left(\mu_{r}^{n}, P\right) \leq \sum_{r = 0}^{t} \pi_{r} \mathrm{KSD}_{P} \left(\mu_{r}^{n}, P\right) \leq a_{t - 1} + \sqrt{\frac{2}{R_{\alpha , 1} + b_{t - 1}} \mathrm{KL} \left(\mu_{0}^{\infty} \| P\right)}, \tag{3}
$$

for $\pi_r$ as defined in Corollary 1, $(A,B,C)$ as defined in Theorem 1, $b_{t - 1}\triangleq \sum_{r = 0}^{t - 1}\epsilon_r$ , and

$$
\begin{array}{l} a_{t - 1} \triangleq \left(\kappa_{1} L + \kappa_{2} d\right) w_{0, n} \exp \left(b_{t - 1} (A + B \exp \left(C b_{t - 1}\right))\right) \tag{4} \\ + \kappa_{1} d^{1 / 4} L \sqrt{2 M_{\mu_{0}^{\infty}, P} w_{0, n}} \exp (b_{t - 1} (2 C + A + B \exp (C b_{t - 1})) / 2). \\ \end{array}
$$

Proof. By the triangle inequality (Lemma 1) and Theorem 2 we have

$$
\left| \mathrm{KSD}_{P} \left(\mu_{r}^{n}, P\right) - \mathrm{KSD}_{P} \left(\mu_{r}^{\infty}, P\right) \right| \leq \mathrm{KSD}_{P} \left(\mu_{r}^{n}, \mu_{r}^{\infty}\right) \leq a_{r - 1}
$$

for each $r$ . Therefore

$$
\sum_{r = 0}^{t} \pi_{r} \left(\mathrm{KSD}_{P} \left(\mu_{r}^{n}, P\right) - a_{r - 1}\right)^{2} \leq \sum_{r = 0}^{t} \pi_{r} \mathrm{KSD}_{P} \left(\mu_{r}^{\infty}, P\right)^{2} \leq \frac{2}{R_{\alpha , 1} + b_{t - 1}} \mathrm{KL} \left(\mu_{0}^{\infty} \| P\right), \tag{5}
$$

where the last inequality follows from Corollary 1.
Moreover, by Jensen's inequality,

$$
\sum_{r = 0}^{t} \pi_{r} \left(\mathrm{KSD}_{P} \left(\mu_{r}^{n}, P\right) - a_{r - 1}\right)^{2} \geq \left(\sum_{r = 0}^{t} \pi_{r} \mathrm{KSD}_{P} \left(\mu_{r}^{n}, P\right) - \sum_{r = 0}^{t} \pi_{r} a_{r - 1}\right)^{2}. \tag{6}
$$

Combining (5) and (6), we have

$$
\sum_{r = 0}^{t} \pi_{r} \mathrm{KSD}_{P} \left(\mu_{r}^{n}, P\right) \leq \sum_{r = 0}^{t} \pi_{r} a_{r - 1} + \sqrt{\frac{2}{R_{\alpha , 1} + b_{t - 1}} \mathrm{KL} \left(\mu_{0}^{\infty} \| P\right)}.
$$

We finish the proof by noticing that $\sum_{r=0}^{t} \pi_r a_{r-1} \leq \max_{0 \leq r \leq t} a_{r-1} = a_{t-1}$ .

The following corollary, proved in Appendix B, provides an explicit SVGD convergence bound and rate by choosing the step size sum to balance the terms on the right-hand side of (3). In particular, Corollary 2 instantiates an explicit SVGD step size sequence that drives the kernel Stein discrepancy to zero at an order $1 / \sqrt{\log\log(n)}$ rate.

Corollary 2 (A finite-particle convergence rate for SVGD). Instantiate the notation and assumptions of Theorem 3, let $(\bar{w}_{0,n},\bar{A},\bar{B},\bar{C})$ be any upper bounds on $(w_{0,n},A,B,C)$ respectively, and define the growth functions

$$
\phi (w) \triangleq \log \log (e^{e} + \frac{1}{w}) \quad \text{and} \quad \psi_{\bar{B}, \bar{C}} (x, y, \beta) \triangleq \frac{1}{\bar{C}} \log (\frac{1}{\bar{B}} \max (\bar{B}, \frac{1}{\beta} \log \frac{1}{x} - y)).
$$

If the step size sum $b_{t-1} = \sum_{r=0}^{t-1} \epsilon_r = s_n^\star$ for

$$
s_{n}^{\star} \triangleq \min \left(\psi_{\bar{B}, \bar{C}} \big (\bar{w}_{0, n} \sqrt{\phi (\bar{w}_{0, n})}, \bar{A}, \beta_{1} \big), \psi_{\bar{B}, \bar{C}} \big (\bar{w}_{0, n} \phi (\bar{w}_{0, n}), \bar{A} + 2 \bar{C}, \beta_{2} \big)\right),
$$

$$
\beta_{1} \triangleq \max \left(1, \psi_{\bar{B}, \bar{C}} \left(\bar{w}_{0, n} \sqrt{\phi (\bar{w}_{0, n})}, \bar{A}, 1\right)\right), \quad \text{and}
$$

$$
\beta_{2} \triangleq \max \left(1, \psi_{\bar{B}, \bar{C}} (\bar{w}_{0, n} \phi (\bar{w}_{0, n}), \bar{A} + 2 \bar{C}, 1)\right)
$$

then

$$
\begin{array}{l} \min_{0\leq r\leq t}\operatorname{KSD}_{P}(\mu_{r}^{n},P) \\ \leq \left\{\begin{array}{l l}\left(\kappa_{1} L + \kappa_{2} d\right) \bar{w}_{0, n} + \kappa_{1} d^{1 / 4} L \sqrt{2 M_{\mu_{0}^{\infty}, P} \bar{w}_{0, n}} + \sqrt{\frac{2}{R_{\alpha , 1}} \mathrm{KL} \left(\mu_{0}^{\infty} \| P\right)} & \text{if } s_{n}^{\star} = 0 \\ \frac{\left(\kappa_{1} L + \kappa_{2} d\right) + \kappa_{1} d^{1 / 4} L \sqrt{2 M_{\mu_{0}^{\infty}, P}}}{\sqrt{\phi (\bar{w}_{0, n})}} + \sqrt{\frac{2 \mathrm{KL} \left(\mu_{0}^{\infty} \| P\right)}{R_{\alpha , 1} + \frac{1}{\bar{C}} \log \left(\frac{1}{\bar{B}} \left(\frac{\log \left(1 / \left(\bar{w}_{0, n} \phi \left(\bar{w}_{0, n}\right)\right)\right)}{\max \left(1, \psi_{\bar{B}, \bar{C}} \left(\bar{w}_{0, n}, 0, 1\right)\right)} - \bar{A} - 2 \bar{C}\right)\right)}} & \text{otherwise}\end{array}\right. \quad (7) \\ = O \left(\frac{1}{\sqrt{\log \log \left(e^{e} + \frac{1}{\bar{w}_{0, n}}\right)}}\right).
(8) \\
\end{array}
$$

If, in addition, $\mu_0^n = \frac{1}{n}\sum_{i = 1}^n\delta_{x_i}$ for $x_{i}\stackrel {i.i.d.}{\sim}\mu_{0}^{\infty}$ with $M_{\mu_0^\infty}\triangleq \mathbb{E}_{\mu_0^\infty}[\| \cdot \| _2^2 ] < \infty$ , then

$$
\bar{w}_{0, n} \triangleq \frac{M_{\mu_{0}^{\infty}} \log (n)^{\mathbb{I} [ d = 2 ]}}{\delta n^{1 / (2 \vee d)}} \geq w_{0, n} \tag{9}
$$

with probability at least $1 - c\delta$ for a universal constant $c > 0$ . Hence, with this choice of $\bar{w}_{0,n}$ ,

$$
\min_{0 \leq r \leq t} \mathrm{KSD}_{P} (\mu_{r}^{n}, P) = O \left(\frac{1}{\sqrt{\log \log (n \delta)}}\right)
$$

with probability at least $1 - c\delta$ .

Specifically, given any upper bounds $(\bar{w}_{0,n},\bar{A},\bar{B},\bar{C})$ on the quantities $(w_{0,n},A,B,C)$ defined in Theorem 3, Corollary 2 specifies a recommended step size sum $s_n^\star$ to achieve an order $1 / \sqrt{\log\log(e^e + \frac{1}{\bar{w}_{0,n}})}$ rate of SVGD convergence in KSD. Several remarks are in order. First, the target step size sum $s_n^\star$ is easily computed given $(\bar{w}_{0,n},\bar{A},\bar{B},\bar{C})$ . Second, we note that the target $s_n^\star$ can equal 0 when the initial Wasserstein upper bound $\bar{w}_{0,n}$ is insufficiently small since $\log \left(\frac{1}{\bar{B}}\max \left(\bar{B},\frac{1}{\beta}\log \frac{1}{x} -y\right)\right) = \log \left(\frac{\bar{B}}{\bar{B}}\right) = 0$ for small arguments $x$ . In this case, the setting $b_{t - 1} = 0$ amounts to not running SVGD at all or, equivalently, to setting all step sizes to 0.

Third, Corollary 2 also implies a time complexity for achieving an order $1 / \sqrt{\log\log(e^{e} + \frac{1}{\bar{w}_{0,n}})}$ error rate. Recall that Theorem 3 assumes that $\max_{0\leq r < t}\epsilon_r\leq R_{\alpha ,1}$ for $R_{\alpha ,1}$ defined in (1).
Hence, $t^\star \triangleq \lceil s_n^\star /R_{\alpha ,1}\rceil$ rounds of SVGD are necessary to achieve the recommended setting $\sum_{r = 0}^{t^{\star} - 1}\epsilon_{r} = s_{n}^{\star}$ while also satisfying the constraints of Theorem 3. Moreover, $t^\star$ rounds are also sufficient if each step size is chosen equal to $s_n^\star /t^\star$ . In addition, since $s_n^\star = O(\log \log (e^e +\frac{1}{\bar{w}_{0,n}}))$ , Corollary 2 implies that SVGD can deliver $\min_{0\leq r\leq t^{\star}}\mathrm{KSD}_P(\mu_r^n,P)\leq \Delta$ in $t^\star = O(1 / \Delta^2)$ rounds. Since the computational complexity of each SVGD round is dominated by $\Theta (n^2)$ kernel gradient evaluations (i.e., evaluating $\nabla_xk(x_i,x_j)$ for each pair of particles $(x_{i},x_{j})$ ), the overall computational complexity to achieve the order $1 / \sqrt{\log\log(e^{e} + \frac{1}{\bar{w}_{0,n}})}$ error rate is $O(n^{2}\lceil s_{n}^{\star} / R_{\alpha ,1}\rceil) = O(n^{2}\log \log (e^{e} + \frac{1}{\bar{w}_{0,n}}))$ . + +# 6 Proof of Theorem 1: Wasserstein discretization error of SVGD + +In order to leverage Lemma 2, we first show that the pseudo-Lipschitzness conditions of Lemma 2 hold given our Assumptions 1 to 3. Recall that $s_p$ is Lipschitz and $x^{\star}$ satisfies $s_p(x^{\star}) = 0$ by Assumption 1. 
Then, by the triangle inequality, the definition of $\| \cdot \|_{\mathrm{op}}$ , the bounds $\| \nabla_z k(x,z)\|_2 \leq \kappa_1^2$ and $\| \nabla_z\nabla_xk(x,z)\|_{\mathrm{op}} \leq \kappa_1^2$ from Assumption 2, and Cauchy-Schwarz,

$$
\begin{array}{l} \operatorname{Lip} \left(s_{p} (x) k (x, \cdot) + \nabla_{x} k (x, \cdot)\right) \\ \leq \left\| s_{p} (x) - s_{p} \left(x^{\star}\right) \right\|_{2} \left\| \nabla_{z} k (x, z) \right\|_{2} + \left\| \nabla_{z} \nabla_{x} k (x, z) \right\|_{\mathrm{op}} \\ = \sup_{\| u \|_{2} \leq 1} \left(\| s_{p} (x) - s_{p} \left(x^{\star}\right) \|_{2} \left| \nabla_{z} k (x, z)^{\top} u \right|\right) + \| \nabla_{z} \nabla_{x} k (x, z) \|_{\mathrm{op}} \\ \leq L \| x - x^{\star} \|_{2} \| \nabla_{z} k (x, z) \|_{2} + \| \nabla_{z} \nabla_{x} k (x, z) \|_{\mathrm{op}} \\ \leq \max (\kappa_{1}^{2} L, \kappa_{1}^{2}) (1 + \| x - x^{\star} \|_{2}). \\ \end{array}
$$

Letting $c_{1} = \max(\kappa_{1}^{2}L, \kappa_{1}^{2})$ and taking the supremum over $z$ proves the first pseudo-Lipschitzness condition. Similarly, we have

$$
\begin{array}{l} \operatorname{Lip} \left(s_{p} k (\cdot , z) + \nabla_{x} k (\cdot , z)\right) \\ \leq \sup_{x \in \mathbb{R}^{d}} \operatorname{Lip} (s_{p}) | k (x, z) | + \| s_{p} (x) - s_{p} (x^{\star}) \|_{2} \| \nabla_{x} k (x, z) \|_{2} + \left\| \nabla_{x}^{2} k (x, z) \right\|_{\mathrm{op}} \\ \leq \kappa_{1}^{2} L + \sup_{x \in \mathbb{R}^{d}} L \| x - x^{\star} \|_{2} \| \nabla_{x} k (x, z) \|_{2} + \kappa_{1}^{2}, \tag{10} \\ \end{array}
$$

where we used the Lipschitzness of $s_p$ from Assumption 1 and $|k(x,z)| \leq \kappa_1^2$ , $\left\| \nabla_x^2 k(x,z) \right\|_{\mathrm{op}} \leq \kappa_1^2$ from Assumption 2. Now we consider two cases separately: when $\| x - z \|_2 \geq 1$ and $\| x - z \|_2 < 1$ .

- Case 1: $\| x - z\| _2\geq 1$ .
Recall that there exists $\gamma >0$ such that $\| \nabla_xk(x,z)\| _2\leq \gamma /\| x - z\| _2$ by Assumption 3. Then, using this together with the triangle inequality, we have + +$$ +\left\| x - x ^ {\star} \right\| _ {2} \left\| \nabla_ {x} k (x, z) \right\| _ {2} \leq \gamma \frac {\left\| x - z \right\| _ {2} + \left\| z - x ^ {\star} \right\| _ {2}}{\left\| x - z \right\| _ {2}} \leq \gamma \left(1 + \left\| z - x ^ {\star} \right\| _ {2}\right). \tag {11} +$$ + +- Case 2: $\| x - z \|_2 < 1$ . Then, using $\| \nabla_x k(x,z) \|_2 \leq \kappa_1^2$ from Assumption 2 and by the triangle inequality, we have + +$$ +\left\| x - x ^ {\star} \right\| _ {2} \left\| \nabla_ {x} k (x, z) \right\| _ {2} \leq \kappa_ {1} ^ {2} \left(\left\| x - z \right\| _ {2} + \left\| z - x ^ {\star} \right\| _ {2}\right) < \kappa_ {1} ^ {2} \left(1 + \left\| z - x ^ {\star} \right\| _ {2}\right). \tag {12} +$$ + +Combining (11) and (12) and using the triangle inequality, we get + +$$ +\left\| x - x ^ {\star} \right\| _ {2} \left\| \nabla_ {x} k (x, z) \right\| _ {2} \leq \max \left(\gamma , \kappa_ {1} ^ {2}\right) \left(1 + \left\| z - x ^ {\star} \right\| _ {2}\right). \tag {13} +$$ + +Plugging (13) back into (10), we can show the second pseudo-Lipschitzness condition holds for $c_{2} = \max(\kappa_{1}^{2}(L + 1) + L\max(\gamma, \kappa_{1}^{2}), L\max(\gamma, \kappa_{1}^{2})) = \kappa_{1}^{2}(L + 1) + L\max(\gamma, \kappa_{1}^{2})$ . + +Now we have proved that both pseudo-Lipschitzness preconditions of Lemma 2 hold under our Assumptions 1 to 3. 
By repeated application of Lemma 2 and the inequality $(1 + x) \leq e^{x}$ , we have

$$
\begin{array}{l} W_{1} \left(\mu_{r + 1}^{n}, \mu_{r + 1}^{\infty}\right) = W_{1} \left(\Phi_{\epsilon_{r}} \left(\mu_{r}^{n}\right), \Phi_{\epsilon_{r}} \left(\mu_{r}^{\infty}\right)\right) \leq \left(1 + \epsilon_{r} D_{r}\right) W_{1} \left(\mu_{r}^{n}, \mu_{r}^{\infty}\right) \\ \leq W_{1} \left(\mu_{0}^{n}, \mu_{0}^{\infty}\right) \prod_{s = 0}^{r} \left(1 + \epsilon_{s} D_{s}\right) \leq W_{1} \left(\mu_{0}^{n}, \mu_{0}^{\infty}\right) \exp \left(\sum_{s = 0}^{r} \epsilon_{s} D_{s}\right) \tag{14} \\ \end{array}
$$

for $D_{s} = c_{1}(1 + m_{\mu_{s}^{n},x^{\star}}) + c_{2}(1 + m_{\mu_{s}^{\infty},x^{\star}})$ .

Using the result from Lemma 3, we have

$$
D_{s + 1} \leq A + B \exp \left(C b_{s}\right)
$$

for $A = (c_{1} + c_{2})(1 + m_{P,x^{\star}})$ , $B = c_{1}m_{\mu_{0}^{n},P} + c_{2}m_{\mu_{0}^{\infty},P}$ , and $C = \kappa_1^2 (3L + 1)$ . Therefore

$$
\sum_{s = 0}^{r} \epsilon_{s} D_{s} \leq \max_{0 \leq s \leq r} D_{s} \sum_{s = 0}^{r} \epsilon_{s} \leq b_{r} (A + B \exp (C b_{r - 1})) \leq b_{r} (A + B \exp (C b_{r})).
$$

Plugging this back into (14) proves the result.

# 7 Proof of Lemma 3: SVGD moment growth

From Assumption 2 we know $|k(y,x)| \leq \kappa_1^2$ and $\left\| \nabla_y^2 k(y,x) \right\|_{\mathrm{op}} \leq \kappa_1^2$ . The latter implies

$$
\left\| \nabla_{y} k (y, x) - \nabla_{z} k (z, x) \right\|_{2} \leq \kappa_{1}^{2} \| y - z \|_{2}.
$$

Recall that $s_p$ is Lipschitz and satisfies $\mathbb{E}_P[s_p(\cdot)] = 0$ by Assumption 1. Let $\mu$ be any probability measure.
Using the above results, Jensen's inequality, and the fact that $\mathbb{E}_{Z\sim P}[(\mathcal{A}_Pk(\cdot,x))(Z)] = 0$ , we have

$$
\begin{array}{l} \left\| T_{\mu , \epsilon} (x) - x \right\|_{2} \leq \epsilon \| \mathbb{E}_{X \sim \mu} [ (\mathcal{A}_{P} k (\cdot , x)) (X) ] \|_{2} \\ = \epsilon \| \mathbb{E}_{X \sim \mu} [ (\mathcal{A}_{P} k (\cdot , x)) (X) ] - \mathbb{E}_{Z \sim P} [ (\mathcal{A}_{P} k (\cdot , x)) (Z) ] \|_{2} \\ = \epsilon \| \mathbb{E}_{(X, Z) \sim \mu \otimes P} [ k (Z, x) (s_{p} (X) - s_{p} (Z)) + (k (X, x) - k (Z, x)) (s_{p} (X) - \mathbb{E}_{P} [ s_{p} (\cdot) ]) \\ \left. + \left(\nabla_{X} k (X, x) - \nabla_{Z} k (Z, x)\right) \right] \|_{2} \\ \leq \epsilon \mathbb{E}_{(X, Z) \sim \mu \otimes P} [ | k (Z, x) | \| s_{p} (X) - s_{p} (Z) \|_{2} + (| k (X, x) | + | k (Z, x) |) \| s_{p} (X) - \mathbb{E}_{Y \sim P} [ s_{p} (Y) ] \|_{2} \\ + \| \nabla_{X} k (X, x) - \nabla_{Z} k (Z, x) \|_{2} ] \\ \leq \epsilon \mathbb{E}_{(X, Z) \sim \mu \otimes P} [ \kappa_{1}^{2} (L + 1) \| X - Z \|_{2} ] + \epsilon \cdot 2 \kappa_{1}^{2} L \mathbb{E}_{(X, Y) \sim \mu \otimes P} [ \| X - Y \|_{2} ] \\ = \epsilon \kappa_{1}^{2} (3 L + 1) \mathbb{E}_{(X, Z) \sim \mu \otimes P} [ \| X - Z \|_{2} ] \\ = \epsilon C m_{\mu , P}. \tag{15} \\ \end{array}
$$

The last step used the definitions $m_{\mu ,P}\triangleq \mathbb{E}_{(X,Z)\sim \mu \otimes P}[\| X - Z\| _2]$ and $C = \kappa_1^2 (3L + 1)$ .
Then, applying the triangle inequality and (15), we have

$$
\begin{array}{l} m _ {\mu_ {r + 1}, P} = \mathbb {E} _ {(X, Z) \sim \mu_ {r + 1} \otimes P} [ \| X - Z \| _ {2} ] = \mathbb {E} _ {(X, Z) \sim \mu_ {r} \otimes P} [ \| T _ {\mu_ {r}, \epsilon_ {r}} (X) - Z \| _ {2} ] \\ \leq \mathbb {E} _ {(X, Z) \sim \mu_ {r} \otimes P} [ \| T _ {\mu_ {r}, \epsilon_ {r}} (X) - X \| _ {2} + \| X - Z \| _ {2} ] \leq (1 + \epsilon_ {r} C) m _ {\mu_ {r}, P}, \tag {16} \\ \end{array}
$$

$$
\begin{array}{l} M _ {\mu_ {r + 1}, P} = \mathbb {E} _ {(X, Z) \sim \mu_ {r + 1} \otimes P} [ \| X - Z \| _ {2} ^ {2} ] = \mathbb {E} _ {(X, Z) \sim \mu_ {r} \otimes P} [ \| T _ {\mu_ {r}, \epsilon_ {r}} (X) - Z \| _ {2} ^ {2} ] \\ \leq \mathbb {E} _ {(X, Z) \sim \mu_ {r} \otimes P} [ \| T _ {\mu_ {r}, \epsilon_ {r}} (X) - X \| _ {2} ^ {2} + 2 \| T _ {\mu_ {r}, \epsilon_ {r}} (X) - X \| _ {2} \| X - Z \| _ {2} + \| X - Z \| _ {2} ^ {2} ] \\ \leq \left(\epsilon_ {r} ^ {2} C ^ {2} + 2 \epsilon_ {r} C\right) m _ {\mu_ {r}, P} ^ {2} + M _ {\mu_ {r}, P} \leq \left(1 + 2 \epsilon_ {r} C + \epsilon_ {r} ^ {2} C ^ {2}\right) M _ {\mu_ {r}, P} \\ = \left(1 + \epsilon_ {r} C\right) ^ {2} M _ {\mu_ {r}, P}, \tag {17} \\ \end{array}
$$

where the second-to-last step used Jensen's inequality $m_{\mu_r,P}^2 \leq M_{\mu_r,P}$. Then, we repeatedly apply (16) and (17) together with the triangle inequality and the bound $1 + x \leq e^{x}$ to get

$$
M _ {\mu_ {r}, P} \leq M _ {\mu_ {0}, P} \prod_ {s = 0} ^ {r - 1} (1 + \epsilon_ {s} C) ^ {2} \leq M _ {\mu_ {0}, P} \exp (2 C \sum_ {s = 0} ^ {r - 1} \epsilon_ {s}) \leq M _ {\mu_ {0}, P} \exp (2 C b _ {r - 1}) \quad \text{and}
$$

$$
m _ {\mu_ {r}, x ^ {\star}} - m _ {P, x ^ {\star}} \leq m _ {\mu_ {r}, P} \leq m _ {\mu_ {0}, P} \prod_ {s = 0} ^ {r - 1} (1 + \epsilon_ {s} C) \leq m _ {\mu_ {0}, P} \exp \left(C b _ {r - 1}\right).
$$

# 8 Proof of Lemma 4: KSD-Wasserstein bound

Our proof generalizes that of Gorham and Mackey [9, Lem. 18].
Consider any $g \in \mathcal{H}^d$ satisfying $\| g\|_{\mathcal{H}^d}^2 \triangleq \sum_{i = 1}^d\| g_i\|_{\mathcal{H}}^2 \leq 1$. From Assumption 2 we know

$$
\begin{array}{l} \left\| g (x) \right\| _ {2} ^ {2} \leq k (x, x) \sum_ {i = 1} ^ {d} \left\| g _ {i} \right\| _ {\mathcal {H}} ^ {2} \leq \kappa_ {1} ^ {2}, \quad (18) \\ \| \nabla g (x) \| _ {\mathrm{op}} ^ {2} \leq \| \nabla g (x) \| _ {F} ^ {2} = \sum_ {i = 1} ^ {d} \sum_ {j = 1} ^ {d} | \nabla_ {x _ {i}} g _ {j} (x) | ^ {2} \leq \| g \| _ {\mathcal {H} ^ {d}} ^ {2} \mathrm {tr} (\nabla_ {y} \nabla_ {x} k (x, y) | _ {y = x}) \\ \leq d \| \nabla_ {y} \nabla_ {x} k (x, y) | _ {y = x} \| _ {\mathrm {op}} \leq \kappa_ {1} ^ {2} d, \quad \text{and} \quad (19) \\ \end{array}
$$

$$
\begin{array}{l} \| \nabla (\nabla \cdot g (x)) \| _ {2} ^ {2} = \sum_ {i = 1} ^ {d} \left(\sum_ {j = 1} ^ {d} \nabla_ {x _ {i}} \nabla_ {x _ {j}} g _ {j} (x)\right) ^ {2} \leq d \sum_ {i = 1} ^ {d} \sum_ {j = 1} ^ {d} | \nabla_ {x _ {i}} \nabla_ {x _ {j}} g _ {j} (x) | ^ {2} \\ \leq d \sum_ {i = 1} ^ {d} \sum_ {j = 1} ^ {d} \| g _ {j} \| _ {\mathcal {H}} ^ {2} \big (\nabla_ {y _ {i}} \nabla_ {y _ {j}} \nabla_ {x _ {i}} \nabla_ {x _ {j}} k (x, y) | _ {y = x} \big) \leq \kappa_ {2} ^ {2} d ^ {2}. \\ \end{array}
$$

Suppose $X, Y, Z$ are distributed so that $(X, Y)$ is a 1-Wasserstein optimal coupling of $(\mu, \nu)$ and $Z$ is independent of $(X, Y)$.
Since $s_p$ is $L$ -Lipschitz with $\mathbb{E}_P[s_p] = 0$ (Assumption 1), $g$ is bounded (18), and $g$ and $\nabla \cdot g$ are Lipschitz (19), repeated use of Cauchy-Schwarz gives + +$$ +\begin{array}{l} \mathbb {E} _ {\mu} \left[ \mathcal {T} _ {P} g \right] - \mathbb {E} _ {\nu} \left[ \mathcal {T} _ {P} g \right] \\ = \mathbb {E} [ \nabla \cdot g (X) - \nabla \cdot g (Y) ] + \mathbb {E} [ \langle s _ {p} (X) - s _ {p} (Y), g (X) \rangle ] + \mathbb {E} [ \langle s _ {p} (Y) - s _ {p} (Z), g (X) - g (Y) \rangle ] \\ \leq \left(\kappa_ {2} d + \kappa_ {1} L\right) W _ {1} (\mu , \nu) + L \mathbb {E} \left[ \| Y - Z \| _ {2} \min \left(2 \kappa_ {1}, \kappa_ {1} \sqrt {d} \| X - Y \| _ {2}\right) \right]. \\ \end{array} +$$ + +Since our choice of $g$ was arbitrary, the first advertised result now follows from the definition of KSD (Definition 4). The second claim then follows from Cauchy-Schwarz and the inequality $\min(a, b)^2 \leq ab$ for $a, b \geq 0$ , since + +$$ +\begin{array}{l} \mathbb {E} \left[ \| Y - Z \| _ {2} \min \left(2 \kappa_ {1}, \kappa_ {1} \sqrt {d} \| X - Y \| _ {2}\right) \right] \leq M _ {\nu , P} ^ {1 / 2} \mathbb {E} \left[ \min \left(2 \kappa_ {1}, \kappa_ {1} \sqrt {d} \| X - Y \| _ {2}\right) ^ {2} \right] ^ {1 / 2} \\ \leq \sqrt {2 M _ {\nu , P}} \kappa_ {1} d ^ {1 / 4} \mathbb {E} [ \| X - Y \| _ {2} ] ^ {1 / 2} = \sqrt {2 M _ {\nu , P} W _ {1} (\mu , \nu)} \kappa_ {1} d ^ {1 / 4}. \\ \end{array} +$$ + +# 9 Conclusions and Limitations + +In summary, we have proved the first unified convergence bound and rate for finite-particle SVGD. In particular, our results show that with a suitably chosen step size sequence, SVGD with $n$ -particles drives the KSD to zero at an order $1 / \sqrt{\log\log(n)}$ rate. The assumptions we have made on the target and kernel are mild and strictly weaker than those used in prior work to establish KSD weak convergence control [9, 3, 12, 1]. 
However, we suspect that, with additional effort, the Lipschitz score assumption (Assumption 1) can be relaxed to accommodate pseudo-Lipschitz scores as in Erdogdu et al. [7] or weakly-smooth scores as in Sun et al. [24]. A second limitation of this work is that the obtained rate of convergence is quite slow. However, we hope that this initial recipe for explicit, non-asymptotic convergence will serve as both a template and a catalyst for the field to develop refined upper and lower bounds for SVGD error. To this end, we leave the reader with several open challenges. First, can one establish a non-trivial minimax lower bound for the convergence of SVGD? Second, can one identify which types of target distributions lead to worst-case convergence behavior for SVGD? Finally, can one identify commonly met assumptions on the target distribution and kernel under which the guaranteed convergence rate of SVGD can be significantly improved? Promising follow-up work has already begun investigating speed-ups obtainable by focusing on the convergence of a finite set of moments [20] or by modifying the SVGD algorithm [5]. + +# References + +[1] Alessandro Barp, Carl-Johann Simon-Gabriel, Mark Girolami, and Lester Mackey. Targeted separation and convergence with kernel discrepancies. arXiv preprint arXiv:2209.12835, 2022. 1, 2, 9 +[2] Alain Berlinet and Christine Thomas-Agnan. Reproducing kernel Hilbert spaces in probability and statistics. Springer Science & Business Media, 2011. 2 +[3] Wilson Ye Chen, Lester Mackey, Jackson Gorham, François-Xavier Briol, and Chris Oates. Stein points. In International Conference on Machine Learning, pages 844-853, 2018. 1, 2, 9 + +[4] Kacper Chwialkowski, Heiko Strathmann, and Arthur Gretton. A kernel test of goodness of fit. In International Conference on Machine Learning, pages 2606-2615, 2016. 1, 3 +[5] Aniket Das and Dheeraj Nagaraj. Provably fast finite particle variants of svgd via virtual particle stochastic approximation. 
arXiv preprint arXiv:2305.17558, 2023. 9 +[6] Andrew Duncan, Nikolas Nusken, and Lukasz Szpruch. On the geometry of Stein variational gradient descent. arXiv preprint arXiv:1912.00894, 2019. 1 +[7] Murat A. Erdogdu, Lester Mackey, and Ohad Shamir. Global non-convex optimization with discretized diffusions. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 9694-9703, 2018. 9 +[8] Jackson Gorham and Lester Mackey. Measuring sample quality with Stein's method. In Advances in Neural Information Processing Systems, pages 226-234, 2015. 3 +[9] Jackson Gorham and Lester Mackey. Measuring sample quality with kernels. In International Conference on Machine Learning, pages 1292-1301, 2017. 1, 2, 3, 9, 12 +[10] Jackson Gorham, Anant Raj, and Lester Mackey. Stochastic Stein discrepancies. Advances in Neural Information Processing Systems, 33:17931-17942, 2020. 1, 4 +[11] Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In International Conference on Machine Learning, pages 1352-1361, 2017. 1 +[12] Jonathan Huggins and Lester Mackey. Random feature Stein discrepancies. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, pages 1903-1913. 2018. 1, 2, 9 +[13] Priyank Jaini, Lars Holdijk, and Max Welling. Learning equivariant energy based models with equivariant Stein variational gradient descent. Advances in Neural Information Processing Systems, 34:16727-16737, 2021. 1 +[14] Heishiro Kanagawa, Arthur Gretton, and Lester Mackey. Controlling moments with kernel stein discrepancies. arXiv preprint arXiv:2211.05408, 2022. 
1 +[15] Anna Korba, Adil Salim, Michael Arbel, Giulia Luise, and Arthur Gretton. A non-asymptotic analysis for Stein variational gradient descent. Advances in Neural Information Processing Systems, 33, 2020. 1, 2, 3 +[16] Jing Lei. Convergence and concentration of empirical measures under Wasserstein distance in unbounded functional spaces. Bernoulli, 26(1):767-798, 2020. 14 +[17] Qiang Liu. Stein variational gradient descent as gradient flow. In Advances in Neural Information Processing Systems, pages 3115-3123, 2017. 1, 3 +[18] Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. Advances in Neural Information Processing Systems, 29:2378-2386, 2016. 1, 2, 3, 13 +[19] Qiang Liu, Jason Lee, and Michael Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. In International Conference on Machine Learning, pages 276-284, 2016. 1, 3 +[20] Tianle Liu, Promit Ghosal, Krishnakumar Balasubramanian, and Natesh Pillai. Towards understanding the dynamics of gaussian–stein variational gradient descent. arXiv preprint arXiv:2305.14076, 2023. 9 +[21] Yang Liu, Prajit Ramachandran, Qiang Liu, and Jian Peng. Stein variational policy gradient. In 33rd Conference on Uncertainty in Artificial Intelligence, 2017. 1 +[22] Adil Salim, Lukang Sun, and Peter Richtarik. A convergence theory for SVGD in the population limit under Talagrand's inequality T1. In International Conference on Machine Learning, pages 19139-19152. PMLR, 2022. 1, 2, 3, 5 +[23] Adrien Saumard and Jon A Wellner. Log-concavity and strong log-concavity: a review. Statistics surveys, 8:45, 2014. 3 +[24] Lukang Sun, Avetik Karagulyan, and Peter Richtarik. Convergence of Stein variational gradient descent under a weaker smoothness condition. arXiv preprint arXiv:2206.00508, 2022. 1, 9 + +[25] Cédric Villani. Optimal transport: old and new, volume 338. Springer, 2009. 3 +[26] Dilin Wang and Qiang Liu. 
Learning to draw samples: With application to amortized MLE for generative adversarial learning. arXiv preprint arXiv:1611.01722, 2016. 1
[27] Dilin Wang, Zhe Zeng, and Qiang Liu. Stein variational message passing for continuous graphical models. In International Conference on Machine Learning, pages 5219-5227, 2018. 1
[28] Jingwei Zhuo, Chang Liu, Jiaxin Shi, Jun Zhu, Ning Chen, and Bo Zhang. Message passing Stein variational gradient descent. In International Conference on Machine Learning, pages 6018-6027, 2018. 1

# A Kernel Assumptions

To show that Assumptions 2 and 3 are met by the most commonly used SVGD kernels with constants independent of dimension, we begin by bounding the derivatives of any radial kernel of the form $k(x,y) = \phi (\| x - y\| _2^2 /2)$ with $\phi :\mathbb{R}\to \mathbb{R}$ four times differentiable. By the reproducing property and Cauchy-Schwarz we have

$$
| k (x, y) | = | \langle k (x, \cdot), k (y, \cdot) \rangle_ {\mathcal {H}} | \leq \| k (x, \cdot) \| _ {\mathcal {H}} \| k (y, \cdot) \| _ {\mathcal {H}} = \sqrt {k (x , x)} \sqrt {k (y , y)} = \phi (0),
$$

$$
\left\| \nabla_ {x} k (x, y) \right\| _ {2} = \left| \phi^ {\prime} \left(\left\| x - y \right\| _ {2} ^ {2} / 2\right) \right| \| x - y \| _ {2},
$$

$$
\begin{array}{l} \| \nabla_ {y} \nabla_ {x} k (x, y) \| _ {\mathrm {op}} = \left\| - \phi^ {\prime \prime} (\| x - y \| _ {2} ^ {2} / 2) (x - y) (x - y) ^ {\top} - \phi^ {\prime} (\| x - y \| _ {2} ^ {2} / 2) I \right\| _ {\mathrm {op}} \\ \leq | \phi^ {\prime \prime} (\| x - y \| _ {2} ^ {2} / 2) | \| x - y \| _ {2} ^ {2} + | \phi^ {\prime} (\| x - y \| _ {2} ^ {2} / 2) |, \quad \text{and} \\ \left\| \nabla_ {x} ^ {2} k (x, y) \right\| _ {\mathrm {op}} = \left\| \phi^ {\prime \prime} (\| x - y \| _ {2} ^ {2} / 2) (x - y) (x - y) ^ {\top} + \phi^ {\prime} (\| x - y \| _ {2} ^ {2} / 2) I \right\| _ {\mathrm {op}} \\ \leq | \phi^ {\prime \prime} (\| x - y \| _ {2} ^ {2} / 2) | \| x - y \| _ {2} ^ {2} + | \phi^
{\prime} (\| x - y \| _ {2} ^ {2} / 2) |. \\ \end{array} +$$ + +Similarly, the partial derivatives take the form + +$$ +\nabla_ {x _ {j}} k (x, y) = \phi^ {\prime} (\| x - y \| _ {2} ^ {2} / 2) (x _ {j} - y _ {j}) +$$ + +$$ +\nabla_ {y _ {j}} \nabla_ {x _ {j}} k (x, y) = - \phi^ {\prime} (\| x - y \| _ {2} ^ {2} / 2) - \phi^ {\prime \prime} (\| x - y \| _ {2} ^ {2} / 2) (y _ {j} - x _ {j}) ^ {2} +$$ + +$$ +\begin{array}{l} \nabla_ {x _ {i}} \nabla_ {y _ {j}} \nabla_ {x _ {j}} k (x, y) = - \phi^ {\prime \prime} (\| x - y \| _ {2} ^ {2} / 2) (x _ {i} - y _ {i}) - \phi^ {\prime \prime \prime} (\| x - y \| _ {2} ^ {2} / 2) (y _ {j} - x _ {j}) ^ {2} (x _ {i} - y _ {i}) \\ - \mathbb {I} [ i = j ] 2 \phi^ {\prime \prime} (\| x - y \| _ {2} ^ {2} / 2) (x _ {i} - y _ {i}) \\ = - \phi^ {\prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right) \left(x _ {i} - y _ {i}\right) \\ - \mathbb {I} [ i \neq j ] \phi^ {\prime \prime \prime} (\| x - y \| _ {2} ^ {2} / 2) (y _ {j} - x _ {j}) ^ {2} (x _ {i} - y _ {i}) \\ + \mathbb {I} [ i = j ] \left(\phi^ {\prime \prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right) \left(y _ {i} - x _ {i}\right) ^ {3} - 2 \phi^ {\prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right) \left(x _ {i} - y _ {i}\right)\right) \\ \end{array} +$$ + +$$ +\begin{array}{l} \nabla_ {y _ {i}} \nabla_ {x _ {i}} \nabla_ {y _ {j}} \nabla_ {x _ {j}} k (x, y) = \phi^ {\prime \prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right) \left(x _ {i} - y _ {i}\right) ^ {2} + \phi^ {\prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right) \\ + \mathbb {I} [ i \neq j ] \left(\phi^ {\prime \prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right) \left(y _ {j} - x _ {j}\right) ^ {2} \left(x _ {i} - y _ {i}\right) ^ {2} \right. 
\\ + \phi^ {\prime \prime \prime} (\left\| x - y \right\| _ {2} ^ {2} / 2) \left(y _ {j} - x _ {j}\right) ^ {2}) \\ + \mathbb {I} [ i = j ] \left(\phi^ {\prime \prime \prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right) \left(y _ {i} - x _ {i}\right) ^ {4} + 5 \phi^ {\prime \prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right) \left(y _ {i} - x _ {i}\right) ^ {2} \right. \\ + 2 \phi^ {\prime \prime} (\left\| x - y \right\| _ {2} ^ {2} / 2)) \\ = \phi^ {\prime \prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right) \left(\left(x _ {i} - y _ {i}\right) ^ {2} + \left(x _ {j} - y _ {j}\right) ^ {2}\right) + \phi^ {\prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right) \\ + \phi^ {\prime \prime \prime \prime} (\| x - y \| _ {2} ^ {2} / 2) (y _ {j} - x _ {j}) ^ {2} (x _ {i} - y _ {i}) ^ {2} \\ + \mathbb {I} [ i = j ] \left(4 \phi^ {\prime \prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right) \left(y _ {i} - x _ {i}\right) ^ {2} + 2 \phi^ {\prime \prime} \left(\| x - y \| _ {2} ^ {2} / 2\right)\right) \\ \end{array}
$$

so that both $|k(x,y)|$ and

$$
\nabla_ {y _ {i}} \nabla_ {x _ {i}} \nabla_ {y _ {j}} \nabla_ {x _ {j}} k (x, y) | _ {y = x} = \phi^ {\prime \prime} (0) + \mathbb {I} [ i = j ] 2 \phi^ {\prime \prime} (0)
$$

are bounded (Assumption 2) by constants independent of dimension.

Gorham and Mackey [9] popularized the use of IMQ kernels for SVGD by establishing the convergence-determining properties of the associated KSD. The corresponding $\phi$ satisfies

$$
\phi (t) = (c ^ {2} + 2 t) ^ {\beta} \quad \text{for} \quad c > 0 \quad \text{and} \quad \beta \in (- 1, 0),
$$

$$
\phi^ {\prime} (t) = 2 \beta (c ^ {2} + 2 t) ^ {\beta - 1}, \quad \text{and} \quad \phi^ {\prime \prime} (t) = 4 \beta (\beta - 1) (c ^ {2} + 2 t) ^ {\beta - 2}.
$$

In this case, $\| \nabla_y\nabla_xk(x,y)\|_{\mathrm{op}}$ and $\left\| \nabla_x^2 k(x,y)\right\|_{\mathrm{op}}$ are bounded (Assumption 2) by constants independent of dimension as

$$
\begin{array}{l} | \phi^ {\prime} (\| x - y \| _ {2} ^ {2} / 2) | = - 2 \beta (c ^ {2} + \| x - y \| _ {2} ^ {2}) ^ {\beta - 1} \\ \leq - 2 \beta \min \left(c ^ {2 \beta - 2}, \| x - y \| _ {2} ^ {2 \beta - 2}\right) \leq - 2 \beta c ^ {2 \beta - 2} \quad \text{and} \\ \end{array}
$$

$$
\begin{array}{l} | \phi^ {\prime \prime} (\left\| x - y \right\| _ {2} ^ {2} / 2) | \| x - y \| _ {2} ^ {2} = 4 \beta (\beta - 1) (c ^ {2} + \| x - y \| _ {2} ^ {2}) ^ {\beta - 2} \| x - y \| _ {2} ^ {2} \\ \leq 4 \beta (\beta - 1) \left(c ^ {2} + \| x - y \| _ {2} ^ {2}\right) ^ {\beta - 1} \leq 4 \beta (\beta - 1) c ^ {2 \beta - 2}. \\ \end{array}
$$

For $\| \nabla_x k(x,y)\| _2$, we consider two cases:

- When $\| x - y \|_2 \geq 1$,

$$
\begin{array}{l} \left| \phi^ {\prime} \left(\left\| x - y \right\| _ {2} ^ {2} / 2\right) \right| \| x - y \| _ {2} \leq - 2 \beta \| x - y \| _ {2} ^ {2 \beta - 2} \| x - y \| _ {2} = - 2 \beta \| x - y \| _ {2} ^ {2 \beta - 1} \\ \leq - 2 \beta / \| x - y \| _ {2} \leq - 2 \beta . \\ \end{array}
$$

- When $\| x - y \|_2 < 1$, $|\phi'(\| x - y \|_2^2 / 2)| \| x - y \|_2 < |\phi'(\| x - y \|_2^2 / 2)| \leq -2\beta c^{2\beta - 2}$ .

Therefore, $\| \nabla_x k(x,y)\| _2$ is also bounded (Assumption 2) by constants independent of dimension, and Assumption 3 holds with $\gamma = -2\beta$ .

The original SVGD paper [18] used Gaussian kernels in all experiments, and they remain perhaps the most common choice in the literature. In this case, $\phi$ satisfies

$$
\phi (t) = e ^ {- 2 \alpha t} \quad \text{for} \quad \alpha > 0, \quad \phi^ {\prime} (t) = - 2 \alpha e ^ {- 2 \alpha t} = - 2 \alpha \phi (t), \quad \text{and} \quad \phi^ {\prime \prime} (t) = 4 \alpha^ {2} \phi (t).
$$

Using the inequality $x \leq e^{x - 1}$, we find that

$$
\left| \phi^ {\prime} \left(\left\| x - y \right\| _ {2} ^ {2} / 2\right) \right| = 2 \alpha e ^ {- \alpha \| x - y \| _ {2} ^ {2}} \leq \min \left(2 \alpha , 2 / \left(e \| x - y \| _ {2} ^ {2}\right)\right) \quad \text{and}
$$

$$
| \phi^ {\prime \prime} (\| x - y \| _ {2} ^ {2} / 2) | \| x - y \| _ {2} ^ {2} = 4 \alpha^ {2} e ^ {- \alpha \| x - y \| _ {2} ^ {2}} \| x - y \| _ {2} ^ {2} \leq 4 \alpha / e
$$

so that $\| \nabla_y\nabla_xk(x,y)\|_{\mathrm{op}},\left\| \nabla_x^2 k(x,y)\right\|_{\mathrm{op}}$ , and $\| \nabla_xk(x,y)\| _2$ are bounded (Assumption 2) by constants independent of dimension, and Assumption 3 holds with $\gamma = 2 / e$.

# B Proof of Corollary 2: A finite-particle convergence rate for SVGD

We begin by establishing a lower bound on $b_{t - 1}$. Let

$$
b _ {t - 1} ^ {(1)} = \psi_ {\bar {B}, \bar {C}} (\bar {w} _ {0, n} \sqrt {\phi (\bar {w} _ {0 , n})}, \bar {A}, \beta_ {1}) \quad \text{and} \quad b _ {t - 1} ^ {(2)} = \psi_ {\bar {B}, \bar {C}} (\bar {w} _ {0, n} \phi (\bar {w} _ {0, n}), \bar {A} + 2 \bar {C}, \beta_ {2})
$$

so that $b_{t-1} = \min(b_{t-1}^{(1)}, b_{t-1}^{(2)})$.
Since $\beta_1, \beta_2, \phi(\bar{w}_{0,n}) \geq 1$, we have

$$
\begin{array}{l} \beta_ {1} = \max \left(1, \frac {1}{\bar{C}} \log \left(\frac {1}{\bar{B}} \left(\log \frac {1}{\bar {w} _ {0 , n} \sqrt {\phi (\bar {w} _ {0 , n})}} - \bar {A}\right)\right)\right) \\ \leq \max \left(1, \frac {1}{\bar{C}} \log \left(\frac {1}{\bar{B}} \left(\log \frac {1}{\bar {w} _ {0 , n} \sqrt {\phi (\bar {w} _ {0 , n})}}\right)\right)\right) \\ \leq \max \left(1, \frac {1}{\bar{C}} \log \left(\frac {1}{\bar{B}} \left(\log \frac {1}{\bar {w} _ {0 , n}}\right)\right)\right) \quad \text{and} \\ \end{array}
$$

$$
\begin{array}{l} \beta_ {2} = \max \left(1, \frac {1}{\bar {C}} \log \left(\frac {1}{\bar {B}} \left(\log \frac {1}{\bar {w} _ {0 , n} \phi (\bar {w} _ {0 , n})} - \bar {A} - 2 \bar {C}\right)\right)\right) \\ \leq \max \left(1, \frac {1}{\bar{C}} \log \left(\frac {1}{\bar{B}} \left(\log \frac {1}{\bar {w} _ {0 , n} \phi (\bar {w} _ {0 , n})}\right)\right)\right) \\ \leq \max \left(1, \frac {1}{\bar{C}} \log \left(\frac {1}{\bar{B}} \left(\log \frac {1}{\bar {w} _ {0 , n}}\right)\right)\right). \\ \end{array}
$$

Hence, $\phi (\bar{w}_{0,n})\geq 1$ implies that

$$
\begin{array}{l} b _ {t - 1} ^ {(1)} \geq \frac {1}{\bar{C}} \log \left(\frac {1}{\bar{B}} \left(\frac {\log \frac {1}{\bar {w} _ {0 , n} \sqrt {\phi (\bar {w} _ {0 , n})}}}{\max \left(1 , \frac {1}{\bar{C}} \log \left(\frac {1}{\bar{B}} \left(\log \frac {1}{\bar {w} _ {0 , n}}\right)\right)\right)} - \bar {A}\right)\right) \\ \geq \frac {1}{\bar{C}} \log \left(\frac {1}{\bar{B}} \left(\frac {\log \frac {1}{\bar {w} _ {0 , n} \phi (\bar {w} _ {0 , n})}}{\max \left(1 , \frac {1}{\bar{C}} \log \left(\frac {1}{\bar{B}} \left(\log \frac {1}{\bar {w} _ {0 , n}}\right)\right)\right)} - \bar {A} - 2 \bar {C}\right)\right) \quad \text{and} \tag {20} \\ \end{array}
$$

$$
b _ {t - 1} ^ {(2)} \geq \frac {1}{\bar{C}} \log \left(\frac {1}{\bar{B}} \left(\frac {\log \frac {1}{\bar {w} _ {0 , n} \phi (\bar {w} _ {0 , n})}}{\max \left(1 , \frac {1}{\bar{C}} \log \left(\frac {1}{\bar{B}} \left(\log \frac {1}{\bar {w} _ {0 , n}}\right)\right)\right)} - \bar {A} - 2 \bar {C}\right)\right).
$$

We divide the remainder of our proof into four parts.
First we prove each of the two cases in the generic KSD bound (7) in Appendices B.1 and B.2. Next we show in Appendix B.3 that these two cases yield the generic convergence rate (8). Finally, we prove the high probability upper estimate (9) for $w_{0,n}$ under i.i.d. initialization in Appendix B.4.

# B.1 Case $b_{t-1} = 0$

In this case, the error bound (7) follows directly from Theorem 3.

# B.2 Case $b_{t-1} > 0$

We first state and prove a useful lemma.

Lemma 6. Suppose $x = f(\beta)$ for a non-increasing function $f: \mathbb{R} \to \mathbb{R}$ and $\beta = \max(1, f(1))$. Then $x \leq \beta$ and $x \leq f(x)$.

Proof. Because $f$ is non-increasing and $\beta \geq 1$, $x = f(\beta) \leq f(1) \leq \beta$. Since $x \leq \beta$ and $f$ is non-increasing, we further have $f(x) \geq f(\beta) = x$ as advertised.

Since $\psi_{\bar{B},\bar{C}}$ is non-increasing in its third argument, Lemma 6 implies that $b_{t - 1}^{(1)}\leq \beta_{1}$ and

$$
b _ {t - 1} ^ {(1)} \leq \psi_ {\bar {B}, \bar {C}} (\bar {w} _ {0, n} \sqrt {\phi (\bar {w} _ {0 , n})}, \bar {A}, b _ {t - 1} ^ {(1)}).
$$

Rearranging the terms and noting that

$$
\bar {B} < \frac {1}{\beta_ {1}} \log \frac {1}{\bar {w} _ {0 , n} \sqrt {\phi (\bar {w} _ {0 , n})}} - \bar {A} \leq \frac {1}{b _ {t - 1} ^ {(1)}} \log \frac {1}{\bar {w} _ {0 , n} \sqrt {\phi (\bar {w} _ {0 , n})}} - \bar {A}
$$

since $b_{t-1}^{(1)} \geq b_{t-1} > 0$, we have

$$
\bar {w} _ {0, n} \exp \left(b _ {t - 1} ^ {(1)} \left(\bar {A} + \bar {B} \exp \left(\bar {C} b _ {t - 1} ^ {(1)}\right)\right)\right) \leq \frac {1}{\sqrt {\phi \left(\bar {w} _ {0 , n}\right)}}.
\tag {21}
$$

Similarly, we have $b_{t - 1}^{(2)} \leq \psi_{\bar{B},\bar{C}}(\bar{w}_{0,n}\phi(\bar{w}_{0,n}),\bar{A} +2\bar{C},b_{t - 1}^{(2)})$ and

$$
\sqrt {\bar {w} _ {0 , n}} \exp \left(b _ {t - 1} ^ {(2)} \left(2 \bar {C} + \bar {A} + \bar {B} \exp \left(\bar {C} b _ {t - 1} ^ {(2)}\right)\right) / 2\right) \leq \frac {1}{\sqrt {\phi \left(\bar {w} _ {0 , n}\right)}}. \tag {22}
$$

Since $b_{t-1} = \min(b_{t-1}^{(1)}, b_{t-1}^{(2)})$, the inequalities (21) and (22) are also satisfied when $b_{t-1}$ is substituted for $b_{t-1}^{(1)}$ and $b_{t-1}^{(2)}$. Since the error term $a_{t-1}$ (4) is non-decreasing in each of $(w_{0,n}, A, B, C)$, we have

$$
a _ {t - 1} \leq \left(\kappa_ {1} L + \kappa_ {2} d + \kappa_ {1} d ^ {1 / 4} L \sqrt {2 M _ {\mu_ {0} ^ {\infty} , P}}\right) / \sqrt {\phi (\bar {w} _ {0 , n})}.
$$

The claim (7) then follows from this estimate, the lower bounds (20), and Theorem 3.

# B.3 Generic convergence rate

The generic convergence rate (8) holds as, by the lower bounds (20), $b_{t - 1} = \min (b_{t - 1}^{(1)}, b_{t - 1}^{(2)}) > 0$ whenever

$$
e ^ {- (\bar {B} + \bar {A} + 2 \bar {C})} > \bar {w} _ {0, n} \phi (\bar {w} _ {0, n}) \quad \text{and} \quad \bar {B} ^ {(\bar {B} + \bar {A} + 2 \bar {C}) / \bar {C}} > \bar {w} _ {0, n} \phi (\bar {w} _ {0, n}) (\log (1 / \bar {w} _ {0, n})) ^ {(\bar {B} + \bar {A} + 2 \bar {C}) / \bar {C}},
$$

a condition which occurs whenever $\bar{w}_{0,n}$ is sufficiently small since the right-hand side of each inequality converges to zero as $\bar{w}_{0,n} \to 0$.

# B.4 Initializing with i.i.d. particles

We begin by restating an expected Wasserstein bound due to Lei [16].

Lemma 7 (Lei [16, Thm. 3.1]). Suppose $\mu_0^n = \frac{1}{n}\sum_{i=1}^n\delta_{x_i}$ for $x_i \stackrel{i.i.d.}{\sim} \mu_0^\infty$ with $M_{\mu_0^\infty} \triangleq \mathbb{E}_{\mu_0^\infty}[\|\cdot\|_2^2] < \infty$.
Then, for a universal constant $c > 0$,

$$
\mathbb {E} \left[ W _ {1} \left(\mu_ {0} ^ {n}, \mu_ {0} ^ {\infty}\right) \right] \leq c M _ {\mu_ {0} ^ {\infty}} \frac {\log (n) ^ {\mathbb {I} [ d = 2 ]}}{n ^ {1 / (2 \vee d)}}.
$$

Together, Lemma 7 and Markov's inequality imply that

$$
W _ {1} (\mu_ {0} ^ {n}, \mu_ {0} ^ {\infty}) \leq \mathbb {E} [ W _ {1} (\mu_ {0} ^ {n}, \mu_ {0} ^ {\infty}) ] / (c \delta) \leq M _ {\mu_ {0} ^ {\infty}} \frac {\log (n) ^ {\mathbb {I} [ d = 2 ]}}{n ^ {1 / (2 \vee d)}} / \delta
$$

with probability at least $1 - c\delta$, proving the high probability upper estimate (9).
# A Finite-Sample Analysis of Payoff-Based Independent Learning in Zero-Sum Stochastic Games

Zaiwei Chen $^{1,\ast}$ , Kaiqing Zhang $^{2}$ , Eric Mazumdar $^{1,\dagger}$ , Asuman Ozdaglar $^{3}$ , Adam Wierman $^{1,\ddagger}$

 $^{1}$ CMS, Caltech, $^{\ast}$ zchen458@caltech.edu, $^{\dagger}$ mazumdar@caltech.edu, $^{\ddagger}$ adamw@caltech.edu

 $^{2}$ ECE & ISR, University of Maryland, College Park, kaiqing@umd.edu

 $^{3}$ EECS, MIT, asuman@mit.edu

# Abstract

In this work, we study two-player zero-sum stochastic games and develop a variant of the smoothed best-response learning dynamics that combines independent learning dynamics for matrix games with the minimax value iteration for stochastic games. The resulting learning dynamics are payoff-based, convergent, rational, and symmetric between the two players. Our theoretical results present to the best of our knowledge the first last-iterate finite-sample analysis of such independent learning dynamics. To establish the results, we develop a coupled Lyapunov drift approach to capture the evolution of multiple sets of coupled and stochastic iterates, which might be of independent interest.

# 1 Introduction

Recent years have seen remarkable successes in reinforcement learning (RL) in a variety of applications, such as board games [1], autonomous driving [2], robotics [3], and city navigation [4]. A common feature of these applications is that there are multiple decision makers interacting with each other in a common environment. Although empirical successes have shown the potential of multi-agent reinforcement learning (MARL) [5, 6], the training of MARL agents is largely based on heuristics and parameter tuning and, therefore, is not always reliable.
In particular, many practical MARL algorithms are directly extended from their single-agent counterparts and lack guarantees because of the adaptive strategies of multiple agents.

A growing literature seeks to provide theoretical insights to substantiate the empirical success of MARL and inform the design of efficient and provably convergent algorithms. Work along these lines can be broadly categorized into work on cooperative MARL, where agents seek to reach a common goal [7-10], and work on competitive MARL, where agents have individual (and possibly misaligned) objectives [11-22]. While some earlier work focused on providing asymptotic convergence guarantees, more recent work shares an increasing interest in understanding the finite-time/sample behavior. This follows a line of recent advances in establishing finite-sample guarantees for single-agent RL algorithms; see, e.g., [23-26] and many others.

In this paper, we focus on the benchmark setting of two-player zero-sum stochastic games, and develop best-response-type learning dynamics with provable finite-sample guarantees. Crucially, our learning dynamics are independent (requiring no coordination between the agents in learning) and rational (each agent will converge to the best response to the opponent if the opponent plays an (asymptotically) stationary policy [27]), and therefore capture learning in settings with multiple game-theoretic agents. Indeed, learning dynamics with self-interested agents should not enforce information communication or coordination among agents. Furthermore, we focus on the more challenging but practically relevant setting of payoff-based learning, where each agent can only

# 1.1 Contributions

We first consider zero-sum matrix games and provide the last-iterate finite-sample guarantees for the smoothed best-response dynamics proposed in [28]. Then, we extend the algorithmic idea to the setting of stochastic games and develop an algorithm called value iteration with smoothed best-response dynamics (VI-SBR) that also enjoys last-iterate finite-sample convergence.

Two-Player Zero-Sum Matrix Games. We start with the smoothed best-response dynamics in [28] and establish the last-iterate finite-sample bounds when using stepsizes of various decay rates. The result implies a sample complexity of $\mathcal{O}(\epsilon^{-1})$ in terms of the last iterate to find the Nash distribution [29], which is also known as the quantal response equilibrium in the literature [30]. To our knowledge, this is the first last-iterate finite-sample result for best-response learning dynamics that are payoff-based, rational, and symmetric in zero-sum matrix games.

Two-Player Zero-Sum Stochastic Games. Building on the algorithmic ideas for matrix games, we develop best-response-type learning dynamics for stochastic games called VI-SBR, which uses a single trajectory of Markovian samples. Our learning dynamics consist of two loops and can be viewed as a combination of the smoothed best-response dynamics for an induced auxiliary matrix game (conducted in the inner loop) and an independent way of performing minimax value iteration (conducted in the outer loop). In particular, in the inner loop, the iterate of the outer loop, i.e., the value function, is fixed, and the players learn the approximate Nash equilibrium of an auxiliary matrix game induced by the value function; then the outer loop is updated by approximating the minimax value iteration updates for the stochastic game, with only local information.

We establish the last-iterate finite-sample bounds for VI-SBR when using both constant stepsizes and diminishing stepsizes of $\mathcal{O}(1/k)$ decay rate. To the best of our knowledge, this appears to be the first last-iterate finite-sample analysis of best-response-type independent learning dynamics that are convergent and rational for stochastic games. Most existing MARL algorithms are either symmetric across players, but not payoff-based, e.g., [31-35], or not symmetric and thus not rational, e.g., [14, 36-38], or do not have last-iterate finite-time/sample guarantees, e.g., [39, 15, 40].

# 1.2 Challenges & Techniques

The main challenge in analyzing our learning dynamics is that they maintain multiple sets of stochastic iterates and update them in a coupled manner. To overcome this challenge, we develop a novel coupled Lyapunov drift approach. Specifically, we construct a Lyapunov function for each set of the stochastic iterates and establish a Lyapunov drift inequality for each. We then carefully combine the coupled Lyapunov drift inequalities to establish the finite-sample bounds. Although a more detailed analysis is provided in the appendices, we briefly give an overview of the main challenges in analyzing the payoff-based independent learning dynamics in stochastic games, as well as our techniques to overcome them.

Time-Inhomogeneous Markovian Noise. The fact that our learning dynamics are payoff-based presents major challenges in handling the stochastic errors in the update. In particular, due to the best-response nature of the dynamics, the behavior policy for sampling becomes time-varying. In fact, the sample trajectory used for learning forms a time-inhomogeneous Markov chain. This makes it challenging to establish finite-sample guarantees, as time-inhomogeneity prevents us from directly exploiting the uniqueness of stationary distributions and the fast mixing of Markov chains.
Building on existing work [23, 24, 41, 42], we overcome this challenge by tuning the algorithm parameters (in particular, the stepsizes) and developing a refined conditioning argument.

Non-Zero-Sum Payoffs Due to Independent Learning. As illustrated in Section 1.1, the inner loop of VI-SBR is designed to approximately learn the Nash equilibrium of an auxiliary matrix game induced by the value functions for the two players, which we denote by $v_{t}^{1}$ and $v_{t}^{2}$, where $t$ is the iteration index of the outer loop. Importantly, the value functions $v_{t}^{1}$ and $v_{t}^{2}$ are maintained individually by players 1 and 2, and therefore do not necessarily satisfy $v_{t}^{1} + v_{t}^{2} = 0$ due to independent learning. As a result, the auxiliary matrix game from the inner loop does not necessarily admit a zero-sum structure during learning. The error induced from such a non-zero-sum structure appears in existing work [15, 43], and was handled by designing a novel truncated Lyapunov function. However, the truncated Lyapunov function was sufficient to establish the asymptotic convergence, but did not provide the explicit rate at which the induced error goes to zero. To enable finite-sample analysis, we introduce $\| v_t^1 + v_t^2 \|_{\infty}$ as a Lyapunov function in our coupled Lyapunov framework, which is customized to capture the behavior of the induced error from the non-zero-sum structure of the inner-loop auxiliary matrix game.

Coupled Lyapunov Drift Inequalities. In the existing literature on stochastic iterative algorithms [44, 24, 45, 46], when using a Lyapunov approach for finite-sample analysis, once the Lyapunov drift inequality is established, the finite-sample bound follows straightforwardly by repeatedly invoking the result. However, since our learning dynamics (for stochastic games) maintain multiple sets of stochastic iterates and update them in a coupled manner, the Lyapunov drift inequalities we establish are also highly coupled.
Decoupling these Lyapunov inequalities is a major challenge. To overcome it, we develop a systematic approach for decoupling, which crucially relies on a bootstrapping argument. Specifically, we first use the Lyapunov inequalities in a direct way to establish a crude bound for each Lyapunov function. Then, we substitute the crude bounds back into the Lyapunov drift inequalities to obtain tighter bounds. See Appendix B.4 for a more detailed illustration of our decoupling procedure.

# 1.3 Related Work

Due to space constraints, here we only discuss related work in independent learning in matrix games and stochastic games, and single-agent RL. See Appendix A for a more detailed literature review.

Independent Learning in Matrix Games. Independent learning has been well-studied in the literature on learning in matrix games. Fictitious play (FP) [47] may be viewed as the earliest one of this kind, and its convergence analysis for the zero-sum setting is provided in [48]. Smoothed versions of FP have been developed [49, 50] to make the learning dynamics consistent [51, 52]. It was shown that the behavior of smoothed FP is captured by an ODE known as the smoothed best-response dynamics, which were also studied extensively in the literature [53]. Note that the Lyapunov function used to study the smoothed best-response dynamics is the regularized Nash gap [54, 53, 29], a variant of which is also used in our Lyapunov framework. To make the learning dynamics payoff-based, [28] developed a two time-scale variant of the smoothed best-response dynamics and established the asymptotic convergence. Moreover, no-regret learning algorithms, extensively studied in online learning, can also be used as independent learning dynamics for matrix games [55]. It is known that they are both convergent and rational by the definition in [27], and are usually implemented in a symmetric way. See [55] for a detailed introduction to no-regret learning in games.

Independent Learning in Stochastic Games. For stochastic games, independent and symmetric policy gradient methods have been developed in recent years, mostly for the case of Markov potential games [18, 19, 56]. The zero-sum case is more challenging since there is no single Lyapunov function to capture the learning dynamics (which is why we need to develop a coupled Lyapunov approach with multiple Lyapunov functions), such as the potential function in Markov potential games. For non-potential game settings, symmetric variants of policy gradient methods have been proposed, but have only been studied under the full-information setting without finite-sample guarantees [31, 32, 57, 33-35], with the exception of [58, 59]. However, the learning algorithm in [58] requires some coordination between the players when sampling, and is thus not completely independent; that in [59] is extragradient-based and needs some stage-based sampling process that also requires coordination across players. Best-response-type independent learning for stochastic games has been studied recently in [39, 15, 43, 60, 40, 61, 62], with [15, 43, 40, 61] tackling the zero-sum setting. However, only asymptotic convergence was established in these works, which motivated this work.

Finite-Sample Analysis in Single-Agent RL. The most related works (in single-agent RL) to our paper are those that perform finite-sample analysis for RL in infinite-horizon discounted Markov decision processes following a single trajectory of Markovian samples. See [63, 23, 41, 24-26, 64-67, 10] and the references therein. Among these works, [23, 24] established finite-sample bounds for temporal difference (TD)-learning (with linear function approximation), and [25, 64, 65] established finite-sample bounds for $Q$-learning. In both cases, the behavior policy for sampling is a stationary policy.
For non-stationary behavior policies as we consider, [41] established finite-sample bounds for SARSA, an on-policy variant of $Q$-learning, and [10] provided finite-sample bounds for off-policy actor-critic, which is established based on a general framework of contractive stochastic approximation with time-inhomogeneous Markovian noise.

# 2 Zero-Sum Matrix Games

We begin by considering zero-sum matrix games. This section introduces algorithmic and technical ideas that are important for the setting of stochastic games. For $i \in \{1,2\}$, let $\mathcal{A}^i$ be the finite action space of player $i$, and let $R_i \in \mathbb{R}^{|\mathcal{A}^i| \times |\mathcal{A}^{-i}|}$ (where $-i$ denotes the index of player $i$'s opponent) be the payoff matrix of player $i$. Note that in a zero-sum game we have $R_1 + R_2^\top = 0$. Since there are finitely many actions for each player, we assume without loss of generality that $\max_{a^1, a^2} |R_1(a^1, a^2)| \leq 1$. Furthermore, we denote $A_{\mathrm{max}} = \max(|\mathcal{A}^1|, |\mathcal{A}^2|)$.

The decision variables here are the policies $\pi^i \in \Delta(\mathcal{A}^i)$, $i \in \{1,2\}$, where $\Delta(\mathcal{A}^i)$ denotes the probability simplex supported on $\mathcal{A}^i$. Given a joint policy $(\pi^1, \pi^2)$, the expected reward received by player $i$ is $\mathbb{E}_{A^i \sim \pi^i, A^{-i} \sim \pi^{-i}}[R_i(A^i, A^{-i})] = (\pi^i)^\top R_i \pi^{-i}$, where $i \in \{1,2\}$. Both players aim to maximize their rewards against their opponents. Unlike in the single-agent setting, since the performance of player $i$'s policy depends on its opponent $-i$'s policy, there is, in general, no universal optimal policy. Instead, we use the Nash gap and the regularized Nash gap as measurements of the performance of the learning dynamics, as formally defined below.

Definition 2.1 (Nash Gap in Matrix Games).
Given a joint policy $\pi = (\pi^1, \pi^2)$, the Nash gap $\mathrm{NG}(\pi^1, \pi^2)$ is defined as $\mathrm{NG}(\pi^1, \pi^2) = \sum_{i=1,2} \max_{\hat{\pi}^i \in \Delta(\mathcal{A}^i)} (\hat{\pi}^i - \pi^i)^\top R_i \pi^{-i}$.

Note that $\mathrm{NG}(\pi^1, \pi^2) = 0$ if and only if $(\pi^1, \pi^2)$ is a Nash equilibrium of the matrix game (which may not be unique), in which no player has an incentive to change its policy.

Definition 2.2 (Regularized Nash Gap in Matrix Games). Given a joint policy $\pi = (\pi^1, \pi^2)$ and a constant $\tau > 0$, the entropy-regularized Nash gap $\mathrm{NG}_{\tau}(\pi^1, \pi^2)$ is defined as $\mathrm{NG}_{\tau}(\pi^1, \pi^2) = \sum_{i=1,2} \left\{ \max_{\hat{\pi}^i \in \Delta(\mathcal{A}^i)} (\hat{\pi}^i - \pi^i)^{\top} R_i \pi^{-i} + \tau \nu(\hat{\pi}^i) - \tau \nu(\pi^i) \right\}$, where $\nu(\cdot)$ is the entropy function defined as $\nu(\pi^i) = -\sum_{a^i \in \mathcal{A}^i} \pi^i(a^i) \log(\pi^i(a^i))$ for $i \in \{1,2\}$.

A joint policy $(\pi^1, \pi^2)$ satisfying $\mathrm{NG}_{\tau}(\pi^1, \pi^2) = 0$ is called the Nash distribution [29] or the quantal response equilibrium [30], which, unlike Nash equilibria, is unique in zero-sum matrix games. As $\tau$ approaches 0, the corresponding Nash distribution approximates a Nash equilibrium [68].

# 2.1 The Learning Dynamics in Zero-Sum Matrix Games

We start by presenting in Algorithm 1 (from the perspective of player $i$, where $i \in \{1,2\}$) the independent learning dynamics for zero-sum matrix games, which were first proposed in [28]. Given $\tau > 0$, we use $\sigma_{\tau} : \mathbb{R}^{|\mathcal{A}^i|} \mapsto \mathbb{R}^{|\mathcal{A}^i|}$ for the softmax function with temperature $\tau$, that is, $[\sigma_{\tau}(q^i)](a^i) = \exp(q^i(a^i) / \tau) / \sum_{\tilde{a}^i \in \mathcal{A}^i} \exp(q^i(\tilde{a}^i) / \tau)$ for all $a^i \in \mathcal{A}^i$, $q^i \in \mathbb{R}^{|\mathcal{A}^i|}$, and $i \in \{1,2\}$.
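
Both gaps are easy to evaluate numerically for small games: the inner maximum in Definition 2.1 is attained at a pure action, and the entropy-regularized maximum in Definition 2.2 admits the closed form $\tau \log \sum_{a^i} \exp(x(a^i)/\tau)$ (the smoothed maximum of $x = R_i \pi^{-i}$). The following is a minimal sketch; the matching-pennies payoff matrix and the policies below are illustrative choices of ours, not examples taken from the paper.

```python
import math

def smoothed_max(x, tau):
    # max over the simplex of <pi, x> + tau * entropy(pi) = tau * logsumexp(x / tau)
    m = max(x)
    return m + tau * math.log(sum(math.exp((v - m) / tau) for v in x))

def entropy(pi):
    return -sum(p * math.log(p) for p in pi if p > 0)

def matvec(R, pi):
    return [sum(R[a][b] * pi[b] for b in range(len(pi))) for a in range(len(R))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def nash_gaps(R1, pi1, pi2, tau):
    # R2 = -R1^T by the zero-sum assumption R1 + R2^T = 0
    R2 = [[-R1[b][a] for b in range(len(R1))] for a in range(len(R1[0]))]
    ng, ng_tau = 0.0, 0.0
    for R, pi, opp in ((R1, pi1, pi2), (R2, pi2, pi1)):
        x = matvec(R, opp)                 # marginalized payoff R_i pi^{-i}
        ng += max(x) - dot(pi, x)          # best pure deviation (Def. 2.1)
        ng_tau += smoothed_max(x, tau) - dot(pi, x) - tau * entropy(pi)  # Def. 2.2
    return ng, ng_tau

# Matching pennies: the Nash distribution is uniform play for any tau > 0.
R1 = [[1, -1], [-1, 1]]
ng, ng_tau = nash_gaps(R1, [0.5, 0.5], [0.5, 0.5], tau=1.0)
print(ng, ng_tau)  # both gaps vanish at the Nash distribution
```

Any deviation from uniform play makes both gaps strictly positive, consistent with the uniqueness of the Nash distribution noted above.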

# Algorithm 1 Independent Learning Dynamics in Zero-Sum Matrix Games

1: Input: Integer $K$, initializations $q_0^i = 0 \in \mathbb{R}^{|\mathcal{A}^i|}$ and $\pi_0^i = \mathrm{Unif}(\mathcal{A}^i)$.
2: for $k = 0, 1, \dots, K-1$ do
3: $\pi_{k+1}^{i} = \pi_{k}^{i} + \beta_{k}(\sigma_{\tau}(q_{k}^{i}) - \pi_{k}^{i})$
4: Play $A_{k}^{i} \sim \pi_{k+1}^{i}(\cdot)$ (against $A_{k}^{-i}$), and receive reward $R_{i}(A_{k}^{i}, A_{k}^{-i})$.
5: $q_{k+1}^{i}(a^{i}) = q_{k}^{i}(a^{i}) + \alpha_{k}\mathbb{1}_{\{a^{i} = A_{k}^{i}\}}\left(R_{i}(A_{k}^{i}, A_{k}^{-i}) - q_{k}^{i}(A_{k}^{i})\right)$ for all $a^i \in \mathcal{A}^i$
6: end for

To make this paper self-contained, we next provide a detailed interpretation of Algorithm 1, which also motivates our algorithm for stochastic games in Section 3. At a high level, Algorithm 1 can be viewed as a discrete and smoothed variant of the best-response dynamics, where each player constructs an approximation of the best response to its opponent's policy using the $q$-function. The update for the $q$-function is in the spirit of the TD-learning algorithm in RL [69].

The Policy Update. To understand the update equation for the policies (cf. Algorithm 1 Line 3), consider the discrete version of the smoothed best-response dynamics:

$$
\pi_{k+1}^{i} = \pi_{k}^{i} + \beta_{k}\left(\sigma_{\tau}\left(R_{i}\pi_{k}^{-i}\right) - \pi_{k}^{i}\right), \quad i \in \{1, 2\}. \tag{1}
$$

In Eq. (1), each player updates its policy $\pi_k^i$ incrementally towards the smoothed best response to its opponent's current policy. While the dynamics in Eq. (1) provably converge for zero-sum matrix games, see e.g., [70], implementing them requires player $i$ to compute $\sigma_{\tau}(R_i\pi_k^{-i})$. Note that $\sigma_{\tau}(R_i\pi_k^{-i})$ involves the exact knowledge of the opponent's policy and the reward matrix, both of which cannot be accessed in payoff-based independent learning.
This leads to the update equation for the $q$-functions, which estimate the quantity $R_i\pi_k^{-i}$ that is needed for implementing Eq. (1).

The $q$-Function Update. Suppose for now that we are given a stationary joint policy $\pi = (\pi^1, \pi^2)$. Fixing $i \in \{1, 2\}$, the problem of player $i$ estimating $R_i \pi^{-i}$ can be viewed as a policy evaluation problem, which is usually solved with TD-learning in RL [69]. Specifically, the two players repeatedly play the matrix game with the joint policy $\pi = (\pi^1, \pi^2)$ and produce a sequence of joint actions $\{(A_k^1, A_k^2)\}_{k \geq 0}$. Then, player $i$ forms an estimate of $R_i \pi^{-i}$ through the following iterative algorithm:

$$
q_{k+1}^{i}\left(a^{i}\right) = q_{k}^{i}\left(a^{i}\right) + \alpha_{k}\mathbb{1}_{\left\{a^{i} = A_{k}^{i}\right\}}\left(R_{i}\left(A_{k}^{i}, A_{k}^{-i}\right) - q_{k}^{i}\left(A_{k}^{i}\right)\right), \quad \forall a^{i} \in \mathcal{A}^{i}, \tag{2}
$$

with an arbitrary initialization $q_0^i \in \mathbb{R}^{|\mathcal{A}^i|}$, where $\alpha_{k} > 0$ is the stepsize. To understand Eq. (2), suppose that $q_{k}^{i}$ converges to some $\bar{q}^i$. Then Eq. (2) should be "stationary" at the limit point $\bar{q}^i$ in the sense that $\mathbb{E}_{A^i\sim \pi^i(\cdot), A^{-i}\sim \pi^{-i}(\cdot)}[\mathbb{1}_{\{a^i = A^i\}}(R_i(A^i, A^{-i}) - \bar{q}^i(A^i))] = 0$ for all $a^i \in \mathcal{A}^i$, which implies $\bar{q}^i = R_i\pi^{-i}$, as desired. Although Eq. (2) is motivated by the case when the joint policy $(\pi^1, \pi^2)$ is stationary, the joint policy $\pi_k = (\pi_k^1, \pi_k^2)$ from Eq. (1) is time-varying. A natural approach to address this issue is to make sure that the policies evolve at a slower time-scale compared to that of the $q$-functions, so that $\pi_{k}$ is close to being stationary from the perspective of $q_{k}^{i}$.

Remark.
In [28], where Algorithm 1 was first proposed, the authors require $\beta_{k} = o(\alpha_{k})$ to establish the asymptotic convergence, making Algorithm 1 a two time-scale algorithm. In this work, for finite-sample analysis and easier implementation, we update $\pi_k^i$ and $q_{k}^{i}$ on a single time scale with only a multiplicative constant difference in their stepsizes, i.e., $\beta_{k} = c_{\alpha,\beta}\alpha_{k}$ for some $c_{\alpha,\beta} \in (0,1)$.

# 2.2 Finite-Sample Analysis

In this section, we present the finite-sample analysis of Algorithm 1 for the convergence to the Nash distribution [29]. We consider using either constant stepsizes, i.e., $\alpha_{k} \equiv \alpha$ and $\beta_{k} \equiv \beta = c_{\alpha,\beta}\alpha$, or diminishing stepsizes with $\mathcal{O}(1/k)$ decay rate, i.e., $\alpha_{k} = \alpha/(k+h)$ and $\beta_{k} = \beta/(k+h) = c_{\alpha,\beta}\alpha/(k+h)$. Let $\ell_{\tau} = [(A_{\max}-1)\exp(2/\tau)+1]^{-1}$ and $L_{\tau} = \tau/\ell_{\tau} + A_{\max}^{2}/\tau$. The requirement for choosing the stepsizes is stated in the following condition.

Condition 2.1. When using either constant or diminishing stepsizes, we choose $\tau \leq 1$, $\alpha_0 < \frac{2}{\ell_\tau}$, $\beta_0 < \min(2, \frac{\tau}{128A_{\max}^2})$, and $c_{\alpha,\beta} = \beta_k / \alpha_k \leq \min\left(\frac{\tau\ell_\tau^3}{32}, \frac{\ell_\tau\tau^3}{128A_{\max}^2}, \frac{2\sqrt{2}}{L_\tau^{1/2}}\right)$.

We next state the finite-sample bounds of Algorithm 1. See Appendix C for the proof.

Theorem 2.1. Suppose that both players follow the learning dynamics presented in Algorithm 1, and the stepsizes $\{\alpha_{k}\}$ and $\{\beta_k\}$ are chosen such that Condition 2.1 is satisfied. Then we have the following results.

(1) When using constant stepsizes, i.e., $\alpha_{k} \equiv \alpha$ and $\beta_{k} \equiv \beta$, we have

$$
\mathbb{E}\left[NG_{\tau}\left(\pi_{K}^{1}, \pi_{K}^{2}\right)\right] \leq B_{in}\left(1 - \frac{\beta}{4}\right)^{K} + 8L_{\tau}\beta + \frac{64\alpha}{c_{\alpha,\beta}},
$$

where $B_{in} = 4 + 2\tau\log(A_{\mathrm{max}}) + 2A_{\mathrm{max}}$.

(2) When using $\alpha_{k} = \alpha/(k+h)$ and $\beta_{k} = \beta/(k+h)$, by choosing $\beta > 4$, we have

$$
\mathbb{E}\left[NG_{\tau}\left(\pi_{K}^{1}, \pi_{K}^{2}\right)\right] \leq B_{in}\left(\frac{h}{K+h}\right)^{\beta/4} + \left(64eL_{\tau}\beta + \frac{512e\alpha}{c_{\alpha,\beta}}\right)\frac{1}{K+h}.
$$

The convergence bounds in Theorem 2.1 are qualitatively consistent with the existing results on the finite-sample analysis of general stochastic approximation algorithms [44, 46, 24, 65, 23, 42]. Specifically, when using constant stepsizes, the bound consists of a geometrically decaying term (known as the optimization error) and two constant terms (known as the statistical error) that are proportional to the stepsizes. When using diminishing stepsizes with suitable hyperparameters, both the optimization error and the statistical error achieve an $\mathcal{O}(1/K)$ rate of convergence.

Although Theorem 2.1 is stated in terms of the expectation of the regularized Nash gap, it implies the mean-square convergence of the policy iterates $(\pi_k^1, \pi_k^2)$. To see this, note that the regularized Nash gap $\mathrm{NG}_{\tau}(\pi^{1}, \pi^{2})$ has a unique minimizer, i.e., the Nash distribution, denoted by $(\pi_{*,\tau}^{1}, \pi_{*,\tau}^{2})$. In addition, fixing $\pi^1$ (respectively, $\pi^2$), the function $\mathrm{NG}_{\tau}(\pi^{1}, \cdot)$ (respectively, $\mathrm{NG}_{\tau}(\cdot, \pi^{2})$) is a $\tau$-strongly convex function with respect to $\pi^2$ (respectively, $\pi^1$).
See Lemma D.7 for a proof. Therefore, by the quadratic growth property of strongly convex functions, and using $\mathrm{NG}_{\tau}(\pi_{*,\tau}^{1}, \pi_{*,\tau}^{2}) = 0$, we have

$$
\begin{aligned}
\mathrm{NG}_{\tau}\left(\pi_{k}^{1}, \pi_{k}^{2}\right) &= \mathrm{NG}_{\tau}\left(\pi_{k}^{1}, \pi_{k}^{2}\right) - \mathrm{NG}_{\tau}\left(\pi_{*,\tau}^{1}, \pi_{k}^{2}\right) + \mathrm{NG}_{\tau}\left(\pi_{*,\tau}^{1}, \pi_{k}^{2}\right) - \mathrm{NG}_{\tau}\left(\pi_{*,\tau}^{1}, \pi_{*,\tau}^{2}\right) \\
&\geq \frac{\tau}{2}\left(\|\pi_{k}^{1} - \pi_{*,\tau}^{1}\|_{2}^{2} + \|\pi_{k}^{2} - \pi_{*,\tau}^{2}\|_{2}^{2}\right).
\end{aligned}
$$

As a result, up to a multiplicative constant, the convergence bound for $\mathbb{E}[\mathrm{NG}_{\tau}(\pi_k^1, \pi_k^2)]$ implies a convergence bound on $\mathbb{E}[\|\pi_k^1 - \pi_{*,\tau}^1\|_2^2] + \mathbb{E}[\|\pi_k^2 - \pi_{*,\tau}^2\|_2^2]$.

Based on Theorem 2.1, we next derive the sample complexity of Algorithm 1 in the following corollary. See Appendix C.5 for the proof.

Corollary 2.1.1. Given $\epsilon > 0$, to achieve $\mathbb{E}[NG_{\tau}(\pi_K^1, \pi_K^2)] \leq \epsilon$, the sample complexity is $\mathcal{O}(\epsilon^{-1})$.

To the best of our knowledge, Theorem 2.1 and Corollary 2.1.1 present the first last-iterate finite-sample analysis of Algorithm 1 [28]. Importantly, with only feedback in the form of realized payoffs, we achieve a sample complexity of $\mathcal{O}(\epsilon^{-1})$ to find the Nash distribution. In general, for smooth and strongly monotone games, the lower bound for the sample complexity of payoff-based or zeroth-order algorithms is $\mathcal{O}(\epsilon^{-2})$ [71]. We have an improved $\mathcal{O}(\epsilon^{-1})$ sample complexity due to the bilinear structure of the game (up to a regularizer).
In particular, with bandit feedback, the $q$-function is constructed as an efficient estimator for the marginalized payoff $R_{i}\pi_{k}^{-i}$, which can also be interpreted as the gradient. Therefore, Algorithm 1 enjoys the fast $\mathcal{O}(\epsilon^{-1})$ sample complexity that is comparable to that of first-order methods [72].

The Dependence on the Temperature $\tau$. Although our finite-sample bound enjoys the $\mathcal{O}(1/K)$ rate of convergence, the stepsize ratio $c_{\alpha,\beta}$ appears as $c_{\alpha,\beta}^{-1}$ in the bound. Since $c_{\alpha,\beta} = o(\ell_{\tau})$ (cf. Condition 2.1) and $\ell_{\tau}$ is exponentially small in $\tau$, the finite-sample bound is actually exponentially large in $\tau^{-1}$. To illustrate this phenomenon, consider the update equation for the $q$-functions (cf. Algorithm 1 Line 5). Observe that the $q$-functions are updated asynchronously because only one component (the one corresponding to the action taken at time step $k$) of the vector-valued $q_k^i$ is updated in the $k$-th iteration. Suppose that an action $a^i$ is never taken in the algorithm trajectory, which means that $q_k^i(a^i)$ is never updated during learning. Then, in general, we cannot expect the convergence of $q_k^i$ or $\pi_k^i$. Similarly, if an action is rarely taken in the learning dynamics, we would expect the overall convergence rate to be slow. Therefore, the finite-sample bound should depend on the quantity $\min_{i \in \{1,2\}} \min_{0 \leq k \leq K} \min_{a^i} \pi_k^i(a^i)$, which captures the exploration abilities of Algorithm 1. Due to the exponential nature of softmax functions, the parameter $\ell_{\tau}$, which we establish in Lemma C.2 as a lower bound on $\min_{i \in \{1,2\}} \min_{0 \leq k \leq K} \min_{a^i} \pi_k^i(a^i)$, is also exponentially small in $\tau$. This eventually leads to the exponential dependence on $\tau^{-1}$ in the finite-sample bound.
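
This blow-up can be made concrete by plugging numbers into the constants from Condition 2.1; the sketch below does so for $A_{\max} = 2$, an illustrative choice of ours.

```python
import math

def ell(tau, A_max=2):
    # ell_tau = [(A_max - 1) * exp(2 / tau) + 1]^{-1}, the lower bound on the
    # smallest policy probability used in the analysis (cf. Lemma C.2)
    return 1.0 / ((A_max - 1) * math.exp(2.0 / tau) + 1.0)

def L(tau, A_max=2):
    # the constant L_tau = tau / ell_tau + A_max^2 / tau
    return tau / ell(tau, A_max) + A_max ** 2 / tau

for tau in (1.0, 0.5, 0.25):
    l = ell(tau)
    # one of the upper bounds on c_{alpha,beta} imposed by Condition 2.1
    c_bound = tau * l ** 3 / 32
    print(f"tau={tau}: ell_tau={l:.2e}, L_tau={L(tau):.2e}, c_bound={c_bound:.2e}")
```

Halving $\tau$ roughly squares the $e^{-2/\tau}$ factor, so the admissible stepsize ratio $c_{\alpha,\beta}$ (and with it the $c_{\alpha,\beta}^{-1}$ term in the bound) degrades much faster than geometrically as $\tau \to 0$.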

A consequence of having such an exponential factor of $\tau^{-1}$ in the sample complexity bound is that, if we want to have convergence to a Nash equilibrium rather than to the Nash distribution, the sample complexity can be exponentially large. To see this, note that the following bound holds regarding the Nash gap and the regularized Nash gap:

$$
\mathrm{NG}\left(\pi^{1}, \pi^{2}\right) \leq \mathrm{NG}_{\tau}\left(\pi^{1}, \pi^{2}\right) + 2\tau\log\left(A_{\max}\right), \quad \forall (\pi^{1}, \pi^{2}), \tag{3}
$$

which, after combining with Theorem 2.1, gives the following corollary. For simplicity of presentation, we only state the result for using constant stepsizes.

Corollary 2.1.2. Under the same conditions stated in Theorem 2.1 (1), we have

$$
\mathbb{E}\left[NG\left(\pi_{K}^{1}, \pi_{K}^{2}\right)\right] \leq B_{in}\left(1 - \frac{\beta}{4}\right)^{K} + 8L_{\tau}\beta + \frac{64\alpha}{c_{\alpha,\beta}} + 2\tau\log\left(A_{\max}\right). \tag{4}
$$

The last term on the RHS of Eq. (4) can be viewed as the bias due to using the smoothed best response. In view of Eq. (4), to achieve $\mathbb{E}[\mathrm{NG}(\pi_K^1, \pi_K^2)] \leq \epsilon$, we need $\tau = \mathcal{O}(\epsilon)$. Since $c_{\alpha,\beta}$ appears in the denominator of our finite-sample bound and is exponentially small in $\tau$, the overall sample complexity
We kept the softmax policy without further modification to preserve the "naturalness" of the learning dynamics, which is part of the motivation for studying independent learning in games [73]. A future direction of this work is to remove such an exponential dependence on $\tau$ by designing an improved exploration strategy. + +# 3 Zero-Sum Stochastic Games + +Moving to the setting of stochastic games, we consider an infinite-horizon discounted two-player zero-sum stochastic game $\mathcal{M} = (\mathcal{S},\mathcal{A}^1,\mathcal{A}^2,p,R_1,R_2,\gamma)$ , where $\mathcal{S}$ is a finite state space, $\mathcal{A}^1$ (respectively, $\mathcal{A}^2$ ) is a finite action space for player 1 (respectively, player 2), $p$ represents the transition probabilities, in particular, $p(s' \mid s,a^1,a^2)$ is the probability of transitioning to state $s'$ after player 1 taking action $a^1$ and player 2 taking action $a^2$ simultaneously at state $s$ , $R_1: \mathcal{S} \times \mathcal{A}^1 \times \mathcal{A}^2 \mapsto \mathbb{R}$ (respectively, $R_2: \mathcal{S} \times \mathcal{A}^2 \times \mathcal{A}^1 \mapsto \mathbb{R}$ ) is player 1's (respectively, player 2's) reward function, and $\gamma \in (0,1)$ is the discount factor. Note that we have $R_1(s,a^1,a^2) + R_2(s,a^2,a^1) = 0$ for all $(s,a^1,a^2)$ . We assume without loss of generality that $\max_{s,a^1,a^2}|R_1(s,a^1,a^2)| \leq 1$ , and denote $A_{\mathrm{max}} = \max(|\mathcal{A}^1|,|\mathcal{A}^2|)$ . 

Given a joint policy $\pi = (\pi^1, \pi^2)$, where $\pi^i: \mathcal{S} \mapsto \Delta(\mathcal{A}^i)$, $i \in \{1, 2\}$, we define the local $q$-function $q_{\pi}^{i} \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}^{i}|}$ of player $i$ as $q_{\pi}^{i}(s, a^{i}) = \mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty} \gamma^{k} R_{i}(S_{k}, A_{k}^{i}, A_{k}^{-i}) \mid S_{0} = s, A_{0}^{i} = a^{i}\right]$ for all $(s, a^{i})$, where we use the notation $\mathbb{E}_{\pi}[\cdot]$ to indicate that the actions are chosen according to the joint policy $\pi$. In addition, we define the global value function $v_{\pi}^{i} \in \mathbb{R}^{|\mathcal{S}|}$ as $v_{\pi}^{i}(s) = \mathbb{E}_{A^{i} \sim \pi^{i}(\cdot|s)}[q_{\pi}^{i}(s, A^{i})]$ for all $s$, and the expected value function $U^{i}(\pi^{i}, \pi^{-i}) \in \mathbb{R}$ as $U^{i}(\pi^{i}, \pi^{-i}) = \mathbb{E}_{S \sim p_{o}}[v_{\pi}^{i}(S)]$, where $p_{o} \in \Delta(\mathcal{S})$ is an arbitrary initial distribution on the states. The Nash gap in the case of stochastic games is defined in the following.

Definition 3.1 (Nash Gap in Zero-Sum Stochastic Games). Given a joint policy $\pi = (\pi^1, \pi^2)$, the Nash gap $\mathrm{NG}(\pi^1, \pi^2)$ is defined as $\mathrm{NG}(\pi^1, \pi^2) = \sum_{i=1,2} \left( \max_{\hat{\pi}^i} U^i(\hat{\pi}^i, \pi^{-i}) - U^i(\pi^i, \pi^{-i}) \right)$.

Similar to the matrix-game setting, a joint policy $\pi = (\pi^1, \pi^2)$ satisfying $\mathrm{NG}(\pi^1, \pi^2) = 0$ is called a Nash equilibrium, which may not be unique.

Additional Notation. In what follows, we will frequently work with real vectors in $\mathbb{R}^{|\mathcal{S}||\mathcal{A}^i|}$, $\mathbb{R}^{|\mathcal{S}||\mathcal{A}^{-i}|}$, and $\mathbb{R}^{|\mathcal{S}||\mathcal{A}^i||\mathcal{A}^{-i}|}$, where $i \in \{1,2\}$. To simplify the notation, for any $x \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}^i||\mathcal{A}^{-i}|}$ we use $x(s)$ to denote the $|\mathcal{A}^i| \times |\mathcal{A}^{-i}|$ matrix with the $(a^i, a^{-i})$-th entry being $x(s, a^i, a^{-i})$.
For any $y \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}^i|}$, we use $y(s)$ to denote the $|\mathcal{A}^i|$-dimensional vector with its $a^i$-th entry being $y(s, a^i)$.

# 3.1 Value Iteration with Smoothed Best-Response Dynamics

Our learning dynamics for stochastic games (cf. Algorithm 2) build on the dynamics for matrix games studied in Section 2.1, with the additional incorporation of minimax value iteration, a well-known approach for solving zero-sum stochastic games [74].

Algorithmic Ideas. To motivate the learning dynamics, we first introduce the minimax value iteration. For $i \in \{1,2\}$, let $\mathcal{T}^i: \mathbb{R}^{|\mathcal{S}|} \mapsto \mathbb{R}^{|\mathcal{S}||\mathcal{A}^i||\mathcal{A}^{-i}|}$ be an operator defined as

$$
\mathcal{T}^{i}(v)\left(s, a^{i}, a^{-i}\right) = R_{i}\left(s, a^{i}, a^{-i}\right) + \gamma\mathbb{E}\left[v\left(S_{1}\right) \mid S_{0} = s, A_{0}^{i} = a^{i}, A_{0}^{-i} = a^{-i}\right]
$$

for all $(s, a^i, a^{-i})$ and $v \in \mathbb{R}^{|\mathcal{S}|}$. Given $X \in \mathbb{R}^{|\mathcal{A}^i| \times |\mathcal{A}^{-i}|}$, we define $\mathrm{val}^{i}: \mathbb{R}^{|\mathcal{A}^{i}| \times |\mathcal{A}^{-i}|} \mapsto \mathbb{R}$ as

$$
\mathrm{val}^{i}(X) = \max_{\mu^{i} \in \Delta(\mathcal{A}^{i})} \min_{\mu^{-i} \in \Delta(\mathcal{A}^{-i})} \{(\mu^{i})^{\top} X \mu^{-i}\} = \min_{\mu^{-i} \in \Delta(\mathcal{A}^{-i})} \max_{\mu^{i} \in \Delta(\mathcal{A}^{i})} \{(\mu^{i})^{\top} X \mu^{-i}\}.
$$

Then, the minimax Bellman operator $\mathcal{B}^i: \mathbb{R}^{|\mathcal{S}|} \mapsto \mathbb{R}^{|\mathcal{S}|}$ is defined as $[\mathcal{B}^i(v)](s) = \mathrm{val}^i(\mathcal{T}^i(v)(s))$ for all $s \in \mathcal{S}$, where $\mathcal{T}^i(v)(s)$ is an $|\mathcal{A}^i| \times |\mathcal{A}^{-i}|$ matrix according to our notation.
It is known that the operator $\mathcal{B}^i (\cdot)$ is a $\gamma$-contraction mapping with respect to the $\ell_{\infty}$-norm [74], hence it admits a unique fixed point, which we denote by $v_{*}^{i}$.

A common approach for solving zero-sum stochastic games is to first implement the minimax value iteration $v_{t+1}^{i} = \mathcal{B}^{i}(v_{t}^{i})$ until (approximate) convergence to $v_{*}^{i}$ [75], and then solve the matrix game $\max_{\mu^{i} \in \Delta(\mathcal{A}^{i})} \min_{\mu^{-i} \in \Delta(\mathcal{A}^{-i})} (\mu^{i})^{\top} \mathcal{T}^{i}(v_{*}^{i})(s) \mu^{-i}$ for each state $s$ to obtain an (approximate) Nash equilibrium policy. However, implementing this algorithm requires complete knowledge of the underlying transition probabilities. Moreover, since it is an off-policy algorithm, the output is independent of the opponent's policy. Thus, it is not rational by the definition in [27]. To develop a model-free and rational learning dynamics, let us first rewrite the minimax value iteration:

$$
v_{t+1}^{i} = \hat{v}, \quad \text{where } \hat{v}(s) = \max_{\mu^{i} \in \Delta(\mathcal{A}^{i})} \min_{\mu^{-i} \in \Delta(\mathcal{A}^{-i})} \left(\mu^{i}\right)^{\top} \mathcal{T}^{i}\left(v_{t}^{i}\right)(s) \mu^{-i}, \quad \forall s \in \mathcal{S}, \tag{5}
$$

where $\hat{v} \in \mathbb{R}^{|S|}$ is a dummy variable. In view of Eq. (5), we need to solve a matrix game with payoff matrix $\mathcal{T}^i(v_t^i)(s)$ for each state $s$ and then update the value of the game to $v_{t+1}^i(s)$. In light of Algorithm 1, we already know how to solve matrix games with independent learning. Thus, what remains is to combine Algorithm 1 with value iteration. This leads to Algorithm 2, which is presented from player $i$'s perspective, where $i \in \{1,2\}$.
Algorithm 2 Value Iteration with Smoothed Best-Response (VI-SBR) Dynamics
1: Input: Integers $K$ and $T$, initializations $v_0^i = 0 \in \mathbb{R}^{|S|}$, $q_{0,0}^i = 0 \in \mathbb{R}^{|S||A^i|}$, and $\pi_{0,0}^i (\cdot |s) = \mathrm{Unif}(\mathcal{A}^i)$ for all $s \in S$.
2: for $t = 0,1,\dots ,T - 1$ do
3: for $k = 0,1,\dots ,K - 1$ do
4: $\pi_{t,k + 1}^i (s) = \pi_{t,k}^i (s) + \beta_k(\sigma_\tau (q_{t,k}^i (s)) - \pi_{t,k}^i (s))$ for all $s \in S$
5: Play $A_k^i \sim \pi_{t,k + 1}^i (\cdot |S_k)$ (against $A_k^{-i}$) and observe $S_{k + 1} \sim p(\cdot |S_k, A_k^i, A_k^{-i})$
6: $q_{t,k + 1}^i (s,a^i) = q_{t,k}^i (s,a^i) + \alpha_k\mathbb{1}_{\{(s,a^i) = (S_k,A_k^i)\}}(R_i(S_k,A_k^i,A_k^{-i}) + \gamma v_t^i (S_{k + 1}) - q_{t,k}^i (S_k,A_k^i))$ for all $(s,a^i)$
7: end for
8: $v_{t + 1}^i (s) = \pi_{t,K}^i (s)^\top q_{t,K}^i (s)$ for all $s \in S$
9: $S_0 = S_K$, $q_{t + 1,0}^i = q_{t,K}^i$, and $\pi_{t + 1,0}^i = \pi_{t,K}^i$
10: end for

Algorithm Details. The inner loop of Algorithm 2 is designed to solve a matrix game with payoff matrices $\mathcal{T}^1 (v_t^1)(s)$ and $\mathcal{T}^2 (v_t^2)(s)$ for each state $s\in S$, and it reduces to Algorithm 1 when (1) the stochastic game has only one state, and (2) $v_{t}^{1} = v_{t}^{2} = 0$. However, since $v_{t}^{1}$ and $v_{t}^{2}$ are independently maintained by players 1 and 2, the quantity

$$
\mathcal{T}^{1}(v_{t}^{1})(s, a^{1}, a^{2}) + \mathcal{T}^{2}(v_{t}^{2})(s, a^{2}, a^{1}) = \gamma \sum_{s^{\prime}} p(s^{\prime} \mid s, a^{1}, a^{2})\left(v_{t}^{1}(s^{\prime}) + v_{t}^{2}(s^{\prime})\right)
$$

is in general non-zero during learning.
As a result, the auxiliary matrix game (with payoff matrices $\mathcal{T}^1(v_t^1)(s)$ and $\mathcal{T}^2(v_t^2)(s)$) at state $s$ that the inner loop of Algorithm 2 is designed to solve is not necessarily a zero-sum matrix game, which presents a major challenge in the finite-sample analysis, as illustrated previously in Section 1.2.

The outer loop of Algorithm 2 is an "on-policy" variant of minimax value iteration. To see this, note that, ideally, we would synchronize $v_{t+1}^i(s)$ with $\pi_{t,K}^i(s)^\top \mathcal{T}^i(v_t^i)(s)\pi_{t,K}^{-i}(s)$, which is an approximation of $[\mathcal{B}^i(v_t^i)](s) = \mathrm{val}^i(\mathcal{T}^i(v_t^i)(s))$ by design of our inner loop. However, player $i$ has no access to $\pi_{t,K}^{-i}$ in independent learning. Fortunately, the $q$-function $q_{t,K}^i$ is precisely constructed as an estimate of $\mathcal{T}^i(v_t^i)(s)\pi_{t,K}^{-i}(s)$, as illustrated in Section 2.1, which leads to the outer loop of Algorithm 2. In Algorithm 2 Line 9, we set $S_0 = S_K$ to ensure that the initial state of the next inner loop is the last state of the previous one; hence Algorithm 2 is driven by a single trajectory of Markovian samples.

# 3.2 Finite-Sample Analysis

We now state our main results, which, to the best of our knowledge, provide the first last-iterate finite-sample bound for best-response-type payoff-based independent learning dynamics in zero-sum stochastic games. Our results are based on the following assumption.

Assumption 3.1. There exists a joint policy $\pi_{b} = (\pi_{b}^{1},\pi_{b}^{2})$ such that the Markov chain $\{S_k\}_{k\geq 0}$ induced by $\pi_{b}$ is irreducible and aperiodic.

One challenge in our finite-sample analysis is that the behavior policies used for taking the actions are time-varying, due to the best-response nature of the dynamics.
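The single-trajectory, time-varying nature of the behavior policies can be made concrete with a small simulation of Algorithm 2. The sketch below runs both players on a toy two-state zero-sum game; the payoffs, uniform transitions, temperature, constant step sizes, and loop lengths are illustrative choices only and deliberately ignore Condition 3.1:

```python
import math, random

random.seed(0)
gamma, tau = 0.8, 0.5          # discount factor and temperature (illustrative)
alpha, beta = 0.1, 0.01        # constant inner-loop step sizes (illustrative)
S, A = 2, 2                    # two states, two actions per player

# Zero-sum rewards R_1 = -R_2 of a toy game (hypothetical payoffs).
R1 = [[[1.0, -1.0], [-1.0, 1.0]],
      [[0.5, -0.5], [-0.5, 0.5]]]

def softmax(q):
    m = max(q)
    e = [math.exp((qi - m) / tau) for qi in q]
    z = sum(e)
    return [ei / z for ei in e]

def sample(p):
    u, c = random.random(), 0.0
    for a, pa in enumerate(p):
        c += pa
        if u < c:
            return a
    return len(p) - 1

v = [[0.0] * S for _ in range(2)]                      # v_t^i
q = [[[0.0] * A for _ in range(S)] for _ in range(2)]  # q_{t,k}^i
pi = [[[1.0 / A] * A for _ in range(S)] for _ in range(2)]
s = 0
for t in range(20):                                    # outer loop
    for k in range(200):                               # inner loop
        for i in range(2):                             # Line 4: smoothed best response, all states
            for ss in range(S):
                br = softmax(q[i][ss])
                pi[i][ss] = [(1 - beta) * p + beta * b
                             for p, b in zip(pi[i][ss], br)]
        a1, a2 = sample(pi[0][s]), sample(pi[1][s])    # Line 5: play one step
        s2 = random.randrange(S)                       # uniform transitions, for simplicity
        r = (R1[s][a1][a2], -R1[s][a1][a2])
        for i, ai in enumerate((a1, a2)):              # Line 6: q-update at (S_k, A_k^i) only
            q[i][s][ai] += alpha * (r[i] + gamma * v[i][s2] - q[i][s][ai])
        s = s2
    for i in range(2):                                 # Line 8: value update
        for ss in range(S):
            v[i][ss] = sum(p * qa for p, qa in zip(pi[i][ss], q[i][ss]))
```

Two invariants of the dynamics are easy to verify in this sketch: the policy update in Line 4 is a convex combination, so each $\pi^i(\cdot|s)$ stays on the simplex with strictly positive entries, and with rewards in $[-1,1]$ every $q$-value remains bounded by $1/(1-\gamma)$.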
Most, if not all, existing finite-sample guarantees of RL algorithms under time-varying behavior policies assume that the induced Markov chain of any policy, or any policy encountered along the algorithm trajectory, is uniformly geometrically ergodic [41, 76, 77, 59, 78-80]. Assumption 3.1 is weaker, since it assumes only the existence of one policy that induces an irreducible and aperiodic Markov chain.

We consider using either constant step sizes, i.e., $\alpha_{k} \equiv \alpha$ and $\beta_{k} \equiv \beta = c_{\alpha,\beta}\alpha$, or diminishing step sizes of $\mathcal{O}(1/k)$ decay rate, i.e., $\alpha_{k} = \alpha/(k+h)$ and $\beta_{k} = \beta/(k+h) = c_{\alpha,\beta}\alpha/(k+h)$, where $c_{\alpha,\beta} \in (0,1)$ is the step size ratio. In the stochastic-game setting, we redefine $\ell_{\tau} = [1 + (A_{\max}-1)\exp(2/[(1-\gamma)\tau])]^{-1}$, which, analogous to the matrix-game setting, is a uniform lower bound on the entries of the policies generated by Algorithm 2 (cf. Lemma D.1). We next state our requirement for choosing the step sizes.

Condition 3.1. When using either constant or diminishing step sizes, we choose $\tau \leq 1 / (1 - \gamma)$ and the step size ratio $c_{\alpha,\beta}$ to satisfy $c_{\alpha,\beta} \leq \min \left(\frac{1}{60L_p|S|A_{\max}}, \frac{c_\tau\tau(1-\gamma)^2}{34|S|A_{\max}^2}, \frac{c_\tau\ell_\tau^2\tau^3(1-\gamma)^2}{144A_{\max}^2}\right)$, where $c_\tau \propto \ell_\tau$ and $L_p > 0$ are defined in Appendix B.3. In addition, when using $\alpha_k \equiv \alpha$ and $\beta_k \equiv \beta$, we require $\alpha < 1/c_\tau$ and $\beta < 1$, and when using $\alpha_k = \alpha/(k+h)$ and $\beta_k = \beta/(k+h)$, we require $\beta = 4$, $\alpha > 1/c_\tau$, and $h > 1$ such that $\alpha_0 < 1/c_\tau$ and $\beta_0 < 1$.

We next state the finite-sample bound of Algorithm 2. For simplicity of presentation, we use $a \lesssim b$ to mean that there exists an absolute constant $c > 0$ such that $a \leq bc$.
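The role of $\ell_\tau$ can be sanity-checked numerically: when the softmax $\sigma_\tau$ is applied to $q$-values lying in $[-1/(1-\gamma), 1/(1-\gamma)]$, so that the spread of its argument is at most $2/(1-\gamma)$, every entry of the resulting distribution is at least $\ell_\tau$. A small check with arbitrarily chosen parameters:

```python
import math, random

random.seed(1)
gamma, tau, A_max = 0.5, 1.0, 4
# Redefined lower bound from the stochastic-game setting.
ell_tau = 1.0 / (1.0 + (A_max - 1) * math.exp(2.0 / ((1.0 - gamma) * tau)))

def softmax(q, tau):
    m = max(q)
    e = [math.exp((qi - m) / tau) for qi in q]
    z = sum(e)
    return [ei / z for ei in e]

bound = 1.0 / (1.0 - gamma)          # q-values are confined to [-bound, bound]
worst = 1.0                          # smallest policy entry seen over random draws
for _ in range(1000):
    q = [random.uniform(-bound, bound) for _ in range(A_max)]
    worst = min(worst, min(softmax(q, tau)))
```

Since each softmax entry equals $1/\sum_b \exp((q_b - q_a)/\tau) \geq 1/(1 + (A_{\max}-1)\exp(2/[(1-\gamma)\tau]))$, the observed minimum never falls below $\ell_\tau$.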
Theorem 3.1. Suppose that both players follow Algorithm 2, Assumption 3.1 is satisfied, and the step sizes $\{\alpha_k\}$ and $\{\beta_k\}$ satisfy Condition 3.1. Then, we have the following results.

(1) When using constant step sizes, there exists $z_{\beta} = \mathcal{O}(\log (1 / \beta))$ such that the following inequality holds as long as $K \geq z_{\beta}$:

$$
\begin{aligned}
\mathbb{E}[\mathrm{NG}(\pi_{T,K}^{1}, \pi_{T,K}^{2})] \lesssim{} & \underbrace{\frac{A_{\max}^{2} T}{\tau (1-\gamma)^{3}}\left(\frac{1+\gamma}{2}\right)^{T-1}}_{:=\mathcal{E}_{1}} + \underbrace{\frac{A_{\max}^{2} L_{in} (K - z_{\beta})^{1/2}}{\tau (1-\gamma)^{4}}\left(1 - \frac{\beta}{2}\right)^{\frac{K - z_{\beta} - 1}{2}}}_{:=\mathcal{E}_{2}} \\
& + \underbrace{\frac{|\mathcal{S}| A_{\max}}{(1-\gamma)^{4} c_{\alpha,\beta}} z_{\beta}^{2} \alpha^{1/2}}_{:=\mathcal{E}_{3}} + \underbrace{\frac{\tau \log(A_{\max})}{(1-\gamma)^{2}}}_{:=\mathcal{E}_{4}},
\end{aligned}
$$

where $L_{in} = \frac{4}{1-\gamma} + 2\tau \log(A_{\max}) + \frac{8|\mathcal{S}|A_{\max}}{(1-\gamma)^{2}}$.

(2) When using $\alpha_{k} = \alpha /(k + h)$ and $\beta_{k} = \beta /(k + h)$, there exists $k_0 > 0$ such that the following inequality holds as long as $K\geq k_0$:

$$
\mathbb{E}[\mathrm{NG}(\pi_{T,K}^{1}, \pi_{T,K}^{2})] \lesssim \frac{A_{\max}^{2} T}{\tau (1-\gamma)^{3}}\left(\frac{1+\gamma}{2}\right)^{T-1} + \frac{L_{in}|\mathcal{S}| A_{\max} z_{K}^{2} \alpha_{K}^{1/2}}{(1-\gamma)^{4} \alpha_{k_{0}}^{1/2} c_{\alpha,\beta}} + \frac{\tau \log(A_{\max})}{(1-\gamma)^{2}},
$$

where $z_{K} = \mathcal{O}(\log(K))$.

Remark.
Analogous to [29, 15], our learning dynamics are symmetric in the sense that there is no time-scale separation between the two players; that is, both players implement the algorithm with the same step sizes.

A detailed proof sketch of Theorem 3.1 is provided in Appendix B and the complete proof is provided in Appendix D. Next, we discuss the result in Theorem 3.1 (1). The bound in Theorem 3.1 (1) involves a value iteration error term $\mathcal{E}_1$, an optimization error term $\mathcal{E}_2$, a statistical error term $\mathcal{E}_3$, and a smoothing bias term $\mathcal{E}_4$ due to the use of smoothed best response in the learning dynamics. Note that $\mathcal{E}_1$ would be the only error term if we were able to perform minimax value iteration to solve the game. Since minimax value iteration converges geometrically, the term $\mathcal{E}_1$ also goes to zero at a geometric rate. Notably, the terms $\mathcal{E}_2$ and $\mathcal{E}_3$ are orderwise larger compared to their matrix-game counterparts; see Corollary 2.1.2. Intuitively, the reason is that the induced auxiliary matrix game (with payoff matrices $\mathcal{T}^1(v_t^1)(s)$ and $\mathcal{T}^2(v_t^2)(s)$) that the inner loop of Algorithm 2 aims at solving does not necessarily have a zero-sum structure (see the discussion in Section 3.1 after Algorithm 2). Consequently, the error due to such a "non-zero-sum" structure propagates through the algorithm and eventually undermines the convergence bound.

Recall that in the matrix-game setting, we proved convergence to the Nash distribution (or the Nash equilibrium of the entropy-regularized matrix game). In the stochastic-game setting, we do not have convergence to the Nash equilibrium of the entropy-regularized stochastic game.
The main reason is that, in order to obtain such a convergence, our outer loop would have to approximate the entropy-regularized minimax value iteration rather than the vanilla minimax value iteration in Algorithm 2 Line 8. However, in the payoff-based setting, since each player does not even observe the actions of its opponent, it is unclear how to construct an estimator of the entropy function of the opponent's policy, which is an interesting future direction to investigate.

Although the transient terms in Theorem 3.1 enjoy a desirable rate of convergence (e.g., geometric in $T$ and $\tilde{\mathcal{O}}(1 / K^{1/2})$ in $K$), the step size ratio $c_{\alpha, \beta}$ (which is exponentially small in $\tau^{-1}$) appears as $c_{\alpha, \beta}^{-1}$ in the bound; see Theorem 3.1. Therefore, due to the presence of the smoothing bias (i.e., the term $\mathcal{E}_4$ on the RHS of the bound in Theorem 3.1 (1)), to achieve $\mathbb{E}[\mathrm{NG}(\pi_{T,K}^1, \pi_{T,K}^2)] \leq \epsilon$, the overall sample complexity can be exponentially large in $\epsilon^{-1}$. This is analogous to Corollary 2.1.2 for zero-sum matrix games. As illustrated in detail in Section 2, the reason is the limited exploration induced by using softmax as a means for smoothed best response, which we kept without further modification to preserve the naturalness of the learning dynamics. Removing such exponential factors by developing improved exploration strategies is an immediate future direction.

Finally, we consider the case where the opponent of player $i$ (where $i \in \{1, 2\}$) plays the game with a stationary policy and provide a finite-sample bound for player $i$ to find the best response.

Corollary 3.1.1. [Rationality$^3$] Given $i \in \{1,2\}$, suppose that player $i$ follows the learning dynamics presented in Algorithm 2, but its opponent, player $-i$, follows a stationary policy, denoted by $\pi^{-i}$.
Then, we have $\max_{\hat{\pi}^i} U^i(\hat{\pi}^i, \pi^{-i}) - \mathbb{E}[U^i(\pi_{T,K}^i, \pi^{-i})] \leq \tilde{\mathcal{O}}\left(\omega_1 T\left(\frac{\gamma + 1}{2}\right)^T + \frac{\omega_2}{K^{1/2}} + \tau\right)$, where $\omega_1$ and $\omega_2$ are constants that are exponential in $\tau^{-1}$, but polynomial in $|\mathcal{S}|$, $A_{\max}$, and $1/(1 - \gamma)$.

Intuitively, the reason that our algorithm is rational is that it performs an on-policy update in RL. In contrast to an off-policy update, where the behavior policy can be arbitrarily different from the policy being generated during learning (such as in $Q$-learning [81]), in the on-policy update for games, each player actually plays with a policy that is moving towards the best response to its opponent. As a result, when the opponent's policy is stationary, the problem reduces to a single-agent problem and the player naturally finds the best response (again up to a smoothing bias). This is an advantage of using symmetric and independent learning dynamics. One challenge of analyzing such on-policy learning dynamics is that the behavior policy is time-varying.

# 4 Conclusion and Future Directions

In this work, we consider payoff-based independent learning for zero-sum matrix games and stochastic games. In both settings, we establish last-iterate finite-sample guarantees. Our approach, i.e., the coupled Lyapunov drift argument, provides a number of tools that are likely to be of broader interest for analyzing iterative algorithms with multiple sets of coupled and stochastic iterates.

Limitations and Future Directions. As mentioned before Corollary 2.1 and after Theorem 3.1, the convergence bounds involve constants that are exponential in $\tau^{-1}$, which arise due to the use of the smoothed best response to preserve the naturalness of the learning dynamics.
An immediate future direction of this work is to remove such exponential factors by designing better exploration strategies. In the long term, we are interested to see if the algorithmic ideas and the analysis techniques developed in this work can be used to study other classes of games beyond zero-sum stochastic games. + +# Acknowledgement + +The authors would like to thank the anonymous reviewers for the helpful feedback, especially for pointing out several related references. ZC acknowledges support from the PIMCO Postdoctoral Fellowship. KZ acknowledges support from the Northrop Grumman – Maryland Seed Grant Program. EM acknowledges support from NSF CAREER Award 2240110. AW acknowledges support from NSF Grants CNS-2146814, CPS-2136197, CNS-2106403, and NGSDI-2105648. + +# References + +[1] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of Go without human knowledge. Nature, 550(7676):354, 2017. +[2] Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. Preprint arXiv:1610.03295, 2016. +[3] Shangding Gu, Jakub Grudzien Kuba, Yuanpei Chen, Yali Du, Long Yang, Alois Knoll, and Yaodong Yang. Safe multi-agent reinforcement learning for multi-robot control. Artificial Intelligence, 319:103905, 2023. +[4] Piotr Mirowski, Matt Grimes, Mateusz Malinowski, Karl Moritz Hermann, Keith Anderson, Denis Teptyashin, Karen Simonyan, Andrew Zisserman, Raia Hadsell, et al. Learning to navigate in cities without a map. Advances in neural information processing systems, 31, 2018. +[5] Lucian Busoniu, Robert Babuska, Bart De Schutter, et al. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 38(2):156-172, 2008. +[6] Kaiqing Zhang, Zhuoran Yang, and Tamer Başar. 
Multi-agent reinforcement learning: A selective overview of theories and algorithms. Handbook of Reinforcement Learning and Control, pages 321-384, 2021. +[7] Gurdal Arslan and Serdar Yüksel. Decentralized $Q$ -Learning for Stochastic Teams and Games. IEEE Transactions on Automatic Control, 62(4):1545-1558, 2017. +[8] Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, and Tamer Basar. Fully decentralized multi-agent reinforcement learning with networked agents. In International Conference on Machine Learning, pages 5867-5876, 2018. +[9] Guannan Qu, Adam Wierman, and Na Li. Scalable reinforcement learning of localized policies for multi-agent networked systems. In Learning for Dynamics and Control, pages 256-266. PMLR, 2020. +[10] Yizhou Zhang, Guannan Qu, Pan Xu, Yiheng Lin, Zaiwei Chen, and Adam Wierman. Global convergence of localized policy iteration in networked multi-agent reinforcement learning. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 7(1):1-51, 2023. +[11] Michael L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on International Conference on Machine Learning, ICML'94, page 157-163, San Francisco, CA, USA, 1994. Morgan Kaufmann Publishers Inc. +[12] Michael L Littman. Friend-or-foe $Q$ -learning in general-sum games. In International Conference on Machine Learning, volume 1, pages 322-328, 2001. +[13] Junling Hu and Michael P Wellman. Nash Q-learning for general-sum stochastic games. Journal of Machine Learning Research, 4(Nov):1039-1069, 2003. +[14] Constantinos Daskalakis, Dylan J Foster, and Noah Golowich. Independent policy gradient methods for competitive reinforcement learning. Advances in neural information processing systems, 33:5527-5540, 2020. + +[15] Muhammed Sayin, Kaiqing Zhang, David Leslie, Tamer Basar, and Asuman Ozdaglar. Decentralized $Q$ -learning in zero-sum Markov games. 
Advances in Neural Information Processing Systems, 34:18320-18334, 2021. +[16] Yu Bai and Chi Jin. Provable self-play algorithms for competitive reinforcement learning. In International conference on machine learning, pages 551-560. PMLR, 2020. +[17] Qiaomin Xie, Yudong Chen, Zhaoran Wang, and Zhuoran Yang. Learning zero-sum simultaneous-move Markov games using function approximation and correlated equilibrium. In Conference on Learning Theory, pages 3674-3682. PMLR, 2020. +[18] Runyu Zhang, Zhaolin Ren, and Na Li. Gradient play in stochastic games: Stationary points, convergence, and sample complexity. Preprint arXiv:2106.00198, 2021. +[19] Dongsheng Ding, Chen-Yu Wei, Kaiqing Zhang, and Mihailo Jovanovic. Independent policy gradient for large-scale markov potential games: Sharper rates, function approximation, and game-agnostic convergence. In International Conference on Machine Learning, pages 5166–5220. PMLR, 2022. +[20] Qinghua Liu, Tiancheng Yu, Yu Bai, and Chi Jin. A sharp analysis of model-based reinforcement learning with self-play. In International Conference on Machine Learning, pages 7001-7010. PMLR, 2021. +[21] Chi Jin, Qinghua Liu, Yuanhao Wang, and Tiancheng Yu. V-learning-A simple, efficient, decentralized algorithm for multiagent RL. In ICLR 2022 Workshop on Gamification and Multiagent Solutions, 2022. +[22] Constantinos Daskalakis, Noah Golowich, and Kaiqing Zhang. The complexity of Markov equilibrium in stochastic games. In *The Thirty Sixth Annual Conference on Learning Theory*, pages 4180–4234. PMLR, 2023. +[23] Jalaj Bhandari, Daniel Russo, and Raghav Singal. A finite-time analysis of temporal difference learning with linear function approximation. In Conference on learning theory, pages 1691-1692. PMLR, 2018. +[24] R Srikant and Lei Ying. Finite-time error bounds for linear stochastic approximation and TD learning. In Conference on Learning Theory, pages 2803-2830, 2019. +[25] Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, and Yuxin Chen. 
Sample complexity of asynchronous Q-learning: Sharper analysis and variance reduction. IEEE Transactions on Information Theory, 68(1):448-473, 2021. +[26] Zaiwei Chen, Siva Theja Maguluri, Sanjay Shakkottai, and Karthikeyan Shanmugam. Finite-sample analysis of contractive stochastic approximation using smooth convex envelopes. Advances in Neural Information Processing Systems, 33, 2020. +[27] Michael Bowling and Manuela Veloso. Rational and convergent learning in stochastic games. In International Joint Conference on Artificial Intelligence, volume 17, pages 1021-1026, 2001. +[28] David S Leslie and Edmund J Collins. Convergent multiple-timescales reinforcement learning algorithms in normal form games. The Annals of Applied Probability, 13(4):1231-1251, 2003. +[29] David S Leslie and Edmund J Collins. Individual $Q$ -learning in normal form games. SIAM Journal on Control and Optimization, 44(2):495-514, 2005. +[30] Richard D McKelvey and Thomas R Palfrey. Quantal response equilibria for normal form games. Games and economic behavior, 10(1):6-38, 1995. +[31] Shicong Cen, Yuting Wei, and Yuejie Chi. Fast policy extragradient methods for competitive games with entropy regularization. Advances in Neural Information Processing Systems, 34:27952-27964, 2021. +[32] Shicong Cen, Yuejie Chi, Simon Shaolei Du, and Lin Xiao. Faster last-iterate convergence of policy optimization in zero-sum Markov games. In The Eleventh International Conference on Learning Representations, 2023. + +[33] Runyu Zhang, Qinghua Liu, Huan Wang, Caiming Xiong, Na Li, and Yu Bai. Policy optimization for Markov games: Unified framework and faster convergence. Advances in Neural Information Processing Systems, 35:21886-21899, 2022. +[34] Sihan Zeng, Thinh Doan, and Justin Romberg. Regularized gradient descent ascent for two-player zero-sum Markov games. Advances in Neural Information Processing Systems, 35:34546-34558, 2022. 
+[35] Liad Erez, Tal Lancewicki, Uri Sherman, Tomer Koren, and Yishay Mansour. Regret minimization and convergence to equilibria in general-sum Markov games. In International Conference on Machine Learning, pages 9343-9373. PMLR, 2023. +[36] Yulai Zhao, Yuandong Tian, Jason Lee, and Simon Du. Provably efficient policy optimization for two-player zero-sum Markov games. In International Conference on Artificial Intelligence and Statistics, pages 2736-2761. PMLR, 2022. +[37] Kaiqing Zhang, Xiangyuan Zhang, Bin Hu, and Tamer Başar. Derivative-free policy optimization for linear risk-sensitive and robust control design: Implicit regularization and sample complexity. Advances in Neural Information Processing Systems, 34:2949-2964, 2021. +[38] Ahmet Alacaoglu, Luca Viano, Niao He, and Volkan Cevher. A natural actor-critic framework for zero-sum Markov games. In International Conference on Machine Learning, pages 307-366. PMLR, 2022. +[39] David S Leslie, Steven Perkins, and Zibo Xu. Best-response dynamics in zero-sum stochastic games. Journal of Economic Theory, 189:105095, 2020. +[40] Lucas Baudin and Rida Laraki. Smooth fictitious play in stochastic games with perturbed payoffs and unknown transitions. Advances in Neural Information Processing Systems, 35:20243-20256, 2022. +[41] Shaofeng Zou, Tengyu Xu, and Yingbin Liang. Finite-sample analysis for SARSA with linear function approximation. In Advances in Neural Information Processing Systems, pages 8668-8678, 2019. +[42] Shangtong Zhang, R Tachet, and Romain Laroche. Global optimality and finite sample analysis of softmax off-policy actor critic under state distribution mismatch. Journal of Machine Learning Research, 23(343):1-91, 2022. +[43] Muhammed O Sayin, Francesca Parise, and Asuman Ozdaglar. Fictitious play in zero-sum stochastic games. SIAM Journal on Control and Optimization, 60(4):2095-2114, 2022. +[44] Guanghui Lan. First-order and Stochastic Optimization Methods for Machine Learning. Springer, 2020. 
+[45] Zaiwei Chen, Sheng Zhang, Thinh T. Doan, John-Paul Clarke, and Siva Theja Maguluri. Finite-Sample Analysis of Nonlinear Stochastic Approximation with Applications in Reinforcement Learning. Preprint arXiv:1905.11425, 2019. +[46] Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. Siam Review, 60(2):223-311, 2018. +[47] George W Brown. Iterative solution of games by fictitious play. Activity Analysis of Production and Allocation, 13(1):374-376, 1951. +[48] Julia Robinson. An iterative method of solving a game. Annals of Mathematics, pages 296-301, 1951. +[49] D. Fudenberg and D. Kreps. Learning mixed equilibria. Games and Economic Behavior, 5:320-367, 1993. +[50] Josef Hofbauer and William H Sandholm. On the global convergence of stochastic fictitious play. *Econometrica*, 70(6):2265-2294, 2002. + +[51] James Hannan. Approximation to Bayes risk in repeated play. Contributions to the Theory of Games, 3:97-139, 1957. +[52] Drew Fudenberg and David K Levine. Consistency and cautious fictitious play. Journal of Economic Dynamics and Control, 19(5-7):1065-1089, 1995. +[53] Josef Hofbauer and Ed Hopkins. Learning in perturbed asymmetric games. Games and Economic Behavior, 52(1):133-152, 2005. +[54] Jeff S Shamma and Gurdal Arslan. Unified convergence proofs of continuous-time fictitious play. IEEE Transactions on Automatic Control, 49(7):1137-1141, 2004. +[55] Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006. +[56] Stefanos Leonardos, Will Overman, Ioannis Panageas, and Georgios Piliouras. Global convergence of multi-agent policy gradient in Markov potential games. In International Conference on Learning Representations, 2022. +[57] Sarath Pattathil, Kaiqing Zhang, and Asuman Ozdaglar. Symmetric (optimistic) natural policy gradient for multi-agent learning with parameter convergence. 
In International Conference on Artificial Intelligence and Statistics, pages 5641-5685. PMLR, 2023. +[58] Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. Last-iterate convergence of decentralized optimistic gradient descent/ascent in infinite-horizon competitive Markov games. In Conference on Learning Theory, pages 4259–4299. PMLR, 2021. +[59] Ziyi Chen, Shaocong Ma, and Yi Zhou. Sample efficient stochastic policy extragradient algorithm for zero-sum Markov game. In International Conference on Learning Representations, 2021. +[60] Muhammed O Sayin, Kaiqing Zhang, and Asuman Ozdaglar. Fictitious play in Markov games with single controller. In Proceedings of the 23rd ACM Conference on Economics and Computation, pages 919-936, 2022. +[61] Lucas Baudin and Rida Laraki. Fictitious play and best-response dynamics in identical interest and zero-sum stochastic games. In International Conference on Machine Learning, pages 1664-1690. PMLR, 2022. +[62] Chinmay Maheshwari, Manxi Wu, Druv Pai, and Shankar Sastry. Independent and decentralized learning in Markov potential games. Preprint arXiv:2205.14590, 2022. +[63] Eyal Even-Dar and Yishay Mansour. Learning rates for $Q$ -learning. Journal of Machine Learning Research, 5(Dec):1-25, 2003. +[64] Guannan Qu and Adam Wierman. Finite-time analysis of asynchronous stochastic approximation and $Q$ -learning. In Conference on Learning Theory, pages 3185-3205. PMLR, 2020. +[65] Zaiwei Chen, Siva T Maguluri, Sanjay Shakkottai, and Karthikeyan Shanmugam. A Lyapunov theory for finite-sample guarantees of Markovian stochastic approximation. Operations Research, 2023. +[66] Guanghui Lan. Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes. Mathematical programming, pages 1-48, 2022. +[67] Yuling Yan, Gen Li, Yuxin Chen, and Jianqing Fan. The efficacy of pessimism in asynchronous $Q$ -learning. IEEE Transactions on Information Theory, 2023. 
[68] Srihari Govindan, Philip J Reny, Arthur J Robson, et al. A short proof of Harsanyi's purification theorem. Games and Economic Behavior, 45(2):369-374, 2003.

[69] Richard S Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9-44, 1988.

[70] Josef Hofbauer and Sylvain Sorin. Best response dynamics for continuous zero-sum games. Discrete and Continuous Dynamical Systems Series B, 6(1):215, 2006.

[71] Tianyi Lin, Zhengyuan Zhou, Wenjia Ba, and Jiawei Zhang. Doubly optimal no-regret online learning in strongly monotone games with bandit feedback. Preprint arXiv:2112.02856, 2021.

[72] Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, and Nicolas Loizou. Stochastic gradient descent-ascent: Unified theory and new efficient methods. In International Conference on Artificial Intelligence and Statistics, pages 172-235. PMLR, 2023.

[73] Drew Fudenberg and David K Levine. The Theory of Learning in Games, volume 2. MIT Press, 1998.

[74] Lloyd S Shapley. Stochastic games. Proceedings of the National Academy of Sciences, 39(10):1095-1100, 1953.

[75] Stefan Banach. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundamenta Mathematicae, 3(1):133-181, 1922.

[76] Sajad Khodadadian, Thinh T Doan, Justin Romberg, and Siva Theja Maguluri. Finite sample analysis of two-time-scale natural actor-critic algorithm. IEEE Transactions on Automatic Control, 2022.

[77] Ziyi Chen, Yi Zhou, Rong-Rong Chen, and Shaofeng Zou. Sample and communication-efficient decentralized actor-critic algorithms with finite-time analysis. In International Conference on Machine Learning, pages 3794-3834. PMLR, 2022.

[78] Tengyu Xu and Yingbin Liang. Sample complexity bounds for two timescale value-based reinforcement learning algorithms. In International Conference on Artificial Intelligence and Statistics, pages 811-819. PMLR, 2021.

[79] Yue Frank Wu, Weitong Zhang, Pan Xu, and Quanquan Gu. A finite-time analysis of two time-scale actor-critic methods. Advances in Neural Information Processing Systems, 33:17617-17628, 2020.

[80] Shuang Qiu, Zhuoran Yang, Jieping Ye, and Zhaoran Wang. On finite-time convergence of actor-critic algorithm. IEEE Journal on Selected Areas in Information Theory, 2(2):652-664, 2021.

[81] Christopher JCH Watkins and Peter Dayan. $Q$-learning. Machine Learning, 8(3-4):279-292, 1992.

[82] Yu Bai, Chi Jin, and Tiancheng Yu. Near-optimal reinforcement learning with self-play. Advances in Neural Information Processing Systems, 33:2159-2170, 2020.

[83] Ziang Song, Song Mei, and Yu Bai. When can we learn general-sum Markov games with a large number of players sample-efficiently? In International Conference on Learning Representations, 2022.

[84] Weichao Mao, Lin Yang, Kaiqing Zhang, and Tamer Başar. On improving model-free algorithms for decentralized multi-agent reinforcement learning. In International Conference on Machine Learning, pages 15007-15049. PMLR, 2022.

[85] Qiwen Cui, Kaiqing Zhang, and Simon Du. Breaking the curse of multiagents in a large state space: RL in Markov games with independent linear function approximation. In The Thirty Sixth Annual Conference on Learning Theory, pages 2651-2652. PMLR, 2023.

[86] Chen-Yu Wei, Yi-Te Hong, and Chi-Jen Lu. Online reinforcement learning in stochastic games. In Advances in Neural Information Processing Systems, pages 4987-4997, 2017.

[87] Kaiqing Zhang, Sham Kakade, Tamer Başar, and Lin Yang. Model-based multi-agent RL in zero-sum Markov games with near-optimal sample complexity. Advances in Neural Information Processing Systems, 33:1166-1178, 2020.

[88] Gen Li, Yuejie Chi, Yuting Wei, and Yuxin Chen. Minimax-optimal multi-agent RL in Markov games with a generative model. Advances in Neural Information Processing Systems, 35:15353-15367, 2022.

[89] Qiwen Cui and Simon S Du. When are offline two-player zero-sum Markov games solvable? Advances in Neural Information Processing Systems, 35:25779-25791, 2022.

[90] Qiwen Cui and Simon S Du. Provably efficient offline multi-agent reinforcement learning via strategy-wise bonus. Advances in Neural Information Processing Systems, 35:11739-11751, 2022.

[91] Han Zhong, Wei Xiong, Jiyuan Tan, Liwei Wang, Tong Zhang, Zhaoran Wang, and Zhuoran Yang. Pessimistic minimax value iteration: Provably efficient equilibrium learning from offline datasets. In International Conference on Machine Learning, pages 27117-27142. PMLR, 2022.

[92] Zuguang Gao, Qianqian Ma, Tamer Başar, and John R. Birge. Sample complexity of decentralized tabular $Q$-learning for stochastic games. In 2023 American Control Conference (ACC), pages 1098-1103, 2023.

[93] David A Levin and Yuval Peres. Markov Chains and Mixing Times, volume 107. American Mathematical Society, 2017.

[94] Dimitri P Bertsekas and John N Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.

[95] John M Danskin. The Theory of Max-Min and Its Application to Weapons Allocation Problems, volume 5. Springer Science & Business Media, 2012.

[96] Bolin Gao and Lacra Pavel. On the properties of the softmax function with application in game theory and reinforcement learning. Preprint arXiv:1704.00805, 2017.

[97] Amir Beck. First-Order Methods in Optimization, volume 25. SIAM, 2017.

[98] Constantinos Daskalakis and Ioannis Panageas. Last-iterate convergence: Zero-sum games and constrained min-max optimization. In 10th Innovations in Theoretical Computer Science, 2019.

[99] Kenshi Abe, Kaito Ariu, Mitsuki Sakamoto, Kentaro Toyoshima, and Atsushi Iwasaki. Last-iterate convergence with full and noisy feedback in two-player zero-sum games. In International Conference on Artificial Intelligence and Statistics, pages 7999-8028. PMLR, 2023.

# A Extended Related Work

Continuing from Section 1.3, we here discuss several other existing works that are relevant to ours.
Recently, there has been a growing body of work on MARL with sample-efficiency guarantees [16, 82, 20, 17, 21, 83, 84, 22, 85]. Most of these works focus on the finite-horizon episodic setting with online exploration and perform regret analysis, which differs from our last-iterate finite-sample analysis under the stochastic approximation paradigm. Additionally, these algorithms are episodic due to the finite-horizon nature of the setting, and they are not best-response-type independent learning dynamics that are run repeatedly for an infinitely long horizon, which can be viewed as a non-equilibrating adaptation process. In fact, the primary focus of this line of work is a self-play setting where all the players can be controlled to perform centralized learning [86, 16, 82, 20, 17]. Beyond the online setting, finite-sample efficiency has also been established for MARL using a generative model [87, 88] or offline datasets [89-91, 67]. These algorithms tend to be centralized in nature and focus on equilibrium computation rather than independent learning.

Finite-sample complexity has also been established for policy gradient methods, a popular RL approach, when applied to solving zero-sum stochastic games [14, 36-38]. However, to ensure convergence, these methods are asymmetric in that the players update their policies at different timescales, e.g., one player updates faster than the other with larger step sizes, or one player fixes its policy while waiting for the other to update. Such asymmetric policy gradient methods are not independent, as some implicit coordination is required to enable this timescale separation across agents. The same style of implicit coordination is also required for the finite-sample analysis of decentralized learning in certain general-sum stochastic games, e.g., [92], which improves the asymptotic convergence in [7].
In contrast, our learning dynamics require only that each player's policy be updated more slowly than their $q$-functions; crucially, we do not assume a timescale separation between the two players, making our learning dynamics symmetric.

# B Proof Sketch of Theorem 3.1

In this section, we present the key steps and technical ideas used to prove Theorem 3.1. The core challenge is that Algorithm 2 maintains three sets of iterates $\left(\{q_{t,k}^i\}, \{\pi_{t,k}^i\}, \{v_t^i\}\right)$, which are coupled: their update equations are intertwined, so it is not possible to analyze them separately. Instead, we develop a coupled Lyapunov drift approach to establish the finite-sample bounds of Algorithm 2. Specifically, we first show that the expected Nash gap can be upper bounded by a sum of properly defined Lyapunov functions, one for each set of iterates (i.e., the $v$-functions, the policies, and the $q$-functions). Then, we establish a set of coupled Lyapunov drift inequalities, one for each Lyapunov function. Finally, we decouple these drift inequalities to establish the overall finite-sample bounds. We outline the key steps of the argument below.

To begin with, we show in Lemma D.1 that the $q$-functions $\{q_{t,k}^i\}$ and the $v$-functions $\{v_t^i\}$ generated by Algorithm 2 are uniformly bounded from above in $\ell_{\infty}$-norm by $1/(1-\gamma)$, and that the entries of the policies $\{\pi_{t,k}^i\}$ are uniformly bounded from below by $\ell_{\tau} > 0$. This result will be used frequently in our analysis. We next introduce the Lyapunov functions we use to analyze Algorithm 2. Specifically, for any $t,k\geq 0$, let $\bar{q}_{t,k}^{i}\in \mathbb{R}^{|\mathcal{S}||\mathcal{A}^{i}|}$ be defined as $\bar{q}_{t,k}^{i}(s) = \mathcal{T}^{i}(v_{t}^{i})(s)\pi_{t,k}^{-i}(s)$ for all $s\in \mathcal{S}$.
Let

$$
\mathcal{L}_{\mathrm{sum}}(t) = \| v_t^1 + v_t^2 \|_{\infty}, \quad \mathcal{L}_v(t) = \sum_{i=1,2} \| v_t^i - v_*^i \|_{\infty},
$$

$$
\mathcal{L}_q(t,k) = \sum_{i=1,2} \sum_{s \in \mathcal{S}} \| q_{t,k}^i(s) - \mathcal{T}^i(v_t^i)(s) \pi_{t,k}^{-i}(s) \|_2^2 = \sum_{i=1,2} \| q_{t,k}^i - \bar{q}_{t,k}^i \|_2^2,
$$

$$
\mathcal{L}_{\pi}(t,k) = \max_{s \in \mathcal{S}} \sum_{i=1,2} \max_{\mu^i \in \Delta(\mathcal{A}^i)} \left\{ (\mu^i - \pi_{t,k}^i(s))^{\top} \mathcal{T}^i(v_t^i)(s) \pi_{t,k}^{-i}(s) + \tau \nu(\mu^i) - \tau \nu(\pi_{t,k}^i(s)) \right\}.
$$

Note that $\mathcal{L}_{\mathrm{sum}}(t)$ is introduced to deal with the fact that the induced matrix game which the inner loop of Algorithm 2 is designed to solve may not be zero-sum, due to independent learning. See the discussion in Section 1.2 and the paragraph after Algorithm 2. At the core of our argument is the following inequality (cf. Lemma D.4):

$$
\mathrm{NG}\left(\pi_{T,K}^1, \pi_{T,K}^2\right) \leq \frac{4}{1-\gamma} \left( 2 \mathcal{L}_{\mathrm{sum}}(T) + \mathcal{L}_v(T) + \mathcal{L}_{\pi}(T,K) + 2 \tau \log\left(A_{\max}\right) \right), \tag{6}
$$

which motivates us to bound all the Lyapunov functions.

# B.1 Analysis of the Outer Loop: $v$-Function Update

Motivated by Eq. (6), we need to bound $\mathcal{L}_{\mathrm{sum}}(T)$ and $\mathcal{L}_v(T)$. To achieve this, we establish Lyapunov drift inequalities for them.
Specifically, we show in Lemmas D.5 and D.6 that

$$
\mathcal{L}_v(t+1) \leq \underbrace{\gamma \mathcal{L}_v(t)}_{\text{Drift}} + \underbrace{4 \mathcal{L}_{\mathrm{sum}}(t) + 2 \mathcal{L}_q^{1/2}(t,K) + 4 \mathcal{L}_{\pi}(t,K) + 6 \tau \log\left(A_{\max}\right)}_{\text{Additive Errors}}, \tag{7}
$$

$$
\mathcal{L}_{\mathrm{sum}}(t+1) \leq \underbrace{\gamma \mathcal{L}_{\mathrm{sum}}(t)}_{\text{Drift}} + \underbrace{2 \mathcal{L}_q(t,K)^{1/2}}_{\text{Additive Errors}}, \quad \forall\, t \geq 0. \tag{8}
$$

If the additive errors in these two inequalities were functions of $v_t^1$ and $v_t^2$ only, then the two Lyapunov drift inequalities could be applied repeatedly to obtain convergence bounds for $\mathcal{L}_{\mathrm{sum}}(T)$ and $\mathcal{L}_v(T)$. However, the coupled nature of Eqs. (7) and (8) requires us to analyze the policies and the $q$-functions in the inner loop and to establish their own Lyapunov drift inequalities.

# B.2 Analysis of the Inner Loop: Policy Update

As illustrated in Sections 2.1 and 3.1, for each state $s$, the update equation of the policies can be viewed as a discrete and stochastic variant of the smoothed best-response dynamics for solving matrix games [29]. Typically, the following Lyapunov function is used to study such dynamics [53]:

$$
V_X\left(\mu^1, \mu^2\right) = \sum_{i=1,2} \max_{\hat{\mu}^i \in \Delta\left(\mathcal{A}^i\right)} \left\{ \left(\hat{\mu}^i - \mu^i\right)^{\top} X_i \mu^{-i} + \tau \nu\left(\hat{\mu}^i\right) - \tau \nu\left(\mu^i\right) \right\}, \tag{9}
$$

where $X_1$ and $X_2$ are the payoff matrices for player 1 and player 2, respectively, and $\nu(\cdot)$ is the entropy function.
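As a quick numerical illustration (not part of the proof), the following self-contained Python sketch runs the discrete smoothed best-response update $\mu^i \leftarrow (1-\beta)\mu^i + \beta\,\sigma_\tau(X_i\mu^{-i})$ on a random zero-sum matrix game and tracks the Lyapunov function $V_X$ of Eq. (9), using the standard closed form $\max_{\hat{\mu}\in\Delta(\mathcal{A})}\{\hat{\mu}^\top x + \tau\nu(\hat{\mu})\} = \tau\log\sum_a e^{x_a/\tau}$ for the inner maximum. The game size, temperature, and step size are arbitrary illustrative choices.

```python
import numpy as np

def softmax(x, tau):
    z = (x - x.max()) / tau          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p))

def V(X1, X2, mu1, mu2, tau):
    # Lyapunov function V_X of Eq. (9); the inner max over the simplex
    # has the closed form tau * logsumexp(x / tau).
    total = 0.0
    for Xi, mi, mo in ((X1, mu1, mu2), (X2, mu2, mu1)):
        x = Xi @ mo
        lse = x.max() + tau * np.log(np.sum(np.exp((x - x.max()) / tau)))
        total += lse - mi @ x - tau * entropy(mi)
    return total

rng = np.random.default_rng(0)
A, tau, beta = 4, 0.5, 0.05
X1 = rng.uniform(-1, 1, size=(A, A))
X2 = -X1.T                            # zero-sum payoffs
mu1 = mu2 = np.ones(A) / A            # start from uniform policies
vals = [V(X1, X2, mu1, mu2, tau)]
for _ in range(2000):
    mu1, mu2 = ((1 - beta) * mu1 + beta * softmax(X1 @ mu2, tau),
                (1 - beta) * mu2 + beta * softmax(X2 @ mu1, tau))
    vals.append(V(X1, X2, mu1, mu2, tau))
```

With these settings, $V_X$ stays nonnegative and decays toward zero, matching the drift-plus-additive-error structure of Eq. (10): the fixed points of the damped update coincide with those of the smoothed best-response map regardless of $\beta$.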
Specialized to our case, given a joint $v$-function $v = (v^1, v^2)$ from the outer loop and a state $s \in \mathcal{S}$, we would like to use

$$
V_{v,s}(\pi^1(s), \pi^2(s)) = \sum_{i=1,2} \max_{\hat{\mu}^i \in \Delta(\mathcal{A}^i)} \left\{ (\hat{\mu}^i - \pi^i(s))^{\top} \mathcal{T}^i(v^i)(s) \pi^{-i}(s) + \tau \nu(\hat{\mu}^i) - \tau \nu(\pi^i(s)) \right\}
$$

as our Lyapunov function. Note that $\max_{s\in \mathcal{S}} V_{v_t,s}(\pi_{t,k}^1(s), \pi_{t,k}^2(s)) = \mathcal{L}_\pi(t,k)$. A sequence of properties (e.g., strong convexity, smoothness, etc.) of the Lyapunov function $V_X(\cdot,\cdot)$ is established in Lemma D.7. In the end, we show in Lemma D.8 that

$$
\begin{aligned}
\mathbb{E}_t\left[\mathcal{L}_{\pi}(t,k+1)\right] \leq{} & \underbrace{\left(1 - \frac{3\beta_k}{4}\right) \mathbb{E}_t\left[\mathcal{L}_{\pi}(t,k)\right]}_{\text{Drift}} \\
& + \underbrace{2 L_{\tau} \beta_k^2 + \frac{32 A_{\max}^2 \beta_k}{\tau^3 \ell_{\tau}^2 (1-\gamma)^2} \mathbb{E}_t\left[\mathcal{L}_q(t,k)\right] + \frac{16 A_{\max}^2 \beta_k}{\tau} \mathcal{L}_{\mathrm{sum}}(t)^2}_{\text{Additive Errors}},
\end{aligned} \tag{10}
$$

where $\mathbb{E}_t[\cdot]$ denotes the conditional expectation given the history up to the beginning of the $t$-th outer loop. To interpret the above, suppose we were instead considering the continuous-time smoothed best-response dynamics. Then the additive error term would disappear, in the sense that the time-derivative of the Lyapunov function along the trajectory of the ODE is strictly negative.
Thus, the three terms in the Additive Errors can be interpreted as (1) the discretization error in the update equation, (2) the stochastic error in the $q$-function estimate, and (3) the error due to the non-zero-sum structure of the inner-loop auxiliary matrix game.

# B.3 Analysis of the Inner Loop: $q$-Function Update

Our next focus is the $q$-function update. The $q$-function update equation is in the same spirit as TD-learning in RL, and a necessary condition for the convergence of TD-learning is that the behavior policy (i.e., the policy used to collect samples) enables the agent to sufficiently explore the environment. Since we show in Lemma D.1 that all joint policies along the algorithm trajectory have entries uniformly bounded from below by $\ell_{\tau} > 0$, it is enough to restrict our attention to the "soft" policy class $\Pi_{\tau} \coloneqq \{\pi = (\pi^1, \pi^2) \mid \min_{s,a^1} \pi^1(a^1|s) \geq \ell_{\tau},\ \min_{s,a^2} \pi^2(a^2|s) \geq \ell_{\tau}\}$. The following lemma, which is an extension of [10, Lemma 4], establishes a uniform exploration property under Assumption 3.1.

To present the result, we need the following notation. Under Assumption 3.1, the Markov chain induced by the joint policy $\pi_b$ has a unique stationary distribution $\mu_b \in \Delta(\mathcal{S})$ [93], whose minimum component we denote by $\mu_{b,\min}$. In addition, there exists $\rho_b \in (0,1)$ such that $\max_{s \in \mathcal{S}} \left\| P_{\pi_b}^k(s,\cdot) - \mu_b(\cdot) \right\|_{\mathrm{TV}} \leq 2\rho_b^k$ for all $k \geq 0$ [93], where $P_{\pi_b}$ is the transition probability matrix of the Markov chain $\{S_k\}$ under $\pi_b$. We also define the mixing time as follows.
Given a joint policy $\pi = (\pi^1, \pi^2)$ and an accuracy level $\eta > 0$, the $\eta$-mixing time of the Markov chain $\{S_k\}$ induced by $\pi$ is defined as

$$
t_{\pi,\eta} = \min \left\{ k \geq 0 : \max_{s \in \mathcal{S}} \| P_{\pi}^k(s,\cdot) - \mu_{\pi}(\cdot) \|_{\mathrm{TV}} \leq \eta \right\}, \tag{11}
$$

where $P_{\pi}$ is the $\pi$-induced transition probability matrix and $\mu_{\pi}$ is the stationary distribution of $\{S_k\}$ under $\pi$, provided that it exists and is unique. When the induced Markov chain mixes at a geometric rate, it is easy to see that $t_{\pi,\eta} = \mathcal{O}(\log(1/\eta))$.

Lemma B.1 (An Extension of Lemma 4 in [10]). Suppose that Assumption 3.1 is satisfied. Then we have the following results.

(1) For any $\pi = (\pi^1, \pi^2) \in \Pi_{\tau}$, the Markov chain $\{S_k\}$ induced by the joint policy $\pi$ is irreducible and aperiodic, hence admits a unique stationary distribution $\mu_{\pi} \in \Delta(\mathcal{S})$.
(2) It holds that $\sup_{\pi \in \Pi_{\tau}} \max_{s \in \mathcal{S}} \| P_{\pi}^{k}(s, \cdot) - \mu_{\pi}(\cdot) \|_{\mathrm{TV}} \leq 2\rho_{\tau}^{k}$ for any $k \geq 0$, where $\rho_{\tau} = \rho_{b}^{\ell_{\tau}^{2r_b}\mu_{b,\min}}$ and $r_b := \min \{k \geq 0 : P_{\pi_b}^{k}(s, s') > 0,\ \forall\, (s, s')\}$. As a result, we have

$$
t\left(\ell_{\tau}, \eta\right) := \sup_{\pi \in \Pi_{\tau}} t_{\pi,\eta} \leq \frac{t_{\pi_b,\eta}}{\ell_{\tau}^{2r_b} \mu_{b,\min}}, \tag{12}
$$

where we recall that $t_{\pi_b,\eta}$ is the $\eta$-mixing time of the Markov chain $\{S_k\}$ induced by $\pi_b$.
(3) There exists $L_p \geq 1$ (which was used in the statement of Theorem 3.1) such that

$$
\| \mu_{\pi} - \mu_{\bar{\pi}} \|_1 \leq L_p \left( \max_{s \in \mathcal{S}} \| \pi^1(s) - \bar{\pi}^1(s) \|_1 + \max_{s \in \mathcal{S}} \| \pi^2(s) - \bar{\pi}^2(s) \|_1 \right)
$$

for all $\pi = (\pi^1, \pi^2), \bar{\pi} = (\bar{\pi}^1, \bar{\pi}^2) \in \Pi_{\tau}$.

(4) $\mu_{\min} \coloneqq \inf_{\pi \in \Pi_{\tau}} \min_{s \in \mathcal{S}} \mu_{\pi}(s) > 0$.

Remark. Lemma B.1 (1), (3), and (4) were previously established in [10, Lemma 4]. Lemma B.1 (2) makes explicit the dependence of the "uniform mixing time" on the margin $\ell_{\tau}$ and on the mixing time of the benchmark exploration policy $\pi_b$.

In view of Lemma B.1 (2), we have fast mixing for all policies in $\Pi_{\tau}$ if $(i)$ the margin $\ell_{\tau}$ is large, and $(ii)$ the Markov chain $\{S_k\}$ induced by the benchmark exploration policy $\pi_b$ is well-behaved, meaning that its mixing time is small (i.e., small $t_{\pi_b,\eta}$) and its stationary distribution is relatively well-balanced (i.e., large $\mu_{b,\min}$). Point $(i)$ agrees with our intuition, as a larger margin encourages more exploration. To make sense of point $(ii)$, note that since $\pi(a|s) \geq \ell_{\tau}^2 \pi_b(a|s)$ for all $s$ and $a = (a^1, a^2)$, we can write $\pi$ as a convex combination of $\pi_b$ and some residual policy $\tilde{\pi}$: $\pi(\cdot|s) = \ell_{\tau}^2 \pi_b(\cdot|s) + (1 - \ell_{\tau}^2) \tilde{\pi}(\cdot|s)$ for all $s \in \mathcal{S}$. Since any $\pi \in \Pi_{\tau}$ thus contains a portion of the benchmark exploration policy $\pi_b$, it makes intuitive sense that fast mixing of $\{S_k\}$ under $\pi_b$ implies, to some extent, fast mixing of $\{S_k\}$ under any $\pi \in \Pi_{\tau}$. Note that, as the margin $\ell_{\tau}$ approaches zero, the uniform mixing time in Lemma B.1 (2) goes to infinity.
This is unavoidable in general, as demonstrated by a simple MDP example constructed in Appendix E.

We define $c_{\tau} = \mu_{\min}\ell_{\tau}$, which was used in the statement of Theorem 3.1. With Lemma B.1 in hand, we are now able to analyze the behavior of the $q$-functions. We model the $q$-function update as a stochastic approximation algorithm driven by time-inhomogeneous Markovian noise, and use the norm-square function

$$
\sum_{i=1,2} \sum_{s} \| q^i(s) - \mathcal{T}^i(v^i)(s) \pi^{-i}(s) \|_2^2
$$

as the Lyapunov function to study its behavior. Note that $\mathcal{L}_q(t,k) = \sum_{i=1,2} \sum_s \|q_{t,k}^i(s) - \mathcal{T}^i(v_t^i)(s)\pi_{t,k}^{-i}(s)\|_2^2$. The key challenge in establishing a Lyapunov drift inequality is to control a difference of the form

$$
\mathbb{E}\left[ F^i\left(q^i, S_k, A_k^i, A_k^{-i}, S_{k+1}\right) \right] - \mathbb{E}\left[ F^i\left(q^i, \hat{S}, \hat{A}^i, \hat{A}^{-i}, \hat{S}'\right) \right] \tag{13}
$$

for any $q^i \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}^i|}$ and $i \in \{1,2\}$, where $F^i(\cdot)$ is an appropriately defined operator that captures the dynamics of the update equation; see Appendix D.5.2 for its definition. In the term (13), the random tuple $(S_k, A_k^i, A_k^{-i}, S_{k+1})$ is the $k$-th sample from the time-inhomogeneous Markov chain $\{(S_k, A_k^i, A_k^{-i}, S_{k+1})\}_{k\geq 0}$ generated by the time-varying joint policies $\{\pi_k\}_{k\geq 0}$, and $(\hat{S}, \hat{A}^i, \hat{A}^{-i}, \hat{S}')$ is a random tuple such that $\hat{S} \sim \mu_k(\cdot)$, $\hat{A}^i \sim \pi_k^i(\cdot|\hat{S})$, $\hat{A}^{-i} \sim \pi_k^{-i}(\cdot|\hat{S})$, and $\hat{S}' \sim p(\cdot|\hat{S}, \hat{A}^i, \hat{A}^{-i})$, where $\mu_k(\cdot)$ denotes the unique stationary distribution of the Markov chain $\{S_n\}_{n\geq 0}$ induced by the joint policy $\pi_k$. Lemma B.1 implies that $\mu_k$ exists and is unique.
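As an aside, the $\eta$-mixing time of Eq. (11) is easy to compute exactly for a small chain, which makes the $\mathcal{O}(\log(1/\eta))$ behavior invoked above concrete. A minimal Python sketch, where the two-state transition matrix is a made-up example rather than anything from the paper:

```python
import numpy as np

def stationary(P):
    # stationary distribution: left Perron eigenvector of P, normalized
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1))])
    return v / v.sum()

def mixing_time(P, eta):
    # eta-mixing time of Eq. (11): smallest k such that
    # max_s || P^k(s, .) - mu ||_TV <= eta
    mu = stationary(P)
    Pk = np.eye(P.shape[0])
    for k in range(10_000):
        tv = 0.5 * np.abs(Pk - mu).sum(axis=1).max()
        if tv <= eta:
            return k
        Pk = Pk @ P
    raise RuntimeError("chain did not mix within the iteration cap")

# made-up two-state chain for illustration
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
t_mix = mixing_time(P, 1e-3)
```

For this chain the TV distance from the worst-case start decays as $(2/3)\cdot 0.7^k$, so halving $\eta$ adds a constant number of steps, i.e., $t_{\pi,\eta}$ grows like $\log(1/\eta)$.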
In the existing literature, when $\{(S_k, A_k^i, A_k^{-i}, S_{k+1})\}$ is either sampled in an i.i.d. manner or forms an ergodic time-homogeneous Markov chain, there are techniques that successfully bound the term (13) [94, 24, 23]. To deal with time-inhomogeneous Markovian noise, building upon existing conditioning arguments [23, 24, 41, 76] together with Lemma B.1, we develop a refined conditioning argument to show that

$$
(13) = \mathcal{O}\left( z_k \sum_{n=k-z_k}^{k-1} \alpha_n \right), \tag{See Lemma D.11}
$$

where $z_k = t(\ell_{\tau}, \beta_k)$ is a uniform upper bound on the $\beta_k$-mixing time (i.e., the uniform mixing time with accuracy $\beta_k$; see Eq. (12)) of the Markov chain $\{S_n\}_{n\geq 0}$ induced by an arbitrary joint policy from the algorithm trajectory. Suppose that we use diminishing step sizes with an $\mathcal{O}(1/k)$ decay rate (similar results hold for constant step sizes). Then the uniform mixing property from Lemma B.1 (2) implies that $z_k = \mathcal{O}(\log k)$. As a result, we have $\lim_{k\to\infty} (13) \leq \lim_{k\to\infty} z_k \sum_{n=k-z_k}^{k-1} \alpha_n = 0$, which provides a way to control the term (13). After successfully handling (13), we are able to establish a Lyapunov drift inequality for $\mathcal{L}_q(t,k)$:

$$
\mathbb{E}_t\left[\mathcal{L}_q(t,k+1)\right] \leq \underbrace{(1 - \alpha_k c_{\tau}) \mathbb{E}_t\left[\mathcal{L}_q(t,k)\right]}_{\text{Drift}} + \underbrace{C_0 z_k \alpha_k \alpha_{k-z_k,k-1} + \frac{\beta_k}{4} \mathbb{E}_t\left[\mathcal{L}_{\pi}(t,k)\right]}_{\text{Additive Errors}}, \tag{14}
$$

where $C_0$ is a (problem-dependent) constant, and $\alpha_{k_1,k_2} \coloneqq \sum_{k=k_1}^{k_2} \alpha_k$ is shorthand notation. See Lemma D.12 for more details.
When $k$ is large, it can be shown that $\alpha_{k-z_k,k-1} \leq 2\alpha_k z_k$ [65, Appendix 1.8].

# B.4 Solving Coupled Lyapunov Drift Inequalities

Up to this point, we have established the Lyapunov drift inequalities for the individual $v$-functions, the sum of the $v$-functions, the policies, and the $q$-functions in Eqs. (7), (8), (10), and (14), respectively. The last challenge is to use these coupled inequalities to derive the finite-sample bound. To elaborate, we first restate all the Lyapunov drift inequalities:

$$
\mathcal{L}_v(t+1) \leq \gamma \mathcal{L}_v(t) + 4\mathcal{L}_{\mathrm{sum}}(t) + 4\mathcal{L}_{\pi}(t,K) + 2\mathcal{L}_q^{1/2}(t,K) + 6\tau \log\left(A_{\max}\right), \tag{15}
$$

$$
\mathcal{L}_{\mathrm{sum}}(t+1) \leq \gamma \mathcal{L}_{\mathrm{sum}}(t) + 2\mathcal{L}_q^{1/2}(t,K), \tag{16}
$$

$$
\mathbb{E}_t\left[\mathcal{L}_{\pi}(t,k+1)\right] \leq \left(1 - 3\beta_k/4\right) \mathbb{E}_t\left[\mathcal{L}_{\pi}(t,k)\right] + C_1\left(\beta_k^2 + \beta_k \mathbb{E}_t\left[\mathcal{L}_q(t,k)\right] + \beta_k \mathcal{L}_{\mathrm{sum}}^2(t)\right), \tag{17}
$$

$$
\mathbb{E}_t\left[\mathcal{L}_q(t,k+1)\right] \leq \left(1 - c_{\tau}\alpha_k\right) \mathbb{E}_t\left[\mathcal{L}_q(t,k)\right] + \beta_k \mathbb{E}_t\left[\mathcal{L}_{\pi}(t,k)\right]/4 + C_3 z_k^2 \alpha_k^2. \tag{18}
$$

To decouple the Lyapunov inequalities stated above, our high-level ideas are (1) to use the drift inequalities in a combined rather than separate manner, and (2) to employ a bootstrapping procedure where we first derive a crude bound and then substitute it back into the drift inequalities to obtain a tighter bound.
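Each of the decoupled inequalities is ultimately a scalar recursion of the form $x_{t+1} \leq \rho x_t + c$ with $\rho \in (0,1)$; "repeatedly using the resulting inequality" means unrolling it to $x_T \leq \rho^T x_0 + c/(1-\rho)$, a geometric term plus a steady-state error. A minimal Python sketch of this unrolling, with purely illustrative constants:

```python
def unroll(rho, c, x0, T):
    # iterate x_{t+1} = rho * x_t + c, the equality case of the drift bound
    x = x0
    for _ in range(T):
        x = rho * x + c
    return x

rho, c, x0, T = 0.95, 1e-3, 10.0, 500   # illustrative constants
x_T = unroll(rho, c, x0, T)
bound = rho**T * x0 + c / (1 - rho)     # geometric decay + steady-state term
assert x_T <= bound + 1e-12
```

This is exactly the device behind Eqs. (21)-(22) below, where $\rho = 1 - (1-\gamma)/2$ and $c$ is the $o_K(1)$ additive error.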
We next present our approach. For ease of presentation, for a scalar-valued quantity $W$ that is a function of $k$ and/or $t$, we write $W = o_k(1)$ if $\lim_{k\to\infty} W = 0$ and $W = o_t(1)$ if $\lim_{t\to\infty} W = 0$. The explicit convergence rates of the $o_k(1)$ and $o_t(1)$ terms are revealed in the complete proof in Appendix D.6, but they are not important for the illustration here.

Step 1. Adding up Eqs. (17) and (18), using Condition 3.1, and then repeatedly applying the resulting inequality, we obtain

$$
\mathbb{E}_t\left[\mathcal{L}_{\pi}(t,k)\right] \leq \mathbb{E}_t\left[\mathcal{L}_{\pi}(t,k) + \mathcal{L}_q(t,k)\right] = o_k(1) + \mathcal{O}(1)\mathcal{L}_{\mathrm{sum}}^2(t), \quad \forall\, t,k. \tag{19}
$$

Step 2. Substituting the bound for $\mathbb{E}_t[\mathcal{L}_{\pi}(t,k)]$ from Eq. (19) into Eq. (18) and repeatedly applying the resulting inequality, we obtain

$$
\mathbb{E}_t\left[\mathcal{L}_q(t,K)\right] = o_K(1) + \mathcal{O}\left(c_{\alpha,\beta}\right)\mathcal{L}_{\mathrm{sum}}^2(t), \quad \forall\, t,
$$

which in turn implies (by first using Jensen's inequality and then taking total expectation) that

$$
\mathbb{E}\left[\mathcal{L}_q^{1/2}(t,K)\right] = o_K(1) + \mathcal{O}\left(c_{\alpha,\beta}^{1/2}\right)\mathbb{E}\left[\mathcal{L}_{\mathrm{sum}}(t)\right], \quad \forall\, t, \tag{20}
$$

where we recall that $c_{\alpha,\beta} = \beta_k/\alpha_k$ is the step-size ratio. Obtaining a factor of $\mathcal{O}(c_{\alpha,\beta}^{1/2})$ in front of $\mathbb{E}[\mathcal{L}_{\mathrm{sum}}(t)]$ is crucial for the decoupling procedure.

Step 3. Taking total expectation on both sides of Eq. (16) and then using the upper bound on $\mathbb{E}[\mathcal{L}_q^{1/2}(t,K)]$ obtained in Eq.
(20), we obtain

$$
\mathbb{E}\left[\mathcal{L}_{\mathrm{sum}}(t+1)\right] \leq \left(\gamma + \mathcal{O}\left(c_{\alpha,\beta}^{1/2}\right)\right)\mathbb{E}\left[\mathcal{L}_{\mathrm{sum}}(t)\right] + o_K(1), \quad \forall\, t.
$$

By choosing $c_{\alpha,\beta}$ so that $\mathcal{O}(c_{\alpha,\beta}^{1/2}) \leq (1-\gamma)/2$, the previous inequality implies

$$
\mathbb{E}\left[\mathcal{L}_{\mathrm{sum}}(t+1)\right] \leq \left(1 - \frac{1-\gamma}{2}\right)\mathbb{E}\left[\mathcal{L}_{\mathrm{sum}}(t)\right] + o_K(1), \quad \forall\, t, \tag{21}
$$

which can be applied repeatedly to obtain

$$
\mathbb{E}\left[\mathcal{L}_{\mathrm{sum}}(t)\right] = o_t(1) + o_K(1). \tag{22}
$$

Substituting this bound on $\mathbb{E}[\mathcal{L}_{\mathrm{sum}}(t)]$ into Eq. (19), we have

$$
\max\left(\mathbb{E}\left[\mathcal{L}_{\pi}(t,K)\right], \mathbb{E}\left[\mathcal{L}_q(t,K)\right]\right) = o_t(1) + o_K(1). \tag{23}
$$

Step 4. Substituting the bounds obtained for $\mathbb{E}[\mathcal{L}_{\pi}(t,K)]$, $\mathbb{E}[\mathcal{L}_q(t,K)]$, and $\mathbb{E}[\mathcal{L}_{\mathrm{sum}}(t)]$ in Eqs. (22) and (23) into Eq. (15), and then repeatedly applying the resulting inequality from $t=0$ to $t=T$, we have

$$
\mathbb{E}\left[\mathcal{L}_v(T)\right] = o_T(1) + o_K(1) + \mathcal{O}(\tau).
$$

Now that we have finite-sample bounds for $\mathbb{E}[\mathcal{L}_v(T)]$, $\mathbb{E}[\mathcal{L}_{\mathrm{sum}}(T)]$, $\mathbb{E}[\mathcal{L}_{\pi}(T,K)]$, and $\mathbb{E}[\mathcal{L}_q(T,K)]$, using them in Eq. (6) finally yields the desired finite-sample bound on the expected Nash gap.

Looking back at the decoupling procedure, Steps 2 and 3 are crucial. In fact, Step 1 already gives a bound on $\mathbb{E}_t[\mathcal{L}_q(t,k)]$, with additive error $\mathcal{O}(1)\mathbb{E}[\mathcal{L}_{\mathrm{sum}}(t)]$.
However, directly using this bound on $\mathbb{E}_t[\mathcal{L}_q(t,k)]$ in Eq. (16) would result in an expansive inequality for $\mathbb{E}[\mathcal{L}_{\mathrm{sum}}(t)]$. By performing Step 2, we obtain a tighter bound on $\mathbb{E}_t[\mathcal{L}_q(t,k)]$, with the additive error being $\mathcal{O}(c_{\alpha,\beta}^{1/2})\mathbb{E}[\mathcal{L}_{\mathrm{sum}}(t)]$. Furthermore, we can choose $c_{\alpha,\beta}$ small enough that, after using the bound from Eq. (20) in Eq. (16), the additive error $\mathcal{O}(c_{\alpha,\beta}^{1/2})\mathbb{E}[\mathcal{L}_{\mathrm{sum}}(t)]$ is dominated by the negative drift in Eq. (21).

# C Proof of Theorem 2.1

The proof is divided into four steps. In Appendix C.1, we prove an important boundedness property of the iterates generated by Algorithm 1. In Appendices C.2 and C.3, we analyze the evolution of the policies and the $q$-functions by establishing negative drift inequalities with respect to their associated Lyapunov functions. In Appendix C.4, we solve the coupled Lyapunov drift inequalities to prove Theorem 2.1. Moreover, we prove Corollary 2.1.1 in Appendix C.5. The statements and proofs of all supporting lemmas used in this section are presented in Appendix C.6.

# C.1 Boundedness of the Iterates

In this subsection, we show that the $q$-functions generated by Algorithm 1 are uniformly bounded from above, and that the entries of the policies are uniformly bounded from below. The following lemma is needed to establish this result.

Lemma C.1. For any $i \in \{1,2\}$ and $q^i \in \mathbb{R}^{|\mathcal{A}^i|}$, we have

$$
\min_{a^i \in \mathcal{A}^i} [\sigma_{\tau}(q^i)](a^i) \geq \frac{1}{(A_{\max}-1)\exp(2\|q^i\|_{\infty}/\tau) + 1}.
$$

Proof of Lemma C.1.
Given $i \in \{1,2\}$, for any $q^i \in \mathbb{R}^{|\mathcal{A}^i|}$ and $a^i \in \mathcal{A}^i$, we have

$$
\begin{aligned}
[\sigma_{\tau}(q^i)](a^i) &= \frac{\exp(q^i(a^i)/\tau)}{\sum_{\bar{a}^i \in \mathcal{A}^i} \exp(q^i(\bar{a}^i)/\tau)} \\
&= \frac{1}{\sum_{\bar{a}^i \neq a^i} \exp((q^i(\bar{a}^i) - q^i(a^i))/\tau) + 1} \\
&\geq \frac{1}{(|\mathcal{A}^i|-1)\exp(2\|q^i\|_{\infty}/\tau) + 1} \\
&\geq \frac{1}{(A_{\max}-1)\exp(2\|q^i\|_{\infty}/\tau) + 1}.
\end{aligned}
$$

Since the right-hand side of the previous inequality does not depend on $a^i$, we have the desired inequality.

We next derive the boundedness property in the following lemma.

Lemma C.2. It holds for all $k \geq 0$ and $i \in \{1,2\}$ that $\|q_k^i\|_{\infty} \leq 1$ and $\min_{a^i \in \mathcal{A}^i} \pi_k^i(a^i) \geq \ell_{\tau}$, where $\ell_{\tau} = [(A_{\max}-1)\exp(2/\tau) + 1]^{-1}$.

Proof of Lemma C.2. We prove the results by induction. Since $q_0^i = 0$ and $\pi_0^i$ is initialized as the uniform distribution on $\mathcal{A}^i$, the base case holds. Now suppose that the results hold for some $k \geq 0$.
Using the update equation for $q_k^i$ in Algorithm 1 Line 5, we have

$$
\begin{aligned}
|q_{k+1}^i(a^i)| &= \left|\left(1 - \alpha_k \mathbb{1}_{\{a^i = A_k^i\}}\right) q_k^i(a^i) + \alpha_k \mathbb{1}_{\{a^i = A_k^i\}} R_i(A_k^i, A_k^{-i})\right| \\
&\leq \max\left( \left|q_k^i(a^i)\right|,\ (1-\alpha_k)\left|q_k^i(a^i)\right| + \alpha_k \left|R_i(A_k^i, A_k^{-i})\right| \right) \\
&\leq 1
\end{aligned}
$$

for any $a^i \in \mathcal{A}^i$, where the last line follows from the induction hypothesis $\|q_k^i\|_{\infty} \leq 1$ and the fact that $|R_i(a^i, a^{-i})| \leq 1$ for all $(a^i, a^{-i})$. As for $\pi_{k+1}^i$, using the update equation for $\pi_k^i$ in Algorithm 1 Line 3, we have

$$
\begin{aligned}
\pi_{k+1}^i(a^i) &= (1-\beta_k)\pi_k^i(a^i) + \beta_k [\sigma_{\tau}(q_k^i)](a^i) \\
&\geq (1-\beta_k)\ell_{\tau} + \frac{\beta_k}{(A_{\max}-1)\exp(2\|q_k^i\|_{\infty}/\tau) + 1} \quad \text{(Lemma C.1)} \\
&\geq (1-\beta_k)\ell_{\tau} + \beta_k \ell_{\tau} \quad \text{($\|q_k^i\|_{\infty} \leq 1$ by the induction hypothesis)} \\
&= \ell_{\tau}.
\end{aligned}
$$

The induction is complete.
# C.2 Analysis of the Policies

Let $V_R : \Delta(\mathcal{A}^1) \times \Delta(\mathcal{A}^2) \mapsto \mathbb{R}$ be the regularized Nash gap defined as

$$
V_R(\mu^1, \mu^2) = \sum_{i=1,2} \max_{\hat{\mu}^i \in \Delta(\mathcal{A}^i)} \left\{ (\hat{\mu}^i - \mu^i)^{\top} R_i \mu^{-i} + \tau \nu(\hat{\mu}^i) - \tau \nu(\mu^i) \right\},
$$

where $\nu(\cdot)$ is the entropy function. A sequence of properties of $V_R(\cdot,\cdot)$ is provided in Lemma C.7. For simplicity of notation, we use $\nabla_1 V_R(\cdot,\cdot)$ (respectively, $\nabla_2 V_R(\cdot,\cdot)$) to denote the gradient with respect to the first (respectively, second) argument.

Lemma C.3. It holds for all $k \geq 0$ that

$$
\mathbb{E}[V_R(\pi_{k+1}^1, \pi_{k+1}^2)] \leq \left(1 - \frac{\beta_k}{2}\right)\mathbb{E}[V_R(\pi_k^1, \pi_k^2)] + \frac{\ell_{\tau}\alpha_k}{4} \sum_{i=1,2} \mathbb{E}[\|q_k^i - R_i \pi_k^{-i}\|_2^2] + 2L_{\tau}\beta_k^2,
$$

where we recall that $L_{\tau} = \tau/\ell_{\tau} + A_{\max}^2/\tau$.

Proof of Lemma C.3. Using the smoothness property of $V_R(\cdot,\cdot)$ (cf.
Lemma C.7 (1)) and the update equation in Algorithm 1, Line 3, we have for any $k \geq 0$ that

$$
\begin{array}{l} V_R(\pi_{k+1}^1, \pi_{k+1}^2) \leq V_R(\pi_k^1, \pi_k^2) + \beta_k \left\langle \nabla_2 V_R(\pi_k^1, \pi_k^2), \sigma_{\tau}(q_k^2) - \pi_k^2 \right\rangle \\ + \beta_k \langle \nabla_1 V_R(\pi_k^1, \pi_k^2), \sigma_{\tau}(q_k^1) - \pi_k^1 \rangle + \frac{L_{\tau} \beta_k^2}{2} \sum_{i = 1, 2} \| \sigma_{\tau}(q_k^i) - \pi_k^i \|_2^2 \\ \leq V_R(\pi_k^1, \pi_k^2) + \beta_k \left\langle \nabla_2 V_R(\pi_k^1, \pi_k^2), \sigma_{\tau}(R_2 \pi_k^1) - \pi_k^2 \right\rangle \\ + \beta_k \langle \nabla_1 V_R(\pi_k^1, \pi_k^2), \sigma_{\tau}(R_1 \pi_k^2) - \pi_k^1 \rangle \\ + \beta_k \langle \nabla_2 V_R(\pi_k^1, \pi_k^2), \sigma_{\tau}(q_k^2) - \sigma_{\tau}(R_2 \pi_k^1) \rangle \\ + \beta_k \langle \nabla_1 V_R(\pi_k^1, \pi_k^2), \sigma_{\tau}(q_k^1) - \sigma_{\tau}(R_1 \pi_k^2) \rangle + 2 L_{\tau} \beta_k^2 \\ \leq \left(1 - \frac{\beta_k}{2}\right) V_R(\pi_k^1, \pi_k^2) + 4 \beta_k \left( \frac{1}{\tau \ell_{\tau}^2} + \frac{A_{\max}^2}{\tau^3} \right) \sum_{i = 1, 2} \| q_k^i - R_i \pi_k^{-i} \|_2^2 + 2 L_{\tau} \beta_k^2, \\ \end{array}
$$

where the last line follows from Lemma C.7 (2) and (3).
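For completeness, here is the small calculation (ours, not spelled out in the text) showing how Condition 2.1 turns the coefficient $4\beta_k\left(\frac{1}{\tau\ell_\tau^2}+\frac{A_{\max}^2}{\tau^3}\right)$ coming from Lemma C.7 (3) into $\frac{\ell_\tau \alpha_k}{4}$:

```latex
% With \beta_k \le \alpha_k \cdot \tau\ell_\tau^3/32 and
% \beta_k \le \alpha_k \cdot \ell_\tau\tau^3/(32 A_{\max}^2) (Condition 2.1),
\frac{4\beta_k}{\tau\ell_\tau^2}
  \le \frac{4\alpha_k}{\tau\ell_\tau^2}\cdot\frac{\tau\ell_\tau^3}{32}
  = \frac{\ell_\tau\alpha_k}{8},
\qquad
\frac{4\beta_k A_{\max}^2}{\tau^3}
  \le \frac{4\alpha_k A_{\max}^2}{\tau^3}\cdot\frac{\ell_\tau\tau^3}{32 A_{\max}^2}
  = \frac{\ell_\tau\alpha_k}{8},
```

and the two bounds sum to the coefficient $\frac{\ell_\tau \alpha_k}{4}$ appearing in Lemma C.3.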
Taking expectations on both sides of the previous inequality and using the condition that $c_{\alpha, \beta} = \frac{\beta_k}{\alpha_k} \leq \min \left( \frac{\tau \ell_{\tau}^3}{32}, \frac{\ell_{\tau} \tau^3}{32 A_{\max}^2} \right)$ (cf. Condition 2.1), we have

$$
\mathbb{E}[V_R(\pi_{k+1}^1, \pi_{k+1}^2)] \leq \left(1 - \frac{\beta_k}{2}\right) \mathbb{E}[V_R(\pi_k^1, \pi_k^2)] + \frac{\ell_{\tau} \alpha_k}{4} \sum_{i = 1, 2} \mathbb{E}[\| q_k^i - R_i \pi_k^{-i} \|_2^2] + 2 L_{\tau} \beta_k^2.
$$

The proof is complete.

□

# C.3 Analysis of the $q$-Functions

We study the $q$-functions generated by Algorithm 1 through a stochastic approximation framework. For $i \in \{1,2\}$, let $F^i: \mathbb{R}^{|\mathcal{A}^i|} \times \mathcal{A}^i \times \mathcal{A}^{-i} \mapsto \mathbb{R}^{|\mathcal{A}^i|}$ be an operator defined as

$$
[F^i(q^i, a_0^i, a_0^{-i})](a^i) = \mathbb{1}_{\{a_0^i = a^i\}} \left( R_i(a_0^i, a_0^{-i}) - q^i(a_0^i) \right), \quad \forall \left(q^i, a_0^i, a_0^{-i}\right) \text{ and } a^i.
$$

Then, Algorithm 1, Line 5 can be compactly written as

$$
q_{k+1}^i = q_k^i + \alpha_k F^i\left(q_k^i, A_k^i, A_k^{-i}\right). \tag{24}
$$

Given a joint policy $(\pi^1, \pi^2)$, let $\bar{F}_{\pi}^i: \mathbb{R}^{|\mathcal{A}^i|} \mapsto \mathbb{R}^{|\mathcal{A}^i|}$ be defined as

$$
\bar{F}_{\pi}^i\left(q^i\right) := \mathbb{E}_{A^i \sim \pi^i(\cdot), A^{-i} \sim \pi^{-i}(\cdot)} \left[ F^i\left(q^i, A^i, A^{-i}\right) \right] = \operatorname{diag}\left(\pi^i\right) \left( R_i \pi^{-i} - q^i \right).
$$

Then, Eq.
(24) can be viewed as a stochastic approximation algorithm for solving the slowly time-varying equation $\bar{F}_{\pi_k}^i (q^i) = 0$ + +Lemma C.4. The following inequality holds for all $k \geq 0$ : + +$$ +\sum_ {i = 1, 2} \mathbb {E} [ \| q _ {k + 1} ^ {i} - R _ {i} \pi_ {k + 1} ^ {- i} \| _ {2} ^ {2} ] \leq \left(1 - \frac {\ell_ {\tau} \alpha_ {k}}{2}\right) \sum_ {i = 1, 2} \mathbb {E} [ \| q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \| _ {2} ^ {2} ] + \frac {\beta_ {k}}{4} \mathbb {E} [ V _ {R} (\pi_ {k} ^ {1}, \pi_ {k} ^ {2}) ] + 1 6 \alpha_ {k} ^ {2}. +$$ + +Proof of Lemma C.4. For any $k\geq 0$ and $i\in \{1,2\}$ , we have + +$$ +\begin{array}{l} \left\| q _ {k + 1} ^ {i} - R _ {i} \pi_ {k + 1} ^ {- i} \right\| _ {2} ^ {2} \\ = \| q _ {k + 1} ^ {i} - q _ {k} ^ {i} + q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} + R _ {i} \pi_ {k} ^ {- i} - R _ {i} \pi_ {k + 1} ^ {- i} \| _ {2} ^ {2} \\ = \| q _ {k + 1} ^ {i} - q _ {k} ^ {i} \| _ {2} ^ {2} + \| q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \| _ {2} ^ {2} + \| R _ {i} \pi_ {k} ^ {- i} - R _ {i} \pi_ {k + 1} ^ {- i} \| _ {2} ^ {2} + 2 \langle q _ {k + 1} ^ {i} - q _ {k} ^ {i}, q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \rangle \\ \end{array} +$$ + +$$ +\begin{array}{l} + 2 \left\langle q _ {k + 1} ^ {i} - q _ {k} ^ {i}, R _ {i} \pi_ {k} ^ {- i} - R _ {i} \pi_ {k + 1} ^ {- i} \right\rangle + 2 \left\langle R _ {i} \pi_ {k} ^ {- i} - R _ {i} \pi_ {k + 1} ^ {- i}, q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \right\rangle \\ = \alpha_ {k} ^ {2} \| F ^ {i} \left(q _ {k} ^ {i}, A _ {k} ^ {i}, A _ {k} ^ {- i}\right) \| _ {2} ^ {2} + \| q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \| _ {2} ^ {2} + \beta_ {k} ^ {2} \| R _ {i} \left(\sigma_ {\tau} \left(q _ {k} ^ {- i}\right) - \pi_ {k} ^ {- i}\right) \| _ {2} ^ {2} \\ + 2 \alpha_ {k} \left\langle \bar {F} _ {\pi_ {k}} ^ {i} \left(q _ {k} ^ {i}\right), q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \right\rangle + 2 \alpha_ {k} \left\langle F ^ {i} \left(q _ {k} ^ {i}, A _ {k} ^ {i}, A _ {k} ^ {- 
i}\right) - \bar{F}_{\pi_k}^i\left(q_k^i\right), q_k^i - R_i \pi_k^{-i} \right\rangle \\ - 2 \alpha_k \beta_k \langle F^i\left(q_k^i, A_k^i, A_k^{-i}\right), R_i\left(\sigma_{\tau}\left(q_k^{-i}\right) - \pi_k^{-i}\right) \rangle - 2 \beta_k \langle R_i\left(\sigma_{\tau}\left(q_k^{-i}\right) - \pi_k^{-i}\right), q_k^i - R_i \pi_k^{-i} \rangle \tag{Algorithm 1 Lines 3 and 5} \\ \leq \alpha_k^2 \| F^i\left(q_k^i, A_k^i, A_k^{-i}\right) \|_2^2 + \| q_k^i - R_i \pi_k^{-i} \|_2^2 + \beta_k^2 \| R_i\left(\sigma_{\tau}\left(q_k^{-i}\right) - \pi_k^{-i}\right) \|_2^2 \\ + 2 \alpha_k \left\langle \bar{F}_{\pi_k}^i\left(q_k^i\right), q_k^i - R_i \pi_k^{-i} \right\rangle + 2 \alpha_k \left\langle F^i\left(q_k^i, A_k^i, A_k^{-i}\right) - \bar{F}_{\pi_k}^i\left(q_k^i\right), q_k^i - R_i \pi_k^{-i} \right\rangle \\ + 2 \alpha_k \beta_k \| F^i\left(q_k^i, A_k^i, A_k^{-i}\right) \|_2 \| R_i\left(\sigma_{\tau}\left(q_k^{-i}\right) - \pi_k^{-i}\right) \|_2 \\ + 2 \beta_k \| R_i\left(\sigma_{\tau}\left(q_k^{-i}\right) - \pi_k^{-i}\right) \|_2 \| q_k^i - R_i \pi_k^{-i} \|_2 \quad (\text{Cauchy-Schwarz inequality}) \\ \leq \alpha_k^2 \| F^i\left(q_k^i, A_k^i, A_k^{-i}\right) \|_2^2 + \| q_k^i - R_i \pi_k^{-i} \|_2^2 + \beta_k^2 \| R_i\left(\sigma_{\tau}\left(q_k^{-i}\right) - \pi_k^{-i}\right) \|_2^2 \\ + 2 \alpha_k \left\langle \bar{F}_{\pi_k}^i\left(q_k^i\right), q_k^i - R_i
\pi_ {k} ^ {- i} \right\rangle + 2 \alpha_ {k} \left\langle F ^ {i} \left(q _ {k} ^ {i}, A _ {k} ^ {i}, A _ {k} ^ {- i}\right) - \bar {F} _ {\pi_ {k}} ^ {i} \left(q _ {k} ^ {i}\right), q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \right\rangle \\ + \frac {\alpha_ {k} \beta_ {k}}{c _ {1}} \| F ^ {i} \left(q _ {k} ^ {i}, A _ {k} ^ {i}, A _ {k} ^ {- i}\right) \| _ {2} ^ {2} + \alpha_ {k} \beta_ {k} c _ {1} \| R _ {i} \left(\sigma_ {\tau} \left(q _ {k} ^ {- i}\right) - \pi_ {k} ^ {- i}\right) \| _ {2} ^ {2} \\ + \frac {\beta_ {k}}{c _ {2}} \| R _ {i} \left(\sigma_ {\tau} \left(q _ {k} ^ {- i}\right) - \pi_ {k} ^ {- i}\right) \| _ {2} ^ {2} + c _ {2} \beta_ {k} \| q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \| _ {2} ^ {2} \\ \end{array} +$$ + +(This follows from the AM-GM inequality, where $c_{1}, c_{2} > 0$ can be arbitrary.) + +$$ +\begin{array}{l} = \left(\alpha_ {k} ^ {2} + \frac {\alpha_ {k} \beta_ {k}}{c _ {1}}\right) \| F ^ {i} \left(q _ {k} ^ {i}, A _ {k} ^ {i}, A _ {k} ^ {- i}\right) \| _ {2} ^ {2} + \left(1 + c _ {2} \beta_ {k}\right) \| q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \| _ {2} ^ {2} \\ + 2 \alpha_ {k} \left\langle \bar {F} _ {\pi_ {k}} ^ {i} \left(q _ {k} ^ {i}\right), q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \right\rangle + 2 \alpha_ {k} \left\langle F ^ {i} \left(q _ {k} ^ {i}, A _ {k} ^ {i}, A _ {k} ^ {- i}\right) - \bar {F} _ {\pi_ {k}} ^ {i} \left(q _ {k} ^ {i}\right), q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \right\rangle \\ + \left(\beta_ {k} ^ {2} + \frac {\beta_ {k}}{c _ {2}} + \alpha_ {k} \beta_ {k} c _ {1}\right) \| R _ {i} \| _ {2} ^ {2} \| \sigma_ {\tau} \left(q _ {k} ^ {- i}\right) - \pi_ {k} ^ {- i} \| _ {2} ^ {2}. 
\\ \end{array} +$$ + +Taking expectations on both sides of the previous inequality, we have + +$$ +\begin{array}{l} \mathbb {E} [ \| q _ {k + 1} ^ {i} - R _ {i} \pi_ {k + 1} ^ {- i} \| ^ {2} ] \\ \leq \left(\alpha_ {k} ^ {2} + \frac {\alpha_ {k} \beta_ {k}}{c _ {1}}\right) \mathbb {E} [ \| F ^ {i} (q _ {k} ^ {i}, A _ {k} ^ {i}, A _ {k} ^ {- i}) \| _ {2} ^ {2} ] + (1 + c _ {2} \beta_ {k}) \mathbb {E} [ \| q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \| _ {2} ^ {2} ] \\ + 2 \alpha_ {k} \mathbb {E} [ \langle \bar {F} _ {\pi_ {k}} ^ {i} (q _ {k} ^ {i}), q _ {k} ^ {i} - R _ {i} \pi_ {k} ^ {- i} \rangle ] + \left(\beta_ {k} ^ {2} + \frac {\beta_ {k}}{c _ {2}} + \alpha_ {k} \beta_ {k} c _ {1}\right) \| R _ {i} \| _ {2} ^ {2} \mathbb {E} [ \| \sigma_ {\tau} (q _ {k} ^ {- i}) - \pi_ {k} ^ {- i} \| _ {2} ^ {2} ], \\ \end{array} +$$ + +where the term $\mathbb{E}[\langle F^i (q_k^i,A_k^i,A_k^{-i}) - \bar{F}_{\pi_k}^i (q_k^i),q_k^i -R_i\pi_k^{-i}\rangle ]$ vanishes due to the tower property of conditional expectations. 
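As a quick illustration of why that cross term is mean-zero, here is a Monte Carlo sanity check (ours, with a made-up 2x2 payoff matrix) that the sampled operator $F^i$ has conditional mean $\bar{F}_{\pi}^i(q^i) = \operatorname{diag}(\pi^i)(R_i \pi^{-i} - q^i)$:

```python
import random

# Monte Carlo check (ours, hypothetical 2x2 game): the sampled operator
#   [F^i(q, a0, a0_opp)](a) = 1{a0 = a} (R_i(a0, a0_opp) - q(a0))
# has mean diag(pi^i) (R_i pi^{-i} - q) under A^i ~ pi^i, A^{-i} ~ pi^{-i},
# so the noise F^i - F_bar^i has zero conditional mean.
random.seed(1)
R = [[1.0, -0.5], [-1.0, 0.5]]  # hypothetical payoff matrix R_i
pi = [0.3, 0.7]                  # policy of player i
pi_opp = [0.6, 0.4]              # policy of the opponent
q = [0.2, -0.1]

def sample(dist):
    u, c = random.random(), 0.0
    for a, p in enumerate(dist):
        c += p
        if u <= c:
            return a
    return len(dist) - 1

n = 200000
mean_F = [0.0, 0.0]
for _ in range(n):
    a, a_opp = sample(pi), sample(pi_opp)
    mean_F[a] += (R[a][a_opp] - q[a]) / n

# Closed-form mean: diag(pi) (R pi_opp - q)
expected = [pi[a] * (sum(R[a][b] * pi_opp[b] for b in range(2)) - q[a])
            for a in range(2)]
for a in range(2):
    assert abs(mean_F[a] - expected[a]) < 0.01
```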
To proceed, observe that

$$
\begin{array}{l} \mathbb{E}\left[ \| F^i\left(q_k^i, A_k^i, A_k^{-i}\right) \|_2^2 \right] = \mathbb{E}\left[ \sum_{a^i \in \mathcal{A}^i} \mathbb{1}_{\left\{A_k^i = a^i\right\}} \left( R_i\left(A_k^i, A_k^{-i}\right) - q_k^i\left(A_k^i\right) \right)^2 \right] \\ = \mathbb{E}\left[ \left( R_i\left(A_k^i, A_k^{-i}\right) - q_k^i\left(A_k^i\right) \right)^2 \right] \\ \leq \mathbb{E}[ (| R_i(A_k^i, A_k^{-i}) | + | q_k^i(A_k^i) |)^2 ] \\ \leq 4 \tag{Lemma C.2} \\ \end{array}
$$

and

$$
\begin{array}{l} \mathbb{E}[ \langle \bar{F}_{\pi_k}^i(q_k^i), q_k^i - R_i \pi_k^{-i} \rangle ] = \mathbb{E}[ \langle \operatorname{diag}(\pi_k^i) (R_i \pi_k^{-i} - q_k^i), q_k^i - R_i \pi_k^{-i} \rangle ] \\ \leq - \ell_{\tau} \mathbb{E}\left[ \| q_k^i - R_i \pi_k^{-i} \|_2^2 \right]. \tag{Lemma C.2} \\ \end{array}
$$

In addition, we have

$$
\begin{array}{l} \mathbb{E}[ \| \sigma_{\tau}\left(q_k^{-i}\right) - \pi_k^{-i} \|_2^2 ] \leq 2 \mathbb{E}[ \| \sigma_{\tau}\left(q_k^{-i}\right) - \sigma_{\tau}\left(R_{-i} \pi_k^i\right) \|_2^2 ] + 2 \mathbb{E}[ \| \sigma_{\tau}\left(R_{-i} \pi_k^i\right) - \pi_k^{-i} \|_2^2 ] \\ (\| a + b \|_2^2 \leq 2 \| a \|_2^2 + 2 \| b \|_2^2) \\ \leq \frac{2}{\tau^2} \mathbb{E}\left[ \left\| q_k^{-i} - R_{-i} \pi_k^i \right\|_2^2 \right] + \frac{4}{\tau} \mathbb{E}\left[ V_R\left(\pi_k^1, \pi_k^2\right) \right], \\ \end{array}
$$

where the last line follows from the $\frac{1}{\tau}$-Lipschitz continuity of $\sigma_\tau(\cdot)$ and Lemma C.6.
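The $\frac{2}{\tau^2}$ factor above relies on the softmax $\sigma_\tau$ being $\frac{1}{\tau}$-Lipschitz in the Euclidean norm. A numerical spot-check of that property (ours, on random inputs):

```python
import math
import random

# Spot-check (ours): the temperature-tau softmax is (1/tau)-Lipschitz in ||.||_2,
# the property used to bound ||sigma_tau(q) - sigma_tau(R pi)||_2
# by ||q - R pi||_2 / tau.
random.seed(2)

def softmax(x, tau):
    m = max(x)
    e = [math.exp((v - m) / tau) for v in x]
    s = sum(e)
    return [v / s for v in e]

def l2(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

tau = 0.5
for _ in range(1000):
    x = [random.uniform(-2.0, 2.0) for _ in range(5)]
    y = [random.uniform(-2.0, 2.0) for _ in range(5)]
    assert l2(softmax(x, tau), softmax(y, tau)) <= l2(x, y) / tau + 1e-12
```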
Using the previous four inequalities together, we obtain

$$
\begin{array}{l} \mathbb{E}[ \| q_{k+1}^i - R_i \pi_{k+1}^{-i} \|_2^2 ] \leq 4 \left( \alpha_k^2 + \frac{\alpha_k \beta_k}{c_1} \right) + (1 - 2 \ell_{\tau} \alpha_k + c_2 \beta_k) \mathbb{E}[ \| q_k^i - R_i \pi_k^{-i} \|_2^2 ] \\ + \frac{4 A_{\max}^2}{\tau} \left( \beta_k^2 + \frac{\beta_k}{c_2} + \alpha_k \beta_k c_1 \right) \mathbb{E}[ V_R(\pi_k^1, \pi_k^2) ] \\ + \frac{2 A_{\max}^2}{\tau^2} \left( \beta_k^2 + \frac{\beta_k}{c_2} + \alpha_k \beta_k c_1 \right) \mathbb{E}[ \| q_k^{-i} - R_{-i} \pi_k^i \|_2^2 ]. \\ \end{array}
$$

Summing up the previous inequality for $i \in \{1, 2\}$, we have

$$
\begin{array}{l} \sum_{i = 1, 2} \mathbb{E}[ \| q_{k+1}^i - R_i \pi_{k+1}^{-i} \|_2^2 ] \\ \leq \left( 1 - 2 \ell_{\tau} \alpha_k + c_2 \beta_k + \frac{2 A_{\max}^2}{\tau^2} \left( \beta_k^2 + \frac{\beta_k}{c_2} + \alpha_k \beta_k c_1 \right) \right) \sum_{i = 1, 2} \mathbb{E}[ \| q_k^i - R_i \pi_k^{-i} \|_2^2 ] \\ + \frac{8 A_{\max}^2}{\tau} \left( \beta_k^2 + \frac{\beta_k}{c_2} + \alpha_k \beta_k c_1 \right) \mathbb{E}\left[ V_R\left(\pi_k^1, \pi_k^2\right) \right] + 8 \left( \alpha_k^2 + \frac{\alpha_k \beta_k}{c_1} \right) \\ = \left( 1 - \frac{3 \ell_{\tau} \alpha_k}{2} + \frac{2 A_{\max}^2}{\tau^2} \left( 2 \beta_k^2 + \frac{2 \beta_k^2}{\ell_{\tau} \alpha_k} \right) \right) \sum_{i = 1, 2} \mathbb{E}[ \| q_k^i - R_i \pi_k^{-i} \|_2^2 ] \\ + \frac{8 A_{\max}^2}{\tau} \left( 2 \beta_k^2 + \frac{2 \beta_k^2}{\ell_{\tau} \alpha_k} \right) \mathbb{E}\left[ V_R\left(\pi_k^1, \pi_k^2\right) \right] + 16 \alpha_k^2 \quad (\text{choosing } c_1 = \tfrac{\beta_k}{\alpha_k} \text{ and } c_2 = \tfrac{\ell_{\tau} \alpha_k}{2 \beta_k}) \\ \leq \left( 1 - \frac{\ell_{\tau} \alpha_k}{2} \right) \sum_{i = 1, 2} \mathbb{E}[ \| q_k^i - R_i \pi_k^{-i} \|_2^2 ] + \frac{\beta_k}{4} \mathbb{E}[ V_R(\pi_k^1, \pi_k^2) ] + 16 \alpha_k^2, \\ \end{array}
$$

where the last line follows from $c_{\alpha, \beta} \leq \min \left( \frac{\tau^2 \ell_\tau}{8 A_{\max}^2}, \frac{\tau \ell_\tau}{128 A_{\max}^2} \right)$ and $\beta_0 \leq \frac{\tau}{128 A_{\max}^2}$ (cf. Condition 2.1). The proof is complete.

# C.4 Solving the Coupled Lyapunov Inequalities

For simplicity of notation, denote $\mathcal{L}_q(k) = \sum_{i=1,2} \mathbb{E}[\|q_k^i - R_i \pi_k^{-i}\|_2^2]$ and $\mathcal{L}_{\pi}(k) = \mathbb{E}[V_R(\pi_k^1, \pi_k^2)]$. Then, Lemmas C.3 and C.4 state that

$$
\mathcal{L}_{\pi}(k+1) \leq \left(1 - \frac{\beta_k}{2}\right) \mathcal{L}_{\pi}(k) + \frac{\ell_{\tau} \alpha_k}{4} \mathcal{L}_q(k) + 2 L_{\tau} \beta_k^2,
$$

and

$$
\mathcal{L}_q(k+1) \leq \left(1 - \frac{\ell_{\tau} \alpha_k}{2}\right) \mathcal{L}_q(k) + \frac{\beta_k}{4} \mathcal{L}_{\pi}(k) + 16 \alpha_k^2.
$$

Adding up the previous two inequalities, we obtain

$$
\begin{array}{l} \mathcal{L}_q(k+1) + \mathcal{L}_{\pi}(k+1) \leq \left(1 - \frac{\beta_k}{4}\right) \mathcal{L}_{\pi}(k) + 2 L_{\tau} \beta_k^2 + \left(1 - \frac{\ell_{\tau} \alpha_k}{4}\right) \mathcal{L}_q(k) + 16 \alpha_k^2 \\ \leq \left(1 - \frac{\beta_k}{4}\right) \left( \mathcal{L}_{\pi}(k) + \mathcal{L}_q(k) \right) + 2 L_{\tau} \beta_k^2 + 16 \alpha_k^2, \tag{25} \\ \end{array}
$$

where the second inequality follows from $c_{\alpha, \beta} \leq \ell_{\tau}$ (cf. Condition 2.1).

**Constant Stepsizes.** When using constant stepsizes, i.e., $\alpha_k \equiv \alpha$ and $\beta_k \equiv \beta$, repeatedly using Eq. (25), we have for all $k \geq 0$ that

$$
\begin{array}{l} \mathcal{L}_q(k) + \mathcal{L}_{\pi}(k) \leq \left(1 - \frac{\beta}{4}\right)^k \left( \mathcal{L}_{\pi}(0) + \mathcal{L}_q(0) \right) + 8 L_{\tau} \beta + 64 \alpha^2 / \beta \\ \leq \left(1 - \frac{\beta}{4}\right)^k \left( 4 + 2 \tau \log\left(A_{\max}\right) + 2 A_{\max} \right) + 8 L_{\tau} \beta + 64 \alpha^2 / \beta \\ \end{array}
$$

$$
= B_{\mathrm{in}} \left(1 - \frac{\beta}{4}\right)^k + 8 L_{\tau} \beta + \frac{64 \alpha}{c_{\alpha, \beta}},
$$

where the second inequality follows from

$$
\mathcal{L}_{\pi}(0) \leq 4 + 2 \tau \log\left(A_{\max}\right) \quad \text{and} \quad \mathcal{L}_q(0) \leq 2 A_{\max}.
$$

Theorem 2.1 (1) follows by observing that $\mathcal{L}_q(k) + \mathcal{L}_{\pi}(k) \geq \mathcal{L}_{\pi}(k) = \mathbb{E}[\mathrm{NG}_{\tau}(\pi_k^1, \pi_k^2)]$.

**Diminishing Stepsizes.** Consider using $\alpha_k = \frac{\alpha}{k + h}$ and $\beta_k = \frac{\beta}{k + h}$. Recursions of the form presented in Eq.
(25) have been well studied in the existing literature for the convergence rates of iterative algorithms [44, 24, 65]. Since $\beta > 4$ , using the same line of analysis as in [65, Appendix A.2], we have + +$$ +\mathcal {L} _ {q} (k) + \mathcal {L} _ {\pi} (k) \leq B _ {\mathrm {i n}} \left(\frac {h}{k + h}\right) ^ {\beta / 4} + \left(6 4 e L _ {\tau} \beta + 5 1 2 e \alpha / c _ {\alpha , \beta}\right) \frac {1}{k + h}. +$$ + +Theorem 2.1 (2) follows by observing that $\mathcal{L}_q(k) + \mathcal{L}_{\pi}(k)\geq \mathcal{L}_{\pi}(k) = \mathbb{E}[\mathrm{NG}_{\tau}(\pi_k^1,\pi_k^2)]$ + +# C.5 Proof of Corollary 2.1.1 + +We use Theorem 2.1 (1) to derive the sample complexity, and choose $\beta = c_{\alpha,\beta}\alpha$ with $c_{\alpha,\beta}$ satisfying Condition 2.1. To achieve $\mathbb{E}[\mathrm{NG}_{\tau}(\pi_K^1,\pi_K^2)] \leq \epsilon$ , in view of Theorem 2.1 (1), it is sufficient that + +$$ +B _ {\mathrm {i n}} e ^ {- \beta K / 4} \leq \frac {\epsilon}{3}, \quad 8 L _ {\tau} \beta \leq \frac {\epsilon}{3}, \quad 6 4 \alpha / c _ {\alpha , \beta} \leq \frac {\epsilon}{3}, +$$ + +which implies $\beta = \mathcal{O}(\epsilon)$ . It follows that $K = \mathcal{O}\left(\epsilon^{-1}\right)$ . + +# C.6 Supporting Lemmas + +Lemma C.5. For any $i\in \{1,2\}$ and $\mu_1^i,\mu_2^i\in \{\mu^i\in \Delta (\mathcal{A}^i)\mid \min_{a^i\in \mathcal{A}^i}\mu^i (a^i)\geq \ell_\tau \}$ , we have + +$$ +\| \nabla \nu (\mu_ {1} ^ {i}) - \nabla \nu (\mu_ {2} ^ {i}) \| _ {2} \leq \frac {1}{\ell_ {\tau}} \| \mu_ {1} ^ {i} - \mu_ {2} ^ {i} \| _ {2}. +$$ + +Proof of Lemma C.5. 
For any $i \in \{1,2\}$ and $\mu^i \in \Delta(\mathcal{A}^i)$ such that $\min_{a^i \in \mathcal{A}^i} \mu^i(a^i) \geq \ell_\tau$, the Hessian of $\nu(\cdot)$ satisfies

$$
\left\| \nabla^2 \nu(\mu^i) \right\|_2 = \left\| \operatorname{diag}\left(\mu^i\right)^{-1} \right\|_2 \leq \frac{1}{\min_{a^i \in \mathcal{A}^i} \mu^i(a^i)} \leq \frac{1}{\ell_{\tau}}.
$$

Therefore, the gradient $\nabla \nu(\cdot)$ of the entropy function is $\frac{1}{\ell_{\tau}}$-Lipschitz continuous with respect to $\|\cdot\|_2$ on the set $\{\mu^i \in \Delta(\mathcal{A}^i) \mid \min_{a^i \in \mathcal{A}^i} \mu^i(a^i) \geq \ell_\tau\}$, which implies the $\frac{1}{\ell_{\tau}}$-smoothness of $\nu(\cdot)$.

Lemma C.6. For $i \in \{1,2\}$, we have for all $\mu^i \in \Delta(\mathcal{A}^i)$ and $\mu^{-i} \in \Delta(\mathcal{A}^{-i})$ that

$$
\| \sigma_{\tau}(R_i \mu^{-i}) - \mu^i \|_2^2 \leq \frac{2}{\tau} V_R(\mu^1, \mu^2).
$$

Proof of Lemma C.6. Recall that the entropy $\nu(\cdot)$ is 1-strongly concave with respect to $\|\cdot\|_2$. Therefore, given $i \in \{1,2\}$ and a fixed $\mu^{-i}$, the function

$$
\mu^i \mapsto \max_{\hat{\mu}^i \in \Delta(\mathcal{A}^i)} \left\{ \left(\hat{\mu}^i - \mu^i\right)^{\top} R_i \mu^{-i} + \tau \nu\left(\hat{\mu}^i\right) - \tau \nu\left(\mu^i\right) \right\}
$$

is $\tau$-strongly convex in $\mu^i$ and is minimized at $\mu^i = \sigma_{\tau}(R_i \mu^{-i})$, where its value is nonnegative. As a result, by the quadratic growth property of strongly convex functions, we have

$$
\| \sigma_{\tau}(R_i \mu^{-i}) - \mu^i \|_2^2 \leq \frac{2}{\tau} \max_{\hat{\mu}^i \in \Delta(\mathcal{A}^i)} \left\{ (\hat{\mu}^i - \mu^i)^{\top} R_i \mu^{-i} + \tau \nu(\hat{\mu}^i) - \tau \nu(\mu^i) \right\} \leq \frac{2}{\tau} V_R(\mu^1, \mu^2).
$$

□

Denote $\Pi_{\tau} = \{(\pi^1, \pi^2) \in \Delta(\mathcal{A}^1) \times \Delta(\mathcal{A}^2) \mid \min_{a^1 \in \mathcal{A}^1} \pi^1(a^1) \geq \ell_{\tau}, \min_{a^2 \in \mathcal{A}^2} \pi^2(a^2) \geq \ell_{\tau}\}$.

Lemma C.7. The function $V_R(\cdot, \cdot)$ has the following properties.

(1) The function $V_R(\mu^1, \mu^2)$ is $L_{\tau}$-smooth on $\Pi_{\tau}$, where $L_{\tau} = \frac{\tau}{\ell_{\tau}} + \frac{A_{\max}^2}{\tau}$.
(2) It holds for any $(\mu^1, \mu^2) \in \Pi_\tau$ that

$$
\langle \nabla_1 V_R(\mu^1, \mu^2), \sigma_{\tau}(R_1 \mu^2) - \mu^1 \rangle + \langle \nabla_2 V_R(\mu^1, \mu^2), \sigma_{\tau}(R_2 \mu^1) - \mu^2 \rangle \leq - V_R(\mu^1, \mu^2).
$$

(3) For any $q^1 \in \mathbb{R}^{|\mathcal{A}^1|}$ and $q^2 \in \mathbb{R}^{|\mathcal{A}^2|}$, we have for all $(\mu^1, \mu^2) \in \Pi_\tau$ that

$$
\begin{array}{l} \langle \nabla_1 V_R(\mu^1, \mu^2), \sigma_{\tau}(q^1) - \sigma_{\tau}(R_1 \mu^2) \rangle + \langle \nabla_2 V_R(\mu^1, \mu^2), \sigma_{\tau}(q^2) - \sigma_{\tau}(R_2 \mu^1) \rangle \\ \leq \frac{1}{2} V_R(\mu^1, \mu^2) + 4 \left( \frac{1}{\tau \ell_{\tau}^2} + \frac{A_{\max}^2}{\tau^3} \right) \sum_{i = 1, 2} \| q^i - R_i \mu^{-i} \|_2^2. \\ \end{array}
$$

Proof of Lemma C.7. Recall the definition of $V_R(\cdot, \cdot)$:

$$
V_R(\mu^1, \mu^2) = \sum_{i = 1, 2} \max_{\hat{\mu}^i \in \Delta(\mathcal{A}^i)} \left\{ (\hat{\mu}^i - \mu^i)^{\top} R_i \mu^{-i} + \tau \nu(\hat{\mu}^i) - \tau \nu(\mu^i) \right\}.
$$

By Danskin's theorem [95], we have

$$
\begin{array}{l} \nabla_1 V_R\left(\mu^1, \mu^2\right) = - \tau \nabla \nu\left(\mu^1\right) + \left(R_2\right)^{\top} \sigma_{\tau}\left(R_2 \mu^1\right), \\ \nabla_2 V_R(\mu^1, \mu^2) = - \tau \nabla \nu(\mu^2) + (R_1)^{\top} \sigma_{\tau}(R_1 \mu^2), \\ \end{array}
$$

both of which will be used frequently in our analysis.

(1) For any $(\mu^1, \mu^2), (\bar{\mu}^1, \bar{\mu}^2) \in \Pi_\tau$, we have

$$
\begin{array}{l} \left\| \nabla_1 V_R\left(\mu^1, \mu^2\right) - \nabla_1 V_R\left(\bar{\mu}^1, \bar{\mu}^2\right) \right\|_2 \\ = \| \tau \nabla \nu(\bar{\mu}^1) - \tau \nabla \nu(\mu^1) + (R_2)^{\top} \sigma_{\tau}(R_2 \mu^1) - (R_2)^{\top} \sigma_{\tau}(R_2 \bar{\mu}^1) \|_2 \\ \leq \tau \| \nabla \nu(\bar{\mu}^1) - \nabla \nu(\mu^1) \|_2 + \| R_2 \|_2 \| \sigma_{\tau}(R_2 \mu^1) - \sigma_{\tau}(R_2 \bar{\mu}^1) \|_2 \\ \leq \frac{\tau}{\ell_{\tau}} \| \mu^1 - \bar{\mu}^1 \|_2 + \frac{\| R_2 \|_2^2}{\tau} \| \mu^1 - \bar{\mu}^1 \|_2 \\ \leq \left( \frac{\tau}{\ell_{\tau}} + \frac{A_{\max}^2}{\tau} \right) \| \mu^1 - \bar{\mu}^1 \|_2, \\ \end{array}
$$

where the second-to-last inequality follows from Lemma C.5 and $\sigma_{\tau}(\cdot)$ being $\frac{1}{\tau}$-Lipschitz continuous with respect to $\| \cdot \|_2$ [96], and the last inequality follows from $\| R_i \|_2 \leq \sqrt{|\mathcal{A}^1| |\mathcal{A}^2|} \leq A_{\max}$ for $i \in \{1, 2\}$.
Similarly, we also have

$$
\| \nabla_2 V_R\left(\mu^1, \mu^2\right) - \nabla_2 V_R\left(\bar{\mu}^1, \bar{\mu}^2\right) \|_2 \leq \left( \frac{\tau}{\ell_{\tau}} + \frac{A_{\max}^2}{\tau} \right) \| \mu^2 - \bar{\mu}^2 \|_2.
$$

It follows from the previous two inequalities that

$$
\begin{array}{l} \left\| \nabla V_R\left(\mu^1, \mu^2\right) - \nabla V_R\left(\bar{\mu}^1, \bar{\mu}^2\right) \right\|_2^2 \\ = \| \nabla_1 V_R\left(\mu^1, \mu^2\right) - \nabla_1 V_R\left(\bar{\mu}^1, \bar{\mu}^2\right) \|_2^2 + \| \nabla_2 V_R\left(\mu^1, \mu^2\right) - \nabla_2 V_R\left(\bar{\mu}^1, \bar{\mu}^2\right) \|_2^2 \\ \leq \left( \frac{\tau}{\ell_{\tau}} + \frac{A_{\max}^2}{\tau} \right)^2 \sum_{i = 1, 2} \| \mu^i - \bar{\mu}^i \|_2^2, \\ \end{array}
$$

which implies that $V_R(\cdot, \cdot)$ is an $L_{\tau}$-smooth function on $\Pi_{\tau}$ [97], where $L_{\tau} = \frac{\tau}{\ell_{\tau}} + \frac{A_{\max}^2}{\tau}$.

(2) The result follows from Lemma D.7 by setting $X_i = R_i$, $i \in \{1, 2\}$, and by observing that $R_1 + (R_2)^{\top} = 0$.
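The Danskin gradient formula above can also be checked numerically. The sketch below (ours, on a small matching-pennies game with made-up numbers) compares $\langle \nabla_1 V_R(\mu^1, \mu^2), d \rangle$ against a central finite difference, using $\nu(\mu) = -\sum_a \mu_a \log \mu_a$, $\nabla \nu(\mu) = -(1 + \log \mu)$, and the closed form $\max_{\hat{\mu} \in \Delta} \{\hat{\mu}^\top x + \tau \nu(\hat{\mu})\} = \tau \log \sum_a e^{x_a/\tau}$:

```python
import math

# Finite-difference check (ours) of
#   grad_1 V_R(mu1, mu2) = -tau * grad nu(mu1) + (R_2)^T sigma_tau(R_2 mu1).
tau = 0.7
R1 = [[1.0, -1.0], [-1.0, 1.0]]  # matching pennies
R2 = [[-1.0, 1.0], [1.0, -1.0]]  # satisfies R1 + R2^T = 0

def matvec(M, v):
    return [sum(M[a][b] * v[b] for b in range(2)) for a in range(2)]

def softmax(x):
    m = max(x)
    e = [math.exp((v - m) / tau) for v in x]
    s = sum(e)
    return [v / s for v in e]

def V_R(mu1, mu2):
    total = 0.0
    for R, mu, mu_opp in ((R1, mu1, mu2), (R2, mu2, mu1)):
        x = matvec(R, mu_opp)
        m = max(x)
        best = m + tau * math.log(sum(math.exp((v - m) / tau) for v in x))
        cur = sum(mu[a] * x[a] for a in range(2)) - tau * sum(p * math.log(p) for p in mu)
        total += best - cur
    return total

mu1, mu2 = [0.3, 0.7], [0.55, 0.45]
sm = softmax(matvec(R2, mu1))
grad1 = [tau * (1.0 + math.log(mu1[a])) + sum(R2[b][a] * sm[b] for b in range(2))
         for a in range(2)]

d, eps = [1.0, -1.0], 1e-5  # direction tangent to the simplex
numeric = (V_R([mu1[a] + eps * d[a] for a in range(2)], mu2)
           - V_R([mu1[a] - eps * d[a] for a in range(2)], mu2)) / (2 * eps)
assert abs(numeric - sum(grad1[a] * d[a] for a in range(2))) < 1e-5
```

Note how the zero-sum identity $R_1 + (R_2)^\top = 0$ is what makes the extra terms in the direct differentiation cancel, exactly as in the proof of property (2).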
(3) Using the formula for the gradient of $V_R(\cdot, \cdot)$ from the beginning of the proof, we have

$$
\begin{array}{l} \langle \nabla_1 V_R(\mu^1, \mu^2), \sigma_{\tau}(q^1) - \sigma_{\tau}(R_1 \mu^2) \rangle \\ = \left\langle - \tau \nabla \nu\left(\mu^1\right) + \left(R_2\right)^{\top} \sigma_{\tau}\left(R_2 \mu^1\right), \sigma_{\tau}\left(q^1\right) - \sigma_{\tau}\left(R_1 \mu^2\right) \right\rangle \\ = \tau \left\langle \nabla \nu\left(\sigma_{\tau}\left(R_1 \mu^2\right)\right) - \nabla \nu\left(\mu^1\right), \sigma_{\tau}\left(q^1\right) - \sigma_{\tau}\left(R_1 \mu^2\right) \right\rangle \\ + \left( \sigma_{\tau}\left(R_2 \mu^1\right) - \mu^2 \right)^{\top} R_2 \left( \sigma_{\tau}\left(q^1\right) - \sigma_{\tau}\left(R_1 \mu^2\right) \right) \\ \end{array}
$$

(This follows from the first-order optimality condition $R_1 \mu^2 + \tau \nabla \nu(\sigma_{\tau}(R_1 \mu^2)) = 0$ and the zero-sum property $R_1 + (R_2)^{\top} = 0$.)

$$
\begin{array}{l} \leq \frac{\tau}{2 c_1} \| \nabla \nu(\sigma_{\tau}(R_1 \mu^2)) - \nabla \nu(\mu^1) \|_2^2 + \frac{\tau c_1}{2} \| \sigma_{\tau}(q^1) - \sigma_{\tau}(R_1 \mu^2) \|_2^2 \\ + \frac{1}{2 c_2} \| \sigma_{\tau}(R_2 \mu^1) - \mu^2 \|_2^2 + \frac{c_2}{2} \| R_2 (\sigma_{\tau}(q^1) - \sigma_{\tau}(R_1 \mu^2)) \|_2^2 \\ \end{array}
$$

(This follows from the AM-GM inequality, where $c_1, c_2 > 0$ can be arbitrary.)

$$
\begin{array}{l} \leq \frac{\tau}{2 c_1 \ell_{\tau}^2} \| \sigma_{\tau}(R_1 \mu^2) - \mu^1 \|_2^2 + \frac{c_1}{2 \tau} \| q^1 - R_1 \mu^2 \|_2^2 \\ + \frac{1}{2 c_2} \| \sigma_{\tau}\left(R_2 \mu^1\right) - \mu^2 \|_2^2 + \frac{c_2 \| R_2 \|_2^2}{2 \tau^2} \| q^1 - R_1 \mu^2 \|_2^2 \tag{Lemma C.5} \\ \end{array}
$$

$$
\begin{array}{l} \leq \left( \frac{1}{c_1 \ell_{\tau}^2} + \frac{1}{\tau c_2} \right) V_R\left(\mu^1, \mu^2\right) + \frac{c_1}{2 \tau} \| q^1 - R_1 \mu^2 \|_2^2 + \frac{c_2 \| R_2 \|_2^2}{2 \tau^2} \| q^1 - R_1 \mu^2 \|_2^2 \tag{Lemma C.6} \\ \leq \frac{1}{4} V_R(\mu^1, \mu^2) + \frac{4}{\tau \ell_{\tau}^2} \| q^1 - R_1 \mu^2 \|_2^2 + \frac{4 \| R_2 \|_2^2}{\tau^3} \| q^1 - R_1 \mu^2 \|_2^2, \\ \end{array}
$$

where the last line follows by choosing $c_1 = \frac{8}{\ell_{\tau}^2}$ and $c_2 = \frac{8}{\tau}$. Similarly, we also have

$$
\begin{array}{l} \langle \nabla_2 V_R(\mu^1, \mu^2), \sigma_{\tau}(q^2) - \sigma_{\tau}(R_2 \mu^1) \rangle \leq \frac{1}{4} V_R(\mu^1, \mu^2) + \frac{4}{\tau \ell_{\tau}^2} \| q^2 - R_2 \mu^1 \|_2^2 \\ + \frac{4 \| R_1 \|_2^2}{\tau^3} \| q^2 - R_2 \mu^1 \|_2^2. \\ \end{array}
$$

Summing up the previous two inequalities, we obtain

$$
\begin{array}{l} \langle \nabla_1 V_R(\mu^1, \mu^2), \sigma_{\tau}(q^1) - \sigma_{\tau}(R_1 \mu^2) \rangle + \langle \nabla_2 V_R(\mu^1, \mu^2), \sigma_{\tau}(q^2) - \sigma_{\tau}(R_2 \mu^1) \rangle \\ \leq \frac{1}{2} V_R(\mu^1, \mu^2) + \left( \frac{4}{\tau \ell_{\tau}^2} + \frac{4 A_{\max}^2}{\tau^3} \right) \sum_{i = 1, 2} \| q^i - R_i \mu^{-i} \|_2^2, \\ \end{array}
$$

where we used $\| R_i \|_2 \leq A_{\max}$ for $i \in \{1, 2\}$.

□

# D Proof of Theorem 3.1

We begin by introducing a summary of notation in Appendix D.1.
In Appendix D.2, we establish an important boundedness property of the $q$-functions, value functions, and policies generated by Algorithm 2. In Appendix D.3, we bound the Nash gap in terms of the Lyapunov functions. In Appendices D.4 and D.5, we analyze the outer loop and the inner loop of Algorithm 2 and establish the Lyapunov drift inequalities. Finally, in Appendix D.6, we solve the coupled Lyapunov inequalities to obtain the finite-sample bound. The proofs of all supporting lemmas are provided in Appendix D.7, and the proof of Corollary 3.1.1 is provided in Appendix D.8.

# D.1 Notation

We begin with a summary of the notation that will be used in the proof.

(1) Given a pair of matrices $\{X_i \in \mathbb{R}^{|\mathcal{A}^i| \times |\mathcal{A}^{-i}|}\}_{i \in \{1,2\}}$ and a pair of distributions $\{\mu^i \in \Delta(\mathcal{A}^i)\}_{i \in \{1,2\}}$, we define

$$
V_X\left(\mu^1, \mu^2\right) = \sum_{i = 1, 2} \max_{\hat{\mu}^i \in \Delta\left(\mathcal{A}^i\right)} \left\{ \left(\hat{\mu}^i - \mu^i\right)^{\top} X_i \mu^{-i} + \tau \nu\left(\hat{\mu}^i\right) - \tau \nu\left(\mu^i\right) \right\}, \tag{26}
$$

where $\nu(\cdot)$ is the entropy function. Note that $V_X(\cdot, \cdot)$ is similar to $V_R(\cdot, \cdot)$ defined in Appendix C.2 in the setting of matrix games. However, we do not assume that $X_1 + (X_2)^{\top} = 0$.

(2) Given a pair of value functions $(v^1, v^2)$ and a state $s \in \mathcal{S}$, when $X_i = \mathcal{T}^i(v^i)(s)$, $i \in \{1, 2\}$, we write $V_{v,s}(\cdot, \cdot)$ for $V_X(\cdot, \cdot)$.
(3) For any joint policy $(\pi^1, \pi^2)$ and state $s$, given $i \in \{1, 2\}$, we define $v_{*, \pi^{-i}}^i(s) = \max_{\hat{\pi}^i} v_{\hat{\pi}^i, \pi^{-i}}^i(s)$, $v_{\pi^i, *}^i(s) = \min_{\hat{\pi}^{-i}} v_{\pi^i, \hat{\pi}^{-i}}^i(s)$, $v_{\pi^{-i}, *}^{-i}(s) = \min_{\hat{\pi}^i} v_{\pi^{-i}, \hat{\pi}^i}^{-i}(s)$, and $v_{*, \pi^i}^{-i}(s) = \max_{\hat{\pi}^{-i}} v_{\hat{\pi}^{-i}, \pi^i}^{-i}(s)$. Note that we have $v_{*, \pi^2}^1 + v_{\pi^2, *}^2 = 0$ and $v_{\pi^1, *}^1 + v_{*, \pi^1}^2 = 0$ because of the zero-sum structure.
(4) For $i \in \{1, 2\}$, denote by $v_*^i$ the unique fixed point of the equation $\mathcal{B}^i(v^i) = v^i$, where $\mathcal{B}^i(\cdot)$ is the minimax Bellman operator defined in Section 3. Note that we have $v_*^1 + v_*^2 = 0$.
(5) For any $t, k \geq 0$ and $i \in \{1,2\}$, let $\bar{q}_{t,k}^i \in \mathbb{R}^{|\mathcal{S}| |\mathcal{A}^i|}$ be defined as $\bar{q}_{t,k}^i(s) = \mathcal{T}^i(v_t^i)(s) \pi_{t,k}^{-i}(s)$ for all $s \in \mathcal{S}$. In addition, let

$$
\mathcal{L}_{\mathrm{sum}}(t) = \| v_t^1 + v_t^2 \|_{\infty}, \quad \mathcal{L}_v(t) = \sum_{i = 1, 2} \| v_t^i - v_*^i \|_{\infty},
$$

$$
\mathcal{L}_q(t, k) = \sum_{i = 1, 2} \sum_{s \in \mathcal{S}} \| q_{t,k}^i(s) - \mathcal{T}^i\left(v_t^i\right)(s) \pi_{t,k}^{-i}(s) \|_2^2 = \sum_{i = 1, 2} \| q_{t,k}^i - \bar{q}_{t,k}^i \|_2^2,
$$

$$
\mathcal{L}_{\pi}(t, k) = \max_{s \in \mathcal{S}} V_{v_t, s}(\pi_{t,k}^1(s), \pi_{t,k}^2(s)),
$$

which will be the Lyapunov functions used in the analysis.

(6) Given $k_1 \leq k_2$, we denote $\beta_{k_1, k_2} = \sum_{k = k_1}^{k_2} \beta_k$ and $\alpha_{k_1, k_2} = \sum_{k = k_1}^{k_2} \alpha_k$.
(7) Recall that $z_{k} = t(\ell_{\tau}, \beta_{k})$ is the uniform mixing time defined in Lemma B.1 (2), where $\ell_{\tau}$ is the uniform lower bound of the policies. When using constant step sizes, $z_{k}$ is not a function of $k$ , and is simply denoted by $z_{\beta}$ . Observe that, due to the uniform geometric mixing property established in Lemma B.1 (2), we have $z_{k} = \mathcal{O}(\log(k))$ when using $\mathcal{O}(1/k)$ step sizes and $z_{\beta} = \mathcal{O}(\log(1/\beta))$ when using constant step sizes. Let $k_{0} = \min \{k : k \geq z_{k}\}$ , which is well defined because $z_{k}$ grows logarithmically with $k$ .

# D.2 Boundedness of the Iterates

We first show in the following lemma that the $q$ -functions and the $v$ -functions generated by Algorithm 2 are uniformly bounded from above, and the policies are uniformly bounded from below. In the context of stochastic games, we redefine $\ell_{\tau} = [1 + (A_{\max} - 1)\exp (2 / [(1 - \gamma)\tau ])]^{-1}$ .

Lemma D.1. For all $t, k$ and $i \in \{1, 2\}$ , we have (1) $\| v_t^i \|_\infty \leq 1 / (1 - \gamma)$ and $\| q_{t,k}^i \|_\infty \leq 1 / (1 - \gamma)$ , and (2) $\min_{s \in S, a^i \in \mathcal{A}^i} \pi_{t,k}^i(a^i \mid s) \geq \ell_\tau$ .

Proof of Lemma D.1. The proof uses induction arguments. Let $i \in \{1, 2\}$ .

(1) Given $t \geq 0$ , we first show by induction that, if $\| v_{t}^{i}\|_{\infty} \leq \frac{1}{1 - \gamma}$ and $\| q_{t,0}^{i}\|_{\infty} \leq \frac{1}{1 - \gamma}$ , we have $\| q_{t,k}^{i}\|_{\infty} \leq \frac{1}{1 - \gamma}$ for all $k \geq 0$ . The base case $\| q_{t,0}^{i}\|_{\infty} \leq \frac{1}{1 - \gamma}$ holds by our assumption. Suppose that $\| q_{t,k}^{i}\|_{\infty} \leq \frac{1}{1 - \gamma}$ for some $k \geq 0$ .
Then, by Algorithm 2 Line 6, we have for all $(s,a^i)$ that + +$$ +\begin{array}{l} \left| q _ {t, k + 1} ^ {i} (s, a ^ {i}) \right| \\ = \left| q _ {t, k} ^ {i} (s, a ^ {i}) + \alpha_ {k} \mathbb {1} _ {\left\{\left(s, a ^ {i}\right) = \left(S _ {k}, A _ {k} ^ {i}\right) \right\}} \left(R _ {i} \left(S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}\right) + \gamma v _ {t} ^ {i} \left(S _ {k + 1}\right) - q _ {t, k} ^ {i} \left(S _ {k}, A _ {k} ^ {i}\right)\right) \right| \\ \leq \left(1 - \alpha_ {k} \mathbb {1} _ {\left\{\left(s, a ^ {i}\right) = \left(S _ {k}, A _ {k} ^ {i}\right) \right\}}\right) \left| q _ {t, k} ^ {i} \left(s, a ^ {i}\right) \right| \\ + \alpha_ {k} \mathbb {1} _ {\left\{(s, a ^ {i}) = \left(S _ {k}, A _ {k} ^ {i}\right) \right\}} \left| R _ {i} \left(S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}\right) + \gamma v _ {t} ^ {i} \left(S _ {k + 1}\right) \right| \\ \leq \left(1 - \alpha_ {k} \mathbb {1} _ {\left\{\left(s, a ^ {i}\right) = \left(S _ {k}, A _ {k} ^ {i}\right) \right\}}\right) \frac {1}{1 - \gamma} + \alpha_ {k} \mathbb {1} _ {\left\{\left(s, a ^ {i}\right) = \left(S _ {k}, A _ {k} ^ {i}\right) \right\}} \left(1 + \frac {\gamma}{1 - \gamma}\right) \tag {27} \\ = \frac {1}{1 - \gamma}, \\ \end{array} +$$ + +where Eq. (27) follows from the induction hypothesis $\| q_{t,k}^i\|_\infty \leq \frac{1}{1 - \gamma}$ , our assumption that $\| v_t^i\|_\infty \leq \frac{1}{1 - \gamma}$ , and $\max_{s,a^i,a^{-i}}|R_i(s,a^i,a^{-i})|\leq 1$ . The induction is now complete and we have $\| q_{t,k}^i\|_\infty \leq \frac{1}{1 - \gamma}$ for all $k\geq 0$ whenever $\| v_t^i\|_\infty \leq \frac{1}{1 - \gamma}$ and $\| q_{t,0}^i\|_\infty \leq \frac{1}{1 - \gamma}$ . + +We next again use induction to show that $\| v_{t}^{i}\|_{\infty}\leq \frac{1}{1 - \gamma}$ and $\| q_{t,0}^{i}\|_{\infty}\leq \frac{1}{1 - \gamma}$ for all $t\geq 0$ . Our initialization ensures that $\| v_0^i\|_\infty \leq \frac{1}{1 - \gamma}$ and $\| q_{0,0}^{i}\|_{\infty}\leq \frac{1}{1 - \gamma}$ . 
Suppose that $\| v_t^i\|_{\infty}\leq \frac{1}{1 - \gamma}$ and $\| q_{t,0}^{i}\|_{\infty}\leq \frac{1}{1 - \gamma}$ for some $t\geq 0$ . Using the update equation for $v_{t + 1}^{i}$ (cf. Algorithm 2 Line 8) and the fact that $\| q_{t,k}^{i}\|_{\infty}\leq \frac{1}{1 - \gamma}$ for all $k\geq 0$ (established in the previous paragraph), we have for all $s\in S$ that + +$$ +| v _ {t + 1} ^ {i} (s) | = \left| \sum_ {a ^ {i} \in \mathcal {A} ^ {i}} \pi_ {t, K} ^ {i} (a ^ {i} | s) q _ {t, K} ^ {i} (s, a ^ {i}) \right| \leq \sum_ {a ^ {i} \in \mathcal {A} ^ {i}} \pi_ {t, K} ^ {i} (a ^ {i} | s) \| q _ {t, K} ^ {i} \| _ {\infty} \leq \frac {1}{1 - \gamma}, +$$ + +which implies $\| v_{t + 1}^i\|_\infty \leq \frac{1}{1 - \gamma}$ . Moreover, we have by Algorithm 2 Line 9 that $\| q_{t + 1,0}^i\|_\infty = \| q_{t,K}^i\|_\infty \leq \frac{1}{1 - \gamma}$ . The induction is now complete and we have $\| v_t^i\|_\infty \leq \frac{1}{1 - \gamma}$ and $\| q_{t,0}^i\|_\infty \leq \frac{1}{1 - \gamma}$ for all $t \geq 0$ . + +(2) We first use induction to show that, given $t \geq 0$ , if $\min_{s,a^i} \pi_{t,0}^i(a^i \mid s) \geq \ell_\tau$ , then we have $\min_{s,a^i} \pi_{t,k}^i(a^i \mid s) \geq \ell_\tau$ for all $k \in \{0,1,\dots,K\}$ . Since $\min_{s,a^i} \pi_{t,0}^i(a^i \mid s) \geq \ell_\tau$ by our assumption, we have the base case. Now suppose that $\min_{s \in S, a^i \in A^i} \pi_{t,k}^i(a^i \mid s) \geq \ell_\tau$ for some $k \geq 0$ . Then we have by Algorithm 2 Line 4 that + +$$ +\begin{array}{l} \pi_ {t, k + 1} ^ {i} (a ^ {i} \mid s) = (1 - \beta_ {k}) \pi_ {t, k} ^ {i} (a ^ {i} \mid s) + \beta_ {k} \sigma_ {\tau} (q _ {t, k} ^ {i} (s)) (a ^ {i}) \\ \geq \left(1 - \beta_ {k}\right) \ell_ {\tau} + \beta_ {k} \ell_ {\tau} \\ = \ell_ {\tau}, \\ \end{array} +$$ + +where the inequality follows from (1) the induction hypothesis, and (2) $\sigma_{\tau}(q_{t,k}^{i}(s))(a^{i})\geq \ell_{\tau}$ , which follows from Lemma D.1 (1) and Lemma C.1. The induction is complete. 
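The uniform lower bound $\sigma_{\tau}(q_{t,k}^{i}(s))(a^{i}) \geq \ell_{\tau}$ invoked in the last step depends only on the boundedness $\| q_{t,k}^{i}\|_{\infty} \leq \frac{1}{1-\gamma}$ established in part (1). The following minimal sketch checks this softmax inequality numerically; the helper `softmax_tau` and the parameter values are our own illustration, not part of Algorithm 2:

```python
import math
import random

def softmax_tau(q, tau):
    """sigma_tau(q)(a) = exp(q(a)/tau) / sum_b exp(q(b)/tau)."""
    m = max(q)  # shift by the max entry for numerical stability
    e = [math.exp((x - m) / tau) for x in q]
    s = sum(e)
    return [x / s for x in e]

# Lemma D.1 (2) rests on: ||q||_inf <= 1/(1-gamma) implies
# min_a sigma_tau(q)(a) >= ell_tau = [1 + (A-1) exp(2/((1-gamma) tau))]^{-1}.
# gamma, tau, and A below are arbitrary illustrative values.
gamma, tau, A = 0.5, 1.0, 5
bound = 1.0 / (1.0 - gamma)
ell_tau = 1.0 / (1.0 + (A - 1) * math.exp(2.0 / ((1.0 - gamma) * tau)))

rng = random.Random(0)
for _ in range(1000):
    q = [rng.uniform(-bound, bound) for _ in range(A)]
    assert min(softmax_tau(q, tau)) >= ell_tau
print(f"ell_tau = {ell_tau:.5f}")  # approximately 0.00456 for these values
```

The bound is attained in the limit where one entry of $q$ equals $-1/(1-\gamma)$ and every other entry equals $1/(1-\gamma)$ , so $\ell_{\tau}$ cannot be improved without further assumptions.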
We next again use induction to show that $\min_{s,a^i}\pi_{t,0}^i (a^i\mid s)\geq \ell_\tau$ for all $t\in \{0,1,\dots ,T\}$ . Since $\pi_{0,0}^i$ is initialized as the uniform policy, we have the base case. Now suppose that $\min_{s,a^i}\pi_{t,0}^i (a^i\mid s)\geq \ell_\tau$ for some $t\geq 0$ . Since this implies that $\min_{s,a^i}\pi_{t,k}^i (a^i\mid s)\geq \ell_\tau$ for all $k\in \{0,1,\dots ,K\}$ , and in addition, $\pi_{t + 1,0}^{i} = \pi_{t,K}^{i}$ according to Algorithm 2 Line 9, we have $\min_{s,a^i}\pi_{t + 1,0}^i (a^i\mid s)\geq \ell_\tau$ . The induction is complete.

# D.3 Bounding the Nash Gap

Our ultimate goal is to bound the Nash gap

$$
\mathrm {N G} \left(\pi_ {T, K} ^ {1}, \pi_ {T, K} ^ {2}\right) = \sum_ {i = 1, 2} \left(\max _ {\pi^ {i}} U ^ {i} \left(\pi^ {i}, \pi_ {T, K} ^ {- i}\right) - U ^ {i} \left(\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i}\right)\right). \tag {28}
$$

We first bound the Nash gap using the value functions of the output policies from Algorithm 2.

Lemma D.2. It holds that

$$
\sum_ {i = 1, 2} \left(\max _ {\pi^ {i}} U ^ {i} \left(\pi^ {i}, \pi_ {T, K} ^ {- i}\right) - U ^ {i} \left(\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i}\right)\right) \leq \sum_ {i = 1, 2} \left\| v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} - v _ {\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i}} ^ {i} \right\| _ {\infty}. \tag {29}
$$

Proof of Lemma D.2.
Using the definition of the expected value functions, we have

$$
\sum_ {i = 1, 2} \left(\max _ {\pi^ {i}} U ^ {i} (\pi^ {i}, \pi_ {T, K} ^ {- i}) - U ^ {i} (\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i})\right)
$$

$$
\begin{array}{l} = \sum_ {i = 1, 2} \left(\max _ {\pi^ {i}} \mathbb {E} _ {S \sim p _ {o}} \left[ v _ {\pi^ {i}, \pi_ {T, K} ^ {- i}} ^ {i} (S) - v _ {\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i}} ^ {i} (S) \right]\right) \\ \leq \sum_ {i = 1, 2} \left(\mathbb {E} _ {S \sim p _ {o}} \left[ \max _ {\pi^ {i}} v _ {\pi^ {i}, \pi_ {T, K} ^ {- i}} ^ {i} (S) - v _ {\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i}} ^ {i} (S) \right]\right) \quad (\text {Jensen's inequality}) \\ = \sum_ {i = 1, 2} \left(\mathbb {E} _ {S \sim p _ {o}} \left[ v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} (S) - v _ {\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i}} ^ {i} (S) \right]\right) \\ \leq \sum_ {i = 1, 2} \left\| v _ {* , \pi_ {T, K} ^ {- i}} ^ {i} - v _ {\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i}} ^ {i} \right\| _ {\infty}. \\ \end{array}
$$

The next lemma bounds the RHS of Eq. (29) using the actual iterates generated by Algorithm 2.

Lemma D.3. It holds for $i \in \{1,2\}$ that

$$
\left\| v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} - v _ {\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i}} ^ {i} \right\| _ {\infty} \leq \frac {2}{1 - \gamma} \left(2 \mathcal {L} _ {\mathrm {sum}} (T) + \mathcal {L} _ {v} (T) + \mathcal {L} _ {\pi} (T, K) + 2 \tau \log \left(A _ {\max }\right)\right).
$$

Proof of Lemma D.3.
For any $s \in S$ and $i \in \{1,2\}$ , we have

$$
\begin{array}{l} 0 \leq \left| v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} (s) - v _ {\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i}} ^ {i} (s) \right| \\ = v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} (s) - v _ {\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i}} ^ {i} (s) \\ \leq v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} (s) - v _ {\pi_ {T, K} ^ {i}, *} ^ {i} (s) \\ = - v _ {\pi_ {T, K} ^ {- i}, *} ^ {- i} (s) - v _ {\pi_ {T, K} ^ {i}, *} ^ {i} (s) \\ = v _ {*} ^ {- i} (s) - v _ {\pi_ {T, K} ^ {- i}, *} ^ {- i} (s) + v _ {*} ^ {i} (s) - v _ {\pi_ {T, K} ^ {i}, *} ^ {i} (s) \\ \leq \sum_ {j = 1, 2} \left\| v _ {*} ^ {- j} - v _ {\pi_ {T, K} ^ {- j}, *} ^ {- j} \right\| _ {\infty}. \\ \end{array}
$$

Since the RHS of the previous inequality does not depend on $s$ , we have for $i \in \{1, 2\}$ that

$$
\left\| v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} - v _ {\pi_ {T, K} ^ {i}, \pi_ {T, K} ^ {- i}} ^ {i} \right\| _ {\infty} \leq \sum_ {j = 1, 2} \left\| v _ {*} ^ {- j} - v _ {\pi_ {T, K} ^ {- j}, *} ^ {- j} \right\| _ {\infty}. \tag {30}
$$

It remains to bound the RHS of the previous inequality.
Observe that for any $s \in S$ and $i \in \{1, 2\}$ , we have + +$$ +\begin{array}{l} 0 \leq v _ {*} ^ {- i} (s) - v _ {\pi_ {T, K} ^ {- i}, *} ^ {- i} (s) \\ = v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} (s) - v _ {*} ^ {i} (s) \\ = \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} (\mu^ {i}) ^ {\top} \mathcal {T} ^ {i} (v _ {*, \pi_ {T, K} ^ {- i}} ^ {i}) (s) \pi_ {T, K} ^ {- i} (s) - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} (\mu^ {i}) ^ {\top} \mathcal {T} ^ {i} (v _ {*} ^ {i}) (s) \mu^ {- i} \\ \leq \left| \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {*, \pi_ {T, K} ^ {- i}} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) \right| \\ + \left| \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) (s) \mu^ {- i} \right| \\ \leq \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left| \left(\mu^ {i}\right) ^ {\top} \left(\mathcal {T} ^ {i} \left(v _ {*, \pi_ {T, K} ^ {- i}} ^ {i}\right) (s) - \mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) (s)\right) \pi_ {T, K} ^ {- i} (s) \right| \\ + \left| \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) \right| \\ + \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} 
^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) \mu^ {- i} \\ \end{array}
$$

$$
+ \left| \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) \mu^ {- i} - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) (s) \mu^ {- i} \right|. \tag {31}
$$

We next bound each term on the RHS of the previous inequality.

The 1st Term on the RHS of Eq. (31). Using the definition of $\mathcal{T}^i (\cdot)$ , we have

$$
\begin{array}{l} \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left| \left(\mu^ {i}\right) ^ {\top} \left(\mathcal {T} ^ {i} \left(v _ {*, \pi_ {T, K} ^ {- i}} ^ {i}\right) (s) - \mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) (s)\right) \pi_ {T, K} ^ {- i} (s) \right| \\ \leq \max _ {s, a ^ {i}, a ^ {- i}} \left| \mathcal {T} ^ {i} (v _ {*, \pi_ {T, K} ^ {- i}} ^ {i}) (s, a ^ {i}, a ^ {- i}) - \mathcal {T} ^ {i} (v _ {*} ^ {i}) (s, a ^ {i}, a ^ {- i}) \right| \\ = \gamma \max _ {s, a ^ {i}, a ^ {- i}} \left| \mathbb {E} \left[ v _ {*} ^ {i} (S _ {1}) - v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} (S _ {1}) \mid S _ {0} = s, A _ {0} ^ {i} = a ^ {i}, A _ {0} ^ {- i} = a ^ {- i} \right] \right| \\ \leq \gamma \max _ {s, a ^ {i}, a ^ {- i}} \mathbb {E} \left[ \left| v _ {*} ^ {i} (S _ {1}) - v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} (S _ {1}) \right| \mid S _ {0} = s, A _ {0} ^ {i} = a ^ {i}, A _ {0} ^ {- i} = a ^ {- i} \right] \\ \leq \gamma \left\| v _ {*} ^ {i} - v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} \right\| _ {\infty}. \\ \end{array}
$$

The 2nd Term on the RHS of Eq. (31).
Using the definition of $\mathcal{T}^i (\cdot)$ , we have + +$$ +\begin{array}{l} \left| \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) \right| \\ \leq \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left| \left(\mu^ {i}\right) ^ {\top} \left(\mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) (s) - \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s)\right) \pi_ {T, K} ^ {- i} (s) \right| \\ \leq \max _ {s, a ^ {i}, a ^ {- i}} \left| \mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) \left(s, a ^ {i}, a ^ {- i}\right) - \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) \left(s, a ^ {i}, a ^ {- i}\right) \right| \\ \leq \gamma \left\| v _ {*} ^ {i} - v _ {T} ^ {i} \right\| _ {\infty}. \\ \end{array} +$$ + +The 3rd Term on the RHS of Eq. (31). Bounding the third term requires more effort. 
To begin with, we decompose it in the following way:

$$
\begin{array}{l} \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} (\mu^ {i}) ^ {\top} \mathcal {T} ^ {i} (v _ {T} ^ {i}) (s) \pi_ {T, K} ^ {- i} (s) - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} (\mu^ {i}) ^ {\top} \mathcal {T} ^ {i} (v _ {T} ^ {i}) (s) \mu^ {- i} \\ \leq \left| \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} (\mu^ {i}) ^ {\top} \mathcal {T} ^ {i} (v _ {T} ^ {i}) (s) \pi_ {T, K} ^ {- i} (s) - \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \pi_ {T, K} ^ {i} (s) ^ {\top} \mathcal {T} ^ {i} (v _ {T} ^ {i}) (s) \mu^ {- i} \right| \\ \leq \left| \max _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} (\mu^ {- i}) ^ {\top} \mathcal {T} ^ {- i} (v _ {T} ^ {- i}) (s) \pi_ {T, K} ^ {i} (s) + \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} (\mu^ {- i}) ^ {\top} \mathcal {T} ^ {i} (v _ {T} ^ {i}) (s) ^ {\top} \pi_ {T, K} ^ {i} (s) \right| \\ + \left| \sum_ {i = 1, 2} \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) \right|. \tag {32} \\ \end{array}
$$

We next bound each term on the RHS of the previous inequality.
For the first term, we have by definition of $\mathcal{T}^i (\cdot)$ that

$$
\begin{array}{l} \left| \max _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left(\mu^ {- i}\right) ^ {\top} \mathcal {T} ^ {- i} \left(v _ {T} ^ {- i}\right) (s) \pi_ {T, K} ^ {i} (s) + \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left(\mu^ {- i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) ^ {\top} \pi_ {T, K} ^ {i} (s) \right| \\ = \left| \max _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left(\mu^ {- i}\right) ^ {\top} \mathcal {T} ^ {- i} \left(v _ {T} ^ {- i}\right) (s) \pi_ {T, K} ^ {i} (s) - \max _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left(\mu^ {- i}\right) ^ {\top} \left[ - \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) \right] ^ {\top} \pi_ {T, K} ^ {i} (s) \right| \\ \leq \max _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left| (\mu^ {- i}) ^ {\top} (\mathcal {T} ^ {- i} (v _ {T} ^ {- i}) (s) + \mathcal {T} ^ {i} (v _ {T} ^ {i}) (s) ^ {\top}) \pi_ {T, K} ^ {i} (s) \right| \\ \leq \max _ {s, a ^ {i}, a ^ {- i}} \left| \mathcal {T} ^ {- i} \left(v _ {T} ^ {- i}\right) \left(s, a ^ {i}, a ^ {- i}\right) + \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) \left(s, a ^ {i}, a ^ {- i}\right) \right| \\ = \gamma \max _ {s, a ^ {i}, a ^ {- i}} \left| \mathbb {E} \left[ v _ {T} ^ {- i} (S _ {1}) + v _ {T} ^ {i} (S _ {1}) \mid S _ {0} = s, A _ {0} ^ {i} = a ^ {i}, A _ {0} ^ {- i} = a ^ {- i} \right] \right| \\ \leq \gamma \left\| v _ {T} ^ {- i} + v _ {T} ^ {i} \right\| _ {\infty}. \\ \end{array}
$$

For the second term on the RHS of Eq.
(32), using the Lyapunov function $V_{v,s}(\cdot ,\cdot)$ (defined in Appendix D.1), we have

$$
\begin{array}{l} \left| \sum_ {i = 1, 2} \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) \right| \\ \leq \sum_ {i = 1, 2} \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i} - \pi_ {T, K} ^ {i} (s)\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) + \left| \sum_ {i = 1, 2} \left(\pi_ {T, K} ^ {i} (s)\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) \right| \\ \leq \sum_ {i = 1, 2} \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left\{\left(\mu^ {i} - \pi_ {T, K} ^ {i} (s)\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) + \tau \nu \left(\mu^ {i}\right) - \tau \nu \left(\pi_ {T, K} ^ {i} (s)\right) \right\} \\ + 2 \tau \log (A _ {\max }) + \left| \sum_ {i = 1, 2} \left(\pi_ {T, K} ^ {i} (s)\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) \pi_ {T, K} ^ {- i} (s) \right| \\ \leq V _ {v _ {T}, s} \left(\pi_ {T, K} ^ {i} (s), \pi_ {T, K} ^ {- i} (s)\right) + 2 \tau \log \left(A _ {\max }\right) \\ + \max _ {s, a ^ {i}, a ^ {- i}} \left| \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) \left(s, a ^ {i}, a ^ {- i}\right) + \mathcal {T} ^ {- i} \left(v _ {T} ^ {- i}\right) \left(s, a ^ {i}, a ^ {- i}\right) \right| \\ \leq V _ {v _ {T}, s} \left(\pi_ {T, K} ^ {i} (s), \pi_ {T, K} ^ {- i} (s)\right) + 2 \tau \log \left(A _ {\max }\right) + \gamma \| v _ {T} ^ {i} + v _ {T} ^ {- i} \| _ {\infty}. \\ \end{array}
$$

Using the previous two inequalities together in Eq.
(32), we obtain

$$
\begin{array}{l} \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} (\mu^ {i}) ^ {\top} \mathcal {T} ^ {i} (v _ {T} ^ {i}) (s) \pi_ {T, K} ^ {- i} (s) - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} (\mu^ {i}) ^ {\top} \mathcal {T} ^ {i} (v _ {T} ^ {i}) (s) \mu^ {- i} \\ \leq V _ {v _ {T}, s} \left(\pi_ {T, K} ^ {i} (s), \pi_ {T, K} ^ {- i} (s)\right) + 2 \gamma \| v _ {T} ^ {i} + v _ {T} ^ {- i} \| _ {\infty} + 2 \tau \log \left(A _ {\max }\right). \tag {33} \\ \end{array}
$$

The 4th Term on the RHS of Eq. (31). Using the definition of $\mathcal{T}^i (\cdot)$ , we have

$$
\begin{array}{l} \left| \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} (\mu^ {i}) ^ {\top} \mathcal {T} ^ {i} (v _ {T} ^ {i}) (s) \mu^ {- i} - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} (\mu^ {i}) ^ {\top} \mathcal {T} ^ {i} (v _ {*} ^ {i}) (s) \mu^ {- i} \right| \\ \leq \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \max _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left| \left(\mu^ {i}\right) ^ {\top} \left(\mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) (s) - \mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) (s)\right) \mu^ {- i} \right| \\ \leq \max _ {s, a ^ {i}, a ^ {- i}} \left| \mathcal {T} ^ {i} \left(v _ {T} ^ {i}\right) \left(s, a ^ {i}, a ^ {- i}\right) - \mathcal {T} ^ {i} \left(v _ {*} ^ {i}\right) \left(s, a ^ {i}, a ^ {- i}\right) \right| \\ \leq \gamma \| v _ {T} ^ {i} - v _ {*} ^ {i} \| _ {\infty}. \\ \end{array}
$$

Finally, using the upper bounds we obtained for all the terms on the RHS of Eq.
(31), we have + +$$ +\begin{array}{l} \left\| v _ {*} ^ {- i} - v _ {\pi_ {T, K} ^ {- i}, *} ^ {- i} \right\| _ {\infty} \leq \gamma \left\| v _ {*, \pi_ {T, K} ^ {- i}} ^ {i} - v _ {*} ^ {i} \right\| _ {\infty} + 2 \gamma \| v _ {T} ^ {i} + v _ {T} ^ {- i} \| _ {\infty} + 2 \gamma \| v _ {T} ^ {i} - v _ {*} ^ {i} \| _ {\infty} \\ + \max _ {s \in \mathcal {S}} V _ {v _ {T}, s} \left(\pi_ {T, K} ^ {i} (s), \pi_ {T, K} ^ {- i} (s)\right) + 2 \tau \log \left(A _ {\max }\right) \\ \leq \gamma \| v _ {*} ^ {- i} - v _ {\pi_ {T, K} ^ {- i}, *} ^ {- i} \| _ {\infty} + 2 \| v _ {T} ^ {i} + v _ {T} ^ {- i} \| _ {\infty} + 2 \| v _ {T} ^ {i} - v _ {*} ^ {i} \| _ {\infty} \\ + \max _ {s \in \mathcal {S}} V _ {v _ {T}, s} (\pi_ {T, K} ^ {i} (s), \pi_ {T, K} ^ {- i} (s)) + 2 \tau \log (A _ {\max }) \\ \end{array} +$$ + +Rearranging terms and using $\mathcal{L}_{\mathrm{sum}}(t)$ and $\mathcal{L}_{\pi}(t,k)$ to simplify the notation, we obtain + +$$ +\left\| v _ {*} ^ {- i} - v _ {\pi_ {T, K} ^ {- i}, *} ^ {- i} \right\| _ {\infty} \leq \frac {1}{1 - \gamma} \left(2 \mathcal {L} _ {\operatorname {s u m}} (T) + 2 \| v _ {T} ^ {i} - v _ {*} ^ {i} \| _ {\infty} + \mathcal {L} _ {\pi} (T, K) + 2 \tau \log (A _ {\max})\right). +$$ + +Summing up both sides of the previous inequality for $i \in \{1, 2\}$ , we have + +$$ +\sum_ {i = 1, 2} \left\| v _ {*} ^ {- i} - v _ {\pi_ {T, K} ^ {- i}, *} ^ {- i} \right\| _ {\infty} \leq \frac {2}{1 - \gamma} \left(2 \mathcal {L} _ {\mathrm {s u m}} (T) + \mathcal {L} _ {v} (T) + \mathcal {L} _ {\pi} (T, K) + 2 \tau \log (A _ {\mathrm {m a x}})\right). +$$ + +Using the previous inequality in Eq. (30), we have the desired result. + +Combining the results in Lemma D.2 and Lemma D.3 in Eq. (28), we have the following result, which bounds the Nash gap in terms of the Lyapunov functions defined in Appendix D.1. + +Lemma D.4. 
It holds that

$$
N G (\pi_ {T, K} ^ {1}, \pi_ {T, K} ^ {2}) \leq \frac {4}{1 - \gamma} \left(2 \mathcal {L} _ {\mathrm {sum}} (T) + \mathcal {L} _ {v} (T) + \mathcal {L} _ {\pi} (T, K) + 2 \tau \log (A _ {\max})\right).
$$

The next step is to bound the Lyapunov functions, which requires us to analyze the outer loop and the inner loop of Algorithm 2.

# D.4 Analysis of the Outer Loop: $v$ -Function Update

We first consider $\mathcal{L}_v(t)$ and establish a one-step Lyapunov drift inequality for it.

Lemma D.5. It holds for all $t \geq 0$ that

$$
\mathcal {L} _ {v} (t + 1) \leq \gamma \mathcal {L} _ {v} (t) + 4 \mathcal {L} _ {\mathrm {sum}} (t) + 2 \mathcal {L} _ {q} ^ {1 / 2} (t, K) + 4 \mathcal {L} _ {\pi} (t, K) + 6 \tau \log \left(A _ {\max }\right). \tag {34}
$$

Proof of Lemma D.5. For $i \in \{1, 2\}$ , using the outer-loop update equation (cf. Algorithm 2 Line 8) and the fact that $\mathcal{B}^i(v_*^i) = v_*^i$ , we have for any $t \geq 0$ and $s \in S$ that

$$
\begin{array}{l} v _ {t + 1} ^ {i} (s) - v _ {*} ^ {i} (s) = \pi_ {t, K} ^ {i} (s) ^ {\top} q _ {t, K} ^ {i} (s) - v _ {*} ^ {i} (s) \\ = \mathcal {B} ^ {i} (v _ {t} ^ {i}) (s) - \mathcal {B} ^ {i} (v _ {*} ^ {i}) (s) + \pi_ {t, K} ^ {i} (s) ^ {\top} q _ {t, K} ^ {i} (s) - \mathcal {B} ^ {i} (v _ {t} ^ {i}) (s).
\\ \end{array} +$$ + +Since the minimax Bellman operator $\mathcal{B}^i (\cdot)$ is a $\gamma$ -contraction mapping with respect to $\| \cdot \|_{\infty}$ , we have from the previous inequality that + +$$ +\begin{array}{l} \left| v _ {t + 1} ^ {i} (s) - v _ {*} ^ {i} (s) \right| \leq \left| \mathcal {B} ^ {i} \left(v _ {t} ^ {i}\right) (s) - \mathcal {B} ^ {i} \left(v _ {*} ^ {i}\right) (s) \right| + \left| \pi_ {t, K} ^ {i} (s) ^ {\top} q _ {t, K} ^ {i} (s) - \mathcal {B} ^ {i} \left(v _ {t} ^ {i}\right) (s) \right| \\ \leq \left\| \mathcal {B} ^ {i} \left(v _ {t} ^ {i}\right) - \mathcal {B} ^ {i} \left(v _ {*} ^ {i}\right) \right\| _ {\infty} + \left| \pi_ {t, K} ^ {i} (s) ^ {\top} q _ {t, K} ^ {i} (s) - \mathcal {B} ^ {i} \left(v _ {t} ^ {i}\right) (s) \right| \\ \leq \gamma \left\| v _ {t} ^ {i} - v _ {*} ^ {i} \right\| _ {\infty} + \left| \pi_ {t, K} ^ {i} (s) ^ {\top} q _ {t, K} ^ {i} (s) - \mathcal {B} ^ {i} \left(v _ {t} ^ {i}\right) (s) \right|. \tag {35} \\ \end{array} +$$ + +It remains to bound the second term on the RHS of Eq. (35). 
Using the definition of $\mathcal{B}^i (\cdot)$ , we have + +$$ +\begin{array}{l} \left| \pi_ {t, K} ^ {i} (s) ^ {\top} q _ {t, K} ^ {i} (s) - \mathcal {B} ^ {i} \left(v _ {t} ^ {i}\right) (s) \right| \\ = \left| \pi_ {t, K} ^ {i} (s) ^ {\top} q _ {t, K} ^ {i} (s) - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {t} ^ {i}\right) (s) \mu^ {- i} \right| \\ \leq \left| \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {t} ^ {i}\right) (s) \pi_ {t, K} ^ {- i} (s) - \pi_ {t, K} ^ {i} (s) ^ {\top} q _ {t, K} ^ {i} (s) \right| \\ + \left| \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} (\mu^ {i}) ^ {\top} \mathcal {T} ^ {i} (v _ {t} ^ {i}) (s) \pi_ {t, K} ^ {- i} (s) - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} (\mu^ {i}) ^ {\top} \mathcal {T} ^ {i} (v _ {t} ^ {i}) (s) \mu^ {- i} \right| \\ \leq \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i} - \pi_ {t, K} ^ {i} (s)\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {t} ^ {i}\right) (s) \pi_ {t, K} ^ {- i} (s) \\ + \left| \left(\pi_ {t, K} ^ {i} (s)\right) ^ {\top} \left(\mathcal {T} ^ {i} \left(v _ {t} ^ {i}\right) (s) \pi_ {t, K} ^ {- i} (s) - q _ {t, K} ^ {i} (s)\right) \right| \\ + \left| \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {t} ^ {i}\right) (s) \pi_ {t, K} ^ {- i} (s) - \max _ {\mu^ {i} \in \Delta (\mathcal {A} ^ {i})} \min _ {\mu^ {- i} \in \Delta (\mathcal {A} ^ {- i})} \left(\mu^ {i}\right) ^ {\top} \mathcal {T} ^ {i} \left(v _ {t} ^ {i}\right) (s) \mu^ {- i} \right| \\ \leq \left\| \mathcal {T} ^ {i} (v _ {t} ^ {i}) (s) \pi_ {t, K} ^ {- i} (s) - q _ {t, K} ^ {i} (s) \right\| _ {\infty} + 2 V _ {v _ {t}, s} (\pi_ {t, K} ^ {1} (s), \pi_ {t, K} ^ {2} (s)) \\ + 2 \gamma \| v _ {t} ^ {1} + v _ {t} ^ 
{2} \| _ {\infty} + 3 \tau \log (A _ {\max }), \\ \end{array}
$$

where the last line follows from Eq. (33). Using the previous inequality in Eq. (35), we obtain

$$
\begin{array}{l} \left\| v _ {t + 1} ^ {i} - v _ {*} ^ {i} \right\| _ {\infty} \leq \gamma \left\| v _ {t} ^ {i} - v _ {*} ^ {i} \right\| _ {\infty} + \max _ {s \in \mathcal {S}} \left\| \mathcal {T} ^ {i} (v _ {t} ^ {i}) (s) \pi_ {t, K} ^ {- i} (s) - q _ {t, K} ^ {i} (s) \right\| _ {\infty} \\ + 2 \max _ {s \in \mathcal {S}} V _ {v _ {t}, s} (\pi_ {t, K} ^ {1} (s), \pi_ {t, K} ^ {2} (s)) + 2 \gamma \| v _ {t} ^ {1} + v _ {t} ^ {2} \| _ {\infty} + 3 \tau \log (A _ {\max}). \\ \end{array}
$$

Summing up both sides of the previous inequality for $i \in \{1, 2\}$ , we have

$$
\mathcal {L} _ {v} (t + 1) \leq \gamma \mathcal {L} _ {v} (t) + 4 \mathcal {L} _ {\mathrm {sum}} (t) + 4 \mathcal {L} _ {\pi} (t, K) + 6 \tau \log \left(A _ {\max }\right)
$$

$$
+ \sum_ {i = 1, 2} \max _ {s \in \mathcal {S}} \left\| \mathcal {T} ^ {i} \left(v _ {t} ^ {i}\right) (s) \pi_ {t, K} ^ {- i} (s) - q _ {t, K} ^ {i} (s) \right\| _ {\infty}.
$$

To bound the last term on the RHS of the previous inequality, observe that

$$
\begin{array}{l} \sum_ {i = 1, 2} \max _ {s \in \mathcal {S}} \left\| \mathcal {T} ^ {i} (v _ {t} ^ {i}) (s) \pi_ {t, K} ^ {- i} (s) - q _ {t, K} ^ {i} (s) \right\| _ {\infty} = \sum_ {i = 1, 2} \left\| \bar {q} _ {t, K} ^ {i} - q _ {t, K} ^ {i} \right\| _ {\infty} \\ \leq \sum_ {i = 1, 2} \left\| \bar {q} _ {t, K} ^ {i} - q _ {t, K} ^ {i} \right\| _ {2} \\ \leq \left(2 \sum_ {i = 1, 2} \left\| \bar {q} _ {t, K} ^ {i} - q _ {t, K} ^ {i} \right\| _ {2} ^ {2}\right) ^ {1 / 2} \quad (a ^ {2} + b ^ {2} \geq 2 a b) \\ \leq 2 \mathcal {L} _ {q} ^ {1 / 2} (t, K).
\tag {36} \\ \end{array}
$$

Therefore, we have

$$
\mathcal {L} _ {v} (t + 1) \leq \gamma \mathcal {L} _ {v} (t) + 4 \mathcal {L} _ {\mathrm {sum}} (t) + 2 \mathcal {L} _ {q} ^ {1 / 2} (t, K) + 4 \mathcal {L} _ {\pi} (t, K) + 6 \tau \log \left(A _ {\max }\right).
$$

The proof is complete.

We next establish a one-step Lyapunov drift inequality for $\mathcal{L}_{\mathrm{sum}}(t)$ in the following lemma.

Lemma D.6. It holds for all $t \geq 0$ that $\mathcal{L}_{sum}(t + 1) \leq \gamma \mathcal{L}_{sum}(t) + 2\mathcal{L}_q^{1/2}(t, K)$ .

Proof of Lemma D.6. Using the outer-loop update equation (cf. Algorithm 2 Line 8), we have for any $t \geq 0$ and $s \in S$ that

$$
\begin{array}{l} \left| v _ {t + 1} ^ {1} (s) + v _ {t + 1} ^ {2} (s) \right| = \left| \sum_ {i = 1, 2} \pi_ {t, K} ^ {i} (s) ^ {\top} q _ {t, K} ^ {i} (s) \right| \\ \leq \left| \sum_ {i = 1, 2} \pi_ {t, K} ^ {i} (s) ^ {\top} \left(q _ {t, K} ^ {i} (s) - \mathcal {T} ^ {i} \left(v _ {t} ^ {i}\right) (s) \pi_ {t, K} ^ {- i} (s)\right) \right| \\ + \left| \sum_ {i = 1, 2} \pi_ {t, K} ^ {i} (s) ^ {\top} \mathcal {T} ^ {i} \left(v _ {t} ^ {i}\right) (s) \pi_ {t, K} ^ {- i} (s) \right| \\ \leq \sum_ {i = 1, 2} \max _ {s \in \mathcal {S}} \| q _ {t, K} ^ {i} (s) - \mathcal {T} ^ {i} (v _ {t} ^ {i}) (s) \pi_ {t, K} ^ {- i} (s) \| _ {\infty} \\ + \max _ {(s, a ^ {i}, a ^ {- i})} \left| \mathcal {T} ^ {i} (v _ {t} ^ {i}) (s, a ^ {i}, a ^ {- i}) + \mathcal {T} ^ {- i} (v _ {t} ^ {- i}) (s, a ^ {i}, a ^ {- i}) \right| \\ \leq \sum_ {i = 1, 2} \max _ {s \in \mathcal {S}} \| q _ {t, K} ^ {i} (s) - \mathcal {T} ^ {i} (v _ {t} ^ {i}) (s) \pi_ {t, K} ^ {- i} (s) \| _ {\infty} + \gamma \| v _ {t} ^ {1} + v _ {t} ^ {2} \| _ {\infty}, \\ \end{array}
$$

where the last line follows from the definition of $\mathcal{T}^i (\cdot)$ and the zero-sum reward structure.
Since the RHS of the previous inequality does not depend on $s$ , we have

$$
\| v _ {t + 1} ^ {1} + v _ {t + 1} ^ {2} \| _ {\infty} \leq \gamma \| v _ {t} ^ {1} + v _ {t} ^ {2} \| _ {\infty} + \sum_ {i = 1, 2} \max _ {s \in \mathcal {S}} \| q _ {t, K} ^ {i} (s) - \mathcal {T} ^ {i} (v _ {t} ^ {i}) (s) \pi_ {t, K} ^ {- i} (s) \| _ {\infty}.
$$

The result follows from using Eq. (36) to bound the last term on the RHS of the previous inequality and then using $\mathcal{L}_{\mathrm{sum}}(t)$ and $\mathcal{L}_q(t,k)$ to simplify the notation.

# D.5 Analysis of the Inner Loop

In this section, we establish negative drift inequalities for the Lyapunov functions $\mathcal{L}_q(t,k)$ and $\mathcal{L}_{\pi}(t,k)$ , which are defined in terms of the $q$ -functions and the policies updated in the inner loop of Algorithm 2. For ease of presentation, we write down only the inner loop of Algorithm 2 in Algorithm 3, where we omit the subscript $t$ , which is used as the index for the outer loop. Similarly, we will write $\mathcal{L}_q(k)$ for $\mathcal{L}_q(t,k)$ and $\mathcal{L}_{\pi}(k)$ for $\mathcal{L}_{\pi}(t,k)$ . All results derived for the $q$ -functions and policies of Algorithm 3 can be directly combined with the analysis of the outer loop of Algorithm 2 using a conditioning argument together with the Markov property.

# Algorithm 3 Inner Loop of Algorithm 2: from Player $i$ 's Perspective

1: Input: Integer $K$ , initializations $q_0^i$ and $\pi_0^i$ , and a $v$ -function $v^i$ from the outer loop. Note that we have $\| q_0^i \|_{\infty} \leq \frac{1}{1 - \gamma}$ , $\| v^i \|_{\infty} \leq \frac{1}{1 - \gamma}$ , and $\min_{s,a^i} \pi_0^i(a^i \mid s) \geq \ell_\tau$ due to Lemma D.1.
+2: for $k = 0,1,\dots ,K - 1$ do +3: $\pi_{k + 1}^{i}(s) = \pi_{k}^{i}(s) + \beta_{k}(\sigma_{\tau}(q_{k}^{i}(s)) - \pi_{k}^{i}(s))$ for all $s\in S$ +4: Sample $A_{k}^{i}\sim \pi_{k + 1}^{i}(\cdot \mid S_{k})$ , and observe $S_{k + 1}\sim p(\cdot \mid S_k,A_k^i,A_k^{-i})$ +5: $q_{k + 1}^{i}(s,a^{i}) = q_{k}^{i}(s,a^{i}) + \alpha_{k}\mathbb{1}_{\{(S_{k},A_{k}^{i}) = (s,a^{i})\}}\left(R_{i}(S_{k},A_{k}^{i},A_{k}^{-i}) + \gamma v^{i}(S_{k + 1}) - q_{k}^{i}(S_{k},A_{k}^{i})\right)$ for all $(s,a^i)\in \mathcal{S}\times \mathcal{A}^i$ +6: end for +7: Output: $q_{K}^{i}$ and $\pi_{K}^{i}$ + +# D.5.1 Analysis of the Policies + +We consider $\{(\pi_k^1,\pi_k^2)\}_{k\geq 0}$ generated by Algorithm 3 and use $V_{X}(\cdot ,\cdot)$ defined in Appendix D.1 as the Lyapunov function to study them. For simplicity of notation, we use $\nabla_1V_X(\cdot ,\cdot)$ (respectively, $\nabla_2V_X(\cdot ,\cdot))$ to denote the gradient with respect to the first argument (respectively, the second argument). Recall that Lemma D.1 implies that $\pi_k = (\pi_k^1,\pi_k^2)\in \Pi_\tau$ for all $k\geq 0$ , where $\Pi_{\tau} = \{(\pi^{1},\pi^{2})\in \Delta (\mathcal{A}^{1})\times \Delta (\mathcal{A}^{2})\mid \min_{a^{1}\in \mathcal{A}^{1}}\pi^{1}(a^{1})\geq \ell_{\tau},\min_{a^{2}\in \mathcal{A}^{2}}\pi^{2}(a^{2})\geq \ell_{\tau}\}$ . The following lemma establishes the properties of $V_{X}(\cdot ,\cdot)$ . + +Lemma D.7. The function $V_{X}(\cdot ,\cdot)$ has the following properties. + +(1) For $i\in \{1,2\}$ , fix $\mu^{-i}\in \Delta (\mathcal{A}^{-i})$ , the function $V_{X}(\mu^{i},\mu^{-i})$ as a function of $\mu^i$ is $\tau -\text{strongly}$ convex with respect to $\| \cdot \| _2$ . 
(2) $V_X(\cdot,\cdot)$ is $\tilde{L}_\tau$-smooth on $\Pi_\tau$, where $\tilde{L}_\tau = 2\left(\frac{\tau}{\ell_\tau} + \frac{\max(\|X_1\|_2^2, \|X_2\|_2^2)}{\tau} + \| X_1 + X_2^\top \|_2\right)$.

(3) It holds for any $(\mu^1, \mu^2) \in \Delta(\mathcal{A}^1) \times \Delta(\mathcal{A}^2)$ that

$$
\begin{array}{l}
\langle \nabla_1 V_X(\mu^1, \mu^2), \sigma_\tau(X_1 \mu^2) - \mu^1 \rangle + \langle \nabla_2 V_X(\mu^1, \mu^2), \sigma_\tau(X_2 \mu^1) - \mu^2 \rangle \\
\leq -\frac{7}{8} V_X(\mu^1, \mu^2) + \frac{16}{\tau} \| X_1 + X_2^\top \|_2^2. \\
\end{array}
$$

(4) For any $u^1 \in \mathbb{R}^{|\mathcal{A}^1|}$ and $u^2 \in \mathbb{R}^{|\mathcal{A}^2|}$, we have for all $(\mu^1, \mu^2) \in \Pi_\tau$ that

$$
\begin{array}{l}
\langle \nabla_1 V_X(\mu^1, \mu^2), \sigma_\tau(u^1) - \sigma_\tau(X_1 \mu^2) \rangle + \langle \nabla_2 V_X(\mu^1, \mu^2), \sigma_\tau(u^2) - \sigma_\tau(X_2 \mu^1) \rangle \\
\leq \frac{1}{8} V_X(\mu^1, \mu^2) + \frac{8}{\tau} \left( \frac{1}{\ell_\tau} + \frac{\max(\|X_1\|_2, \|X_2\|_2)}{\tau} \right)^2 \sum_{i=1,2} \| u^i - X_i \mu^{-i} \|_2^2. \\
\end{array}
$$

Proof of Lemma D.7. To begin with, we have by Danskin's theorem [95] that

$$
\nabla_1 V_X(\mu^1, \mu^2) = -\left( X_1 + X_2^\top \right) \mu^2 - \tau \nabla \nu(\mu^1) + X_2^\top \sigma_\tau(X_2 \mu^1). \tag{37}
$$

A similar formula holds for $\nabla_2 V_X(\mu^1, \mu^2)$.

(1) It is clear that the function $V_X(\cdot,\cdot)$ is non-negative. The strong convexity follows from the following two observations.
(i) The negative entropy $-\nu(\cdot)$ is $1$-strongly convex with respect to $\| \cdot \|_2$ [97, Example 5.27].

(ii) Given $i \in \{1,2\}$, the function $\max_{\hat{\mu}^{-i} \in \Delta(\mathcal{A}^{-i})} \left\{ (\hat{\mu}^{-i})^\top X_{-i} \mu^i + \tau \nu(\hat{\mu}^{-i}) \right\}$, as a function of $\mu^i$, is the maximum of linear functions in $\mu^i$, and therefore is convex.

It follows that, for any $i \in \{1,2\}$, the function $V_X(\mu^1, \mu^2)$ is $\tau$-strongly convex in $\mu^i$ with respect to $\| \cdot \|_2$, uniformly for all $\mu^{-i}$.

(2) For any $(\mu^1, \mu^2), (\bar{\mu}^1, \bar{\mu}^2) \in \Pi_\tau$, we have by Eq. (37) that

$$
\begin{array}{l}
\left\| \nabla_1 V_X(\mu^1, \mu^2) - \nabla_1 V_X(\bar{\mu}^1, \bar{\mu}^2) \right\|_2 \\
= \left\| (X_1 + X_2^\top)(\mu^2 - \bar{\mu}^2) + \tau (\nabla \nu(\mu^1) - \nabla \nu(\bar{\mu}^1)) + X_2^\top (\sigma_\tau(X_2 \bar{\mu}^1) - \sigma_\tau(X_2 \mu^1)) \right\|_2 \\
\leq \left\| X_1 + X_2^\top \right\|_2 \left\| \mu^2 - \bar{\mu}^2 \right\|_2 + \left( \frac{\tau}{\ell_\tau} + \frac{\|X_2\|_2^2}{\tau} \right) \left\| \bar{\mu}^1 - \mu^1 \right\|_2, \tag{38} \\
\end{array}
$$

where Eq. (38) follows from Lemma C.5 and the Lipschitz continuity of $\sigma_\tau(\cdot)$ [96]. Similarly, we also have

$$
\begin{array}{l}
\left\| \nabla_2 V_X(\mu^1, \mu^2) - \nabla_2 V_X(\bar{\mu}^1, \bar{\mu}^2) \right\|_2 \\
\leq \| X_2 + X_1^\top \|_2 \| \mu^1 - \bar{\mu}^1 \|_2 + \left( \frac{\tau}{\ell_\tau} + \frac{\|X_1\|_2^2}{\tau} \right) \| \bar{\mu}^2 - \mu^2 \|_2. \\
\end{array}
$$

Using the previous two inequalities, we have the following result for the full gradient of $V_X(\cdot,\cdot)$:

$$
\begin{array}{l}
\left\| \nabla V_X(\mu^1, \mu^2) - \nabla V_X(\bar{\mu}^1, \bar{\mu}^2) \right\|_2^2 \\
= \left\| \nabla_1 V_X(\mu^1, \mu^2) - \nabla_1 V_X(\bar{\mu}^1, \bar{\mu}^2) \right\|_2^2 + \left\| \nabla_2 V_X(\mu^1, \mu^2) - \nabla_2 V_X(\bar{\mu}^1, \bar{\mu}^2) \right\|_2^2 \\
\leq \sum_{i=1,2} \left[ 2 \left( \frac{\tau}{\ell_\tau} + \frac{\|X_{-i}\|_2^2}{\tau} \right)^2 \| \bar{\mu}^i - \mu^i \|_2^2 + 2 \| X_i + X_{-i}^\top \|_2^2 \| \mu^{-i} - \bar{\mu}^{-i} \|_2^2 \right] \quad ((a+b)^2 \leq 2a^2 + 2b^2) \\
\leq 2 \left[ \left( \frac{\tau}{\ell_\tau} + \frac{\max(\|X_1\|_2^2, \|X_2\|_2^2)}{\tau} \right)^2 + \| X_1 + X_2^\top \|_2^2 \right] \sum_{i=1,2} \| \bar{\mu}^i - \mu^i \|_2^2. \\
\end{array}
$$

The previous inequality implies that $V_X(\cdot,\cdot)$ is an $\tilde{L}_\tau$-smooth function on $\Pi_\tau$ [97], where

$$
\tilde{L}_\tau = 2 \left( \frac{\tau}{\ell_\tau} + \frac{\max(\|X_1\|_2^2, \|X_2\|_2^2)}{\tau} + \| X_1 + X_2^\top \|_2 \right).
$$

(3) Using the formula for the gradient of $V_X(\cdot,\cdot)$ in Eq.
(37), we have + +$$ +\begin{array}{l} \left\langle \nabla_ {1} V _ {X} \left(\mu^ {1}, \mu^ {2}\right), \sigma_ {\tau} \left(X _ {1} \mu^ {2}\right) - \mu^ {1} \right\rangle \\ = \left\langle - \left(X _ {1} + X _ {2} ^ {\top}\right) \mu^ {2} - \tau \nabla \nu \left(\mu^ {1}\right) + X _ {2} ^ {\top} \sigma_ {\tau} \left(X _ {2} \mu^ {1}\right), \sigma_ {\tau} \left(X _ {1} \mu^ {2}\right) - \mu^ {1} \right\rangle \\ = \left\langle - \left(X _ {1} + X _ {2} ^ {\top}\right) \mu^ {2} - \tau \nabla \nu \left(\mu^ {1}\right) + X _ {2} ^ {\top} \sigma_ {\tau} \left(X _ {2} \mu^ {1}\right), \sigma_ {\tau} \left(X _ {1} \mu^ {2}\right) - \mu^ {1} \right\rangle \\ + \left\langle X _ {1} \mu^ {2} + \tau \nabla \nu \left(\sigma_ {\tau} \left(X _ {1} \mu^ {2}\right)\right), \sigma_ {\tau} \left(X _ {1} \mu^ {2}\right) - \mu^ {1} \right\rangle \tag {39} \\ = \tau \left\langle \nabla \nu \left(\sigma_ {\tau} \left(X _ {1} \mu^ {2}\right)\right) - \nabla \nu \left(\mu^ {1}\right), \sigma_ {\tau} \left(X _ {1} \mu^ {2}\right) - \mu^ {1} \right\rangle \\ + \left(\sigma_ {\tau} \left(X _ {2} \mu^ {1}\right) - \mu^ {2}\right) ^ {\top} X _ {2} \left(\sigma_ {\tau} \left(X _ {1} \mu^ {2}\right) - \mu^ {1}\right), \\ \end{array} +$$ + +where Eq. (39) is due to the first order optimality condition $X_{1}\mu^{2} + \tau \nabla \nu (\sigma_{\tau}(X_1\mu^2)) = 0$ . 
To proceed, observe that the concavity of $\nu(\cdot)$ and the optimality condition $X_1 \mu^2 + \tau \nabla \nu(\sigma_\tau(X_1 \mu^2)) = 0$ together imply that

$$
\begin{array}{l}
\left\langle \nabla \nu(\sigma_\tau(X_1 \mu^2)) - \nabla \nu(\mu^1), \sigma_\tau(X_1 \mu^2) - \mu^1 \right\rangle \\
= \left\langle \nabla \nu(\mu^1) - \nabla \nu(\sigma_\tau(X_1 \mu^2)), \mu^1 - \sigma_\tau(X_1 \mu^2) \right\rangle \\
= \left\langle \nabla \nu(\mu^1), \mu^1 - \sigma_\tau(X_1 \mu^2) \right\rangle - \left\langle \nabla \nu(\sigma_\tau(X_1 \mu^2)), \mu^1 - \sigma_\tau(X_1 \mu^2) \right\rangle \\
\leq \nu(\mu^1) - \nu(\sigma_\tau(X_1 \mu^2)) - \left\langle \nabla \nu(\sigma_\tau(X_1 \mu^2)), \mu^1 - \sigma_\tau(X_1 \mu^2) \right\rangle \quad (\text{concavity of } \nu(\cdot)) \\
= \nu(\mu^1) - \nu(\sigma_\tau(X_1 \mu^2)) + \frac{1}{\tau} \langle X_1 \mu^2, \mu^1 - \sigma_\tau(X_1 \mu^2) \rangle \\
= \frac{1}{\tau} \left[ (\mu^1)^\top X_1 \mu^2 + \tau \nu(\mu^1) - \max_{\hat{\mu}^1 \in \Delta(\mathcal{A}^1)} \left\{ (\hat{\mu}^1)^\top X_1 \mu^2 + \tau \nu(\hat{\mu}^1) \right\} \right].
\\ \end{array}
$$

Therefore, we have from the previous two inequalities that

$$
\langle \nabla_1 V_X(\mu^1, \mu^2), \sigma_\tau(X_1 \mu^2) - \mu^1 \rangle
$$

$$
\begin{array}{l}
\leq (\mu^1)^\top X_1 \mu^2 + \tau \nu(\mu^1) - \max_{\hat{\mu}^1 \in \Delta(\mathcal{A}^1)} \left\{ (\hat{\mu}^1)^\top X_1 \mu^2 + \tau \nu(\hat{\mu}^1) \right\} \\
+ \left( \sigma_\tau(X_2 \mu^1) - \mu^2 \right)^\top X_2 \left( \sigma_\tau(X_1 \mu^2) - \mu^1 \right). \\
\end{array}
$$

Similarly, we also have

$$
\begin{array}{l}
\langle \nabla_2 V_X(\mu^1, \mu^2), \sigma_\tau(X_2 \mu^1) - \mu^2 \rangle \\
\leq (\mu^2)^\top X_2 \mu^1 + \tau \nu(\mu^2) - \max_{\hat{\mu}^2 \in \Delta(\mathcal{A}^2)} \left\{ (\hat{\mu}^2)^\top X_2 \mu^1 + \tau \nu(\hat{\mu}^2) \right\} \\
+ \left( \sigma_\tau(X_1 \mu^2) - \mu^1 \right)^\top X_1 \left( \sigma_\tau(X_2 \mu^1) - \mu^2 \right). \\
\end{array}
$$

Adding up the previous two inequalities, we obtain

$$
\begin{array}{l}
\langle \nabla_1 V_X(\mu^1, \mu^2), \sigma_\tau(X_1 \mu^2) - \mu^1 \rangle + \langle \nabla_2 V_X(\mu^1, \mu^2), \sigma_\tau(X_2 \mu^1) - \mu^2 \rangle \\
\leq -V_X(\mu^1, \mu^2) + (\sigma_\tau(X_1 \mu^2) - \mu^1)^\top (X_1 + X_2^\top)(\sigma_\tau(X_2 \mu^1) - \mu^2) \\
\leq -V_X(\mu^1, \mu^2) + \left\| \sigma_\tau(X_1 \mu^2) - \mu^1 \right\|_2 \left\| X_1 + X_2^\top \right\|_2 \left\| \sigma_\tau(X_2 \mu^1) - \mu^2 \right\|_2 \\
\leq -V_X(\mu^1, \mu^2) + 2 \| \sigma_\tau(X_1 \mu^2) - \mu^1 \|_2 \| X_1 + X_2^\top \|_2, \tag{40} \\
\end{array}
$$

where the last line follows from $\| \sigma_\tau(X_2 \mu^1) - \mu^2 \|_2 \leq \| \sigma_\tau(X_2 \mu^1) \|_1 + \| \mu^2 \|_1 \leq 2$. Using Lemma D.7 (1) together with the quadratic growth property of strongly convex functions, we have

$$
\left\| \sigma_\tau(X_1 \mu^2) - \mu^1 \right\|_2 \leq \frac{\sqrt{2}}{\sqrt{\tau}} V_X(\mu^1, \mu^2)^{1/2}.
$$

It follows that

$$
\begin{array}{l}
\left\langle \nabla_1 V_X(\mu^1, \mu^2), \sigma_\tau(X_1 \mu^2) - \mu^1 \right\rangle + \left\langle \nabla_2 V_X(\mu^1, \mu^2), \sigma_\tau(X_2 \mu^1) - \mu^2 \right\rangle \\
\leq -V_X(\mu^1, \mu^2) + 2 \left\| \sigma_\tau(X_1 \mu^2) - \mu^1 \right\|_2 \| X_1 + X_2^\top \|_2 \\
\leq -V_X(\mu^1, \mu^2) + \frac{2\sqrt{2}}{\sqrt{\tau}} V_X(\mu^1, \mu^2)^{1/2} \| X_1 + X_2^\top \|_2 \\
\leq -\frac{7}{8} V_X(\mu^1, \mu^2) + \frac{16}{\tau} \| X_1 + X_2^\top \|_2^2, \\
\end{array}
$$

where the last line follows from $a^2 + b^2 \geq 2ab$.

(4) For any $u^1 \in \mathbb{R}^{|\mathcal{A}^1|}$, using the formula of the gradient of $V_X(\cdot,\cdot)$ from Eq. (37), we have

$$
\begin{array}{l}
\left\langle \nabla_1 V_X(\mu^1, \mu^2), \sigma_\tau(u^1) - \sigma_\tau(X_1 \mu^2) \right\rangle \\
= \left\langle -(X_1 + X_2^\top)\mu^2 - \tau \nabla \nu(\mu^1) + X_2^\top \sigma_\tau(X_2 \mu^1), \sigma_\tau(u^1) - \sigma_\tau(X_1 \mu^2) \right\rangle \\
+ \left\langle X_1 \mu^2 + \tau \nabla \nu(\sigma_\tau(X_1 \mu^2)), \sigma_\tau(u^1) - \sigma_\tau(X_1 \mu^2) \right\rangle \quad (\text{first-order optimality condition}) \\
= \tau \left\langle \nabla \nu(\sigma_\tau(X_1 \mu^2)) - \nabla \nu(\mu^1), \sigma_\tau(u^1) - \sigma_\tau(X_1 \mu^2) \right\rangle \\
+ \left( \sigma_\tau(X_2 \mu^1) - \mu^2 \right)^\top X_2 \left( \sigma_\tau(u^1) - \sigma_\tau(X_1 \mu^2) \right) \\
\leq \left( \tau \| \nabla \nu(\sigma_\tau(X_1 \mu^2)) - \nabla \nu(\mu^1) \|_2 + \| \sigma_\tau(X_2 \mu^1) - \mu^2 \|_2 \| X_2 \|_2 \right) \| \sigma_\tau(u^1) - \sigma_\tau(X_1 \mu^2) \|_2 \\
\leq \left( \frac{\tau}{\ell_\tau} \| \sigma_\tau(X_1 \mu^2) - \mu^1 \|_2 + \| \sigma_\tau(X_2 \mu^1) - \mu^2 \|_2 \| X_2 \|_2 \right) \frac{1}{\tau} \| u^1 - X_1 \mu^2 \|_2 \\
\leq \frac{\sqrt{2}}{\sqrt{\tau}} \left( \frac{1}{\ell_\tau} + \frac{\|X_2\|_2}{\tau} \right) V_X(\mu^1, \mu^2)^{1/2} \| u^1 - X_1 \mu^2 \|_2 \\
\leq \frac{1}{16} V_X(\mu^1, \mu^2) + \frac{8}{\tau} \left( \frac{1}{\ell_\tau} + \frac{\|X_2\|_2}{\tau} \right)^2 \| u^1 - X_1 \mu^2 \|_2^2, \\
\end{array}
$$

where the third-to-last inequality follows from the $\frac{1}{\ell_\tau}$-smoothness of $\nu(\cdot)$ on $\Pi_\tau$, the second-to-last inequality follows from Lemma D.7 (1) together with the quadratic growth property of strongly convex functions, and the last inequality follows from $a^2 + b^2 \geq 2ab$.
Similarly, we also have for any $u^2 \in \mathbb{R}^{|\mathcal{A}^2|}$ that

$$
\begin{array}{l}
\left\langle \nabla_2 V_X(\mu^1, \mu^2), \sigma_\tau(u^2) - \sigma_\tau(X_2 \mu^1) \right\rangle \\
\leq \frac{1}{16} V_X(\mu^1, \mu^2) + \frac{8}{\tau} \left( \frac{1}{\ell_\tau} + \frac{\|X_1\|_2}{\tau} \right)^2 \| u^2 - X_2 \mu^1 \|_2^2. \\
\end{array}
$$

Adding up the previous two inequalities, we obtain

$$
\begin{array}{l}
\langle \nabla_1 V_X(\mu^1, \mu^2), \sigma_\tau(u^1) - \sigma_\tau(X_1 \mu^2) \rangle + \langle \nabla_2 V_X(\mu^1, \mu^2), \sigma_\tau(u^2) - \sigma_\tau(X_2 \mu^1) \rangle \\
\leq \frac{1}{8} V_X(\mu^1, \mu^2) + \frac{8}{\tau} \left( \frac{1}{\ell_\tau} + \frac{\max(\|X_1\|_2, \|X_2\|_2)}{\tau} \right)^2 \sum_{i=1,2} \| u^i - X_i \mu^{-i} \|_2^2. \\
\end{array}
$$

The proof is complete.

With the properties of $V_X(\cdot,\cdot)$ established in Lemma D.7, we now use it as a Lyapunov function to study $(\pi_k^1, \pi_k^2)$ generated by Algorithm 3. Specifically, using the smoothness of $V_X(\cdot,\cdot)$ (cf. Lemma D.7 (2)), the update equation in Algorithm 3 Line 3, and Lemma D.7 (3) and (4), we obtain the desired one-step Lyapunov drift inequality for $\mathcal{L}_\pi(k)$, which is presented in the following lemma.

Lemma D.8.
The following inequality holds for all $k \geq 0$:

$$
\begin{array}{l}
\mathcal{L}_\pi(k+1) \leq \left( 1 - \frac{3\beta_k}{4} \right) \mathcal{L}_\pi(k) + \frac{16 A_{\max}^2 \beta_k}{\tau} \| v^1 + v^2 \|_\infty^2 \\
+ \frac{32 A_{\max}^2 \beta_k}{\tau^3 \ell_\tau^2 (1-\gamma)^2} \mathcal{L}_q(k) + 2 L_\tau \beta_k^2. \\
\end{array}
$$

Proof of Lemma D.8. We will use $V_{v,s}(\cdot,\cdot)$ (see Lemma D.1) as the Lyapunov function to study the evolution of $(\pi_k^1(s), \pi_k^2(s))$. To begin with, we identify the smoothness parameter of $V_{v,s}(\cdot,\cdot)$. Using Lemma D.7 (2) and the definition of $V_{v,s}(\cdot,\cdot)$, we have

$$
\tilde{L}_\tau = 2 \left( \frac{\tau}{\ell_\tau} + \frac{\max(\|X_1\|_2^2, \|X_2\|_2^2)}{\tau} + \left\| X_1 + X_2^\top \right\|_2 \right)
$$

$$
= 2 \left( \frac{\tau}{\ell_\tau} + \frac{\max(\| \mathcal{T}^1(v^1)(s) \|_2^2, \| \mathcal{T}^2(v^2)(s) \|_2^2)}{\tau} + \| \mathcal{T}^1(v^1)(s) + \mathcal{T}^2(v^2)(s)^\top \|_2 \right)
$$

$$
\leq 2 \left( \frac{\tau}{\ell_\tau} + \frac{A_{\max}^2}{\tau(1-\gamma)^2} + \frac{2 A_{\max}}{1-\gamma} \right)
$$

(this follows from $|\mathcal{T}^i(v^i)(s, a^i, a^{-i})| \leq \frac{1}{1-\gamma}$ for all $(s, a^i, a^{-i})$ and $i \in \{1,2\}$)

$$
:= L_\tau.
$$

Therefore, $V_{v,s}(\cdot,\cdot)$ is an $L_\tau$-smooth function on $\Pi_\tau$. Using the smoothness of $V_{v,s}(\cdot,\cdot)$, for any $s \in \mathcal{S}$, we have by the policy update equation (cf.
Algorithm 3 Line 3) that

$$
\begin{array}{l}
V_{v,s}(\pi_{k+1}^1(s), \pi_{k+1}^2(s)) \\
\leq V_{v,s}(\pi_k^1(s), \pi_k^2(s)) + \beta_k \left\langle \nabla_2 V_{v,s}(\pi_k^1(s), \pi_k^2(s)), \sigma_\tau(q_k^2(s)) - \pi_k^2(s) \right\rangle \\
+ \beta_k \langle \nabla_1 V_{v,s}(\pi_k^1(s), \pi_k^2(s)), \sigma_\tau(q_k^1(s)) - \pi_k^1(s) \rangle + \frac{L_\tau \beta_k^2}{2} \sum_{i=1,2} \| \sigma_\tau(q_k^i(s)) - \pi_k^i(s) \|_2^2 \\
\leq V_{v,s}(\pi_k^1(s), \pi_k^2(s)) + \beta_k \left\langle \nabla_2 V_{v,s}(\pi_k^1(s), \pi_k^2(s)), \sigma_\tau(\mathcal{T}^2(v^2)(s) \pi_k^1(s)) - \pi_k^2(s) \right\rangle \\
+ \beta_k \langle \nabla_1 V_{v,s}(\pi_k^1(s), \pi_k^2(s)), \sigma_\tau(\mathcal{T}^1(v^1)(s) \pi_k^2(s)) - \pi_k^1(s) \rangle \\
+ \beta_k \langle \nabla_2 V_{v,s}(\pi_k^1(s), \pi_k^2(s)), \sigma_\tau(q_k^2(s)) - \sigma_\tau(\mathcal{T}^2(v^2)(s) \pi_k^1(s)) \rangle \\
+ \beta_k \langle \nabla_1 V_{v,s}(\pi_k^1(s), \pi_k^2(s)), \sigma_\tau(q_k^1(s)) - \sigma_\tau(\mathcal{T}^1(v^1)(s) \pi_k^2(s)) \rangle + 2 L_\tau \beta_k^2 \\
\leq \left( 1 - \frac{3\beta_k}{4} \right) V_{v,s}(\pi_k^1(s), \pi_k^2(s)) + \frac{16 \beta_k}{\tau} \| \mathcal{T}^1(v^1)(s) + \mathcal{T}^2(v^2)(s)^\top \|_2^2 \\
+ \frac{8\beta_k}{\tau} \left( \frac{1}{\ell_\tau} + \frac{\max_{i \in \{1,2\}} \| \mathcal{T}^i(v^i)(s) \|_2}{\tau} \right)^2 \sum_{i=1,2} \| q_k^i(s) - \mathcal{T}^i(v^i)(s) \pi_k^{-i}(s) \|_2^2 + 2 L_\tau \beta_k^2, \\
\end{array}
$$

where the last line follows from Lemma D.7 (3) and (4). Since $\max_{i \in \{1,2\}} \| \mathcal{T}^i(v^i)(s) \|_2 \leq \frac{A_{\max}}{1-\gamma}$ and

$$
\left\| \mathcal{T}^1(v^1)(s) + \mathcal{T}^2(v^2)(s)^\top \right\|_2^2 \leq A_{\max}^2 \| v^1 + v^2 \|_\infty^2,
$$

we have

$$
\begin{array}{l}
V_{v,s}(\pi_{k+1}^1(s), \pi_{k+1}^2(s)) \\
\leq \left( 1 - \frac{3\beta_k}{4} \right) V_{v,s}(\pi_k^1(s), \pi_k^2(s)) + \frac{16 \beta_k A_{\max}^2}{\tau} \| v^1 + v^2 \|_\infty^2 \\
+ \frac{8\beta_k}{\tau} \left( \frac{1}{\ell_\tau} + \frac{A_{\max}}{\tau(1-\gamma)} \right)^2 \sum_{i=1,2} \| q_k^i(s) - \mathcal{T}^i(v^i)(s) \pi_k^{-i}(s) \|_2^2 + 2 L_\tau \beta_k^2 \\
\leq \left( 1 - \frac{3\beta_k}{4} \right) \max_{s \in \mathcal{S}} V_{v,s}(\pi_k^1(s), \pi_k^2(s)) + \frac{16 \beta_k A_{\max}^2}{\tau} \| v^1 + v^2 \|_\infty^2 \\
+ \frac{8\beta_k}{\tau} \left( \frac{1}{\ell_\tau} + \frac{A_{\max}}{\tau(1-\gamma)} \right)^2 \sum_{i=1,2} \sum_{s \in \mathcal{S}} \| q_k^i(s) - \mathcal{T}^i(v^i)(s) \pi_k^{-i}(s) \|_2^2 + 2 L_\tau \beta_k^2 \\
= \left( 1 - \frac{3\beta_k}{4} \right) \mathcal{L}_\pi(k) + \frac{16 \beta_k A_{\max}^2}{\tau} \| v^1 + v^2 \|_\infty^2 + \frac{8\beta_k}{\tau} \left( \frac{1}{\ell_\tau} + \frac{A_{\max}}{\tau(1-\gamma)} \right)^2 \mathcal{L}_q(k) + 2 L_\tau \beta_k^2. \\
\end{array}
$$

Since the RHS of the previous inequality does not depend on $s$, we have

$$
\begin{array}{l}
\mathcal{L}_\pi(k+1) \leq \left( 1 - \frac{3\beta_k}{4} \right) \mathcal{L}_\pi(k) + \frac{16 \beta_k A_{\max}^2}{\tau} \| v^1 + v^2 \|_\infty^2 \\
+ \frac{8\beta_k}{\tau} \left( \frac{1}{\ell_\tau} + \frac{A_{\max}}{\tau(1-\gamma)} \right)^2 \mathcal{L}_q(k) + 2 L_\tau \beta_k^2 \\
\leq \left( 1 - \frac{3\beta_k}{4} \right) \mathcal{L}_\pi(k) + \frac{16 \beta_k A_{\max}^2}{\tau} \| v^1 + v^2 \|_\infty^2 + \frac{32 A_{\max}^2 \beta_k}{\tau^3 \ell_\tau^2 (1-\gamma)^2} \mathcal{L}_q(k) + 2 L_\tau \beta_k^2, \\
\end{array}
$$

where the last line follows from $\tau \leq 1/(1-\gamma)$.

# D.5.2 Analysis of the $q$-Functions

In this section, we consider $q_k^i$ generated by Algorithm 3. We begin by reformulating the update of the $q$-function as a stochastic approximation algorithm for estimating a time-varying target. For $i \in \{1,2\}$, fixing $v^i \in \mathbb{R}^{|\mathcal{S}|}$, let $F^i : \mathbb{R}^{|\mathcal{S}||\mathcal{A}^i|} \times \mathcal{S} \times \mathcal{A}^i \times \mathcal{A}^{-i} \times \mathcal{S} \mapsto \mathbb{R}^{|\mathcal{S}||\mathcal{A}^i|}$ be defined as

$$
[F^i(q^i, s_0, a_0^i, a_0^{-i}, s_1)](s, a^i) = \mathbb{1}_{\{(s, a^i) = (s_0, a_0^i)\}} \left( R_i(s_0, a_0^i, a_0^{-i}) + \gamma v^i(s_1) - q^i(s_0, a_0^i) \right)
$$

for all $(q^i, s_0, a_0^i, a_0^{-i}, s_1)$ and $(s, a^i)$.
Then Algorithm 3 Line 5 can be compactly written as

$$
q_{k+1}^i = q_k^i + \alpha_k F^i(q_k^i, S_k, A_k^i, A_k^{-i}, S_{k+1}). \tag{41}
$$

Denote the stationary distribution of the Markov chain $\{S_k\}$ induced by the joint policy $\pi_k = (\pi_k^1, \pi_k^2)$ by $\mu_k \in \Delta(\mathcal{S})$, the existence and uniqueness of which is guaranteed by Lemma D.1 and Lemma B.1 (1). Let $\bar{F}_k^i : \mathbb{R}^{|\mathcal{S}||\mathcal{A}^i|} \mapsto \mathbb{R}^{|\mathcal{S}||\mathcal{A}^i|}$ be defined as

$$
\bar{F}_k^i(q^i) = \mathbb{E}_{S_0 \sim \mu_k(\cdot), A_0^i \sim \pi_k^i(\cdot \mid S_0), A_0^{-i} \sim \pi_k^{-i}(\cdot \mid S_0), S_1 \sim p(\cdot \mid S_0, A_0^i, A_0^{-i})} \left[ F^i(q^i, S_0, A_0^i, A_0^{-i}, S_1) \right]
$$

for all $q^i \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}^i|}$. Then, Eq. (41) can be viewed as a stochastic approximation algorithm for solving the (time-varying) equation $\bar{F}_k^i(q^i) = 0$ with time-inhomogeneous Markovian noise $\{(S_k, A_k^i, A_k^{-i}, S_{k+1})\}_{k \geq 0}$. We next establish the properties of the operators $F^i(\cdot)$ and $\bar{F}_k^i(\cdot)$ in the following lemma.

Lemma D.9. The following properties hold for $i \in \{1,2\}$.

(1) It holds that $\| F^i(q_1^i, s_0, a_0^i, a_0^{-i}, s_1) - F^i(q_2^i, s_0, a_0^i, a_0^{-i}, s_1) \|_2 \leq \| q_1^i - q_2^i \|_2$ for any $(q_1^i, q_2^i)$ and $(s_0, a_0^i, a_0^{-i}, s_1)$.

(2) It holds that $\| F^i(0, s_0, a_0^i, a_0^{-i}, s_1) \|_2 \leq \frac{1}{1-\gamma}$ for all $(s_0, a_0^i, a_0^{-i}, s_1)$.

(3) $\bar{F}_k^i(q^i) = 0$ has a unique solution $\bar{q}_k^i$, which is given as $\bar{q}_k^i(s) = \mathcal{T}^i(v^i)(s) \pi_k^{-i}(s)$ for all $s$.
(4) It holds that $\langle \bar{F}_k^i(q_1^i) - \bar{F}_k^i(q_2^i), q_1^i - q_2^i \rangle \leq -c_\tau \| q_1^i - q_2^i \|_2^2$ for all $(q_1^i, q_2^i)$, where $c_\tau = \mu_{\min} \ell_\tau$. See Lemma B.1 for the definition of $\mu_{\min}$.

Proof of Lemma D.9. (1) For any $(q_1^i, q_2^i)$ and $(s_0, a_0^i, a_0^{-i}, s_1)$, we have

$$
\begin{array}{l}
\left\| F^i(q_1^i, s_0, a_0^i, a_0^{-i}, s_1) - F^i(q_2^i, s_0, a_0^i, a_0^{-i}, s_1) \right\|_2^2 \\
= \sum_{(s, a^i)} \left( [F^i(q_1^i, s_0, a_0^i, a_0^{-i}, s_1)](s, a^i) - [F^i(q_2^i, s_0, a_0^i, a_0^{-i}, s_1)](s, a^i) \right)^2 \\
= \left( q_1^i(s_0, a_0^i) - q_2^i(s_0, a_0^i) \right)^2 \\
\leq \left\| q_1^i - q_2^i \right\|_2^2. \\
\end{array}
$$

(2) For any $(s_0, a_0^i, a_0^{-i}, s_1)$, we have

$$
\begin{array}{l}
\| F^i(0, s_0, a_0^i, a_0^{-i}, s_1) \|_2^2 = \sum_{(s, a^i)} \left( [F^i(0, s_0, a_0^i, a_0^{-i}, s_1)](s, a^i) \right)^2 \\
= \left( R_i(s_0, a_0^i, a_0^{-i}) + \gamma v^i(s_1) \right)^2 \\
\leq \frac{1}{(1-\gamma)^2}, \\
\end{array}
$$

where the last line follows from $\| v^i \|_\infty \leq 1/(1-\gamma)$ and $|R_i(s_0, a_0^i, a_0^{-i})| \leq 1$.

(3) We first write down the operator $\bar{F}_k^i(\cdot)$ explicitly.
Using the definition of $\mathcal{T}^i(\cdot)$, we have

$$
\bar{F}_k^i(q^i)(s) = \mu_k(s) \operatorname{diag}(\pi_k^i(s)) \left( \mathcal{T}^i(v^i)(s) \pi_k^{-i}(s) - q^i(s) \right), \quad \forall s \in \mathcal{S}.
$$

Since $\mu_k(s) \geq \mu_{\min} > 0$ (cf. Lemma B.1 (4)) and $\operatorname{diag}(\pi_k^i(s))$ has strictly positive diagonal entries (cf. Lemma D.1) for all $s \in \mathcal{S}$ and $k \geq 0$, the equation $\bar{F}_k^i(q^i) = 0$ has a unique solution $\bar{q}_k^i \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}^i|}$, which is given as

$$
\bar{q}_k^i(s) = \mathcal{T}^i(v^i)(s) \pi_k^{-i}(s), \quad \forall s \in \mathcal{S}.
$$

(4) Using the expression of $\bar{F}_k^i(\cdot)$, we have for any $q_1^i, q_2^i \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}^i|}$ that

$$
\begin{array}{l}
\left( q_1^i - q_2^i \right)^\top \left( \bar{F}_k^i(q_1^i) - \bar{F}_k^i(q_2^i) \right) = -\sum_{s, a^i} \mu_k(s) \pi_k^i(a^i \mid s) \left( q_1^i(s, a^i) - q_2^i(s, a^i) \right)^2 \\
\leq -\min_{s, a^i} \mu_k(s) \pi_k^i(a^i \mid s) \| q_1^i - q_2^i \|_2^2 \\
\leq -\mu_{\min} \ell_\tau \| q_1^i - q_2^i \|_2^2 \quad (\text{Lemma B.1 and Lemma D.1}) \\
= -c_\tau \| q_1^i - q_2^i \|_2^2. \\
\end{array}
$$

The proof is complete.

Next, we establish a negative drift inequality for $\mathcal{L}_q(k)$. Using $\| \cdot \|_2^2$ as a Lyapunov function, we have by Eq.
(41) that

$$
\begin{array}{l}
\mathbb{E}[\| q_{k+1}^i - \bar{q}_{k+1}^i \|_2^2] = \mathbb{E}[\| q_{k+1}^i - q_k^i + q_k^i - \bar{q}_k^i + \bar{q}_k^i - \bar{q}_{k+1}^i \|_2^2] \\
= \mathbb{E}[\| q_k^i - \bar{q}_k^i \|_2^2] + \mathbb{E}[\| q_{k+1}^i - q_k^i \|_2^2] + \mathbb{E}[\| \bar{q}_k^i - \bar{q}_{k+1}^i \|_2^2] \\
+ 2\alpha_k \mathbb{E}\left[ (q_k^i - \bar{q}_k^i)^\top \bar{F}_k^i(q_k^i) \right] \\
+ 2\alpha_k \mathbb{E}\left[ \left( F^i(q_k^i, S_k, A_k^i, A_k^{-i}, S_{k+1}) - \bar{F}_k^i(q_k^i) \right)^\top (q_k^i - \bar{q}_k^i) \right] \\
+ 2\mathbb{E}\left[ (\bar{q}_k^i - \bar{q}_{k+1}^i)^\top (q_{k+1}^i - q_k^i) \right] \\
+ 2\mathbb{E}\left[ (q_k^i - \bar{q}_k^i)^\top (\bar{q}_k^i - \bar{q}_{k+1}^i) \right] \\
\leq (1 - 2\alpha_k c_\tau) \mathbb{E}\left[ \| q_k^i - \bar{q}_k^i \|_2^2 \right] + \mathbb{E}\left[ \| q_{k+1}^i - q_k^i \|_2^2 \right] + \mathbb{E}\left[ \| \bar{q}_k^i - \bar{q}_{k+1}^i \|_2^2 \right] \\
+ 2\mathbb{E}\left[ (\bar{q}_k^i - \bar{q}_{k+1}^i)^\top (q_{k+1}^i - q_k^i) \right] \\
+ 2\mathbb{E}\left[ (q_k^i - \bar{q}_k^i)^\top (\bar{q}_k^i - \bar{q}_{k+1}^i) \right] \\
+ 2\alpha_k \mathbb{E}\left[ \left( F^i(q_k^i, S_k, A_k^i, A_k^{-i}, S_{k+1}) - \bar{F}_k^i(q_k^i) \right)^\top (q_k^i - \bar{q}_k^i) \right], \tag{42} \\
\end{array}
$$

where the inequality follows from Lemma D.9 (4). The terms $\mathbb{E}[\| q_{k+1}^i - q_k^i \|_2^2]$, $\mathbb{E}[\| \bar{q}_k^i - \bar{q}_{k+1}^i \|_2^2]$, $\mathbb{E}[(\bar{q}_k^i - \bar{q}_{k+1}^i)^\top (q_{k+1}^i - q_k^i)]$, and $\mathbb{E}[(q_k^i - \bar{q}_k^i)^\top (\bar{q}_k^i - \bar{q}_{k+1}^i)]$ on the RHS of Eq. (42) are bounded in the following lemma.

Lemma D.10. The following inequalities hold for all $k \geq 0$.

(1) $\mathbb{E}[\| q_{k+1}^i - q_k^i \|_2^2] \leq \frac{4 |\mathcal{S}| A_{\max} \alpha_k^2}{(1-\gamma)^2}$.

(2) $\mathbb{E}[\| \bar{q}_k^i - \bar{q}_{k+1}^i \|_2^2] \leq \frac{4 |\mathcal{S}| A_{\max} \beta_k^2}{(1-\gamma)^2}$.

(3) $\mathbb{E}[\langle q_{k+1}^i - q_k^i, \bar{q}_k^i - \bar{q}_{k+1}^i \rangle] \leq \frac{4 |\mathcal{S}| A_{\max} \alpha_k \beta_k}{(1-\gamma)^2}$.

(4) $\mathbb{E}[\langle q_k^i - \bar{q}_k^i, \bar{q}_k^i - \bar{q}_{k+1}^i \rangle] \leq \frac{17 |\mathcal{S}| A_{\max}^2 \beta_k}{\tau (1-\gamma)^2} \mathbb{E}[\| q_k^i - \bar{q}_k^i \|_2^2] + \frac{\beta_k}{16} \mathbb{E}[\mathcal{L}_\pi(k)]$.

Proof of Lemma D.10. (1) For any $k \geq 0$, using Eq.
(41) and Lemma D.9 (1), we have

$$
\begin{array}{l}
\left\|q_{k+1}^i - q_k^i\right\|_2^2 = \alpha_k^2 \left\|F^i\left(q_k^i, S_k, A_k^i, A_k^{-i}, S_{k+1}\right)\right\|_2^2 \\
= \alpha_k^2 \|F^i\left(q_k^i, S_k, A_k^i, A_k^{-i}, S_{k+1}\right) - F^i\left(0, S_k, A_k^i, A_k^{-i}, S_{k+1}\right) \\
\quad + F^i\left(0, S_k, A_k^i, A_k^{-i}, S_{k+1}\right)\|_2^2 \\
\leq \alpha_k^2 \left(\|q_k^i\|_2 + \frac{1}{1-\gamma}\right)^2 \\
\leq \alpha_k^2 \left(\frac{\sqrt{|\mathcal{S}| A_{\max}}}{1-\gamma} + \frac{1}{1-\gamma}\right)^2 \quad \left(\|q_k^i\|_\infty \leq \frac{1}{1-\gamma} \text{ by Lemma D.1}\right) \\
\leq \frac{4 |\mathcal{S}| A_{\max} \alpha_k^2}{(1-\gamma)^2}.
\end{array}
$$

The result follows by taking expectation on both sides of the previous inequality.

(2) For any $k \geq 0$ , using the definition of $\bar{q}_k$ in Appendix D.1, we have by Lemma D.9 that

$$
\begin{array}{l}
\left\|\bar{q}_k^i - \bar{q}_{k+1}^i\right\|_2^2 = \sum_s \left\|\mathcal{T}^i(v^i)(s)\left(\pi_{k+1}^{-i}(s) - \pi_k^{-i}(s)\right)\right\|_2^2 \\
= \beta_k^2 \sum_s \|\mathcal{T}^i(v^i)(s)(\sigma_\tau(q_k^{-i}(s)) - \pi_k^{-i}(s))\|_2^2 \\
\leq \beta_k^2 \sum_s \left(\|\mathcal{T}^i(v^i)(s)\sigma_\tau(q_k^{-i}(s))\|_2 + \|\mathcal{T}^i(v^i)(s)\pi_k^{-i}(s)\|_2\right)^2 \\
\leq \frac{4 |\mathcal{S}| A_{\max} \beta_k^2}{(1-\gamma)^2}.
\end{array}
$$

The result follows by taking expectation on both sides of the previous inequality.
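The last step in parts (1) and (2) is the elementary inequality $(\sqrt{x}+1)^2 \leq 4x$ for $x \geq 1$, applied with $x = |\mathcal{S}|A_{\max} \geq 1$ (since $\sqrt{x} + 1 \leq 2\sqrt{x}$ once $x \geq 1$). A quick standalone numerical sanity check of this inequality, independent of the proof itself:

```python
import math

def elementary_bound_holds(x: float) -> bool:
    # Check (sqrt(x) + 1)^2 <= 4x, which holds whenever x >= 1
    # because sqrt(x) + 1 <= 2*sqrt(x) for x >= 1.
    return (math.sqrt(x) + 1) ** 2 <= 4 * x

# Holds on a grid of values x = |S| * A_max >= 1.
assert all(elementary_bound_holds(x) for x in [1, 1.5, 2, 10, 100, 1e6])
```

For $x < 1$ the inequality fails (e.g., $x = 1/4$ gives $(3/2)^2 = 9/4 > 1$), which is why $|\mathcal{S}|A_{\max} \geq 1$ matters.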
(3) For any $k \geq 0$ , we have

$$
\langle q_{k+1}^i - q_k^i, \bar{q}_k^i - \bar{q}_{k+1}^i \rangle \leq \|q_{k+1}^i - q_k^i\|_2 \|\bar{q}_k^i - \bar{q}_{k+1}^i\|_2 \leq \frac{4 |\mathcal{S}| A_{\max} \alpha_k \beta_k}{(1-\gamma)^2},
$$

where the last inequality follows from Part (1) and Part (2) of this lemma. The result follows by taking expectation on both sides of the previous inequality.

(4) For any $k \geq 0$ , we have

$$
\begin{array}{l}
\left\langle q_k^i - \bar{q}_k^i, \bar{q}_k^i - \bar{q}_{k+1}^i \right\rangle \\
= \beta_k \sum_s \left\langle q_k^i(s) - \bar{q}_k^i(s), \mathcal{T}^i\left(v^i\right)(s)\left(\sigma_\tau\left(q_k^{-i}(s)\right) - \pi_k^{-i}(s)\right)\right\rangle \\
\leq \frac{c_1 \beta_k \|q_k^i - \bar{q}_k^i\|_2^2}{2} + \frac{\beta_k \sum_s \|\mathcal{T}^i(v^i)(s)\left(\sigma_\tau\left(q_k^{-i}(s)\right) - \pi_k^{-i}(s)\right)\|_2^2}{2 c_1}, \tag{43}
\end{array}
$$

where $c_1 > 0$ is an arbitrary positive real number. We next bound the second term on the RHS of the previous inequality.
For any $s \in \mathcal{S}$ , we have

$$
\begin{array}{l}
\|\mathcal{T}^i(v^i)(s)\left(\sigma_\tau\left(q_k^{-i}(s)\right) - \pi_k^{-i}(s)\right)\|_2 \\
= \|\mathcal{T}^i(v^i)(s)(\sigma_\tau(q_k^{-i}(s)) - \sigma_\tau(\bar{q}_k^{-i}(s)) + \sigma_\tau(\mathcal{T}^{-i}(v^{-i})(s)\pi_k^i(s)) - \pi_k^{-i}(s))\|_2 \\
\leq \underbrace{\|\mathcal{T}^i(v^i)(s)(\sigma_\tau(q_k^{-i}(s)) - \sigma_\tau(\bar{q}_k^{-i}(s)))\|_2}_{B_1} \\
\quad + \underbrace{\|\mathcal{T}^i(v^i)(s)(\sigma_\tau(\mathcal{T}^{-i}(v^{-i})(s)\pi_k^i(s)) - \pi_k^{-i}(s))\|_2}_{B_2}.
\end{array}
$$

Since the softmax operator $\sigma_\tau(\cdot)$ is $\frac{1}{\tau}$ -Lipschitz continuous with respect to $\|\cdot\|_2$ [96, Proposition 4], we have

$$
\begin{array}{l}
B_1 \leq \|\mathcal{T}^i(v^i)(s)\|_2 \|\sigma_\tau\left(q_k^{-i}(s)\right) - \sigma_\tau\left(\bar{q}_k^{-i}(s)\right)\|_2 \\
\leq \frac{A_{\max}}{\tau(1-\gamma)} \|q_k^{-i}(s) - \bar{q}_k^{-i}(s)\|_2.
\end{array}
$$

We next analyze the term $B_2$ . Using Lemma D.7 (1) and the quadratic growth property of strongly convex functions, we have

$$
\begin{array}{l}
B_2 = \left\|\mathcal{T}^i\left(v^i\right)(s)\left(\sigma_\tau\left(\mathcal{T}^{-i}\left(v^{-i}\right)(s)\pi_k^i(s)\right) - \pi_k^{-i}(s)\right)\right\|_2 \\
\leq \left\|\mathcal{T}^i\left(v^i\right)(s)\right\|_2 \left\|\sigma_\tau\left(\mathcal{T}^{-i}\left(v^{-i}\right)(s)\pi_k^i(s)\right) - \pi_k^{-i}(s)\right\|_2 \\
\leq \frac{\sqrt{2} A_{\max}}{\sqrt{\tau}(1-\gamma)} V_{v,s}(\pi_k^1(s), \pi_k^2(s))^{1/2}.
\end{array}
$$

Combining the upper bounds we obtained for the terms $B_1$ and $B_2$ , we obtain

$$
\begin{array}{l}
\sum_s \|\mathcal{T}^i(v^i)(s)(\sigma_\tau(q_k^{-i}(s)) - \pi_k^{-i}(s))\|_2^2 \\
\leq \sum_s (B_1 + B_2)^2 \\
\leq 2 \sum_s \left(B_1^2 + B_2^2\right) \\
\leq 2 \sum_s \left(\frac{A_{\max}^2}{\tau^2(1-\gamma)^2} \|q_k^{-i}(s) - \bar{q}_k^{-i}(s)\|_2^2 + \frac{2 A_{\max}^2}{\tau(1-\gamma)^2} V_{v,s}(\pi_k^1(s), \pi_k^2(s))\right) \\
\leq \frac{2 A_{\max}^2}{\tau^2(1-\gamma)^2} \|q_k^{-i} - \bar{q}_k^{-i}\|_2^2 + \frac{4 |\mathcal{S}| A_{\max}^2}{\tau(1-\gamma)^2} \mathcal{L}_\pi(k).
\end{array}
$$

Coming back to Eq.
(43), using the previous inequality, we have

$$
\langle q_k^i - \bar{q}_k^i, \bar{q}_k^i - \bar{q}_{k+1}^i \rangle
$$

$$
\begin{array}{l}
\leq \frac{c_1 \beta_k \|q_k^i - \bar{q}_k^i\|_2^2}{2} + \frac{\beta_k \sum_s \|\mathcal{T}^i(v^i)(s)(\sigma_\tau(q_k^{-i}(s)) - \pi_k^{-i}(s))\|_2^2}{2 c_1} \\
\leq \frac{c_1 \beta_k \|q_k^i - \bar{q}_k^i\|_2^2}{2} + \frac{A_{\max}^2 \beta_k}{c_1 \tau^2(1-\gamma)^2} \|q_k^{-i} - \bar{q}_k^{-i}\|_2^2 + \frac{2 |\mathcal{S}| A_{\max}^2 \beta_k}{c_1 \tau(1-\gamma)^2} \mathcal{L}_\pi(k).
\end{array}
$$

Choosing $c_1 = \frac{32|\mathcal{S}|A_{\max}^2}{\tau(1-\gamma)^2}$ in the previous inequality and then taking expectation on both sides, we obtain

$$
\mathbb{E}\left[\langle q_k^i - \bar{q}_k^i, \bar{q}_k^i - \bar{q}_{k+1}^i \rangle\right] \leq \frac{17 |\mathcal{S}| A_{\max}^2 \beta_k}{\tau(1-\gamma)^2} \mathbb{E}\left[\|q_k^i - \bar{q}_k^i\|_2^2\right] + \frac{\beta_k}{16} \mathbb{E}\left[\mathcal{L}_\pi(k)\right].
$$

The proof is complete.

We next consider the last term on the RHS of Eq. (42), which involves the difference between the operator $F^i(q_k^i, S_k, A_k^i, A_k^{-i}, S_{k+1})$ and its expected version $\bar{F}_k^i(q_k^i)$ , and hence can be viewed as the stochastic error due to sampling. The fact that the Markov chain $\{(S_k, A_k^i, A_k^{-i}, S_{k+1})\}$ is time-inhomogeneous presents a challenge in our analysis.
To overcome this challenge, observe that: (1) the policy (hence the transition probability matrix of the induced Markov chain) changes slowly compared to the $q$ -function; see Algorithm 3 Line 3, and (2) the stationary distribution as a function of the policy is Lipschitz (cf. Lemma B.1 (3)). These two observations together enable us to develop a refined conditioning argument to handle the time-inhomogeneous Markovian noise. The result is presented in the following lemma. Similar ideas were previously used in [23, 24, 65, 41, 76] for finite-sample analysis of single-agent RL algorithms. Recall that we use $\alpha_{k_1,k_2} = \sum_{k=k_1}^{k_2} \alpha_k$ to simplify the notation.

Lemma D.11 (Proof in Appendix D.7.2). The following inequality holds for all $k \geq z_k$ :

$$
\mathbb{E}\left[\left(F^i\left(q_k^i, S_k, A_k^i, A_k^{-i}, S_{k+1}\right) - \bar{F}_k^i\left(q_k^i\right)\right)^\top \left(q_k^i - \bar{q}_k^i\right)\right] \leq \frac{17 z_k \alpha_{k-z_k, k-1}}{(1-\gamma)^2},
$$

where $z_k$ is the mixing time of the Markov chain $\{S_n\}_{n \geq 0}$ induced by the joint policy $\pi_k = (\pi_k^1, \pi_k^2)$ with accuracy $\beta_k$ ; see Eq. (11).

When using constant stepsizes, we have $z_k \alpha_{k-z_k,k-1} = z_\beta^2 \alpha = \mathcal{O}(\alpha \log^2(1/\beta))$ . Since the two step sizes $\alpha$ and $\beta$ differ only by a multiplicative constant $c_{\alpha,\beta}$ , we have $\lim_{\alpha \to 0} z_\beta^2 \alpha = 0$ . Similarly, we also have $\lim_{k \to \infty} z_k \alpha_{k-z_k,k-1} = 0$ when using diminishing step sizes.

Using the upper bounds we obtained for all the terms on the RHS of Eq. (42), we have the one-step Lyapunov drift inequality for $q_k^i$ . Following the same line of analysis, we also obtain the one-step inequality for $q_k^{-i}$ .
Adding up the two Lyapunov drift inequalities, we arrive at the following lemma. + +Lemma D.12. The following inequality holds for all $k \geq z_k$ and $i \in \{1, 2\}$ : + +$$ +\mathbb {E} [ \mathcal {L} _ {q} (k + 1) ] \leq \left(1 - \alpha_ {k} c _ {\tau}\right) \mathbb {E} [ \mathcal {L} _ {q} (k) ] + \frac {\beta_ {k}}{4} \mathbb {E} [ \mathcal {L} _ {\pi} (k) ] + \frac {1 0 0 | \mathcal {S} | A _ {\max}}{(1 - \gamma) ^ {2}} z _ {k} \alpha_ {k} \alpha_ {k - z _ {k}, k - 1}. +$$ + +Proof of Lemma D.12. For $i \in \{1, 2\}$ , we have from Eq. (42), Lemma D.10, and Lemma D.11 that + +$$ +\begin{array}{l} \mathbb {E} [ \| q _ {k + 1} ^ {i} - \bar {q} _ {k + 1} ^ {i} \| _ {2} ^ {2} ] \leq (1 - 2 \alpha_ {k} c _ {\tau}) \mathbb {E} [ \| q _ {k} ^ {i} - \bar {q} _ {k} ^ {i} \| _ {2} ^ {2} ] + \frac {4 | \mathcal {S} | A _ {\max}}{(1 - \gamma) ^ {2}} (\alpha_ {k} ^ {2} + 2 \alpha_ {k} \beta_ {k} + \beta_ {k} ^ {2}) \\ + \frac {3 4 | \mathcal {S} | A _ {\max } ^ {2} \beta_ {k}}{\tau (1 - \gamma) ^ {2}} \mathbb {E} [ \| q _ {k} ^ {i} - \bar {q} _ {k} ^ {i} \| _ {2} ^ {2} ] + \frac {\beta_ {k}}{8} \mathbb {E} [ \mathcal {L} _ {\pi} (k) ] + \frac {3 4 z _ {k} \alpha_ {k} \alpha_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} \\ \leq \left(1 - 2 \alpha_ {k} c _ {\tau} + \frac {3 4 | \mathcal {S} | A _ {\max } ^ {2} \beta_ {k}}{\tau (1 - \gamma) ^ {2}}\right) \mathbb {E} [ \| q _ {k} ^ {i} - \bar {q} _ {k} ^ {i} \| _ {2} ^ {2} ] + \frac {\beta_ {k}}{8} \mathbb {E} [ \mathcal {L} _ {\pi} (k) ] \\ + \frac {5 0 | \mathcal {S} | A _ {\max}}{(1 - \gamma) ^ {2}} z _ {k} \alpha_ {k} \alpha_ {k - z _ {k}, k - 1}, \\ \end{array} +$$ + +where the second inequality follows from $\beta_{k} = c_{\alpha ,\beta}\alpha_{k}$ with $c_{\alpha ,\beta}\leq 1$ . 
Since

$$
c_{\alpha,\beta} \leq \frac{c_\tau \tau (1-\gamma)^2}{34 |\mathcal{S}| A_{\max}^2}, \tag{Condition 3.1}
$$

we have

$$
\mathbb{E}[\|q_{k+1}^i - \bar{q}_{k+1}^i\|_2^2] \leq (1 - \alpha_k c_\tau) \mathbb{E}[\|q_k^i - \bar{q}_k^i\|_2^2] + \frac{\beta_k}{8} \mathbb{E}[\mathcal{L}_\pi(k)] + \frac{50 |\mathcal{S}| A_{\max}}{(1-\gamma)^2} z_k \alpha_k \alpha_{k-z_k, k-1}.
$$

Summing up the previous inequality for $i = 1, 2$ , we have

$$
\mathbb{E}[\mathcal{L}_q(k+1)] \leq (1 - \alpha_k c_\tau) \mathbb{E}[\mathcal{L}_q(k)] + \frac{\beta_k}{4} \mathbb{E}[\mathcal{L}_\pi(k)] + \frac{100 |\mathcal{S}| A_{\max}}{(1-\gamma)^2} z_k \alpha_k \alpha_{k-z_k, k-1}.
$$

# D.6 Solving Coupled Lyapunov Drift Inequalities

We first restate the Lyapunov drift inequalities from previous sections. Recall our notation $\mathcal{L}_q(t,k) = \sum_{i=1,2} \|q_{t,k}^i - \bar{q}_{t,k}^i\|_2^2$ , $\mathcal{L}_\pi(t,k) = \max_{s \in \mathcal{S}} V_{v_t,s}(\pi_{t,k}^1(s), \pi_{t,k}^2(s))$ , $\mathcal{L}_{\mathrm{sum}}(t) = \|v_t^1 + v_t^2\|_\infty$ , and $\mathcal{L}_v(t) = \sum_{i=1,2} \|v_t^i - v_*^i\|_\infty$ . Let $\mathcal{F}_t$ be the history of Algorithm 2 right before the $t$ -th outer-loop iteration. Note that $v_t^1$ and $v_t^2$ are both measurable with respect to $\mathcal{F}_t$ . In what follows, for ease of presentation, we write $\mathbb{E}_t[\cdot]$ for $\mathbb{E}[\cdot \mid \mathcal{F}_t]$ .

- Lemma D.5: It holds for all $t \geq 0$ that

$$
\mathcal{L}_v(t+1) \leq \gamma \mathcal{L}_v(t) + 4 \mathcal{L}_{\mathrm{sum}}(t) + 2 \mathcal{L}_q^{1/2}(t, K) + 4 \mathcal{L}_\pi(t, K) + 6 \tau \log\left(A_{\max}\right).
\tag{44}
$$

- Lemma D.6: It holds for all $t \geq 0$ that

$$
\mathcal{L}_{\mathrm{sum}}(t+1) \leq \gamma \mathcal{L}_{\mathrm{sum}}(t) + 2 \mathcal{L}_q^{1/2}(t, K). \tag{45}
$$

- Lemma D.8: It holds for all $t, k \geq 0$ that

$$
\begin{array}{l}
\mathbb{E}_t\left[\mathcal{L}_\pi(t, k+1)\right] \leq \left(1 - \frac{3\beta_k}{4}\right) \mathbb{E}_t\left[\mathcal{L}_\pi(t, k)\right] + \frac{16 A_{\max}^2 \beta_k}{\tau} \mathcal{L}_{\mathrm{sum}}(t)^2 \\
\quad + \frac{32 A_{\max}^2 \beta_k}{\tau^3 \ell_\tau^2 (1-\gamma)^2} \mathcal{L}_q(t, k) + 2 L_\tau \beta_k^2. \tag{46}
\end{array}
$$

- Lemma D.12: It holds for all $t \geq 0$ and $k \geq z_k$ that

$$
\mathbb{E}_t\left[\mathcal{L}_q(t, k+1)\right] \leq \left(1 - \alpha_k c_\tau\right) \mathbb{E}_t\left[\mathcal{L}_q(t, k)\right] + \frac{\beta_k}{4} \mathbb{E}_t\left[\mathcal{L}_\pi(t, k)\right] + \frac{100 |\mathcal{S}| A_{\max}}{(1-\gamma)^2} z_k \alpha_k \alpha_{k-z_k, k-1}. \tag{47}
$$

Adding up Eqs. (46) and (47), we have by $c_{\alpha,\beta} \leq \min\left(\frac{1}{L_\tau^{1/2}}, \frac{c_\tau \tau^3 \ell_\tau^2 (1-\gamma)^2}{128 A_{\max}^2}, c_\tau\right)$ (cf. Condition 3.1) that

$$
\begin{array}{l}
\mathbb{E}_t\left[\mathcal{L}_\pi(t, k+1) + \mathcal{L}_q(t, k+1)\right] \leq \left(1 - \frac{\beta_k}{2}\right) \mathbb{E}_t\left[\mathcal{L}_\pi(t, k) + \mathcal{L}_q(t, k)\right] \\
\quad + \frac{16 A_{\max}^2 \beta_k}{\tau} \mathcal{L}_{\mathrm{sum}}(t)^2 + \frac{102 |\mathcal{S}| A_{\max}}{(1-\gamma)^2} z_k \alpha_k \alpha_{k-z_k, k-1}.
\tag {48} \\ \end{array} +$$ + +# D.6.1 Constant Stepsize + +When using constant stepsizes, i.e., $\alpha_{k}\equiv \alpha$ , $\beta_{k}\equiv \beta = c_{\alpha ,\beta}\alpha$ , repeatedly using Eq. (48) from $z_{\beta}$ to $k$ , we have + +$$ +\mathbb {E} _ {t} \left[ \mathcal {L} _ {\pi} (t, k) + \mathcal {L} _ {q} (t, k) \right] \leq \left(1 - \frac {\beta}{2}\right) ^ {k - z _ {\beta}} \left(\mathcal {L} _ {\pi} (t, 0) + \mathcal {L} _ {q} (t, 0)\right) +$$ + +$$ ++ \frac {1 6 A _ {\max } ^ {2}}{\tau} \mathcal {L} _ {\operatorname {s u m}} (t) ^ {2} + \frac {2 0 4 | \mathcal {S} | A _ {\max }}{(1 - \gamma) ^ {2} c _ {\alpha , \beta}} z _ {\beta} ^ {2} \alpha . \tag {49} +$$ + +We next bound $\mathcal{L}_{\pi}(t,0) + \mathcal{L}_q(t,0)$ . For $i\in \{1,2\}$ , we have + +$$ +\begin{array}{l} \mathcal {L} _ {\pi} (t, 0) = \max _ {s} V _ {v _ {t}, s} \left(\pi_ {t, 0} ^ {1} (s), \pi_ {t, 0} ^ {2} (s)\right) \\ = \max _ {s} \sum_ {i = 1, 2} \max _ {\mu^ {i}} \{(\mu^ {i} - \pi_ {t, 0} ^ {i} (s)) ^ {\top} \mathcal {T} ^ {i} (v _ {t} ^ {i}) (s) \pi_ {t, 0} ^ {- i} (s) + \tau \nu (\mu^ {i}) - \tau \nu (\pi_ {t, 0} ^ {i} (s)) \} \\ \leq 2 \sum_ {i = 1, 2} \max _ {s, a ^ {i}, a ^ {- i}} | \mathcal {T} ^ {i} (v _ {t} ^ {i}) (s, a ^ {i}, a ^ {- i}) | + 2 \tau \log (A _ {\max }) \\ \leq \frac {4}{(1 - \gamma)} + 2 \tau \log (A _ {\max}), \\ \end{array} +$$ + +and + +$$ +\mathcal {L} _ {q} (t, 0) = \sum_ {i = 1, 2} \| q _ {t, 0} ^ {i} - \bar {q} _ {t, 0} ^ {i} \| _ {2} ^ {2} \leq \frac {8 | \mathcal {S} | A _ {\max }}{(1 - \gamma) ^ {2}}. \tag {Lemma D.1} +$$ + +It follows that + +$$ +\mathcal {L} _ {\pi} (t, 0) + \mathcal {L} _ {q} (t, 0) \leq \frac {4}{(1 - \gamma)} + 2 \tau \log (A _ {\max }) + \frac {8 | \mathcal {S} | A _ {\max }}{(1 - \gamma) ^ {2}} = L _ {\mathrm {i n}}. +$$ + +Using the previous inequality in Eq. 
(49), we have + +$$ +\mathbb {E} _ {t} \left[ \mathcal {L} _ {\pi} (t, k) + \mathcal {L} _ {q} (t, k) \right] \leq L _ {\text {i n}} \left(1 - \frac {\beta}{2}\right) ^ {k - z _ {\beta}} + \frac {1 6 A _ {\max } ^ {2}}{\tau} \mathcal {L} _ {\text {s u m}} (t) ^ {2} + \frac {2 0 4 | S | A _ {\max }}{(1 - \gamma) ^ {2} c _ {\alpha , \beta}} z _ {\beta} ^ {2} \alpha , \tag {50} +$$ + +which implies + +$$ +\mathbb {E} _ {t} \left[ \mathcal {L} _ {\pi} (t, k) \right] \leq L _ {\text {i n}} \left(1 - \frac {\beta}{2}\right) ^ {k - z _ {\beta}} + \frac {1 6 A _ {\max } ^ {2}}{\tau} \mathcal {L} _ {\text {s u m}} (t) ^ {2} + \frac {2 0 4 | S | A _ {\max }}{(1 - \gamma) ^ {2} c _ {\alpha , \beta}} z _ {\beta} ^ {2} \alpha . +$$ + +Substituting the previous inequality on $\mathbb{E}_t[\mathcal{L}_{\pi}(t,k)]$ into Eq. (47), we have + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \mathcal {L} _ {q} (t, k + 1) \right] \leq (1 - \alpha c _ {\tau}) \mathbb {E} _ {t} \left[ \mathcal {L} _ {q} (t, k) \right] + \frac {1 5 1 | \mathcal {S} | A _ {\max}}{(1 - \gamma) ^ {2}} z _ {\beta} ^ {2} \alpha^ {2} + \frac {\beta L _ {\text {i n}}}{4} \left(1 - \frac {\beta}{2}\right) ^ {k - z _ {\beta}} \\ + \frac {4 A _ {\mathrm {m a x}} ^ {2} \beta}{\tau} \mathcal {L} _ {\mathrm {s u m}} (t) ^ {2}. \\ \end{array} +$$ + +Repeatedly using the previous inequality, since $c_{\alpha,\beta} \leq c_{\tau}$ (cf. 
Condition 3.1), we have + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \mathcal {L} _ {q} (t, k) \right] \leq L _ {\text {i n}} \left(1 - c _ {\tau} \alpha\right) ^ {k - z _ {\beta}} + \frac {\beta L _ {\text {i n}} (k - z _ {\beta})}{4} \left(1 - \frac {\beta}{2}\right) ^ {k - z _ {\beta} - 1} \\ + \frac {4 A _ {\operatorname* {m a x}} ^ {2} c _ {\alpha , \beta}}{c _ {\tau} \tau} \mathcal {L} _ {\operatorname {s u m}} (t) ^ {2} + \frac {1 5 1 | \mathcal {S} | A _ {\operatorname* {m a x}}}{(1 - \gamma) ^ {2} c _ {\tau}} z _ {\beta} ^ {2} \alpha , \\ \end{array} +$$ + +which implies (by using Jensen's inequality) that + +$$ +\begin{array}{l} \mathbb {E} _ {t} [ \mathcal {L} _ {q} (t, k) ^ {1 / 2} ] \leq L _ {\mathrm {i n}} ^ {1 / 2} \left(1 - c _ {\tau} \alpha\right) ^ {\frac {k - z _ {\beta}}{2}} + \frac {\beta^ {1 / 2} L _ {\mathrm {i n}} ^ {1 / 2} (k - z _ {\beta}) ^ {1 / 2}}{2} \left(1 - \frac {\beta}{2}\right) ^ {\frac {k - z _ {\beta} - 1}{2}} \\ + \frac {2 A _ {\operatorname* {m a x}} c _ {\alpha , \beta} ^ {1 / 2}}{c _ {\tau} ^ {1 / 2} \tau^ {1 / 2}} \mathcal {L} _ {\operatorname {s u m}} (t) + \frac {1 3 | \mathcal {S} | ^ {1 / 2} A _ {\operatorname* {m a x}} ^ {1 / 2}}{(1 - \gamma) c _ {\tau} ^ {1 / 2}} z _ {\beta} \alpha^ {1 / 2}. \\ \end{array} +$$ + +Substituting the previous bound on $\mathbb{E}_t[\mathcal{L}_q(t,k)^{1 / 2}]$ into Eq. 
(45) and then taking total expectation, we have + +$$ +\mathbb {E} \left[ \mathcal {L} _ {\text {s u m}} (t + 1) \right] \leq \gamma \mathcal {L} _ {\text {s u m}} (t) + 2 L _ {\text {i n}} ^ {1 / 2} \left(1 - c _ {\tau} \alpha\right) ^ {\frac {K - z _ {\beta}}{2}} + \beta^ {1 / 2} L _ {\text {i n}} ^ {1 / 2} \left(K - z _ {\beta}\right) ^ {1 / 2} \left(1 - \frac {\beta}{2}\right) ^ {\frac {K - z _ {\beta} - 1}{2}} +$$ + +$$ +\begin{array}{l} + \frac {4 A _ {\operatorname* {m a x}} c _ {\alpha , \beta} ^ {1 / 2}}{c _ {\tau} ^ {1 / 2} \tau^ {1 / 2}} \mathcal {L} _ {\text {s u m}} (t) + \frac {2 6 | \mathcal {S} | ^ {1 / 2} A _ {\operatorname* {m a x}} ^ {1 / 2}}{(1 - \gamma) c _ {\tau} ^ {1 / 2}} z _ {\beta} \alpha^ {1 / 2} \\ \leq \left(\frac {1 + \gamma}{2}\right) \mathcal {L} _ {\text {s u m}} (t) + 2 L _ {\text {i n}} ^ {1 / 2} \left(1 - c _ {\tau} \alpha\right) ^ {\frac {K - z _ {\beta}}{2}} \\ + \beta^ {1 / 2} L _ {\mathrm {i n}} ^ {1 / 2} (K - z _ {\beta}) ^ {1 / 2} \left(1 - \frac {\beta}{2}\right) ^ {\frac {K - z _ {\beta} - 1}{2}} + \frac {2 6 | \mathcal {S} | ^ {1 / 2} A _ {\max } ^ {1 / 2}}{(1 - \gamma) c _ {\tau} ^ {1 / 2}} z _ {\beta} \alpha^ {1 / 2}, \\ \end{array} +$$ + +where the last line follows from $c_{\alpha, \beta} \leq \frac{c_{\tau} \tau(1 - \gamma)^2}{64A_{\max}^2}$ (cf. Condition 3.1). 
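The remaining step repeatedly applies a one-step contraction of the form $x_{t+1} \leq \rho x_t + c$ with $\rho = \frac{1+\gamma}{2}$ , which unrolls to $x_t \leq \rho^t x_0 + \frac{c}{1-\rho} = \rho^t x_0 + \frac{2c}{1-\gamma}$ ; this is the source of the $\frac{2}{1-\gamma}$ -type prefactors below. A minimal numeric sketch of this unrolling (generic $\rho$ , $c$ , $x_0$ ; the values are illustrative stand-ins, not the paper's constants):

```python
def unroll(x0: float, rho: float, c: float, T: int) -> float:
    # Iterate x_{t+1} = rho * x_t + c exactly T times.
    x = x0
    for _ in range(T):
        x = rho * x + c
    return x

# Closed-form bound after unrolling: x_T <= rho^T * x0 + c / (1 - rho).
gamma = 0.9
rho = (1 + gamma) / 2        # contraction factor (1 + gamma) / 2
c = 0.01                     # stand-in for the constant drift term
x0 = 2 / (1 - gamma)         # stand-in for the initial bound 2 / (1 - gamma)
for T in [1, 10, 100]:
    assert unroll(x0, rho, c, T) <= rho ** T * x0 + c / (1 - rho) + 1e-12
```

Note that $\frac{c}{1-\rho} = \frac{2c}{1-\gamma}$ exactly when $\rho = \frac{1+\gamma}{2}$ , matching the constants carried through the displays that follow.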
Since $\|v_0^1 + v_0^2\|_\infty \leq \frac{2}{1-\gamma}$ , repeatedly using the previous inequality, we have for all $t \geq 0$ that

$$
\begin{array}{l}
\mathbb{E}\left[\mathcal{L}_{\mathrm{sum}}(t)\right] \leq \frac{2}{1-\gamma}\left(\frac{1+\gamma}{2}\right)^t + \frac{4 L_{\mathrm{in}}^{1/2}\left(1 - c_\tau \alpha\right)^{\frac{K - z_\beta}{2}}}{1-\gamma} \\
\quad + \frac{2 \beta^{1/2} L_{\mathrm{in}}^{1/2} (K - z_\beta)^{1/2}}{1-\gamma}\left(1 - \frac{\beta}{2}\right)^{\frac{K - z_\beta - 1}{2}} + \frac{52 |\mathcal{S}|^{1/2} A_{\max}^{1/2}}{(1-\gamma)^2 c_\tau^{1/2}} z_\beta \alpha^{1/2} \\
\leq \frac{2}{1-\gamma}\left(\frac{1+\gamma}{2}\right)^t + \frac{6 L_{\mathrm{in}}^{1/2} (K - z_\beta)^{1/2}}{1-\gamma}\left(1 - \frac{\beta}{2}\right)^{\frac{K - z_\beta - 1}{2}} \\
\quad + \frac{52 |\mathcal{S}|^{1/2} A_{\max}^{1/2}}{(1-\gamma)^2 c_\tau^{1/2}} z_\beta \alpha^{1/2}. \tag{51}
\end{array}
$$

Now we have obtained finite-sample bounds for $\mathcal{L}_q(t,k)$ , $\mathcal{L}_\pi(t,k)$ , and $\mathcal{L}_{\mathrm{sum}}(t)$ . The next step is to use them in Eq. (44) to obtain finite-sample bounds for $\mathcal{L}_v(t)$ . Specifically, using Eq. (44), Eq. (50), and Eq.
(51), we have + +$$ +\begin{array}{l} \mathbb {E} \left[ \mathcal {L} _ {v} (t + 1) \right] \leq \gamma \mathbb {E} \left[ \mathcal {L} _ {v} (t) \right] + 4 \mathbb {E} \left[ \mathcal {L} _ {\operatorname {s u m}} (t) \right] + 2 \mathbb {E} \left[ \mathcal {L} _ {q} ^ {1 / 2} (t, K) \right] + 4 \mathbb {E} \left[ \mathcal {L} _ {\pi} (t, K) \right] + 6 \tau \log \left(A _ {\max }\right) \\ \leq \gamma \mathbb {E} [ \mathcal {L} _ {v} (t) ] + 2 L _ {\text {i n}} ^ {1 / 2} \left(1 - c _ {\tau} \alpha\right) ^ {\frac {K - z _ {\beta}}{2}} \\ + \beta^ {1 / 2} L _ {\mathrm {i n}} ^ {1 / 2} (K - z _ {\beta}) ^ {1 / 2} \left(1 - \frac {\beta}{2}\right) ^ {\frac {K - z _ {\beta} - 1}{2}} + \frac {2 6 | \mathcal {S} | ^ {1 / 2} A _ {\max } ^ {1 / 2}}{(1 - \gamma) c _ {\tau} ^ {1 / 2}} z _ {\beta} \alpha^ {1 / 2} \\ + 4 L _ {\text {i n}} \left(1 - \frac {\beta}{2}\right) ^ {K - z _ {\beta}} + \frac {8 1 6 | \mathcal {S} | A _ {\max }}{(1 - \gamma) ^ {2} c _ {\alpha , \beta}} z _ {\beta} ^ {2} \alpha + 6 \tau \log \left(A _ {\max }\right) \\ + \frac {2 6 6 A _ {\max } ^ {2}}{\tau (1 - \gamma) ^ {2}} \left(\frac {1 + \gamma}{2}\right) ^ {t} + \frac {7 9 8 A _ {\max } ^ {2} L _ {\text {i n}} ^ {1 / 2} (K - z _ {\beta}) ^ {1 / 2}}{(1 - \gamma) ^ {2} \tau} \left(1 - \frac {\beta}{2}\right) ^ {\frac {K - z _ {\beta} - 1}{2}} \\ + \frac {6 9 1 6 | \mathcal {S} | ^ {1 / 2} A _ {\max } ^ {5 / 2}}{(1 - \gamma) ^ {3} \tau c _ {\tau} ^ {1 / 2}} z _ {\beta} \alpha^ {1 / 2} \\ \leq \gamma \mathbb {E} [ \mathcal {L} _ {v} (t) ] + \frac {1 2 2 3 | \mathcal {S} | A _ {\max}}{(1 - \gamma) ^ {2} c _ {\alpha , \beta}} z _ {\beta} ^ {2} \alpha^ {1 / 2} + 6 \tau \log (A _ {\max }) \\ + \frac {2 6 6 A _ {\max} ^ {2}}{\tau (1 - \gamma) ^ {2}} \left(\frac {1 + \gamma}{2}\right) ^ {t} + \frac {8 0 5 A _ {\max} ^ {2} L _ {\mathrm {i n}} (K - z _ {\beta}) ^ {1 / 2}}{(1 - \gamma) ^ {2} \tau} \left(1 - \frac {\beta}{2}\right) ^ {\frac {K - z _ {\beta} - 1}{2}}. 
\\ \end{array}
$$

Repeatedly using the previous inequality from $0$ to $T-1$ and then using $\mathcal{L}_v(0) \leq \frac{4}{1-\gamma}$ , we have

$$
\begin{array}{l}
\mathbb{E}\left[\mathcal{L}_v(T)\right] \leq \frac{270 A_{\max}^2 T}{\tau(1-\gamma)^2}\left(\frac{1+\gamma}{2}\right)^{T-1} \\
\quad + \frac{805 A_{\max}^2 L_{\mathrm{in}} (K - z_\beta)^{1/2}}{\tau(1-\gamma)^3}\left(1 - \frac{\beta}{2}\right)^{\frac{K - z_\beta - 1}{2}} \\
\quad + \frac{1223 |\mathcal{S}| A_{\max}}{(1-\gamma)^3 c_{\alpha,\beta}} z_\beta^2 \alpha^{1/2} + \frac{6 \tau \log(A_{\max})}{1-\gamma}.
\end{array}
$$

Our next step is to use the bounds we obtained for $\mathcal{L}_q(t,k)$ , $\mathcal{L}_\pi(t,k)$ , $\mathcal{L}_v(t)$ , and $\mathcal{L}_{\mathrm{sum}}(t)$ in Lemma D.4. For simplicity of presentation, we use $a \lesssim b$ to mean that there exists a numerical constant $c$ such that $a \leq cb$ . Using the previous inequality, Eq. (50), and Eq. (51), we have

$$
\begin{array}{l}
\mathbb{E}[\mathrm{NG}(\pi_{T,K}^1, \pi_{T,K}^2)] \leq \frac{8}{1-\gamma} \mathcal{L}_{\mathrm{sum}}(T) + \frac{4}{1-\gamma} \mathcal{L}_v(T) + \frac{4}{1-\gamma} \mathcal{L}_\pi(T, K) + \frac{8 \tau \log(A_{\max})}{1-\gamma} \\
\lesssim \frac{A_{\max}^2 T}{\tau(1-\gamma)^3}\left(\frac{1+\gamma}{2}\right)^{T-1} + \frac{A_{\max}^2 L_{\mathrm{in}} (K - z_\beta)^{1/2}}{\tau(1-\gamma)^4}\left(1 - \frac{\beta}{2}\right)^{\frac{K - z_\beta - 1}{2}} \\
\quad + \frac{|\mathcal{S}| A_{\max}}{(1-\gamma)^4 c_{\alpha,\beta}} z_\beta^2 \alpha^{1/2} + \frac{\tau \log(A_{\max})}{(1-\gamma)^2}.
\end{array}
$$

The proof of Theorem 3.1 (1) is complete.
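The factor $T\left(\frac{1+\gamma}{2}\right)^{T-1}$ in the bound on $\mathbb{E}[\mathcal{L}_v(T)]$ comes from unrolling a recursion whose drift itself decays geometrically: iterating $x_{t+1} \leq \gamma x_t + c\,\tilde{\rho}^t$ with $\tilde{\rho} = \frac{1+\gamma}{2} > \gamma$ accumulates $\sum_{t=0}^{T-1} \gamma^{T-1-t} \tilde{\rho}^t \leq T \tilde{\rho}^{T-1}$ , since each of the $T$ terms is at most $\tilde{\rho}^{T-1}$ . A small numeric check of this pattern (generic constants, illustrative only):

```python
def unroll_decaying_drift(x0: float, gamma: float, rho: float,
                          c: float, T: int) -> float:
    # Iterate x_{t+1} = gamma * x_t + c * rho^t for t = 0, ..., T-1.
    x = x0
    for t in range(T):
        x = gamma * x + c * rho ** t
    return x

gamma = 0.9
rho = (1 + gamma) / 2   # drift decays slower than the contraction: rho > gamma
c = 1.0
for T in [1, 5, 20, 100]:
    # x_T <= gamma^T * x0 + c * T * rho^(T-1): each of the T drift terms
    # gamma^(T-1-t) * rho^t is at most rho^(T-1) because gamma <= rho.
    bound = gamma ** T * 1.0 + c * T * rho ** (T - 1)
    assert unroll_decaying_drift(1.0, gamma, rho, c, T) <= bound + 1e-12
```

The same pattern with a constant drift ( $\tilde{\rho} = 1$ ) instead yields the $\frac{c}{1-\gamma}$ -type terms appearing elsewhere in the bound.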
# D.6.2 Diminishing Stepsizes

Consider using linearly diminishing step sizes, i.e., $\alpha_k = \frac{\alpha}{k+h}$ , $\beta_k = \frac{\beta}{k+h}$ , and $\beta = c_{\alpha,\beta}\alpha$ . Repeatedly using Eq. (48), we have for all $k \geq k_0 := \min\{k' \mid k' \geq z_{k'}\}$ that

$$
\begin{array}{l}
\mathbb{E}_t[\mathcal{L}_\pi(t,k) + \mathcal{L}_q(t,k)] \leq L_{\mathrm{in}} \underbrace{\prod_{m=k_0}^{k-1}\left(1 - \frac{\beta_m}{2}\right)}_{\hat{\mathcal{E}}_1} + \frac{204|\mathcal{S}|A_{\max}}{(1-\gamma)^2} \underbrace{\sum_{n=k_0}^{k-1} z_n^2 \alpha_n^2 \prod_{m=n+1}^{k-1}\left(1 - \frac{\beta_m}{2}\right)}_{\hat{\mathcal{E}}_2} \\
\quad + \frac{16 A_{\max}^2}{\tau} \mathcal{L}_{\mathrm{sum}}(t)^2 \underbrace{\sum_{n=k_0}^{k-1} \beta_n \prod_{m=n+1}^{k-1}\left(1 - \frac{\beta_m}{2}\right)}_{\hat{\mathcal{E}}_3}.
\end{array}
$$

Next, we evaluate the terms $\{\hat{\mathcal{E}}_j\}_{1 \leq j \leq 3}$ . Terms like $\{\hat{\mathcal{E}}_j\}_{1 \leq j \leq 3}$ have been well studied in the existing literature [24, 44, 65]. Specifically, we have from [65, Appendix A.2.] and $\beta = 4$ that

$$
\hat{\mathcal{E}}_1 \leq \frac{k_0 + h}{k + h}, \quad \hat{\mathcal{E}}_2 \leq \frac{64 e z_k^2}{(k+h) c_{\alpha,\beta}^2}, \quad \text{and} \quad \hat{\mathcal{E}}_3 \leq 2.
+$$ + +It follows that + +$$ +\mathbb {E} _ {t} [ \mathcal {L} _ {\pi} (t, k) + \mathcal {L} _ {q} (t, k) ] \le L _ {\mathrm {i n}} \frac {k _ {0} + h}{k + h} + \frac {3 2 6 4 e | \mathcal {S} | A _ {\max}}{(1 - \gamma) ^ {2} c _ {\alpha , \beta}} z _ {k} ^ {2} \alpha_ {k} + \frac {3 2 A _ {\max} ^ {2}}{\tau} \mathcal {L} _ {\mathrm {s u m}} (t) ^ {2}, +$$ + +which implies + +$$ +\mathbb {E} _ {t} \left[ \mathcal {L} _ {\pi} (t, k) \right] \leq L _ {\mathrm {i n}} \frac {k _ {0} + h}{k + h} + \frac {3 2 6 4 e | \mathcal {S} | A _ {\max }}{(1 - \gamma) ^ {2} c _ {\alpha , \beta}} z _ {k} ^ {2} \alpha_ {k} + \frac {3 2 A _ {\max } ^ {2}}{\tau} \mathcal {L} _ {\operatorname {s u m}} (t) ^ {2}. \tag {52} +$$ + +Using the previous inequality on $\mathbb{E}_t[\mathcal{L}_{\pi}(t,k)]$ in Eq. (47), we have + +$$ +\begin{array}{l} \mathbb {E} _ {t} \left[ \mathcal {L} _ {q} (t, k + 1) \right] \leq \left(1 - \alpha_ {k} c _ {\tau}\right) \mathbb {E} _ {t} \left[ \mathcal {L} _ {q} (t, k) \right] + \frac {1 0 0 | \mathcal {S} | A _ {\max}}{(1 - \gamma) ^ {2}} z _ {k} \alpha_ {k} \alpha_ {k - z _ {k}, k - 1} \\ + \frac {L _ {\mathrm {i n}} c _ {\alpha , \beta} \alpha_ {k} ^ {2}}{4 \alpha_ {k _ {0}}} + \frac {8 1 6 e | \mathcal {S} | A _ {\max}}{(1 - \gamma) ^ {2}} z _ {k} ^ {2} \alpha_ {k} ^ {2} + \frac {8 A _ {\max} ^ {2} \beta_ {k}}{\tau} \mathcal {L} _ {\mathrm {s u m}} (t) ^ {2} \\ \leq \left(1 - \alpha_ {k} c _ {\tau}\right) \mathbb {E} _ {t} \left[ \mathcal {L} _ {q} (t, k) \right] + \frac {1 0 1 7 e L _ {\mathrm {i n}} | S | A _ {\max}}{(1 - \gamma) ^ {2} \alpha_ {k _ {0}}} z _ {k} ^ {2} \alpha_ {k} ^ {2} \\ + \frac {8 A _ {\operatorname* {m a x}} ^ {2} \beta_ {k}}{\tau} \mathcal {L} _ {\operatorname {s u m}} (t) ^ {2}. \\ \end{array} +$$ + +Repeatedly using the previous inequality starting from $k_{0}$ , since $\alpha c_{\tau} \geq 1$ (cf. 
Condition 3.1), we have + +$$ +\mathbb {E} _ {t} \left[ \mathcal {L} _ {q} (t, k) \right] \leq L _ {\text {i n}} \frac {k _ {0} + h}{k + h} + \frac {4 0 6 8 e ^ {2} L _ {\text {i n}} | \mathcal {S} | A _ {\max }}{(1 - \gamma) ^ {2} c _ {\tau} \alpha_ {k _ {0}}} z _ {k} ^ {2} \alpha_ {k} + \frac {8 A _ {\max } ^ {2} c _ {\alpha , \beta}}{c _ {\tau} \tau} \mathcal {L} _ {\text {s u m}} (t) ^ {2}, +$$ + +which implies (by using Jensen's inequality) that + +$$ +\mathbb {E} _ {t} \left[ \mathcal {L} _ {q} (t, k) ^ {1 / 2} \right] \leq L _ {\text {i n}} ^ {1 / 2} \left(\frac {k _ {0} + h}{k + h}\right) ^ {1 / 2} + \frac {6 4 e L _ {\text {i n}} ^ {1 / 2} | \mathcal {S} | ^ {1 / 2} A _ {\max } ^ {1 / 2}}{(1 - \gamma) c _ {\tau} ^ {1 / 2} \alpha_ {k _ {0}} ^ {1 / 2}} z _ {k} \alpha_ {k} ^ {1 / 2} + \frac {3 A _ {\max } c _ {\alpha , \beta} ^ {1 / 2}}{c _ {\tau} ^ {1 / 2} \tau^ {1 / 2}} \mathcal {L} _ {\text {s u m}} (t). \tag {53} +$$ + +Taking total expectation on both sides of the previous inequality and then using the result in Eq. 
(45), we have + +$$ +\begin{array}{l} \mathbb {E} \left[ \mathcal {L} _ {\text {s u m}} (t + 1) \right] \leq \gamma \mathbb {E} \left[ \mathcal {L} _ {\text {s u m}} (t) \right] + 2 L _ {\text {i n}} ^ {1 / 2} \left(\frac {k _ {0} + h}{K + h}\right) ^ {1 / 2} + \frac {1 2 8 e | \mathcal {S} | ^ {1 / 2} A _ {\max } ^ {1 / 2}}{(1 - \gamma) c _ {\tau} ^ {1 / 2} \alpha_ {k _ {0}} ^ {1 / 2}} z _ {K} \alpha_ {K} ^ {1 / 2} \\ + \frac {6 A _ {\operatorname* {m a x}} c _ {\alpha , \beta} ^ {1 / 2}}{c _ {\tau} ^ {1 / 2} \tau^ {1 / 2}} \mathcal {L} _ {\text {s u m}} (t) \\ \leq \left(\frac {\gamma + 1}{2}\right) \mathbb {E} [ \mathcal {L} _ {\mathrm {s u m}} (t) ] + \frac {1 3 0 e L _ {\mathrm {i n}} ^ {1 / 2} | \mathcal {S} | ^ {1 / 2} A _ {\mathrm {m a x}} ^ {1 / 2}}{(1 - \gamma) c _ {\tau} ^ {1 / 2} \alpha_ {k _ {0}} ^ {1 / 2}} z _ {K} \alpha_ {K} ^ {1 / 2}, \\ \end{array} +$$ + +where the last line follows from $c_{\alpha, \beta} \leq \frac{c_{\tau} \tau(1 - \gamma)^2}{144A_{\max}^2}$ (cf. Condition 3.1). Repeatedly using the previous inequality starting from 0, we have + +$$ +\mathbb {E} \left[ \mathcal {L} _ {\text {s u m}} (t) \right] \leq \frac {2}{1 - \gamma} \left(\frac {1 + \gamma}{2}\right) ^ {t} + \frac {2 6 0 e L _ {\mathrm {i n}} ^ {1 / 2} | \mathcal {S} | ^ {1 / 2} A _ {\max } ^ {1 / 2}}{\left(1 - \gamma\right) ^ {2} c _ {\tau} ^ {1 / 2} \alpha_ {k _ {0}} ^ {1 / 2}} z _ {K} \alpha_ {K} ^ {1 / 2}. \tag {54} +$$ + +The next step is to bound $\mathcal{L}_v(t)$ . Recall from Eq. (44) that + +$$ +\mathbb {E} _ {t} \left[ \mathcal {L} _ {v} (t + 1) \right] \leq \gamma \mathcal {L} _ {v} (t) + 4 \mathcal {L} _ {\operatorname {s u m}} (t) + 2 \mathbb {E} _ {t} \left[ \mathcal {L} _ {q} ^ {1 / 2} (t, K) \right] + 4 \mathbb {E} _ {t} \left[ \mathcal {L} _ {\pi} (t, K) \right] + 6 \tau \log \left(A _ {\max }\right). +$$ + +Using Eqs. 
(52), (53), and (54) in the previous inequality, we have + +$$ +\begin{array}{l} \mathbb {E} \left[ \mathcal {L} _ {v} (t + 1) \right] \leq \gamma \mathbb {E} \left[ \mathcal {L} _ {v} (t) \right] + 4 \mathbb {E} \left[ \mathcal {L} _ {\operatorname {s u m}} (t) \right] + 2 \mathbb {E} \left[ \mathcal {L} _ {q} ^ {1 / 2} (t, K) \right] + 4 \mathbb {E} \left[ \mathcal {L} _ {\pi} (t, K) \right] + 6 \tau \log \left(A _ {\max }\right) \\ \leq \gamma \mathbb {E} [ \mathcal {L} _ {v} (t) ] + \frac {1 3 0 e L _ {\mathrm {i n}} ^ {1 / 2} | \mathcal {S} | ^ {1 / 2} A _ {\mathrm {m a x}} ^ {1 / 2}}{(1 - \gamma) c _ {\tau} ^ {1 / 2} \alpha_ {k _ {0}} ^ {1 / 2}} z _ {K} \alpha_ {K} ^ {1 / 2} \\ + \frac {4 L _ {\mathrm {i n}} \alpha_ {K}}{\alpha_ {k _ {0}}} + \frac {1 3 0 5 6 e | \mathcal {S} | A _ {\max}}{(1 - \gamma) ^ {2} c _ {\alpha , \beta}} z _ {K} ^ {2} \alpha_ {K} + 6 \tau \log (A _ {\max}) \\ + \frac {5 2 2 A _ {\mathrm {m a x}} ^ {2}}{\tau (1 - \gamma) ^ {2}} \left(\frac {1 + \gamma}{2}\right) ^ {t} + \frac {6 7 8 6 0 e L _ {\mathrm {i n}} ^ {1 / 2} | \mathcal {S} | ^ {1 / 2} A _ {\mathrm {m a x}} ^ {5 / 2}}{(1 - \gamma) ^ {3} \tau c _ {\tau} ^ {1 / 2} \alpha_ {k _ {0}} ^ {1 / 2}} z _ {K} \alpha_ {K} ^ {1 / 2} \\ \leq \gamma \mathbb {E} [ \mathcal {L} _ {v} (t) ] + \frac {5 2 2 A _ {\max} ^ {2}}{\tau (1 - \gamma) ^ {2}} \left(\frac {1 + \gamma}{2}\right) ^ {t} + \frac {1 5 0 5 6 e L _ {\mathrm {i n}} | \mathcal {S} | A _ {\max}}{(1 - \gamma) ^ {2} \alpha_ {k _ {0}} ^ {1 / 2} c _ {\alpha , \beta}} z _ {K} ^ {2} \alpha_ {K} ^ {1 / 2} \\ + 6 \tau \log (A _ {\max}). 
\\ \end{array} +$$ + +Repeatedly using the previous inequality starting from 0 to $T - 1$ , we have + +$$ +\mathcal {L} _ {v} (T) \leq \frac {5 2 6 A _ {\max} ^ {2} T}{\tau (1 - \gamma) ^ {2}} \left(\frac {1 + \gamma}{2}\right) ^ {T - 1} + \frac {1 5 0 5 6 e L _ {\mathrm {i n}} | \mathcal {S} | A _ {\max}}{(1 - \gamma) ^ {3} \alpha_ {k _ {0}} ^ {1 / 2} c _ {\alpha , \beta}} z _ {K} ^ {2} \alpha_ {K} ^ {1 / 2} + \frac {6 \tau \log (A _ {\max})}{1 - \gamma} +$$ + +Finally, using the previous inequality, Eq. (52), and Eq. (54) in Lemma D.4, we obtain + +$$ +\mathbb {E} [ \mathrm {N G} (\pi_ {T, K} ^ {1}, \pi_ {T, K} ^ {2}) ] \lesssim \frac {A _ {\max } ^ {2} T}{\tau (1 - \gamma) ^ {3}} \left(\frac {1 + \gamma}{2}\right) ^ {T - 1} + \frac {L _ {\text {i n}} | \mathcal {S} | A _ {\max }}{(1 - \gamma) ^ {4} \alpha_ {k _ {0}} ^ {1 / 2} c _ {\alpha , \beta}} z _ {K} ^ {2} \alpha_ {K} ^ {1 / 2} + \frac {\tau \log (A _ {\max })}{(1 - \gamma) ^ {2}}. +$$ + +The proof of Theorem 3.1 (2) is complete. + +# D.7 Proof of All Supporting Lemmas + +# D.7.1 Proof of Lemma B.1 + +Lemma B.1 (1), (3), and (4) are identical to [10, Proposition 3]. We here only prove Lemma B.1 (2). Consider the Markov chain $\{S_k\}$ induced by $\pi_b$ . Since $\{S_k\}$ is irreducible and aperiodic, there exists + +a positive integer $r_b$ such that $P_{\pi_b}^{r_b}$ has strictly positive entries [93, Proposition 1.7]. Therefore, there exists $\delta_b \in (0,1)$ such that + +$$ +P _ {\pi_ {b}} ^ {r _ {b}} (s, s ^ {\prime}) \geq \delta_ {b} \mu_ {b} (s ^ {\prime}) +$$ + +for all $(s, s')$ . In addition, the constant $\rho_b$ introduced after Assumption 3.1 is explicitly given as $\rho_b = \exp(-\delta_b / r_b)$ . The previous two equations are from the proof of the Markov chain convergence theorem presented in [93, Section 4.3]. Next, we consider the Markov chain $\{S_k\}$ induced by an arbitrary $\pi \in \Pi_{\tau}$ . 
Since

$$
\frac {\pi_ {b} (a | s)}{\pi (a | s)} = \frac {\pi_ {b} ^ {i} (a ^ {i} | s) \pi_ {b} ^ {- i} (a ^ {- i} | s)}{\pi^ {i} (a ^ {i} | s) \pi^ {- i} (a ^ {- i} | s)} \leq \frac {1}{\ell_ {\tau} ^ {2}}, \quad \forall a = (a ^ {i}, a ^ {- i}) \text { and } s,
$$

we have for any $s, s' \in S$ and $k \geq 1$ that

$$
\begin{array}{l} P _ {\pi_ {b}} ^ {k} (s, s ^ {\prime}) = \sum_ {s _ {0}} P _ {\pi_ {b}} ^ {k - 1} (s, s _ {0}) P _ {\pi_ {b}} (s _ {0}, s ^ {\prime}) \\ = \sum_ {s _ {0}} P _ {\pi_ {b}} ^ {k - 1} (s, s _ {0}) \sum_ {a \in \mathcal {A}} \pi_ {b} (a | s _ {0}) P _ {a} (s _ {0}, s ^ {\prime}) \\ = \sum_ {s _ {0}} P _ {\pi_ {b}} ^ {k - 1} (s, s _ {0}) \sum_ {a \in \mathcal {A}} \frac {\pi_ {b} (a | s _ {0})}{\pi (a | s _ {0})} \pi (a | s _ {0}) P _ {a} (s _ {0}, s ^ {\prime}) \\ \leq \frac {1}{\ell_ {\tau} ^ {2}} \sum_ {s _ {0}} P _ {\pi_ {b}} ^ {k - 1} (s, s _ {0}) \sum_ {a \in \mathcal {A}} \pi (a | s _ {0}) P _ {a} \left(s _ {0}, s ^ {\prime}\right) \\ \leq \frac {1}{\ell_ {\tau} ^ {2}} \sum_ {s _ {0}} P _ {\pi_ {b}} ^ {k - 1} (s, s _ {0}) P _ {\pi} \left(s _ {0}, s ^ {\prime}\right) \\ = \frac {1}{\ell_ {\tau} ^ {2}} \left[ P _ {\pi_ {b}} ^ {k - 1} P _ {\pi} \right] (s, s ^ {\prime}). \\ \end{array}
$$

Since the previous inequality holds for all $s$ and $s'$ , we in fact have $\ell_{\tau}^{2}P_{\pi_b}^k \leq P_{\pi_b}^{k-1}P_{\pi}$ (which is an entry-wise inequality).
Repeatedly using the previous inequality, we obtain

$$
\ell_ {\tau} ^ {2 k} P _ {\pi_ {b}} ^ {k} \leq P _ {\pi} ^ {k},
$$

which implies

$$
\begin{array}{l} P _ {\pi} ^ {r _ {b}} (s, s ^ {\prime}) \geq \ell_ {\tau} ^ {2 r _ {b}} P _ {\pi_ {b}} ^ {r _ {b}} (s, s ^ {\prime}) \\ \geq \delta_ {b} \ell_ {\tau} ^ {2 r _ {b}} \mu_ {b} (s ^ {\prime}) \\ \geq \delta_ {b} \ell_ {\tau} ^ {2 r _ {b}} \frac {\mu_ {b} \left(s ^ {\prime}\right)}{\mu_ {\pi} \left(s ^ {\prime}\right)} \mu_ {\pi} \left(s ^ {\prime}\right) \\ \geq \delta_ {b} \ell_ {\tau} ^ {2 r _ {b}} \mu_ {b, \min } \mu_ {\pi} (s ^ {\prime}). \\ \end{array}
$$

Following the proof of the Markov chain convergence theorem in [93, Section 4.3], we have

$$
\left\| P _ {\pi} ^ {k} (s, \cdot) - \mu_ {\pi} (\cdot) \right\| _ {\mathrm {T V}} \leq \left(1 - \delta_ {b} \ell_ {\tau} ^ {2 r _ {b}} \mu_ {b, \min }\right) ^ {k / r _ {b} - 1}, \quad \forall s \in S, \pi \in \Pi_ {\tau}. \tag {55}
$$

Since $A_{\mathrm{max}} \geq 2$ (otherwise there is no decision to make in this stochastic game), we have $\ell_{\tau}^{2} \leq \frac{1}{2}$ . It follows that $1 - \delta_b \ell_{\tau}^{2r_b} \mu_{b,\min} > 1/2$ . Using the previous inequality in Eq. (55), we have

$$
\begin{array}{l} \sup _ {\pi \in \Pi_ {\tau}} \max _ {s \in \mathcal {S}} \| P _ {\pi} ^ {k} (s, \cdot) - \mu_ {\pi} (\cdot) \| _ {\mathrm {T V}} \leq 2 \left(1 - \delta_ {b} \ell_ {\tau} ^ {2 r _ {b}} \mu_ {b, \min }\right) ^ {k / r _ {b}} \\ \leq 2 \exp \left(- \delta_ {b} \ell_ {\tau} ^ {2 r _ {b}} \mu_ {b, \min } k / r _ {b}\right) \\ = 2 \rho_ {b} ^ {\ell_ {\tau} ^ {2 r _ {b}} \mu_ {b, \min } k} \quad (\text {recall that } \rho_ {b} = \exp (- \delta_ {b} / r _ {b})) \\ = 2 \rho_ {\tau} ^ {k}. \\ \end{array}
$$

We next compute the mixing time.
Using the previous inequality and the definition of the total variation distance, we have

$$
\sup _ {\pi \in \Pi_ {\tau}} \max _ {s \in \mathcal {S}} \| P _ {\pi} ^ {k} (s, \cdot) - \mu_ {\pi} (\cdot) \| _ {\mathrm {T V}} \leq \eta
$$

as long as

$$
k \geq \frac {\log (2 / \eta)}{\log (1 / \rho_ {\tau})} = \frac {1}{\ell_ {\tau} ^ {2 r _ {b}} \mu_ {b, \min }} \frac {\log (2 / \eta)}{\log (1 / \rho_ {b})} \geq \frac {t _ {\pi_ {b}, \eta}}{\ell_ {\tau} ^ {2 r _ {b}} \mu_ {b, \min }}.
$$

# D.7.2 Proof of Lemma D.11

For any $k \geq z_k$ , we have

$$
\begin{array}{l} \mathbb {E} \left[ \left(F ^ {i} \left(q _ {k} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) - \bar {F} _ {k} ^ {i} \left(q _ {k} ^ {i}\right)\right) ^ {\top} \left(q _ {k} ^ {i} - \bar {q} _ {k} ^ {i}\right) \right] \\ = \underbrace {\mathbb {E} \left[ \left(F ^ {i} \left(q _ {k - z _ {k}} ^ {i} , S _ {k} , A _ {k} ^ {i} , A _ {k} ^ {- i} , S _ {k + 1}\right) - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right)\right) ^ {\top} \left(q _ {k - z _ {k}} ^ {i} - \bar {q} _ {k - z _ {k}} ^ {i}\right) \right]} _ {N _ {1}} \\ + \underbrace {\mathbb {E} [ (F ^ {i} (q _ {k - z _ {k}} ^ {i} , S _ {k} , A _ {k} ^ {i} , A _ {k} ^ {- i} , S _ {k + 1}) - \bar {F} _ {k - z _ {k}} ^ {i} (q _ {k - z _ {k}} ^ {i})) ^ {\top} (q _ {k} ^ {i} - q _ {k - z _ {k}} ^ {i}) ]} _ {N _ {2}} \\ + \underbrace {\mathbb {E} [ (F ^ {i} (q _ {k - z _ {k}} ^ {i} , S _ {k} , A _ {k} ^ {i} , A _ {k} ^ {- i} , S _ {k + 1}) - \bar {F} _ {k - z _ {k}} ^ {i} (q _ {k - z _ {k}} ^ {i})) ^ {\top} (\bar {q} _ {k - z _ {k}} ^ {i} - \bar {q} _ {k} ^ {i}) ]} _ {N _ {3}} \\ + \underbrace {\mathbb {E} \left[ \left(F ^ {i} \left(q _ {k} ^ {i} , S _ {k} , A _ {k} ^ {i} , A _ {k} ^ {- i} , S _ {k + 1}\right) - F ^ {i} \left(q _ {k - z _ {k}} ^ {i} , S _ {k} , A _ {k} ^ {i} , A _ {k} ^ {- i} , S _ {k + 1}\right)\right) ^ {\top} \left(q _ {k} ^ {i} - \bar {q} _ {k} ^ {i}\right) \right]} _ {N _ {4}} \\ + \underbrace {\mathbb {E} \left[ \left(\bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) - \bar {F} _ {k} ^ {i} \left(q _ {k} ^ {i}\right)\right) ^ {\top} \left(q _ {k} ^ {i} - \bar {q} _ {k} ^ {i}\right) \right]} _ {N _ {5}}. \tag {56} \\ \end{array}
$$

To bound the terms $N_{1}$ to $N_{5}$ on the RHS of the previous inequality, the following lemma is needed.

Lemma D.13. For any positive integers $k_{1} \leq k_{2}$ , we have (1) $\| q_{k_2}^i - q_{k_1}^i \|_\infty \leq \frac{2\alpha_{k_1,k_2-1}}{1-\gamma}$ , and (2) $\max_{s \in \mathcal{S}} \| \pi_{k_2}^i(s) - \pi_{k_1}^i(s) \|_1 \leq 2\beta_{k_1,k_2-1}$ .

Proof of Lemma D.13. For any $k \in [k_1, k_2 - 1]$ , we have by Eq. (41) that

$$
\| q _ {k + 1} ^ {i} - q _ {k} ^ {i} \| _ {\infty} = \alpha_ {k} \| F ^ {i} (q _ {k} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}) \| _ {\infty} \leq \frac {2 \alpha_ {k}}{1 - \gamma}.
$$

It follows that $\| q_{k_2}^i -q_{k_1}^i\|_\infty \leq \frac{2\alpha_{k_1,k_2 - 1}}{1 - \gamma}$ . Similarly, for any $k\in [k_1,k_2 - 1]$ and $s\in S$ , we have

$$
\left\| \pi_ {k + 1} ^ {i} (s) - \pi_ {k} ^ {i} (s) \right\| _ {1} = \beta_ {k} \left\| \sigma_ {\tau} \left(q _ {k} ^ {i} (s)\right) - \pi_ {k} ^ {i} (s) \right\| _ {1} \leq 2 \beta_ {k},
$$

which implies $\max_{s\in S}\| \pi_{k_2}^i (s) - \pi_{k_1}^i (s)\| _1\leq 2\beta_{k_1,k_2 - 1}$ .

We next bound the terms $N_{1}$ to $N_{5}$ . Let $\mathcal{F}_k$ be the $\sigma$ -algebra generated by the sequence of random variables $\{S_0, A_0^i, A_0^{-i}, \dots, S_{k-1}, A_{k-1}^i, A_{k-1}^{-i}, S_k\}$ .

The Term $N_{1}$ .
Using the tower property of conditional expectations, we have

$$
\begin{array}{l} N _ {1} = \mathbb {E} \left[ \left(F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right)\right) ^ {\top} \left(q _ {k - z _ {k}} ^ {i} - \bar {q} _ {k - z _ {k}} ^ {i}\right) \right] \\ = \mathbb {E} \left[ \left(\mathbb {E} \left[ F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) \mid \mathcal {F} _ {k - z _ {k}} \right] - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right)\right) ^ {\top} \left(q _ {k - z _ {k}} ^ {i} - \bar {q} _ {k - z _ {k}} ^ {i}\right) \right] \\ \leq \mathbb {E} [ \| \mathbb {E} [ F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) \mid \mathcal {F} _ {k - z _ {k}} ] - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \| _ {1} \| q _ {k - z _ {k}} ^ {i} - \bar {q} _ {k - z _ {k}} ^ {i} \| _ {\infty} ] \quad (57) \\ \leq \frac {2}{1 - \gamma} \mathbb {E} \left[ \left\| \mathbb {E} \left[ F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) \mid \mathcal {F} _ {k - z _ {k}} \right] - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \right\| _ {1} \right] \quad (\text {Lemma D.1}) \\ \leq \frac {2}{1 - \gamma} \mathbb {E} [ \| \bar {F} _ {k} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \| _ {1} ] \\ + \frac {2}{1 - \gamma} \mathbb {E} [ \| \mathbb {E} [ F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) \mid \mathcal {F} _ {k - z _ {k}} ] - \bar {F} _ {k} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \| _ {1} ]. \quad (58) \\ \end{array}
$$

We next bound the two terms on the RHS of the previous inequality.
Observe that

$$
\begin{array}{l} \left\| \bar {F} _ {k} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \right\| _ {1} \\ = \sum_ {s, a ^ {i}} | [ \bar {F} _ {k} ^ {i} (q _ {k - z _ {k}} ^ {i}) ] (s, a ^ {i}) - [ \bar {F} _ {k - z _ {k}} ^ {i} (q _ {k - z _ {k}} ^ {i}) ] (s, a ^ {i}) | \\ = \sum_ {s, a ^ {i}} \left| \left[ \mathbb {E} _ {k} \left[ F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S, A ^ {i}, A ^ {- i}, S ^ {\prime}\right) \right] \right] (s, a ^ {i}) - \left[ \mathbb {E} _ {k - z _ {k}} \left[ F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S, A ^ {i}, A ^ {- i}, S ^ {\prime}\right) \right] \right] (s, a ^ {i}) \right|, \\ \end{array}
$$

where we use $\mathbb{E}_k[\cdot ]$ to denote $\mathbb{E}_{S\sim \mu_k(\cdot),A^i\sim \pi_k^i (\cdot |S),A^{-i}\sim \pi_k^{-i}(\cdot |S),S'\sim p(\cdot |S,A^i,A^{-i})}[\cdot ]$ for ease of presentation. To proceed, recall the following equivalent definition of the total variation distance between probability measures $p_1,p_2$ :

$$
\| p _ {1} - p _ {2} \| _ {\mathrm {T V}} = \frac {1}{2} \sup _ {f: \| f \| _ {\infty} \leq 1} | \mathbb {E} _ {p _ {1}} [ f ] - \mathbb {E} _ {p _ {2}} [ f ] |.
$$

It follows that

$$
\begin{array}{l} \left| \left[ \mathbb {E} _ {k} \left[ F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S, A ^ {i}, A ^ {- i}, S ^ {\prime}\right) \right] \right] (s, a ^ {i}) - \left[ \mathbb {E} _ {k - z _ {k}} \left[ F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S, A ^ {i}, A ^ {- i}, S ^ {\prime}\right) \right] \right] (s, a ^ {i}) \right|
\\ \leq \max _ {\bar {s}, \bar {a} ^ {i}, \bar {a} ^ {- i}, \bar {s} ^ {\prime}} \left| \left[ F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, \bar {s}, \bar {a} ^ {i}, \bar {a} ^ {- i}, \bar {s} ^ {\prime}\right) \right] (s, a ^ {i}) \right| \\ \times \sum_ {\tilde {s}, \tilde {a} ^ {i}, \tilde {a} ^ {- i}} \left| \mu_ {k} (\tilde {s}) \pi_ {k} ^ {i} (\tilde {a} ^ {i} | \tilde {s}) \pi_ {k} ^ {- i} (\tilde {a} ^ {- i} | \tilde {s}) - \mu_ {k - z _ {k}} (\tilde {s}) \pi_ {k - z _ {k}} ^ {i} (\tilde {a} ^ {i} | \tilde {s}) \pi_ {k - z _ {k}} ^ {- i} (\tilde {a} ^ {- i} | \tilde {s}) \right| \\ \leq \frac {1}{1 - \gamma} \left(\| \mu_ {k} - \mu_ {k - z _ {k}} \| _ {1} + \max _ {s} \| \pi_ {k} ^ {i} (s) - \pi_ {k - z _ {k}} ^ {i} (s) \| _ {1} + \max _ {s} \| \pi_ {k} ^ {- i} (s) - \pi_ {k - z _ {k}} ^ {- i} (s) \| _ {1}\right) \\ \leq \frac {2 L _ {p}}{1 - \gamma} \left(\max _ {s} \| \pi_ {k} ^ {i} (s) - \pi_ {k - z _ {k}} ^ {i} (s) \| _ {1} + \max _ {s} \| \pi_ {k} ^ {- i} (s) - \pi_ {k - z _ {k}} ^ {- i} (s) \| _ {1}\right) (Lemma B.1) \\ \leq \frac {8 L _ {p} \beta_ {k - z _ {k} , k - 1}}{1 - \gamma}. (Lemma D.13) \\ \end{array} +$$ + +Therefore, we have + +$$ +\begin{array}{l} \| \bar {F} _ {k} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \| _ {1} = \sum_ {s, a ^ {i}} | [ \bar {F} _ {k} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) ] (s, a ^ {i}) - [ \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) ] (s, a ^ {i}) | \\ \leq \frac {8 | \mathcal {S} | A _ {\max } L _ {p} \beta_ {k - z _ {k} , k - 1}}{1 - \gamma}. \tag {59} \\ \end{array} +$$ + +It remains to bound the second term on the RHS of Eq. (58). Recall that we denote $P_{\pi} \in \mathbb{R}^{|S| \times |S|}$ as the transition probability matrix of the Markov chain $\{S_k\}$ induced by a joint policy $\pi$ . 
Using the definition of conditional expectations, we have + +$$ +\begin{array}{l} \left\| \mathbb {E} \left[ F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) \mid \mathcal {F} _ {k - z _ {k}} \right] - \bar {F} _ {k} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \right\| _ {1} \\ = \left\| \sum_ {s \in \mathcal {S}} \left[ \left(\prod_ {j = k + 1} ^ {k + z _ {k}} P _ {\pi_ {j - z _ {k}}}\right) (S _ {k - z _ {k}}, s) - \mu_ {k} (s) \right] \sum_ {a ^ {i}} \pi_ {k} ^ {i} (a ^ {i} | s) \sum_ {a ^ {- i}} \pi_ {k} ^ {- i} (a ^ {- i} | s) \right. \\ \left. \times \sum_ {s ^ {\prime}} p \left(s ^ {\prime} \mid s, a ^ {i}, a ^ {- i}\right) F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, s, a ^ {i}, a ^ {- i}, s ^ {\prime}\right) \right\rVert_ {1} \\ \leq \frac {2}{1 - \gamma} \sum_ {s \in \mathcal {S}} \left| \left(\prod_ {j = k + 1} ^ {k + z _ {k}} P _ {\pi_ {j - z _ {k}}}\right) \left(S _ {k - z _ {k}}, s\right) - \mu_ {k} (s) \right| (Lemma D.9) \\ \leq \frac {2}{1 - \gamma} \left\{\sum_ {s \in \mathcal {S}} \left| \left(\prod_ {j = k + 1} ^ {k + z _ {k}} P _ {\pi_ {j - z _ {k}}}\right) \left(S _ {k - z _ {k}}, s\right) - P _ {\pi_ {k}} ^ {z _ {k}} \left(S _ {k - z _ {k}}, s\right) \right. \right| \\ \left. \right.\left. + \sum_ {s \in \mathcal {S}} \left| P _ {\pi_ {k}} ^ {z _ {k}} \left(S _ {k - z _ {k}}, s\right) - \mu_ {k} (s) \right|\right\} \\ \leq \frac {2}{1 - \gamma} \left\{\left\| \prod_ {j = k + 1} ^ {k + z _ {k}} P _ {\pi_ {j - z _ {k}}} - P _ {\pi_ {k}} ^ {z _ {k}} \right\| _ {\infty} + 2 \rho_ {\tau} ^ {z _ {k}} \right\}, (60) \\ \end{array} +$$ + +where the last line follows from Lemma B.1 (2). 
Observe that + +$$ +\left\| \prod_ {j = k + 1} ^ {k + z _ {k}} P _ {\pi_ {j - z _ {k}}} - P _ {\pi_ {k}} ^ {z _ {k}} \right\| _ {\infty} = \left\| \sum_ {\ell = 1} ^ {z _ {k}} \left(\prod_ {j = k + 1} ^ {k - \ell + 1 + z _ {k}} P _ {\pi_ {j - z _ {k}}} P _ {\pi_ {k}} ^ {\ell - 1} - \prod_ {j = k + 1} ^ {k - \ell + z _ {k}} P _ {\pi_ {j - z _ {k}}} P _ {\pi_ {k}} ^ {\ell}\right) \right\| _ {\infty} +$$ + +$$ +\begin{array}{l} = \left\| \sum_ {\ell = 1} ^ {z _ {k}} \left(\prod_ {j = k + 1} ^ {k - \ell + z _ {k}} P _ {\pi_ {j - z _ {k}}} \left(P _ {\pi_ {k - \ell + 1}} - P _ {\pi_ {k}}\right) P _ {\pi_ {k}} ^ {\ell - 1}\right) \right\| _ {\infty} \\ \leq \sum_ {\ell = 1} ^ {z _ {k}} \left\| \prod_ {j = k + 1} ^ {k - \ell + z _ {k}} P _ {\pi_ {j - z _ {k}}} \right\| _ {\infty} \| P _ {\pi_ {k - \ell + 1}} - P _ {\pi_ {k}} \| _ {\infty} \| P _ {\pi_ {k}} ^ {\ell - 1} \| _ {\infty} \\ \leq \sum_ {\ell = 1} ^ {z _ {k}} \| P _ {\pi_ {k - \ell + 1}} - P _ {\pi_ {k}} \| _ {\infty}. \\ \end{array} +$$ + +Since $P_{\pi}$ as a function of $\pi$ is 1-Lipschitz continuous with respect to the $\ell_{\infty}$ -norm, we have + +$$ +\begin{array}{l} \left\| \prod_ {j = k + 1} ^ {k + z _ {k}} P _ {\pi_ {j - z _ {k}}} - P _ {\pi_ {k}} ^ {z _ {k}} \right\| _ {\infty} \leq \sum_ {\ell = 1} ^ {z _ {k}} \max _ {s \in \mathcal {S}} \| \pi_ {k - \ell + 1} (s) - \pi_ {k} (s) \| _ {1} \\ = \sum_ {\ell = 1} ^ {z _ {k}} \max _ {s \in \mathcal {S}} \left(\| \pi_ {k - \ell + 1} ^ {- i} (s) - \pi_ {k} ^ {- i} (s) \| _ {1} + \| \pi_ {k - \ell + 1} ^ {i} (s) - \pi_ {k} ^ {i} (s) \| _ {1}\right) \\ \leq 4 z _ {k} \beta_ {k - z _ {k}, k - 1}. \tag {Lemma D.13} \\ \end{array} +$$ + +Using the previous inequality in Eq. 
(60), we have

$$
\begin{array}{l} \| \mathbb {E} \left[ F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) \mid \mathcal {F} _ {k - z _ {k}} \right] - \bar {F} _ {k} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \| _ {1} \\ \leq \frac {2}{1 - \gamma} \left(4 z _ {k} \beta_ {k - z _ {k}, k - 1} + 2 \rho_ {\tau} ^ {z _ {k}}\right) \\ \leq \frac {2}{1 - \gamma} \left(4 z _ {k} \beta_ {k - z _ {k}, k - 1} + \beta_ {k}\right) \quad (\text {Definition of } z _ {k}) \\ \leq \frac {1 0 z _ {k} \beta_ {k - z _ {k} , k - 1}}{1 - \gamma}. \quad (z _ {k} \geq 1) \\ \end{array}
$$

Using the previous inequality and Eq. (59) together in Eq. (58), we obtain

$$
N _ {1} \leq \frac {1 6 L _ {p} | \mathcal {S} | A _ {\max} \beta_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} + \frac {2 0 z _ {k} \beta_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} \leq \frac {3 6 L _ {p} | \mathcal {S} | A _ {\max} z _ {k} \beta_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}}.
$$

The Term $N_{2}$ . For any $k \geq z_{k}$ , we have by Lemma D.13 that

$$
\begin{array}{l} N _ {2} \leq \mathbb {E} [ \| F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \| _ {1} \| q _ {k} ^ {i} - q _ {k - z _ {k}} ^ {i} \| _ {\infty} ] \\ \leq \frac {2 \alpha_ {k - z _ {k} , k - 1}}{1 - \gamma} \mathbb {E} [ \| F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) \| _ {1} + \| \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \| _ {1} ].
\tag {61} \\ \end{array}
$$

Using the definition of $F^i(\cdot)$ , we have

$$
\begin{array}{l} \left\| F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) \right\| _ {1} \\ = \sum_ {s, a ^ {i}} \mathbb {1} _ {\left\{\left(s, a ^ {i}\right) = \left(S _ {k}, A _ {k} ^ {i}\right) \right\}} \left| R _ {i} \left(S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}\right) + \gamma v ^ {i} \left(S _ {k + 1}\right) - q _ {k - z _ {k}} ^ {i} \left(S _ {k}, A _ {k} ^ {i}\right) \right| \\ = \left| R _ {i} \left(S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}\right) + \gamma v ^ {i} \left(S _ {k + 1}\right) - q _ {k - z _ {k}} ^ {i} \left(S _ {k}, A _ {k} ^ {i}\right) \right| \\ \leq 1 + \frac {\gamma}{1 - \gamma} + \frac {1}{1 - \gamma} \\ = \frac {2}{1 - \gamma}. \tag {62} \\ \end{array}
$$

Moreover, we have by Jensen's inequality that

$$
\left\| \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \right\| _ {1} \leq \frac {2}{1 - \gamma}. \tag {63}
$$

Using Eqs. (62) and (63) together in Eq. (61), we have

$$
N _ {2} \leq \frac {8 \alpha_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}}.
$$

The Term $N_{3}$ . For any $k \geq z_k$ , we have

$$
\begin{array}{l} N _ {3} \leq \mathbb {E} [ \| F ^ {i} (q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}) - \bar {F} _ {k - z _ {k}} ^ {i} (q _ {k - z _ {k}} ^ {i}) \| _ {1} \| \bar {q} _ {k - z _ {k}} ^ {i} - \bar {q} _ {k} ^ {i} \| _ {\infty} ] \\ \leq \mathbb {E} [ (\| F ^ {i} (q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}) \| _ {1} + \| \bar {F} _ {k - z _ {k}} ^ {i} (q _ {k - z _ {k}} ^ {i}) \| _ {1}) \| \bar {q} _ {k - z _ {k}} ^ {i} - \bar {q} _ {k} ^ {i} \| _ {\infty} ] \\ \leq \frac {4}{1 - \gamma} \mathbb {E} \left[ \| \bar {q} _ {k - z _ {k}} ^ {i} - \bar {q} _ {k} ^ {i} \| _ {\infty} \right]. \tag {Eqs.
(62) and (63)} \\ \end{array} +$$ + +Observe that + +$$ +\begin{array}{l} \| \bar {q} _ {k - z _ {k}} ^ {i} - \bar {q} _ {k} ^ {i} \| _ {\infty} = \max _ {s \in \mathcal {S}} \| \mathcal {T} ^ {i} (v ^ {i}) (s) (\pi_ {k} ^ {- i} (s) - \pi_ {k - z _ {k}} ^ {- i} (s)) \| _ {\infty} \\ \leq \max _ {s \in \mathcal {S}} \| \mathcal {T} ^ {i} (v ^ {i}) (s) \| _ {1, \infty} \| \pi_ {k} ^ {- i} (s) - \pi_ {k - z _ {k}} ^ {- i} (s) \| _ {1} \\ \leq \frac {2 \beta_ {k - z _ {k} , k - 1}}{1 - \gamma}, \\ \end{array} +$$ + +where the last line follows from Lemma D.13 and + +$$ +\| \mathcal {T} ^ {i} (v ^ {i}) (s) \| _ {1, \infty} \leq \max _ {s, a ^ {i}, a ^ {- i}} | \mathcal {T} ^ {i} (v ^ {i}) (s, a ^ {i}, a ^ {- i}) | \leq \frac {1}{1 - \gamma}. +$$ + +Therefore, we have + +$$ +N _ {3} \leq \frac {8 \beta_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}}. +$$ + +The Term $N_4$ . For any $k \geq z_k$ , we have + +$$ +\begin{array}{l} N _ {4} \leq \mathbb {E} [ \| F ^ {i} \left(q _ {k} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) - F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) \| _ {1} \| q _ {k} ^ {i} - \bar {q} _ {k} ^ {i} \| _ {\infty} ] \\ \leq \frac {2}{1 - \gamma} \mathbb {E} [ \| F ^ {i} (q _ {k} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}) - F ^ {i} (q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}) \| _ {1} ], \\ \end{array} +$$ + +where the last line follows from Lemma D.1. 
Using the definition of $F^i(\cdot)$ , we have + +$$ +\begin{array}{l} \left\| F ^ {i} \left(q _ {k} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) - F ^ {i} \left(q _ {k - z _ {k}} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) \right\| _ {1} \\ = \sum_ {s, a ^ {i}} \mathbb {1} _ {\left\{(s, a ^ {i}) = \left(S _ {k}, A _ {k} ^ {i}\right) \right\}} \left| q _ {k - z _ {k}} ^ {i} \left(S _ {k}, A _ {k} ^ {i}\right) - q _ {k} ^ {i} \left(S _ {k}, A _ {k} ^ {i}\right) \right| \\ = \left| q _ {k - z _ {k}} ^ {i} \left(S _ {k}, A _ {k} ^ {i}\right) - q _ {k} ^ {i} \left(S _ {k}, A _ {k} ^ {i}\right) \right| \\ \leq \left\| q _ {k - z _ {k}} ^ {i} - q _ {k} ^ {i} \right\| _ {\infty} \\ \leq \frac {2 \alpha_ {k - z _ {k} , k - 1}}{1 - \gamma}. \tag {Lemma D.13} \\ \end{array} +$$ + +It follows that + +$$ +N _ {4} \leq \frac {4 \alpha_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}}. +$$ + +The Term $N_{5}$ . For any $k \geq z_{k}$ , we have + +$$ +\begin{array}{l} N _ {5} \leq \mathbb {E} \left[ \| \bar {F} _ {k} ^ {i} \left(q _ {k} ^ {i}\right) - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \| _ {1} \| q _ {k} ^ {i} - \bar {q} _ {k} ^ {i} \| _ {\infty} \right] \\ \leq \frac {2}{1 - \gamma} \mathbb {E} \left[ \| \bar {F} _ {k} ^ {i} \left(q _ {k} ^ {i}\right) - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \| _ {1} \right] (Lemma D.1) \\ \leq \frac {2}{1 - \gamma} \mathbb {E} [ \| \bar {F} _ {k} ^ {i} (q _ {k} ^ {i}) - \bar {F} _ {k - z _ {k}} ^ {i} (q _ {k} ^ {i}) \| _ {1} + \| \bar {F} _ {k - z _ {k}} ^ {i} (q _ {k} ^ {i}) - \bar {F} _ {k - z _ {k}} ^ {i} (q _ {k - z _ {k}} ^ {i}) \| _ {1} ] \\ \leq \frac {1 6 L _ {p} | \mathcal {S} | A _ {\max } \beta_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} + \frac {2}{1 - \gamma} \mathbb {E} [ \| \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k} ^ {i}\right) - \bar {F} _ {k - z _ {k}} ^ {i} \left(q _ {k - z _ {k}} ^ {i}\right) \| _ {1} ], (64) \\ 
\end{array}
$$

where the last line follows from the same analysis as we obtain Eq. (59). As for the second term on the RHS of Eq. (64), using the definition of $\bar{F}_k^i (\cdot)$ , we have

$$
\| \bar {F} _ {k - z _ {k}} ^ {i} (q _ {k} ^ {i}) - \bar {F} _ {k - z _ {k}} ^ {i} (q _ {k - z _ {k}} ^ {i}) \| _ {1} = \sum_ {s \in \mathcal {S}} \mu_ {k - z _ {k}} (s) \sum_ {a ^ {i}} \pi_ {k - z _ {k}} ^ {i} (a ^ {i} \mid s) | q _ {k} ^ {i} (s, a ^ {i}) - q _ {k - z _ {k}} ^ {i} (s, a ^ {i}) |
$$

$$
\begin{array}{l} \leq \left\| q _ {k} ^ {i} - q _ {k - z _ {k}} ^ {i} \right\| _ {\infty} \\ \leq \frac {2 \alpha_ {k - z _ {k} , k - 1}}{1 - \gamma}. \tag {Lemma D.13} \\ \end{array}
$$

Using the previous inequality in Eq. (64), we obtain

$$
N _ {5} \leq \frac {1 6 L _ {p} | \mathcal {S} | A _ {\max} \beta_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} + \frac {4 \alpha_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}}.
$$

Combining the upper bounds we derived for the terms $N_{1}$ to $N_{5}$ in Eq. (56), we have

$$
\begin{array}{l} \mathbb {E} \left[ \left(F ^ {i} \left(q _ {k} ^ {i}, S _ {k}, A _ {k} ^ {i}, A _ {k} ^ {- i}, S _ {k + 1}\right) - \bar {F} _ {k} ^ {i} \left(q _ {k} ^ {i}\right)\right) ^ {\top} \left(q _ {k} ^ {i} - \bar {q} _ {k} ^ {i}\right) \right] \\ \leq \frac {3 6 L _ {p} | \mathcal {S} | A _ {\max} z _ {k} \beta_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} + \frac {8 \alpha_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} + \frac {8 \beta_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} + \frac {4 \alpha_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} \\ + \frac {1 6 L _ {p} | \mathcal {S} | A _ {\max} \beta_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} + \frac {4 \alpha_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} \\ \leq \frac {6 0 L _ {p} | S | A _ {\max } z _ {k} \beta_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} + \frac {1 6 \alpha_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}} \\ \leq \frac {1 7 z _ {k} \alpha_ {k - z _ {k} , k - 1}}{(1 - \gamma) ^ {2}}, \\ \end{array}
$$

where the last line follows from $\beta_{k} / \alpha_{k} = c_{\alpha ,\beta}\leq \frac{1}{60L_{p}|S|A_{\max}}$ (cf. Condition 3.1).

# D.8 Proof of Corollary 3.1.1

The following proof idea was previously used in [15] to show the rationality of their decentralized $Q$ -learning algorithm.

Observe that Theorem 3.1 can be easily generalized to the case where the reward is corrupted by noise. Specifically, suppose that player $i$ takes action $a^i$ and player $-i$ takes action $a^{-i}$ . Instead of assuming player $i$ receives a deterministic reward $R_{i}(s,a^{i},a^{-i})$ , we assume that player $i$ receives a random reward $r^i (s,a^i,a^{-i},\xi)$ , where $\xi \in \Xi$ ( $\Xi$ is a finite set) is a random variable with distribution $\mu_{\xi}(s)$ , and is independent of everything else. The proof is identical as long as $r^i +r^{-i} = 0$ , and the reward is uniformly bounded, i.e., $\max_{s,a^i,a^{-i},\xi}|r^i (s,a^i,a^{-i},\xi)| < \infty$ . Now consider the case where player $i$ 's opponent follows a stationary policy $\pi^{-i}$ . We incorporate the randomness of player $-i$ 's action into the model and introduce a fictitious opponent with only one action $a^*$ .
In particular, let the random reward function be defined as $\hat{r}^i (s,a^i,a^*,A^{-i}) = R_i(s,a^i,A^{-i})$ for all $(s,a^i)$ , where $A^{-i}\sim \pi^{-i}(\cdot |s)$ , and let $\hat{p} (s' \mid s,a^i,a^*) = \sum_{a^{-i}} \pi^{-i}(a^{-i}\mid s)\, p(s' \mid s,a^i,a^{-i})$ . Now the problem can be reformulated as player $i$ playing against the fictitious player with a single action $a^*$ , with reward function $\hat{r}^i$ ( $i\in \{1,2\}$ ) and transition probabilities $\hat{p}$ . Using the same proof for Theorem 3.1, we have the desired finite-sample bound.

# E On the Mixing Time of MDPs with Almost Deterministic Policies

Consider an MDP with two states $s_1, s_2$ and two actions $a_1, a_2$ . The transition probability matrix $P_1$ of taking action $a_1$ is the identity matrix $I_2$ , and the transition probability matrix $P_2$ of taking action $a_2$ is $P_2 = [0, 1; 1, 0]$ . Given $\alpha \in (1/2, 1)$ , let $\pi_\alpha$ be a policy such that $\pi_\alpha(a_1|s) = \alpha$ and $\pi_\alpha(a_2|s) = 1 - \alpha$ for any $s \in \{s_1, s_2\}$ . Denote $P_\alpha$ as the transition probability matrix under $\pi_\alpha$ . It is easy to see that

$$
P _ {\alpha} = \left[ \begin{array}{c c} \alpha & 1 - \alpha \\ 1 - \alpha & \alpha \end{array} \right].
$$

Since $P_{\alpha}$ is a doubly stochastic matrix with strictly positive entries, it has a unique stationary distribution $\mu = \mathbf{1}^{\top} / 2$ .

We next compute a lower bound on the mixing time of the $\pi_{\alpha}$ -induced Markov chain. Let $e_1 = [1,0]^\top$ be the initial distribution of the states, and denote $[x_k, 1 - x_k]^\top$ as the distribution of the states at time step $k$ . Then we have

$$
x _ {k + 1} = x _ {k} \alpha + (1 - x _ {k}) (1 - \alpha)
$$

$$
\begin{array}{l} = (2 \alpha - 1) x _ {k} + 1 - \alpha \\ = (2 \alpha - 1) ^ {k + 1} x _ {0} + \sum_ {i = 0} ^ {k} (1 - \alpha) (2 \alpha - 1) ^ {k - i} \\ = \frac {1}{2} + \frac {(2 \alpha - 1) ^ {k + 1}}{2}.
\\ \end{array}
$$

It follows that

$$
\begin{array}{l} t_{\pi_\alpha, \eta} = \min_{k \geq 0} \left\{ k : \max_{\mu_0 \in \Delta^2} \left\| \mu_0^\top P_\alpha^k - \mathbf{1}^\top / 2 \right\|_{\mathrm{TV}} \leq \eta \right\} \\ \geq \min_{k \geq 0} \left\{ k : \left\| e_1^\top P_\alpha^k - \mathbf{1}^\top / 2 \right\|_{\mathrm{TV}} \leq \eta \right\} \\ = \min_{k \geq 0} \left\{ k : (2\alpha - 1)^k \leq 2\eta \right\} \\ \geq \frac{\log(1 / 2\eta)}{\log(1 / (2\alpha - 1))} - 1, \\ \end{array}
$$

which implies $\lim_{\alpha \to 1} t_{\pi_\alpha, \eta} = \infty$. Therefore, as the policies become deterministic, the mixing time of the associated Markov chain can approach infinity.

# F Numerical Simulations

We first conduct numerical simulations to investigate the impact of choosing different $\tau$, which is used to define the softmax operator in Algorithms 1 and 2. Our theoretical results indicate that there is an asymptotically non-vanishing bias due to using a positive $\tau$. Intuitively, since a softmax policy always has strictly positive entries while a Nash equilibrium policy can have zero entries, we cannot, in general, expect the Nash gap to converge to zero.

To demonstrate this phenomenon, consider the following example of a zero-sum matrix game. Let

$$
R_1 = \left[ \begin{array}{ccc} N & 1 & -1 \\ -1 & 0 & 1 \\ 1 & -1 & 0 \end{array} \right]
$$

be the payoff matrix for player 1, and let $R_2 = -(R_1)^\top$, where $N > 0$ is a tunable parameter. Note that this matrix game has a unique Nash equilibrium, which goes to the joint policy $\pi^1 = (1/3, 2/3, 0)$, $\pi^2 = (0, 2/3, 1/3)$ as $N \to \infty$. In our simulations, we use constant stepsizes $\alpha_k \equiv 0.5$ and $\beta_k \equiv 0.01$ and run Algorithm 1 for 100 trajectories (each with $K = 2000$ iterations).
Then, we plot the average Nash gap (averaged over the 100 trajectories) as a function of the number of iterations $k$ in Figure 1 for different temperatures $\tau$. To enable a fair comparison, we use the normalized $q$-function to compute the softmax, that is, instead of directly using $\sigma_{\tau}(q_k^i)$ in Algorithm 1, we use $\sigma_{\tau}(q_k^i / \|q_k^i\|_2)$. As we can see in Figure 1, as $\tau$ increases, the asymptotic error also increases, which is consistent with our theoretical results.

# F.1 Comparison with the Optimistic Multiplicative-Weights Update

The Optimistic Multiplicative-Weights Update (OMWU) is a popular learning algorithm for zero-sum matrix games [98]. Since OMWU in the payoff-based setting (or noisy-feedback setting) may not have last-iterate convergence [99], to enable a fair comparison, we compare OMWU in the noiseless setting (where it does enjoy last-iterate convergence [98]) with smoothed best-response dynamics. We start by writing down the algorithm.

OMWU: With initializations $\pi_0^1, \pi_1^1$ (respectively, $\pi_0^2, \pi_1^2$) that live in the interior of the probability simplex $\Delta(\mathcal{A}^1)$ (respectively, $\Delta(\mathcal{A}^2)$), OMWU updates $(\pi_k^1, \pi_k^2)$ iteratively according to

$$
\pi_{k+1}^i(a^i) = \frac{\pi_k^i(a^i) \exp\left(2\eta [R_i \pi_k^{-i}](a^i) - \eta [R_i \pi_{k-1}^{-i}](a^i)\right)}{\sum_{\tilde{a}^i \in \mathcal{A}^i} \pi_k^i(\tilde{a}^i) \exp\left(2\eta [R_i \pi_k^{-i}](\tilde{a}^i) - \eta [R_i \pi_{k-1}^{-i}](\tilde{a}^i)\right)}, \quad \forall\, a^i \in \mathcal{A}^i, \; i \in \{1, 2\},
$$

where $\eta \in (0, 1)$ is the stepsize.
![](images/dc56889dc7dda53df1be19bd02d06fb4d1bbbf12e09544a78de99a0bc0c0ed76.jpg)
Figure 1: The Nash Gap for Different Temperatures $\tau$

![](images/8a4ef36e93b4d00a574f2242da08db9a986fbfda4caf04bdefdf460d1e910069.jpg)
Figure 2: The Nash Gap as a Function of the Number of Iterations $k$

The Discrete Smoothed Best-Response Dynamics (DSBR): With arbitrary initializations $\pi_0^1 \in \Delta(\mathcal{A}^1)$ and $\pi_0^2 \in \Delta(\mathcal{A}^2)$, the discrete smoothed best-response dynamics update $(\pi_k^1, \pi_k^2)$ iteratively according to

$$
\pi_{k+1}^i = (1 - \beta_k) \pi_k^i + \beta_k \sigma_\tau\left(R_i \pi_k^{-i}\right), \quad \forall i \in \{1, 2\},
$$

where $\beta_k$ is the stepsize.

We perform two sets of numerical simulations to compare OMWU and DSBR. Our first experiment uses the rock-paper-scissors game, where the payoff matrix for player 1 is

$$
R_1 = \left[ \begin{array}{ccc} 0 & 1 & -1 \\ -1 & 0 & 1 \\ 1 & -1 & 0 \end{array} \right],
$$

and $R_2 = -(R_1)^\top$. As we see in Figure 2, the convergence rates of OMWU and DSBR are comparable. However, DSBR appears to be more stable than OMWU. Note that although we use softmax policies in DSBR, the rock-paper-scissors game has a unique Nash equilibrium, which is also the unique Nash equilibrium of the entropy-regularized matrix game for any temperature $\tau > 0$; hence, there is no smoothing bias and the Nash gap under DSBR does converge to zero.
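The DSBR update and the Nash gap plotted in these figures can be sketched as follows (an illustrative NumPy implementation; the constant stepsize, temperature, and helper names are our choices, not the paper's exact settings):

```python
import numpy as np

def softmax(q, tau):
    # sigma_tau(q): temperature-tau softmax, shifted for numerical stability
    e = np.exp((q - q.max()) / tau)
    return e / e.sum()

def dsbr(R1, tau=0.1, beta=0.05, K=5000, pi1=None, pi2=None):
    """Sketch of the DSBR update above with a constant stepsize beta."""
    R2 = -R1.T
    pi1 = np.ones(R1.shape[0]) / R1.shape[0] if pi1 is None else np.asarray(pi1, float)
    pi2 = np.ones(R2.shape[0]) / R2.shape[0] if pi2 is None else np.asarray(pi2, float)
    for _ in range(K):
        new1 = (1 - beta) * pi1 + beta * softmax(R1 @ pi2, tau)
        new2 = (1 - beta) * pi2 + beta * softmax(R2 @ pi1, tau)
        pi1, pi2 = new1, new2  # simultaneous update, as in the definition
    return pi1, pi2

def nash_gap(R1, pi1, pi2):
    # sum of both players' best-response improvements over their current policies
    g1 = np.max(R1 @ pi2) - pi1 @ R1 @ pi2
    g2 = np.max(-R1.T @ pi1) - pi2 @ (-R1.T) @ pi1
    return g1 + g2
```

Each iterate is a convex combination of a point on the simplex and a strictly positive softmax policy, so the policies stay on the simplex with strictly positive entries, which is exactly the source of the smoothing bias discussed above.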
![](images/1b89c22a9eb04e74907d001511e559002ef180b35c31950267d4314595caf87f.jpg)
Figure 3: The Nash Gap as a Function of the Number of Iterations $k$

![](images/0848a0b367facc114d6b1341abd869ceb74b9538c06c2d1dd608973bbb345720.jpg)
Figure 4: The Asymptotic Behavior of Figure 3

In our second numerical simulation, we set the payoff matrix of player 1 to be

$$
R_1 = \left[ \begin{array}{ccc} N & 1 & -1 \\ -1 & 0 & 1 \\ 1 & -1 & 0 \end{array} \right],
$$

and $R_2 = -(R_1)^\top$, where we choose $N = 100$. Note that as $N \to \infty$, the unique Nash equilibrium goes to $\pi^1 = (1/3, 2/3, 0)$, $\pi^2 = (0, 2/3, 1/3)$. In this case, we also see from Figure 3 that DSBR is more stable than OMWU. However, since the Nash equilibrium now has zero entries, the use of softmax policies causes DSBR to suffer from an asymptotically non-vanishing bias. This is clear from Figure 4, which plots the asymptotic behavior of Figure 3: the Nash gap under OMWU converges to zero, while the Nash gap under DSBR converges to a positive real number.
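As a sanity check of the equilibrium claims above, the row player's maximin strategy can be computed directly with the standard linear-programming formulation of a zero-sum matrix game (an illustrative helper using SciPy; not part of the paper's code):

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_equilibrium(R1):
    """Row player's maximin strategy for payoff matrix R1 via an LP.

    Standard formulation: maximize v subject to (R1^T pi)_j >= v for every
    opponent action j and pi on the probability simplex.
    """
    n, m = R1.shape
    # decision variables: (pi_1, ..., pi_n, v); minimize -v
    c = np.zeros(n + 1)
    c[-1] = -1.0
    # v - (R1^T pi)_j <= 0 for each opponent action j
    A_ub = np.hstack([-R1.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    # pi sums to one; v is a free variable
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n]
```

For the $N = 100$ game above, the LP already returns $\pi^1 = (1/3, 2/3, 0)$ exactly: equalizing the payoffs against the opponent's two supported actions forces $\pi^1_2 = 2\pi^1_1$, independently of $N$ once $N$ is large enough.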
# A Fractional Graph Laplacian Approach to Oversmoothing

Sohir Maskey*

Department of Mathematics, LMU Munich

maskey@math.lmu.de

Aras Bacho

Department of Mathematics, LMU Munich

Raffaele Paolino*

Department of Mathematics & MCML, LMU Munich

paolino@math.lmu.de

Gitta Kutyniok

Department of Mathematics & MCML, LMU Munich

# Abstract

Graph neural networks (GNNs) have shown state-of-the-art performances in various applications. However, GNNs often struggle to capture long-range dependencies in graphs due to oversmoothing. In this paper, we generalize the concept of oversmoothing from undirected to directed graphs.
To this aim, we extend the notion of Dirichlet energy by considering a directed symmetrically normalized Laplacian. As vanilla graph convolutional networks are prone to oversmooth, we adopt a neural graph ODE framework. Specifically, we propose fractional graph Laplacian neural ODEs, which describe non-local dynamics. We prove that our approach allows propagating information between distant nodes while maintaining a low probability of long-distance jumps. Moreover, we show that our method is more flexible with respect to the convergence of the graph's Dirichlet energy, thereby mitigating oversmoothing. We conduct extensive experiments on synthetic and real-world graphs, both directed and undirected, demonstrating our method's versatility across diverse graph homophily levels. Our code is available on GitHub.

# 1 Introduction

Graph neural networks (GNNs) (Gori et al., 2005; Scarselli et al., 2009; Bronstein et al., 2017) have emerged as a powerful class of machine learning models capable of effectively learning representations of structured data. GNNs have demonstrated state-of-the-art performance in a wide range of applications, including social network analysis (Monti et al., 2019), molecular property prediction (Gilmer et al., 2017), and recommendation systems (J. Wang et al., 2018; Fan et al., 2019). The majority of existing work on GNNs has focused on undirected graphs (Defferrard et al., 2016; Kipf et al., 2017; Hamilton et al., 2017), where edges have no inherent direction. However, many real-world systems, such as citation networks, transportation systems, and biological pathways, are inherently directed, necessitating the development of methods explicitly tailored to directed graphs.

Despite their success, most existing GNN models struggle to capture long-range dependencies, which can be critical for specific tasks, such as node classification and link prediction, and for specific graphs, such as heterophilic graphs.
This shortcoming also arises from the problem of oversmoothing, where increasing the depth of GNNs results in the node features converging to similar values that only convey information about the node's degree (Oono et al., 2019; Cai et al., 2020). Consequently, scaling the depth of GNNs is not sufficient to broaden receptive fields, and other approaches are necessary to address this limitation. While these issues have been extensively studied in undirected graphs (Q. Li et al., 2018; G. Li et al., 2019; Luan, M. Zhao, et al., 2019; D. Chen et al., 2020; Rusch et al., 2022), their implications for directed graphs remain largely unexplored. Investigating these challenges and developing effective solutions is crucial for applying GNNs to real-world scenarios.

Oversmoothing has been shown to be intimately related to the graph's Dirichlet energy, defined as

$$
\mathcal{E}(\mathbf{x}) := \frac{1}{4} \sum_{i,j=1}^{N} a_{i,j} \left\| \frac{\mathbf{x}_i}{\sqrt{d_i}} - \frac{\mathbf{x}_j}{\sqrt{d_j}} \right\|_2^2,
$$

where $\mathbf{A} = (a_{i,j})_{i,j=1}^{N}$ represents the adjacency matrix of the underlying graph, $\mathbf{x} \in \mathbb{R}^{N \times K}$ denotes the node features, and $d_i \in \mathbb{R}$ the degree of node $i$. Intuitively, the Dirichlet energy measures the smoothness of nodes' features. Therefore, a GNN that minimizes the Dirichlet energy is expected to perform well on homophilic graphs, where similar nodes are likely to be connected. Conversely, a GNN that ensures high Dirichlet energy should lead to better performance on heterophilic graphs, for which the nodes' features are less smooth.

This paper aims to bridge the gap in understanding oversmoothing for directed graphs. To this aim, we generalize the concept of Dirichlet energy, providing a rigorous foundation for analyzing oversmoothing.
Specifically, we consider the directed symmetrically normalized Laplacian, which accommodates directed graph structures and recovers the usual definition in the undirected case. Even though the directed symmetrically normalized Laplacian has already been used (Zou et al., 2022), its theoretical properties remain largely unexplored.

However, a vanilla graph convolutional network (GCN) (Kipf et al., 2017) implementing this directed Laplacian alone is not able to prevent oversmoothing. For this reason, we adopt a graph neural ODE framework, which has been shown to effectively alleviate oversmoothing in undirected graphs (Bodnar et al., 2022; Rusch et al., 2022; Di Giovanni et al., 2023).

# 1.1 Graph Neural ODEs

The concept of neural ODE was introduced by Haber et al. (2018) and R. T. Q. Chen et al. (2018), who first interpreted the layers in neural networks as the time variable in ODEs. Building on this foundation, Poli et al. (2021), Chamberlain et al. (2021), and Eliasof et al. (2021) extended the connection to the realm of GNNs, resulting in the development of graph neural ODEs. In this context, each node $i$ of the underlying graph is described by a state variable $\mathbf{x}_i(t) \in \mathbb{R}^K$, representing the node $i$ at time $t$. We can define the dynamics of $\mathbf{x}(t)$ via the node-wise ODE

$$
\mathbf{x}'(t) = f_{\mathbf{w}}(\mathbf{x}(t)), \quad t \in [0, T],
$$

subject to the initial condition $\mathbf{x}(0) = \mathbf{x}_0 \in \mathbb{R}^{N \times K}$, where the function $f_{\mathbf{w}}: \mathbb{R}^{N \times K} \to \mathbb{R}^{N \times K}$ is parametrized by the learnable parameters $\mathbf{w}$.

The graph neural ODE can be seen as a continuous learnable architecture on the underlying graph, which computes the final node representation $\mathbf{x}(T)$ from the input nodes' features $\mathbf{x}_0$.
Typical choices for $f_{\mathbf{w}}$ include attention-based functions (Chamberlain et al., 2021), which generalize graph attention networks (GATs) (Velickovic et al., 2018), or convolutional-like functions (Di Giovanni et al., 2023) that generalize GCNs (Kipf et al., 2017).

How can we choose the learnable function $f_{\mathbf{w}}$ to accommodate both directed and undirected graphs, as well as different levels of homophily? We address this question in the following subsection.

# 1.2 Fractional Laplacians

The continuous fractional Laplacian, denoted by $(-\Delta)^{\alpha}$ for $\alpha > 0$, is used to model non-local interactions. For instance, the fractional heat equation $\partial_t u + (-\Delta)^{\alpha} u = 0$ provides a flexible and accurate framework for modeling anomalous diffusion processes. Similarly, the fractional diffusion-reaction, quasi-geostrophic, Cahn-Hilliard, porous medium, Schrödinger, and ultrasound equations are more sophisticated models to represent complex anomalous systems (Pozrikidis, 2018).

Similarly to the continuous case, the fractional graph Laplacian (FGL) (Benzi et al., 2020) models non-local network dynamics. In general, the FGL does not inherit the sparsity of the underlying graph, allowing a random walker to leap rather than walk solely between adjacent nodes. Hence, the FGL is able to build long-range connections, making it well-suited for heterophilic graphs.

# 1.3 Main Contributions

We present a novel approach to the fractional graph Laplacian by defining it in the singular value domain, instead of the frequency domain (Benzi et al., 2020). This formulation bypasses the need for computing the Jordan decomposition of the graph Laplacian, which lacks reliable numerical methods. We show that our version of the FGL can still capture long-range dependencies, and we prove that its entries remain reasonably bounded.
We then propose two FGL-based neural ODEs: the fractional heat equation and the fractional Schrödinger equation. Importantly, we demonstrate that solutions to these FGL-based neural ODEs offer increased flexibility in terms of the convergence of the Dirichlet energy. Notably, the exponent of the fractional graph Laplacian becomes a learnable parameter, allowing our network to adaptively determine the optimal exponent for the given task and graph. We show that this can effectively alleviate oversmoothing in undirected and directed graphs.

To validate the effectiveness of our approach, we conduct extensive experiments on synthetic and real-world graphs, with a specific focus on supervised node classification. Our experimental results indicate the advantages offered by fractional graph Laplacians, particularly in non-homophilic and directed graphs.

# 2 Preliminaries

We denote a graph as $\mathcal{G} = (\mathcal{V},\mathcal{E})$, where $\mathcal{V}$ is the set of nodes, $\mathcal{E}$ is the set of edges, and $N = |\mathcal{V}|$ is the number of nodes. The adjacency matrix $\mathbf{A} := \{a_{i,j}\}$ encodes the edge information, with $a_{i,j} = 1$ if there is an edge directed from node $j$ to node $i$, and 0 otherwise. The in- and out-degree matrices are then defined as $\mathbf{D}_{\mathrm{in}} = \mathrm{diag}(\mathbf{A}\mathbf{1})$ and $\mathbf{D}_{\mathrm{out}} = \mathrm{diag}(\mathbf{A}^{\mathrm{T}}\mathbf{1})$, respectively. The node feature matrix $\mathbf{x} \in \mathbb{R}^{N \times K}$ contains for every node its feature in $\mathbb{R}^K$.

Given any matrix $\mathbf{M} \in \mathbb{C}^{n \times n}$, we denote its spectrum by $\lambda(\mathbf{M}) \coloneqq \{\lambda_i(\mathbf{M})\}_{i=1}^n$ in ascending order w.r.t. the real part, i.e., $\Re \lambda_1(\mathbf{M}) \leq \Re \lambda_2(\mathbf{M}) \leq \ldots \leq \Re \lambda_n(\mathbf{M})$.
Furthermore, we denote by $\|\mathbf{M}\|_2$ and $\|\mathbf{M}\|$ the Frobenius and spectral norm of $\mathbf{M}$, respectively. Lastly, we denote by $\mathbf{I}_n$ the identity matrix, where we omit the dimension $n$ when it is clear from the context.

Homophily and Heterophily Given a graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ with labels $\mathbf{y} = \{y_i\}_{i \in \mathcal{V}}$, the homophily of the graph indicates whether connected nodes are likely to have the same labels; formally,

$$
\mathcal{H}(\mathcal{G}) = \frac{1}{N} \sum_{i=1}^{N} \frac{|\{j \in \{1, \ldots, N\} : a_{i,j} = 1 \wedge y_i = y_j\}|}{|\{j \in \{1, \ldots, N\} : a_{i,j} = 1\}|},
$$

where the numerator represents the number of neighbors of node $i \in \mathcal{V}$ that have the same label $y_i$ (Pei et al., 2019). We say that $\mathcal{G}$ is homophilic if $\mathcal{H}(\mathcal{G}) \approx 1$ and heterophilic if $\mathcal{H}(\mathcal{G}) \approx 0$.

# 3 Dirichlet Energy and Laplacian for (Directed) Graphs

In this section, we introduce the concept of Dirichlet energy and demonstrate its relationship to a directed Laplacian, thereby generalizing well-known results for undirected graphs.

Definition 3.1. The Dirichlet energy is defined on the node features $\mathbf{x} \in \mathbb{R}^{N \times K}$ of a graph $\mathcal{G}$ as

$$
\mathcal{E}(\mathbf{x}) := \frac{1}{4} \sum_{i,j=1}^{N} a_{i,j} \left\| \frac{\mathbf{x}_i}{\sqrt{d_i^{in}}} - \frac{\mathbf{x}_j}{\sqrt{d_j^{out}}} \right\|_2^2. \tag{1}
$$

The Dirichlet energy measures how much the features change over the nodes of $\mathcal{G}$, by quantifying the disparity between the normalized outflow of information from node $j$ and the normalized inflow of information to node $i$.

Definition 3.2.
We define the symmetrically normalized adjacency (SNA) as $\mathbf{L} \coloneqq \mathbf{D}_{in}^{-1/2} \mathbf{A} \mathbf{D}_{out}^{-1/2}$.

![](images/88623ff97357584cf57a1df0f59a54aec160e06608804e3a174ec8e62fa125aa.jpg)
Figure 1: Spectrum $\lambda(\mathbf{L})$ of common directed real-world graphs. The Perron-Frobenius eigenvalue is $\lambda_{\mathrm{PF}} \approx 0.94$ for Chameleon, and $\lambda_{\mathrm{PF}} \approx 0.89$ for Squirrel.

![](images/6e6733fcb5ea0c8b844bea73bcac6a5954f9f99993f9e9bc1c15a73c8a207dbe.jpg)

![](images/88c667c9ca4ebaa04dda0c0f11b07d456a9bab7175470d3797447a052ee0a090.jpg)
Figure 2: Examples of non-weakly balanced (left), weakly balanced (center), and balanced (right) directed graphs. The Perron-Frobenius eigenvalue of the left graph is $\lambda_{\mathrm{PF}} \approx 0.97 \neq 1$, while for the middle and right graphs $\lambda_{\mathrm{PF}} = 1$.

![](images/75c97158f155c641fef406f2357ad99b7d50b7efd0312e75792bfb2f1f23367a.jpg)

![](images/d1d4b235f460269b2adda3680dc82e39bf7a671e350b5abb614ac48c8dad7fa9.jpg)

Note that $\mathbf{L}$ is symmetric if and only if $\mathcal{G}$ is undirected; the term "symmetrically" refers to the both-sided normalization rather than the specific property of the matrix itself.

It is well-known that the spectrum of the SNA of a connected undirected graph lies within $[-1, 1]$ (Chung, 1997). We extend this result to directed graphs, which generally exhibit complex-valued spectra.

Proposition 3.3. Let $\mathcal{G}$ be a directed graph with SNA $\mathbf{L}$. For every $\lambda \in \lambda(\mathbf{L})$, it holds that $|\lambda| \leq 1$.

Proposition 3.3 provides an upper bound on the eigenvalues' moduli for any directed graph, irrespective of its size. However, many other spectral properties do not carry over easily from the undirected to the directed case. For example, the SNA may not possess a one-eigenvalue, even if the graph is strongly connected (see, e.g., Figures 1 to 2).
The one-eigenvalue is of particular interest since its eigenvector $\mathbf{v}$ corresponds to zero Dirichlet energy $\mathcal{E}(\mathbf{v}) = 0$. Therefore, studying when $1 \in \lambda(\mathbf{L})$ is crucial to understanding the behavior of the Dirichlet energy. We fully characterize the set of graphs for which $1 \in \lambda(\mathbf{L})$; this is the scope of the following definition.

Definition 3.4. A graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ is said to be balanced if $d_i^{in} = d_i^{out}$ for all $i \in \{1, \ldots, N\}$, and weakly balanced if there exists $\mathbf{k} \in \mathbb{R}^N$ such that $\mathbf{k} \neq 0$ and

$$
\sum_{j=1}^{N} a_{i,j} \left( \frac{k_j}{\sqrt{d_j^{out}}} - \frac{k_i}{\sqrt{d_i^{in}}} \right) = 0, \quad \forall i \in \{1, \ldots, N\}.
$$

It is straightforward to see that a balanced graph is weakly balanced since one can choose $k_i = \sqrt{d_i^{\mathrm{in}}}$. Hence, all undirected graphs are also weakly balanced. However, as shown in Figure 2, the set of balanced graphs is a proper subset of the set of weakly balanced graphs.

Proposition 3.5. Let $\mathcal{G}$ be a directed graph with SNA $\mathbf{L}$. Then, $1 \in \lambda(\mathbf{L})$ if and only if the graph is weakly balanced. Suppose the graph is strongly connected; then $-1 \in \lambda(\mathbf{L})$ if and only if the graph is weakly balanced with an even period.

Proposition 3.5 generalizes a well-known result for undirected graphs: $-1 \in \lambda(\mathbf{L})$ if and only if the graph is bipartite, i.e., has even period. The next result shows that the Dirichlet energy defined in (1) and the SNA are closely connected.

Proposition 3.6. For every $\mathbf{x} \in \mathbb{C}^{N \times K}$, it holds $\mathcal{E}(\mathbf{x}) = \frac{1}{2}\Re\left(\operatorname{trace}\left(\mathbf{x}^{\mathsf{H}}(\mathbf{I} - \mathbf{L})\mathbf{x}\right)\right)$.
Moreover, there exists $\mathbf{x} \neq \mathbf{0}$ such that $\mathcal{E}(\mathbf{x}) = 0$ if and only if the graph is weakly balanced.

Proposition 3.6 generalizes the well-known result from the undirected case (see, e.g., Cai et al., 2020, Definition 3.1) to the directed one. This result is an important tool for analyzing the evolution of the Dirichlet energy in graph neural networks.

![](images/602d34fa79741c0453ac474372001803c50bff1b4ec5b5626ac3f412f2aae05f.jpg)
(a) Synthetic cycle graph. The values of the fractional Laplacian can also be negative.

![](images/02e2f35360cb2c553b2c5fbe34d8b0bbdd8505a4a264238dd8ace5f8ccf0c45f.jpg)
(b) Real-world graphs. We count the number of virtual edges built by the fractional Laplacian based on the distance $d(i,j)$ in the original graph. The number of virtual edges increases as $\alpha$ decreases.
Figure 3: Visual representation of long-range edges built by the fractional Laplacian.

# 4 Fractional Graph Laplacians

We introduce the fractional graph Laplacian through the singular value decomposition (SVD). This approach has two key advantages over the traditional definition (Pozrikidis, 2018; Benzi et al., 2020) in the spectral domain. First, it allows defining the fractional Laplacian based on any choice of graph Laplacian, including those with negative or complex spectrum, such as the SNA. Second, the SVD is computationally more efficient and numerically more stable than the Jordan decomposition, which would be necessary if the fractional Laplacian were defined in the spectral domain.

Consider a directed graph with SNA $\mathbf{L}$ and its SVD $\mathbf{L} = \mathbf{U}\pmb{\Sigma}\mathbf{V}^{\mathsf{H}}$, where $\mathbf{U},\mathbf{V} \in \mathbb{C}^{N \times N}$ are unitary matrices and $\pmb{\Sigma} \in \mathbb{R}^{N \times N}$ is a diagonal matrix.
Given $\alpha \in \mathbb{R}$, we define the $\alpha$-fractional graph Laplacian ($\alpha$-FGL in short) as

$$
\mathbf{L}^{\alpha} := \mathbf{U}\boldsymbol{\Sigma}^{\alpha}\mathbf{V}^{\mathsf{H}}.
$$

In undirected graphs, the $\alpha$-FGL preserves the sign of the eigenvalues $\lambda$ of $\mathbf{L}$ while modifying their magnitudes, i.e., $\lambda \mapsto \mathrm{sign}(\lambda)|\lambda|^{\alpha}$.

The $\alpha$-FGL is generally less sparse than the original SNA, as it connects nodes that are not adjacent in the underlying graph. The next theorem proves that the weight of such "virtual" edges is bounded.

Theorem 4.1. Let $\mathcal{G}$ be a directed graph with SNA $\mathbf{L}$. For $\alpha > 0$, if the distance $d(i,j)$ between nodes $i$ and $j$ is at least 2, then

$$
\left| \left(\mathbf{L}^{\alpha}\right)_{i,j} \right| \leq \left(1 + \frac{\pi^2}{2}\right) \left(\frac{\|\mathbf{L}\|}{2(d(i,j) - 1)}\right)^{\alpha}.
$$

We provide a proof of Theorem 4.1 in Appendix C. In Figure 3a, we visually represent the cycle graph with eight nodes and the corresponding $\alpha$-FGL entries. We also refer to Figure 3b, where we depict the distribution of $\alpha$-FGL entries for the real-world graphs Cora (undirected) and Chameleon (directed) with respect to the distance in the original graph. Our empirical findings align with our theoretical results presented in Theorem 4.1.

# 5 Fractional Graph Laplacian Neural ODE

This section explores two fractional Laplacian-based graph neural ODEs. First, we consider the fractional heat equation,

$$
\mathbf{x}'(t) = -\mathbf{L}^{\alpha}\mathbf{x}(t)\mathbf{W}, \quad \mathbf{x}(0) = \mathbf{x}_0, \tag{2}
$$

where $\mathbf{x}_0 \in \mathbb{R}^{N \times K}$ is the initial condition, $\mathbf{x}(t) \in \mathbb{R}^{N \times K}$ for $t > 0$, and $\alpha \in \mathbb{R}$.
We assume that the channel mixing matrix $\mathbf{W} \in \mathbb{R}^{K \times K}$ is a symmetric matrix. Second, we consider the fractional Schrödinger equation,

$$
\mathbf{x}'(t) = i\mathbf{L}^{\alpha}\mathbf{x}(t)\mathbf{W}, \quad \mathbf{x}(0) = \mathbf{x}_0, \tag{3}
$$

where $\mathbf{x}_0, \mathbf{x}(t) \in \mathbb{C}^{N \times K}$ and $\mathbf{W} \in \mathbb{C}^{K \times K}$ is unitarily diagonalizable. Both (2) and (3) can be solved analytically. For instance, the solution of (2) is given by $\operatorname{vec}(\mathbf{x})(t) = \exp(-t\mathbf{W} \otimes \mathbf{L}^{\alpha})\operatorname{vec}(\mathbf{x}_0)$, where $\otimes$ denotes the Kronecker product and $\operatorname{vec}(\cdot)$ represents the vectorization operation. However, calculating the exact solution is computationally infeasible since the memory required to store $\mathbf{W} \otimes \mathbf{L}^{\alpha}$ alone grows as $(NK)^2$. Therefore, we rely on numerical schemes to solve (2) and (3).

In the remainder of this section, we analyze the Dirichlet energy for solutions to (2) and (3). We begin with the definition of oversmoothing.

Definition 5.1. Neural ODE-based GNNs are said to oversmooth if the normalized Dirichlet energy decays exponentially fast. That is, for any initial value $\mathbf{x}_0$, the solution $\mathbf{x}(t)$ satisfies for every $t > 0$

$$
\left| \mathcal{E}\left(\frac{\mathbf{x}(t)}{\|\mathbf{x}(t)\|_2}\right) - \min \lambda(\mathbf{I} - \mathbf{L}) \right| \leq \exp(-Ct), \quad C > 0.
$$

Definition 5.1 captures the actual smoothness of features by considering the normalized Dirichlet energy, which mitigates the impact of feature amplitude (Cai et al., 2020; Di Giovanni et al., 2023). Additionally, Proposition 3.6 shows that the normalized Dirichlet energy is intimately related to the numerical range of $\mathbf{I} - \mathbf{L}$ of the underlying graph.
This shows that the Dirichlet energy and eigenvalues (or frequencies) of the SNA are intertwined, and one can equivalently talk about Dirichlet energy or frequencies (see also Lemma D.2). In particular, it holds that

$$
0 \leq \mathcal{E}\left(\frac{\mathbf{x}(t)}{\|\mathbf{x}(t)\|_2}\right) \leq \frac{\|\mathbf{I} - \mathbf{L}\|}{2}.
$$

As seen in Section 3, the minimal possible value attained by the normalized Dirichlet energy is often strictly greater than 0 for directed graphs. This indicates that GNNs on general directed graphs inherently cannot oversmooth to the same extent as on undirected ones. However, we prove that a vanilla GCN implementing the directed SNA oversmooths with respect to Definition 5.1; see Appendix E.3.

# 5.1 Frequency Analysis for Graphs with Normal SNA

This subsection focuses on the frequency analysis of FGL-based neural ODEs for undirected graphs. Most classical GNNs (Kipf et al., 2017; Velickovic et al., 2018) and also graph neural ODEs (Chamberlain et al., 2021; Eliasof et al., 2021) have been shown to oversmooth. Di Giovanni et al. (2023) proved that the normalized Dirichlet energy for GNNs based on (2) with $\alpha = 1$ can not only converge to its minimal value but also to its maximal possible value. A GNN exhibiting this property is then termed Highest-Frequency-Dominant (HFD).

However, in real-world scenarios, most graphs are neither purely homophilic nor purely heterophilic but fall somewhere in between. Intuitively, this suggests that mid-range frequencies might be more suitable. To illustrate this intuition, consider the cycle graph as an example. If we have a homophily of 1, low frequencies are optimal; with a homophily equal to 0, high frequencies are optimal. Interestingly, for a homophily of $\frac{1}{2}$, the mid-range frequency is optimal, even though the eigendecomposition is label-independent. More information on this example can be found in Figure 4 and Appendix F.
Based on this observation, we propose the following definition to generalize the concept of HFD, accommodating not only the lowest or highest frequency but all possible frequencies. + +![](images/c82e25a60dee74aadc3a8452acbdd623c337d49a1ef9d2ef14e4d07549d8341a.jpg) +Figure 4: Eigendecomposition of $\mathbf{L}$ for the cycle graph $C_8$ (see Appendix F). The first two rows show the eigenvectors corresponding to the eigenvalues $\lambda$ . The last row shows how the (label-unaware) eigendecomposition can be used to study homophily, whose definition requires the labels. + +Definition 5.2. Let $\lambda \geq 0$ . Neural ODE-based GNNs initialized at $\mathbf{x}_0$ are $\lambda$ -Frequency-Dominant ( $\lambda$ -FD) if the solution $\mathbf{x}(t)$ satisfies + +$$ +\mathcal {E} \left(\frac {\mathbf {x} (t)}{\| \mathbf {x} (t) \| _ {2}}\right) \xrightarrow {t \to \infty} \frac {\lambda}{2}. +$$ + +Suppose $\lambda$ is the smallest or the largest eigenvalue with respect to the real part. In that case, we call it Lowest-Frequency-Dominant (LFD) or Highest-Frequency-Dominant (HFD), respectively. + +In the following theorem, we show that (2) and (3) are not limited to being LFD or HFD, but can also be mid-frequency dominant. + +Theorem 5.3. Let $\mathcal{G}$ be an undirected graph with SNA L. Consider the initial value problem in (2) with $\mathbf{W} \in \mathbb{R}^{K \times K}$ and $\alpha \in \mathbb{R}$ . Then, for almost all initial values $\mathbf{x}_0 \in \mathbb{R}^{N \times K}$ the following holds. + +$(\alpha >0)$ The solution to (2) is either HFD or LFD. + +$(\alpha < 0)$ Let $\lambda_{+}(\mathbf{L})$ and $\lambda_{-}(\mathbf{L})$ be the smallest positive and negative non-zero eigenvalue of $\mathbf{L}$ , respectively. The solution to (2) is either $(1 - \lambda_{+}(\mathbf{L}))$ -FD or $(1 - \lambda_{-}(\mathbf{L}))$ -FD. 
Furthermore, the previous results also hold for solutions to the Schrödinger equation (3) if $\mathbf{W} \in \mathbb{C}^{K \times K}$ has at least one eigenvalue with non-zero imaginary part.

Theorem 5.3 ($\alpha > 0$) generalizes the result by Di Giovanni et al. (2023) for $\alpha = 1$ to arbitrary positive values of $\alpha$. The convergence speed in Theorem 5.3 ($\alpha > 0$) depends on the choice of $\alpha \in \mathbb{R}$. By selecting a variable $\alpha$ (e.g., as a learnable parameter), we establish a flexible learning framework capable of adapting the convergence speed of the Dirichlet energy. A slower or more adjustable convergence speed facilitates broader frequency exploration, as the energy approaches its maximal or minimal value more gradually. Consequently, the contributions of the frequency components (for finite time, i.e., in practice) are better balanced, which is advantageous for graphs with different homophily levels. Theorem 5.3 ($\alpha < 0$) shows that solutions of the fractional neural ODEs in (2) and (3) are not limited to being LFD or HFD. To demonstrate this and the other results of Theorem 5.3, we solve (2) using an explicit Euler scheme for different choices of $\alpha$ and $\mathbf{W}$ on the Cora and Chameleon graphs. The resulting evolution of the Dirichlet energy with respect to time is illustrated in Figure 5. Finally, we refer to Theorem D.5 in Appendix D.1 for the full statement and proof of Theorem 5.3.

Remark 5.4. Theorem 5.3 is stated for the analytical solutions of (2) and (3), respectively. As noted in Section 5, calculating the analytical solution is infeasible in practice. However, we show in Appendices D.2 and D.3 that approximations of the solutions of (2) and (3) via explicit Euler schemes satisfy the same Dirichlet energy convergence properties if the step size is sufficiently small.

Remark 5.5.
Theorem 5.3 can be generalized to all directed graphs with normal SNA, i.e., satisfying the condition $\mathbf{L}\mathbf{L}^{\top} = \mathbf{L}^{\top}\mathbf{L}$ . For the complete statement, see Appendix D.1. + +![](images/17e3fcdfafe00609feaf72e80e0ec0c0ce75d59006cc835645c7cdb12543b03e.jpg) + +![](images/e5c84bbf80aa5c333ef93ccaafc892dc6cfaf4d68d0527ed4c5f116fe043cafc.jpg) +(a) Cora (undirected). +(b) chameleon (directed). +Figure 5: Convergence of Dirichlet energy for the solution of equation (2) using an explicit Euler scheme with a step size of $h = 10^{-1}$ . We consider different $\alpha$ -FGL in (2) and choose $\mathbf{W}$ as a random diagonal matrix. In the left plot, $\mathbf{W}$ has only a negative spectrum, while in the right plot, $\mathbf{W}$ has only a positive spectrum. The black horizontal line represents the theoretical limit based on Theorem 5.3. + +# 5.2 Frequency Dominance for Directed Graphs + +Section 5.1 analyzes the Dirichlet energy in graphs with normal SNA. However, the situation becomes significantly more complex when considering generic directed graphs. In our experiments (see Figure 5), we observe that the solution to (2) and (3) does not necessarily lead to oversmoothing. On the contrary, the solution can be controlled to exhibit either LFD or HFD for $\alpha > 0$ , and mid-frequency-dominance for $\alpha < 0$ as proven for undirected graphs in Theorem 5.3. We present an initial theoretical result for directed graphs, specifically in the case of $\alpha = 1$ . + +Theorem 5.6. Let $\mathcal{G}$ be a directed graph with SNA $\mathbf{L}$ . Consider the initial value problem in (2) with diagonal channel mixing matrix $\mathbf{W} \in \mathbb{R}^{K \times K}$ and $\alpha = 1$ . Suppose $\lambda_1(\mathbf{L})$ is unique. For almost all initial values $\mathbf{x}_0 \in \mathbb{R}^{N \times K}$ , the solution to (2) is either HFD or LFD. + +The proof of Theorem 5.6 is given in Appendix E.1. 
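The HFD/LFD dichotomy for $\alpha = 1$ can be reproduced in a few lines. The sketch below is our own simplified illustration (assumptions: an undirected path graph, a single channel $K = 1$, scalar channel mixing $w = \mp 1$, and the energy $\mathcal{E}(\mathbf{x}) = \tfrac{1}{2}\langle \mathbf{x}, (\mathbf{I}-\mathbf{L})\mathbf{x}\rangle$); it runs the explicit Euler scheme for (2) and checks which frequency dominates:

```python
import numpy as np

# Explicit Euler for x' = -L x w with scalar channel mixing w (K = 1),
# on an undirected path graph; L is the symmetrically normalized adjacency.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
d = A.sum(axis=1)
L = A / np.sqrt(np.outer(d, d))
I = np.eye(3)

def normalized_energy(x):
    x = x / np.linalg.norm(x)
    return 0.5 * x @ (I - L) @ x

rng = np.random.default_rng(1)
x0 = rng.standard_normal(3)
h = 0.1

x = x0.copy()
for _ in range(2000):          # w = -1: update x <- x + h L x
    x = x + h * L @ x
    x /= np.linalg.norm(x)     # renormalize to avoid overflow
low = normalized_energy(x)     # lowest frequency dominates: energy -> 0

x = x0.copy()
for _ in range(2000):          # w = +1: update x <- x - h L x
    x = x - h * L @ x
    x /= np.linalg.norm(x)
high = normalized_energy(x)    # highest frequency dominates: energy -> ||I - L|| / 2

assert low < 1e-6
assert np.isclose(high, np.linalg.norm(I - L, 2) / 2, atol=1e-6)
```

The iteration is just power iteration on $\mathbf{I} \pm h\mathbf{L}$, so for almost every $\mathbf{x}_0$ the dynamics locks onto one extreme eigenvector, mirroring the "HFD or LFD" statement for $\alpha > 0$.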
Finally, we refer to Appendix E.2 for the analogous statement and proof when the solution of (2) is approximated via an explicit Euler scheme.

# 6 Numerical Experiments

This section evaluates the fractional Laplacian ODEs on node classification by approximating (2) and (3) with an explicit Euler scheme. This leads to the update rules

$$
\mathbf{x}_{t+1} = \mathbf{x}_{t} - h\mathbf{L}^{\alpha}\mathbf{x}_{t}\mathbf{W}, \qquad \mathbf{x}_{t+1} = \mathbf{x}_{t} + ih\mathbf{L}^{\alpha}\mathbf{x}_{t}\mathbf{W}, \tag{4}
$$

for the heat and Schrödinger equation, respectively. In both cases, $\mathbf{W}$, $\alpha$, and $h$ are learnable parameters, $t$ is the layer index, and $\mathbf{x}_0$ is the initial node feature matrix. In accordance with the results in Section 5, we select $\mathbf{W}$ as a diagonal matrix. The initial features $\mathbf{x}_0$ in (4) are encoded through an MLP, and the output is decoded using a second MLP. We refer to the resulting model as FLODE (fractional Laplacian ODE). In Appendix A, we present details on the baseline models, the training setup, and the exact hyperparameters.

Table 1: Test accuracy (Film, Squirrel, Chameleon, Citeseer) and test AUROC (Minesweeper, Tolokers, Questions) on node classification, top three models. The thorough comparison is reported in Table 4, Appendix A: FLODE consistently outperforms the baseline models GCN and GRAFF, and it achieves results comparable to the state of the art.

(a) Undirected graphs.
| | Squirrel | Chameleon | Citeseer |
| --- | --- | --- | --- |
| 1st | FLODE 64.23 ± 1.84 | FLODE 73.60 ± 1.55 | FLODE 78.07 ± 1.62 |
| 2nd | GREAD 59.22 ± 1.44 | GREAD 71.38 ± 1.30 | Geom-GCN 78.02 ± 1.15 |
| 3rd | GRAFFNL 59.01 ± 1.31 | GRAFFNL 71.38 ± 1.47 | GREAD 77.60 ± 1.81 |

(b) Heterophily-specific graphs.
| | Minesweeper | Tolokers | Questions |
| --- | --- | --- | --- |
| 1st | GAT-sep 93.91 ± 0.35 | FLODE 84.17 ± 0.58 | FSGNN 78.86 ± 0.92 |
| 2nd | GraphSAGE 93.51 ± 0.57 | GAT-sep 83.78 ± 0.43 | FLODE 78.39 ± 1.22 |
| 3rd | FLODE 92.43 ± 0.51 | GAT 83.70 ± 0.47 | GT-sep 78.05 ± 0.93 |

(c) Directed graphs.
| | Film | Squirrel | Chameleon |
| --- | --- | --- | --- |
| 1st | FLODE 37.41 ± 1.06 | HLP 74.17 ± 1.83 | FSGNN 78.14 ± 1.25 |
| 2nd | GRAFF 37.11 ± 1.08 | FLODE 74.03 ± 1.58 | FLODE 77.98 ± 1.05 |
| 3rd | ACM 36.89 ± 1.18 | FSGNN 73.48 ± 2.13 | HLP 77.48 ± 1.50 |
Ablation Study. In Appendix A.3, we investigate the influence of each component (learnable exponent, ODE framework, directionality via the SNA) on the performance of FLODE. The adjustable fractional power in the FGL is a crucial component of FLODE, as it alone outperforms the model employing the ODE framework with a fixed $\alpha = 1$. Further, Appendix A.3 includes ablation studies that demonstrate FLODE's capability to scale efficiently to large depths, as depicted in Figure 8.

Real-World Graphs. We report results on 6 undirected datasets consisting of both homophilic graphs, i.e., Cora (McCallum et al., 2000), Citeseer (Sen et al., 2008), and Pubmed (Namata et al., 2012), and heterophilic graphs, i.e., Film (Tang et al., 2009), Squirrel and Chameleon (Rozemberczki et al., 2021). We evaluate our method on the directed and undirected versions of Squirrel, Film, and Chameleon. In all datasets, we use the standard 10 splits from (Pei et al., 2019). The choice of the baseline models and their results are taken from (Di Giovanni et al., 2023). Further, we test our method on heterophily-specific graph datasets, i.e., Roman-empire, Minesweeper, Tolokers, and Questions (Platonov et al., 2023). The splits, baseline models, and results are taken from (Platonov et al., 2023). The top three models are shown in Table 1, and the thorough comparison is reported in Table 4. Due to memory limitations, we compute only $30\%$ of the singular values for Pubmed, Roman-Empire, and Questions, which serve as the best low-rank approximation of the original SNA.

Synthetic Directed Graph. We consider the directed stochastic block model (DSBM) datasets (Zhang et al., 2021). The DSBM divides nodes into 5 clusters and assigns probabilities for interactions between vertices. It considers two sets of probabilities: $\{\alpha_{i,j}\}$ for undirected edge creation and $\{\beta_{i,j}\}$ for assigning edge directions, $i,j\in \{1,\ldots,5\}$.
The objective is to classify vertices based on their clusters. In the first experiment, $\alpha_{i,j} = \alpha^{*}$ varies, altering the importance of neighborhood information. In the second experiment, $\beta_{i,j} = \beta^{*}$ varies, changing the amount of directional information. The results are shown in Figure 6 and Table 6. The splits, baseline models, and results are taken from (Zhang et al., 2021).

Results. The experiments showcase the flexibility of FLODE, as it can accommodate various types of graphs, both directed and undirected, as well as a broad range of homophily levels. While other methods, such as MagNet (Zhang et al., 2021), perform similarly to our approach, they face limitations when applied to certain graph configurations. For instance, when applied to undirected graphs, MagNet reduces to ChebNet, making it unsuitable for heterophilic graphs. Similarly, GRAFF (Di Giovanni et al., 2023) performs well on undirected graphs but falls short on directed graphs.

![](images/1cc9883571f56929fa34e8e2d3683788e4fce02ec5d2931ccb59beba944e9c85.jpg)
Figure 6: Experiments on the directed stochastic block model. Unlike other models, FLODE's performance does not deteriorate as much when changing the inter-cluster edge density $\alpha^{*}$.

![](images/644c38d3b0ef170b2823329a910b6d0090821853f2e450bde422ac17a66b5e58.jpg)
Figure 7: Effect of truncated SVD on test accuracy (orange) for standard directed real-world graphs. The explained variance, defined as $\sum_{i=1}^{k} \sigma_i^2 / \sum_{j=1}^{N} \sigma_j^2$, measures the variability explained by the first $k$ singular values. For chameleon, the accuracy stabilizes after 570 (25%) singular values, corresponding to an explained variance of 0.998. For squirrel, after 1600 (31%) singular values, corresponding to an explained variance of 0.999, the improvement in test accuracy is only marginal.

![](images/c7b1350e5088fbbe9dc56aaddf96ea8f32dca312a7bb4faf3365dd68d62ad6ed.jpg)
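The SVD-based fractional power and the explained-variance criterion used in Figure 7 can be sketched in a few lines. The code below is our own illustration on a toy directed graph (the graph and the helper `fractional` are hypothetical, not part of the released implementation):

```python
import numpy as np

# Fractional power of the SNA in the singular value domain,
# L^alpha = U diag(sigma^alpha) V^H, on a toy directed graph.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
d_row = A.sum(axis=1)                    # row-degree normalization
d_col = A.sum(axis=0)                    # column-degree normalization
L = A / np.sqrt(np.outer(d_row, d_col))

U, s, Vh = np.linalg.svd(L)

def fractional(alpha, k=None):
    # Keep only the k largest singular values (truncated SVD);
    # alpha >= 0 assumed, since zero singular values may occur.
    k = len(s) if k is None else k
    return (U[:, :k] * s[:k] ** alpha) @ Vh[:k]

# alpha = 1 with all singular values recovers L exactly.
assert np.allclose(fractional(1.0), L)

# Explained variance of the first k singular values (as in Figure 7).
explained = np.cumsum(s**2) / np.sum(s**2)
assert np.isclose(explained[-1], 1.0)
```

Truncating at the smallest $k$ with, say, `explained[k-1] > 0.99` yields the best rank-$k$ approximation of the SNA while cutting storage from $\mathcal{O}(N^2)$ to $\mathcal{O}(Nk)$.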
We note that oftentimes FLODE learns a non-trivial exponent $\alpha \neq 1$ , highlighting the advantages of FGL-based GNNs (see, e.g., Table 5). Furthermore, as shown in Table 9 and Appendix A.3, our empirical results align closely with the theoretical results in Section 5. + +# 7 Conclusion + +In this work, we introduce the concepts of Dirichlet energy and oversmoothing for directed graphs and demonstrate their relation with the SNA. Building upon this foundation, we define fractional graph Laplacians in the singular value domain, resulting in matrices capable of capturing long-range dependencies. To address oversmoothing in directed graphs, we propose fractional Laplacian-based graph ODEs, which are provably not limited to LFD behavior. We finally show the flexibility of our method to accommodate various graph structures and homophily levels in node-level tasks. + +Limitations and Future Work. The computational cost of the SVD grows cubically in $N$ , while the storage of the singular vectors grows quadratically in $N$ . Both costs can be significantly reduced by computing only $k \ll N$ singular values via truncated SVD (Figure 7), giving the best $k$ -rank approximation of the SNA. Moreover, the SVD can be computed offline as a preprocessing step. + +The frequency analysis of $\alpha$ -FGL neural ODEs in directed graphs is an exciting future direction. It would also be worthwhile to investigate the impact of choosing $\alpha \neq 1$ on the convergence speed of the Dirichlet energy. Controlling the speed could facilitate the convergence of the Dirichlet energy to an optimal value, which has been shown to exist in synthetic settings (Keriven, 2022; X. Wu et al., 2022). Another interesting future direction would be to analyze the dynamics when approximating the solution to the FGL neural ODEs using alternative numerical solvers, such as adjoint methods. + +# Acknowledgments + +S. M. 
acknowledges partial support by the NSF-Simons Research Collaboration on the Mathematical and Scientific Foundations of Deep Learning (MoDL) (NSF DMS 2031985) and DFG SPP 1798, KU 1446/27-2. +G. K. acknowledges partial support by the DAAD programme Konrad Zuse Schools of Excellence in Artificial Intelligence, sponsored by the Federal Ministry of Education and Research. G. Kutyniok also acknowledges support from the Munich Center for Machine Learning (MCML) as well as the German Research Foundation under Grants DFG-SPP-2298, KU 1446/31-1 and KU 1446/32-1 and under Grant DFG-SFB/TR 109 and Project C09. + +# References + +Benzi, M., Bertaccini, D., Durastante, F., and Simunec, I. (2020). "Non-Local Network Dynamics via Fractional Graph Laplacians". In: Journal of Complex Networks 8.3. +Bo, D., Wang, X., Shi, C., and Shen, H. (2021). "Beyond Low-frequency Information in Graph Convolutional Networks". In: Proceedings of the AAAI Conference on Artificial Intelligence 35.5, pp. 3950-3957. +Bodnar, C., Di Giovanni, F., Chamberlain, B., Lio, P., and Bronstein, M. (2022). "Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs". In: Advances in Neural Information Processing Systems 35, pp. 18527-18541. +Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., and Vandergheynst, P. (2017). "Geometric Deep Learning: Going beyond Euclidean Data". In: IEEE Signal Processing Magazine 34.4, pp. 18-42. +Cai, C. and Wang, Y. (2020). A Note on Over-Smoothing for Graph Neural Networks. arXiv: 2006.13318 [cs, stat]. +Chamberlain, B., Rowbottom, J., Gorinova, M. I., Bronstein, M., Webb, S., and Rossi, E. (2021). "GRAND: Graph Neural Diffusion". In: Proceedings of the 38th International Conference on Machine Learning. PMLR, pp. 1407-1418. +Chen, D., Lin, Y., Li, W., Li, P., Zhou, J., and Sun, X. (2020). "Measuring and Relieving the Over-Smoothing Problem for Graph Neural Networks from the Topological View". 
In: Proceedings of the AAAI Conference on Artificial Intelligence 34.04, pp. 3438-3445. +Chen, M., Wei, Z., Huang, Z., Ding, B., and Li, Y. (2020). "Simple and Deep Graph Convolutional Networks". In: Proceedings of the 37th International Conference on Machine Learning. PMLR, pp. 1725-1735. +Chen, R. T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. K. (2018). "Neural Ordinary Differential Equations". In: Advances in Neural Information Processing Systems. Vol. 31. Curran Associates, Inc. +Chien, E., Peng, J., Li, P., and Milenkovic, O. (2021). "Adaptive Universal Generalized PageRank Graph Neural Network". In: International Conference on Learning Representations. +Choi, J., Hong, S., Park, N., and Cho, S.-B. (2023). "GREAD: Graph Neural Reaction-Diffusion Networks". In: Proceedings of the 40th International Conference on Machine Learning. Vol. 202. ICML'23. Honolulu, Hawaii, USA: JMLR.org, pp. 5722-5747. +Chung, F. R. K. (1997). Spectral Graph Theory. Regional Conference Series in Mathematics no. 92. Providence, R.I: Published for the Conference Board of the mathematical sciences by the American Mathematical Society. +Defferrard, M., Bresson, X., and Vandergheynst, P. (2016). "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering". In: Advances in Neural Information Processing Systems. Vol. 29. Curran Associates, Inc. +Di Giovanni, F., Rowbottom, J., Chamberlain, B. P., Markovich, T., and Bronstein, M. M. (2023). "Understanding convolution on graphs via energies". In: Transactions on Machine Learning Research. +Du, L., Shi, X., Fu, Q., Ma, X., Liu, H., Han, S., and Zhang, D. (2022). "GBK-GNN: Gated Bi-Kernel Graph Neural Networks for Modeling Both Homophily and Heterophily". In: Proceedings of the ACM Web Conference 2022. WWW '22. New York, NY, USA: Association for Computing Machinery, pp. 1550-1558. +Eliasof, M., Haber, E., and Treister, E. (2021). 
"PDE-GCN: Novel Architectures for Graph Neural Networks Motivated by Partial Differential Equations". In: Advances in Neural Information Processing Systems. Vol. 34. Curran Associates, Inc., pp. 3836-3849. + +Fan, W., Ma, Y., Li, Q., He, Y., Zhao, E., Tang, J., and Yin, D. (2019). "Graph Neural Networks for Social Recommendation". In: The World Wide Web Conference. San Francisco CA USA: ACM, pp. 417-426. +Fey, M. and Lenssen, J. E. (2019). "Fast Graph Representation Learning with PyTorch Geometric". In: ICLR 2019 Workshop on Representation Learning on Graphs and Manifolds. +Gasteiger, J., Bojchevski, A., and Gunnemann, S. (2018). "Predict Then Propagate: Graph Neural Networks Meet Personalized PageRank". In: International Conference on Learning Representations. +Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. (2017). "Neural Message Passing for Quantum Chemistry". In: Proceedings of the 34th International Conference on Machine Learning. PMLR, pp. 1263-1272. +Gori, M., Monfardini, G., and Scarselli, F. (2005). "A New Model for Learning in Graph Domains". In: Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005. Vol. 2, 729-734 vol. 2. +Haber, E. and Ruthotto, L. (2018). "Stable Architectures for Deep Neural Networks". In: Inverse Problems 34.1, p. 014004. eprint: 1705.03341 (cs, math). +Hamilton, W., Ying, Z., and Leskovec, J. (2017). "Inductive Representation Learning on Large Graphs". In: Advances in Neural Information Processing Systems. Vol. 30. Curran Associates, Inc. +He, K., Zhang, X., Ren, S., and Sun, J. (2016). "Deep Residual Learning for Image Recognition". In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778. +Keriven, N. (2022). "Not Too Little, Not Too Much: A Theoretical Analysis of Graph (over)Smoothing". In: Advances in Neural Information Processing Systems. +Kingma, D. P. and Ba, J. (2015). Adam: A Method for Stochastic Optimization. Ed. by Y. 
Bengio and Y. LeCun. +Kipf, T. N. and Welling, M. (2017). "Semi-Supervised Classification with Graph Convolutional Networks". In: International Conference on Learning Representations. +Li, G., Muller, M., Thabet, A., and Ghanem, B. (2019). "DeepGCNs: Can GCNs Go As Deep As CNNs?" In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9267-9276. +Li, Q., Han, Z., and Wu, X.-m. (2018). "Deeper Insights Into Graph Convolutional Networks for Semi-Supervised Learning". In: Proceedings of the AAAI Conference on Artificial Intelligence 32.1. +Li, X., Zhu, R., Cheng, Y., Shan, C., Luo, S., Li, D., and Qian, W. (2022). "Finding Global Homophily in Graph Neural Networks When Meeting Heterophily". In: Proceedings of the 39th International Conference on Machine Learning. PMLR, pp. 13242-13256. +Lingam, V., Ragesh, R., Iyer, A., and Sellamanickam, S. (2021). Simple Truncated SVD Based Model for Node Classification on Heterophilic Graphs. eprint: 2106.12807 (cs). +Luan, S., Hua, C., Lu, Q., Zhu, J., Zhao, M., Zhang, S., Chang, X.-W., and Precup, D. (2022). "Revisiting Heterophily For Graph Neural Networks". In: Advances in Neural Information Processing Systems 35, pp. 1362-1375. +Luan, S., Zhao, M., Chang, X.-W., and Precup, D. (2019). "Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks". In: Advances in Neural Information Processing Systems. Vol. 32. Curran Associates, Inc. +Maurya, S. K., Liu, X., and Murata, T. (2021). Improving Graph Neural Networks with Simple Architecture Design. arXiv: 2105.07634 [cs, stat]. +McCallum, A. K., Nigam, K., Rennie, J., and Seymore, K. (2000). "Automating the Construction of Internet Portals with Machine Learning". In: Information Retrieval 3.2, pp. 127-163. +Monti, F., Frasca, F., Eynard, D., Mannion, D., and Bronstein, M. M. (2019). Fake News Detection on Social Media Using Geometric Deep Learning. arXiv: 1902.06673. +Namata, G. M., London, B., Getoor, L., and Huang, B. (2012). 
"Query-driven Active Surveying for Collective Classification". In: Workshop on Mining and Learning with Graphs. +Oono, K. and Suzuki, T. (2019). "Graph Neural Networks Exponentially Lose Expressive Power for Node Classification". In: International Conference on Learning Representations. +Paszke, A. et al. (2019). "PyTorch: An Imperative Style, High-Performance Deep Learning Library". In: Advances in Neural Information Processing Systems. Vol. 32. Curran Associates, Inc. +Pei, H., Wei, B., Chang, K. C.-C., Lei, Y., and Yang, B. (2019). "Geom-GCN: Geometric Graph Convolutional Networks". In: International Conference on Learning Representations. +Perko, L. (2001). Differential Equations and Dynamical Systems. Ed. by J. E. Marsden, L. Sirovich, and M. Golubitsky. Vol. 7. Texts in Applied Mathematics. New York, NY: Springer New York. + +Platonov, O., Kuznedelev, D., Diskin, M., Babenko, A., and Prokhorenkova, L. (2023). "A critical look at the evaluation of GNNs under heterophily: Are we really making progress?" In: The Eleventh International Conference on Learning Representations. +Poli, M., Massaroli, S., Park, J., Yamashita, A., Asama, H., and Park, J. (2021). Graph Neural Ordinary Differential Equations. arXiv: 1911.07532 [cs, stat]. +Pozrikidis, C. (2018). The Fractional Laplacian. First. Boca Raton: Taylor & Francis, 2016. | “A CRC title”: Chapman and Hall/CRC. +Rong, Y., Huang, W., Xu, T., and Huang, J. (2020). "DropEdge: Towards Deep Graph Convolutional Networks on Node Classification". In: International Conference on Learning Representations. +Rozemberczki, B., Allen, C., and Sarkar, R. (2021). "Multi-Scale Attributed Node Embedding". In: Journal of Complex Networks 9.2. +Rusch, T. K., Chamberlain, B., Rowbottom, J., Mishra, S., and Bronstein, M. (2022). "Graph-Coupled Oscillator Networks". In: Proceedings of the 39th International Conference on Machine Learning. PMLR, pp. 18888-18909. +Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., and Monfardini, G. 
(2009). "The Graph Neural Network Model". In: IEEE Transactions on Neural Networks 20.1, pp. 61-80. +Sen, P., Namata, G., Bilgic, M., Getoor, L., Galligher, B., and Eliassi-Rad, T. (2008). "Collective Classification in Network Data". In: AI Magazine 29.3, p. 93. +Shi, Y., Huang, Z., Feng, S., Zhong, H., Wang, W., and Sun, Y. (2021). "Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification". In: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence. Montreal, Canada: International Joint Conferences on Artificial Intelligence Organization, pp. 1548-1554. +Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". In. +Tang, J., Sun, J., Wang, C., and Yang, Z. (2009). "Social Influence Analysis in Large-Scale Networks". In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Paris France: ACM, pp. 807-816. +Tong, Z., Liang, Y., Sun, C., Li, X., Rosenblum, D., and Lim, A. (2020). "Digraph Inception Convolutional Networks". In: Advances in Neural Information Processing Systems. Vol. 33. Curran Associates, Inc., pp. 17907-17918. +Tong, Z., Liang, Y., Sun, C., Rosenblum, D. S., and Lim, A. (2020). Directed Graph Convolutional Network. arXiv: 2004.13970 [cs, stat]. +Velickovic, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., and Bengio, Y. (2018). "Graph Attention Networks". In: International Conference on Learning Representations. +Wang, J., Huang, P., Zhao, H., Zhang, Z., Zhao, B., and Lee, D. L. (2018). "Billion-Scale Commodity Embedding for E-commerce Recommendation in Alibaba". In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. KDD '18. New York, NY, USA: Association for Computing Machinery, pp. 839-848. +Wang, X. and Zhang, M. (2022). "How Powerful Are Spectral Graph Neural Networks". 
In: Proceedings of the 39th International Conference on Machine Learning. PMLR, pp. 23341-23362. +Wang, Y., Yi, K., Liu, X., Wang, Y. G., and Jin, S. (2022). "ACMP: Allen-Cahn Message Passing with Attractive and Repulsive Forces for Graph Neural Networks". In: The Eleventh International Conference on Learning Representations. +Wu, F., Souza, A., Zhang, T., Fifty, C., Yu, T., and Weinberger, K. (2019). "Simplifying Graph Convolutional Networks". In: Proceedings of the 36th International Conference on Machine Learning. PMLR, pp. 6861-6871. +Wu, X., Chen, Z., Wang, W. W., and Jadbabaie, A. (2022). "A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks". In: The Eleventh International Conference on Learning Representations. +Xhonneux, L.-P., Qu, M., and Tang, J. (2020). "Continuous Graph Neural Networks". In: Proceedings of the 37th International Conference on Machine Learning. PMLR, pp. 10432-10441. +Xu, K., Hu, W., Leskovec, J., and Jegelka, S. (2018). "How Powerful Are Graph Neural Networks?" In: International Conference on Learning Representations. +Yan, Y., Hashemi, M., Swersky, K., Yang, Y., and Koutra, D. (2022). "Two Sides of the Same Coin: Heterophily and Oversmoothing in Graph Convolutional Neural Networks". In: 2022 IEEE International Conference on Data Mining (ICDM). Orlando, FL, USA: IEEE, pp. 1287-1292. +Zhang, X., He, Y., Brugnone, N., Perlmutter, M., and Hirn, M. (2021). "MagNet: A Neural Network for Directed Graphs". In: Advances in Neural Information Processing Systems. Vol. 34. Curran Associates, Inc., pp. 27003-27015. + +Zhao, L. and Akoglu, L. (2019). "PairNorm: Tackling Oversmoothing in GNNs". In: International Conference on Learning Representations. +Zhu, J., Rossi, R. A., Rao, A., Mai, T., Lipka, N., Ahmed, N. K., and Koutra, D. (2021). "Graph Neural Networks with Heterophily". In: Proceedings of the AAAI Conference on Artificial Intelligence 35.12, pp. 11168-11176. 
+Zhu, J., Yan, Y., Zhao, L., Heimann, M., Akoglu, L., and Koutra, D. (2020). "Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs". In: Advances in Neural Information Processing Systems. Vol. 33. Curran Associates, Inc., pp. 7793-7804. +Zou, C., Han, A., Lin, L., and Gao, J. (2022). A Simple Yet Effective SVD-GCN for Directed Graphs. arXiv: 2205.09335 [cs]. + +# Acronyms + +AUROC Area under the ROC curve + +DSBM Directed Stochastic Block Model + +FD Frequency Dominant + +FGL Fractional Graph Laplacian + +GAT Graph Attention Network + +GCN Graph Convolutional Network + +GNN Graph Neural Network + +HFD Highest Frequency Dominant + +LCC Largest Connected Components + +LFD Lowest Frequency Dominant + +MLP Multi-Layer Perceptron + +ODE Ordinary Differential Equation + +SNA Symmetrically Normalized Adjacency + +SVD Singular Value Decomposition + +# Notation + +$i$ Imaginary unit + +$\Re (z)$ Real part of $z\in \mathbb{C}$ + +$\Im (z)$ Imaginary part of $z\in \mathbb{C}$ + +$\operatorname {diag}(\mathbf{x})$ Diagonal matrix with $\mathbf{x}$ on the diagonal. + +1 Constant vector of all 1s. + +$\mathbf{M}^{\mathrm{T}}$ Transpose of $\mathbf{M}$ + +$\mathbf{M}^{*}$ Conjugate of M + +$\mathbf{M}^{\mathrm{H}}$ Conjugate transpose of $\mathbf{M}$ + +$\| \mathbf{M}\|$ Spectral norm of M + +$\| \mathbf{M}\| _2$ Frobenius norm of M + +$\lambda (\mathbf{M})$ Spectrum of M + +$\sigma (\mathbf{M})$ Singular values of $\mathbf{M}$ + +$\mathcal{E}(\mathbf{x})$ Dirichlet energy computed on $\mathbf{x}$ + +$\mathcal{H}(\mathcal{G})$ Homophily coefficient of the graph $\mathcal{G}$ + +$\mathbf{A}\otimes \mathbf{B}$ Kronecker product between A and B + +vec $(\mathbf{M})$ Vector obtained stacking columns of M. + +# A Implementation Details + +In this section, we give the details on the numerical results in Section 6. We begin by describing the exact model. + +Model architecture. 
Let $\mathcal{G}$ be a directed graph and $\mathbf{x}_0\in \mathbb{R}^{N\times K}$ the node features. Our architecture first embeds the input node features $\mathbf{x}_0$ via a multi-layer perceptron (MLP). We then evolve the features $\mathbf{x}_0$ according to a slightly modified version of (3), i.e., $\mathbf{x}'(t) = -i\mathbf{L}^{\alpha}\mathbf{x}(t)\mathbf{W}$, for some time $t\in [0,T]$. In our experiments, we approximate the solution with an explicit Euler scheme with step size $h > 0$. This leads to the update rule

$$
\mathbf{x}_{t+1} = \mathbf{x}_{t} - ih\mathbf{L}^{\alpha}\mathbf{x}_{t}\mathbf{W}.
$$

The channel mixing matrix is a diagonal learnable matrix $\mathbf{W} \in \mathbb{C}^{K \times K}$, and $\alpha \in \mathbb{R}$, $h \in \mathbb{C}$ are also learnable parameters. The features at the last time step, $\mathbf{x}_T$, are then fed into a second MLP, whose output is used as the final output. Both MLPs use LeakyReLU as non-linearity and dropout (Srivastava et al., 2014). In contrast, the graph layers use neither dropout nor a non-linearity. A sketch of the algorithm is reported in Algorithm 1.

Algorithm 1: fLode
```
% A, x_0 are given.
% Preprocessing
1  D_in  = diag(A 1)
2  D_out = diag(A^T 1)
3  L = D_in^{-1/2} A D_out^{-1/2}
4  U, Sigma, V^H = svd(L)
% The core of the algorithm is very simple
5  def training_step(x_0):
6      x_0 = input_MLP(x_0)
7      for t in {1, ..., T}:
8          x_t = x_{t-1} - i h U Sigma^alpha V^H x_{t-1} W
9      x_T = output_MLP(x_T)
10     return x_T
```

Complexity.
The computation of the SVD is $\mathcal{O}(N^3)$. However, one can compute only the first $p \ll N$ singular values, which cuts the cost down to $\mathcal{O}(N^2 p)$. The memory required to store the singular vectors is $\mathcal{O}(N^2)$, since they are not sparse in general. Each training step has a cost of $\mathcal{O}(N^2 K)$.

Experimental details. Our model is implemented in PyTorch (Paszke et al., 2019), using PyTorch Geometric (Fey et al., 2019). The computation of the SVD for the fractional Laplacian is implemented using the linalg library provided by PyTorch. In the case of truncated SVD, we use the function randomized_svd provided by the extmath library from sklearn. The code and instructions to reproduce the experiments are available on GitHub. Hyperparameters were tuned using grid search. All experiments were run on an internal cluster with NVIDIA GeForce RTX 2080 Ti and NVIDIA TITAN RTX GPUs with 16 and 24 GB of memory, respectively.

Training details. All models were trained for 1000 epochs using Adam (Kingma et al., 2015) as the optimizer with a fixed learning rate. We perform early stopping if the validation metric does not increase for 200 epochs.

# A.1 Real-World Graphs

Undirected graphs. We conducted 10 repetitions using the data splits obtained from (Pei et al., 2019). For each split, $48\%$ of the nodes are used for training, $32\%$ for validation, and $20\%$ for testing. In all datasets, we considered the largest connected component (LCC). Chameleon, Squirrel, and Film are directed graphs; hence, we converted them to undirected. Cora, Citeseer, and Pubmed are already undirected graphs; to these, we added self-loops. We normalized the input node features for all graphs.

As baseline models, we considered the same models as in (Di Giovanni et al., 2023). The results were provided by Pei et al. (2019) and include standard GNNs, such as GAT (Velickovic et al., 2018), GCN (Kipf et al., 2017), and GraphSAGE (Hamilton et al., 2017).
We also included models designed to address oversmoothing and heterophilic graphs, such as PairNorm (L. Zhao et al., 2019), GGCN (Yan et al., 2022), Geom-GCN (Pei et al., 2019), $\mathrm{H}_2\mathrm{GCN}$ (Zhu, Yan, et al., 2020), GPRGNN (Chien et al., 2021), and Sheaf (Bodnar et al., 2022). Furthermore, we included the graph neural ODE-based approaches, CGNN (Xhonneux et al., 2020) and GRAND (Chamberlain et al., 2021), as in (Di Giovanni et al., 2023), and the model GRAFF from (Di Giovanni et al., 2023) itself. Finally, we included GREAD (Choi et al., 2023), GraphCON (Rusch et al., 2022), ACMP (Y. Wang et al., 2022) and GCN and GAT equipped with DropEdge (Rong et al., 2020). + +Heterophily-specific Models For heterophily-specific datasets, we use the same models and results as in (Platonov et al., 2023). As baseline models we considered the topology-agnostic ResNet (He et al., 2016) and two graph-aware modifications: ResNet+SGC(F. Wu et al., 2019) where the initial node features are multiplied by powers of the SNA, and ResNet+adj, where rows of the adjacency matrix are used as additional node features; GCN (Kipf et al., 2017), GraphSAGE (Hamilton et al., 2017); GAT (Velickovic et al., 2018) and GT (Shi et al., 2021) as well as their modification GAT-sep and GT-sep which separate ego- and neighbor embeddings; $\mathrm{H}_2\mathrm{GCN}$ (Zhu, Yan, et al., 2020), CPGNN (Zhu, Rossi, et al., 2021), GPRGNN (Chien et al., 2021), FSGNN (Maurya et al., 2021), GloGNN (X. Li et al., 2022), FAGCN (Bo et al., 2021), GBK-GNN (Du et al., 2022), and JacobiConv (X. Wang et al., 2022). + +The exact hyperparameters for FLODE are provided in Table 5. + +# A.2 Synthetic Directed Graphs + +The dataset and code are taken from (Zhang et al., 2021). As baseline models, we considered the ones in (Zhang et al., 2021) for which we report the corresponding results. 
The baseline models include standard GNNs, such as ChebNet (Defferrard et al., 2016), GCN (Kipf et al., 2017), GraphSAGE (Hamilton et al., 2017), APPNP (Gasteiger et al., 2018), GIN (Xu et al., 2018), GAT (Velickovic et al., 2018), but also models specifically designed for directed graphs, such as DGCN (Tong, Liang, Sun, Rosenblum, et al., 2020), DiGraph and DiGraphIB (Tong, Liang, Sun, X. Li, et al., 2020), MagNet (Zhang et al., 2021)). + +The DSBM dataset. The directed stochastic block model (DSBM) is described in detail in (Zhang et al., 2021, Section 5.1.1). To be self-contained, we include a short explanation. + +The DSBM model is defined as follows. There are $N$ vertices, which are divided into $n_c$ clusters $(C_1,C_2,\ldots C_{n_c})$ , each having an equal number of vertices. An interaction is defined between any two distinct vertices, $u$ and $v$ , based on two sets of probabilities: $\{\alpha_{i,j}\}_{i,j=1}^{n_c}$ and $\{\beta_{i,j}\}_{i,j=1}^{n_c}$ . + +The set of probabilities $\{\alpha_{i,j}\}$ is used to create an undirected edge between any two vertices $u$ and $v$ , where $u$ belongs to cluster $C_i$ and $v$ belongs to cluster $C_j$ . The key property of this probability set is that $\alpha_{i,j} = \alpha_{j,i}$ , which means the chance of forming an edge between two clusters is the same in either direction. + +The set of probabilities $\{\beta_{i,j}\}$ is used to assign a direction to the undirected edges. For all $i, j \in \{1, \ldots, n_c\}$ , we assume that $\beta_{i,j} + \beta_{j,i} = 1$ holds. Then, to the undirected edge $(u,v)$ is assigned the direction from $u$ to $v$ with probability $\beta_{i,j}$ if $u$ belongs to cluster $C_i$ and $v$ belongs to cluster $C_j$ , and the direction from $v$ to $u$ with probability $\beta_{j,i}$ . + +The primary objective here is to classify the vertices based on their respective clusters. + +There are several scenarios designed to test different aspects of the baseline models and our model. 
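The sampling procedure described above can be made concrete with a minimal DSBM generator. The following NumPy sketch follows the stated conventions ($\alpha$ symmetric, $\beta_{i,j} + \beta_{j,i} = 1$); the function name `dsbm` is ours and not taken from the released code of Zhang et al. (2021):

```python
import numpy as np

def dsbm(N, n_c, alpha, beta, rng=None):
    """Sample a directed stochastic block model adjacency matrix.

    alpha, beta: (n_c, n_c) arrays; alpha must be symmetric and
    beta must satisfy beta[i, j] + beta[j, i] == 1, as in the text.
    """
    rng = np.random.default_rng(rng)
    labels = np.repeat(np.arange(n_c), N // n_c)    # equal-sized clusters
    A = np.zeros((N, N), dtype=int)
    for u in range(N):
        for v in range(u + 1, N):
            cu, cv = labels[u], labels[v]
            if rng.random() < alpha[cu, cv]:        # undirected edge u ~ v
                if rng.random() < beta[cu, cv]:     # orient it u -> v ...
                    A[u, v] = 1
                else:                               # ... otherwise v -> u
                    A[v, u] = 1
    return A, labels
```

Each unordered pair is visited once, so the sampled graph has no self-loops and at most one directed edge per pair, matching the model description.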
In the experiments, the total number of nodes is fixed at $N = 2500$ and the number of clusters is fixed at $n_c = 5$ . In all experiments, the training set contains 20 nodes per cluster, 500 nodes for validation, and the rest for testing. The results are averaged over 5 different seeds and splits. + +Table 4: Test accuracy on node classification: top three models indicated as $1^{\mathrm{st}}$ , $2^{\mathrm{nd}}$ , $3^{\mathrm{rd}}$ . +(a) Undirected graphs. + +
| | Film | Squirrel | Chameleon | Citeseer | Pubmed | Cora |
|---|---|---|---|---|---|---|
| GGCN | 37.54 ± 1.56 | 55.17 ± 1.58 | 71.14 ± 1.84 | 77.14 ± 1.45 | 89.15 ± 0.37 | 87.95 ± 1.05 |
| GPRGNN | 34.63 ± 1.22 | 31.61 ± 1.24 | 46.58 ± 1.71 | 77.13 ± 1.67 | 87.54 ± 0.38 | 87.95 ± 1.18 |
| FAGCN | 35.70 ± 1.00 | 36.48 ± 1.86 | 60.11 ± 2.15 | 77.11 ± 1.57 | 89.49 ± 0.38 | 87.87 ± 1.20 |
| GCNII | 37.44 ± 1.30 | 38.47 ± 1.58 | 63.86 ± 3.04 | 77.33 ± 1.48 | 90.15 ± 0.43 | 88.37 ± 1.25 |
| Geom-GCN | 31.59 ± 1.15 | 38.15 ± 0.92 | 60.00 ± 2.81 | 78.02 ± 1.15 | 89.95 ± 0.47 | 85.35 ± 1.57 |
| PairNorm | 27.40 ± 1.24 | 50.44 ± 2.04 | 62.74 ± 2.82 | 73.59 ± 1.47 | 87.53 ± 0.44 | 85.79 ± 1.01 |
| GraphSAGE | 34.23 ± 0.99 | 41.61 ± 0.74 | 58.73 ± 1.68 | 76.04 ± 1.30 | 88.45 ± 0.50 | 86.90 ± 1.04 |
| GCN | 27.32 ± 1.10 | 53.43 ± 2.01 | 64.82 ± 2.24 | 76.50 ± 1.36 | 88.42 ± 0.50 | 86.98 ± 1.27 |
| GAT | 27.44 ± 0.89 | 40.72 ± 1.55 | 60.26 ± 2.50 | 76.55 ± 1.23 | 87.30 ± 1.10 | 86.33 ± 0.48 |
| MLP | 36.53 ± 0.70 | 28.77 ± 1.56 | 46.21 ± 2.99 | 74.02 ± 1.90 | 75.69 ± 2.00 | 87.16 ± 0.37 |
| CGNN | 35.95 ± 0.86 | 29.24 ± 1.09 | 46.89 ± 1.66 | 76.91 ± 1.81 | 87.70 ± 0.49 | 87.10 ± 1.35 |
| GRAND | 35.62 ± 1.01 | 40.05 ± 1.50 | 54.67 ± 2.54 | 76.46 ± 1.77 | 89.02 ± 0.51 | 87.36 ± 0.96 |
| Sheaf (max) | 37.81 ± 1.15 | 56.34 ± 1.32 | 68.04 ± 1.58 | 76.70 ± 1.57 | 89.49 ± 0.40 | 86.90 ± 1.13 |
| GRAFFNL | 35.96 ± 0.95 | 59.01 ± 1.31 | 71.38 ± 1.47 | 76.81 ± 1.12 | 89.81 ± 0.50 | 87.81 ± 1.13 |
| GREAD | 37.90 ± 1.17 | 59.22 ± 1.44 | 71.38 ± 1.30 | 77.60 ± 1.81 | 90.23 ± 0.55 | 88.57 ± 0.66 |
| GraphCON | 35.58 ± 1.24 | 35.51 ± 1.40 | 49.63 ± 1.89 | 76.36 ± 2.67 | 88.01 ± 0.47 | 87.22 ± 1.48 |
| ACMP | 34.93 ± 1.26 | 40.05 ± 1.53 | 57.59 ± 2.09 | 76.71 ± 1.77 | 87.79 ± 0.47 | 87.71 ± 0.95 |
| GCN+DropEdge | 29.93 ± 0.80 | 41.30 ± 1.77 | 59.06 ± 2.04 | 76.57 ± 2.68 | 86.97 ± 0.42 | 83.54 ± 1.06 |
| GAT+DropEdge | 28.95 ± 0.76 | 41.27 ± 1.76 | 58.95 ± 2.13 | 76.13 ± 2.20 | 86.91 ± 0.45 | 83.54 ± 1.06 |
| FLODE | 37.16 ± 1.42 | 64.23 ± 1.84 | 73.60 ± 1.55 | 78.07 ± 1.62 | 89.02 ± 0.38 | 86.44 ± 1.17 |

(b) Directed graphs.
| | Film | Squirrel | Chameleon |
|---|---|---|---|
| ACM | 36.89 ± 1.18 | 54.4 ± 1.88 | 67.08 ± 2.04 |
| HLP | 34.59 ± 1.32 | 74.17 ± 1.83 | 77.48 ± 1.50 |
| FSGNN | 35.67 ± 0.69 | 73.48 ± 2.13 | 78.14 ± 1.25 |
| GRAFF | 37.11 ± 1.08 | 58.72 ± 0.84 | 71.08 ± 1.75 |
| FLODE | 37.41 ± 1.06 | 74.03 ± 1.58 | 77.98 ± 1.05 |

(c) Heterophily-specific graphs. For Minesweeper, Tolokers and Questions the evaluation metric is the AUROC.
| | Roman-empire | Minesweeper | Tolokers | Questions |
|---|---|---|---|---|
| ResNet | 65.88 ± 0.38 | 50.89 ± 1.39 | 72.95 ± 1.06 | 70.34 ± 0.76 |
| ResNet+SGC | 73.90 ± 0.51 | 70.88 ± 0.90 | 80.70 ± 0.97 | 75.81 ± 0.96 |
| ResNet+adj | 52.25 ± 0.40 | 50.42 ± 0.83 | 78.78 ± 1.11 | 75.77 ± 1.24 |
| GCN | 73.69 ± 0.74 | 89.75 ± 0.52 | 83.64 ± 0.67 | 76.09 ± 1.27 |
| GraphSAGE | 85.74 ± 0.67 | 93.51 ± 0.57 | 82.43 ± 0.44 | 76.44 ± 0.62 |
| GAT | 80.87 ± 0.30 | 92.01 ± 0.68 | 83.70 ± 0.47 | 77.43 ± 1.20 |
| GAT-sep | 88.75 ± 0.41 | 93.91 ± 0.35 | 83.78 ± 0.43 | 76.79 ± 0.71 |
| GT | 86.51 ± 0.73 | 91.85 ± 0.76 | 83.23 ± 0.64 | 77.95 ± 0.68 |
| GT-sep | 87.32 ± 0.39 | 92.29 ± 0.47 | 82.52 ± 0.92 | 78.05 ± 0.93 |
| H2GCN | 60.11 ± 0.52 | 89.71 ± 0.31 | 73.35 ± 1.01 | 63.59 ± 1.46 |
| CPGNN | 63.96 ± 0.62 | 52.03 ± 5.46 | 73.36 ± 1.01 | 65.96 ± 1.95 |
| GPRGNN | 64.85 ± 0.27 | 86.24 ± 0.61 | 72.94 ± 0.97 | 55.48 ± 0.91 |
| FSGNN | 79.92 ± 0.56 | 90.08 ± 0.70 | 82.76 ± 0.61 | 78.86 ± 0.92 |
| GloGNN | 59.63 ± 0.69 | 51.08 ± 1.23 | 73.39 ± 1.17 | 65.74 ± 1.19 |
| FAGCN | 65.22 ± 0.56 | 88.17 ± 0.73 | 77.75 ± 1.05 | 77.24 ± 1.26 |
| GBK-GNN | 74.57 ± 0.47 | 90.85 ± 0.58 | 81.01 ± 0.67 | 74.47 ± 0.86 |
| JacobiConv | 71.14 ± 0.42 | 89.66 ± 0.40 | 68.66 ± 0.65 | 73.88 ± 1.16 |
| FLODE | 74.97 ± 0.53 | 92.43 ± 0.51 | 84.17 ± 0.58 | 78.39 ± 1.22 |

Table 5: Selected hyperparameters, learned exponent, step size, and Dirichlet energy in the last layer for real-world datasets.
(a) Undirected.
| | Film | Squirrel | Chameleon | CiteSeer | Pubmed | Cora |
|---|---|---|---|---|---|---|
| learning rate | $10^{-3}$ | $2.5\cdot 10^{-3}$ | $5\cdot 10^{-3}$ | $10^{-2}$ | $10^{-2}$ | $10^{-2}$ |
| weight decay | $5\cdot 10^{-4}$ | $5\cdot 10^{-4}$ | $10^{-3}$ | $5\cdot 10^{-3}$ | $10^{-3}$ | $5\cdot 10^{-3}$ |
| hidden channels | 256 | 64 | 64 | 64 | 64 | 64 |
| num. layers | 1 | 6 | 4 | 2 | 3 | 2 |
| encoder layers | 3 | 1 | 1 | 1 | 3 | 1 |
| decoder layers | 2 | 2 | 2 | 1 | 1 | 2 |
| input dropout | 0 | $1.5\cdot 10^{-1}$ | 0 | 0 | $5\cdot 10^{-2}$ | 0 |
| decoder dropout | $10^{-1}$ | $10^{-1}$ | 0 | 0 | $10^{-1}$ | 0 |
| exponent | 1.001 ± 0.003 | 0.17 ± 0.03 | 0.35 ± 0.15 | 0.92 ± 0.03 | 0.82 ± 0.07 | 0.90 ± 0.02 |
| step size | 0.991 ± 0.002 | 1.08 ± 0.01 | 1.22 ± 0.03 | 1.04 ± 0.02 | 1.12 ± 0.02 | 1.06 ± 0.01 |
| Dirichlet energy | 0.246 ± 0.006 | 0.40 ± 0.02 | 0.13 ± 0.03 | 0.021 ± 0.001 | 0.015 ± 0.001 | 0.0227 ± 0.0006 |

(b) Directed.
| | Film | Squirrel | Chameleon |
|---|---|---|---|
| learning rate | $10^{-3}$ | $2.5\cdot 10^{-3}$ | $10^{-2}$ |
| weight decay | $5\cdot 10^{-4}$ | $5\cdot 10^{-4}$ | $10^{-3}$ |
| hidden channels | 256 | 64 | 64 |
| num. layers | 1 | 6 | 5 |
| encoder layers | 3 | 1 | 1 |
| decoder layers | 2 | 2 | 2 |
| input dropout | 0 | $10^{-1}$ | 0 |
| decoder dropout | 0.1 | $10^{-1}$ | 0 |
| exponent | 1.001 ± 0.005 | 0.28 ± 0.06 | 0.30 ± 0.11 |
| step size | 0.990 ± 0.002 | 1.22 ± 0.02 | 1.22 ± 0.05 |
| Dirichlet energy | 0.316 ± 0.005 | 0.38 ± 0.02 | 0.27 ± 0.04 |

(c) Heterophily-specific graphs.
| | Roman-empire | Minesweeper | Tolokers | Questions |
|---|---|---|---|---|
| learning rate | $10^{-3}$ | $10^{-3}$ | $10^{-3}$ | $10^{-2}$ |
| weight decay | 0 | 0 | 0 | $5\cdot 10^{-4}$ |
| hidden channels | 512 | 512 | 512 | 128 |
| num. layers | 4 | 4 | 4 | 5 |
| encoder layers | 2 | 2 | 1 | 2 |
| decoder layers | 2 | 2 | 2 | 2 |
| input dropout | 0 | 0 | 0 | 0 |
| decoder dropout | 0 | 0 | 0 | 0 |
| exponent | 0.689 ± 0.038 | 0.749 ± 0.017 | 1.053 ± 0.041 | 1.090 ± 0.046 |
| step size | 0.933 ± 0.015 | 0.984 ± 0.004 | 0.993 ± 0.009 | 0.789 ± 0.062 |
| Dirichlet energy | 0.059 ± 0.003 | 0.173 ± 0.019 | 0.155 ± 0.013 | 0.092 ± 0.039 |

Table 6: Node classification accuracy of ordered DSBM graphs: top three models as $1^{\mathrm{st}}$, $2^{\mathrm{nd}}$ and $3^{\mathrm{rd}}$.

(a) Varying edge density.
| | $\alpha^* = 0.1$ | $\alpha^* = 0.08$ | $\alpha^* = 0.05$ |
|---|---|---|---|
| ChebNet | 19.9 ± 0.6 | 20.0 ± 0.7 | 20.0 ± 0.7 |
| GCN-D | 68.9 ± 2.1 | 67.6 ± 2.7 | 58.5 ± 2.0 |
| APPNP-D | 97.7 ± 1.7 | 95.9 ± 2.2 | 90.3 ± 2.4 |
| GraphSAGE-D | 20.1 ± 1.1 | 19.9 ± 0.8 | 19.9 ± 1.0 |
| GIN-D | 57.3 ± 5.8 | 55.4 ± 5.5 | 50.9 ± 7.7 |
| GAT-D | 42.1 ± 5.3 | 39.0 ± 7.0 | 37.2 ± 5.5 |
| DGCN | 84.9 ± 7.2 | 81.2 ± 8.2 | 64.4 ± 12.4 |
| DiGraph | 82.1 ± 1.7 | 77.7 ± 1.6 | 66.1 ± 2.4 |
| DiGraphIB | 99.2 ± 0.5 | 97.7 ± 0.7 | 89.3 ± 1.7 |
| MagNet | 99.6 ± 0.2 | 98.3 ± 0.8 | 94.1 ± 1.2 |
| FLODE | 99.3 ± 0.1 | 98.8 ± 0.1 | 97.5 ± 0.1 |

(b) Varying net flow.
| $\beta^*$ | 0.05 | 0.10 | 0.15 | 0.20 | 0.25 | 0.30 | 0.35 | 0.40 |
|---|---|---|---|---|---|---|---|---|
| ChebNet | 19.9 ± 0.7 | 20.1 ± 0.6 | 20.0 ± 0.6 | 20.1 ± 0.8 | 19.9 ± 0.9 | 20.0 ± 0.5 | 19.7 ± 0.9 | 20.0 ± 0.5 |
| GCN-D | 68.6 ± 2.2 | 74.1 ± 1.8 | 75.5 ± 1.3 | 74.9 ± 1.3 | 72.0 ± 1.4 | 65.4 ± 1.6 | 58.1 ± 2.4 | 45.6 ± 4.7 |
| APPNP-D | 97.4 ± 1.8 | 94.3 ± 2.4 | 89.4 ± 3.6 | 79.8 ± 9.0 | 69.4 ± 3.9 | 59.6 ± 4.9 | 51.8 ± 4.5 | 39.4 ± 5.3 |
| GraphSAGE-D | 20.2 ± 1.2 | 20.0 ± 1.0 | 20.0 ± 0.8 | 20.0 ± 0.7 | 19.6 ± 0.9 | 19.8 ± 0.7 | 19.9 ± 0.9 | 19.9 ± 0.8 |
| GIN-D | 57.9 ± 6.3 | 48.0 ± 11.4 | 32.7 ± 12.9 | 26.5 ± 10.0 | 23.8 ± 6.0 | 20.6 ± 3.0 | 20.5 ± 2.8 | 19.8 ± 0.5 |
| GAT-D | 42.0 ± 4.8 | 32.7 ± 5.1 | 25.6 ± 3.8 | 19.9 ± 1.4 | 20.0 ± 1.0 | 19.8 ± 0.8 | 19.6 ± 0.2 | 19.5 ± 0.2 |
| DGCN | 81.4 ± 1.1 | 84.7 ± 0.7 | 85.5 ± 1.0 | 86.2 ± 0.8 | 84.2 ± 1.1 | 78.4 ± 1.3 | 69.6 ± 1.5 | 54.3 ± 1.5 |
| DiGraph | 82.5 ± 1.4 | 82.9 ± 1.9 | 81.9 ± 1.1 | 79.7 ± 1.3 | 73.5 ± 1.9 | 67.4 ± 2.8 | 57.8 ± 1.6 | 43.0 ± 7.1 |
| DiGraphIB | 99.2 ± 0.4 | 97.9 ± 0.6 | 94.1 ± 1.7 | 88.7 ± 2.0 | 82.3 ± 2.7 | 70.0 ± 2.2 | 57.8 ± 6.4 | 41.0 ± 9.0 |
| MagNet | 99.6 ± 0.2 | 99.0 ± 1.0 | 97.5 ± 0.8 | 94.2 ± 1.6 | 88.7 ± 1.9 | 79.4 ± 2.9 | 68.8 ± 2.4 | 51.8 ± 3.1 |
| FLODE | 99.3 ± 0.1 | 98.5 ± 0.1 | 96.7 ± 0.2 | 92.8 ± 0.1 | 87.2 ± 0.3 | 77.1 ± 0.5 | 63.8 ± 0.3 | 50.1 ± 0.5 |

Following Zhang et al. (2021), we train our model in both experiments for 3000 epochs and use early stopping if the validation accuracy does not increase for 500 epochs. We select the best model based on the validation accuracy after sweeping over a few hyperparameters. We give exact numerical values for the experiments with the standard error in Table 6 and refer to Table 7 for the chosen hyperparameters.

DSBM with varying edge density. In the first experiment, the model is evaluated based on its performance on the DSBM with varying $\alpha_{i,j} = \alpha^{*}$, $\alpha^{*} \in \{0.1, 0.08, 0.05\}$ for $i \neq j$, which essentially changes the density of edges between different clusters. The other probabilities are fixed at $\alpha_{i,i} = 0.5$, $\beta_{i,i} = 0.5$ and $\beta_{i,j} = 0.05$ for $i > j$. The results are shown in Figure 6, with exact numerical values in Table 6a.

DSBM with varying net flow. In the other scenario, the model is tested on how it performs when the net flow from one cluster to another varies. This is achieved by keeping $\alpha_{i,j} = 0.1$ constant for all $i$ and $j$, and allowing $\beta_{i,j}$ to vary from 0.05 to 0.4. The other probabilities are fixed at $\alpha_{i,i} = 0.5$ and $\beta_{i,i} = 0.5$. The results are shown in Figure 6, with exact numerical values in Table 6b.

# A.3 Ablation Study

We perform an ablation study on Chameleon and Squirrel (directed, heterophilic), and Citeseer (undirected, homophilic). For this, we sweep over different model options using the same hyperparameters

Table 7: Selected hyperparameters for DSBM dataset.
(a) Varying edge density.
| | $\alpha^* = 0.1$ | $\alpha^* = 0.08$ | $\alpha^* = 0.05$ |
|---|---|---|---|
| learning rate | $5\cdot 10^{-3}$ | $5\cdot 10^{-3}$ | $5\cdot 10^{-3}$ |
| decay | $1\cdot 10^{-3}$ | $1\cdot 10^{-3}$ | $5\cdot 10^{-4}$ |
| input dropout | $1\cdot 10^{-1}$ | $2\cdot 10^{-1}$ | $1\cdot 10^{-1}$ |
| decoder dropout | $1\cdot 10^{-1}$ | $5\cdot 10^{-2}$ | $1\cdot 10^{-1}$ |
| hidden channels | 256 | 256 | 256 |

(b) Varying net flow.
| $\beta^*$ | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 |
|---|---|---|---|---|---|---|---|---|
| learning rate | $5\cdot 10^{-3}$ | $5\cdot 10^{-3}$ | $1\cdot 10^{-3}$ | $1\cdot 10^{-3}$ | $1\cdot 10^{-3}$ | $1\cdot 10^{-3}$ | $1\cdot 10^{-3}$ | $1\cdot 10^{-3}$ |
| decay | $1\cdot 10^{-3}$ | $1\cdot 10^{-3}$ | $5\cdot 10^{-4}$ | $1\cdot 10^{-3}$ | $1\cdot 10^{-3}$ | $1\cdot 10^{-3}$ | $5\cdot 10^{-4}$ | $1\cdot 10^{-3}$ |
| input dropout | $1\cdot 10^{-1}$ | $1\cdot 10^{-1}$ | $2\cdot 10^{-1}$ | $1\cdot 10^{-1}$ | $1\cdot 10^{-1}$ | $2\cdot 10^{-1}$ | $5\cdot 10^{-2}$ | $2\cdot 10^{-1}$ |
| decoder dropout | $1\cdot 10^{-1}$ | $1\cdot 10^{-1}$ | $5\cdot 10^{-2}$ | $5\cdot 10^{-2}$ | $5\cdot 10^{-2}$ | $1\cdot 10^{-1}$ | $2\cdot 10^{-1}$ | $1\cdot 10^{-1}$ |
| hidden channels | 256 | 256 | 256 | 256 | 256 | 256 | 256 | 256 |

via grid search. The test accuracy corresponding to the hyperparameters that yielded maximum validation accuracy is reported in Table 8.

The ablation study on Chameleon demonstrates that all the components of the model (learnable exponent, ODE framework with the Schrödinger equation, and directionality via the SNA) contribute to the performance of FLODE. The fact that performance drops when any of these components are not used suggests that they all play crucial roles in the model's ability to capture the structure and evolution of heterophilic graphs. It is important to note that the performance appears to be more dependent on the adjustable fraction in the FGL than on the use of the ODE framework, illustrating that the fractional Laplacian alone can effectively capture long-range dependencies. However, when the ODE framework is additionally employed, a noticeable decrease in variance is observed.

From Theory to Practice. We conduct an ablation study to investigate the role of depth on Chameleon, Citeseer, Cora, and Squirrel datasets. The results, depicted in Figure 8, demonstrate that the neural ODE framework enables GNNs to scale to large depths (256 layers). Moreover, we see that the fractional Laplacian improves over the standard Laplacian in the heterophilic graphs, which is supported by our claims in Section 5.2. We highlight that using only the fractional Laplacian without the neural ODE framework oftentimes outperforms the standard Laplacian with the neural ODE framework. This indicates the importance of the long-range connections built by the fractional Laplacian.

We further demonstrate the close alignment of our theoretical and experimental results, which enables us to precisely anticipate when the models will exhibit HFD or LFD behaviors. In this context, we calculate parameters (according to Theorem D.5) and illustrate at each depth the expected and observed behaviors. 
For Squirrel and Chameleon, which are heterophilic graphs, we observe that both their theoretical and empirical behaviors are HFD. Additionally, the learned exponent is small. In contrast, for Cora and Citeseer, we see the opposite. + +Finally, we employ the best hyperparameters in Table 5a to solve both fractional heat and Schrödinger graph ODEs, further substantiating the intimate link between our theoretical advancements and practical applications. + +Table 8: Ablation study on node classification task: top two models are indicated as ${1}^{\text{st }}$ and ${2}^{\text{nd }}$ +(a) Chameleon (directed, heterophilic). + +
| Graph | Update rule | Test accuracy | Dirichlet energy |
|---|---|---|---|
| D | $\mathbf{x}_{t+1} = \mathbf{x}_t - ih\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 77.79 ± 1.42 | 0.213 (t=5) |
| D | $\mathbf{x}_{t+1} = \mathbf{x}_t - ih\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 75.72 ± 1.13 | 0.169 (t=6) |
| D | $\mathbf{x}_{t+1} = -i\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 77.35 ± 2.22 | 0.177 (t=4) |
| D | $\mathbf{x}_{t+1} = -i\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 69.61 ± 1.59 | 0.178 (t=4) |
| U | $\mathbf{x}_{t+1} = \mathbf{x}_t - ih\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 73.60 ± 1.68 | 0.131 (t=4) |
| U | $\mathbf{x}_{t+1} = \mathbf{x}_t - ih\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 70.15 ± 0.86 | 0.035 (t=4) |
| U | $\mathbf{x}_{t+1} = -i\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 71.25 ± 3.04 | 0.118 (t=4) |
| U | $\mathbf{x}_{t+1} = -i\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 67.19 ± 2.49 | 0.040 (t=4) |
| D | $\mathbf{x}_{t+1} = \mathbf{x}_t - h\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 77.33 ± 1.47 | 0.378 (t=6) |
| D | $\mathbf{x}_{t+1} = \mathbf{x}_t - h\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 73.55 ± 0.94 | 0.165 (t=6) |
| D | $\mathbf{x}_{t+1} = -\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 74.12 ± 3.60 | 0.182 (t=4) |
| D | $\mathbf{x}_{t+1} = -\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 68.47 ± 2.77 | 0.208 (t=4) |

(b) Squirrel (directed, heterophilic).
| Graph | Update rule | Test accuracy | Dirichlet energy |
|---|---|---|---|
| D | $\mathbf{x}_{t+1} = \mathbf{x}_t - ih\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 74.03 ± 1.58 | 0.38 ± 0.02 |
| D | $\mathbf{x}_{t+1} = \mathbf{x}_t - ih\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 64.04 ± 2.25 | 0.35 ± 0.02 |
| D | $\mathbf{x}_{t+1} = -i\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 64.25 ± 1.85 | 0.46 ± 0.01 |
| D | $\mathbf{x}_{t+1} = -i\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 42.04 ± 1.58 | 0.29 ± 0.05 |
| U | $\mathbf{x}_{t+1} = \mathbf{x}_t - ih\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 64.23 ± 1.84 | 0.40 ± 0.02 |
| U | $\mathbf{x}_{t+1} = \mathbf{x}_t - ih\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 55.19 ± 1.52 | 0.26 ± 0.03 |
| U | $\mathbf{x}_{t+1} = -i\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 61.40 ± 2.15 | 0.43 ± 0.01 |
| U | $\mathbf{x}_{t+1} = -i\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 41.19 ± 1.95 | 0.20 ± 0.02 |
| D | $\mathbf{x}_{t+1} = \mathbf{x}_t - h\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 71.86 ± 1.65 | 0.50 ± 0.01 |
| D | $\mathbf{x}_{t+1} = \mathbf{x}_t - h\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 59.34 ± 1.78 | 0.43 ± 0.03 |
| D | $\mathbf{x}_{t+1} = -\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 42.91 ± 7.86 | 0.32 ± 0.08 |
| D | $\mathbf{x}_{t+1} = -\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 35.37 ± 1.69 | 0.25 ± 0.05 |
| U | $\mathbf{x}_{t+1} = \mathbf{x}_t - h\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 62.95 ± 2.02 | 0.61 ± 0.08 |
| U | $\mathbf{x}_{t+1} = \mathbf{x}_t - h\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 52.19 ± 1.17 | 0.51 ± 0.07 |
| U | $\mathbf{x}_{t+1} = -\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 59.04 ± 0.02 | 0.44 ± 0.02 |
| U | $\mathbf{x}_{t+1} = -\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 39.69 ± 1.54 | 0.20 ± 0.02 |

(c) Citeseer (undirected, homophilic).
| Update rule | Test accuracy | Dirichlet energy |
|---|---|---|
| $\mathbf{x}_{t+1} = \mathbf{x}_t - ih\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 78.07 ± 1.62 | 0.021 (t=5) |
| $\mathbf{x}_{t+1} = \mathbf{x}_t - ih\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 77.97 ± 2.29 | 0.019 (t=4) |
| $\mathbf{x}_{t+1} = -i\mathbf{L}^{\alpha}\mathbf{x}_t\mathbf{W}$ | 77.27 ± 2.10 | 0.011 (t=6) |
| $\mathbf{x}_{t+1} = -i\mathbf{L}\mathbf{x}_t\mathbf{W}$ | 77.97 ± 2.23 | 0.019 (t=4) |

Table 9: Learned $\alpha$ and spectrum of $\mathbf{W}$. According to Theorem 5.3, we denote $\mathrm{FD} := \lambda_{K}(\mathbf{W})\mathrm{f}_{\alpha}(\lambda_{1}(\mathbf{L})) - \lambda_{1}(\mathbf{W})$ and $\mathrm{FD} := \Im(\lambda_K(\mathbf{W}))\mathrm{f}_\alpha(\lambda_1(\mathbf{L})) - \Im(\lambda_1(\mathbf{W}))$ for the fractional heat (H) and Schrödinger (S) graph ODEs, respectively. The heterophilic graphs Squirrel and Chameleon exhibit HFD since $\mathrm{FD} < 0$, while the homophilic Cora, Citeseer, Pubmed exhibit LFD since $\mathrm{FD} > 0$.
| | | Film | Squirrel | Chameleon | CiteSeer | Pubmed | Cora |
|---|---|---|---|---|---|---|---|
| | $\lambda_1(\mathbf{L})$ | -0.9486 | -0.8896 | -0.9337 | -0.5022 | -0.6537 | -0.4826 |
| H | $\alpha$ | 1.008 ± 0.007 | 0.19 ± 0.05 | 0.37 ± 0.14 | 0.89 ± 0.06 | 1.15 ± 0.08 | 0.89 ± 0.01 |
| H | $\lambda_1(\mathbf{W})$ | -2.774 ± 0.004 | -1.62 ± 0.03 | -1.81 ± 0.02 | -1.76 ± 0.01 | -1.66 ± 0.06 | -1.81 ± 0.01 |
| H | $\lambda_K(\mathbf{W})$ | 2.858 ± 0.009 | 2.21 ± 0.03 | 2.29 ± 0.05 | 2.28 ± 0.06 | 1.1 ± 0.3 | 2.32 ± 0.01 |
| H | FD | 0.367 ± 0.001 | -0.54 ± 0.02 | -0.42 ± 0.04 | 0.52 ± 0.02 | 0.97 ± 0.09 | 0.60 ± 0.01 |
| S | $\alpha$ | 1.000 ± 0.002 | 0.17 ± 0.03 | 0.34 ± 0.11 | 0.90 ± 0.07 | 0.76 ± 0.07 | 0.90 ± 0.02 |
| S | $\Im(\lambda_1(\mathbf{W}))$ | -2.795 ± 0.001 | -1.68 ± 0.01 | -1.79 ± 0.01 | -1.70 ± 0.04 | -1.74 ± 0.01 | -1.78 ± 0.01 |
| S | $\Im(\lambda_K(\mathbf{W}))$ | 2.880 ± 0.002 | 2.21 ± 0.03 | 2.46 ± 0.02 | 2.29 ± 0.07 | 0.98 ± 0.09 | 2.30 ± 0.02 |
| S | FD | 0.4945 ± 0.0001 | -0.48 ± 0.03 | -0.62 ± 0.03 | 0.46 ± 0.06 | 1.03 ± 0.05 | 0.59 ± 0.01 |

![](images/0291e75cc2a0af6f4f1fc7f13d2481e14511d24c0205c1870815ab1dbf99fd45.jpg)
Figure 8: Ablation study on the effect of different update rules and different number of layers on undirected datasets. The x-axis shows the number of layers $2^{L}$ for $L \in \{0, \dots, 8\}$. FD is calculated according to Theorem 5.3.

# B Appendix for Section 3

Proposition 3.3. Let $\mathcal{G}$ be a directed graph with SNA $\mathbf{L}$. For every $\lambda \in \lambda(\mathbf{L})$, it holds $|\lambda| \leq 1$ and $\lambda(\mathbf{I} - \mathbf{L}) = 1 - \lambda(\mathbf{L})$.

Proof. We show that the numerical range $\mathcal{W}(\mathbf{L}) = \{\mathbf{x}^{\mathsf{H}}\mathbf{L}\mathbf{x} : \mathbf{x}^{\mathsf{H}}\mathbf{x} = 1\}$ satisfies $|\mu| \leq 1$ for every $\mu \in \mathcal{W}(\mathbf{L})$. As $\mathcal{W}(\mathbf{L})$ contains all eigenvalues of $\mathbf{L}$, the thesis follows.

Let $\mathbf{A}$ be the adjacency matrix of $\mathcal{G}$ and $\mathbf{x} \in \mathbb{C}^N$ with $\mathbf{x}^{\mathrm{H}}\mathbf{x} = 1$. 
Applying the Cauchy-Schwartz inequality in (2) and (3), we get + +$$ +\begin{array}{l} \left| \mathbf {x} ^ {\mathsf {H}} \mathbf {L} \mathbf {x} \right| \stackrel {(1)} {\leq} \sum_ {i = 1} ^ {N} \sum_ {j = 1} ^ {N} a _ {i, j} \frac {\left| x _ {i} \right| \left| x _ {j} \right|}{\sqrt {d _ {i} ^ {i n} d _ {j} ^ {o u t}}} \\ = \sum_ {i = 1} ^ {N} \frac {\left| x _ {i} \right|}{\sqrt {d _ {i} ^ {i n}}} \sum_ {j = 1} ^ {N} a _ {i, j} \frac {\left| x _ {j} \right|}{\sqrt {d _ {j} ^ {o u t}}} \\ \stackrel {(2)} {\leq} \sum_ {i = 1} ^ {N} \frac {| x _ {i} |}{\sqrt {d _ {i} ^ {i n}}} \sqrt {\sum_ {j = 1} ^ {N} a _ {i , j} \frac {| x _ {j} | ^ {2}}{d _ {j} ^ {o u t}} \sum_ {j = 1} ^ {N} a _ {i , j}} \\ = \sum_ {i = 1} ^ {N} | x _ {i} | \sqrt {\sum_ {j = 1} ^ {N} a _ {i , j} \frac {| x _ {j} | ^ {2}}{d _ {j} ^ {o u t}}} \\ \stackrel {(3)} {\leq} \sqrt {\sum_ {i = 1} ^ {N} | x _ {i} | ^ {2} \sum_ {i = 1} ^ {N} \sum_ {j = 1} ^ {N} a _ {i , j} \frac {| x _ {j} | ^ {2}}{d _ {j} ^ {o u t}}} \\ = \sum_ {i = 1} ^ {N} \left| x _ {i} \right| ^ {2}, \\ \end{array} +$$ + +where we used $a_{i,j}^2 = a_{i,j}$ . We have $\sum_{i=1}^{N}|x_i|^2 = \mathbf{x}^\mathsf{H}\mathbf{x} = 1$ such that $\mathcal{W}(\mathbf{L}) \subset [-1,1]$ follows. The second claim follows directly by $(\mathbf{I} - \mathbf{L})\mathbf{v} = \mathbf{v} - \lambda \mathbf{v} = (1 - \lambda)\mathbf{v}$ . + +Proposition 3.5. Let $\mathcal{G}$ be a directed graph with SNA L. Then $1 \in \lambda(\mathbf{L})$ if and only if the graph is weakly balanced. Suppose the graph is strongly connected; then $-1 \in \lambda(\mathbf{L})$ if and only if the graph is weakly balanced with an even period. + +Proof. Since the numerical range is only a superset of the set of eigenvalues, we cannot simply consider when the inequalities (1) - (3) in the previous proof are actual equalities. Therefore, we have to find another way to prove the statement. 
Suppose that the graph is weakly balanced; then

$$
\sum_{j=1}^{N} a_{i,j}\left(\frac{k_j}{\sqrt{d_j^{\mathrm{out}}}} - \frac{k_i}{\sqrt{d_i^{\mathrm{in}}}}\right) = 0, \quad \forall i \in \{1, \dots, N\}.
$$

We will prove that $\mathbf{k} = (k_i)_{i=1}^{N}$ is an eigenvector corresponding to the eigenvalue 1:

$$
(\mathbf{L}\mathbf{k})_i = \sum_{j=1}^{N} \frac{a_{i,j}}{\sqrt{d_i^{\mathrm{in}} d_j^{\mathrm{out}}}} k_j = \frac{1}{\sqrt{d_i^{\mathrm{in}}}} \sum_{j=1}^{N} \frac{a_{i,j}}{\sqrt{d_j^{\mathrm{out}}}} k_j = \frac{1}{\sqrt{d_i^{\mathrm{in}}}} \sum_{j=1}^{N} \frac{a_{i,j}}{\sqrt{d_i^{\mathrm{in}}}} k_i = \frac{1}{d_i^{\mathrm{in}}}\left(\sum_{j=1}^{N} a_{i,j}\right) k_i = k_i.
$$

For the other direction, suppose that there exists $\mathbf{x} \in \mathbb{R}^N$ such that $\mathbf{x} \neq 0$ and $\mathbf{x} = \mathbf{L}\mathbf{x}$. Then, for all $i \in \{1, \dots, N\}$,

$$
0 = (\mathbf{L}\mathbf{x})_i - x_i = \sum_{j=1}^{N} \frac{a_{i,j}}{\sqrt{d_i^{\mathrm{in}} d_j^{\mathrm{out}}}} x_j - x_i = \sum_{j=1}^{N} \frac{a_{i,j}}{\sqrt{d_i^{\mathrm{in}} d_j^{\mathrm{out}}}} x_j - \sum_{j=1}^{N} \frac{a_{i,j}}{d_i^{\mathrm{in}}} x_i
$$

$$
= \sum_{j=1}^{N} \frac{a_{i,j}}{\sqrt{d_i^{\mathrm{in}}}}\left(\frac{x_j}{\sqrt{d_j^{\mathrm{out}}}} - \frac{x_i}{\sqrt{d_i^{\mathrm{in}}}}\right);
$$

hence, the graph is weakly balanced.

By the Perron–Frobenius theorem for irreducible non-negative matrices, one gets that $\mathbf{L}$ has exactly $h$ eigenvalues of maximal modulus, corresponding to the $h$-th roots of unity, where $h$ is the period of $\mathbf{L}$. 
Hence, $-1$ is an eigenvalue of $\mathbf{L}$ if and only if the graph is weakly balanced and $h$ is even. + +Proposition 3.6. For every $\mathbf{x} \in \mathbb{C}^{N \times K}$ , we have + +$$ +\mathfrak {R} \left(\operatorname {t r a c e} \left(\mathbf {x} ^ {\mathsf {H}} (\mathbf {I} - \mathbf {L}) \mathbf {x}\right)\right) = \frac {1}{2} \sum_ {i, j = 1} ^ {N} a _ {i, j} \left\| \frac {\mathbf {x} _ {i}}{\sqrt {d _ {i} ^ {i n}}} - \frac {\mathbf {x} _ {j}}{\sqrt {d _ {j} ^ {o u t}}} \right\| _ {2} ^ {2}, +$$ + +Moreover, there exists $\mathbf{x} \neq 0$ such that $\mathcal{E}(\mathbf{x}) = 0$ if and only if the graph is weakly balanced. + +Proof. By direct computation, it holds + +$$ +\begin{array}{l} \frac {1}{2} \sum_ {i, j = 1} ^ {N} a _ {i, j} \left\| \frac {x _ {i , :}}{\sqrt {d _ {i} ^ {\mathrm {i n}}}} - \frac {x _ {j , :}}{\sqrt {d _ {j} ^ {\mathrm {o u t}}}} \right\| _ {2} ^ {2} \\ = \frac {1}{2} \sum_ {i, j = 1} ^ {N} a _ {i, j} \sum_ {k = 1} ^ {K} \left| \frac {x _ {i , k}}{\sqrt {d _ {i} ^ {\mathrm {i n}}}} - \frac {x _ {j , k}}{\sqrt {d _ {j} ^ {\mathrm {o u t}}}} \right| ^ {2} \\ = \frac {1}{2} \sum_ {i, j = 1} ^ {N} a _ {i, j} \sum_ {k = 1} ^ {K} \left(\frac {x _ {i , k}}{\sqrt {d _ {i} ^ {\mathrm {i n}}}} - \frac {x _ {j , k}}{\sqrt {d _ {j} ^ {\mathrm {o u t}}}}\right) ^ {*} \left(\frac {x _ {i , k}}{\sqrt {d _ {i} ^ {\mathrm {i n}}}} - \frac {x _ {j , k}}{\sqrt {d _ {j} ^ {\mathrm {o u t}}}}\right) \\ = \frac {1}{2} \sum_ {i, j = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {i, j} \frac {\left| x _ {i , k} \right| ^ {2}}{d _ {i} ^ {\mathrm {i n}}} + \frac {1}{2} \sum_ {i, j = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {i, j} \frac {\left| x _ {j , k} \right| ^ {2}}{d _ {j} ^ {\mathrm {o u t}}} \\ - \frac {1}{2} \sum_ {i, j = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {i, j} \frac {x _ {i , k} ^ {*} x _ {j , k}}{\sqrt {d _ {i} ^ {\text {i n}} d _ {j} ^ {\text {o u t}}}} - \frac {1}{2} \sum_ {i, j = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {i, j} \frac {x _ {i , k} x _ {j , k} ^ 
{*}}{\sqrt {d _ {i} ^ {\text {i n}} d _ {j} ^ {\text {o u t}}}} \\ = \frac {1}{2} \sum_ {i = 1} ^ {N} \sum_ {k = 1} ^ {K} | x _ {i, k} | ^ {2} + \frac {1}{2} \sum_ {j = 1} ^ {N} \sum_ {k = 1} ^ {K} | x _ {j, k} | ^ {2} - \frac {1}{2} \sum_ {i, j = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {i, j} \frac {x _ {i , k} ^ {*} x _ {j , k}}{\sqrt {d _ {i} ^ {\text {i n}} d _ {j} ^ {\text {o u t}}}} - \frac {1}{2} \sum_ {i, j = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {i, j} \frac {x _ {i , k} x _ {j , k} ^ {*}}{\sqrt {d _ {i} ^ {\text {i n}} d _ {j} ^ {\text {o u t}}}} \\ = \sum_ {i = 1} ^ {N} \sum_ {k = 1} ^ {K} | x _ {i, k} | ^ {2} - \frac {1}{2} \sum_ {i, j = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {i, j} \frac {(\mathbf {x} ^ {\mathsf {H}}) _ {k , i} x _ {j , k}}{\sqrt {d _ {i} ^ {\text {i n}} d _ {j} ^ {\text {o u t}}}} - \frac {1}{2} \sum_ {i, j = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {i, j} \frac {x _ {i , k} (\mathbf {x} ^ {\mathsf {H}}) _ {k , j}}{\sqrt {d _ {i} ^ {\text {i n}} d _ {j} ^ {\text {o u t}}}} \\ = \sum_ {i = 1} ^ {N} \sum_ {k = 1} ^ {K} \left| x _ {i, k} \right| ^ {2} - \frac {1}{2} \sum_ {i, j = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {i, j} \frac {\left(\mathbf {x} ^ {\mathsf {H}}\right) _ {k , i} x _ {j , k}}{\sqrt {d _ {i} ^ {\text {i n}} d _ {j} ^ {\text {o u t}}}} - \frac {1}{2} \left(\sum_ {i, j = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {i, j} \frac {\left(\mathbf {x} ^ {\mathsf {H}}\right) _ {k , i} x _ {j , k}}{\sqrt {d _ {i} ^ {\text {i n}} d _ {j} ^ {\text {o u t}}}}\right) ^ {*} \\ = \Re \left(\sum_ {i = 1} ^ {N} \sum_ {k = 1} ^ {K} \left| x _ {i, k} \right| ^ {2} - \sum_ {i, j = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {i, j} \frac {x _ {i , k} ^ {*} x _ {j , k}}{\sqrt {a _ {i} ^ {\ln} d _ {j} ^ {\mathrm {o u t}}}}\right) \\ = \Re \left(\operatorname {t r a c e} \left(\mathbf {x} ^ {\mathsf {H}} (\mathbf {I} - \mathbf {L}) \mathbf {x}\right)\right). \\ \end{array} +$$ + +The last claim can be proved as follows. For simplicity, suppose $\mathbf{x} \in \mathbb{R}^N$ . 
The “$\Leftarrow$” direction is clear, since one can choose $\mathbf{x}$ to be $\mathbf{k}$. To prove the “$\Rightarrow$” direction, we reason by contradiction. Suppose there exists an $\mathbf{x} \neq 0$ such that $\mathcal{E}(\mathbf{x}) = 0$ and the underlying graph is not weakly balanced, i.e.,

$$
\forall \tilde{\mathbf{x}} \neq \mathbf{0}, \quad \left|\sum_{j=1}^{N} a_{i,j}\left(\frac{\tilde{x}_j}{\sqrt{d_j^{\mathrm{out}}}} - \frac{\tilde{x}_i}{\sqrt{d_i^{\mathrm{in}}}}\right)\right| > 0, \quad \forall i \in \{1, \dots, N\}.
$$

Then, since $\mathbf{x} \neq 0$,

$$
\begin{aligned}
0 = \mathcal{E}(\mathbf{x}) &= \frac{1}{4}\sum_{i,j=1}^{N} a_{i,j}\left|\frac{x_i}{\sqrt{d_i^{\mathrm{in}}}} - \frac{x_j}{\sqrt{d_j^{\mathrm{out}}}}\right|^2 \\
&\geq \frac{1}{4}\sum_{i=1}^{N}\frac{1}{d_i^{\mathrm{in}}}\left(\sum_{j=1}^{N} a_{i,j}\left|\frac{x_i}{\sqrt{d_i^{\mathrm{in}}}} - \frac{x_j}{\sqrt{d_j^{\mathrm{out}}}}\right|^2\right)\left(\sum_{j=1}^{N} a_{i,j}\right) \\
&\geq \frac{1}{4}\sum_{i=1}^{N}\frac{1}{d_i^{\mathrm{in}}}\left(\sum_{j=1}^{N} a_{i,j}\left|\frac{x_i}{\sqrt{d_i^{\mathrm{in}}}} - \frac{x_j}{\sqrt{d_j^{\mathrm{out}}}}\right|\right)^2 \\
&\geq \frac{1}{4}\sum_{i=1}^{N}\frac{1}{d_i^{\mathrm{in}}}\left|\sum_{j=1}^{N} a_{i,j}\left(\frac{x_i}{\sqrt{d_i^{\mathrm{in}}}} - \frac{x_j}{\sqrt{d_j^{\mathrm{out}}}}\right)\right|^2 \\
&> 0,
\end{aligned}
$$

where we used the Cauchy–Schwarz and triangle inequalities, a contradiction.

We give the following simple corollary.

Corollary B.1. 
For every $\mathbf{x} \in \mathbb{R}^{N \times K}$, it holds $\mathcal{E}(\mathbf{x}) = \frac{1}{2} \Re\left(\mathrm{vec}(\mathbf{x})^{\mathrm{H}} (\mathbf{I} \otimes (\mathbf{I} - \mathbf{L}))\, \mathrm{vec}(\mathbf{x})\right)$.

# C Appendix for Section 4

In this section, we provide some properties of FGLs. The first statement shows that the FGL of a normal SNA $\mathbf{L}$ only changes the magnitude of the eigenvalues of $\mathbf{L}$.

Lemma C.1. Let $\mathbf{M}$ be a normal matrix with eigenvalues $\lambda_1, \ldots, \lambda_N$ and corresponding eigenvectors $\mathbf{v}_1, \ldots, \mathbf{v}_N$. Suppose $\mathbf{M} = \mathbf{L}\boldsymbol{\Sigma}\mathbf{R}^{\mathrm{H}}$ is its singular value decomposition. Then it holds

$$
\boldsymbol{\Sigma} = |\boldsymbol{\Lambda}|, \quad \mathbf{L} = \mathbf{V}, \quad \mathbf{R} = \mathbf{V}\exp(i\boldsymbol{\Theta}), \quad \boldsymbol{\Theta} = \mathrm{diag}\left(\{\theta_i\}_{i=1}^{N}\right), \quad \theta_i = \mathrm{atan2}\left(\Re\lambda_i, \Im\lambda_i\right).
$$

Proof. By hypothesis, there exists a unitary matrix $\mathbf{V}$ such that $\mathbf{M} = \mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^{\mathrm{H}}$; then

$$
\mathbf{M}\mathbf{M}^{\mathrm{H}} = \mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^{\mathrm{H}}\mathbf{V}\boldsymbol{\Lambda}^{*}\mathbf{V}^{\mathrm{H}} = \mathbf{V}|\boldsymbol{\Lambda}|^{2}\mathbf{V}^{\mathrm{H}},
$$

$$
\mathbf{M}\mathbf{M}^{\mathrm{H}} = \mathbf{L}\boldsymbol{\Sigma}\mathbf{R}^{\mathrm{H}}\mathbf{R}\boldsymbol{\Sigma}\mathbf{L}^{\mathrm{H}} = \mathbf{L}\boldsymbol{\Sigma}^{2}\mathbf{L}^{\mathrm{H}}.
$$

Therefore, $\boldsymbol{\Sigma} = |\boldsymbol{\Lambda}|$ and $\mathbf{L} = \mathbf{V}$, so that

$$
\mathbf{M} = \mathbf{V}|\boldsymbol{\Lambda}|\mathbf{R}^{\mathrm{H}}.
$$

Finally, we note that it must hold $\mathbf{R} = \mathbf{V}\exp(i\boldsymbol{\Theta})$, where $\boldsymbol{\Theta} = \mathrm{diag}\left(\{\mathrm{atan2}(\Re\lambda_i, \Im\lambda_i)\}_{i=1}^{N}\right)$ and $\mathrm{atan2}$ is the 2-argument arctangent.

We proceed by proving Theorem 4.1, which follows the proof of a similar result given in (Benzi et al., 2020) for the fractional Laplacian defined in the spectral domain of an in-degree normalized graph Laplacian. However, our result also holds for directed graphs and, in particular, for fractional Laplacians that are defined via the SVD of a graph SNA.

Lemma C.2. Let $\mathbf{M} \in \mathbb{R}^{n \times n}$ with singular values $\sigma(\mathbf{M}) \subset [a, b]$. For $f: [a, b] \to \mathbb{R}$, define $f(\mathbf{M}) = \mathbf{U}f(\boldsymbol{\Sigma})\mathbf{V}^{\mathrm{H}}$, where $\mathbf{M} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\mathrm{H}}$ is the singular value decomposition of $\mathbf{M}$. If $f$ has modulus of continuity $\omega$ and $d(i, j) \geq 2$, it holds

$$
\left|f(\mathbf{M})\right|_{i,j} \leq \left(1 + \frac{\pi^2}{2}\right)\omega\left(\frac{b-a}{2}\left|d(i, j) - 1\right|^{-1}\right).
$$

Proof. Let $g: [a, b] \to \mathbb{R}$ be any function; then

$$
\left\|f(\mathbf{M}) - g(\mathbf{M})\right\|_2 = \left\|\mathbf{U}f(\boldsymbol{\Sigma})\mathbf{V}^{\mathsf{H}} - \mathbf{U}g(\boldsymbol{\Sigma})\mathbf{V}^{\mathsf{H}}\right\|_2 = \|f(\boldsymbol{\Sigma}) - g(\boldsymbol{\Sigma})\|_2 = \|f - g\|_{\infty, \sigma(\mathbf{M})}.
$$

The second equality holds since the 2-norm is invariant under unitary transformations. 
By Jackson's Theorem, there exists for every $m \geq 1$ a polynomial $p_m$ of order $m$ such that

$$
\left\| f (\mathbf {M}) - p _ {m} (\mathbf {M}) \right\| _ {2} \leq \left\| f - p _ {m} \right\| _ {\infty , [ a, b ]} \leq \left(1 + \frac {\pi^ {2}}{2}\right) \omega \left(\frac {b - a}{2 m}\right).
$$

Fix $i,j\in \{1,\ldots ,n\}$ . If $d(i,j) = m + 1$ , then any power of $\mathbf{M}$ up to order $m$ has a zero entry in $(i,j)$ , i.e., $(\mathbf{M}^m)_{i,j} = 0$ . Hence, $f(\mathbf{M})_{i,j} = f(\mathbf{M})_{i,j} - p_m(\mathbf{M})_{i,j}$ , and we get

$$
| f (\mathbf {M}) _ {i, j} | \leq \| f (\mathbf {M}) - p _ {m} (\mathbf {M}) \| _ {2} \leq \left(1 + \frac {\pi^ {2}}{2}\right) \omega \left(\frac {b - a}{2 m}\right) = \left(1 + \frac {\pi^ {2}}{2}\right) \omega \left(\frac {b - a}{2} | d (i, j) - 1 | ^ {- 1}\right),
$$

from which the thesis follows.

Finally, we give a proof of Theorem 4.1, which is a consequence of the previous statement.

Proof of Theorem 4.1. The eigenvalues of $\mathbf{L}$ lie in the closed unit disk, i.e., $\| \mathbf{L}\| \leq 1$ . Hence, $\| \mathbf{LL}^{\mathsf{H}}\| \leq 1$ and the singular values of $\mathbf{L}$ are in $[0, 1]$ . By Lemma C.2 and the fact that $f(x) = x^{\alpha}$ has modulus of continuity $\omega (t) = t^{\alpha}$ the thesis follows.

# D Appendix for Section 5

In this section, we provide the appendix for Section 5. We begin by analyzing the solution of linear matrix ODEs. For this, let $\mathbf{M} \in \mathbb{C}^{N \times N}$ . For $\mathbf{x}_0 \in \mathbb{C}^N$ , consider the initial value problem

$$
\mathbf {x} ^ {\prime} (t) = - \mathbf {M} \mathbf {x} (t), \quad \mathbf {x} (0) = \mathbf {x} _ {0}. \tag {5}
$$

Theorem D.1 (Existence and uniqueness of linear ODE solution).
The initial value problem given by (5) has a unique solution $\mathbf{x}(t)\in \mathbb{C}^N$ for any initial condition $\mathbf{x}_0\in \mathbb{C}^N$ .

The solution of (5) can be expressed using matrix exponentials, even if $\mathbf{M}$ is not symmetric. The matrix exponential is defined as:

$$
\exp (- \mathbf {M} t) = \sum_ {k = 0} ^ {\infty} \frac {(- \mathbf {M}) ^ {k} t ^ {k}}{k !},
$$

where $\mathbf{M}^k$ is the $k$ -th power of the matrix $\mathbf{M}$ . The solution of (5) can then be written as

$$
\mathbf {x} (t) = \exp (- \mathbf {M} t) \mathbf {x} _ {0}. \tag {6}
$$

# D.1 Appendix for Section 5.1

In this section, we analyze the solutions to (2) and (3). We further provide a proof for Theorem 5.3. We begin by considering the solution to the fractional heat equation (2). The analysis for the Schrödinger equation (3) follows analogously.

The fractional heat equation $\mathbf{x}'(t) = -\mathbf{L}^{\alpha}\mathbf{x}\mathbf{W}$ can be vectorized via the identity $\operatorname{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^{\top} \otimes \mathbf{A})\operatorname{vec}(\mathbf{X})$ and rewritten via the Kronecker product as

$$
\operatorname {vec} (\mathbf {x}) ^ {\prime} (t) = - \mathbf {W} ^ {\top} \otimes \mathbf {L} ^ {\alpha} \operatorname {vec} (\mathbf {x}) (t). \tag {7}
$$

Since $\mathbf{W}^{\top}$ has the same eigenvalues as $\mathbf{W}$ , we write $\mathbf{W}$ in place of $\mathbf{W}^{\top}$ below to lighten notation. In the undirected case $\mathbf{L}$ and $\mathbf{I} - \mathbf{L}$ are both symmetric, and the eigenvalues satisfy the relation $\lambda_{i}(\mathbf{I} - \mathbf{L}) = 1 - \lambda_{i}(\mathbf{L})$ . The corresponding eigenvectors $\psi_{i}(\mathbf{L})$ and $\psi_{i}(\mathbf{I} - \mathbf{L})$ can be chosen to be the same for $\mathbf{L}$ and $\mathbf{I} - \mathbf{L}$ . In the following, we assume that these eigenvectors are orthonormalized.

If $\mathbf{L}$ is symmetric, we can decompose it via the spectral theorem into $\mathbf{L} = \mathbf{U}\mathbf{D}\mathbf{U}^T$ , where $\mathbf{U} = [\psi_1(\mathbf{L}),\dots,\psi_N(\mathbf{L})]$ is an orthogonal matrix containing the eigenvectors of $\mathbf{L}$ , and $\mathbf{D}$ is the diagonal matrix of eigenvalues.
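This spectral decomposition also gives a direct numerical recipe for fractional powers of a symmetric SNA. The following is a minimal numpy sketch; the 4-node path graph and the exponent 1/2 are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Symmetric SNA of an undirected path graph on 4 nodes:
# L = D^{-1/2} A D^{-1/2}, with A the adjacency and D the degree matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
L = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Spectral decomposition L = U diag(lam) U^T (eigh returns ascending eigenvalues).
lam, U = np.linalg.eigh(L)

def frac_power(U, lam, alpha):
    """Fractional power via f_alpha(x) = sign(x) |x|^alpha applied to the spectrum."""
    f = np.sign(lam) * np.abs(lam) ** alpha
    return U @ np.diag(f) @ U.T

L_half = frac_power(U, lam, 0.5)

# Sanity checks: alpha = 1 recovers L, and the eigenvectors are preserved,
# so L_half has eigenvalues f_{1/2}(lam) in the same (ascending) order.
assert np.allclose(frac_power(U, lam, 1.0), L)
assert np.allclose(np.linalg.eigvalsh(L_half), np.sign(lam) * np.abs(lam) ** 0.5)
```

For this path graph the spectrum is $\{-1, -1/2, 1/2, 1\}$ , so the negative branch of $\mathrm{f}_{\alpha}$ is actually exercised.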
Due to Lemma C.1, the fractional Laplacian $\mathbf{L}^{\alpha}$ can be written as $\mathbf{L}^{\alpha} = \mathbf{U}\mathrm{f}_{\alpha}(\mathbf{D})\mathbf{U}^{T}$ , where $\mathrm{f}_{\alpha}:\mathbb{R}\to \mathbb{R}$ is the map $x\mapsto \operatorname {sign}(x)|x|^{\alpha}$ and is applied element-wise. Clearly, the eigendecomposition of $\mathbf{L}^{\alpha}$ is given by the eigenvalues $\{\mathrm{f}_{\alpha}(\lambda_1(\mathbf{L})),\ldots ,\mathrm{f}_{\alpha}(\lambda_N(\mathbf{L}))\}$ and the corresponding eigenvectors $\{\psi_{1}(\mathbf{L}),\dots,\psi_{N}(\mathbf{L})\}$ .

Now, by well-known properties of the Kronecker product, one can write the eigendecomposition of $\mathbf{W} \otimes \mathbf{L}^{\alpha}$ as

$$
\{\lambda_ {r} (\mathbf {W}) \mathrm {f} _ {\alpha} (\lambda_ {l} (\mathbf {L})) \} _ {r \in \{1, \dots , K \}, l \in \{1, \dots , N \}}, \quad \{\psi_ {r} (\mathbf {W}) \otimes \psi_ {l} (\mathbf {L}) \} _ {r \in \{1, \dots , K \}, l \in \{1, \dots , N \}}.
$$

Note that $1 \in \lambda(\mathbf{L})$ and, since $\operatorname{trace}(\mathbf{L}) = 0$ , the SNA has at least one negative eigenvalue. This property is useful since it allows retrieving the indices $(r,l)$ corresponding to eigenvalues with minimal real (or imaginary) parts in a simple way.

The initial condition $\operatorname{vec}(\mathbf{x}_0)$ can be decomposed as

$$
\operatorname {vec} (\mathbf {x} _ {0}) = \sum_ {r = 1} ^ {K} \sum_ {l = 1} ^ {N} c _ {r, l} \psi_ {r} (\mathbf {W}) \otimes \psi_ {l} (\mathbf {L}), \quad c _ {r, l} = \left\langle \operatorname {vec} (\mathbf {x} _ {0}), \psi_ {r} (\mathbf {W}) \otimes \psi_ {l} (\mathbf {L}) \right\rangle .
$$

Then, the solution $\operatorname{vec}(\mathbf{x})(t)$ of (7) can be written as

$$
\operatorname {vec} (\mathbf {x}) (t) = \sum_ {r = 1} ^ {K} \sum_ {l = 1} ^ {N} c _ {r, l} \exp \left(- t \lambda_ {r} (\mathbf {W}) \mathrm{f} _ {\alpha} (\lambda_ {l} (\mathbf {L}))\right) \psi_ {r} (\mathbf {W}) \otimes \psi_ {l} (\mathbf {L}).
\tag {8}
$$

The following result shows the relationship between the frequencies of $\mathbf{I} - \mathbf{L}$ and the Dirichlet energy and serves as a basis for the following proofs.

Lemma D.2. Let $\mathcal{G}$ be a graph with SNA $\mathbf{L}$ . Consider $\mathbf{x}(t)\in \mathbb{C}^{N\times K}$ such that there exists $\varphi \in \mathbb{C}^{N\times K}\setminus \{0\}$ with

$$
\frac {\operatorname {vec} (\mathbf {x}) (t)}{\| \operatorname {vec} (\mathbf {x}) (t) \| _ {2}} \xrightarrow {t \to \infty} \operatorname {vec} (\boldsymbol {\varphi}),
$$

and $(\mathbf{I}\otimes (\mathbf{I} - \mathbf{L}))\operatorname{vec}(\pmb {\varphi}) = \lambda \operatorname{vec}(\pmb {\varphi})$ . Then,

$$
\mathscr {E} \left(\frac {\mathbf {x} (t)}{\| \mathbf {x} (t) \| _ {2}}\right) \xrightarrow {t \to \infty} \frac {\Re (\lambda)}{2}.
$$

Proof. As $\operatorname{vec}(\varphi)$ is the limit of unit vectors, $\operatorname{vec}(\varphi)$ is a unit vector itself. We calculate its Dirichlet energy,

$$
\mathcal {E} (\operatorname{vec} (\pmb {\varphi})) = \frac {1}{2} \Re \left(\operatorname{vec} (\pmb {\varphi}) ^ {\mathrm {H}} (\mathbf {I} \otimes (\mathbf {I} - \mathbf {L})) \operatorname{vec} (\pmb {\varphi})\right) = \frac {1}{2} \Re \left(\lambda \operatorname{vec} (\pmb {\varphi}) ^ {\mathrm {H}} \operatorname{vec} (\pmb {\varphi})\right) = \frac {1}{2} \Re \left(\lambda\right).
$$

Since $\mathbf{x} \mapsto \mathcal{E}(\mathbf{x})$ is continuous, the thesis follows.

Another useful result that will be extensively used in proving Theorem 5.3 is presented next.

Lemma D.3. Suppose $\mathbf{x}(t)$ can be expressed as

$$
\mathbf {x} (t) = \sum_ {k = 1} ^ {K} \sum_ {n = 1} ^ {N} c _ {k, n} \exp \left(- t \lambda_ {k, n}\right) \mathbf {v} _ {k} \otimes \mathbf {w} _ {n},
$$

for some choice of $c_{k,n}$ , $\lambda_{k,n}$ , $\{\mathbf{v}_k\}$ , $\{\mathbf{w}_n\}$ .
Let $(a,b)$ be the unique index of $\lambda_{k,n}$ with minimal real part and corresponding non-null coefficient $c_{k,n}$ , i.e.

$$
(a, b) := \underset {(k, n) \in [ K ] \times [ N ]} {\arg \min } \left\{\Re \left(\lambda_ {k, n}\right): c _ {k, n} \neq 0 \right\}.
$$

Then

$$
\frac {\mathbf {x} (t)}{\| \mathbf {x} (t) \| _ {2}} \xrightarrow {t \to \infty} \frac {c _ {a , b} \mathbf {v} _ {a} \otimes \mathbf {w} _ {b}}{\| c _ {a , b} \mathbf {v} _ {a} \otimes \mathbf {w} _ {b} \| _ {2}}.
$$

Proof. The key insight is to separate the addend with index $(a, b)$ . It holds

$$
\begin{array}{l} \mathbf {x} (t) = \sum_ {k = 1} ^ {K} \sum_ {n = 1} ^ {N} c _ {k, n} \exp (- t \lambda_ {k, n}) \mathbf {v} _ {k} \otimes \mathbf {w} _ {n} \\ = \exp \left(-t\lambda_{a,b}\right)\left(c_{a,b}\mathbf{v}_{a}\otimes \mathbf{w}_{b} + \sum_{\substack{(k,n)\in [K]\times [N]\\ (k,n)\neq (a,b)}}c_{k,n}\exp \left(-t\left(\lambda_{k,n} - \lambda_{a,b}\right)\right)\mathbf{v}_{k}\otimes \mathbf{w}_{n}\right). \\ \end{array}
$$

We note that

$$
\begin{array}{l} \lim _ {t \rightarrow \infty} | \exp (- t (\lambda_ {k, n} - \lambda_ {a, b})) | = \lim _ {t \rightarrow \infty} | \exp (- t \Re (\lambda_ {k, n} - \lambda_ {a, b})) \exp (- i t \Im (\lambda_ {k, n} - \lambda_ {a, b})) | \\ = \lim _ {t \rightarrow \infty} \exp \left(- t \Re \left(\lambda_ {k, n} - \lambda_ {a, b}\right)\right) \\ = 0, \\ \end{array}
$$

for all $(k,n)\neq (a,b)$ , since $\Re (\lambda_{k,n} - \lambda_{a,b}) > 0$ .
Therefore, one gets

$$
\frac {\mathbf {x} (t)}{\| \mathbf {x} (t) \| _ {2}} \xrightarrow {t \to \infty} \frac {c _ {a , b} \mathbf {v} _ {a} \otimes \mathbf {w} _ {b}}{\| c _ {a , b} \mathbf {v} _ {a} \otimes \mathbf {w} _ {b} \| _ {2}},
$$

where the normalization removes the dependency on $\exp (-t\lambda_{a,b})$ .

When $\lambda_{a,b}$ is not unique, it is still possible to derive a convergence result. In this case, $\mathbf{x}$ will converge to an element in the span generated by vectors corresponding to $\lambda_{a,b}$ , i.e.,

$$
\frac {\mathbf {x} (t)}{\| \mathbf {x} (t) \| _ {2}} \xrightarrow {t \to \infty} \frac {\sum_ {(a , b) \in \mathcal {A}} c _ {a , b} \mathbf {v} _ {a} \otimes \mathbf {w} _ {b}}{\| \sum_ {(a , b) \in \mathcal {A}} c _ {a , b} \mathbf {v} _ {a} \otimes \mathbf {w} _ {b} \| _ {2}},
$$

where $\mathcal{A} \coloneqq \{(k,n) : \Re(\lambda_{k,n}) = \Re(\lambda_{a,b}), c_{k,n} \neq 0\}$ .

A similar result to Lemma D.3 holds for a slightly different representation of $\mathbf{x}(t)$ .

Lemma D.4. Suppose $\mathbf{x}(t)$ can be expressed as

$$
\mathbf {x} (t) = \sum_ {k = 1} ^ {K} \sum_ {n = 1} ^ {N} c _ {k, n} \exp \left(i t \lambda_ {k, n}\right) \mathbf {v} _ {k} \otimes \mathbf {w} _ {n},
$$

for some choice of $c_{k,n}$ , $\lambda_{k,n}$ , $\{\mathbf{v}_k\}$ , $\{\mathbf{w}_n\}$ . Let $(a,b)$ be the unique index of $\lambda_{k,n}$ with minimal imaginary part and corresponding non-null coefficient $c_{k,n}$ , i.e.

$$
(a, b) := \underset {(k, n) \in [ K ] \times [ N ]} {\arg \min } \left\{\Im \left(\lambda_ {k, n}\right): c _ {k, n} \neq 0 \right\}.
$$

Then

$$
\frac {\mathbf {x} (t)}{\| \mathbf {x} (t) \| _ {2}} \xrightarrow {t \to \infty} \frac {c _ {a , b} \mathbf {v} _ {a} \otimes \mathbf {w} _ {b}}{\| c _ {a , b} \mathbf {v} _ {a} \otimes \mathbf {w} _ {b} \| _ {2}}.
$$

Proof.
The proof follows the same reasoning as in the proof of Lemma D.3. The difference is that the dominating frequency is the one with the minimal imaginary part, since

$$
\Re (i \lambda_ {k, n}) = - \Im (\lambda_ {k, n}),
$$

and, consequently,

$$
\underset {(k, n) \in [ K ] \times [ N ]} {\arg \max } \left\{\Re \left(i \lambda_ {k, n}\right) \right\} = \underset {(k, n) \in [ K ] \times [ N ]} {\arg \min } \left\{\Im \left(\lambda_ {k, n}\right) \right\}.
$$

# D.1.1 Proof of Theorem 5.3

We denote the eigenvalues of $\mathbf{L}$ closest to 0 from above and below as

$$
\lambda_ {+} (\mathbf {L}) := \min \left\{\lambda_ {l} (\mathbf {L}): \lambda_ {l} (\mathbf {L}) > 0 \right\},
$$

$$
\lambda_ {-} (\mathbf {L}) := \max \left\{\lambda_ {l} (\mathbf {L}): \lambda_ {l} (\mathbf {L}) < 0 \right\}. \tag {9}
$$

We assume that the channel mixing $\mathbf{W} \in \mathbb{R}^{K \times K}$ and the graph Laplacians $\mathbf{L}, \mathbf{I} - \mathbf{L} \in \mathbb{R}^{N \times N}$ are real matrices. Finally, we suppose the eigenvalues of a generic matrix $\mathbf{M}$ are sorted in ascending order, i.e., $\lambda_{i}(\mathbf{M}) \leq \lambda_{j}(\mathbf{M})$ for $i < j$ .

We now reformulate Theorem 5.3 for the fractional heat equation (2) and provide its full proof, which follows a similar frequency analysis to the one in (Di Giovanni et al., 2023, Theorem B.3).

Theorem D.5. Let $\mathcal{G}$ be an undirected graph with SNA $\mathbf{L}$ . Consider the initial value problem in (2) with channel mixing matrix $\mathbf{W} \in \mathbb{R}^{K \times K}$ and $\alpha \in \mathbb{R}$ . Then, for almost all initial conditions $\mathbf{x}_0 \in \mathbb{R}^{N \times K}$ the following is satisfied.
$(\alpha >0)$ The solution to (2) is HFD if

$$
\lambda_ {K} (\mathbf {W}) f _ {\alpha} \left(\lambda_ {1} (\mathbf {L})\right) < \lambda_ {1} (\mathbf {W}),
$$

and LFD otherwise.

$(\alpha < 0)$ The solution to (2) is $(1 - \lambda_{-}(\mathbf{L}))$ -FD if

$$
\lambda_ {K} (\mathbf {W}) f _ {\alpha} (\lambda_ {-} (\mathbf {L})) < \lambda_ {1} (\mathbf {W}) f _ {\alpha} (\lambda_ {+} (\mathbf {L})) ,
$$

and $(1 - \lambda_{+}(\mathbf{L}))$ -FD otherwise.

Proof of $(\alpha > 0)$ . As derived in (8), the solution of (2) with initial condition $\mathbf{x}_0$ can be written in a vectorized form as

$$
\begin{array}{l} \operatorname {vec} (\mathbf {x}) (t) = \exp \left(- t \mathbf {W} ^ {\top} \otimes \mathbf {L} ^ {\alpha}\right) \operatorname {vec} (\mathbf {x} _ {0}) \\ = \sum_ {r = 1} ^ {K} \sum_ {l = 1} ^ {N} c _ {r, l} \exp \left(- t \lambda_ {r} (\mathbf {W}) \mathrm {f} _ {\alpha} \left(\lambda_ {l} (\mathbf {L})\right)\right) \psi_ {r} (\mathbf {W}) \otimes \psi_ {l} (\mathbf {L}), \\ \end{array}
$$

where $\lambda_r(\mathbf{W})$ are the eigenvalues of $\mathbf{W}$ with corresponding eigenvectors $\psi_r(\mathbf{W})$ , and $\lambda_l(\mathbf{L})$ are the eigenvalues of $\mathbf{L}$ with corresponding eigenvectors $\psi_l(\mathbf{L})$ . The coefficients $c_{r,l}$ are the Fourier coefficients of $\mathbf{x}_0$ , i.e.,

$$
c _ {r, l} := \left\langle \operatorname {vec} \left(\mathbf {x} _ {0}\right), \psi_ {r} (\mathbf {W}) \otimes \psi_ {l} (\mathbf {L}) \right\rangle .
$$

The key insight is to separate the eigenprojection corresponding to the most negative frequency. By Lemma D.3, this frequency component dominates for $t$ going to infinity.

Suppose

$$
\lambda_ {K} (\mathbf {W}) f _ {\alpha} \left(\lambda_ {1} (\mathbf {L})\right) < \lambda_ {1} (\mathbf {W}) f _ {\alpha} \left(\lambda_ {N} (\mathbf {L})\right) = \lambda_ {1} (\mathbf {W}).
$$

In this case, $\lambda_K(\mathbf{W})\mathrm{f}_{\alpha}(\lambda_1(\mathbf{L}))$ is the most negative frequency. Assume for simplicity that $\lambda_K(\mathbf{W})$ has multiplicity one; the argument can be applied even if this is not the case, since the corresponding eigenvectors are orthogonal for higher multiplicities.

For almost all initial conditions $\mathbf{x}_0$ , the coefficient $c_{K,1}$ is not null; hence

$$
\frac {\operatorname {vec} (\mathbf {x}) (t)}{\| \operatorname {vec} (\mathbf {x}) (t) \| _ {2}} \xrightarrow {t \to \infty} \frac {c _ {K , 1} \psi_ {K} (\mathbf {W}) \otimes \psi_ {1} (\mathbf {L})}{\| c _ {K , 1} \psi_ {K} (\mathbf {W}) \otimes \psi_ {1} (\mathbf {L}) \| _ {2}}.
$$

By standard properties of the Kronecker product, we have

$$
\left(\mathbf {I} \otimes \mathbf {L}\right) \left(\psi_ {K} (\mathbf {W}) \otimes \psi_ {1} (\mathbf {L})\right) = \left(\mathbf {I} \psi_ {K} (\mathbf {W})\right) \otimes \left(\mathbf {L} \psi_ {1} (\mathbf {L})\right) = \lambda_ {1} (\mathbf {L}) \psi_ {K} (\mathbf {W}) \otimes \psi_ {1} (\mathbf {L}), \tag {10}
$$

i.e., $\psi_K(\mathbf{W})\otimes \psi_1(\mathbf{L})$ is an eigenvector of $\mathbf{I}\otimes \mathbf{L}$ corresponding to the eigenvalue $\lambda_{1}(\mathbf{L})$ . Then, by Proposition 3.3, $\psi_K(\mathbf{W})\otimes \psi_1(\mathbf{L})$ is also an eigenvector of $\mathbf{I}\otimes (\mathbf{I} - \mathbf{L})$ corresponding to the eigenvalue $1 - \lambda_{1}(\mathbf{L}) = \lambda_{N}(\mathbf{I} - \mathbf{L})$ . An application of Lemma D.2 finishes the proof.

Similarly, we can show that if $\alpha > 0$ and $\lambda_K(\mathbf{W})\mathrm{f}_{\alpha}(\lambda_1(\mathbf{L})) > \lambda_1(\mathbf{W})$ the lowest frequency component $\lambda_{1}(\mathbf{I} - \mathbf{L})$ is dominant.

Proof of $(\alpha < 0)$ .
In this case either $\mathrm{f}_{\alpha}(\lambda_{+}(\mathbf{L}))\lambda_{1}(\mathbf{W})$ or $\mathrm{f}_{\alpha}(\lambda_{-}(\mathbf{L}))\lambda_{K}(\mathbf{W})$ is the most negative frequency component. Hence, if $\mathrm{f}_{\alpha}(\lambda_{-}(\mathbf{L}))\lambda_{K}(\mathbf{W}) > \mathrm{f}_{\alpha}(\lambda_{+}(\mathbf{L}))\lambda_{1}(\mathbf{W})$ the frequency $\mathrm{f}_{\alpha}(\lambda_{+}(\mathbf{L}))\lambda_{1}(\mathbf{W})$ is dominating, and otherwise the frequency $\mathrm{f}_{\alpha}(\lambda_{-}(\mathbf{L}))\lambda_{K}(\mathbf{W})$ . We can see this by following the exact same reasoning as in the case $(\alpha > 0)$ .

Remark D.6. In the proof of $(\alpha < 0)$ , we are tacitly assuming that $\mathbf{L}$ has only non-zero eigenvalues. If not, we can truncate the SVD and remove all zero singular values (which correspond to zero eigenvalues). In doing so, we obtain the best invertible approximation of $\mathbf{L}$ to which the theorem can be applied.

We now generalize the previous result to all directed graphs with normal SNA.

Theorem D.7. Let $\mathcal{G}$ be a strongly connected directed graph with normal SNA $\mathbf{L}$ such that $\lambda_1(\mathbf{L})\in \mathbb{R}$ . Consider the initial value problem in (2) with channel mixing matrix $\mathbf{W}\in \mathbb{R}^{K\times K}$ and $\alpha >0$ . Then, for almost all initial values $\mathbf{x}_0\in \mathbb{R}^{N\times K}$ the solution to (2) is HFD if

$$
\lambda_ {K} (\mathbf {W}) | \lambda_ {1} (\mathbf {L}) | ^ {\alpha} < \lambda_ {1} (\mathbf {W}) | \lambda_ {N} (\mathbf {L}) | ^ {\alpha},
$$

and LFD otherwise.

Proof. Any normal matrix is unitarily diagonalizable, i.e., there exist eigenvalues $\lambda_1,\ldots ,\lambda_N$ and corresponding eigenvectors $\mathbf{v}_1,\dots ,\mathbf{v}_N$ such that $\mathbf{L} = \mathbf{V}\boldsymbol {\Lambda}\mathbf{V}^{\mathrm{H}}$ .
Then, by Lemma C.1, the singular value decomposition of $\mathbf{L}$ can be written as $\mathbf{L} = \mathbf{U}\boldsymbol {\Sigma}\mathbf{V}^{\mathrm{H}}$ , where

$$
\pmb {\Sigma} = | \pmb {\Lambda} |, \quad \mathbf {U} = \mathbf {V} \exp (i \pmb {\Theta}), \quad \pmb {\Theta} = \mathrm {diag} \left(\{\theta_ {i} \} _ {i = 1} ^ {N}\right), \quad \theta_ {i} = \mathrm {atan2} (\Im \lambda_ {i}, \Re \lambda_ {i}).
$$

Hence,

$$
\mathbf {L} ^ {\alpha} = \mathbf {U} \boldsymbol {\Sigma} ^ {\alpha} \mathbf {V} ^ {\mathsf {H}} = \mathbf {V} | \boldsymbol {\Lambda} | ^ {\alpha} \exp (i \boldsymbol {\Theta}) \mathbf {V} ^ {\mathsf {H}}.
$$

Then, analogously to the derivation of (8), the solution to the vectorized fractional heat equation

$$
\operatorname {vec} (\mathbf {x}) ^ {\prime} (t) = - \mathbf {W} \otimes \mathbf {L} ^ {\alpha} \operatorname {vec} (\mathbf {x}) (t)
$$

is given by

$$
\operatorname {vec} (\mathbf {x}) (t) = \sum_ {r = 1} ^ {K} \sum_ {l = 1} ^ {N} c _ {r, l} \exp \left(- t \lambda_ {r} (\mathbf {W}) \mathrm {f} _ {\alpha} \left(\lambda_ {l} (\mathbf {L})\right)\right) \psi_ {r} (\mathbf {W}) \otimes \psi_ {l} (\mathbf {L}),
$$

with

$$
\mathrm {f} _ {\alpha} \left(\lambda_ {l} (\mathbf {L})\right) = \left| \lambda_ {l} (\mathbf {L}) \right| ^ {\alpha} \exp (i \theta_ {l}).
$$

Now, as in the proof of Theorem 5.3, we apply Lemma D.3. Therefore, the dominating frequency is given by the eigenvalue of $\mathbf{W} \otimes \mathbf{L}^{\alpha}$ with the most negative real part. The eigenvalues of $\mathbf{W} \otimes \mathbf{L}^{\alpha}$ are given by $\lambda_r(\mathbf{W})\mathrm{f}_\alpha (\lambda_l(\mathbf{L}))$ for $r = 1,\dots ,K$ , $l = 1,\dots ,N$ .
The corresponding real parts are given by

$$
\Re (\lambda_ {r} (\mathbf {W}) \mathrm {f} _ {\alpha} (\lambda_ {l} (\mathbf {L}))) = \lambda_ {r} (\mathbf {W}) | \lambda_ {l} (\mathbf {L}) | ^ {\alpha} \cos (\theta_ {l}) = \lambda_ {r} (\mathbf {W}) | \lambda_ {l} (\mathbf {L}) | ^ {\alpha - 1} \Re (\lambda_ {l} (\mathbf {L})).
$$

By the Perron-Frobenius theorem, the eigenvalue of $\mathbf{L}$ with the largest modulus is $\lambda_N(\mathbf{L})\in \mathbb{R}$ . Hence, for all $l = 1,\dots ,N$

$$
\left| \lambda_ {l} (\mathbf {L}) \right| ^ {\alpha} \cos (\theta_ {l}) \leq \left| \lambda_ {N} (\mathbf {L}) \right| ^ {\alpha}.
$$

Similarly, for all $l = 1,\dots ,N$ with $\Re (\lambda_ {l} (\mathbf{L})) < 0$

$$
- \left| \lambda_ {l} (\mathbf {L}) \right| ^ {\alpha} \cos \left(\theta_ {l}\right) \leq \left| \lambda_ {1} (\mathbf {L}) \right| ^ {\alpha}.
$$

Thus, the frequency with the most negative real part is either given by $\lambda_K(\mathbf{W})\mathrm{f}_{\alpha}(\lambda_1(\mathbf{L}))$ or $\lambda_{1}(\mathbf{W})\mathrm{f}_{\alpha}(\lambda_{N}(\mathbf{L}))$ . The remainder of the proof is analogous to the proof of Theorem D.5.

In the following, we provide the complete statement and proof for the claims made in Theorem 5.3 when the underlying ODE is the Schrödinger equation as presented in (3).

Theorem D.8. Let $\mathcal{G}$ be an undirected graph with SNA $\mathbf{L}$ . Consider the initial value problem in (3) with channel mixing matrix $\mathbf{W} \in \mathbb{C}^{K \times K}$ and $\alpha \in \mathbb{R}$ . Suppose that $\mathbf{W}$ has at least one eigenvalue with non-zero imaginary part and sort the eigenvalues of $\mathbf{W}$ in ascending order with respect to their imaginary part. Then, for almost all initial values $\mathbf{x}_0 \in \mathbb{C}^{N \times K}$ , the following is satisfied.
$(\alpha >0)$ Solutions of (3) are HFD if

$$
\Im \left(\lambda_ {K} (\mathbf {W})\right) f _ {\alpha} \left(\lambda_ {1} (\mathbf {L})\right) < \Im \left(\lambda_ {1} (\mathbf {W})\right),
$$

and LFD otherwise.

$(\alpha < 0)$ Let $\lambda_{+}(\mathbf{L})$ and $\lambda_{-}(\mathbf{L})$ be the smallest positive and largest negative non-zero eigenvalue of $\mathbf{L}$ , respectively. Solutions of (3) are $(1 - \lambda_{-}(\mathbf{L}))$ -FD if

$$
\Im \left(\lambda_ {K} (\mathbf {W})\right) f _ {\alpha} (\lambda_ {-} (\mathbf {L})) < \Im \left(\lambda_ {1} (\mathbf {W})\right) f _ {\alpha} (\lambda_ {+} (\mathbf {L})).
$$

Otherwise, solutions of (3) are $(1 - \lambda_{+}(\mathbf{L}))$ -FD.

Proof. The proof follows the same reasoning as the proof for the heat equation in Theorem D.5. The difference is that we now apply Lemma D.4 instead of Lemma D.3.

Therefore, the dominating frequency is either $\lambda_K(\mathbf{W})\mathrm{f}_{\alpha}(\lambda_1(\mathbf{L}))$ or $\lambda_{1}(\mathbf{W})\mathrm{f}_{\alpha}(\lambda_{N}(\mathbf{L}))$ if $\alpha >0$ , and $\lambda_{K}(\mathbf{W})\mathrm{f}_{\alpha}\left(\lambda_{-}(\mathbf{L})\right)$ or $\lambda_{1}(\mathbf{W})\mathrm{f}_{\alpha}\left(\lambda_{+}(\mathbf{L})\right)$ if $\alpha < 0$ .

# D.2 Frequency Dominance for Numerical Approximations of the Heat Equation

For $n \in \mathbb{N}$ and $h \in \mathbb{R}$ , $h > 0$ , the solution of (2) at time $nh > 0$ can be approximated with an explicit Euler scheme

$$
\operatorname {vec} (\mathbf {x}) (n h) = \sum_ {k = 0} ^ {n} {\binom {n} {k}} h ^ {k} (- \mathbf {W} \otimes \mathbf {L} ^ {\alpha}) ^ {k} \operatorname {vec} (\mathbf {x} _ {0}),
$$

which can be further simplified via the binomial theorem as

$$
\operatorname {vec} (\mathbf {x}) (n h) = \left(\mathbf {I} - h \left(\mathbf {W} \otimes \mathbf {L} ^ {\alpha}\right)\right) ^ {n} \operatorname {vec} \left(\mathbf {x} _ {0}\right).
\tag {11}
$$

Hence, we obtain the representation formula

$$
\operatorname {vec} (\mathbf {x}) (n h) = \sum_ {r, l} c _ {r, l} \left(1 - h \lambda_ {r} (\mathbf {W}) \mathrm {f} _ {\alpha} \left(\lambda_ {l} (\mathbf {L})\right)\right) ^ {n} \psi_ {r} (\mathbf {W}) \otimes \psi_ {l} (\mathbf {L}).
$$

In this case, the dominating frequency maximizes $|1 - h\lambda_r(\mathbf{W})\mathrm{f}_\alpha (\lambda_l(\mathbf{L}))|$ . When $h < \| \mathbf{W}\|^{-1}$ and $\alpha > 0$ , the product $h\lambda_r(\mathbf{W})\mathrm{f}_\alpha (\lambda_l(\mathbf{L}))$ is guaranteed to be in $[-1,1]$ , and

$$
\left| 1 - h \lambda_ {r} (\mathbf {W}) f _ {\alpha} \left(\lambda_ {l} (\mathbf {L})\right) \right| = 1 - h \lambda_ {r} (\mathbf {W}) f _ {\alpha} \left(\lambda_ {l} (\mathbf {L})\right) \in [ 0, 2 ].
$$

Therefore, the dominating frequency minimizes $h\lambda_r(\mathbf{W})\mathrm{f}_\alpha (\lambda_l(\mathbf{L}))$ . This is the reasoning behind the next result.

Proposition D.9. Let $h \in \mathbb{R}$ , $h > 0$ . Consider the fractional heat equation (2) with $\alpha > 0$ . Let $\{\mathbf{x}(nh)\}_{n \in \mathbb{N}}$ be the trajectory of vectors derived by approximating (2) with an explicit Euler scheme with step size $h$ . Suppose $h < \| \mathbf{W} \|^{-1}$ . Then, for almost all initial values $\mathbf{x}_0$

$$
\mathscr {E} \left(\frac {\mathbf {x} (n h)}{\| \mathbf {x} (n h) \| _ {2}}\right) \xrightarrow {n \to \infty} \begin{cases} \frac {\lambda_ {N} (\mathbf {I} - \mathbf {L})}{2}, & \text{if } \lambda_ {K} (\mathbf {W}) \mathrm {f} _ {\alpha} \left(\lambda_ {1} (\mathbf {L})\right) < \lambda_ {1} (\mathbf {W}), \\ 0, & \text{otherwise.} \end{cases}
$$

Proof.
Define

$$
\left(\lambda_ {a}, \lambda_ {b}\right) := \underset {r, l} {\arg \max } \left\{\left| 1 - h \lambda_ {r} \left(\mathbf {W}\right) \mathrm {f} _ {\alpha} \left(\lambda_ {l} (\mathbf {L})\right) \right|: r \in \{1, \ldots , K \}, l \in \{1, \ldots , N \} \right\}.
$$

By the hypothesis on $h$ , this is equivalent to

$$
\left(\lambda_ {a}, \lambda_ {b}\right) = \underset {r, l} {\arg \min } \left\{\lambda_ {r} \left(\mathbf {W}\right) \mathrm {f} _ {\alpha} \left(\lambda_ {l} (\mathbf {L})\right): r \in \{1, \dots , K \}, l \in \{1, \dots , N \} \right\}.
$$

Therefore, $(\lambda_{a},\lambda_{b})$ is either $(\lambda_{1}(\mathbf{W}),\lambda_{N}(\mathbf{L}))$ or $(\lambda_K(\mathbf{W}),\lambda_1(\mathbf{L}))$ . Hence,

$$
\frac {\operatorname {vec} (\mathbf {x}) (n h)}{\| \operatorname {vec} (\mathbf {x}) (n h) \| _ {2}} \xrightarrow {n \to \infty} \frac {c _ {a , b} \psi_ {a} (\mathbf {W}) \otimes \psi_ {b} (\mathbf {L})}{\| c _ {a , b} \psi_ {a} (\mathbf {W}) \otimes \psi_ {b} (\mathbf {L}) \| _ {2}}.
$$

If the condition $\lambda_K(\mathbf{W})\mathrm{f}_{\alpha}(\lambda_1(\mathbf{L})) < \lambda_1(\mathbf{W})$ is satisfied, we have $b = 1$ . Then by (10), the normalized $\operatorname{vec}(\mathbf{x})$ converges to the eigenvector of $\mathbf{I}\otimes (\mathbf{I} - \mathbf{L})$ corresponding to the largest frequency $1 - \lambda_{1}(\mathbf{L}) = \lambda_{N}(\mathbf{I} - \mathbf{L})$ . An application of Lemma D.2 finishes the proof.

If $\lambda_K(\mathbf{W})\mathrm{f}_\alpha (\lambda_1(\mathbf{L})) < \lambda_1(\mathbf{W})$ is not satisfied, we have $b = N$ , and the other direction follows with the same argument.

Similarly to Proposition D.9 one can prove the following result for negative fractions.

Proposition D.10. Let $h \in \mathbb{R}$ , $h > 0$ . Consider the fractional heat equation (2) with $\alpha < 0$ .
Let $\{\mathbf{x}(nh)\}_{n \in \mathbb{N}}$ be the trajectory of vectors derived by approximating the solution of (2) with an explicit Euler scheme with step size $h$ . Suppose that $h < \| \mathbf{W} \|^{-1}$ . The approximated solution is $(1 - \lambda_{+}(\mathbf{L}))$ -FD if

$$
\lambda_ {1} (\mathbf {W}) f _ {\alpha} \left(\lambda_ {+} (\mathbf {L})\right) < \lambda_ {K} (\mathbf {W}) f _ {\alpha} \left(\lambda_ {-} (\mathbf {L})\right),
$$

and $(1 - \lambda_{-}(\mathbf{L}))$ -FD otherwise.

Proof. The proof follows the same reasoning as the proof of Proposition D.9 by realizing that the dominating frequencies $(\lambda_{a},\lambda_{b})$ are either given by $(\lambda_{1}(\mathbf{W}),\lambda_{+}(\mathbf{L}))$ or $(\lambda_K(\mathbf{W}),\lambda_{-}(\mathbf{L}))$ .

# D.3 Frequency Dominance for Numerical Approximations of the Schrödinger Equation

For $n \in \mathbb{N}$ and $h \in \mathbb{R}$ , $h > 0$ , the solution of (3) at time $nh > 0$ can be approximated with an explicit Euler scheme as well. Similarly to the previous section, we can write

$$
\operatorname {vec} (\mathbf {x}) (n h) = \left(\mathbf {I} + i h \left(\mathbf {W} \otimes \mathbf {L} ^ {\alpha}\right)\right) ^ {n} \operatorname {vec} (\mathbf {x} _ {0})
$$

and

$$
\operatorname {vec} (\mathbf {x}) (n h) = \sum_ {r, l} c _ {r, l} \left(1 + i h \lambda_ {r} (\mathbf {W}) \mathrm {f} _ {\alpha} (\lambda_ {l} (\mathbf {L}))\right) ^ {n} \psi_ {r} (\mathbf {W}) \otimes \psi_ {l} (\mathbf {L}).
$$

The dominating frequency will be discussed in the following theorem.

Proposition D.11. Let $h \in \mathbb{R}$ , $h > 0$ . Let $\{\mathbf{x}(nh)\}_{n \in \mathbb{N}}$ be the trajectory of vectors derived by approximating (3) with an explicit Euler scheme with sufficiently small step size $h$ . Sort the eigenvalues of $\mathbf{W}$ in ascending order with respect to their imaginary part.
Then, for almost all initial values $\mathbf{x}_0$

$$
\mathscr {E} \left(\frac {\mathbf {x} (n h)}{\| \mathbf {x} (n h) \| _ {2}}\right) \xrightarrow {n \to \infty} \begin{cases} \frac {\lambda_ {N} (\mathbf {I} - \mathbf {L})}{2}, & \text{if } \mathrm{f} _ {\alpha} \left(\lambda_ {1} (\mathbf {L})\right) \Im \left(\lambda_ {K} (\mathbf {W})\right) < \mathrm{f} _ {\alpha} \left(\lambda_ {N} (\mathbf {L})\right) \Im \left(\lambda_ {1} (\mathbf {W})\right), \\ 0, & \text{otherwise.} \end{cases}
$$

Proof. Define

$$
\left(\lambda_ {a}, \lambda_ {b}\right) := \underset {r, l} {\arg \max } \left\{\left| 1 + i h \lambda_ {r} \left(\mathbf {W}\right) \mathrm {f} _ {\alpha} \left(\lambda_ {l} (\mathbf {L})\right) \right|: r \in \{1, \ldots , K \} , l \in \{1, \ldots , N \} \right\}.
$$

By definition of $a$ and $b$ , for all $(r, l) \neq (a, b)$ it holds

$$
\left| 1 + i h \lambda_ {a} (\mathbf {W}) f _ {\alpha} (\lambda_ {b} (\mathbf {L})) \right| > \left| 1 + i h \lambda_ {r} (\mathbf {W}) f _ {\alpha} (\lambda_ {l} (\mathbf {L})) \right|. \tag {12}
$$

Hence,

$$
\frac {\operatorname {vec} (\mathbf {x}) (n h)}{\| \operatorname {vec} (\mathbf {x}) (n h) \| _ {2}} \xrightarrow {n \to \infty} \frac {c _ {a , b} \psi_ {a} (\mathbf {W}) \otimes \psi_ {b} (\mathbf {L})}{\| c _ {a , b} \psi_ {a} (\mathbf {W}) \otimes \psi_ {b} (\mathbf {L}) \| _ {2}}.
$$

We continue by determining the indices $a$ and $b$ . To do so, we note that (12) is equivalent to

$$
\begin{array}{l} \mathrm {f} _ {\alpha} (\lambda_ {l} (\mathbf {L})) \Im (\lambda_ {r} (\mathbf {W})) - \mathrm {f} _ {\alpha} (\lambda_ {b} (\mathbf {L})) \Im (\lambda_ {a} (\mathbf {W})) \\ > \frac {h}{2} \left(\mathrm {f} _ {\alpha} (\lambda_ {l} (\mathbf {L})) ^ {2} | \lambda_ {r} (\mathbf {W}) | ^ {2} - \mathrm {f} _ {\alpha} (\lambda_ {b} (\mathbf {L})) ^ {2} | \lambda_ {a} (\mathbf {W}) | ^ {2}\right) \\ \end{array}
$$

for all $(r, l) \neq (a, b)$ .
Denote by $\varepsilon$ the gap

$$
0 < \varepsilon := \min _ {(r, l) \neq (a, b)} \left\{\mathrm {f} _ {\alpha} \left(\lambda_ {l} \left(\mathbf {L}\right)\right) \Im \left(\lambda_ {r} \left(\mathbf {W}\right)\right) - \mathrm {f} _ {\alpha} \left(\lambda_ {b} \left(\mathbf {L}\right)\right) \Im \left(\lambda_ {a} \left(\mathbf {W}\right)\right) \right\}.
$$

Noting that

$$
\frac {h}{2} \left(\mathrm {f} _ {\alpha} \left(\lambda_ {l} (\mathbf {L})\right) ^ {2} \left| \lambda_ {r} (\mathbf {W}) \right| ^ {2} - \mathrm {f} _ {\alpha} \left(\lambda_ {b} (\mathbf {L})\right) ^ {2} \left| \lambda_ {a} (\mathbf {W}) \right| ^ {2}\right) \leq h \| \mathbf {W} \| ^ {2} \| \mathbf {L} \| ^ {2 \alpha} = h \| \mathbf {W} \| ^ {2},
$$

one gets that (12) is satisfied for $h < \varepsilon \| \mathbf{W} \|^ {- 2}$ , since then $h \| \mathbf{W} \|^{2} < \varepsilon$ . Therefore, for sufficiently small $h$ , the dominating frequencies are the ones with minimal imaginary part, i.e., either $\mathrm{f}_{\alpha}\left(\lambda_{1}(\mathbf{L})\right) \Im \left(\lambda_{K}(\mathbf{W})\right)$ or $\mathrm{f}_{\alpha}\left(\lambda_{N}(\mathbf{L})\right) \Im \left(\lambda_{1}(\mathbf{W})\right)$ . If $\mathrm{f}_{\alpha}\left(\lambda_{1}(\mathbf{L})\right) \Im \left(\lambda_{K}(\mathbf{W})\right) < \mathrm{f}_{\alpha}\left(\lambda_{N}(\mathbf{L})\right) \Im \left(\lambda_{1}(\mathbf{W})\right)$ , then $b = 1$ , and the normalized vec $(\mathbf{x})$ converges to the eigenvector corresponding to the smallest frequency $\lambda_{1}(\mathbf{L})$ .
By (10), this is also the eigenvector of $\mathbf{I} \otimes \mathbf{I} - \mathbf{L}$ corresponding to the largest frequency $1 - \lambda_{1}(\mathbf{L}) = \lambda_{N}(\mathbf{I} - \mathbf{L})$ . An application of Lemma D.2 finishes the proof.

Finally, we present a similar result for negative powers.

Proposition D.12. Let $h \in \mathbb{R}$ , $h > 0$ . Consider the fractional Schrödinger equation (3) with $\alpha < 0$ . Let $\{\mathbf{x}(nh)\}_{n \in \mathbb{N}}$ be the trajectory of vectors derived by approximating the solution of (3) with an explicit Euler scheme with step size $h$ . Suppose that $h$ is sufficiently small. Sort the eigenvalues of $\mathbf{W}$ in ascending order with respect to their imaginary part. The approximated solution is $(1 - \lambda_{+}(\mathbf{L}))$ -FD if

$$
\lambda_ {1} (\mathbf {W}) \mathrm {f} _ {\alpha} (\lambda_ {+} (\mathbf {L})) < \lambda_ {K} (\mathbf {W}) \mathrm {f} _ {\alpha} (\lambda_ {-} (\mathbf {L})) ,
$$

and $(1 - \lambda_{-}(\mathbf{L}))$ -FD otherwise.

Proof. Similar to Proposition D.11, we can prove the statement by realizing that the dominating frequencies $(\lambda_{a},\lambda_{b})$ in (12) are either given by $(\lambda_{1}(\mathbf{W}),\lambda_{+}(\mathbf{L}))$ or $(\lambda_K(\mathbf{W}),\lambda_-(\mathbf{L}))$ .

# E Appendix for Section 5.2

We begin this section by describing the solution of general linear matrix ODEs of the form (6) in terms of the Jordan decomposition of $\mathbf{M}$ . This is required when $\mathbf{M}$ is not diagonalizable. For instance, the SNA of a directed graph is not in general a symmetric matrix, and hence not guaranteed to be diagonalizable. We then proceed in Appendix E.1 with the proof of Theorem 5.6.
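Before setting up the Jordan machinery, the closed form of the matrix exponential of a single Jordan block, which drives all the computations in this section, can be checked numerically. The following sketch is illustrative only: the eigenvalue `lam`, the time `t`, and the block size `k_l` are arbitrary choices, not values from the analysis.

```python
import math
import numpy as np

# Illustrative check (lam, t, k_l are arbitrary choices): for a single Jordan
# block J = lam*I + N, with N the nilpotent shift matrix, exp(J t) equals
# exp(lam*t) times the upper-triangular Toeplitz matrix with entries
# t^(j-i)/(j-i)!, which is the form used in the proof of Lemma E.1 below.

def expm_taylor(M, terms=60):
    """Truncated Taylor series of the matrix exponential exp(M)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

k_l, lam, t = 4, 0.3, 1.5
J = lam * np.eye(k_l) + np.diag(np.ones(k_l - 1), 1)  # one Jordan block

closed = np.zeros((k_l, k_l))
for i in range(k_l):
    for j in range(i, k_l):
        closed[i, j] = t ** (j - i) / math.factorial(j - i)
closed *= np.exp(lam * t)

assert np.allclose(expm_taylor(J * t), closed)
```

The exactness of the closed form follows from the nilpotency of the shift: the Taylor series of $\exp(t\mathbf{N})$ terminates after $k_l$ terms.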
For a given matrix $\mathbf{M} \in \mathbb{C}^{N \times N}$ , the Jordan normal form is given by

$$
\mathbf {M} = \mathbf {P J P} ^ {- 1},
$$

where $\mathbf{P} \in \mathbb{C}^{N \times N}$ is an invertible matrix whose columns are the generalized eigenvectors of $\mathbf{M}$ , and $\mathbf{J} \in \mathbb{C}^{N \times N}$ is a block-diagonal matrix with Jordan blocks along its diagonal. Denote with $\lambda_1, \ldots, \lambda_m$ the eigenvalues of $\mathbf{M}$ and with $\mathbf{J}_1, \ldots, \mathbf{J}_m$ the corresponding Jordan blocks. Let $k_l$ be the algebraic multiplicity of the eigenvalue $\lambda_l$ , and denote with $\{\psi_l^i(\mathbf{M})\}_{i \in \{1, \ldots, k_l\}}$ the generalized eigenvectors of the Jordan block $\mathbf{J}_l$ .

We begin by giving the following well-known result, which fully characterizes the frequencies for the solution of a linear matrix ODE.

Lemma E.1. Let $\mathbf{M} = \mathbf{P}\mathbf{J}\mathbf{P}^{-1} \in \mathbb{C}^{N \times N}$ be the Jordan normal form of $\mathbf{M}$ . Let $\mathbf{x}: [0, T] \to \mathbb{C}^N$ be a solution to

$$
\mathbf {x} ^ {\prime} (t) = \mathbf {M} \mathbf {x} (t), \quad \mathbf {x} (0) = \mathbf {x} _ {0}.
$$

Then, $\mathbf{x}$ is given by

$$
\mathbf {x} (t) = \sum_ {l = 1} ^ {m} \exp {(\lambda_ {l} (\mathbf {M}) t)} \sum_ {i = 1} ^ {k _ {l}} c _ {l} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j}}{(i - j) !} \psi_ {l} ^ {j} (\mathbf {M}),
$$

where

$$
\mathbf {x} _ {0} = \sum_ {l = 1} ^ {m} \sum_ {i = 1} ^ {k _ {l}} c _ {l} ^ {i} \mathbf {P e} _ {l} ^ {i},
$$

and $\{\mathbf{e}_l^i : i \in \{1, \dots, k_l\}, l \in \{1, \dots, m\}\}$ is the standard basis satisfying $\mathbf{P}\mathbf{e}_l^i = \psi_l^i(\mathbf{M})$ .

Proof.
By (Perko, 2001, Section 1.8), the solution can be written as

$$
\exp \left(\mathbf {M} t\right) \mathbf {x} _ {0} = \mathbf {P} \exp \left(\mathbf {J} t\right) \mathbf {P} ^ {- 1} \left(\sum_ {l = 1} ^ {m} \sum_ {i = 1} ^ {k _ {l}} c _ {l} ^ {i} \mathbf {P e} _ {l} ^ {i}\right) = \mathbf {P} \exp \left(\mathbf {J} t\right) \left(\sum_ {l = 1} ^ {m} \sum_ {i = 1} ^ {k _ {l}} c _ {l} ^ {i} \mathbf {e} _ {l} ^ {i}\right),
$$

where $\exp (\mathbf{J}t) = \mathrm{diag}\left(\{\exp (\mathbf{J}_l t)\}_{l = 1}^m\right)$ and

$$
\exp \left(\mathbf {J} _ {l} t\right) = \exp \left(\lambda_ {l} (\mathbf {M}) t\right) \left[ \begin{array}{c c c c c} 1 & t & \frac {t ^ {2}}{2 !} & \dots & \frac {t ^ {k _ {l} - 1}}{(k _ {l} - 1) !} \\ & 1 & t & \ddots & \vdots \\ & & 1 & \ddots & \frac {t ^ {2}}{2 !} \\ & & & \ddots & t \\ & & & & 1 \end{array} \right].
$$

Since $\exp (\mathbf{J}t) = \bigoplus_{l = 1}^{m}\exp (\mathbf{J}_{l}t)$ , we can focus on a single Jordan block. Fix $l\in \{1,\ldots ,m\}$ ; it holds that

$$
\begin{array}{l} \mathbf {P} \exp \left(\mathbf {J} _ {l} t\right) \left(\sum_ {i = 1} ^ {k _ {l}} c _ {l} ^ {i} \mathbf {e} _ {l} ^ {i}\right) \\ = \mathbf {P} \exp (\lambda_ {l} (\mathbf {M}) t) \left(c _ {l} ^ {1} \mathbf {e} _ {l} ^ {1} + c _ {l} ^ {2} (t \mathbf {e} _ {l} ^ {1} + \mathbf {e} _ {l} ^ {2}) + c _ {l} ^ {3} \left(\frac {t ^ {2}}{2 !} \mathbf {e} _ {l} ^ {1} + t \mathbf {e} _ {l} ^ {2} + \mathbf {e} _ {l} ^ {3}\right) + \dots\right) \\ = \exp (\lambda_ {l} (\mathbf {M}) t) \left(c _ {l} ^ {1} \psi_ {l} ^ {1} (\mathbf {M}) + c _ {l} ^ {2} (t \psi_ {l} ^ {1} (\mathbf {M}) + \psi_ {l} ^ {2} (\mathbf {M})) \right. \\ \left. + c _ {l} ^ {3} \left(\frac {t ^ {2}}{2 !} \psi_ {l} ^ {1} (\mathbf {M}) + t \psi_ {l} ^ {2} (\mathbf {M}) + \psi_ {l} ^ {3} (\mathbf {M})\right) + \dots\right) \\ = \exp \left(\lambda_ {l} (\mathbf {M}) t\right) \sum_ {i = 1} ^ {k _ {l}} c _ {l} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j}}{(i - j) !} \psi_ {l} ^ {j} (\mathbf {M}).
\\ \end{array}
$$

Bringing the direct sums together, we get

$$
\exp \left(\mathbf {M} t\right) \mathbf {x} _ {0} = \sum_ {l = 1} ^ {m} \exp \left(\lambda_ {l} (\mathbf {M}) t\right) \sum_ {i = 1} ^ {k _ {l}} c _ {l} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j}}{(i - j) !} \psi_ {l} ^ {j} (\mathbf {M}),
$$

from which the thesis follows.

In the following, we derive a formula for the solution of ODEs of the form

$$
\mathbf {x} ^ {\prime} (t) = \mathbf {M} \mathbf {x} (t) \mathbf {W}, \quad \mathbf {x} (0) = \mathbf {x} _ {0}, \tag {13}
$$

for a diagonal matrix $\mathbf{W} \in \mathbb{C}^{K \times K}$ and a general square matrix $\mathbf{M} \in \mathbb{C}^{N \times N}$ with Jordan normal form $\mathbf{PJP}^{-1}$ . By vectorizing, we obtain the equivalent linear system

$$
\operatorname {vec} (\mathbf {x}) ^ {\prime} (t) = \mathbf {W} \otimes \mathbf {M} \operatorname {vec} (\mathbf {x}) (t), \quad \operatorname {vec} (\mathbf {x}) (0) = \operatorname {vec} \left(\mathbf {x} _ {0}\right). \tag {14}
$$

Then, by the properties of the Kronecker product, it holds that

$$
\mathbf {W} \otimes \mathbf {M} = \mathbf {W} \otimes (\mathbf {P J P} ^ {- 1}) = (\mathbf {I} \otimes \mathbf {P}) (\mathbf {W} \otimes \mathbf {J}) (\mathbf {I} \otimes \mathbf {P} ^ {- 1}) = (\mathbf {I} \otimes \mathbf {P}) (\mathbf {W} \otimes \mathbf {J}) (\mathbf {I} \otimes \mathbf {P}) ^ {- 1}.
$$

Note that $(\mathbf{I}\otimes \mathbf{P})(\mathbf{W}\otimes \mathbf{J})(\mathbf{I}\otimes \mathbf{P})^{-1}$ is not the Jordan normal form of $\mathbf{W}\otimes \mathbf{M}$ . However, we can characterize the Jordan form of $\mathbf{W}\otimes \mathbf{M}$ as follows.

Lemma E.2.
The Jordan decomposition of $\mathbf{W} \otimes \mathbf{J}$ is given by $\mathbf{W} \otimes \mathbf{J} = \tilde{\mathbf{P}}\tilde{\mathbf{J}}\tilde{\mathbf{P}}^{-1}$ where $\tilde{\mathbf{J}}$ is a block diagonal matrix with blocks + +$$ +\tilde {\mathbf {J}} _ {j, l} = \left[ \begin{array}{c c c c c} w _ {j} \lambda_ {l} (\mathbf {J}) & 1 & & & \\ & w _ {j} \lambda_ {l} (\mathbf {J}) & 1 & & \\ & & \ddots & & \\ & & & w _ {j} \lambda_ {l} (\mathbf {J}) & 1 \\ & & & & w _ {j} \lambda_ {l} (\mathbf {J}) \end{array} \right], +$$ + +and $\tilde{\mathbf{P}}$ is a diagonal matrix obtained by concatenating $\tilde{\mathbf{P}}_{j,l} = \mathrm{diag}\left(\left\{w_j^{-n + 1}\right\}_{n = 1}^{k_l}\right)$ . + +Proof. As $\mathbf{J} = \bigoplus_{l=1}^{m} \mathbf{J}_{l}$ , we can focus on a single Jordan block. Fix $l \in \{1, \ldots, m\}$ . We have + +$$ +\mathbf {W} \otimes \mathbf {J} _ {l} = \operatorname {d i a g} \left(\left\{w _ {j} \mathbf {J} _ {l} \right\} _ {j = 1} ^ {K}\right) = \bigoplus_ {j = 1} ^ {K} w _ {j} \mathbf {J} _ {l}, +$$ + +hence, we can focus once again on a single block. Fix $j \in \{1, \dots, K\}$ ; the Jordan decomposition of $w_{j}\mathbf{J}_{l}$ is given by $\tilde{\mathbf{P}}_l = \mathrm{diag}\left(\left\{w_j^{-n + 1}\right\}_{n = 1}^{k_l}\right)$ and + +$$ +\tilde {\mathbf {J}} _ {l} = \left[ \begin{array}{c c c c c c} w _ {j} \lambda_ {l} (\mathbf {J}) & 1 & & & & \\ & w _ {j} \lambda_ {l} (\mathbf {J}) & 1 & & & \\ & & \ddots & \ddots & & \\ & & & & w _ {j} \lambda_ {l} (\mathbf {J}) & 1 \\ & & & & & w _ {j} \lambda_ {l} (\mathbf {J}) \end{array} \right]. +$$ + +To verify it, compute the $(n,m)$ element + +$$ +\left(\tilde {\mathbf {P}} _ {l} \tilde {\mathbf {J}} _ {l} \tilde {\mathbf {P}} _ {l} ^ {- 1}\right) _ {n, m} = \sum_ {i, k} \left(\tilde {\mathbf {P}} _ {l}\right) _ {n, i} \left(\tilde {\mathbf {J}} _ {l}\right) _ {i, k} \left(\tilde {\mathbf {P}} _ {l} ^ {- 1}\right) _ {k, m}. 
$$

Since $\tilde{\mathbf{P}}_l$ is a diagonal matrix, the only non-null entries are on the diagonal; therefore, $i = n$ and $k = m$

$$
= \left(\tilde {\mathbf {P}} _ {l}\right) _ {n, n} \left(\tilde {\mathbf {J}} _ {l}\right) _ {n, m} \left(\tilde {\mathbf {P}} _ {l} ^ {- 1}\right) _ {m, m}
$$

and the only non-null entries of $\tilde{\mathbf{J}}_l$ are when $m = n$ or $m = n + 1$ , hence

$$
= \left\{ \begin{array}{l l} \left(\tilde {\mathbf {P}} _ {l}\right) _ {n, n} \left(\tilde {\mathbf {J}} _ {l}\right) _ {n, n} \left(\tilde {\mathbf {P}} _ {l} ^ {- 1}\right) _ {n, n} = w _ {j} \lambda_ {l} (\mathbf {J}) , & m = n , \\ \left(\tilde {\mathbf {P}} _ {l}\right) _ {n, n} \left(\tilde {\mathbf {J}} _ {l}\right) _ {n, n + 1} \left(\tilde {\mathbf {P}} _ {l} ^ {- 1}\right) _ {n + 1, n + 1} = w _ {j} ^ {- n + 1} w _ {j} ^ {n} = w _ {j} , & m = n + 1 . \end{array} \right.
$$

The thesis follows from assembling the direct sums back.

Lemma E.2 leads to the following result that fully characterizes the solution of (14) in terms of the generalized eigenvectors and eigenvalues of $\mathbf{M}$ and $\mathbf{W}$ .

Proposition E.3. Consider (14) with $\mathbf{M} = \mathbf{P}\mathbf{J}\mathbf{P}^{-1}$ and $\mathbf{W}\otimes \mathbf{J} = \tilde{\mathbf{P}}\tilde{\mathbf{J}}\tilde{\mathbf{P}}^{-1}$ , where $\tilde{\mathbf{J}}$ and $\tilde{\mathbf{P}}$ are given in Lemma E.2.
The solution of (14) is

$$
\operatorname {vec} (\mathbf {x}) (t) = \sum_ {l _ {1} = 1} ^ {K} \sum_ {l _ {2} = 1} ^ {m} \exp \left(\lambda_ {l _ {1}} (\mathbf {W}) \lambda_ {l _ {2}} (\mathbf {M}) t\right) \sum_ {i = 1} ^ {k _ {l _ {2}}} c _ {l _ {1}, l _ {2}} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j}}{(i - j) !} \left(\lambda_ {l _ {1}} (\mathbf {W})\right) ^ {1 - j} \mathbf {e} _ {l _ {1}} \otimes \psi_ {l _ {2}} ^ {j} (\mathbf {M}),
$$

where the coefficients $c_{l_1,l_2}^i$ are given by

$$
\operatorname {vec} (\mathbf {x} _ {0}) = \sum_ {l _ {1} = 1} ^ {K} \sum_ {l _ {2} = 1} ^ {m} \sum_ {i = 1} ^ {k _ {l _ {2}}} c _ {l _ {1}, l _ {2}} ^ {i} (\mathbf {I} \otimes \mathbf {P}) \tilde {\mathbf {P}} \left(\mathbf {e} _ {l _ {1}} \otimes \mathbf {e} _ {l _ {2}} ^ {i}\right),
$$

where $\{\mathbf{e}_{l_2}^i : l_2 \in \{1, \ldots, m\}, i \in \{1, \ldots, k_{l_2}\}\}$ is the standard basis satisfying $\mathbf{Pe}_{l_2}^i = \psi_{l_2}^i(\mathbf{M})$ .

Proof. By Lemma E.2, the eigenvalues of $\mathbf{W} \otimes \mathbf{M}$ and the corresponding eigenvectors and generalized eigenvectors are

$$
\lambda_ {l _ {1}} (\mathbf {W}) \lambda_ {l _ {2}} (\mathbf {M}), \quad \mathbf {e} _ {l _ {1}} \otimes \psi_ {l _ {2}} ^ {1} (\mathbf {M}), \quad (\lambda_ {l _ {1}} (\mathbf {W})) ^ {- i + 1} \mathbf {e} _ {l _ {1}} \otimes \psi_ {l _ {2}} ^ {i} (\mathbf {M})
$$

for $l_1 \in \{1, \ldots, K\}$ , $l_2 \in \{1, \ldots, m\}$ and $i \in \{2, \ldots, k_{l_2}\}$ .
Hence, by Lemma E.1, the solution of (14) is given by

$$
\operatorname {vec} (\mathbf {x}) (t) = \sum_ {l _ {1} = 1} ^ {K} \sum_ {l _ {2} = 1} ^ {m} \exp \left(\lambda_ {l _ {2}} (\mathbf {M}) \lambda_ {l _ {1}} (\mathbf {W}) t\right) \sum_ {i = 1} ^ {k _ {l _ {2}}} c _ {l _ {1}, l _ {2}} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j}}{(i - j) !} (\lambda_ {l _ {1}} (\mathbf {W})) ^ {1 - j} (\mathbf {e} _ {l _ {1}} \otimes \psi_ {l _ {2}} ^ {j} (\mathbf {M})) ,
$$

where the coefficients $c_{l_1,l_2}^i$ are given by

$$
\operatorname {vec} \left(\mathbf {x} _ {0}\right) = \sum_ {l _ {1} = 1} ^ {K} \sum_ {l _ {2} = 1} ^ {m} \sum_ {i = 1} ^ {k _ {l _ {2}}} c _ {l _ {1}, l _ {2}} ^ {i} (\mathbf {I} \otimes \mathbf {P}) \tilde {\mathbf {P}} \left(\mathbf {e} _ {l _ {1}} \otimes \mathbf {e} _ {l _ {2}} ^ {i}\right).
$$

# E.1 Proof of Theorem 5.6

In the following, we reformulate and prove Theorem 5.6.

Corollary E.4. Let $\mathcal{G}$ be a strongly connected directed graph with SNA $\mathbf{L} \in \mathbb{R}^{N \times N}$ . Consider the initial value problem in (2) with diagonal channel mixing matrix $\mathbf{W} \in \mathbb{R}^{K \times K}$ and $\alpha = 1$ . Then, for almost all initial values $\mathbf{x}_0 \in \mathbb{R}^{N \times K}$ , the solution to (2) is HFD if

$$
\lambda_ {K} (\mathbf {W}) \Re \lambda_ {1} (\mathbf {L}) < \lambda_ {1} (\mathbf {W}) \Re \lambda_ {N} (\mathbf {L})
$$

and $\lambda_1(\mathbf{L})$ is the unique eigenvalue that minimizes the real part among all eigenvalues of $\mathbf{L}$ . Otherwise, the solution is LFD.

Proof.
Using the notation from Proposition E.3 and its proof, we can write the solution of the vectorized form of (2) as

$$
\operatorname {vec} (\mathbf {x}) (t) = \sum_ {l _ {1} = 1} ^ {K} \sum_ {l _ {2} = 1} ^ {m} \exp \left(- \lambda_ {l _ {1}} (\mathbf {W}) \lambda_ {l _ {2}} (\mathbf {L}) t\right) \sum_ {i = 1} ^ {k _ {l _ {2}}} c _ {l _ {1}, l _ {2}} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j}}{(i - j) !} (\lambda_ {l _ {1}} (\mathbf {W})) ^ {1 - j} (\mathbf {e} _ {l _ {1}} \otimes \psi_ {l _ {2}} ^ {j} (\mathbf {L})).
$$

As in the preceding proofs, we separate the terms corresponding to the frequency with minimal real part. This frequency dominates, since the exponential factors outweigh the polynomial ones as $t$ goes to infinity. Consider the case $\lambda_K(\mathbf{W})\Re (\lambda_1(\mathbf{L})) < \lambda_1(\mathbf{W})\Re (\lambda_N(\mathbf{L}))$ . As $\lambda_{1}(\mathbf{L})$ is unique, the product $\lambda_{K}(\mathbf{W})\Re (\lambda_{1}(\mathbf{L}))$ is the unique most negative frequency. Assume without loss of generality that $\lambda_K(\mathbf{W})$ has multiplicity one. The argument does not change for higher multiplicities, as the corresponding eigenvectors are orthogonal since $\mathbf{W}$ is diagonal.
Then, $\lambda_{K}(\mathbf{W})\lambda_{1}(\mathbf{L})$ has multiplicity one, and we calculate $\operatorname {vec}(\mathbf{x})(t)$ as

$$
\begin{array}{l} \sum_ {l _ {1} = 1} ^ {K} \sum_ {l _ {2} = 1} ^ {m} \exp \left(- \lambda_ {l _ {1}} (\mathbf {W}) \lambda_ {l _ {2}} (\mathbf {L}) t\right) \sum_ {i = 1} ^ {k _ {l _ {2}}} c _ {l _ {1}, l _ {2}} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j}}{(i - j) !} \left(\lambda_ {l _ {1}} (\mathbf {W})\right) ^ {1 - j} \left(\mathbf {e} _ {l _ {1}} \otimes \psi_ {l _ {2}} ^ {j} (\mathbf {L})\right) \\ = c _ {K, 1} ^ {k _ {1}} \exp \left(- t \lambda_ {K} (\mathbf {W}) \lambda_ {1} (\mathbf {L})\right) \frac {t ^ {k _ {1} - 1}}{(k _ {1} - 1) !} (\mathbf {e} _ {K} \otimes \psi_ {1} ^ {1} (\mathbf {L})) \\ + c _ {K, 1} ^ {k _ {1}} \exp (- t \lambda_ {K} (\mathbf {W}) \lambda_ {1} (\mathbf {L})) \sum_ {j = 2} ^ {k _ {1}} \frac {t ^ {k _ {1} - j}}{(k _ {1} - j) !} (\lambda_ {K} (\mathbf {W})) ^ {1 - j} (\mathbf {e} _ {K} \otimes \psi_ {1} ^ {j} (\mathbf {L})) \\ + \exp \left(- t \lambda_ {K} (\mathbf {W}) \lambda_ {1} (\mathbf {L})\right) \sum_ {i = 1} ^ {k _ {1} - 1} c _ {K, 1} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j}}{(i - j) !} \left(\lambda_ {K} (\mathbf {W})\right) ^ {1 - j} \left(\mathbf {e} _ {K} \otimes \psi_ {1} ^ {j} (\mathbf {L})\right) \\ + \sum_ {(l _ {1}, l _ {2}) \neq (K, 1)} \exp \left(- \lambda_ {l _ {1}} (\mathbf {W}) \lambda_ {l _ {2}} (\mathbf {L}) t\right) \sum_ {i = 1} ^ {k _ {l _ {2}}} c _ {l _ {1}, l _ {2}} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j}}{(i - j) !} \left(\lambda_ {l _ {1}} (\mathbf {W})\right) ^ {1 - j} \left(\mathbf {e} _ {l _ {1}} \otimes \psi_ {l _ {2}} ^ {j} (\mathbf {L})\right) \\ = \exp \left(- t \lambda_ {K} (\mathbf {W}) \lambda_ {1} (\mathbf {L})\right) t ^ {k _ {1} - 1} \left(c _ {K, 1} ^ {k _ {1}} \frac {1}{(k _ {1} - 1) !} \left(\mathbf {e} _ {K} \otimes \psi_ {1} ^ {1} (\mathbf {L})\right) \right.
\\ + c _ {K, 1} ^ {k _ {1}} \sum_ {j = 2} ^ {k _ {1}} \frac {t ^ {1 - j}}{(k _ {1} - j) !} (\lambda_ {K} (\mathbf {W})) ^ {1 - j} (\mathbf {e} _ {K} \otimes \psi_ {1} ^ {j} (\mathbf {L})) \\ + \sum_ {i = 1} ^ {k _ {1} - 1} c _ {K, 1} ^ {i} \sum_ {j = 1} ^ {i} \frac {1}{(i - j) !} t ^ {i - j - k _ {1} + 1} \left(\lambda_ {K} (\mathbf {W})\right) ^ {1 - j} \left(\mathbf {e} _ {K} \otimes \psi_ {1} ^ {j} (\mathbf {L})\right) \\ + \sum_ {(l _ {1}, l _ {2}) \neq (K, 1)} \exp \left(- t (\lambda_ {l _ {1}} (\mathbf {W}) \lambda_ {l _ {2}} (\mathbf {L}) - \lambda_ {K} (\mathbf {W}) \lambda_ {1} (\mathbf {L}))\right) \sum_ {i = 1} ^ {k _ {l _ {2}}} c _ {l _ {1}, l _ {2}} ^ {i} \\ \left. \cdot \sum_ {j = 1} ^ {i} \frac {t ^ {i - j - k _ {1} + 1}}{(i - j) !} \left(\lambda_ {l _ {1}} (\mathbf {W})\right) ^ {1 - j} \left(\mathbf {e} _ {l _ {1}} \otimes \psi_ {l _ {2}} ^ {j} (\mathbf {L})\right)\right). \\ \end{array}
$$

We can then write the normalized solution as

$$
\begin{array}{l} \left(\frac {c _ {K , 1} ^ {k _ {1}}}{(k _ {1} - 1) !} (\mathbf {e} _ {K} \otimes \psi_ {1} ^ {1} (\mathbf {L})) + c _ {K, 1} ^ {k _ {1}} \sum_ {j = 2} ^ {k _ {1}} \frac {t ^ {1 - j}}{(k _ {1} - j) !} (\lambda_ {K} (\mathbf {W})) ^ {1 - j} (\mathbf {e} _ {K} \otimes \psi_ {1} ^ {j} (\mathbf {L})) \right. \\ + \sum_ {i = 1} ^ {k _ {1} - 1} c _ {K, 1} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j - k _ {1} + 1}}{(i - j) !} (\lambda_ {K} (\mathbf {W})) ^ {1 - j} (\mathbf {e} _ {K} \otimes \psi_ {1} ^ {j} (\mathbf {L})) \\ \left.
+ \sum_ {(l _ {1}, l _ {2}) \neq (K, 1)} e ^ {- t (\lambda_ {l _ {1}} (\mathbf {W}) \lambda_ {l _ {2}} (\mathbf {L}) - \lambda_ {K} (\mathbf {W}) \lambda_ {1} (\mathbf {L}))} \sum_ {i = 1} ^ {k _ {l _ {2}}} c _ {l _ {1}, l _ {2}} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j - k _ {1} + 1}}{(i - j) !} (\lambda_ {l _ {1}} (\mathbf {W})) ^ {1 - j} (\mathbf {e} _ {l _ {1}} \otimes \psi_ {l _ {2}} ^ {j} (\mathbf {L}))\right) \\ \cdot \left\| \frac {c _ {K , 1} ^ {k _ {1}}}{(k _ {1} - 1) !} \left(\mathbf {e} _ {K} \otimes \psi_ {1} ^ {1} (\mathbf {L})\right) + c _ {K, 1} ^ {k _ {1}} \sum_ {j = 2} ^ {k _ {1}} \frac {t ^ {1 - j}}{(k _ {1} - j) !} \left(\lambda_ {K} (\mathbf {W})\right) ^ {1 - j} \left(\mathbf {e} _ {K} \otimes \psi_ {1} ^ {j} (\mathbf {L})\right) \right. \\ + \sum_ {i = 1} ^ {k _ {1} - 1} c _ {K, 1} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j - k _ {1} + 1}}{(i - j) !} (\lambda_ {K} (\mathbf {W})) ^ {1 - j} (\mathbf {e} _ {K} \otimes \psi_ {1} ^ {j} (\mathbf {L})) \\ + \sum_ {(l _ {1}, l _ {2}) \neq (K, 1)} \exp \left(- t (\lambda_ {l _ {1}} (\mathbf {W}) \lambda_ {l _ {2}} (\mathbf {L}) - \lambda_ {K} (\mathbf {W}) \lambda_ {1} (\mathbf {L}))\right) \\ \cdot \sum_ {i = 1} ^ {k _ {l _ {2}}} c _ {l _ {1}, l _ {2}} ^ {i} \sum_ {j = 1} ^ {i} \frac {t ^ {i - j - k _ {1} + 1}}{(i - j) !} (\lambda_ {l _ {1}} (\mathbf {W})) ^ {1 - j} (\mathbf {e} _ {l _ {1}} \otimes \psi_ {l _ {2}} ^ {j} (\mathbf {L})) \Bigg \| _ {2} ^ {- 1}. \\ \end{array}
$$

All summands, except the first, converge to zero for $t$ going to infinity. Hence,

$$
\frac {\operatorname {vec} (\mathbf {x}) (t)}{\| \operatorname {vec} (\mathbf {x}) (t) \| _ {2}} \xrightarrow {t \to \infty} \left\| \frac {c _ {K , 1} ^ {k _ {1}}}{(k _ {1} - 1) !} \left(\mathbf {e} _ {K} \otimes \psi_ {1} ^ {1} (\mathbf {L})\right) \right\| _ {2} ^ {- 1} \left(\frac {c _ {K , 1} ^ {k _ {1}}}{(k _ {1} - 1) !} \left(\mathbf {e} _ {K} \otimes \psi_ {1} ^ {1} (\mathbf {L})\right)\right).
$$

We apply Lemma D.2 to finish the proof for the HFD case. Note that $\psi_1^1 (\mathbf{L})$ is an eigenvector corresponding to $\lambda_{1}(\mathbf{L})$ . The LFD case is analogous. By the Perron-Frobenius theorem for irreducible nonnegative matrices, there is no other eigenvalue with the same real part as $1 - \lambda_{N}(\mathbf{L}) = \lambda_{1}(\mathbf{I} - \mathbf{L})$ .

Remark E.5. If the hypotheses are met, the convergence result also holds for $\mathbf{L}^{\alpha}$ . With the same reasoning, we can prove that the normalized solution converges to the eigenvector corresponding to the eigenvalue of $\mathbf{L}^{\alpha}$ with minimal real part. It suffices to consider the eigenvalues and generalized eigenvectors of $\mathbf{L}^{\alpha}$ . However, we do not know the relationship between the singular values of $\mathbf{L}^{\alpha}$ , where we defined the fractional Laplacian, and the eigenvalues of $\mathbf{L}$ . Hence, it is much more challenging to draw conclusions on the Dirichlet energy.

# E.2 Explicit Euler

In this subsection, we show that the convergence properties of the Dirichlet energy from Theorem 5.6 are also satisfied when (2) is approximated via an explicit Euler scheme.

As noted in (11), the vectorized solution to (2) can be written as

$$
\operatorname {vec} (\mathbf {x}) (n h) = (\mathbf {I} - h (\mathbf {W} \otimes \mathbf {L})) ^ {n} \operatorname {vec} (\mathbf {x} _ {0}),
$$

when $\alpha = 1$ . We thus aim to analyze the Jordan decomposition of $\mathbf{L}^n$ for $\mathbf{L} \in \mathbb{C}^{N \times N}$ and $n \in \mathbb{N}$ . Let $\mathbf{L} = \mathbf{P}\mathbf{J}\mathbf{P}^{-1}$ , where $\mathbf{J}$ is the Jordan form, and $\mathbf{P}$ is an invertible matrix of generalized eigenvectors.

Consider a Jordan block $\mathbf{J}_l$ associated with the eigenvalue $\lambda_l(\mathbf{L})$ .
For a positive integer $n$ , the $n$ -th power of the Jordan block can be computed as: + +$$ +\mathbf {J} _ {l} ^ {n} = \lambda_ {l} (\mathbf {L}) ^ {n} \left[ \begin{array}{c c c c c} 1 & \binom {n} {1} \lambda_ {l} (\mathbf {L}) ^ {- 1} & \binom {n} {2} \lambda_ {l} (\mathbf {L}) ^ {- 2} & \dots & \binom {n} {k _ {l} - 1} \lambda_ {l} (\mathbf {L}) ^ {- k _ {l} + 1} \\ & 1 & \binom {n} {1} \lambda_ {l} (\mathbf {L}) ^ {- 1} & & \binom {n} {k _ {l} - 2} \lambda_ {l} (\mathbf {L}) ^ {- k _ {l} + 2} \\ & & 1 & & \vdots \\ & & & \ddots & \binom {n} {1} \lambda_ {l} (\mathbf {L}) ^ {- 1} \\ & & & & 1 \end{array} \right] +$$ + +We compute the $n$ -th power of $\mathbf{L}$ as $\mathbf{L}^n = (\mathbf{P}\mathbf{J}\mathbf{P}^{-1})^n = \mathbf{P}\mathbf{J}^n\mathbf{P}^{-1}$ , and we expand $\mathbf{x}_0$ as + +$$ +\mathbf {x} _ {0} = \sum_ {l = 1} ^ {m} \sum_ {i = 1} ^ {k _ {l}} c _ {l} ^ {i} \mathbf {P e} _ {l} ^ {i}, +$$ + +where $\{\mathbf{e}_l^i : i \in \{1, \dots, k_l\}, l \in \{1, \dots, m\}\}$ is the standard basis and $\mathbf{Pe}_l^i = \psi_l^i(\mathbf{L})$ are the generalized eigenvectors of $\mathbf{L}$ . It is easy to see that + +$$ +\mathbf {L} ^ {n} \mathbf {x} _ {0} = \mathbf {P J} ^ {n} \mathbf {P} ^ {- 1} \left(\sum_ {l = 1} ^ {m} \sum_ {i = 1} ^ {k _ {l}} c _ {l} ^ {i} \mathbf {P e} _ {l} ^ {i}\right) = \mathbf {P J} ^ {n} \left(\sum_ {l = 1} ^ {m} \sum_ {i = 1} ^ {k _ {l}} c _ {l} ^ {i} \mathbf {e} _ {l} ^ {i}\right). +$$ + +As $\mathbf{J}^n = \bigoplus_{l=1}^{m} \mathbf{J}_l^n$ , we can focus on a single Jordan block. 
Fix $l \in \{1, \ldots, m\}$ , and compute

$$
\begin{array}{l} \mathbf {P J} _ {l} ^ {n} \left(\sum_ {i = 1} ^ {k _ {l}} c _ {l} ^ {i} \mathbf {e} _ {l} ^ {i}\right) = \mathbf {P} \left(\lambda_ {l} (\mathbf {L}) ^ {n} c _ {l} ^ {1} \mathbf {e} _ {l} ^ {1}\right) + \mathbf {P} \, c _ {l} ^ {2} \left(\binom {n} {1} \lambda_ {l} (\mathbf {L}) ^ {n - 1} \mathbf {e} _ {l} ^ {1} + \lambda_ {l} (\mathbf {L}) ^ {n} \mathbf {e} _ {l} ^ {2}\right) \\ + \mathbf {P} \, c _ {l} ^ {3} \left(\binom {n} {2} \lambda_ {l} (\mathbf {L}) ^ {n - 2} \mathbf {e} _ {l} ^ {1} + \binom {n} {1} \lambda_ {l} (\mathbf {L}) ^ {n - 1} \mathbf {e} _ {l} ^ {2} + \lambda_ {l} (\mathbf {L}) ^ {n} \mathbf {e} _ {l} ^ {3}\right) \\ + \dots . \\ \end{array}
$$

We can summarize our findings in the following lemma.

Lemma E.6. For any $\mathbf{L} = \mathbf{P}\mathbf{J}\mathbf{P}^{-1} \in \mathbb{R}^{N \times N}$ and $\mathbf{x}_0 = \sum_{l=1}^{m} \sum_{i=1}^{k_l} c_l^i \psi_l^i(\mathbf{L})$ , we have

$$
\mathbf {L} ^ {n} \mathbf {x} _ {0} = \sum_ {l = 1} ^ {m} \sum_ {i = 1} ^ {\min \{k _ {l}, n - 1 \}} \sum_ {j = 1} ^ {i} \binom {n} {i - j} \lambda_ {l} (\mathbf {L}) ^ {n - i + j} c _ {l} ^ {i} \psi_ {l} ^ {j} (\mathbf {L}).
$$

We proceed with the main result of this subsection.

Proposition E.7. Let $\mathcal{G}$ be a strongly connected directed graph with SNA $\mathbf{L} \in \mathbb{R}^{N \times N}$ . Consider the initial value problem in (2) with diagonal channel mixing matrix $\mathbf{W} \in \mathbb{R}^{K \times K}$ and $\alpha = 1$ . Approximate the solution to (2) with an explicit Euler scheme with a sufficiently small step size $h$ . Then, for almost all initial values $\mathbf{x}_0 \in \mathbb{C}^{N \times K}$ the following holds.
If $\lambda_1(\mathbf{L})$ is unique and

$$
\lambda_ {K} (\mathbf {W}) \Re \lambda_ {1} (\mathbf {L}) < \lambda_ {1} (\mathbf {W}) \Re \lambda_ {N} (\mathbf {L}), \tag {15}
$$

the approximated solution is HFD. Otherwise, the solution is LFD.

Proof. As noted in (11), the vectorized solution to (2) with $\alpha = 1$ can be written as

$$
\operatorname {vec} (\mathbf {x}) (n h) = (\mathbf {I} - h (\mathbf {W} \otimes \mathbf {L})) ^ {n} \operatorname {vec} (\mathbf {x} _ {0}).
$$

Consider the Jordan decomposition of $\mathbf{L} = \mathbf{P}\mathbf{J}\mathbf{P}^{-1}$ and the Jordan decomposition of $\mathbf{W} \otimes \mathbf{J} = \tilde{\mathbf{P}}\tilde{\mathbf{J}}\tilde{\mathbf{P}}^{-1}$ , where $\tilde{\mathbf{J}}$ and $\tilde{\mathbf{P}}$ are specified in Lemma E.2. Then,

$$
\begin{array}{l} \operatorname {vec} (\mathbf {x}) (n h) = \left(\mathbf {I} - h \mathbf {W} \otimes \left(\mathbf {P J P} ^ {- 1}\right)\right) ^ {n} \operatorname {vec} \left(\mathbf {x} _ {0}\right) \\ = (\mathbf {I} \otimes \mathbf {P}) (\mathbf {I} - h \mathbf {W} \otimes \mathbf {J}) ^ {n} (\mathbf {I} \otimes \mathbf {P}) ^ {- 1} \operatorname {vec} (\mathbf {x} _ {0}) \\ \end{array}
$$

$$
\begin{array}{l} = (\mathbf {I} \otimes \mathbf {P}) (\mathbf {I} - h \tilde {\mathbf {P}} \tilde {\mathbf {J}} \tilde {\mathbf {P}} ^ {- 1}) ^ {n} (\mathbf {I} \otimes \mathbf {P}) ^ {- 1} \operatorname {vec} (\mathbf {x} _ {0}) \\ = (\mathbf {I} \otimes \mathbf {P}) \tilde {\mathbf {P}} (\mathbf {I} - h \tilde {\mathbf {J}}) ^ {n} \tilde {\mathbf {P}} ^ {- 1} (\mathbf {I} \otimes \mathbf {P}) ^ {- 1} \operatorname {vec} (\mathbf {x} _ {0}) \\ = (\mathbf {I} \otimes \mathbf {P}) \tilde {\mathbf {P}} (\mathbf {I} - h \tilde {\mathbf {J}}) ^ {n} ((\mathbf {I} \otimes \mathbf {P}) \tilde {\mathbf {P}}) ^ {- 1} \operatorname {vec} (\mathbf {x} _ {0}).
\\ \end{array}
$$

Now, decompose $\mathbf{x}_0$ into the basis of generalized eigenvectors, i.e.,

$$
\operatorname {vec} \left(\mathbf {x} _ {0}\right) = \sum_ {l _ {1} = 1} ^ {K} \sum_ {l _ {2} = 1} ^ {m} \sum_ {i = 1} ^ {k _ {l _ {2}}} c _ {l _ {1}, l _ {2}} ^ {i} \left(\left(\mathbf {I} \otimes \mathbf {P}\right) \tilde {\mathbf {P}}\right) \left(\mathbf {e} _ {l _ {1}} \otimes \mathbf {e} _ {l _ {2}} ^ {i}\right).
$$

Then, by Lemma E.6, we have

$$
\begin{array}{l} \operatorname {vec}(\mathbf{x})(nh) = \sum_{l_{1} = 1}^{K}\sum_{l_{2} = 1}^{m}\sum_{i = 1}^{\min \left\{k_{l_{2}},n - 1\right\}}\sum_{j = 1}^{i}\binom {n}{i - j}\left(1 - h\lambda_{l_{1}}(\mathbf{W})\lambda_{l_{2}}(\mathbf{L})\right)^{n - i + j}c_{l_{1},l_{2}}^{i} \\ \cdot \left(\lambda_ {l _ {1}} (\mathbf {W})\right) ^ {1 - j} \psi_ {l _ {1}} (\mathbf {W}) \otimes \psi_ {l _ {2}} ^ {j} (\mathbf {L}). \\ \end{array}
$$

Now, consider the maximal frequency, i.e.,

$$
(L _ {1}, L _ {2}) = \underset {l _ {1}, l _ {2}} {\arg \max } \left\{\left| 1 - h \lambda_ {l _ {1}} (\mathbf {W}) \lambda_ {l _ {2}} (\mathbf {L}) \right| \right\}.
$$

Then, the solution $\operatorname{vec}(\mathbf{x})(nh)$ can be written as

$$
\begin{array}{l} \sum_{l_{1} = 1}^{K}\sum_{l_{2} = 1}^{m}\sum_{i = 1}^{\min \{k_{l_{2}},n - 1\}}\sum_{j = 1}^{i}\binom {n}{i - j}\left(1 - h\lambda_{l_{1}}(\mathbf{W})\lambda_{l_{2}}(\mathbf{L})\right)^{n - i + j}c_{l_{1},l_{2}}^{i}\psi_{l_{1}}(\mathbf{W})\otimes \psi_{l_{2}}^{j}(\mathbf{L}) \\ = \left(1 - h \lambda_ {L _ {1}} (\mathbf {W}) \lambda_ {L _ {2}} (\mathbf {L})\right) ^ {n} \tag {16} \\ \end{array}
$$

$$
\cdot \sum_ {l _ {1} = 1} ^ {K} \sum_ {l _ {2} = 1} ^ {m} \sum_ {i = 1} ^ {\min \{k _ {l _ {2}}, n - 1 \}} \sum_ {j = 1} ^ {i} \binom {n} {i - j} \frac {(1 - h \lambda_ {l _ {1}} (\mathbf {W}) \lambda_ {l _ {2}} (\mathbf {L})) ^ {n - i + j}}{(1 - h \lambda_ {L _ {1}} (\mathbf {W}) \lambda_ {L _ {2}} (\mathbf {L})) ^ {n}} c _ {l _ {1}, l _ {2}} ^ {i} \psi_ {l _ {1}} (\mathbf {W}) \otimes \psi_ {l _ {2}} ^ {j} (\mathbf {L}).
$$

With a similar argument as in the proof of Theorem 5.6, we can then see that

$$
\frac {\operatorname {vec} (\mathbf {x}) (n h)}{\| \operatorname {vec} (\mathbf {x}) (n h) \| _ {2}} \xrightarrow {n \to \infty} \frac {c _ {L _ {1} , L _ {2}} ^ {1} \psi_ {L _ {1}} (\mathbf {W}) \otimes \psi_ {L _ {2}} ^ {1} (\mathbf {L})}{\| c _ {L _ {1} , L _ {2}} ^ {1} \psi_ {L _ {1}} (\mathbf {W}) \otimes \psi_ {L _ {2}} ^ {1} (\mathbf {L}) \| _ {2}},
$$

where $\psi_{L_2}^1 (\mathbf{L})$ is the eigenvector corresponding to $\lambda_{L_2}(\mathbf{L})$ . Note that for almost all $\mathbf{x}_0$ , we have $c_{L_1,L_2}^1\neq 0$ . Then $\psi_{L_2}^1 (\mathbf{L})$ is also an eigenvector of $\mathbf{I} - \mathbf{L}$ corresponding to the eigenvalue $1 - \lambda_{L_2}(\mathbf{L})$ . By Lemma D.2, we have that the approximated solution is $(1 - \lambda_{L_2}(\mathbf{L}))$ -FD.

We finish the proof by showing that $L_{2} = 1$ if (15) is satisfied, and $L_{2} = N$ otherwise.
First, note that either $\lambda_{K}(\mathbf{W})\Re \lambda_{1}(\mathbf{L})$ or $\lambda_{1}(\mathbf{W})\Re \lambda_{N}(\mathbf{L})$ is the most negative among all $\{\lambda_l(\mathbf{W})\Re \lambda_r(\mathbf{L})\}_{l\in \{1,\ldots ,K\}, r\in \{1,\ldots ,N\}}$ . Assume first that $\lambda_{K}(\mathbf{W})\Re \lambda_{1}(\mathbf{L})$ is the most negative, i.e., (15) holds. Then, define the gap

$$
\varepsilon := \min _ {(l, r) \neq (K, 1)} \left(\lambda_ {l} (\mathbf {W}) \Re \lambda_ {r} (\mathbf {L}) - \lambda_ {K} (\mathbf {W}) \Re \lambda_ {1} (\mathbf {L})\right),
$$

and assume $h < \varepsilon \| \mathbf{W}\|^{-2}$ . Now it is easy to see that

$$
2 \lambda_ {K} (\mathbf {W}) \Re \lambda_ {1} (\mathbf {L}) - h \lambda_ {K} (\mathbf {W}) ^ {2} | \lambda_ {1} (\mathbf {L}) | ^ {2} < 2 \lambda_ {l} (\mathbf {W}) \Re \lambda_ {r} (\mathbf {L}) - h \lambda_ {l} (\mathbf {W}) ^ {2} | \lambda_ {r} (\mathbf {L}) | ^ {2}
$$

for all $(l, r) \neq (K, 1)$ , which is equivalent to $(K,1) = (L_1,L_2)$ . Hence, the dynamics are $(1 - \lambda_{1}(\mathbf{L}))$ -FD. As $(1 - \lambda_{1}(\mathbf{L}))$ is the highest frequency of $\mathbf{I} - \mathbf{L}$ , we get HFD dynamics. Similarly, we can show that if $\lambda_{1}(\mathbf{W})\Re \lambda_{N}(\mathbf{L})$ is the most negative frequency, we get LFD dynamics. Note that for the HFD argument, we must assume that $\lambda_{1}(\mathbf{L})$ is the unique eigenvalue with the smallest real part. For the LFD argument, it is already given that $\lambda_{N}(\mathbf{L})$ has multiplicity one by the Perron-Frobenius theorem.

# E.3 GCN oversmooths

Proposition E.8. Let $\mathcal{G}$ be a strongly connected and aperiodic directed graph with SNA $\mathbf{L} \in \mathbb{R}^{N \times N}$ . A GCN with the update rule

$$
\mathbf {x} _ {t + 1} = \mathbf {L} \mathbf {x} _ {t} \mathbf {W},
$$

where $\mathbf{x}_0\in \mathbb{R}^{N\times K}$ are the input node features, always oversmooths.

Proof. The proof follows similarly to the proof of Proposition E.7.
The difference is that instead of (16), we can write the node features after $t$ layers as

$$
\operatorname{vec}(\mathbf{x}_{t}) = \sum_{l_{1} = 1}^{K}\sum_{l_{2} = 1}^{m}\sum_{i = 1}^{\min \{k_{l_{2}},t - 1\}}\sum_{j = 1}^{i}\binom{t}{i - j}\bigl(\lambda_{l_{1}}(\mathbf{W})\lambda_{l_{2}}(\mathbf{L})\bigr)^{t - i + j}c_{l_{1},l_{2}}^{j}\psi_{l_{1}}(\mathbf{W})\otimes \psi_{l_{2}}^{j}(\mathbf{L}).
$$

Now note that by the Perron-Frobenius theorem, the eigenvalue $\lambda_{N}(\mathbf{L})$ with the largest absolute value is real and has multiplicity one. Then, $\max_{l_1,l_2}|\lambda_{l_1}(\mathbf{W})\lambda_{l_2}(\mathbf{L})|$ is attained at either $\lambda_{1}(\mathbf{W})\lambda_{N}(\mathbf{L})$ or $\lambda_{K}(\mathbf{W})\lambda_{N}(\mathbf{L})$. Analogously to the proof of Proposition E.7, we can show that the corresponding GCN is $(1 - \lambda_{N}(\mathbf{L}))$-FD. Now $1 - \lambda_{N}(\mathbf{L}) = \lambda_{1}(\mathbf{I} - \mathbf{L})$, and $\lambda_{1}(\mathbf{I} - \mathbf{L})$-FD corresponds to LFD; hence the GCN oversmooths.

# F Appendix for the Cycle Graph Example

Consider the cycle graph with $N$ nodes numbered from 0 to $N - 1$. Since each node has degree 2, the SNA $\mathbf{L} = \mathbf{A} / 2$ is a circulant matrix generated by the vector $\mathbf{v} = (\mathbf{e}_1 + \mathbf{e}_{N - 1}) / 2$. Denoting $\omega = \exp (2\pi i / N)$, the eigenvectors can be computed as

$$
\mathbf{v}_{j} = \frac{1}{\sqrt{N}} \left(1, \omega^{j}, \omega^{2 j}, \dots , \omega^{(N - 1) j}\right),
$$

associated to the eigenvalue $\lambda_{j} = \cos (2\pi j / N)$. First, we can note that $\lambda_{j} = \lambda_{N - j}$ for all $j\in \{1,\dots ,N / 2\}$; therefore, the multiplicity of each eigenvalue is 2, except for $\lambda_0$ and, if $N$ is even, $\lambda_{N / 2}$. Since the matrix is symmetric, there exists a basis of real eigenvectors.
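As a quick numerical sanity check of the circulant spectrum stated above, one can diagonalize the cycle SNA directly; the following numpy sketch is our own illustration, with $N = 8$ an arbitrary choice:

```python
import numpy as np

N = 8
# SNA of the N-cycle: adjacency matrix divided by the constant degree 2.
A = np.zeros((N, N))
for n in range(N):
    A[n, (n + 1) % N] = A[n, (n - 1) % N] = 1.0
L = A / 2

# The spectrum matches {cos(2*pi*j/N)}_{j=0..N-1}, with each value other
# than +-1 appearing twice, as claimed above.
spectrum = np.sort(np.linalg.eigvalsh(L))
expected = np.sort(np.cos(2 * np.pi * np.arange(N) / N))
```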
A simple calculation

$$
\mathbf{L} \Re \mathbf{v}_{j} + i \mathbf{L} \Im \mathbf{v}_{j} = \mathbf{L} \mathbf{v}_{j} = \lambda_{j} \mathbf{v}_{j} = \lambda_{j} \Re \mathbf{v}_{j} + i \lambda_{j} \Im \mathbf{v}_{j}
$$

shows that $\Re \mathbf{v}_j$ and $\Im \mathbf{v}_j$, defined as

$$
\Re \mathbf{v}_{j} = \frac{1}{\sqrt{N}} \left(\cos \left(\frac{2 \pi j n}{N}\right)\right)_{n = 0}^{N - 1}, \quad \Im \mathbf{v}_{j} = \frac{1}{\sqrt{N}} \left(\sin \left(\frac{2 \pi j n}{N}\right)\right)_{n = 0}^{N - 1},
$$

are two eigenvectors of the same eigenvalue $\lambda_{j}$. To show that they are linearly independent, we determine under which conditions

$$
0 = a \Re \mathbf{v}_{j} + b \Im \mathbf{v}_{j}.
$$

We note that the previous condition implies that for all $n \notin \{0, N/2\}$

$$
\begin{array}{l} 0 = a \cos \left(\frac{2 \pi j n}{N}\right) + b \sin \left(\frac{2 \pi j n}{N}\right) \\ = \sqrt{a^{2} + b^{2}} \sin \left(\frac{2 \pi j n}{N} + \arctan \left(\frac{b}{a}\right)\right). \\ \end{array}
$$

Suppose $a, b \neq 0$; then it must be that

$$
\frac{2 \pi j n}{N} + \arctan \left(\frac{b}{a}\right) = k \pi , \quad k \in \mathbb{Z},
$$

which is equivalent to

$$
2 j n = \left(k - \frac{\arctan \left(\frac{b}{a}\right)}{\pi}\right) N, \quad k \in \mathbb{Z}.
$$

The left-hand side is always an integer, while the right-hand side is an integer if and only if $b = 0$. This reduces the conditions to

$$
\left\{ \begin{array}{l} a \cos \left(\frac{2 \pi j n}{N}\right) = 0 \\ | a | \sin \left(\frac{2 \pi j n}{N}\right) = 0 \end{array} \right.
$$

which is true if and only if $a = 0$. Consider now an even number of nodes $N$; the eigenspace of $\lambda_{N/2} = -1$ is spanned by

$$
\mathbf{v}_{N / 2} = \frac{1}{\sqrt{N}} \left((- 1)^{n}\right)_{n = 0}^{N - 1};
$$

hence, the maximal eigenvector of $\mathbf{I} - \mathbf{L}$ guarantees homophily 0.
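For even $N$, both the eigenvector property of $\mathbf{v}_{N/2}$ and its homophily-0 sign alternation on adjacent nodes are easy to verify numerically; a small sketch of ours:

```python
import numpy as np

N = 8  # any even number of nodes
A = np.zeros((N, N))
for n in range(N):
    A[n, (n + 1) % N] = A[n, (n - 1) % N] = 1.0
L = A / 2

# v_{N/2} = ((-1)^n)_n / sqrt(N)
v = (-1.0) ** np.arange(N) / np.sqrt(N)

is_eigvec = np.allclose(L @ v, -1.0 * v)  # eigenvalue -1 of L, i.e. 2 of I - L
# every edge of the cycle joins oppositely signed entries: homophily 0
alternates = all(v[n] * v[(n + 1) % N] < 0 for n in range(N))
```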
Consider now a number of nodes $N$ divisible by 4; the eigenspace of $\lambda_{N / 4} = 0$ has basis

$$
\Re \mathbf{v}_{N / 4} = \frac{1}{\sqrt{N}} \left(\cos \left(\frac{\pi n}{2}\right)\right)_{n = 0}^{N - 1}, \quad \Im \mathbf{v}_{N / 4} = \frac{1}{\sqrt{N}} \left(\sin \left(\frac{\pi n}{2}\right)\right)_{n = 0}^{N - 1}.
$$

Their sum is then equivalent to

$$
\begin{array}{l} \Re \mathbf{v}_{N / 4} + \Im \mathbf{v}_{N / 4} = \frac{1}{\sqrt{N}} \left(\cos \left(\frac{\pi n}{2}\right) + \sin \left(\frac{\pi n}{2}\right)\right)_{n = 0}^{N - 1} \\ = \frac{\sqrt{2}}{\sqrt{N}} \left(\sin \left(\frac{\pi n}{2} + \frac{\pi}{4}\right)\right)_{n = 0}^{N - 1} \\ = \sqrt{\frac{2}{N}} \left(\sin \left(\frac{\pi}{4} (2 n + 1)\right)\right)_{n = 0}^{N - 1} \\ = \frac{1}{\sqrt{N}} (1, 1, - 1, - 1, \dots); \\ \end{array}
$$

hence, the mid eigenvector of $\mathbf{L}$ guarantees homophily $1/2$. A visual explanation is shown in Figure 4.
# A Framework for Fast and Stable Representations of Multiparameter Persistent Homology Decompositions

David Loiseaux

DataShape

Centre Inria d'Université Côte d'Azur

Biot, France

Mathieu Carrière

DataShape

Centre Inria d'Université Côte d'Azur

Biot, France

Andrew J. Blumberg

Irving Institute for Cancer Dynamics

Columbia University

New York, NY, USA

# Abstract

Topological data analysis (TDA) is an area of data science that focuses on using invariants from algebraic topology to provide multiscale shape descriptors for geometric data sets, such as graphs and point clouds. One of the most important such descriptors is persistent homology, which encodes the change in shape as a filtration parameter changes; a typical parameter is the feature scale. For many data sets, it is useful to simultaneously vary multiple filtration parameters, for example feature scale and density. While the theoretical properties of single parameter persistent homology are well understood, less is known about the multiparameter case. In particular, a central question is the problem of representing multiparameter persistent homology by elements of a vector space for integration with standard machine learning algorithms.
Existing approaches to this problem either ignore most of the multiparameter information to reduce to the one-parameter case or are heuristic and potentially unstable in the face of noise. In this article, we introduce a new general representation framework that leverages recent results on decompositions of multiparameter persistent homology. This framework is rich in information, fast to compute, and encompasses previous approaches. Moreover, we establish theoretical stability guarantees under this framework as well as efficient algorithms for practical computation, making this framework an applicable and versatile tool for analyzing geometric data. We validate our stability results and algorithms with numerical experiments that demonstrate statistical convergence, prediction accuracy, and fast running times on several real data sets. + +# 1 Introduction + +Topological Data Analysis (TDA) [8] is a methodology for analyzing data sets using multiscale shape descriptors coming from algebraic topology. There has been intense interest in the field in the last decade, since topological features promise to allow practitioners to compute and encode information that classical approaches do not capture. Moreover, TDA rests on solid theoretical grounds, with guarantees accompanying many of its methods and descriptors. TDA has proved useful in a wide variety of application areas, including computer graphics [13, 33], computational biology [34], and material science [6, 35], among many others. + +The main tool of TDA is persistent homology. In its most standard form, one is given a finite metric space $X$ (e.g., a finite set of points and their pairwise distances) and a continuous function + +$f: X \to \mathbb{R}$ . 
This function usually represents a parameter of interest (such as scale or density for point clouds, marker genes for single-cell data, etc.), and the goal of persistent homology is to characterize the topological variations of this function on the data at all possible scales. Of course, the idea of considering multiscale representations of geometric data is not new [14, 32, 41]; the contribution of persistent homology is to obtain a novel and theoretically tractable multiscale shape descriptor. More formally, persistent homology is achieved by computing the so-called persistence barcode of $f$, which is obtained by looking at all sublevel sets of the form $\{f^{-1}((-\infty, \alpha])\}_{\alpha \in \mathbb{R}}$, also called the filtration induced by $f$, and by computing a decomposition of this filtration, that is, by recording the appearances and disappearances of topological features (connected components, loops, enclosed spheres, etc.) in these sets. When such a feature appears (resp. disappears), e.g., in a sublevel set $f^{-1}((-\infty, \alpha_b])$, we call the corresponding threshold $\alpha_b$ (resp. $\alpha_d$) the birth time (resp. death time) of the topological feature, and we summarize this information in a set of intervals, or bars, called the persistence barcode $D(f) := \{(\alpha_b, \alpha_d)\}_{\alpha \in A} \subset \mathbb{R} \times (\mathbb{R} \cup \{\infty\})$. Moreover, the bar length $\alpha_d - \alpha_b$ often serves as a proxy for the statistical significance of the corresponding feature.

However, an inherent limitation of this formulation of persistent homology is that it can handle only a single filtration parameter $f$, whereas in practice it is common that one has to deal with multiple parameters. This translates into multiple filtration functions: a standard example is when one aims at obtaining a meaningful topological representation of a noisy point cloud.
In this case, both feature scale and density functions are necessary (see Appendix A). An extension of persistent homology to several filtration functions is called multiparameter persistent homology [3, 9], and studies the topological variations of a continuous multiparameter function $f: X \to \mathbb{R}^n$ with $n \in \mathbb{N}^*$. This setting is notoriously difficult to analyze theoretically as there is no result ensuring the existence of an analogue of persistence barcodes, i.e., a decomposition into subsets of $\mathbb{R}^n$, each representing the lifespan of a topological feature.

Still, it remains possible to define weaker topological invariants in this setting. The most common one is the so-called rank invariant (as well as its variations, such as the generalized rank invariant [24], and its decompositions, such as the signed barcodes [4]), which describes how the topological features associated to any pair of sublevel sets $\{x\in X:f(x)\leq \alpha \}$ and $\{x\in X:f(x)\leq \beta \}$ such that $\alpha \leq \beta$ (w.r.t. the partial order in $\mathbb{R}^n$) are connected. The rank invariant is a construction in abstract algebra, and so the task of finding appropriate representations of this invariant, i.e., embeddings into Hilbert spaces, is critical. Hence, a number of such representations have been defined, which first approximate the rank invariant by computing persistence barcodes from several linear combinations of filtrations, a procedure often referred to as the fibered barcode (see Appendix E), and then aggregate known single-parameter representations for them [17, 18, 39]. Adequate representations of the generalized rank invariant have also been investigated recently for $n = 2$ [42].

However, the rank invariant, and its associated representations, are known to be much less informative than decompositions (when they exist): many functions have different decompositions yet the same rank invariants.
Therefore, the aforementioned representations can encode only limited multiparameter topological information. Instead, in this work, we focus on candidate decompositions of the function, in order to create descriptors that are strictly more powerful than the rank invariant. Indeed, while there is no general decomposition theorem, there is recent work that constructs candidate decompositions in terms of simple pieces [1, 7, 29] that always exist but do not necessarily suffice to reconstruct all of the multiparameter information. Nonetheless, they are strictly more informative than the rank invariant under mild conditions, are stable, and approximate the true decomposition when it exists1. For instance, in Figure 2, we present a bifiltration of a noisy point cloud with scale and density (left), and a corresponding candidate decomposition composed of subsets of $\mathbb{R}^2$, each representing a topological feature (middle). One can see that there is a large green subset in the decomposition that represents the circle formed by the points that are not outliers (also highlighted in green in the bifiltration).

![](images/dd38d743f4a56471a64d3274c894b08d87085f528d2cd11e68fec174c476dc66.jpg)
Figure 1: Common pipelines for the use of multiparameter persistent homology in data science—our work provides new contributions to the arrow highlighted in red.

Unfortunately, while more informative, candidate decompositions suffer from the same problem as the rank invariant; they also need appropriate representations in order to be processed by standard data science methods. In this work, we bridge this gap by providing new representations designed for candidate decompositions. See Figure 1 for a summarizing figure.

![](images/95b3995c23e8ed53dbf996282582844cfd3612d6c8c6e82ddd6822a98ac199e8.jpg)
Figure 2: (left) Bi-filtration of a noisy point cloud induced by both feature scale (using unions of balls with increasing radii) and sublevel sets of codensity.
The cycle highlighted in the green zone can be detected as a large subset in the corresponding candidate decomposition computed by the MMA method [29] (middle), and in our representation of it (right).

![](images/c49594938448c6521878a4c3ea94aaed752b1638d13427ef95878d728c29c06f.jpg)

![](images/98239ff5774501fa18c053cc9925f1743645553ea8dfa01214911d6a454a1e72.jpg)

Contributions. Our contributions in this work are listed below:

- We provide a general framework that parametrizes representations of multiparameter persistent homology decompositions (Definition 1) and which encompasses previous approaches in the literature. These representations take the form of a parametrized family of continuous functions on $\mathbb{R}^n$ that can be binned into images for visualization and data science.
- We identify parameters in this framework that result in representations that have stability guarantees while still encoding more information than the rank invariant (see Theorem 1).
- We illustrate the performance of our framework with numerical experiments: (1) We demonstrate the practical consequences of the stability theorem by measuring the statistical convergence of our representations. (2) We achieve the best performance with the lowest runtime on several classification tasks on public data sets (see Sections 4.1 and 4.2).

Related work. Closely related to our method is the recent contribution [10], which also proposes a representation for decompositions. However, their approach, while being efficient in practice, is a heuristic with no corresponding mathematical guarantees. In particular, it is known to be unstable: similar decompositions can lead to very different representations, as shown in Appendix B. Our approach can be understood as a generalization of the work of [10], with new mathematical guarantees that make it possible to derive, e.g., statistical rates of convergence.

Outline. Our work is organized as follows.
In Section 2, we recall the basics of multiparameter persistent homology. Next, in Section 3 we present our general framework and state our associated stability result. Finally, we showcase the numerical performances of our representations in Section 4, and we conclude in Section 5.

# 2 Background

In this section, we briefly recall the basics of single and multiparameter persistent homology, and refer the reader to Appendix C, Appendix D, and [31, 34] for a more complete treatment.

Persistent homology. The basic building block of persistent homology is a filtered topological space $X$, by which we mean a topological space $X$ together with a function $f\colon X\to \mathbb{R}$ (for instance, in Figure 5, $X = \mathbb{R}^2$ and $f = f_{P}$). Then, given $\alpha >0$, we call $F(\alpha)\coloneqq f^{-1}((-\infty ,\alpha])\subseteq X$ the sublevel set of $f$ at level $\alpha$. Given levels $\alpha_{1}\leq \dots \leq \alpha_{N}$, the corresponding sublevel sets are nested w.r.t. inclusion, i.e., one has $F(\alpha_{1})\subseteq F(\alpha_{2})\subseteq \ldots \subseteq F(\alpha_{i})\subseteq \ldots \subseteq F(\alpha_{N})$. This system is an example of a filtration of $X$, where a filtration is generally defined as a sequence of nested subspaces $X_{1}\subseteq \ldots \subseteq X_{i}\subseteq \ldots \subseteq X$. Then, the core idea of persistent homology is to apply the $k$th homology functor $H_{k}$ on each $F(\alpha_{i})$. We do not define the homology functor explicitly here, but simply recall that each $H_{k}(F(\alpha_{i}))$ is a vector space, whose basis elements represent the $k$th dimensional topological features of $F(\alpha_{i})$ (connected components for $k = 0$, loops for $k = 1$, spheres for $k = 2$, etc.). Moreover, the inclusions $F(\alpha_{i})\subseteq F(\alpha_{i + 1})$ translate into linear maps $H_{k}(F(\alpha_{i}))\to H_{k}(F(\alpha_{i + 1}))$, which connect the features of $F(\alpha_{i})$ and $F(\alpha_{i + 1})$ together.
This makes it possible to keep track of the topological features in the filtration and to record their levels (often called times) of appearance and disappearance. More formally, such a sequence of vector spaces connected with linear maps $\mathbb{M} = H_*(F(\alpha_1))\to \dots \to H_*(F(\alpha_N))$ is called a persistence module, and the standard decomposition theorem [15, Theorem 2.8] states that this module can always be decomposed as $\mathbb{M} = \oplus_{i = 1}^{m}\mathbb{I}[\alpha_{b_i},\alpha_{d_i}]$, where $\mathbb{I}[\alpha_{b_i},\alpha_{d_i}]$ stands for a module of dimension 1 (i.e., that represents a single topological feature) between $\alpha_{b_i}$ and $\alpha_{d_i}$, and dimension 0 (i.e., that represents no feature) elsewhere. It is thus convenient to summarize such a module with its persistence barcode $D(\mathbb{M}) = \{[\alpha_{b_i},\alpha_{d_i}]\}_{1\leq i\leq m}$. Note that in practice, one is only given a sampling of the topological space $X$, which is usually unknown. In that case, persistence barcodes are computed using combinatorial models of $X$ computed from the data, called simplicial complexes. See Appendix C.

Multiparameter persistent homology. The persistence modules defined above extend straightforwardly when there are multiple filtration functions. An $n$-filtration, or multifiltration, induced by a function $f: X \to \mathbb{R}^n$, is the family of sublevel sets $F = \{F(\alpha)\}_{\alpha \in \mathbb{R}^n}$, where $F(\alpha) := \{x \in X: f(x) \leq \alpha\}$ and $\leq$ denotes the partial order of $\mathbb{R}^n$. Again, applying the homology functor $H_k$ on the multifiltration $F$ induces a multiparameter persistence module $\mathbb{M}$. However, contrary to the single-parameter case, the algebraic structure of such a module is very intricate, and there is no general decomposition into modules of dimension at most 1, and thus no analogue of the persistence barcode.
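By contrast, the single-parameter barcode recalled above is elementary to compute in dimension 0: sublevel-set persistence of a function sampled on a path graph reduces to a union-find sweep in which, at each merge, the elder rule kills the component with the younger birth. The sketch below is our own illustration, not code from the paper.

```python
import math

def sublevel_persistence_0d(values):
    """0-dim persistence barcode of the sublevel filtration of `values`
    on a path graph (vertex i is adjacent to vertices i-1 and i+1)."""
    n = len(values)
    parent, birth, bars = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for v in sorted(range(n), key=lambda i: values[i]):
        parent[v], birth[v] = v, values[v]
        for nb in (v - 1, v + 1):                 # path-graph neighbors
            if 0 <= nb < n and nb in parent:
                rv, rn = find(v), find(nb)
                if rv != rn:
                    # elder rule: the younger component dies at values[v]
                    old, young = (rv, rn) if birth[rv] <= birth[rn] else (rn, rv)
                    if birth[young] < values[v]:  # skip zero-length bars
                        bars.append((birth[young], values[v]))
                    parent[young] = old
    for r in {find(i) for i in range(n)}:
        bars.append((birth[r], math.inf))         # essential component
    return sorted(bars)

# Two local minima (values 0.0 and 1.0) -> one essential bar and one finite bar.
barcode = sublevel_persistence_0d([0.0, 2.0, 1.0, 3.0])
```

Here the component born at level 1.0 merges into the elder one at level 2.0, producing the finite bar (1.0, 2.0), while the component born at 0.0 never dies.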
Instead, the rank invariant has been introduced as a weaker invariant: it is defined, for a module $\mathbb{M}$ , as the function $\mathrm{RI}:(\alpha,\beta) \mapsto \mathrm{rank}(\mathbb{M}(\alpha) \to \mathbb{M}(\beta))$ for any $\alpha \leq \beta$ , but is also known to miss a lot of structural properties of $\mathbb{M}$ . To remedy this, several methods have been developed to compute candidate decompositions for $\mathbb{M}$ [1, 7, 29], where a candidate decomposition is a module $\tilde{\mathbb{M}}$ that can be decomposed as $\tilde{\mathbb{M}} \simeq \oplus_{i=1}^{m} M_i$ , where each $M_i$ is an interval module, i.e., its dimension is at most 1, and its support $\operatorname{supp}(M_i) := \{\alpha \in \mathbb{R}^n : \dim(M_i(\alpha)) = 1\}$ is an interval of $\mathbb{R}^n$ (see Appendix D). In particular, when $\mathbb{M}$ does decompose into intervals, candidate decompositions must agree with the true decomposition. One also often asks candidate decompositions to preserve the rank invariant. + +Distances. Finally, multiparameter persistence modules can be compared with two standard distances: the interleaving and bottleneck (or $\ell^{\infty}$ ) distances. Their explicit definitions are technical and not necessary for our main exposition, so we refer the reader to, e.g., [3, Sections 6.1, 6.4] and Appendix D for more details. The stability theorem [27, Theorem 5.3] states that multiparameter persistence modules are stable: $d_{\mathrm{I}}(\mathbb{M},\mathbb{M}^{\prime})\leq \| f - f^{\prime}\|_{\infty}$ , where $f$ and $f^{\prime}$ are continuous multiparameter functions associated to $\mathbb{M}$ and $\mathbb{M}^{\prime}$ respectively. + +# 3 T-CDR: a template for representations of candidate decompositions + +Even though candidate decompositions of multiparameter persistence modules are known to encode useful data information, their algebraic definitions make them not suitable for subsequent data science and machine learning purposes. 
Hence, in this section, we introduce the Template Candidate Decomposition Representation (T-CDR): a general framework and template system for representations of candidate decompositions, i.e., maps defined on the space of candidate decompositions and taking values in an (implicit or explicit) Hilbert space.

# 3.1 T-CDR definition

Notations. In this article, by a slight abuse of notation, we will make no difference in the notations between an interval module and its support, and we will denote the restriction of an interval support $M$ to a given line $\ell$ as $M|_{\ell}$.

Definition 1. Let $\mathbb{M} = \oplus_{i=1}^{m} M_i$ be a candidate decomposition, and let $\mathcal{M}$ be the space of interval modules. The Template Candidate Decomposition Representation (T-CDR) of $\mathbb{M}$ is:

$$
V_{\mathrm{op}, w, \phi}(\mathbb{M}) = \mathrm{op} \left(\left\{w \left(M_{i}\right) \cdot \phi \left(M_{i}\right) \right\}_{i = 1}^{m}\right), \tag{1}
$$

where op is a permutation invariant operation (sum, max, min, mean, etc.), $w: \mathcal{M} \to \mathbb{R}$ is a weight function, and $\phi: \mathcal{M} \to \mathcal{H}$ sends any interval module to a vector in a Hilbert space $\mathcal{H}$.

The general definition of T-CDR is inspired by a similar framework that was introduced for single-parameter persistence with the automatic representation method PersLay [11].

Relation to previous work.
Interestingly, whenever applied on candidate decompositions that preserve the rank invariant, specific choices of op, $w$ and $\phi$ reproduce previous representations:

- Using $w: M_i \mapsto 1, \phi: M_i \mapsto \left\{ \begin{array}{ll} \mathbb{R}^n & \to \mathbb{R} \\ x & \mapsto \Lambda(x, M_i|_{\ell_x}) \end{array} \right.$ and $\mathrm{op} = k$ th maximum, where $\ell_x$ is the diagonal line crossing $x$, and $\Lambda(\cdot, \ell)$ denotes the tent function associated to any segment $\ell \subset \mathbb{R}^n$, induces the $k$ th multiparameter persistence landscape (MPL) [39].
- Using $w: M_i \mapsto 1$, $\phi: M_i \mapsto \left\{ \begin{array}{ll} \mathbb{R}^n \times \mathbb{R}^n & \to \mathbb{R}^d \\ p, q & \mapsto w'(M_i \cap [p, q]) \cdot \phi'(M_i \cap [p, q]) \end{array} \right.$ and $\mathrm{op} = \mathrm{op}'$, where $\mathrm{op}', w'$ and $\phi'$ are the parameters of any persistence diagram representation from PersLay, induces the multiparameter persistence kernel (MPK) [17].
- Using $w: M_i \mapsto \mathrm{vol}(M_i)$, $\phi: M_i \mapsto \left\{ \begin{array}{ll} \mathbb{R}^n & \to \mathbb{R} \\ x & \mapsto \exp(-\min_{\ell \in L} d(x, M_i|_\ell)^2 / \sigma^2) \end{array} \right.$ and $\mathrm{op} = \sum$, where $L$ is a set of (pre-defined) diagonal lines, induces the multiparameter persistence image (MPI) [10].

Recall that the first two approaches are built from fibered barcodes and rank invariants, and that it is easy to find persistence modules that are different yet share the same rank invariant (see [38, Figure 3]). On the other hand, the third approach uses more information about the candidate decomposition, but is known to be unstable (see Appendix B). Hence, in the next section, we focus on specific choices for the T-CDR parameters that induce stable yet informative representations.
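Before specializing the parameters, the template of Definition 1 can be sketched as a higher-order function. In the toy instantiation below, interval supports are simplified to axis-aligned rectangles, and `w`, `phi`, and `op` are illustrative choices of ours, not the S-CDR parameters or any representation from the paper:

```python
import numpy as np

def t_cdr(modules, op, w, phi):
    """Eq. (1): V_{op,w,phi}(M) = op({ w(M_i) * phi(M_i) }_{i=1..m})."""
    return op([w(M) * phi(M) for M in modules])

# Toy setting: each interval module's support is a rectangle
# (x0, y0, x1, y1) in [0, 1]^2, rasterized on a 50 x 50 grid.
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))

def w(rect):                      # weight: area of the support
    x0, y0, x1, y1 = rect
    return (x1 - x0) * (y1 - y0)

def phi(rect):                    # indicator function of the support on the grid
    x0, y0, x1, y1 = rect
    return ((xs >= x0) & (xs <= x1) & (ys >= y0) & (ys <= y1)).astype(float)

modules = [(0.1, 0.1, 0.6, 0.5), (0.4, 0.4, 0.9, 0.9)]
image = t_cdr(modules, op=lambda parts: np.sum(parts, axis=0), w=w, phi=phi)
```

The resulting array can be binned or plotted as an image, in the spirit of the representations discussed above; swapping `op` for `np.max` or `w` for a constant recovers other members of the template family.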
# 3.2 Metric properties

In this section, we study specific parameters for T-CDR (see Definition 1) that induce representations with associated robustness properties. We call this subset of representations Stable Candidate Decomposition Representations (S-CDR), and define them below.

Definition 2. The S-CDR parameters are:

1. the weight function $w: M \mapsto \sup \{ \varepsilon > 0 : \exists y \in \mathbb{R}^n \text{ s.t. } \ell_{y,\varepsilon} \subset \operatorname{supp}(M) \}$ , where $\ell_{y,\varepsilon}$ is the segment between $y - \varepsilon \cdot [1, \ldots, 1]$ and $y + \varepsilon \cdot [1, \ldots, 1]$ ,

2. the individual interval representations $\phi_{\delta}(M):\mathbb{R}^n\to \mathbb{R}$

   (a) $\phi_{\delta}(M)(x) = \frac{1}{\delta} w(\operatorname{supp}(M)\cap R_{x,\delta})$ ,
   (b) $\phi_{\delta}(M)(x) = \frac{1}{(2\delta)^n}\mathrm{vol}(\operatorname{supp}(M)\cap R_{x,\delta})$ ,
   (c) $\phi_{\delta}(M)(x) = \frac{1}{(2\delta)^n}\sup_{x',\delta'}\{\mathrm{vol}(R_{x',\delta'}) : R_{x',\delta'} \subseteq \operatorname{supp}(M) \cap R_{x,\delta}\}$ ,

   where $R_{x,\delta}$ is the hypersquare $\{y\in \mathbb{R}^n:x - \pmb{\delta} \leq y\leq x + \pmb{\delta} \} \subseteq \mathbb{R}^n$ , $\pmb{\delta}\coloneqq \delta \cdot [1,\dots ,1]\in \mathbb{R}^n$ for any $\delta >0$ , and vol denotes the volume of a set in $\mathbb{R}^n$ ,

3. the permutation invariant operators $\mathrm{op} = \sum$ and $\mathrm{op} = \sup$ .

Intuitively, the S-CDR weight function is the length of the largest diagonal segment one can fit inside $\operatorname{supp}(M)$ , and the S-CDR interval representations (a), (b) and (c) are the largest normalized diagonal length, volume, and hypersquare volume that one can fit inside $\operatorname{supp}(M) \cap R_{x,\delta}$ , respectively.
These S-CDR interval representations allow for some trade-off between computational cost and the amount of information that is kept: (a) and (c) are very easy to compute, but (b) encodes more information about interval shapes. See Figure 2 (right) for visualizations.

Equipped with these S-CDR parameters, we can now define the following two S-CDRs, which can be applied to any candidate decomposition $\mathbb{M} = \oplus_{i=1}^{m} M_i$ :

$$
V_{p, \delta}(\mathbb{M}) := \sum_{i = 1}^{m} \frac{w \left(M_{i}\right)^{p}}{\sum_{j = 1}^{m} w \left(M_{j}\right)^{p}} \phi_{\delta} \left(M_{i}\right), \quad (2) \quad V_{\infty , \delta}(\mathbb{M}) := \sup_{1 \leq i \leq m} \phi_{\delta} \left(M_{i}\right). \tag{3}
$$

Stability. The main motivation for introducing S-CDR parameters is that the corresponding S-CDRs are stable in the interleaving and bottleneck distances, as stated in the following theorem.

Theorem 1. Let $\mathbb{M} = \oplus_{i=1}^{m} M_i$ and $\mathbb{M}' = \oplus_{j=1}^{m'} M_j'$ be two candidate decompositions. Assume that $\frac{1}{m} \sum_{i} w(M_i) \geq C$ and $\frac{1}{m'} \sum_{j} w(M_j') \geq C$ for some $C > 0$ .
Then for any $\delta > 0$ , one has

$$
\left\| V_{0, \delta}(\mathbb{M}) - V_{0, \delta}\left(\mathbb{M}^{\prime}\right) \right\|_{\infty} \leq 2 \left(d_{\mathrm{B}}\left(\mathbb{M}, \mathbb{M}^{\prime}\right) \wedge \delta\right) / \delta , \tag{4}
$$

$$
\left\| V_{1, \delta}(\mathbb{M}) - V_{1, \delta}\left(\mathbb{M}^{\prime}\right) \right\|_{\infty} \leq \left[ 4 + \frac{2}{C} \right] \left(d_{\mathrm{B}}\left(\mathbb{M}, \mathbb{M}^{\prime}\right) \wedge \delta\right) / \delta , \tag{5}
$$

$$
\left\| V_{\infty , \delta}(\mathbb{M}) - V_{\infty , \delta}\left(\mathbb{M}^{\prime}\right) \right\|_{\infty} \leq \left(d_{\mathrm{I}}\left(\mathbb{M}, \mathbb{M}^{\prime}\right) \wedge \delta\right) / \delta , \tag{6}
$$

where $\wedge$ stands for minimum.

A proof of Theorem 1 can be found in Appendix F.

These results are the main theoretical contribution of this work, as the only other decomposition-based representation in the literature [10] has no such guarantees. The other representations [17, 18, 39, 42] enjoy guarantees similar to ours, but are computed from the rank invariant and do not exploit the information contained in decompositions. Theorem 1 shows that S-CDRs bring the best of both worlds: these representations are richer than the rank invariant and stable at the same time. We also provide an additional stability result with a similar, yet more complicated, representation in Appendix G, whose upper bound does not involve taking a minimum.

Remark 1. S-CDRs are injective representations: if the supports of two interval modules are different, then their corresponding S-CDRs (evaluated on a point that belongs to the support of one interval but not to the support of the other) will differ, provided that $\delta$ is sufficiently small.

# 4 Numerical Experiments

In this section, we illustrate the efficiency of our S-CDRs with numerical experiments.
First, we explore the stability theorem in Section 4.1 by studying the convergence rates, both theoretically and empirically, of S-CDRs on various data sets. Then, we showcase the efficiency of S-CDRs on classification tasks in Section 4.2, and we investigate their running times in Section 4.3. Our code for computing S-CDRs is based on the MMA [29] and Gudhi [36] libraries for computing candidate decompositions. It is publicly available at https://github.com/DavidLapous/multipipers and will be merged as a module of the Gudhi library. We also provide pseudo-code in Appendix H.

# 4.1 Convergence rates

In this section, we study the convergence rate of S-CDRs with respect to the number of sampled points, when computed from specific bifiltrations. Similar to the single parameter persistence setting [16], these rates are derived from Theorem 1. Indeed, since concentration inequalities for multiparameter persistence modules have already been described in the literature, these inequalities transfer to our representations. Note that while Equations (7) and (8), which provide such rates, are stated for the S-CDR in (3), they also hold for the S-CDR in (2).

Measure bifiltration. Let $\mu$ be a compactly supported probability measure of $\mathbb{R}^D$, and let $\mu_n$ be the discrete measure associated to a sampling of $n$ points from $\mu$. The measure bifiltration associated to $\mu$ and $\mu_n$ is defined as $\mathcal{F}_{r,t}^\mu := \{x \in \mathbb{R}^D : \mu(B(x, r)) \leq t\}$, where $B(x, r)$ denotes the Euclidean ball centered on $x$ with radius $r$. Now, let $\mathbb{M}$ and $\mathbb{M}_n$ be the multiparameter persistence modules obtained from applying the homology functor on top of the measure bifiltrations $\mathcal{F}^\mu$ and $\mathcal{F}^{\mu_n}$.
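For the empirical measure $\mu_n$, the membership test defining $\mathcal{F}^{\mu_n}_{r,t}$ amounts to counting sample points in a ball; a minimal sketch (the function name and toy data are ours, not part of any library):

```python
import numpy as np

def in_measure_bifiltration(x, sample, r, t):
    """Test whether x belongs to F^{mu_n}_{r,t} = {x : mu_n(B(x, r)) <= t},
    where mu_n is the empirical measure of `sample` (each point has mass 1/n)."""
    sample = np.atleast_2d(sample)
    mass = np.mean(np.linalg.norm(sample - x, axis=1) <= r)  # mu_n(B(x, r))
    return mass <= t

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
# near the origin with a small radius, only one of the four points is covered
print(in_measure_bifiltration(np.array([0.0, 0.0]), pts, r=0.5, t=0.3))  # True (mass 1/4)
```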
These modules are known to enjoy the following stability result [2, Theorem 3.1, Proposition 2.23 (i)]: $d_{\mathrm{I}}(\mathbb{M}, \mathbb{M}_n) \leq d_{\mathrm{Pr}}(\mu, \mu_n) \leq \min(d_W^p(\mu, \mu_n)^{\frac{1}{2}}, d_W^p(\mu, \mu_n)^{\frac{p}{p+1}})$, where $d_W^p$ and $d_{\mathrm{Pr}}$ stand for the $p$-Wasserstein and Prokhorov distances between probability measures. Combining these inequalities with Theorem 1, then taking expectations and applying the concentration inequalities of the Wasserstein distance (see [26, Theorem 3.1] and [21, Theorem 1]) lead to:

$$
\delta \mathbb{E}\left[\left\| V_{\infty,\delta}(\mathbb{M}) - V_{\infty,\delta}(\mathbb{M}_n)\right\|_{\infty}\right] \leq \left(c_{p,q}\, \mathbb{E}\left(|X|^q\right) n^{-\left(\frac{1}{2p \vee d}\right) \wedge \frac{1}{p} - \frac{1}{q}} \log^{\alpha/q} n\right)^{\frac{p}{p+1}}, \tag{7}
$$

where $\vee$ stands for maximum, $\alpha = 2$ if $2p = q = d$, $\alpha = 1$ if $d \neq 2p$ and $q = dp/(d-p) \wedge 2p$ or $q > d = 2p$, and $\alpha = 0$ otherwise, $c_{p,q}$ is a constant that depends on $p$ and $q$, and $X$ is a random variable of law $\mu$.

Čech complex and density. A limitation of the measure bifiltration is that it can be difficult to compute. Hence, we now focus on another, easier-to-compute bifiltration. Let $X$ be a smooth compact $d$-submanifold of $\mathbb{R}^D$ ($d \leq D$), and $\mu$ be a measure on $X$ with density $f$ with respect to the uniform measure on $X$. We now define the bifiltration $\mathcal{F}^{C,f}$ with:

$$
\mathcal{F}_{u,v}^{C,f} := \operatorname{Cech}(u) \cap f^{-1}([v, \infty)) = \left\{x \in \mathbb{R}^D : d(x, X) \leq u,\; f(x) \geq v\right\}.
$$

Moreover, given a set $X_{n}$ of $n$ points sampled from $\mu$, we also consider the approximate bifiltration $\mathcal{F}^{C,f_n}$, where $f_{n}\colon X\to \mathbb{R}$ is an estimation of $f$ (such as, e.g., a kernel density estimator). Let $\mathbb{M}$ and $\mathbb{M}_n$ be the multiparameter persistence modules associated to $\mathcal{F}^{C,f}$ and $\mathcal{F}^{C,f_n}$. Then, the stability of the interleaving distance [27, Theorem 5.3] ensures:

$$
d_{\mathrm{I}}(\mathbb{M}, \mathbb{M}_n) \leq \| f - f_n \|_{\infty} \vee d_H(X, X_n),
$$

where $d_H$ stands for the Hausdorff distance. Moreover, concentration inequalities for the Hausdorff distance and kernel density estimators are also available in the literature (see [16, Theorem 4] and [23, Corollary 15]). More precisely, when the density $f$ is $L$-Lipschitz and bounded from above and from below, i.e., when $0 < f_{\mathrm{min}} \leq f \leq f_{\mathrm{max}} < \infty$, and when $f_n$ is a kernel density estimator of $f$ with associated kernel $k$, one has:

$$
\mathbb{E}\left(d_H(X, X_n)\right) \lesssim \left(\frac{\log n}{n}\right)^{\frac{1}{d}} \quad \text{and} \quad \mathbb{E}\left(\| f - f_n \|_{\infty}\right) \lesssim L h_n + \sqrt{\frac{\log(1/h_n)}{n h_n^d}},
$$

where $h_n$ is the (adaptive) bandwidth of the kernel $k$. In particular, if $\mu$ is a measure comparable to the uniform measure of a $d = 2$-manifold, then for any stationary sequence $h_n \coloneqq h > 0$, and considering a Gaussian kernel $k$, one has:

$$
\delta \mathbb{E}\left[\left\| V_{\infty,\delta}(\mathbb{M}) - V_{\infty,\delta}(\mathbb{M}_n)\right\|_{\infty}\right] \lesssim \sqrt{\frac{\log n}{n}} + L h. \tag{8}
$$

Empirical convergence rates. Now that we have established the theoretical convergence rates of S-CDRs, we estimate and validate them empirically on data sets.
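The empirical rates reported below are read off as slopes in log-log coordinates; a small sketch of that fit, checked on synthetic distances assumed to decay exactly like $n^{-1/2}$:

```python
import numpy as np

def fit_rate(ns, dists):
    """Estimate an empirical convergence rate: the slope of log(dist) vs. log(n)."""
    slope, _ = np.polyfit(np.log(ns), np.log(dists), deg=1)
    return slope

# synthetic check: distances decaying exactly like n^{-1/2} should give slope -0.5
ns = np.array([100, 200, 400, 800, 1600])
dists = 3.0 * ns ** -0.5
rate = fit_rate(ns, dists)
```

On real data the points do not lie exactly on a line, and the fitted slope is only an estimate of the exponent in the theoretical bound.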
We will first study a synthetic data set and then a real data set of point clouds obtained with immunohistochemistry. We also illustrate how the stability of S-CDRs (stated in Theorem 1) is critical for obtaining such convergence in Appendix B, where we show that our main competitor, the multiparameter persistence image [11], is unstable and thus cannot achieve convergence, both theoretically and numerically.

Annulus with non-uniform density. In this synthetic example, we generate an annulus of 25,000 points in $\mathbb{R}^2$ with a non-uniform density, displayed in Figure 3a. Then, we compute the bifiltration $\mathcal{F}^{C,f_n}$ corresponding to the Alpha filtration and the sublevel set filtration of a kernel density estimator, with bandwidth parameter $h = 0.1$, on the complete Alpha simplicial complex. Finally, we compute the candidate decompositions and associated S-CDRs of the associated multiparameter module (in homology dimension 1), and their normalized distances to the target representation, using either $\| \cdot \|_2^2$ or $\| \cdot \|_\infty$. The corresponding distances for various numbers of sample points are displayed in log-log plots in Figure 3b. One can see that the empirical rate is roughly consistent with the theoretical one ($-1/2$ for $\| \cdot \|_\infty$ and $-1$ for $\| \cdot \|_2$) even when $p \neq \infty$ (in which case our S-CDRs are stable for $d_{\mathrm{B}}$ but theoretically not for $d_{\mathrm{I}}$).

Figure 3: Convergence rate of synthetic data set.
![](images/29655082a263511f8bc917828e0a9bd24f2e762158914a84ac67aa1ec9b95a82.jpg)
(a) Scatter plot of the synthetic data set colored by a kernel density estimator. (b) (left) $\| \cdot \|_2^2$ and (right) $\| \cdot \|_{\infty}$ distances between the target representation and the empirical one w.r.t. $n$.
+ +![](images/41bed7fb30bdf9234de91dce477342f72517786fd53934ec1d463f96adda9a35.jpg) + +![](images/6009f33c10130c737fbe88cc02193728f728a3a902b48247385c549637f4ab89.jpg) + +Immunohistochemistry data. In our second experiment, we consider a point cloud representing cells, taken from [40], see Figure 4a. These cells are given with biological markers, which are typically used to assess, e.g., cell types and functions. In this experiment, we first triangulate the point cloud by placing a $100 \times 100$ grid on top of it. Then, we filter this grid using the sublevel set filtrations of kernel density estimators (with Gaussian kernel and bandwidth $h = 1$ ) associated to the CD8 and CD68 biological markers for immune cells. Finally, we compute the associated candidate decompositions of the multiparameter modules in homology dimensions 0 and 1, and we compute and concatenate their corresponding S-CDRs. Similar to the previous experiment, the theoretical convergence rate of our representations is upper bounded by the one for kernel density estimators with the $\infty$ -norm. The convergence rates are displayed in Figure 4b. Again, one can see that the observed and theoretical convergence rates are consistent. + +Figure 4: Convergence rate of immunohistochemistry data set. +![](images/7bc3cc9f28da64b00a492cb2c81ab7fe9f4c15b535ac20ab5fc3704fd53fe2a8.jpg) +(a) Point cloud of cells colored by CD8 (red) and CD68 (black). + +![](images/17c8763a0081dde7c4110ef6f9939fc89e06e12345712b97411c2b6d39e4cb6a.jpg) +(b) (left) $\| \cdot \| _2^2$ and (right) $\| \cdot \|_{\infty}$ distances between the target representation and the empirical one w.r.t. $n$ . + +![](images/aa9010436f999e15385e81756cf7bf0ae228cc6e66f21a1314d6ec79049a2fd5.jpg) + +# 4.2 Classification + +In this section, we illustrate the efficiency of S-CDRs by using them for classification purposes. 
We show that they perform comparably to or better than existing topological representations as well as standard baselines on several UCR benchmark data sets, graph data sets, and on the immunohistochemistry data set. Concerning UCR, we work with point clouds obtained from time delay embedding applied on the UCR time series, following the procedure of [10], and we produce S-CDRs with bifiltrations coming from combining either the Rips filtration with sublevel sets of a kernel density estimator (as in Section 4.1), or the Alpha filtration with the sublevel sets of the distance-to-measure with parameter $m = 0.1$ (as in [10] and the baselines therein). Concerning graph data sets, we produce S-CDRs by filtering the graphs themselves directly using the Heat Kernel Signature with parameter $t = 10$, Ricci curvature, and node degree (similarly to what is used in the literature [11, 22, 45]). In all tasks, every point cloud or graph has a label (corresponding to the type of its cells in the immunohistochemistry data set, and to pre-defined labels in the UCR and graph data sets), and our goal is to check whether we can predict these labels by training classifiers on the corresponding S-CDRs.

For point clouds (immunohistochemistry and UCR), we compare the performances of our S-CDRs (evaluated on a $50 \times 50$ grid) to those of the multiparameter persistence landscape (MPL) [39], kernel (MPK) [17] and images (MPI) [10], as well as their single-parameter counterparts (P-L, P-I and PSS-K). We also compare to some non-topological baselines: we used the standard Ripley function evaluated on 100 evenly spaced samples in [0, 1] for the immunohistochemistry data set, and k-NN classifiers with three different distances for the UCR time series (denoted by B1, B2, B3), as suggested in [19]. For graphs, we compare S-CDRs to the Euler characteristic based multiparameter persistence methods ECP, RT, and HTn, introduced in [22].
In order to also include non-topological baselines, we also compare against the state-of-the-art graph classification methods RetGK [44], FGSD [37], and GIN [43].

All scores on the immunohistochemistry data set were computed after cross-validating a few classifiers (random forests, support vector machines and xgboost, with their default Scikit-Learn parameters) with 5 folds. For the time series data, our accuracy scores were obtained after also cross-validating, with 5 folds, the following S-CDR parameters: $p \in \{0,1\}$, op ∈ {sum, mean}, $\delta \in \{0.01,0.1,0.5,1\}$, $h \in \{0.1,0.5,1,1.5\}$ with homology dimensions 0 and 1, and the following bandwidth parameters for kernel density estimation: $b \in \{0.1\%, 1\%, 10\%, 20\%, 30\%\}$, which are percentages of the diameters of the point clouds. Parameters and results on graph data sets were cross-validated and averaged over 10 folds, following the pipelines of [22]. All results can be found in Table 1 (immunohistochemistry and UCR—UCR acronyms are provided in Appendix I) and Table 2. Bold indicates best accuracy and underline indicates best accuracy among topological methods. Note that there are no variances for UCR data sets since pre-defined train/test splits were provided. One can see that S-CDRs almost always outperform the topological baselines and are comparable to the standard baselines on the UCR benchmarks. Most notably, S-CDRs radically outperform the standard baseline and competing topological measures on the immunohistochemistry data set. For graph data sets, results are competitive with both topological and non-topological baselines; S-CDRs even perform slightly better on COX2.
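The 5-fold protocol described above can be sketched as follows; the nearest-centroid classifier is a dependency-free stand-in for the random forests, SVMs and xgboost actually used for the reported scores:

```python
import numpy as np

def five_fold_scores(X, y, fit, predict, seed=0):
    """Sketch of a 5-fold evaluation: shuffle, split into 5 folds,
    train on 4 folds, test on the held-out one, report mean/std accuracy."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), 5)
    scores = []
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        model = fit(X[train], y[train])
        scores.append(np.mean(predict(model, X[test]) == y[test]))
    return np.mean(scores), np.std(scores)

# toy stand-in classifier: nearest class centroid
def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# two well-separated Gaussian blobs, so accuracy should be near 1
X = np.concatenate([np.random.default_rng(1).normal(0, 0.3, (50, 2)),
                    np.random.default_rng(2).normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
mean_acc, std_acc = five_fold_scores(X, y, fit, predict)
```

Swapping in a Scikit-Learn classifier only changes the `fit`/`predict` pair; the fold logic stays the same.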
| Dataset | B1 | B2 | B3 | PSS-K | P-I | P-L | MPK | MPL | MPI | S-CDR (Rips + KDE) | S-CDR (Alpha + DTM) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DPOAG | 62.6 | 62.6 | 77.0 | 76.9 | 69.8 | 70.5 | 67.6 | 70.5 | 71.9 | 71.9 | 71.9 |
| DPOC | 71.7 | 72.5 | 71.7 | 47.5 | 67.4 | 66.3 | 74.6 | 69.6 | 71.7 | 73.8 | 74.6 |
| PPOAG | 78.5 | 78.5 | 80.5 | 75.9 | 82.0 | 78.0 | 78.0 | 78.5 | 81.0 | 81.9 | 84.9 |
| PPOC | 80.8 | 79.0 | 78.4 | 78.4 | 72.2 | 72.5 | 78.7 | 78.7 | 81.8 | 79.4 | 83.2 |
| PPTW | 70.7 | 75.6 | 75.6 | 61.4 | 72.2 | 73.7 | 79.5 | 73.2 | 76.1 | 75.6 | 75.1 |
| IPD | 95.5 | 95.5 | 95.0 | - | 64.7 | 61.1 | 80.7 | 78.6 | 71.9 | 81.2 | 77.2 |
| GP | 91.3 | 91.3 | 90.7 | 90.6 | 84.7 | 80.0 | 88.7 | 94.0 | 90.7 | 96.3 | 92.7 |
| GPAS | 89.9 | 96.5 | 91.8 | - | 84.5 | 87.0 | 93.0 | 85.1 | 90.5 | 88.0 | 93.7 |
| GPMVF | 97.5 | 97.5 | 99.7 | - | 88.3 | 87.3 | 96.8 | 88.3 | 95.9 | 95.3 | 95.9 |
| PC | 93.3 | 92.2 | 87.8 | - | 83.4 | 76.7 | 85.6 | 84.4 | 86.7 | 93.1 | 90.0 |

| Dataset | Ripley | P | MPL | S-CDR |
| --- | --- | --- | --- | --- |
| Immuno | 67.2 (2.3) | 60.7 (4.2) | 65.3 (3.0) | 91.4 (1.6) |
+ +Table 1: Accuracy scores for UCR and immunohistochemistry data sets. + +
| Dataset | RetGK | FGSD | GIN | ECP | RT | HTn | S-CDR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| COX2 | 81.4 (0.6) | - | - | 80.3 (0.4) | 79.7 (0.4) | 80.6 (0.4) | 82.0 (0.2) |
| DHFR | 81.5 (0.9) | - | - | 82.0 (0.4) | 81.3 (0.4) | 83.1 (0.5) | 81.6 (0.2) |
| IMDB-B | 71.9 (1.0) | 73.6 | 75.1 (5.1) | 73.3 (0.4) | 74.0 (0.5) | 74.7 (0.5) | 73.5 (0.2) |
| IMDB-M | 47.7 (0.3) | 52.4 | 52.3 (2.8) | 48.7 (0.4) | 50.2 (0.4) | 49.9 (0.4) | 49.5 (0.2) |
| MUTAG | 90.3 (1.1) | 92.1 | 90 (8.8) | 90.0 (0.8) | 87.3 (0.6) | 89.4 (0.7) | 88.4 (0.3) |
| PROTEINS | 78.0 (0.3) | 73.4 | 76.2 (2.6) | 75.0 (0.3) | 75.4 (0.4) | 75.4 (0.4) | 73.9 (0.2) |

Table 2: Accuracy scores on graph data sets.

# 4.3 Running time comparisons

In this section, we provide running time comparisons between S-CDRs and the MPI and MPL representations: we measured the time needed to compute all the train and test S-CDRs and baselines of the previous data sets, averaged over the folds (again, note that since UCR data sets already provide the train/test splits, there is no variance in the corresponding results). All representations are evaluated on grids of sizes $50 \times 50$ and $100 \times 100$, and we report the maximum running time over $p \in \{0,1,\infty\}$. All computations were done using a Ryzen 4800 laptop CPU, with 16GB of RAM. We provide results in Table 3, where it can be seen that S-CDRs (computed on the pinched annulus and immunohistochemistry data sets) can be computed much faster than the other representations, by a factor of at least 25. As for UCR data sets, which contain only small time series and corresponding point clouds, it can still be observed that S-CDRs can be computed faster than the baselines. Interestingly, this sparse and fast implementation based on corners can also be used to improve on the running time of the multiparameter persistence landscapes (MPL), as one can see from Algorithm 4 in Appendix H (which retrieves the persistence barcode of a multiparameter persistence module along a given line; this is enough to compute the MPL) and from Table 3.
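Wall-clock figures like those in Table 3 follow a standard measure-and-average pattern; a minimal sketch (the timed workload is a placeholder for any representation pipeline):

```python
import time
import statistics

def timed_runs(fn, n_runs=5):
    """Time `fn` over several runs and report mean/stdev of the wall-clock time,
    in the spirit of the per-fold averages reported in Table 3."""
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()  # the workload being measured
        times.append(time.perf_counter() - start)
    return statistics.mean(times), statistics.stdev(times)

# placeholder workload standing in for an S-CDR/MPI/MPL computation
mean_s, std_s = timed_runs(lambda: sum(i * i for i in range(10000)))
print(f"{1e3 * mean_s:.2f}ms ({1e3 * std_s:.2f}ms)")
```

`time.perf_counter` is preferred over `time.time` here because it is monotonic and has the highest available resolution for interval measurements.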
| Method | Annulus | Immuno | PPTW | GP |
| --- | --- | --- | --- | --- |
| Ours (S-CDR) | 250ms (2ms) | 275ms (9.8ms) | 33.0ms (3.99ms) | 45.6ms (5.74ms) |
| Ours (MPL) | 36.9ms (0.8ms) | 65.9ms (0.9ms) | 22.4ms (2.15ms) | 31.8ms (2.95ms) |
| MPI (50) | 6.43s (25ms) | 5.67s (23.3ms) | 65.2ms (12.9ms) | 208ms (16.3ms) |
| MPL (50) | 17s (39ms) | 15.6s (14ms) | 154ms (27.9ms) | 630ms (30.0ms) |
| MPI (100) | 13.1s (125ms) | 11.65s (7.9ms) | 289ms (75.0ms) | 1.69s (77.7ms) |
| MPL (100) | 35s (193ms) | 31.3s (23.3ms) | 843ms (200ms) | 4.43s (186ms) |
Table 3: Running times for S-CDRs and competitors.

# 5 Conclusion

In this article, we study the general question of representing decompositions of multiparameter persistence modules in Topological Data Analysis. We first introduce T-CDR, a general template framework that includes specific representations (called S-CDRs) that are provably stable. Our experiments show that S-CDRs are superior to the state of the art.

Limitations. (1) Our T-CDR parameter selection is currently done through cross-validation, which can be very time consuming and limits the number of parameters to choose from. (2) Our classification experiments were mostly illustrative. In particular, it would be useful to investigate more thoroughly the influence of the T-CDR and S-CDR parameters, as well as the number of filtrations, on the classification scores. (3) In order to generate finite-dimensional vectors, we evaluated T-CDRs and S-CDRs on finite grids, which limited their discriminative power when fine grids were too costly to compute.

Future work. (1) Since T-CDR is similar to the PersLay framework of single parameter persistence [11], in which each of the framework parameters was optimized by a neural network, it is natural to investigate whether one can optimize T-CDR parameters in a data-driven way as well, so as to avoid cross-validation. (2) In our numerical applications, we focused on representations computed off of MMA decompositions [29]. In the future, we plan to investigate whether working with other decomposition methods [1, 7] leads to better numerical performance when combined with our representation framework.

Acknowledgments. The authors would like to thank the area chair and anonymous reviewers for their insightful comments and constructive suggestions. The authors would also like to thank Hannah Schreiber for her great help with the implementation of our method.
The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing resources and support. DL was supported by ANR grant 3IA Côte d'Azur (ANR-19-P3IA-0002). MC was supported by ANR grant TopModel (ANR-23-CE23-0014). AJB was partially supported by ONR grant N00014-22-1-2679 and NSF grant DMS-2311338.

# References

[1] H. Asashiba, E. Escolar, K. Nakashima, and M. Yoshiwaki. On approximation of 2d persistence modules by interval-decomposables. Journal of Computational Algebra, 6-7:100007, 2023.
[2] A. Blumberg and M. Lesnick. Stability of 2-parameter persistent homology. Foundations of Computational Mathematics, 2022.
[3] M. Botnan and M. Lesnick. An introduction to multiparameter persistence. In CoRR. arXiv:2203.14289, 2022.
[4] M. Botnan, S. Oppermann, and S. Oudot. Signed Barcodes for Multi-Parameter Persistence via Rank Decompositions. In 38th International Symposium on Computational Geometry (SoCG 2022), volume 224, pages 19:1–19:18. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2022.
[5] M. B. Botnan and M. Lesnick. Algebraic stability of zigzag persistence modules. Algebraic & Geometric Topology, 18(6):3133-3204, Oct. 2018.
[6] M. Buchet, Y. Hiraoka, and I. Obayashi. Persistent homology and materials informatics. In Nanoinformatics, pages 75–95. Springer-Verlag, 2018.
[7] C. Cai, W. Kim, F. Mémoli, and Y. Wang. Elder-rule-staircodes for augmented metric spaces. In 36th International Symposium on Computational Geometry (SoCG 2020), pages 26:1-26:17. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2020.
[8] G. Carlsson. Topology and data. Bulletin of the American Mathematical Society, 46(2):255-308, 2009.
[9] G. Carlsson and A. Zomorodian. The theory of multidimensional persistence. Discrete & Computational Geometry, 42(1):71-93, 2009.
[10] M. Carrière and A. Blumberg. Multiparameter persistence image for topological machine learning.
In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), pages 22432-22444. Curran Associates, Inc., 2020. +[11] M. Carrière, F. Chazal, Y. Ike, T. Lacombe, M. Royer, and Y. Umeda. PersLay: a neural network layer for persistence diagrams and new graph topological signatures. In 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), pages 2786-2796. PMLR, 2020. +[12] M. Carrière, S. Oudot, and M. Ovsjanikov. Local signatures using persistence diagrams. In HAL. HAL:01159297, 2015. +[13] M. Carrière, S. Oudot, and M. Ovsjanikov. Stable topological signatures for points on 3D shapes. Computer Graphics Forum, 34(5):1-12, 2015. +[14] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing Multiple Parameters for Support Vector Machines. Machine Learning, 46(1):131-159, Jan. 2002. +[15] F. Chazal, V. de Silva, M. Glisse, and S. Oudot. The structure and stability of persistence modules. SpringerBriefs in Mathematics. Springer-Verlag, 2016. +[16] F. Chazal, M. Glisse, C. Labruère, and B. Michel. Convergence rates for persistence diagram estimation in topological data analysis. Journal of Machine Learning Research, 16(110):3603-3635, 2015. + +[17] R. Corbet, U. Fugacci, M. Kerber, C. Landi, and B. Wang. A kernel for multi-parameter persistent homology. Computers & Graphics: X, 2:100005, 2019. +[18] B. Coskunuzer, I. Akcora, Cuneyt Dominguez, Y. Chen, Z. Zhen, M. Kantarcioglu, and Y. Gel. Smart Vectorizations for Single and Multiparameter Persistence. In CoRR. arXiv:2104.04787, 2021. +[19] H.-A. Dau, A. Bagnall, K. Kamgar, C.-C. Yeh, Y. Zhu, S. Gharghabi, C. Ratanamahatana, and E. Keogh. The UCR time series archive. In CoRR. arXiv:1810.07758, 2018. +[20] T. Dey and C. Xin. Generalized persistence algorithm for decomposing multiparameter persistence modules. Journal of Applied and Computational Topology, 2022. +[21] N. Fournier and A. Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. 
Probability Theory and Related Fields, 162(3):707-738, Aug. 2015. +[22] O. Hacquard and V. Lebovici. Euler Characteristic Tools For Topological Data Analysis. In CoRR. arXiv:2303.14040, 2023. +[23] J. Kim, J. Shin, A. Rinaldo, and L. Wasserman. Uniform convergence rate of the kernel density estimator adaptive to intrinsic volume dimension. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), volume 97, pages 3398-3407. PMLR, 2019. +[24] W. Kim and F. Mémoli. Generalized Persistence Diagrams for Persistence Modules over Posets. Journal of Applied and Computational Topology, 5:533-581, 2021. +[25] C. Landi. The rank invariant stability via interleavings. In Research in Computational Topology, pages 1-10. Springer, 2018. +[26] J. Lei. Convergence and Concentration of Empirical Measures under Wasserstein Distance in Unbounded Functional Spaces. Bernoulli, 26(1), Feb. 2020. +[27] M. Lesnick. The theory of the interleaving distance on multidimensional persistence modules. Foundations of Computational Mathematics, 15(3):613-650, 2015. +[28] M. Lesnick and M. Wright. Computing minimal presentations and bigraded betti numbers of 2-parameter persistent homology. SIAM Journal on Applied Algebra and Geometry, 6(2):267-298, 2022. +[29] D. Loiseaux, M. Carrière, and A. Blumberg. Fast, stable and efficient approximation of multi-parameter persistence modules with MMA. In CoRR. arXiv:2206.02026, 2022. +[30] J. Munkres, A.-W. (1942-1999), and P. Publishing. Elements of Algebraic Topology. Advanced Book Classics. Basic Books, 1984. +[31] S. Oudot. Persistence theory: from quiver representations to data analysis, volume 209 of Mathematical Surveys and Monographs. American Mathematical Society, 2015. +[32] S. Ozer. Similarity domains machine for scale-invariant and sparse shape modeling. IEEE Trans. Image Process., 28(2):534-545, 2019. +[33] A. Poulenard, P. Skraba, and M. Ovsjanikov. Topological function optimization for continuous shape matching. 
Computer Graphics Forum, 37(5):13-25, 2018. +[34] R. Rabadán and A. Blumberg. Topological data analysis for genomics and evolution. Cambridge University Press, 2019. +[35] M. Saadatfar, H. Takeuchi, V. Robins, N. François, and Y. Hiraoka. Pore configuration landscape of granular crystallization. Nature Communications, 8:15082, 2017. +[36] The GUDHI Project. GUDHI User and Reference Manual. GUDHI Editorial Board, 3.6.0 edition, 2022. +[37] S. Verma and Z.-L. Zhang. Hunt for the unique, stable, sparse and fast feature learning on graphs. In Advances in Neural Information Processing Systems 31 (NeurIPS 2017), volume 30, pages 88-98. Curran Associates, Inc., 2017. + +[38] O. Vipond. Local equivalence of metrics for multiparameter persistence modules. In CoRR. arXiv:2004.11926, 2020. +[39] O. Vipond. Multiparameter persistence landscapes. Journal of Machine Learning Research, 21(61):1-38, 2020. +[40] O. Vipond, J. Bull, P. Macklin, U. Tillmann, C. Pugh, H. Byrne, and H. Harrington. Multiparameter persistent homology landscapes identify immune cell spatial patterns in tumors. Proceedings of the National Academy of Sciences of the United States of America, 118(41), 2021. +[41] A. P. Witkin. Scale-space filtering. In Readings in Computer Vision: Issues, Problems, Principles, and Paradigms, pages 329-332. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, Jan. 1987. +[42] C. Xin, S. Mukherjee, S. Samaga, and T. Dey. GRIL: A 2-parameter Persistence Based Vectorization for Machine Learning. In 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning. OpenReviews.net, 2023. +[43] K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? In 7th International Conference on Learning Representations (ICLR 2019). OpenReviews.net, 2019. +[44] Z. Zhang, M. Wang, Y. Xiang, Y. Huang, and A. Nehorai. RetGK: graph kernels based on return probabilities of random walks. 
In Advances in Neural Information Processing Systems 32 (NeurIPS 2018), pages 3964-3974. Curran Associates, Inc., 2018. +[45] Q. Zhao and Y. Wang. Learning metrics for persistence-based summaries and applications for graph classification. In Advances in Neural Information Processing Systems 33 (NeurIPS 2019), pages 9855–9866. Curran Associates, Inc., 2019. + +# A Limitation of single-parameter persistent homology + +A standard example of single-parameter filtration for point clouds, called the Čech filtration, is to consider $f: x \mapsto d_P(x) \coloneqq \min_{p \in P} \|x - p\|$ , where $P$ is a point cloud. The sublevel sets of this function are balls centered on the points in $P$ with growing radii, and the corresponding persistence barcode contains the topological features formed by $P$ . See Figure 5a. When the radius of the balls is small, they form three connected components (upper left), identified as the three long red bars in the barcode (lower). When this radius is moderate, a cycle is formed (upper middle), identified as the long blue bar in the barcode (lower). Finally, when the radius is large, the balls cover the whole Euclidean plane, and no topological features are left, except for one connected component, that never dies (upper right). + +![](images/fb5d332c1a7628f99d0211150cbbd9bff6581ef98a54e70118bd7cdc6b6e7775.jpg) +(a) Persistence barcode obtained from growing balls on a clean point cloud. + +![](images/f80a7b36fcffaabb0970ad3c0397f636a40efa6ab037a5f4c70e8f1c82635258.jpg) +(b) Persistence barcode obtained from growing balls on a noisy point cloud. +Figure 5: Example of persistence barcode construction for point clouds using Čech filtration. In the middle sublevel sets, we highlight the topological cycles in blue. 
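The filtration function $f = d_P$ underlying Figure 5 is simple to evaluate directly; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def d_P(x, P):
    """Distance to a point cloud, d_P(x) = min_{p in P} ||x - p||: the filtration
    function whose sublevel sets {d_P <= r} are the unions of balls of Figure 5."""
    return np.min(np.linalg.norm(np.atleast_2d(P) - x, axis=1))

P = np.array([[0.0, 0.0], [2.0, 0.0]])
print(d_P(np.array([0.5, 0.0]), P))  # 0.5: inside the sublevel set for any r >= 0.5
```

The persistence barcode of this filtration is what libraries such as Gudhi compute from the corresponding filtered simplicial complex.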
The limitation of the Čech filtration alone for noisy point clouds is illustrated in Figure 5b, where only feature scale is used, and the presence of three outlier points corrupts the persistence barcode entirely, since they induce the appearance of two cycles instead of one. In order to remedy this, one could remove points whose density is smaller than a given threshold, and then process the trimmed point cloud as before, but this requires an arbitrary threshold choice to decide which points should be considered outliers.

# B Instability of MPI

In this section, we provide theoretical and experimental evidence that the multiparameter persistence image (MPI) [10], which is another decomposition-based representation from the literature, suffers from a lack of stability, as it does not enjoy guarantees such as those of S-CDRs (see Theorem 1). There are two main sources of instability in the MPI. The first one is due to the discretization induced by the lines: since the MPI is obtained by placing Gaussian functions on top of slices of the intervals in the decompositions, if the Gaussian bandwidth $\sigma$ (see previous work, third item in Section 3.1) is too small w.r.t. the distance between consecutive lines in $L$, discretization effects can arise, as illustrated in Figure 6, in which the intervals do not appear as continuous shapes.

The second problem comes from the weight function: it is easy to build decompositions that are close in the interleaving distance, yet whose interval volumes are very different. See [5, Figure 15], and Figure 7, in which we show a family of modules $\mathbb{M}^{\epsilon}$ (left) made of a single interval built from connecting two squares with a bridge of diameter $\epsilon > 0$. We also show the distances between $\mathbb{M}^{\epsilon}$ and the limit module $\mathbb{M}$ made of two squares with no bridge, through their MPI and S-CDRs (right).
Even though the interleaving distance between $\mathbb{M}^{\epsilon}$ and $\mathbb{M}$ goes to zero as $\epsilon$ goes to zero, the distances between MPI representations converge to a positive constant, whereas the distances between S-CDRs go effectively to zero.

We also show that this lack of stability can even prevent convergence. In Figure 8, we repeated the setup of Section 4.1 on a synthetic data set sampled from a noisy annulus in the Euclidean plane. As one can clearly see, convergence does not happen for MPI: the Euclidean distance to the limit MPI representation (the one associated to the maximal number of subsample points) suddenly drops to zero after an erratic behavior, while S-CDRs exhibit a gradual decrease in agreement with Theorem 1.

![](images/dfe1a125ab9e4affec1a2013affb89b9e1d277a97af6af6805672ad227bdfeb4.jpg)
Figure 6: Examples of numerical artifacts occurring when the number of lines is too small w.r.t. the Gaussian bandwidth, for two different families of lines.

![](images/30169eb0b51f340ec3236ef7778b78435d13fa96d28abd105cbf3e780e868694.jpg)

![](images/02daa12cda7757694b946933b6b119b59cc8adbfe4f0e3620f735bb5d08c92e5.jpg)
Figure 7: Example of instability of MPI.

![](images/49e4660db644a02a0771bdb51fd44a980fb6594e1991987f13f5882dac56b4c5.jpg)

![](images/7343b24def37d054e260a55126e9f49f18a9b4a7e2cc93a14254d43e36ceb13a.jpg)
Figure 8: Example of lack of convergence for MPI.

![](images/7e4877847ab318df807664c9b2c60c34020b9c5f719d9cef4d22a58e66759b4c.jpg)

# C Simplicial homology

In this section, we recall the basics of simplicial homology with coefficients in $\mathbb{Z} / 2\mathbb{Z}$, which we use for practical computations. A more detailed presentation can be found in [30, Chapter 1]. The basic building blocks of simplicial (persistent) homology are simplicial complexes, which are combinatorial models of topological spaces that can be stored and processed numerically.

Definition 3.
Given a set of points $X_{n} \coloneqq \{x_{1},\ldots ,x_{n}\}$ sampled in a topological space $X$, an abstract simplicial complex built from $X_{n}$ is a family $S(X_{n})$ of subsets of $X_{n}$ such that:

- if $\tau \in S(X_n)$ and $\sigma \subseteq \tau$, then $\sigma \in S(X_n)$, and
- if $\sigma, \tau \in S(X_n)$, then either $\sigma \cap \tau \in S(X_n)$ or $\sigma \cap \tau = \emptyset$.

Each element $\sigma \in S(X_n)$ is called a simplex of $S(X_n)$, and the dimension of a simplex is defined as $\dim(\sigma) := \operatorname{card}(\sigma) - 1$. Simplices of dimension 0 are called vertices.

An important linear operator on simplices is the so-called boundary operator. Roughly speaking, it turns a simplex into the chain of its faces, where a chain is a formal sum of simplices. The set of chains has a group structure, and its subgroup of cycles (defined below) is denoted by $Z_{*}(S(X_{n}))$.

Definition 4. Given a simplex $\sigma \coloneqq [x_{i_1},\ldots ,x_{i_p}]$, the boundary operator $\partial$ is defined as

$$
\partial (\sigma) := \sum_{j = 1}^{p} \left[ x_{i_{1}}, \dots, x_{i_{j-1}}, x_{i_{j+1}}, \dots, x_{i_{p}} \right].
$$

In other words, it is the chain constructed from $\sigma$ by removing one vertex at a time. This operator $\partial$ can then be extended straightforwardly to chains by linearity.

Given a simplicial complex, a topological feature is defined as a cycle, i.e., a chain such that each simplex in its boundary appears an even number of times. In order to formalize this property, we remove a simplex in a chain every time it appears twice, and we let a cycle be a chain $c$ s.t. $\partial (c) = 0$.

Now, one can easily check that $\partial \circ \partial (c) = 0$ for any chain $c$, i.e., every boundary is itself a cycle. Hence, one wants to exclude cycles that are boundaries, since they correspond to trivial cycles. Such boundaries again form a group, denoted by $B_{*}(S(X_{n}))$.

Definition 5.
The homology group in dimension $k$ is the quotient group

$$
H_{k}(S(X_{n})) := \frac{Z_{k}(S(X_{n}))}{B_{k}(S(X_{n}))}.
$$

In other words, it is the group (one can actually show it is a vector space) of cycles made of simplices of dimension $k$ that are not boundaries.

See Figure 9 for an illustration of these definitions. Finally, given a filtered simplicial complex (with a filtration defined as in Section 2), computing its associated persistence barcode using the simplicial homology functor can be done with several software packages, e.g., Gudhi [36].

![](images/1a2a40e08efecf13547f8752d17aa2aa38d504ef1cb4b4f6dd0265c279a1d7c6.jpg)
Figure 9: Example of a simplicial complex $S$ built from eight points, and made of eight vertices (simplices of dimension 0), fourteen edges (simplices of dimension 1) and six triangles (simplices of dimension 2). The purple path is a cycle; indeed $\partial ([v_1, v_7] + [v_7, v_4] + [v_4, v_6] + [v_6, v_1]) = [v_1] + [v_7] + [v_7] + [v_4] + [v_4] + [v_6] + [v_6] + [v_1] = 0$ since every vertex appears twice. Similarly, the blue path is a cycle as well. However, both paths represent the same topological feature, in the sense that they belong to the same equivalence class of $H_{1}(S)$, since their sum is exactly the boundary of the 2-chain consisting of the six triangles of the complex, i.e., they differ only by a trivial cycle. Hence, the dimension of $H_{1}(S)$ is 1.

# D Modules, interleaving and bottleneck distances

In this section, we provide a more formalized version of multiparameter persistence modules and their associated distances. Strictly speaking, multiparameter persistence modules are nothing but parametrized families of vector spaces obtained from applying the homology functor to a multifiltration, as explained in Section 2.

Definition 6.
A multiparameter persistence module $\mathbb{M}$ is a family of vector spaces $\{\mathbb{M}(\alpha):\alpha \in \mathbb{R}^n\}$, together with linear transformations, also called transition maps, $\varphi_{\alpha}^{\beta}:\mathbb{M}(\alpha)\to \mathbb{M}(\beta)$ for any $\alpha \leq \beta$ (where $\leq$ denotes the partial order of $\mathbb{R}^n$), that satisfy $\varphi_{\alpha}^{\gamma} = \varphi_{\beta}^{\gamma}\circ \varphi_{\alpha}^{\beta}$ for any $\alpha \leq \beta \leq \gamma$.

Of particular interest are interval modules, since they are easier to work with.

Definition 7. An interval module $\mathbb{M}$ is a multiparameter persistence module such that:

- its dimension is at most 1: $\dim (\mathbb{M}(\alpha))\leq 1$ for any $\alpha \in \mathbb{R}^n$, and
- its support $\operatorname{supp}(\mathbb{M}) \coloneqq \{\alpha \in \mathbb{R}^n : \dim(\mathbb{M}(\alpha)) = 1\}$ is an interval of $\mathbb{R}^n$,

where an interval of $\mathbb{R}^n$ is a subset $I\subseteq \mathbb{R}^n$ that satisfies:

- (convexity) if $p, q \in I$ and $p \leq r \leq q$, then $r \in I$, and
- (connectivity) if $p, q \in I$, then there exists a finite sequence $r_1, r_2, \ldots, r_m \in I$, for some $m \in \mathbb{N}$, such that $p \sim r_1 \sim r_2 \sim \dots \sim r_m \sim q$, where $\sim$ can be either $\leq$ or $\geq$.

In the main body of this article, we study representations for candidate decompositions of modules, i.e., direct sums of interval modules that approximate the original modules.

Multiparameter persistence modules can be compared with the interleaving distance [27].

Definition 8 (Interleaving distance).
Given $\varepsilon >0$, two multiparameter persistence modules $\mathbb{M}$ and $\mathbb{M}'$ are $\varepsilon$-interleaved if there exist two morphisms $f\colon \mathbb{M}\to \mathbb{M}_{\varepsilon}'$ and $g\colon \mathbb{M}'\to \mathbb{M}_{\varepsilon}$ such that $g_{\cdot + \varepsilon} \circ f_{\cdot} = \varphi_{\cdot}^{\cdot +2\varepsilon}$ and $f_{\cdot + \varepsilon} \circ g_{\cdot} = \psi_{\cdot}^{\cdot +2\varepsilon}$, where $\mathbb{M}_{\varepsilon}$ is the shifted module $\{\mathbb{M}(x + \varepsilon)\}_{x\in \mathbb{R}^n}$ with $\varepsilon = (\varepsilon, \ldots, \varepsilon) \in \mathbb{R}^n$, and $\varphi$ and $\psi$ are the transition maps of $\mathbb{M}$ and $\mathbb{M}'$ respectively. The interleaving distance between two multiparameter persistence modules $\mathbb{M}$ and $\mathbb{M}'$ is then defined as $d_{\mathrm{I}}(\mathbb{M},\mathbb{M}') := \inf \left\{\varepsilon \geq 0:\mathbb{M}\text{ and }\mathbb{M}'\text{ are }\varepsilon\text{-interleaved}\right\}$.

The main property of this distance is that it is stable for multi-filtrations that are obtained from the sublevel sets of functions. More precisely, given two continuous functions $f, g: S \to \mathbb{R}^n$ defined on a simplicial complex $S$, let $\mathbb{M}(f), \mathbb{M}(g)$ denote the multiparameter persistence modules obtained from the corresponding multifiltrations $\{S_x^f := \{\sigma \in S: f(\sigma) \leq x\}\}_{x \in \mathbb{R}^n}$ and $\{S_x^g := \{\sigma \in S: g(\sigma) \leq x\}\}_{x \in \mathbb{R}^n}$. Then, one has [27, Theorem 5.3]:

$$
d_{\mathrm{I}}(\mathbb{M}(f), \mathbb{M}(g)) \leq \| f - g \|_{\infty}. \tag{9}
$$

Another standard distance is the bottleneck distance [5, Section 2.3]. Intuitively, it relies on decompositions of the modules into direct sums of indecomposable summands (which are not necessarily intervals), and is defined as the largest interleaving distance between summands that are matched under some matching.

Definition 9 (Bottleneck distance).
Given two multisets $A$ and $B$, $\mu \colon A \nrightarrow B$ is called a matching if there exist $A' \subseteq A$ and $B' \subseteq B$ such that $\mu \colon A' \to B'$ is a bijection. The subset $A' := \operatorname{coim}(\mu)$ (resp. $B' := \operatorname{im}(\mu)$) is called the coimage (resp. image) of $\mu$.

Let $\mathbb{M} \cong \bigoplus_{i \in \mathcal{I}} M_i$ and $\mathbb{M}' \cong \bigoplus_{j \in \mathcal{J}} M_j'$ be two multiparameter persistence modules. Given $\varepsilon \geq 0$, the modules $\mathbb{M}$ and $\mathbb{M}'$ are $\varepsilon$-matched if there exists a matching $\mu: \mathcal{I} \nrightarrow \mathcal{J}$ such that $M_i$ and $M_{\mu(i)}'$ are $\varepsilon$-interleaved for all $i \in \operatorname{coim}(\mu)$, and $M_i$ (resp. $M_j'$) is $\varepsilon$-interleaved with the null module $\mathbf{0}$ for all $i \in \mathcal{I} \setminus \operatorname{coim}(\mu)$ (resp. $j \in \mathcal{J} \setminus \operatorname{im}(\mu)$).

The bottleneck distance between two multiparameter persistence modules $\mathbb{M}$ and $\mathbb{M}'$ is then defined as $d_{\mathrm{B}}(\mathbb{M},\mathbb{M}')\coloneqq \inf \left\{\varepsilon \geq 0:\mathbb{M}\text{ and }\mathbb{M}'\text{ are }\varepsilon\text{-matched}\right\}$.

Since a matching between the decompositions of two multiparameter persistence modules induces an interleaving between the modules themselves, it follows that $d_{\mathrm{I}} \leq d_{\mathrm{B}}$. Note also that $d_{\mathrm{B}}$ can actually be arbitrarily larger than $d_{\mathrm{I}}$, as showcased in [5, Section 9].

# E The fibered barcode and its properties

Definition 10. Let $n \in \mathbb{N}^*$ and $F = \{F^{(1)}, \ldots, F^{(n)}\}$ be a multifiltration on a topological space $X$.
Let $\mathbf{e}, b \in \mathbb{R}^n$, and let $\ell_{\mathbf{e},b} : \mathbb{R} \to \mathbb{R}^n$ be the line in $\mathbb{R}^n$ defined by $\ell_{\mathbf{e},b}(t) = t \cdot \mathbf{e} + b$, that is, $\ell_{\mathbf{e},b}$ is the line of direction $\mathbf{e}$ passing through $b$. Let $F_{\mathbf{e},b} : \mathbb{R} \to \mathcal{P}(X)$ be defined by $F_{\mathbf{e},b}(t) = \bigcap_{i=1}^{n} F^{(i)}([\ell_{\mathbf{e},b}(t)]_i)$, where $[\cdot]_i$ denotes the $i$-th coordinate. Then, each $F_{\mathbf{e},b}$ is a single-parameter filtration and has a corresponding persistence barcode $B_{\mathbf{e},b}$. The set $\mathcal{B}(F) = \{B_{\mathbf{e},b} : \mathbf{e}, b \in \mathbb{R}^n\}$ is called the fibered barcode of $F$.

The following two lemmas from [25] describe two useful properties of the fibered barcode.

Lemma 1 (Lemma 1 in [25]). Let $\mathbf{e}, b \in \mathbb{R}^n$ and $\ell_{\mathbf{e},b}$ be the corresponding line. Let $\hat{e} = \min_i[\mathbf{e}]_i$. Let $F, F'$ be two multi-filtrations, $\mathbb{M}, \mathbb{M}'$ be the corresponding persistence modules and $B_{\mathbf{e},b} \in \mathcal{B}(F)$ and $B_{\mathbf{e},b}' \in \mathcal{B}(F')$ be the corresponding barcodes in the fibered barcodes of $F$ and $F'$. Then, the following stability property holds:

$$
d_{\mathrm{B}}\left(B_{\mathbf{e}, b}, B_{\mathbf{e}, b}^{\prime}\right) \leq \frac{d_{\mathrm{I}}\left(\mathbb{M}, \mathbb{M}^{\prime}\right)}{\hat{e}}. \tag{10}
$$

Lemma 2 (Lemma 2 in [25]). Let $\mathbf{e},\mathbf{e}^{\prime},b,b^{\prime}\in \mathbb{R}^{n}$ and $\ell_{\mathbf{e},b},\ell_{\mathbf{e}^{\prime},b^{\prime}}$ be the corresponding lines. Let $\hat{e} = \min_{i}[\mathbf{e}]_{i}$ and $\hat{e}^{\prime} = \min_{i}[\mathbf{e}^{\prime}]_{i}$. Let $F$ be a multi-filtration, $\mathbb{M}$ be the corresponding persistence module and $B_{\mathbf{e},b},B_{\mathbf{e}^{\prime},b^{\prime}}\in \mathcal{B}(F)$ be the corresponding barcodes in the fibered barcode of $F$.
Assume $\mathbb{M}$ is decomposable as $\mathbb{M} = \oplus_{i = 1}^{m}M_{i}$, and let $K > 0$ be such that $M_{i}\subseteq B_{\infty}(0,K):= \{x\in \mathbb{R}^{n}:\|x\|_{\infty}\leq K\}$ for all $i\in [[1,m]]$. Then, the following stability property holds:

$$
d_{\mathrm{B}}\left(B_{\mathbf{e}, b}, B_{\mathbf{e}^{\prime}, b^{\prime}}\right) \leq \frac{\left(K + \max \left\{\| b \|_{\infty}, \| b^{\prime} \|_{\infty}\right\}\right) \cdot \|\mathbf{e} - \mathbf{e}^{\prime}\|_{\infty} + \| b - b^{\prime} \|_{\infty}}{\hat{e} \cdot \hat{e}^{\prime}}. \tag{11}
$$

# F Proof of Theorem 1

Our proof is based on several lemmas. In the first one, we focus on the S-CDR weight function $w$ as defined in Definition 2.

Lemma 3. Let $M$ and $M'$ be two interval modules with compact support. Then, one has

$$
d_{\mathrm{I}}(M, 0) = \frac{1}{2} \sup_{b, d \in \operatorname{supp}(M)} \min_{j} \left(d_{j} - b_{j}\right)_{+} = w(M). \tag{12}
$$

Furthermore, one has the inequality

$$
\left| w(M) - w\left(M^{\prime}\right) \right| \leq d_{\mathrm{I}}\left(M, M^{\prime}\right). \tag{13}
$$

Proof. We first show Equation (12) with two inequalities.

First inequality: $\leq$ Let $M$ be an interval module. If $d_{\mathrm{I}}(M,0) = 0$, then the inequality is trivial, so we now assume that $d_{\mathrm{I}}(M,0) > 0$. Let $\delta > 0$ be such that $\delta < d_{\mathrm{I}}(M,0)$. By definition of $d_{\mathrm{I}}$, the identity morphism $M \to M_{2\delta}$ cannot be factorized through 0. This implies the existence of some $b \in \mathbb{R}^n$ such that $\operatorname{rank}(M(b) \to M(b + 2\delta)) > 0$; in particular, $b, b + 2\delta \in \operatorname{supp}(M)$. Making $\delta$ converge to $d_{\mathrm{I}}(M,0)$ yields the desired inequality.
+ +Second inequality: $\geq$ Let $(K_{n})_{n\in \mathbb{N}}$ be a compact interval exhaustion of $\operatorname {supp}(M)$ , and $b_{n},d_{n}\in K_{n}$ be two points that achieve the maximum in + +$$ +\frac{1}{2}\sup_{b,d\in K_{n}}\min_{j}(d_{j} - b_{j})_{+}. +$$ + +Now, by functoriality of persistence modules, we can assume without loss of generality that $b_{n}$ and $d_{n}$ are on the same diagonal line (indeed, if they are not, it is possible to transform $d_{n}$ into + +$\tilde{d}_n$ such that $b_n$ and $\tilde{d}_n$ are on the same diagonal line and also achieve the supremum). Thus, $\mathrm{rank}(M(b_n) \to M(d_n)) > 0$ , and $d_{\mathrm{I}}(M,0) \geq \frac{1}{2} \| d_n - b_n \|_{\infty}$ . Taking the limit over $n \in \mathbb{N}$ leads to the desired inequality. + +Inequality (13) follows directly from the triangle inequality applied on $d_{\mathrm{I}}$ . + +![](images/d36184ffaaff97826b0145eabe176f1bd1b638efd2a302252fd1b4c2b793330d.jpg) + +In the following lemma, we rewrite volumes of interval module supports using interleaving distances. + +Lemma 4. Let $M$ be an interval module, and $R \subseteq \mathbb{R}^n$ be a compact rectangle, with $n \geq 2$ . Then, one has: + +$$ +\operatorname {v o l} \left(\operatorname {s u p p} (M) \cap R\right) = 2 \int_ {\{y \in \mathbb {R} ^ {n}: y _ {n} = 0 \}} d _ {\mathrm {I}} \left(M \big | _ {l _ {y} \cap R}, 0\right) \mathrm {d} \lambda^ {n - 1} (y) +$$ + +where $l_y$ is the diagonal line crossing $y$ , and $\lambda^{n-1}$ denotes the Lebesgue measure in $\mathbb{R}^{n-1}$ . + +Proof. 
Using the change of variables $y_{i} = x_{i} - x_{n}$ and $t = x_{n}$ (whose Jacobian determinant is 1) yields the following equalities:

$$
\begin{array}{l} \operatorname{vol}(\operatorname{supp}(M) \cap R) = \int_{\operatorname{supp}(M) \cap R} \mathrm{d}\lambda^{n}(x) \\ = \int_{\{y \in \mathbb{R}^{n}: y_{n} = 0\}} \int_{t \in \mathbb{R}} \mathbb{1}_{\operatorname{supp}(M) \cap R}(y + t \cdot (1, \dots, 1)) \, \mathrm{d}t \, \mathrm{d}\lambda^{n-1}(y) \\ = \int_{\{y \in \mathbb{R}^{n}: y_{n} = 0\}} \operatorname{diam}_{\| \cdot \|_{\infty}}\left(\operatorname{supp}(M) \cap l_{y} \cap R\right) \mathrm{d}\lambda^{n-1}(y) \\ \end{array}
$$

where $l_{y}$ is the diagonal line passing through $y$. Now, since $M$ is an interval module, one has $\operatorname{diam}_{\| \cdot \|_{\infty}}(\operatorname{supp}(M) \cap l_{y} \cap R) = 2d_{\mathrm{I}}(M|_{l_{y} \cap R}, 0)$, which concludes the proof.

In the following proposition, we provide stability bounds for single interval modules.

Proposition 1. If $M$ and $M'$ are two interval modules, then for any $\delta > 0$ and S-CDR parameter $\phi_{\delta}$ in Definition 2, one has:

1. $0 \leq \phi_{\delta}(M)(x) \leq \frac{w(M)}{\delta} \land 1$, for any $x \in \mathbb{R}^n$,
2. $\| \phi_{\delta}(M) - \phi_{\delta}(M^{\prime})\|_{\infty}\leq 2(d_{\mathrm{I}}(M,M^{\prime})\wedge \delta) / \delta$.

Proof. Claim 1. is a simple consequence of Equation (12).

Claim 2. for S-CDR parameter (a) is a simple consequence of the triangle inequality.

Let us prove Claim 2. for (b).
Let $x\in \mathbb{R}^n$ and $\delta >0$ .One has: + +$$ +\begin{array}{l} \left| \phi_ {\delta} (M) (x) - \phi_ {\delta} \left(M ^ {\prime}\right) (x) \right| \leq \frac {2}{(2 \delta) ^ {n}} \int_ {\{y: y _ {n} = 0 \}} \left| d _ {\mathrm {I}} \left(M \big | _ {l _ {y} \cap R _ {x, \delta}}, 0\right) - d _ {\mathrm {I}} \left(M ^ {\prime} \big | _ {l _ {y} \cap R _ {x, \delta}}, 0\right) \right| \mathrm {d} \lambda^ {n - 1} (y) \\ \leq \frac {2}{(2 \delta) ^ {n}} \int_ {\{y: y _ {n} = 0 \}} d _ {\mathrm {I}} \left(M \big | _ {l _ {y} \cap R _ {x, \delta}}, M ^ {\prime} \big | _ {l _ {y} \cap R _ {x, \delta}}\right) \mathrm {d} \lambda^ {n - 1} (y) \\ \leq 2 \left(d _ {\mathrm {I}} \left(M, M ^ {\prime}\right) \wedge \delta\right) / \delta , \\ \end{array} +$$ + +where the first inequality comes from Lemma 4, the second inequality is an application of the triangle inequality, and the third inequality comes from Lemma 1. + +Finally, let us prove Claim 2. for (c). Let $x \in \mathbb{R}^n$ and $\delta > 0$ . Let $b \leq d \in \operatorname{supp}(M) \cap R_{x,\delta}$ . Let also $\gamma > 0$ . Then, using Lemma 4, one has: + +$$ +\begin{array}{l} \frac {1}{(2 \delta) ^ {n}} \operatorname {v o l} (\operatorname {s u p p} (M) \cap R _ {b, d}) = \frac {2}{(2 \delta) ^ {n}} \int_ {\{y \in \mathbb {R} ^ {n}: y _ {n} = 0 \}} d _ {\mathrm {I}} \left(M \mid_ {R _ {b, d} \cap l _ {y}}, 0\right) \mathrm {d} \lambda^ {n - 1} (y) \\ \leq \frac {2}{\delta} \gamma + \frac {2}{(2 \delta) ^ {n}} \int_ {\{y \in \mathbb {R} ^ {n}: y _ {n} = 0 \}} d _ {\mathrm {I}} (M \big | _ {R _ {b + \gamma , d - \gamma} \cap l _ {y}}, 0) \mathrm {d} \lambda^ {n - 1} (y), \\ \end{array} +$$ + +using the convention $R_{a,b} = \varnothing$ if $a \not\leq b$ . Now, set $\gamma := d_{\mathrm{I}}(M|_{R_{x,\delta}}, M'|_{R_{x,\delta}})$ . 
If $b + \gamma \notin \operatorname{supp}(M')$ or $d - \gamma \notin \operatorname{supp}(M')$, then $d_{\mathrm{I}}(M|_{R_{x,\delta}}, M'|_{R_{x,\delta}}) = \gamma > d_{\mathrm{I}}(M, M')$, which is impossible. Thus,

$$
\begin{array}{l} \frac{1}{(2\delta)^{n}} \operatorname{vol}\left(R_{b, d}\right) \leq 2 d_{\mathrm{I}}\left(M \mid_{R_{x, \delta}}, M^{\prime} \mid_{R_{x, \delta}}\right) / \delta \\ + \sup_{a, c \in R_{x, \delta} \cap \operatorname{supp}(M^{\prime})} \frac{2}{(2\delta)^{n}} \int_{\{y \in \mathbb{R}^{n}: y_{n} = 0\}} d_{\mathrm{I}}\left(M \mid_{R_{a, c} \cap l_{y}}, 0\right) \mathrm{d}\lambda^{n-1}(y) \\ = 2 d_{\mathrm{I}}\left(M \mid_{R_{x, \delta}}, M^{\prime} \mid_{R_{x, \delta}}\right) / \delta + \phi_{\delta}\left(M^{\prime}\right)(x) \\ \end{array}
$$

Finally, taking the supremum on $b \leq d \in \operatorname{supp}(M) \cap R_{x,\delta}$ yields

$$
\phi_{\delta}(M)(x) - \phi_{\delta}(M^{\prime})(x) \leq 2 d_{\mathrm{I}}\left(M \big|_{R_{x, \delta}}, M^{\prime} \big|_{R_{x, \delta}}\right) / \delta \leq 2\left(d_{\mathrm{I}}(M, M^{\prime}) \wedge \delta\right) / \delta .
$$

The desired inequality follows by symmetry in $M$ and $M^{\prime}$.

Equipped with these results, we can finally prove Theorem 1.

Proof of Theorem 1. Let $\mathbb{M} = \oplus_{i=1}^{m} M_i$ and $\mathbb{M}' = \oplus_{j=1}^{m'} M_j'$ be two modules that are decomposable into interval modules and $x \in \mathbb{R}^n$.

Inequality 5. To simplify notation, we define the following: $w_{i} \coloneqq w(M_{i})$, $\phi_{i,x} \coloneqq \phi_{\delta}(M_i)(x)$ and $w_{j}^{\prime} \coloneqq w(M_{j}^{\prime})$, $\phi_{j,x}^{\prime} \coloneqq \phi_{\delta}(M_{j}^{\prime})(x)$. Let us also assume without loss of generality that the indices are consistent with a matching achieving the bottleneck distance.
In other words, the bottleneck distance is achieved for a matching that matches $M_{i}$ with $M_{i}^{\prime}$ for every $i$ (up to adding 0 modules in the decompositions of $\mathbb{M}$ and $\mathbb{M}^{\prime}$ so that $m = m^{\prime}$ ). Finally, assume without loss of generality that $\sum_{i}w_{i}^{\prime}\geq \sum_{i}w_{i}$ . Then, one has: + +$$ +\begin{array}{l} | V _ {1, \delta} (\mathbb {M}) (x) - V _ {1, \delta} (\mathbb {M} ^ {\prime}) (x) | = \left| \frac {1}{\sum_ {i} w _ {i}} \sum_ {i} w _ {i} \phi_ {i, x} - \frac {1}{\sum_ {i} w _ {i} ^ {\prime}} \sum_ {i} w _ {i} ^ {\prime} \phi_ {i, x} ^ {\prime} \right| \\ \leq \frac {1}{\sum_ {i} w _ {i} ^ {\prime}} \left| \sum_ {i} w _ {i} \phi_ {i, x} - \sum_ {i} w _ {i} ^ {\prime} \phi_ {i, x} ^ {\prime} \right| + \left| \frac {1}{\sum_ {i} w _ {i}} - \frac {1}{\sum_ {i} w _ {i} ^ {\prime}} \right| \left| \sum_ {i} w _ {i} \phi_ {i, x} \right|. \\ \end{array} +$$ + +Now, for any index $i$ , since $d_{\mathrm{I}}(M_i,M_i') \leq d_{\mathrm{B}}(\mathbb{M},\mathbb{M}')$ and $|w_{i} - w_{i}^{\prime}| \leq d_{\mathrm{I}}(M_i,M_i^{\prime}) \leq d_{\mathrm{B}}(\mathbb{M},\mathbb{M}^{\prime})$ by Lemma 3, Proposition 1 ensures that: + +$$ +| w _ {i} \phi_ {i, x} - w _ {i} ^ {\prime} \phi_ {i, x} ^ {\prime} | \leq | w _ {i} - w _ {i} ^ {\prime} | \phi_ {i, x} + w _ {i} ^ {\prime} | \phi_ {i, x} - \phi_ {i, x} ^ {\prime} | \leq 2 (w _ {i} + w _ {i} ^ {\prime}) (d _ {\mathrm {B}} (\mathbb {M}, \mathbb {M} ^ {\prime}) \wedge \delta) / \delta +$$ + +and + +$$ +\left| \frac {1}{\sum_ {i} w _ {i}} - \frac {1}{\sum_ {i} w _ {i} ^ {\prime}} \right| \leq \frac {1}{\sum_ {i} w _ {i} ^ {\prime}} \left| \frac {\sum_ {i} w _ {i} ^ {\prime} - w _ {i}}{\sum_ {i} w _ {i}} \right| \leq \frac {m d _ {\mathrm {B}} (\mathbb {M} , \mathbb {M} ^ {\prime})}{(\sum_ {i} w _ {i} ^ {\prime}) (\sum_ {i} w _ {i})}. 
$$

Finally,

$$
\begin{array}{l} | V_{1, \delta}(\mathbb{M})(x) - V_{1, \delta}(\mathbb{M}^{\prime})(x) | \leq \left[ \frac{\sum_{i} w_{i} + w_{i}^{\prime}}{\sum_{i} w_{i}^{\prime}} + \frac{\sum_{i} w_{i}}{\frac{1}{m}(\sum_{i} w_{i})(\sum_{i} w_{i}^{\prime})} \right] 2(d_{\mathrm{B}}(\mathbb{M}, \mathbb{M}^{\prime}) \wedge \delta) / \delta \\ \leq \left[ 4 + \frac{2}{C} \right] \left(d_{\mathrm{B}}(\mathbb{M}, \mathbb{M}^{\prime}) \wedge \delta\right) / \delta . \\ \end{array}
$$

Inequality 4 can be proved using the proof of Inequality 5 by replacing every $w_{i}$ by 1.

Inequality 6. Let us prove the inequality for (b). Let $R \coloneqq R_{x - \delta, x + \delta}$. One has:

$$
\begin{array}{l} V_{\infty, \delta}(\mathbb{M})(x) - V_{\infty, \delta}(\mathbb{M}^{\prime})(x) = \\ \sup_{i} \frac{2}{(2\delta)^{n}} \int_{\{y \in \mathbb{R}^{n}: y_{n} = 0\}} d_{\mathrm{I}}\left(M_{i} \mid_{l_{y} \cap R}, 0\right) \mathrm{d}\lambda^{n-1}(y) \\ - \sup_{j} \frac{2}{(2\delta)^{n}} \int_{\{y \in \mathbb{R}^{n}: y_{n} = 0\}} d_{\mathrm{I}}\left(M_{j}^{\prime} \mid_{l_{y} \cap R}, 0\right) \mathrm{d}\lambda^{n-1}(y) \\ \end{array}
$$

$$
\begin{array}{l} (\text{for any index } j) \leq \sup_{i} \frac{2}{(2\delta)^{n}} \int_{\{y \in \mathbb{R}^{n}: y_{n} = 0\}} d_{\mathrm{I}}(M_{i} |_{l_{y} \cap R}, 0) - d_{\mathrm{I}}(M_{j}^{\prime} |_{l_{y} \cap R}, 0) \, \mathrm{d}\lambda^{n-1}(y) \\ \leq \frac{2}{(2\delta)^{n}}\int_{\{y\in \mathbb{R}^{n}:y_{n} = 0\}}\sup_{i}\inf_{j}d_{\mathrm{I}}(M_{i}\big|_{l_{y}\cap R},M_{j}^{\prime}\big|_{l_{y}\cap R}) \, \mathrm{d}\lambda^{n-1}(y) \\ \end{array}
$$

Now, as the interleaving distance is equal to the bottleneck distance for single-parameter persistence [15, Theorem 5.14], one has:

$$
\sup_{i} \inf_{j} d_{\mathrm{I}}(M_{i} |_{l_{y} \cap R}, M_
{j}^{\prime} |_{l_{y} \cap R}) \leq d_{\mathrm{B}}(\mathbb{M} |_{l_{y} \cap R}, \mathbb{M}^{\prime} |_{l_{y} \cap R}) = d_{\mathrm{I}}(\mathbb{M} |_{l_{y} \cap R}, \mathbb{M}^{\prime} |_{l_{y} \cap R}) \leq d_{\mathrm{I}}(\mathbb{M}, \mathbb{M}^{\prime}) \wedge \delta
$$

which leads to the desired inequality. The proofs for (a) and (c) follow along the same lines (upper bound the suprema in the right-hand term with either infima or appropriate choices in order to reduce to the single-parameter case).

# G An additional stability theorem

In this section, we define a new S-CDR with a slightly different type of upper bound. It relies on the fibered barcode introduced in Appendix E. We also slightly abuse notation and use $M_{i}$ to denote both an interval module and its support.

Proposition 2. Let $\mathbb{M} \simeq \oplus_{i=1}^{m} M_i$ be a multiparameter persistence module that can be decomposed into interval modules. Let $\sigma > 0$, and let $0 \leq \delta \leq \delta(\mathbb{M})$, where

$\delta (\mathbb{M})\coloneqq \inf \{\delta \geq 0:\Gamma_{\mathbb{M}}$ achieves $d_{\mathrm{B}}(B_{\mathbf{e}_{\Delta},x},B_{\mathbf{e}_{\Delta},x + \delta \mathbf{u}})$ for all $x,\mathbf{u}$ such that $\| \mathbf{u}\|_{\infty} = 1,\langle \mathbf{e}_{\Delta},\mathbf{u}\rangle = 0\}$, and where $\Gamma_{\mathbb{M}}$ is the partial matching induced by the decomposition of $\mathbb{M}$. Let $\mathcal{N}(x,\sigma)$ denote the function

$$
\begin{array}{l} \mathcal{N}(x, \sigma): \left\{ \begin{array}{ll} \mathbb{R}^{n} & \to \mathbb{R} \\ p & \mapsto \exp\left(- \frac{\| p - x \|^{2}}{2\sigma^{2}}\right) \end{array} \right. \quad \text{and let} \\ V_{\delta, \sigma}(\mathbb{M}): \left\{\begin{array}{ll}\mathbb{R}^{n} & \rightarrow \mathbb{R} \\ x & \mapsto \max_{1 \leq i \leq m} \quad \max_{f \in \mathcal{C}(x, \delta, M_{i})} \quad \| \mathcal{N}(x, \sigma) \cdot f \|_{1}\end{array}\right.
\tag{14} \\ \end{array}
$$

where $\mathcal{C}(x,\delta,M_i)$ stands for the set of interval functions from $\mathbb{R}^n$ to $\{0,1\}$ whose support is $T_{\delta}(\ell)\cap M_i$, where $\ell$ is a connected component of $\mathrm{im}(\ell_{\mathbf{e}_{\Delta},x})\cap M_i$ and $\mathbf{e}_{\Delta} = [1,\dots ,1]\in \mathbb{R}^{n}$, and where $T_{\delta}(\ell)$ is the $\delta$-thickening of the line $L(\ell)$ induced by $\ell$: $T_{\delta}(\ell) = \{z\in \mathbb{R}^{n}:\operatorname{dist}_{\infty}(z,L(\ell))\leq \delta \}$.

Then, $V_{\delta ,\sigma}$ satisfies the following stability property:

$$
\left\| V_{\delta, \sigma}(\mathbb{M}) - V_{\delta, \sigma}\left(\mathbb{M}^{\prime}\right) \right\|_{\infty} \leq (\sqrt{\pi}\sigma)^{n} \cdot \sqrt{2^{n + 1}\delta^{n - 1} d_{\mathrm{I}}\left(\mathbb{M}, \mathbb{M}^{\prime}\right) + C_{n}(\delta)}, \tag{15}
$$

where $C_n(\cdot)$ is a continuous function such that $C_n(\delta) \to 0$ when $\delta \to 0$.

Proof. Let $\mathbb{M} = \oplus_{i=1}^{m} M_i$ and $\mathbb{M}' = \oplus_{j=1}^{m'} M_j'$ be two persistence modules that are decomposable into intervals, let $x \in \mathbb{R}^n$ and let $0 \leq \delta \leq \min\{\delta(\mathbb{M}), \delta(\mathbb{M}')\}$.

Notation. We first introduce some notation. Let $N$ (resp. $N^{\prime}$) be the number of bars in $B_{\mathbf{e}_{\Delta},x}$ (resp. $B_{\mathbf{e}_{\Delta},x}^{\prime}$), and assume without loss of generality that $N \leq N^{\prime}$. Let $\Gamma$ be the partial matching achieving $d_{\mathrm{B}}(B_{\mathbf{e}_{\Delta},x}, B_{\mathbf{e}_{\Delta},x}^{\prime})$. Let $N_{1}$ (resp. $N_{2}$) be the number of bars in $B_{\mathbf{e}_{\Delta},x}$ that are matched (resp. not matched) to a bar in $B_{\mathbf{e}_{\Delta},x}^{\prime}$ under $\Gamma$, so that $N = N_{1} + N_{2}$.
Finally, note that $B_{\mathbf{e}_{\Delta},x} = \{\ell : \exists i$ such that $\ell \in \mathcal{C}(\mathrm{im}(\ell_{\mathbf{e}_{\Delta},x}) \cap M_i)$ and $\mathrm{im}(\ell_{\mathbf{e}_{\Delta},x}) \cap M_i \neq \emptyset\}$ , where $\mathcal{C}$ stands for the set of connected components (and similarly for $B_{\mathbf{e}_{\Delta},x}^{\prime}$ ), and let $F_{\Gamma}: B_{\mathbf{e}_{\Delta},x} \to B_{\mathbf{e}_{\Delta},x}^{\prime}$ be a function defined on all bars of $B_{\mathbf{e}_{\Delta},x}$ that coincides with $\Gamma$ on the $N_{1}$ bars of $B_{\mathbf{e}_{\Delta},x}$ that have an associated bar in $B_{\mathbf{e}_{\Delta},x}^{\prime}$ , and that maps the $N_{2}$ remaining bars of $B_{\mathbf{e}_{\Delta},x}$ to some arbitrary bars in the $(N^{\prime} - N_{1})$ remaining bars of $B_{\mathbf{e}_{\Delta},x}^{\prime}$ . + +A reformulation of the problem with vectors. We now derive vectors that allow to reformulate the problem in a useful way. Let $\hat{V}$ be the sorted vector of dimension $N$ containing all weights $\| \mathcal{N}(x,\sigma)\cdot f\| _1$ , where $f$ is the interval function whose support is $T_{\delta}(\ell)\cap M_i$ for some $M_{i}$ , where $\ell \in B_{\mathbf{e}_{\Delta},x}$ is a connected component of $\mathrm{im}(\ell_{\mathbf{e}_{\Delta},x})\cap M_i$ . 
Now, let $\hat{V}^{\prime}$ be the vector of dimension $N^{\prime}$ obtained by concatenating the two following vectors: + +- the vector $\hat{V}_1'$ of dimension $N$ whose $i$ th coordinate is $\|\mathcal{N}(x, \sigma) \cdot f'\|_1$ , where $f'$ is the interval function whose support is $T_{\delta}(\ell') \cap M_j'$ for some $M_j'$ , and $\ell' \in B_{\mathbf{e}_{\Delta}, x}'$ is the image under $\Gamma$ of the bar $\ell \in B_{\mathbf{e}_{\Delta}, x}$ corresponding to the $i$ th coordinate of $\hat{V}$ , i.e., $\ell' = F_{\Gamma}(\ell)$ where $[\hat{V}]_i = \|\mathcal{N}(x, \sigma) \cdot f\|_1$ and $f$ is the interval function whose support is $T_{\delta}(\ell) \cap M_{i_0}$ for some $M_{i_0}$ . In other words, $\hat{V}_1'$ is the (not necessarily sorted) vector of weights computed on the bars of $B_{\mathbf{e}_{\Delta}, x}'$ that are images (under the partial matching $\Gamma$ achieving the bottleneck distance) of the bars of $B_{\mathbf{e}_{\Delta}, x}$ that were used to generate the (sorted) vector $\hat{V}$ . +- the vector $\hat{V}_2'$ of dimension $(N' - N)$ whose $j$ th coordinate is $\|\mathcal{N}(x, \sigma) \cdot f'\|_1$ , where $f'$ is an interval function whose support is $T_\delta(\ell') \cap M_j'$ for some $M_j'$ , and $\ell' \in B_{\mathbf{e}_{\Delta}, x}'$ satisfies $\ell' \notin \mathrm{im}(F_\Gamma)$ . In other words, $\hat{V}_2'$ is the vector of weights computed on the bars of $B_{\mathbf{e}_{\Delta}, x}'$ (in an arbitrary order) that are not images of bars of $B_{\mathbf{e}_{\Delta}, x}$ under $\Gamma$ . + +Finally, we let $V$ be the vector of dimension $N'$ obtained by filling $\hat{V}$ (whose dimension is $N \leq N'$ ) with null values until its dimension becomes $N'$ , and we let $V' = \operatorname{sort}(\hat{V}')$ be the vector obtained after sorting the coordinates of $\hat{V}'$ . 
Observe that:

$$
\left| V_{\delta, \sigma}(\mathbb{M})(x) - V_{\delta, \sigma}\left(\mathbb{M}^{\prime}\right)(x) \right| = \left[ V - V^{\prime} \right]_{1} = \left[ V - \operatorname{sort}\left(\hat{V}^{\prime}\right) \right]_{1} \tag{16}
$$

An upper bound. We now upper bound $\left\| V - \hat{V}^{\prime}\right\|_{\infty}$. Let $q\in [[1,N^{\prime}]]$. Then, one has $[V]_q = \|\mathcal{N}(x,\sigma)\cdot f\|_1$, where $f$ is an interval function whose support is $T_{\delta}(\ell)\cap M_i$ for some $M_{i}$ with $\ell \in B_{\mathbf{e}_{\Delta},x}$ if $q\leq N$ and $\ell = \emptyset$ otherwise; and similarly $[\hat{V}^{\prime}]_{q} = \| \mathcal{N}(x,\sigma)\cdot f^{\prime}\|_{1}$, where $f^{\prime}$ is an interval function whose support is $T_{\delta}(\ell^{\prime})\cap M_j^{\prime}$ for some $M_j^\prime$ with $\ell^{\prime}\in B_{\mathbf{e}_{\Delta},x}^{\prime}$. Thus, one has:

$$
\begin{array}{l} [V - \hat{V}^{\prime}]_{q} = | \|\mathcal{N}(x, \sigma) \cdot f\|_{1} - \|\mathcal{N}(x, \sigma) \cdot f^{\prime}\|_{1} | \\ \leq \|\mathcal{N}(x, \sigma) \cdot f - \mathcal{N}(x, \sigma) \cdot f^{\prime}\|_{1} \quad \text{by the reverse triangle inequality} \\ = \|\mathcal{N}(x, \sigma) \cdot (f - f^{\prime})\|_{1} \quad \text{by linearity} \\ \leq \|\mathcal{N}(x, \sigma)\|_{2} \cdot \|f - f^{\prime}\|_{2} \quad \text{by the Cauchy-Schwarz inequality} \\ = (\sqrt{\pi}\sigma)^{n} \cdot \|f - f^{\prime}\|_{2} \\ \end{array}
$$

Since $(f - f^{\prime})$ is an interval function whose support is $(T_{\delta}(\ell)\cap M_i)\triangle (T_{\delta}(\ell^{\prime})\cap M_j^{\prime})$, one has $\| f - f^{\prime}\|_{2} = \sqrt{|(T_{\delta}(\ell)\cap M_{i})\triangle(T_{\delta}(\ell^{\prime})\cap M_{j}^{\prime})|}$. Given a segment $\ell$ and a vector $\mathbf{u}$, we let $\ell_{\mathbf{u}}$ denote the segment $\mathbf{u} + \ell$, and we let $\ell^{\mathbf{u}}$ denote the (infinite) line induced by $\ell_{\mathbf{u}}$.
More precisely:

$$
\begin{array}{l} \left\| f - f^{\prime} \right\|_{2}^{2} = \left| \left(T_{\delta}(\ell) \cap M_{i}\right) \triangle \left(T_{\delta}\left(\ell^{\prime}\right) \cap M_{j}^{\prime}\right) \right| \\ = \left| \bigcup_{\mathbf{u}} \left(\ell^{\mathbf{u}} \cap M_{i}\right) \triangle \bigcup_{\mathbf{u}} \left(\left(\ell^{\prime}\right)^{\mathbf{u}} \cap M_{j}^{\prime}\right) \right| \\ \end{array}
$$

where $\mathbf{u}$ ranges over the vectors such that $\| \mathbf{u}\|_{\infty}\leq \delta ,\langle \mathbf{u},\mathbf{e}_{\Delta}\rangle = 0$

$$
\begin{array}{l} \leq \int_{\mathbf{u}} | (\ell^{\mathbf{u}} \cap M_{i}) \triangle ((\ell^{\prime})^{\mathbf{u}} \cap M_{j}^{\prime}) | \, \mathrm{d}\mathbf{u} \\ \leq \int_{\mathbf{u}} | (\ell^{\mathbf{u}} \cap M_{i}) \triangle (\ell \cap M_{i})_{\mathbf{u}} | + | (\ell \cap M_{i})_{\mathbf{u}} \triangle (\ell^{\prime} \cap M_{j}^{\prime})_{\mathbf{u}} | + | (\ell^{\prime} \cap M_{j}^{\prime})_{\mathbf{u}} \triangle ((\ell^{\prime})^{\mathbf{u}} \cap M_{j}^{\prime}) | \, \mathrm{d}\mathbf{u} \\ \leq \int_{\mathbf{u}} 4 d_{\mathrm{B}}\left(B_{\mathbf{e}_{\Delta}, x}, B_{\mathbf{e}_{\Delta}, x + \mathbf{u}}\right) + 4 d_{\mathrm{B}}\left(B_{\mathbf{e}_{\Delta}, x}, B_{\mathbf{e}_{\Delta}, x}^{\prime}\right) + 4 d_{\mathrm{B}}\left(B_{\mathbf{e}_{\Delta}, x}^{\prime}, B_{\mathbf{e}_{\Delta}, x + \mathbf{u}}^{\prime}\right) \mathrm{d}\mathbf{u} \tag{17} \\ \leq 4 \int_{\mathbf{u}} \|\mathbf{u}\|_{\infty} + d_{\mathrm{I}}(\mathbb{M}, \mathbb{M}^{\prime}) + \|\mathbf{u}\|_{\infty} \, \mathrm{d}\mathbf{u} \quad \text{by Lemmas 1 and 2} \\ \end{array}
$$

Inequality (17) comes from the fact that the symmetric difference between two bars (in two different barcodes) that are both matched (or unmatched) by the optimal partial matching is upper bounded by
four times the bottleneck distance between the barcodes, and that (by assumption) the partial matchings achieving $d_{\mathrm{B}}(B_{\mathbf{e}_{\Delta},x},B_{\mathbf{e}_{\Delta},x + \mathbf{u}})$ and $d_{\mathrm{B}}(B_{\mathbf{e}_{\Delta},x}^{\prime},B_{\mathbf{e}_{\Delta},x + \mathbf{u}}^{\prime})$ are induced by $\mathbb{M}$ and $\mathbb{M}'$.

Conclusion. Finally, one has

$$
\begin{array}{l} \left| V _ {\delta , \sigma} (\mathbb {M}) (x) - V _ {\delta , \sigma} \left(\mathbb {M} ^ {\prime}\right) (x) \right| = \left[ V - V ^ {\prime} \right] _ {1} = \left[ V - \operatorname {sort} \left(\hat {V} ^ {\prime}\right) \right] _ {1} \quad \text {from Equation (16)} \\ \leq \left\| V - V ^ {\prime} \right\| _ {\infty} \\ \leq (\sqrt {\pi} \sigma) ^ {k} \cdot \sqrt {2 ^ {n + 1} \delta^ {n - 1} d _ {\mathrm {I}} (\mathbb {M} , \mathbb {M} ^ {\prime}) + C _ {n} (\delta)}, \tag {18} \\ \end{array}
$$

with $C_n(\delta) = 8\int_{\mathbf{u}}\|\mathbf{u}\|_{\infty} \, \mathrm{d}\mathbf{u} \to 0$ when $\delta \to 0$. Inequality (18) comes from the fact that any upper bound on the norm of the difference between a given vector $\hat{V}'$ and a sorted vector $V$ is also an upper bound on the norm of the difference between the sorted version $V'$ of $\hat{V}'$ and the same vector $V$ (see Lemma 3.9 in [12]).

While the stability constant is not upper bounded by $\delta$, $V_{\delta, \sigma}$ is more difficult to compute than the S-CDRs presented in Definition 2.

# H Pseudo-code for S-CDRs

In this section, we briefly present the pseudo-code that we use to compute S-CDRs. Our code is based on implicit descriptions of the candidate decompositions of multiparameter persistence modules (which are the inputs of S-CDRs) through their so-called birth and death corners. These corners can be obtained with, e.g., the public software packages MMA [29] and Rivet [28].

In order to compute our S-CDRs, we implement the following procedures:

1.
Given an interval module $M$ and a rectangle $R \subset \mathbb{R}^n$ , compute the restriction $M|_R$ .
2. Given an interval module $M$ (or $M|_{R}$ ), compute $d_{\mathrm{I}}(M,0)$ . This allows us to compute our weight function and first interval representation in Definition 2.
3. Given an interval module restricted to a rectangle, compute the volume of the biggest rectangle in the support of this module. This allows us to compute the third interval representation in Definition 2.

For the first point, Algorithm 1 works by "pushing" the corners of the interval onto the given rectangle in order to obtain the updated corners.

Algorithm 1: Restriction of an interval module to a rectangle
Data: birth and death corners of an interval module $M$ , rectangle $R = \{z \in \mathbb{R}^n : m \leq z \leq M\}$
Result: new_interval_corners, the birth and death corners of $M|_R$ .
for interval = {interval_birth_corners, interval_death_corners} in $M$ do
    new_birth_list ← [];
    for b in interval_birth_corners do
        if b ≤ M then
            b' = {max(b_i, m_i) for i ∈ [1, n]};
            Append b' to new_birth_list;
        end
    end
    new_death_list ← [];
    for d in interval_death_corners do
        if d ≥ m then
            d' = {min(d_i, M_i) for i ∈ [1, n]};
            Append d' to new_death_list;
        end
    end
    new_interval_corners ← [new_birth_list, new_death_list];
end

For the second point, we proved in Lemma 3 that our S-CDR weight function is equal to $d_{\mathrm{I}}(M,0)$ and admits a closed-form formula in terms of corners, which we implement in Algorithm 2.

Algorithm 2: S-CDR weight function
Data: birth and death corners of an interval module $M$
Result: distance, the interleaving distance $d_{\mathrm{I}}(M,0)$ .
distance ← 0;
for $b$ in $M$ birth corners do
    for $d$ in $M$ death corners do
        distance ← max(distance, $\frac{1}{2} \min_{i} (d_{i} - b_{i})_{+}$ );
    end
end

The third point also has a closed-form formula in terms of corners, leading to Algorithm 3.
Algorithm 3: S-CDR interval representation
Data: birth and death corners of an interval module $M$
Result: volume, the volume of the biggest rectangle fitting in $\mathrm{supp}(M)$
volume ← 0;
for $b$ in $M$ birth corners do
    for $d$ in $M$ death corners do
        volume ← max(volume, $\prod_i (d_i - b_i)_+$ );
    end
end

Finally, we show how to get the persistence barcodes corresponding to slices of an interval module solely from the corners of the interval module.

Algorithm 4: Restriction of an interval module to a line
```txt
Data: birth and death corners of an interval module $M$, a diagonal line $l$
Result: barcode, the persistence barcode associated with $M|_l$
barcode ← [];
y ← an arbitrary point in $l$;
for interval = {interval_birth_corners, interval_death_corners} in M do
    birth ← y + 1 × min_{b ∈ interval_birth_corners} max_i (b_i - y_i);
    death ← y + 1 × max_{d ∈ interval_death_corners} min_i (d_i - y_i);
    bar ← [birth, death];
    Append bar to barcode;
end
```

# I UCR acronyms

| Dataset | Acronym |
| --- | --- |
| DistalPhalanxOutlineAgeGroup | DPOAG |
| DistalPhalanxOutlineCorrect | DPOC |
| ProximalPhalanxOutlineAgeGroup | PPOAG |
| ProximalPhalanxOutlineCorrect | PPOC |
| ProximalPhalanxTW | PPTW |
| ItalyPowerDemand | IPD |
| GunPoint | GP |
| GunPointAgeSpan | GPAS |
| GunPointMaleVersusFemale | GPMVF |
| PowerCons | PC |
\ No newline at end of file diff --git a/aframeworkforfastandstablerepresentationsofmultiparameterpersistenthomologydecompositions/images.zip b/aframeworkforfastandstablerepresentationsofmultiparameterpersistenthomologydecompositions/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..010326fd59af6bb08aff1bae6c3e808d679e5ac5 --- /dev/null +++ b/aframeworkforfastandstablerepresentationsofmultiparameterpersistenthomologydecompositions/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d058795624d423c4374785b19308399ca17e779827b5eef4ae5a1958a830014c +size 908354 diff --git a/aframeworkforfastandstablerepresentationsofmultiparameterpersistenthomologydecompositions/layout.json b/aframeworkforfastandstablerepresentationsofmultiparameterpersistenthomologydecompositions/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6b84886c4818b75d2d07b30517bf38792a994651 --- /dev/null +++ b/aframeworkforfastandstablerepresentationsofmultiparameterpersistenthomologydecompositions/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13c12530c6cdbfa48b1256febda379e6a615739d4001481f552f73b487cc9cfc +size 1177885 diff --git a/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/dc4bd812-fb75-4413-acf9-3cc6843a33ea_content_list.json b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/dc4bd812-fb75-4413-acf9-3cc6843a33ea_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..98a9e6c5e38d6c3f4abece8b21cbfa88fb0048c0 --- /dev/null +++ b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/dc4bd812-fb75-4413-acf9-3cc6843a33ea_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a0e2b8682a816496dbcee1851cddc9487e7a02895d93225505577d2c67b69fa +size 172566 diff --git 
a/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/dc4bd812-fb75-4413-acf9-3cc6843a33ea_model.json b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/dc4bd812-fb75-4413-acf9-3cc6843a33ea_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2daa6268b7c019361bc290d643cc69f408d2c704 --- /dev/null +++ b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/dc4bd812-fb75-4413-acf9-3cc6843a33ea_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:903901d4a89abd3539ab951a881900485da6f83ad095181d163c0e0a1e179e23 +size 204434 diff --git a/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/dc4bd812-fb75-4413-acf9-3cc6843a33ea_origin.pdf b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/dc4bd812-fb75-4413-acf9-3cc6843a33ea_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1bfde9f8415349145b340b141cfeb12d49d8e81f --- /dev/null +++ b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/dc4bd812-fb75-4413-acf9-3cc6843a33ea_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bccd923a35165ee9039ee14c9a2bba084819f68a84845d7017f8600e285f0892 +size 1052305 diff --git a/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/full.md b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/full.md new file mode 100644 index 0000000000000000000000000000000000000000..11820af0ec5aa25da851ef7e3e6e27099d39bea9 --- /dev/null +++ b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/full.md @@ -0,0 +1,893 @@ +# A General Framework for Equivariant Neural Networks on Reductive Lie Groups + +# Ilyes Batatia + +Engineering Laboratory, + +University of Cambridge + +Cambridge, CB2 1PZ UK + +Department of Chemistry, + +ENS Paris-Saclay, Université Paris-Saclay + +91190Gif-sur-Yvette,France + +ilyes.batatia@ens-paris-saclay.fr + +# Mario Geiger + +Department 
of Electrical Engineering + +and Computer Science, + +Massachusetts Institute of Technology + +Cambridge, MA, USA + +# Jose Munoz + +EIA University, FTA Group + +Antioquia, Colombia + +# Tess Smidt + +Department of Electrical Engineering and Computer Science, + +Massachusetts Institute of Technology + +Cambridge, MA, USA + +# Lior Silberman + +Department of Mathematics + +University of British Columbia + +Vancouver, BC, Canada V6T 1Z2 + +# Christoph Ortner + +Department of Mathematics + +University of British Columbia + +Vancouver, BC, Canada V6T 1Z2 + +# Abstract + +Reductive Lie Groups, such as the orthogonal groups, the Lorentz group, or the unitary groups, play essential roles across scientific fields as diverse as high energy physics, quantum mechanics, quantum chromodynamics, molecular dynamics, computer vision, and imaging. In this paper, we present a general Equivariant Neural Network architecture capable of respecting the symmetries of the finite-dimensional representations of any reductive Lie Group $G$ . Our approach generalizes the successful ACE and MACE architectures for atomistic point clouds to any data equivariant to a reductive Lie group action. We also introduce the lie-nn software library, which provides all the necessary tools to develop and implement such general $G$ -equivariant neural networks. It implements routines for the reduction of generic tensor products of representations into irreducible representations, making it easy to apply our architecture to a wide range of problems and groups. The generality and performance of our approach are demonstrated by applying it to the tasks of top quark decay tagging (Lorentz group) and shape recognition (orthogonal group). + +# 1 Introduction + +Convolutional Neural Networks (CNNs) (LeCun et al., 1989) have become a widely used and powerful tool for computer vision tasks, in large part due to their ability to achieve translation equivariance. 
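Translation equivariance of a convolution is easy to verify numerically: shifting the input and then convolving gives the same result as convolving and then shifting. The following self-contained sketch (our own illustration, not code from the paper) checks this for a 1D circular convolution, where cyclic shifts play the role of translations:

```python
import numpy as np

def circular_conv(x, k):
    """Circular 1D convolution of signal x with kernel k (len(k) <= len(x))."""
    n = len(x)
    k_padded = np.zeros(n)
    k_padded[:len(k)] = k
    # Convolution theorem: multiply DFTs, then transform back.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k_padded)))

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
k = rng.standard_normal(5)
shift = 3

out1 = circular_conv(np.roll(x, shift), k)   # shift, then convolve
out2 = np.roll(circular_conv(x, k), shift)   # convolve, then shift
assert np.allclose(out1, out2)               # equivariance holds exactly
```

For cyclic shifts and circular convolution the identity is exact; ordinary zero-padded convolutions only satisfy it away from the boundary.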
This property led to improved generalization and a significant reduction in the number of parameters. Translation equivariance is one of many possible symmetries occurring in machine learning tasks. + +A wide range of symmetries described by reductive Lie Groups is present in physics, such as $O(3)$ in molecular mechanics, $\mathrm{SO}(1,3)$ in High-Energy Physics, $\mathrm{SU}(2^N)$ in quantum mechanics, and $\mathrm{SU}(3)$ in quantum chromodynamics. Machine learning architectures that respect these symmetries often lead to significantly improved predictions while requiring far less training data. This has been demonstrated in many applications including 2D imaging with $\mathrm{O}(2)$ symmetry (Cohen and Welling, 2016a; Esteves et al., 2017), machine learning force fields with $\mathrm{O}(3)$ symmetry (Anderson et al., 2019; Bartók et al., 2013; Batzner et al., 2022; Batatia et al., 2022a) or jet tagging with $\mathrm{SO}^{+}(1,3)$ symmetry (Bogatskiy et al., 2022; Li et al., 2022). + +One way to extend CNNs to other groups (Finzi et al., 2020; Kondor and Trivedi, 2018) is through harmonic analysis on homogeneous spaces, where the convolution becomes an integral over the group. Other architectures work directly with finite-dimensional representations. We follow the demonstration of Bogatskiy et al. (2020a) who constructed a universal approximation of any equivariant map with a feed-forward neural network with vector activations belonging to finite-dimensional representations of a wide class of Lie groups. In this way, one can avoid computational challenges created by infinite-dimensional representations. + +![](images/d0350958a7419a28c8ec31b560e1128addb0f14d5b74dfa4ff54f0f1549b38d8.jpg) +Figure 1: Examples of natural science problems and associated reductive Lie groups. For high energy physics, the Lorentz group $\mathrm{SO}(1,3)$ ; for chemistry, the Euclidean group $\mathrm{E}(3)$ ; for quantum-chromodynamics, the $\mathrm{SU}(3)$ group. 
![](images/0b72ee96162668ead123bde821e34e2c0a778af594b25a1d7f96a96b9f68b8c2.jpg)

![](images/25ebb474c04b226abd04f283ba2f704706a6303441b0a127955e40b0369d0068.jpg)

Alternatively, our current work can be thought of as a generalization of the Atomic Cluster Expansion (ACE) formalism of Drautz (2019) to general Lie groups. The ACE formalism provides a complete body-ordered basis of $\mathrm{O}(3)$ -invariant features. By combining the concepts of ACE and $\mathrm{E}(3)$ -equivariant neural networks, Batatia et al. (2022a) proposed the MACE architecture, which achieves state-of-the-art performance on learning tasks in molecular modelling. The present work generalizes the ACE and MACE architectures to arbitrary Lie groups in order to propose a generic architecture for creating representations of geometric point clouds in interaction.

Concretely, our work makes the following contributions:

- We develop the $G$ -Equivariant Cluster Expansion. This framework generalizes the ACE (Drautz, 2019) and MACE (Batatia et al., 2022b) architectures to parameterize properties of point clouds, equivariant under a general reductive Lie group $G$ .
- We prove that our architecture is universal, even for a single layer.
- We introduce lie-nn, a new library providing all the essential tools to apply our framework to a variety of essential Lie groups in physics and computer vision, including the Lorentz group, $\mathrm{SU}(N)$ , $\mathrm{SL}_2(\mathbb{C})$ , and product groups.
- We illustrate the generality and efficiency of our general-purpose approach by demonstrating excellent accuracy on two prototype applications: jet tagging and 3D point cloud recognition.

# 2 Background

We briefly review a few important group-theoretic concepts. A real (complex) Lie group is a group that is also a finite-dimensional smooth (complex) manifold in which the product and inversion maps of the group are also smooth (holomorphic) maps.
Among the most important Lie groups are matrix Lie groups, which are closed subgroups of $\mathrm{GL}(n,\mathbb{C})$ , the group of invertible $n\times n$ matrices with complex entries. This includes well-known groups such as $\operatorname {Sp}(2n,\mathbb{R})$ , whose elements have determinant one and which is relevant in Hamiltonian dynamics. A finite-dimensional representation of the Lie group $G$ is a finite-dimensional vector space $V$ endowed with a smooth homomorphism $\rho \colon G\to \mathrm{GL}(V)$ . Features in the equivariant neural networks live in these vector spaces. An irreducible representation $V$ is a representation that has no subspaces which are invariant under the action of the group (other than $\{0\}$ and $V$ itself). This means that $V$ cannot be decomposed non-trivially as a direct sum of representations. A reductive group over a field $F$ is a (Zariski-)closed subgroup of the group of matrices $\operatorname {GL}(n,F)$ such that every finite-dimensional representation of $G$ on an $F$ -vector space can be decomposed as a sum of irreducible representations.

# 3 Related Work

Lie group convolutions Convolutional neural networks (CNNs), which are translation equivariant, have also been generalized to other symmetries. For example, G-convolutions (Cohen and Welling, 2016b) generalized CNNs to discrete groups. Steerable CNNs (Cohen and Welling, 2016a) generalized CNNs to $O(2)$ equivariance, and Spherical CNNs (Cohen et al., 2018) to $O(3)$ equivariance. A general theory of convolution on any compact group and symmetric space was given by Kondor and Trivedi (2018). This work was further extended to equivariant convolutions on Riemannian manifolds by Weiler et al. (2021).

ACE The Atomic Cluster Expansion (ACE) (Drautz, 2019) introduced a systematic framework for constructing complete $O(3)$ -invariant high body order basis sets with constant cost per basis function, independent of body order (Dusson et al., 2022).
e3nn + Equivariant MLPs The e3nn library (Geiger and Smidt, 2022) provides a complete solution to build $E(3)$ -equivariant neural networks based on irreducible representations. The Equivariant MLPs (Finzi et al., 2021) include more groups, such as $SO(1,3)$ and $Z_{n}$ , but are restricted to reducible representations, making them much less computationally efficient than architectures based on irreducible representations.

Equivariant MPNNs and MACE Equivariant MPNNs (Kondor et al., 2018; Anderson et al., 2019; Bogatskiy et al., 2020a; Satorras et al., 2021; Brandstetter et al., 2022; Batzner et al., 2022) have emerged as a powerful architecture to learn on geometric point clouds. They construct permutation-invariant and group-equivariant representations of point clouds. Successful applications include simulations in chemistry, particle physics, and 3D vision. MACE (Batatia et al., 2022a) generalized the $O(3)$ -equivariant MPNNs to build messages of arbitrary body order, outperforming other approaches on molecular tasks. Batatia et al. (2022b) showed that the MACE design space is large enough to include most of the previously published equivariant architectures.

# 4 The $G$ -Equivariant Cluster Expansion

We are concerned with the representation of properties of point clouds. Point clouds are described as multi-sets (unordered tuples) $X = [x_i]_i$ where each particle $x_i$ belongs to a configuration domain $\Omega$ . We denote the set of all such multi-sets by $\mathrm{msets}(\Omega)$ . For example, in molecular modeling, $x_i$ might describe the position and species of an atom, and therefore $x_i = (r_i, Z_i) \in \mathbb{R}^3 \times \mathbb{Z}$ , while in high energy physics, one commonly uses the four-momentum $x_i = (E_i, p_i) \in \mathbb{R}^4$ , but one could also include additional features such as charge, spin, and so forth.
A property of the point cloud is a map

$$
\Phi : \operatorname {msets} (\Omega) \rightarrow Z, \tag {1}
$$

i.e., $X \mapsto \Phi(X) \in Z$ , usually a scalar or tensor. The range space $Z$ is application dependent and left abstract throughout this paper. Expressing the input as a multi-set implicitly entails two important facts: (1) it can have varying length; (2) it is invariant under permutations of the particles. The methods developed in this article are also applicable to fixed-length multi-sets, in which case $\Phi$ is simply a permutation-invariant function defined on some $\Omega^{N}$ . Mappings that are not permutation-invariant are special cases with several simplifications.

In many applications, especially in the natural sciences, particle properties satisfy additional symmetries. When a group $G$ acts on $\Omega$ as well as on $Z$ , we say that $\Phi$ is $G$ -equivariant if

$$
\Phi \circ g = \rho_ {Z} (g) \Phi , \quad g \in G, \tag {2}
$$

where $\rho_Z(g)$ is the action of the group element $g$ on the range space $Z$ . In order to effectively incorporate exact group symmetry into properties $\Phi$ , we consider model architectures of the form

$$
\Phi : \operatorname {msets} (\Omega) \underset {\text {embedding}} {\longrightarrow} V \underset {\text {parameterization}} {\longrightarrow} V \underset {\text {readout}} {\longrightarrow} Z, \tag {3}
$$

where the space $V$ into which we "embed" the parameterization is a possibly infinite-dimensional vector space in which a convenient representation of the group is available. For simplicity we will sometimes assume that $Z = V$ .

The Atomic Cluster Expansion (ACE) framework (Drautz, 2019, 2020; Dusson et al., 2022) produces a complete linear basis for the space of all "smooth" $G$ -equivariant properties $\Phi$ for the specific case when $G = \mathrm{O}(3)$ and the $x_{i}$ are vectorial interatomic distances.
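Definition (2) can be made concrete with a toy numerical check (our own sketch, not the paper's code). We assume a hypothetical vector-valued property $\Phi(X) = \sum_i x_i$ on clouds of 3D positions, with $G = \mathrm{SO}(3)$ acting on both $\Omega = \mathbb{R}^3$ and $Z = \mathbb{R}^3$ , so that $\rho_Z(g) = g$:

```python
import numpy as np

def phi(X):
    # A simple equivariant property: the sum of the position vectors.
    # Here Z = R^3 and rho_Z(g) is the rotation g itself.
    return X.sum(axis=0)

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(1)
X = rng.standard_normal((7, 3))      # a point cloud of 7 particles
g = rotation_z(0.7)

lhs = phi(X @ g.T)                   # Phi(g . X): rotate every particle
rhs = g @ phi(X)                     # rho_Z(g) Phi(X)
assert np.allclose(lhs, rhs)         # equivariance (2)

# Multi-set input: the property ignores particle order.
assert np.allclose(phi(X[::-1]), phi(X))
```

An invariant property is the special case $\rho_Z(g) = \mathrm{id}$, e.g. $\Phi(X) = \sum_i \|x_i\|^2$.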
Aspects of the ACE framework were incorporated into $\mathrm{E}(3)$ -equivariant message passing architectures, with significant improvements in accuracy (Batatia et al., 2022a). In the following paragraphs we demonstrate that these ideas readily generalize to arbitrary reductive Lie groups. + +# 4.1 Efficient many-body expansion + +The first step is to expand $\Phi$ in terms of body orders, and truncate the expansion at a finite order $N$ : + +$$ +\Phi^ {(N)} (X) = \varphi_ {0} + \sum_ {i} \varphi_ {1} \left(x _ {i}\right) + \sum_ {i _ {1}, i _ {2}} \varphi_ {2} \left(x _ {i _ {1}}, x _ {i _ {2}}\right) + \dots + \sum_ {i _ {1}, \dots , i _ {N}} \varphi_ {N} \left(x _ {i _ {1}}, \dots , x _ {i _ {N}}\right), \tag {4} +$$ + +where $\varphi_{n}$ defines the $n$ -body interaction. Formally, the expansion becomes systematic in the limit as $N \to \infty$ . The second step is the expansion of the $n$ -particle functions $\varphi_{n}$ in terms of a symmetrized tensor product basis. To define this we first need to specify the embedding of particles $x$ : A countable family $(\phi_k)_k$ is a 1-particle basis if they are linearly independent on $\Omega$ and any smooth 1-particle function $\varphi_1$ (not necessarily equivariant) can be expanded in terms of $(\phi_k)_k$ , i.e, + +$$ +\varphi_ {1} (x) = \sum_ {k} w _ {k} \phi_ {k} (x). \tag {5} +$$ + +For the sake of concreteness, we assume that $\phi_k: \Omega \to \mathbb{C}$ , but the range can in principle be any field. We provide concrete examples of 1-particle bases in Appendix A.2. Let a complex vector space $V$ be given, into which the particle embedding maps, i.e., + +$$ +(\phi_ {k} (x)) _ {k} \in V \quad \forall x \in \Omega . 
$$

As a consequence of (5), any smooth scalar $n$ -particle function $\varphi_{n}$ can be expanded in terms of the corresponding tensor product basis,

$$
\varphi_ {n} \left(x _ {1}, \dots , x _ {n}\right) = \sum_ {k _ {1}, \dots , k _ {n}} w _ {k _ {1} \dots k _ {n}} \prod_ {s = 1} ^ {n} \phi_ {k _ {s}} \left(x _ {s}\right). \tag {6}
$$

Inserting these expansions into (4) and interchanging summation (see appendix for the details), we arrive at a model for scalar permutation-symmetric properties,

$$
A _ {k} = \sum_ {x \in X} \phi_ {k} (x), \quad \boldsymbol {A} _ {\boldsymbol {k}} = \prod_ {s = 1} ^ {n} A _ {k _ {s}}, \quad \Phi^ {(N)} = \sum_ {\boldsymbol {k} \in \mathcal {K}} w _ {\boldsymbol {k}} \boldsymbol {A} _ {\boldsymbol {k}}, \tag {7}
$$

where $\mathcal{K}$ is the set of all $\boldsymbol{k}$ tuples indexing the features $\boldsymbol{A}_{\boldsymbol{k}}$ . Since $\boldsymbol{A}_{\boldsymbol{k}}$ is invariant under permuting $\boldsymbol{k}$ , only ordered $\boldsymbol{k}$ tuples are retained. The features $A_{k}$ are an embedding of $\mathrm{msets}(\Omega)$ into the space $V$ . The tensor product features (basis functions) $\boldsymbol{A}_{\boldsymbol{k}}$ form a complete linear basis of multi-set functions on $\Omega$ , and the weights $w_{\boldsymbol{k}}$ can be understood as a symmetric tensor. We will extend this linear cluster expansion model $\Phi^{(N)}$ to a message-passing type neural network model in § 4.4.

While the standard tensor product embeds $(\otimes_{s=1}^{n}\phi_{k_s})_{\boldsymbol{k}} \colon \Omega^n \to V^n$ , the $n$ -correlations $\boldsymbol{A}_{\boldsymbol{k}}$ are symmetric tensors and embed $(\boldsymbol{A}_{\boldsymbol{k}})_{\boldsymbol{k}} \colon \mathrm{msets}(\Omega) \to \mathrm{Sym}^n V$ .

The evaluation of the symmetric tensor features $\boldsymbol{A}_{\boldsymbol{k}}$ is the computational bottleneck in most scenarios, but efficient recursive evaluation algorithms (Batatia et al., 2022a; Kaliuzhnyi and Ortner, 2022) are available. See Appendix A.13.2 for further discussion of model computational costs.

# 4.2 Symmetrisation

With (7) we obtained a systematic linear model for (smooth) multi-set functions.
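The computational point of (7) is the "density trick": pooling over particles once before taking products reduces the cost of an $n$-correlation from $\mathcal{O}(|X|^n)$ (the naive nested sums in (4)) to $\mathcal{O}(|X|)$. A small sanity check with a hypothetical monomial one-particle basis $\phi_k(x) = x^k$ on $\Omega = \mathbb{R}$ (our illustration, not the paper's code):

```python
import numpy as np
from itertools import product

# Hypothetical 1-particle basis on Omega = R: phi_k(x) = x**k.
def phi(k, x):
    return x ** k

X = np.array([0.3, -1.2, 0.8, 2.0])   # a small point cloud
ks = (1, 2, 2)                         # an ordered tuple k = (k_1, ..., k_n)

# Density trick: pool over particles once, then take products.
A = {k: sum(phi(k, x) for x in X) for k in set(ks)}  # A_k = sum_x phi_k(x)
A_tuple = np.prod([A[k] for k in ks])                # prod_s A_{k_s}

# Naive n-fold sum over all particle-index tuples -- O(|X|^n) terms.
naive = sum(
    np.prod([phi(k, X[i]) for k, i in zip(ks, idx)])
    for idx in product(range(len(X)), repeat=len(ks))
)
assert np.isclose(A_tuple, naive)
```

Both sides agree because the nested sums in (4) factor into a product of pooled sums once the basis expansion (6) is inserted.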
It remains to incorporate $G$ -equivariance. We assume that $G$ is a reductive Lie group with a locally finite representation in $V$ . In other words, we choose a representation $\rho = (\rho_{kk'}) \colon G \to \mathrm{GL}(V)$ such that + +$$ +\phi_ {k} \circ g = \sum_ {k ^ {\prime}} \rho_ {k k ^ {\prime}} (g) \phi_ {k ^ {\prime}}, \tag {8} +$$ + +where for each $k$ the sum over $k'$ is over a finite index-set depending only on $k$ . Most Lie groups one encounters in physical applications belong to this class, the affine groups being notable exceptions. However, those can usually be treated in an ad hoc fashion, which is done in all $E(3)$ -equivariant architectures we are aware of. In practice, these requirements restrict how we can choose the embedding $(\phi_k)_k$ . If the point clouds $X = [x_i]_i$ are already given in terms of a representation of the group, then one may simply construct $V$ to be iterative tensor products of $\Omega$ ; see e.g. the MTP (Shapeev, 2016) and PELICAN (Bogatskiy et al., 2022) models. To construct an equivariant two-particle basis we need to first construct the set of all intertwining operators from $V \otimes V \to V$ . Concretely, we seek all solutions $C_{k_1 k_2}^{\alpha, K}$ to the equation + +$$ +\sum_ {k _ {1} ^ {\prime} k _ {2} ^ {\prime}} C _ {k _ {1} ^ {\prime} k _ {2} ^ {\prime}} ^ {\boldsymbol {\alpha}, K} \rho_ {k _ {1} ^ {\prime} k _ {1}} (g) \rho_ {k _ {2} ^ {\prime} k _ {2}} (g) = \sum_ {K ^ {\prime}} \rho_ {K K ^ {\prime}} (g) C _ {k _ {1} k _ {2}} ^ {\boldsymbol {\alpha}, K ^ {\prime}}; \tag {9} +$$ + +or, written in operator notation, $C^\alpha \rho \otimes \rho = \rho C^\alpha$ . We will call the $C_k^{\alpha ,K}$ generalized Clebsch-Gordan coefficients since in the case $G = \mathrm{SO}(3)$ acting on the spherical harmonics embedding $\phi_{lm} = Y_l^m$ those coefficients are exactly the classical Clebsch-Gordan coefficients. The index $\alpha$ enumerates a basis of the space of all solutions to this equation. 
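For $G = \mathrm{SO}(3)$ these coefficients can be evaluated symbolically, e.g. with sympy's `CG` class; the snippet below (our own reference point, not part of lie-nn, which computes the generalized coefficients for other groups) couples two spin-$\tfrac{1}{2}$ representations:

```python
from sympy import S
from sympy.physics.quantum.cg import CG

# <j1 m1; j2 m2 | J M>: couple two spin-1/2 states into the J = 1, M = 0 triplet.
coeff = CG(S.Half, S.Half, S.Half, -S.Half, 1, 0).doit()
assert abs(float(coeff) - 1 / 2**0.5) < 1e-12   # equals 1/sqrt(2)
```

In the abstract notation of (9), the pair $(j_1, j_2)$ plays the role of $(k_1, k_2)$, $J$ plays the role of $K$, and for $\mathrm{SO}(3)$ each admissible triple $(j_1, j_2, J)$ admits a one-dimensional solution space, so the index $\alpha$ is trivial.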
For the most common groups, one normally identifies a canonical basis $C^\alpha$ and assigns a natural meaning to this index (cf. § A.5). Our abstract notation is chosen because of its generality and convenience for designing computational schemes. The generalization of the Clebsch-Gordan equation (9) to $n$ products of representations acting on the symmetric tensor space $\operatorname {Sym}^n (V)$ becomes (cf. § A.9) + +$$ +\sum_ {\boldsymbol {k} ^ {\prime}} \mathcal {C} _ {\boldsymbol {k} ^ {\prime}} ^ {\boldsymbol {\alpha}, K} \bar {\rho} _ {\boldsymbol {k} ^ {\prime} \boldsymbol {k}} = \sum_ {K ^ {\prime}} \rho_ {K K ^ {\prime}} \mathcal {C} _ {\boldsymbol {k}} ^ {\boldsymbol {\alpha}, K ^ {\prime}} \quad \forall K, \quad \boldsymbol {k} = (k _ {1}, \dots , k _ {N}), \quad g \in G, +$$ + +$$ +\text {w h e r e} \quad \overline {{\rho}} _ {\boldsymbol {k} ^ {\prime} \boldsymbol {k}} = \sum_ {\substack {\boldsymbol {k} ^ {\prime \prime} = \pi \boldsymbol {k} ^ {\prime} \\ \pi \in S _ {n}}} \rho_ {\boldsymbol {k} ^ {\prime \prime} \boldsymbol {k}} \quad \text {a n d} \quad \rho_ {\boldsymbol {k} ^ {\prime} \boldsymbol {k}} = \prod_ {t = 1} ^ {n} \rho_ {\boldsymbol {k} _ {t} ^ {\prime} k _ {t}}. \tag{10} +$$ + +Due to the symmetry of the $(A_{k})_{k}$ tensors $\mathcal{C}_k^{\alpha ,K}$ need only be computed for ordered $k$ tuples and the sum $\sum_{k'}$ also runs only over ordered $k$ tuples. Again, the index $\alpha$ enumerates a basis of the space of solutions. Equivalently, (10) can be written in compact notation as $\mathcal{C}^{\alpha}\overline{\rho} = \rho \mathcal{C}^{\alpha}$ . These coupling operators for $N$ products can often (but not always) be constructed recursively from couplings of pairs (9). We can now define the symmetrized basis + +$$ +\boldsymbol {B} _ {\alpha} ^ {K} = \sum_ {\boldsymbol {k} ^ {\prime}} \mathcal {C} _ {\boldsymbol {k} ^ {\prime}} ^ {\alpha , K} \boldsymbol {A} _ {\boldsymbol {k} ^ {\prime}}. 
\tag {11} +$$ + +The equivariance of (11) is easily verified by applying a transformation $g \in G$ to the input (cf § A.6). + +Universality: In the limit as the correlation order $N\to \infty$ , the features $(B_{\alpha}^{K})_{K,\alpha}$ form a complete basis of smooth equivariant multi-set functions, in a sense that we make precise in Appendix A.7. Any equivariant property $\Phi_V:\Omega \rightarrow V$ can be approximated by a linear model + +$$ +\Phi_ {V} ^ {K} = \sum_ {\alpha} c _ {\alpha} ^ {K} B _ {\alpha} ^ {K}, \tag {12} +$$ + +to within arbitrary accuracy by taking the number of terms in the linear combination to infinity. + +# 4.3 Dimension Reduction + +The tensor product of the cluster expansion in (7) is taken on all the indices of the one-particle basis. Unless the embedding $(\phi_k)_k$ is very low-dimensional it is often preferable to "sketch" this tensor product. For example, consider the canonical embedding of an atom $x_i = (r_i,Z_i)$ , + +$$ +\phi_ {k} (x _ {i}) = \phi_ {z n l m} (x _ {i}) = \delta_ {z Z _ {i}} R _ {n l} (r _ {i}) Y _ {l} ^ {m} (\hat {\boldsymbol {r}} _ {i}). +$$ + +Only the $(lm)$ channels are involved in the representation of $\mathrm{O}(3)$ hence there is considerable freedom in "compressing" the $(z,n)$ channels. + +Following Darby et al. (2022) we construct a sketched $G$ -equivariant cluster expansion: We endow the one-particle basis with an additional index $c$ , referred to as the sketched channel, replacing the index $k$ with the index pair $(c,k)$ , and renaming the embedding $(\phi_{ck})_{c,k}$ . In the case of three-dimensional particles one may, for example, choose $c = (z,n)$ . In general, it is crucial that the representation + +remains in terms of $\rho_{k,k'}$ , that is, (8) becomes $\phi_{ck} \circ g = \sum_{k'} \rho_{kk'}(g) \phi_{ck'}$ . Therefore, manipulating only the $c$ channel does not change any symmetry properties of the architecture. Generalizing Darby et al. 
(2022), the $G$ -TRACE (tensor-reduced ACE) basis then becomes + +$$ +B _ {c \alpha} ^ {K} = \sum_ {\mathbf {k} ^ {\prime}} C _ {\mathbf {k} ^ {\prime}} ^ {\alpha , K} \tilde {A} _ {c \mathbf {k} ^ {\prime}}, \quad \text {w h e r e} \tag {13} +$$ + +$$ +\tilde {A} _ {c \boldsymbol {k}} = \prod_ {t = 1} ^ {n} \left(\sum_ {c ^ {\prime}} w _ {c c ^ {\prime}} \sum_ {x \in X} \phi_ {c ^ {\prime} k _ {t}} (x)\right). \tag {14} +$$ + +This construction is best understood as an equivariance-preserving canonical tensor decomposition (Darby et al., 2022). There are numerous variations, but for the sake of simplicity, we restrict our presentation to this one case. + +Universality: Following the proof of Darby et al. (2022) one can readily see that the $G$ -TRACE architecture inherits the universality of the cluster expansion, in the limit of decoupled channels $\# c \to \infty$ . A smooth equivariant property $\Phi$ may be approximated to within arbitrary accuracy by an expansion $\Phi^K(X) \approx \sum_{c,\alpha} c_\alpha^K B_{c,\alpha}^K(X)$ . Since the embedding $\tilde{A}_{ck}$ is learnable, this is a nonlinear model. We refer to § A.7 for the details. + +# 4.4 G-MACE, Multi-layer cluster expansion + +The $G$ -equivariant cluster expansion is readily generalized to a multi-layer architecture by re-expanding previous features in a new cluster expansion (Batatia et al., 2022b). The multi-set $X$ is endowed with extra features, $\pmb{h}_i^t = (h_{i,cK}^t)_{c,K}$ , that are updated for $t \in \{1,\dots,T\}$ iterations. These features themselves are chosen to be a field of representations such that they have a well-defined transformation under the action of the group. 
This results in

$$
x_{i}^{t} = \left(x_{i}, \boldsymbol{h}_{i}^{t}\right) \tag{15}
$$

$$
\phi_{ck}^{t}\left(x_{i}, \boldsymbol{h}_{i}^{t}\right) = \sum_{\boldsymbol{\alpha}} w_{\boldsymbol{\alpha}}^{t,ck} \sum_{k',k''} C_{k'k''}^{\boldsymbol{\alpha},k} h_{i,ck'}^{t} \phi_{ck''}\left(x_{i}\right) \tag{16}
$$

The recursive update of the features proceeds as in a standard message-passing framework, but with the unique aspect that messages are formed via the $G$-TRACE and in particular can contain arbitrarily high correlation order:

$$
m_{i,cK}^{t} = \sum_{\boldsymbol{\alpha}} W_{\boldsymbol{\alpha}}^{t,cK} \boldsymbol{B}_{c\boldsymbol{\alpha}}^{t,K}. \tag{17}
$$

The gathered message $\pmb{m}_i^t = (m_{i,cK}^t)_{c,K}$ is then used to update the particle states,

$$
x_{i}^{t+1} = \left(x_{i}, \boldsymbol{h}_{i}^{t+1}\right), \quad \boldsymbol{h}_{i}^{t+1} = U_{t}\left(\boldsymbol{m}_{i}^{t}\right), \tag{18}
$$

where $U_{t}$ can be an arbitrary fixed or learnable transformation (even the identity). Lastly, a readout function maps the state of a particle to a target quantity of interest, which could be local to each particle or global to the multiset $X$,

$$
y_{i} = \sum_{t=1}^{T} \mathcal{R}_{t}^{\text{loc}}\left(x_{i}^{t}\right), \quad \text{respectively,} \quad y = \sum_{t=1}^{T} \mathcal{R}_{t}^{\text{glob}}\left(\left\{x_{i}^{t}\right\}_{i}\right). \tag{19}
$$

This multi-layer architecture corresponds to a general message-passing neural network with arbitrary body order of the message at each layer. We will refer to this architecture as $G$-MACE. The $G$-MACE architecture directly inherits universality from the $G$-ACE and $G$-TRACE architectures:

Theorem 4.1 (Universality of $G$-MACE).
Assume that the one-particle embedding $(\phi_k)_k$ is a complete basis. Then, the set of $G$-MACE models, with a fixed finite number of layers $T$, is dense in the set of continuous and equivariant properties of point clouds $X \in \mathrm{msets}(\Omega)$, in the topology of pointwise convergence. It is dense in the uniform topology on compact and size-bounded subsets.

# 5 lie-nn: Generating Irreducible Representations for Reductive Lie Groups

In order to construct the $G$-cluster expansion for arbitrary Lie groups, one needs to compute the generalized Clebsch-Gordan coefficients (10) for a given tuple of representations (see (11)). To facilitate this task, we have implemented an open-source software library, lie-nn. In this section we review the key techniques employed in this library.

# 5.1 Lie Algebras of Reductive Lie Groups

Formally, the Lie algebra of a Lie group is its tangent space at the origin, equipped with an additional structure, the Lie bracket. Informally, the Lie algebra can be thought of as a linear approximation to the Lie group but, due to the group structure, this linear approximation carries (almost) full information about the group. In particular, the representation theory of the group is almost entirely determined by the Lie algebra, which is a simpler, linear object to work with than the fully nonlinear Lie group.

Lie algebra The Lie groups we study can be realized as closed subgroups $G \subset \mathrm{GL}_n(\mathbb{R})$ of the general linear group. In that case their Lie algebras can be concretely realized as $\mathfrak{g} = \operatorname{Lie}(G) = \{X \in M_n(\mathbb{R}) \mid \forall t \in \mathbb{R} : \exp(tX) \in G\}$, where $\exp(X) = 1 + X + \frac{1}{2} X^2 + \cdots$ is the standard matrix exponential. It turns out that $\mathfrak{g} \subset M_n(\mathbb{R})$ is a linear subspace closed under the commutator bracket $[X, Y] = XY - YX$.
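These two facts are easy to check numerically for $G = \mathrm{SO}(3)$, whose Lie algebra consists of the skew-symmetric $3 \times 3$ matrices. The following plain-NumPy sketch (independent of lie-nn; the helper names `skew` and `expm_so3` are ours, the latter implementing Rodrigues' formula for the matrix exponential) verifies that exponentials of skew-symmetric matrices are special orthogonal and that the commutator of two skew-symmetric matrices is again skew-symmetric:

```python
import numpy as np

def skew(a):
    # Element of so(3): the skew-symmetric matrix X with X v = a x v
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def expm_so3(X):
    # Matrix exponential of a 3x3 skew-symmetric matrix (Rodrigues' formula)
    a = np.array([X[2, 1], X[0, 2], X[1, 0]])
    t = np.linalg.norm(a)
    if t < 1e-12:
        return np.eye(3)
    return np.eye(3) + np.sin(t) / t * X + (1 - np.cos(t)) / t**2 * (X @ X)

rng = np.random.default_rng(0)
X, Y = skew(rng.normal(size=3)), skew(rng.normal(size=3))

# exp(tX) lies in SO(3) for every t: orthogonal with determinant one
Q = expm_so3(1.7 * X)
assert np.allclose(Q.T @ Q, np.eye(3))
assert np.isclose(np.linalg.det(Q), 1.0)

# the algebra is closed under the commutator: [X, Y] is again skew-symmetric
B = X @ Y - Y @ X
assert np.allclose(B, -B.T)
```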
+ +Structure theory We fix a linear basis $\{X_i\} \subset \mathfrak{g}$ , called a set of generators for the group. The Lie algebra structure is determined by the structure constants $A_{ijk}$ defined by $[X_i, X_j] = \sum_k A_{ijk} X_k$ , in that if $X = \sum_i a_i X_i$ and $Y = \sum_j b_j X_j$ then $[X, Y] = \sum_k \left( \sum_{i,j} A_{ijk} a_i b_j \right) X_k$ . The classification of reductive groups provides convenient generating sets for their Lie algebras (or their complexifications). One identifies a large commutative subalgebra $\mathfrak{h} \subset \mathfrak{g}$ (sometimes of $\mathfrak{g}_{\mathbb{C}} = \mathfrak{g} \otimes_{\mathbb{R}} \mathbb{C}$ ) with basis $\{H_i\}$ so that most (or all) of the other generators $E_\alpha$ can be chosen so that $[H_i, E_\alpha] = \alpha(H_i) E_\alpha$ for a linear function $\alpha$ on $\mathfrak{h}$ . These functions are the so-called roots of $\mathfrak{g}$ . Structural information about $\mathfrak{g}$ is commonly encoded pictorially via the Dynkin diagram of $\mathfrak{g}$ , a finite graph the nodes of which are a certain subset of the roots. There are four infinite families of simple complex Lie algebras $A_n = \mathfrak{su}(n+1)$ , $B_n = \mathfrak{so}(2n+1)$ , $C_n = \mathfrak{sp}(2n)$ , $D_n = \mathfrak{so}(2n)$ and further five exceptional simple complex Lie algebras (a general reductive Lie algebra is the direct sum of several simple ones and its centre). The Lie algebra only depends on the connected component of $G$ , thus when the group $G$ is disconnected in addition to the infinitesimal generators $\{X_i\}$ one also needs to fix so-called "discrete generators", a subset $\mathbf{H} \subset G$ containing a representative from each connected component. + +![](images/4ae7839d6d5684757127dbcbc3703e6b503a98e973c63bdbf3f0eb4bd078e2bf.jpg) +Figure 2: Examples of Dynkin diagrams and their associated group class. 
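As a concrete instance of the structure constants, the standard basis of $\mathfrak{so}(3)$ satisfies $[L_i, L_j] = \sum_k \varepsilon_{ijk} L_k$, i.e. $A_{ijk} = \varepsilon_{ijk}$, the Levi-Civita symbol. A small NumPy sketch (variable names ours) recovers $A$ by expanding each commutator in the chosen basis:

```python
import numpy as np

# Levi-Civita symbol eps_{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# Basis generators of so(3): (L_i)_{jk} = -eps_{ijk}
L = [-eps[i] for i in range(3)]

# Structure constants A_{ijk} defined by [L_i, L_j] = sum_k A_{ijk} L_k,
# recovered by expanding each commutator in the basis via least squares
M = np.stack([Lk.ravel() for Lk in L], axis=1)  # 9 x 3 basis matrix
A = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        comm = (L[i] @ L[j] - L[j] @ L[i]).ravel()
        A[i, j] = np.linalg.lstsq(M, comm, rcond=None)[0]

# For this basis the structure constants are exactly the Levi-Civita symbol
assert np.allclose(A, eps)
```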
Representation theory The representation theory of complex reductive Lie algebras is completely understood. Every finite-dimensional representation is (isomorphic to) the direct sum of irreducible representations ("irreps"), with the latter parametrized by an appropriate linear functional on $\mathfrak{h}$ (the "highest weight"). Further, given a highest weight $\lambda$, there is a construction of the associated irrep with an explicit action of the infinitesimal generators chosen above. The Weyl dimension formula gives the dimension of an irrep in terms of its highest weight.

# 5.2 Numerical Computations in lie-nn

The most basic class of the lie-nn library encodes a group $G$ and an infinitesimal representation $d\rho$ of $\mathfrak{g}$ using the tuple

$$
\rho := (A, n, \{d\rho(X_{i})\}_{i}, \{\rho(h)\}_{h \in \mathbf{H}}), \tag{20}
$$

with $A$ the structure constants of the group, $n$ the dimension of the representation, and $d\rho(X_i)$ and $\rho(h)$ being $n \times n$ matrices encoding the action of the infinitesimal and the discrete generators, respectively. The action of infinitesimal generators is related to the action of group generators by the exponential: $\forall X \in \mathfrak{g}, \rho(e^X) = e^{d\rho(X)}$. For finite groups, we assume that $d\rho(X) = 0$, as they have only discrete generators.

As the building blocks of the theory, irreps are treated specially; the package implements functionality for the following operations for each supported Lie group:

- Constructing the irrep with a given highest weight.
- Determining the dimension of an irrep.
- Decomposing the tensor product of several irreps into irreps up to isomorphism (the selection rule, giving the list of irreducible components and their multiplicities).
- Decomposing the tensor product of several irreps into irreps explicitly via a change of basis ("generalized Clebsch-Gordan coefficients").
- Computing the symmetrized tensor product of the group (see §5.3 and A.9 for details).

To construct an irrep explicitly as in (20), one needs to choose a basis in the abstract representation space (including a labeling scheme for the basis) so that we can give matrix representations for the action of generators. For this purpose, we use in lie-nn the Gelfand-Tsetlin (GT) basis (Gelfand and Tsetlin, 1950) and the associated labeling of the basis by GT patterns (this formalism was initially introduced for algebras of type $A_{n}$ but later generalized to all classical groups). Enumerating the GT patterns for a given algebra gives the dimension of a given irrep, the selection rules can be determined combinatorially, and it is also possible to give explicit algorithms to compute Clebsch-Gordan coefficients (the case of $A_{n}$ is treated by Alex et al. (2011)). For some specific groups, simplifications to this procedure are possible and GT patterns are not required.

In some cases, one wants to compute coefficients for reducible representations or for representations where the analytical computation with GT patterns is too complex. In these cases, a numerical algorithm to compute the coefficients is required. Let $d\rho_{1}, d\rho_{2}$ be two Lie algebra representations of interest. The tensor product on the Lie algebra, $d\rho_{1} \otimes d\rho_{2}(X)$, can be computed as

$$
d\rho_{1} \otimes d\rho_{2}(X) = d\rho_{1}(X) \otimes 1 + 1 \otimes d\rho_{2}(X). \tag{21}
$$

Therefore, given sets of generators of three representations $d\rho_{1}, d\rho_{2}, d\rho_{3}$, the Clebsch-Gordan coefficients are the change of basis between $(d\rho_{1}(X) \otimes 1 + 1 \otimes d\rho_{2}(X))$ and $d\rho_{3}(X)$. One can compute this change of basis numerically via a null space algorithm. For some groups, one can apply an iterative algorithm that generates all irreps starting with a single representation, using the above-mentioned procedure (see A.10).
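The null-space procedure can be sketched for $\mathfrak{su}(2)$ in plain NumPy (an illustration only, not the lie-nn implementation; function names are ours). For each generator we impose the intertwiner constraint $(d\rho_1(X)\otimes 1 + 1\otimes d\rho_2(X))\,C = C\,d\rho_3(X)$ as a linear system in the entries of $C$, and read the Clebsch-Gordan coefficients off the null space:

```python
import numpy as np

def spin_matrices(j):
    # Generators J_z, J_+, J_- of the spin-j representation, basis |j,m>, m = j..-j
    m = np.arange(j, -j - 1, -1)
    Jz = np.diag(m)
    Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    return [Jz, Jp, Jp.T]

def clebsch_gordan(j1, j2, j3):
    # Intertwiners C : V_{j3} -> V_{j1} (x) V_{j2} from the constraint
    # (d rho_1(X) (x) 1 + 1 (x) d rho_2(X)) C = C d rho_3(X), cf. (21),
    # vectorized row-major and solved as a null-space problem via the SVD.
    d1, d2, d3 = int(2 * j1 + 1), int(2 * j2 + 1), int(2 * j3 + 1)
    rows = []
    for A1, A2, A3 in zip(spin_matrices(j1), spin_matrices(j2), spin_matrices(j3)):
        A = np.kron(A1, np.eye(d2)) + np.kron(np.eye(d1), A2)  # product action
        rows.append(np.kron(A, np.eye(d3)) - np.kron(np.eye(d1 * d2), A3.T))
    _, s, Vt = np.linalg.svd(np.vstack(rows))
    rank = int(np.sum(s > 1e-10))
    return [v.reshape(d1 * d2, d3) for v in Vt[rank:]]  # null-space basis

# 1/2 (x) 1/2 = 0 (+) 1: each target spin occurs once, so each null space is 1-dim
Cs = clebsch_gordan(0.5, 0.5, 1.0)
assert len(Cs) == 1 and len(clebsch_gordan(0.5, 0.5, 0.0)) == 1

# the recovered coefficient matrix indeed intertwines the two actions
C = Cs[0]
for A1, A2, A3 in zip(spin_matrices(0.5), spin_matrices(0.5), spin_matrices(1.0)):
    A = np.kron(A1, np.eye(2)) + np.kron(np.eye(2), A2)
    assert np.allclose(A @ C, C @ A3)
```

The dimension of the null space equals the multiplicity of $d\rho_3$ inside $d\rho_1 \otimes d\rho_2$, so the same routine also yields the selection rule as a by-product.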
# 5.3 Symmetric Powers

Let $V$ be a vector space and $\{e_i\}$ be a basis of $V$. The symmetric power of $V$, $\operatorname{Sym}^n V$, can be regarded as the space of homogeneous polynomials of degree $n$ in the variables $e_i$. The product basis in Equation 10 spans exactly this space. A basis of $\operatorname{Sym}^n V$ can be constructed as

$$
\left\{ e_{i_{1}} \cdot e_{i_{2}} \cdots e_{i_{n}} \mid i_{1} \leq \dots \leq i_{n} \right\}. \tag{22}
$$

If $V_{\lambda}$ is an irreducible representation of a reductive Lie group $G$ with highest weight $\lambda$, then $\operatorname{Sym}^n V_{\lambda}$ admits a decomposition into irreducible representations,

$$
\operatorname{Sym}^{n} V_{\lambda} = \bigoplus_{\mu} c_{\lambda,\mu} V_{\mu}. \tag{23}
$$

The generalized Clebsch-Gordan coefficients in (11) represent the change of basis between $\mathrm{Sym}^n V_\lambda$ and one of the $V_{\mu}$. The following steps are taken to obtain these coefficients:

- Construct the symmetric power basis as in (22).
- Compute the coefficients $c_{\lambda, \mu}$, using Freudenthal's formula or GT patterns.
- For any $\mu$ with $c_{\lambda, \mu}$ non-zero, find a basis of $V_{\mu}$, and compute the change of basis between the basis of $\operatorname{Sym}^n V_{\lambda}$ and $V_{\mu}$.

Alternatively, if one simply has the Clebsch-Gordan coefficients (the change of basis from $V_{\lambda}$ to some $V_{\mu}$), a new algorithm outlined in Appendix A.9 and implemented in lie-nn can construct the change of basis from $\mathrm{Sym}^n V_{\lambda}$ to $V_{\mu}$.

# 6 Applications

# 6.1 Lie groups and their applications

In Table 1 we give a non-exhaustive overview of Lie groups and their typical application domains, to which our methodology naturally applies.
Benchmarking our method on all of these applications is beyond the scope of the present work, in particular because most of these fields do not have standardized benchmarks and baselines to

Table 1: Lie groups of interest covered by the present methods and their potential applications to equivariant neural networks. The groups above the horizontal line are already available in lie-nn. The ones below the line fall within our framework and can be added.
| Group | Application | Reference |
| --- | --- | --- |
| U(1) | Electromagnetism | (Lagrave et al., 2021) |
| SU(3) | Quantum Chromodynamics | (Favoni et al., 2022) |
| SO(3) | 3D point clouds | (Batatia et al., 2022a) |
| SO+(1,3) | Particle Physics | (Bogatskiy et al., 2020b) |
| SL(3,R) | Point cloud classification | - |
| SU(2N) | Entangled QP | - |
| Sp(N) | Hamiltonian dynamics | - |
| SO(2N+1) | Projective geometry | - |
+ +compare against. The MACE architecture has proven to be state of the art for a large range of atomistic modeling benchmarks (Batatia et al., 2022a). In the next section, we choose two new prototypical applications and their respective groups to further assess the performance of our general approach. + +# 6.2 Particle physics with the $SO(1,3)$ + +Jet tagging consists in identifying the process that generated a collimated spray of particles called a jet after a high-energy collision occurs at particle colliders. Each jet can be defined as a multiset of four-momenta $[(E_i,\mathbf{p}_i)]_{i = 1}^N$ , where $E_{i}\in \mathbb{R}^{+}$ and $\mathbf{p}_i\in \mathbb{R}^3$ + +Current state-of-the-art models incorporate the natural symmetry arising from relativistic objects, e.g., the Lorentz symmetry, as model invariance. To showcase the performance and generality of the $G$ -MACE framework we use the Top-Tagging dataset (Butter et al., 2019), where the task is to differentiate boosted top quarks from the background composed of gluons and light quark jets. In Table 6.2, we can see that $G$ -MACE achieves excellent accuracy, being the only arbitrary equivariant model to reach similar accuracy as PELICAN, which is an invariant model. We refer to Appendix A.11.1 for the details of the architecture. + +Table 2: Comparisson between state-of-the-art metrics on the Top-Tagging dataset. Scores were taken from (Bogatskiy et al., 2022; Qu et al., 2022; Qu and Gouskos, 2020; Munoz et al., 2022; Bogatskiy et al., 2020a; Komiske et al., 2019; Pearkes et al., 2017). + +
| Architecture | #Params | Accuracy | AUC | Rej30% |
| --- | --- | --- | --- | --- |
| PELICAN | 45k | 0.942 | 0.987 | 2289 ± 204 |
| partT | 2.14M | 0.940 | 0.986 | 1602 ± 81 |
| ParticleNet | 498k | 0.938 | 0.985 | 1298 ± 46 |
| LorentzNet | 224k | 0.942 | 0.987 | 2195 ± 173 |
| BIP | 4k | 0.931 | 0.981 | 853 ± 68 |
| LGN | 4.5k | 0.929 | 0.964 | 435 ± 95 |
| EFN | 82k | 0.927 | 0.979 | 888 ± 17 |
| TopoDNN | 59k | 0.916 | 0.972 | 295 ± 5 |
| LorentzMACE | 228k | 0.942 | 0.987 | 1935 ± 85 |
+ +# 6.3 3D Shape recognition + +3D shape recognition from point clouds is of central importance for computer vision. We use the ModelNet10 dataset (Wu et al., 2015) to test our proposed architecture in this setting. As rotated objects need to map to the same class, we use a MACE model with $O(3)$ symmetry. To create an encoder version of $G$ -MACE, we augment a PointNet++ implementation (Yan, 2019) with $G$ -MACE layers. See the appendix A.11.2 for more details on the architecture. We see in Table 3 that MACE outperforms the non-equivariant baseline. + +Table 3: Accuracy in shape recognition. Scores were taken from (Qi et al., 2016), (Qi et al., 2017), Shen et al. (2018), Li et al. (2018), Kumawat and Raman (2019) + +
| Architecture | PointMACE (ours) | PointNet | PointNet++ | KCN | SO-Net | LP-3DCNN |
| --- | --- | --- | --- | --- | --- | --- |
| Accuracy | 96.1 | 94.2 | 95.0 | 94.4 | 95.5 | 94.4 |
| Representation | Point cloud | Point cloud | Point cloud | Point cloud | Point cloud | Voxel grid |
+ +# 7 Conclusion + +We introduced the $G$ -Equivariant Cluster Expansion, which generalizes the successful ACE and MACE architectures to symmetries under arbitrary reductive Lie groups. We provide an open-source Python library lie-nn that provides all the essential tools to construct such general Lie-group equivariant neural networks. We demonstrated that the general $G$ -MACE architecture simultaneously achieves excellent accuracy in Chemistry, Particle Physics, and Computer Vision. Future development will implement additional groups and generalize to new application domains. + +# Acknowledgments and Disclosure of Funding + +IB's work was supported by the ENS Paris Saclay. CO's work was supported by NSERC Discovery Grant IDGR019381 and NFRF Exploration Grant GR022937. This work was also performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3). IB would like to thank Gábor Csányi for his support. + +# References + +Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, Neural Computation 1, 541 (1989). +T. S. Cohen and M. Welling, ICLR 2017 (2016a), 10.48550/ARXIV.1612.08498. +C. Esteves, C. Allen-Blanchette, X. Zhou, and K. Daniilidis, "Polar transformer networks," (2017). +B. Anderson, T. S. Hy, and R. Kondor, in Advances in Neural Information Processing Systems, Vol. 32, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Curran Associates, Inc., 2019). +A. P. Bartók, R. Kondor, and G. Csányi, Physical Review B 87, 184115 (2013). +S. Batzner, A. Musaelian, L. Sun, M. Geiger, J. P. Mailoa, M. Kornbluth, N. Molinari, T. E. Smidt, and B. Kozinsky, Nature Communications 13, 2453 (2022). +I. Batatia, D. P. Kovács, G. N. C. Simm, C. Ortner, and G. Csányi, "Mace: Higher order equivariant message passing neural networks for fast and accurate force fields," (2022a). +A. Bogatskiy, T. Hoffman, D. W. Miller, and J. T. 
Offermann, Machine Learning and the Physical Sciences workshop, NeurIPS 2022 (2022), arXiv:2211.00454 [hep-ph]. +C. Li, H. Qu, S. Qian, Q. Meng, S. Gong, J. Zhang, T.-Y. Liu, and Q. Li, (2022), arXiv:2208.07814 [hep-ph]. +M. Finzi, S. Stanton, P. Izmailov, and A. G. Wilson, in Proceedings of the 37th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 119, edited by H. D. III and A. Singh (PMLR, 2020) pp. 3165-3176. +R. Kondor and S. Trivedi, in Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 80, edited by J. Dy and A. Krause (PMLR, 2018) pp. 2747-2755. +A. Bogatskiy, B. Anderson, J. T. Offermann, M. Roussi, D. W. Miller, and R. Kondor, "Lorentz Group Equivariant Neural Network for Particle Physics," (2020a), arXiv:2006.04780 [hep-ph]. +R.Drautz,Phys.Rev.B 99,014104 (2019). + +I. Batatia, S. Batzner, D. P. Kovács, A. Musaelian, G. N. C. Simm, R. Drautz, C. Ortner, B. Kozinsky, and G. Csányi, "The design space of e(3)-equivariant atom-centered interatomic potentials," (2022b). +T. Cohen and M. Welling, in Proceedings of The 33rd International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 48, edited by M. F. Balcan and K. Q. Weinberger (PMLR, New York, New York, USA, 2016) pp. 2990-2999. +T. S. Cohen, M. Geiger, J. Kohler, and M. Welling, in International Conference on Learning Representations (2018). +M. Weiler, P. Forre, E. Verlinde, and M. Welling, "Coordinate independent convolutional networks - isometry and gauge equivariant convolutions on riemannian manifolds," (2021). +G. Dusson, M. Bachmayr, G. Csányi, R. Drautz, S. Etter, C. van der Oord, and C. Ortner, Journal of Computational Physics 454, 110946 (2022). +M. Geiger and T. Smidt, "e3nn: Euclidean neural networks," (2022). +M. Finzi, M. Welling, and A. G. 
Wilson, "A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups," (2021). +R. Kondor, Z. Lin, and S. Trivedi, in Advances in Neural Information Processing Systems, Vol. 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Curran Associates, Inc., 2018). +V. G. Satorras, E. Hoogeboom, and M. Welling, "E(n) equivariant graph neural networks," (2021). +J. Brandstetter, R. Hesselink, E. van der Pol, E. J. Bekkers, and M. Welling, "Geometric and physical quantities improve e(3) equivariant message passing," (2022), arXiv:2110.02905 [cs.LG]. +R.Drautz,Phys.Rev.B 102,024104 (2020). +I. Kaliuzhnyi and C. Ortner, ArXiv e-prints 2202.04140 (2022). +A. Shapeev, Multiscale Model. Simul. 14, 1153 (2016). +J. P. Darby, D. P. Kovács, I. Batatia, M. A. Caro, G. L. W. Hart, C. Ortner, and G. Csányi, “Tensorreduced atomic density representations,” (2022). +I. M. Gelfand and M. L. Tsetlin, 825828, 71 (1950). +A. Alex, M. Kalus, A. Huckleberry, and J. von Delft, Journal of Mathematical Physics 52, 023507 (2011). +P.-Y. Lagrave, Y. Cabanes, and F. Barbaresco, in Geometric Science of Information, edited by F. Nielsen and F. Barbaresco (Springer International Publishing, Cham, 2021) pp. 577-584. +M. Favoni, A. Ipp, D. I. Müller, and D. Schuh, Phys. Rev. Lett. 128, 032003 (2022). +A. Bogatskiy, B. Anderson, J. Offermann, M. Roussi, D. Miller, and R. Kondor, in Proceedings of the 37th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 119, edited by H. D. III and A. Singh (PMLR, 2020) pp. 992-1002. +A. Butter et al., SciPost Phys. 7, 014 (2019), arXiv:1902.09914 [hep-ph]. +H. Qu, C. Li, and S. Qian, "Particle transformer for jet tagging," (2022). +H. Qu and L. Gouskos, Phys. Rev. D 101, 056019 (2020), arXiv:1902.08570 [hep-ph]. +J. M. Munoz, I. Batatia, and C. Ortner, "Bip: Boost invariant polynomials for efficient jet tagging," (2022). +P. T. Komiske, E. M. 
Metodiev, and J. Thaler, JHEP 01, 121 (2019), arXiv:1810.05165 [hep-ph]. +J. Pearkes, W. Fedorko, A. Lister, and C. Gay, (2017), arXiv:1704.02124 [hep-ex]. + +Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE Computer Society, Los Alamitos, CA, USA, 2015) pp. 1912-1920. +X. Yan, github.com/yanx27 (2019). +C. R. Qi, H. Su, K. Mo, and L. J. Guibas, arXiv preprint arXiv:1612.00593 (2016). +C. R. Qi, L. Yi, H. Su, and L. J. Guibas, arXiv preprint arXiv:1706.02413 (2017). +Y. Shen, C. Feng, Y. Yang, and D. Tian, in CVPR (2018). +J. Li, B. M. Chen, and G. H. Lee, CoRR (2018). +S. Kumawat and S. Raman, in CVPR (2019). +A. I. Molev, “A weight basis for representations of even orthogonal lie algebras,” (1999), arXiv:math/9902060 [math.RT]. +R. Steinberg, Bull. Amer. Math. Soc. 67, 406 (1961). +M. Bachmayr, G. Dusson, C. Ortner, and J. Thomas, ArXiv e-prints 2109.14771 (2021). +J. Thomas, H. Chen, and C. Ortner, Arch. Ration. Mech. Anal. 246 (2022). +T. Broecker, Representation of Compact Lie Groups (Springer Berlin Heidelberg, 1985). +N. Bourbaki, Lie Groups and Lie Algebras (Springer Berlin, Heidelberg, 1989). + +# A Appendix + +# A.1 Background on Lie groups and Tensor Products of Representations + +# A.1.1 Lie groups + +Lie groups generalize the idea of continuous symmetry such as the rotation symmetry of the round sphere $S^n$ (given by the action of the orthogonal group $\mathrm{O}(n)$ ), the symmetry of a vector space under changes of basis (given by the action of the general linear group $\mathrm{GL}_n(\mathbb{R})$ ), or the invariance of Minkowski spacetime under boosts (given by the Lorentz group). The symmetry groups of specific geometries were studied in the 19th century, leading to Sophus Lie's study of general continuous symmetry. Other major contributors to our understanding include Klein, Riemann, Hilbert, Poincaré, Noether, Cartan, among many others. 
In particular, the solution of Hilbert's 5th problem by Gleason, Montgomery, and Zippin shows that under very general conditions a continuous symmetry group must be a Lie group. Symmetry being central to mathematics, Lie groups appear in many areas of mathematics, from differential geometry to number theory.

Let us give the general formal definition, as well as a more practical concrete one:

Definition A.1. A real (complex) Lie group is a group that is also a finite-dimensional smooth (complex) manifold in which the product and inversion of the group are also smooth (holomorphic) maps.

A linear Lie group is a subgroup of the group $\mathrm{GL}_n(\mathbb{R})$ which is also a closed subset (in the sense of real analysis).

It is a fact that every closed subgroup of a Lie group is a Lie group, so every group of the second type is indeed also a group of the first type. Conversely, while in general a Lie group is only locally isomorphic to a linear group (by a surjective group homomorphism), every compact Lie group is isomorphic to a linear group, and most Lie groups that arise in practice are linear.

When the Lie group $G$ acts on a space $X$ (e.g. as a symmetry group) it naturally also acts on spaces of functions on $X$ (by translation), and this action is linear. In general, an action of $G$ on a (usually complex) vector space by linear maps is called a linear representation (of $G$). In the context of our paper the space of functions on $X$ is the space of features, and we achieve efficient computation in it by decomposing this representation into simpler constituents – in other words, by understanding the representation theory of $G$.

Describing the representation theory of arbitrary Lie groups is a very difficult problem and an active area of research in pure mathematics.
However, for certain classes of groups it is possible to say quite a bit (for example, classical harmonic analysis can be viewed through the lens of the representation theory of commutative groups). An important class of groups whose representation theory can be completely understood is the class of reductive Lie groups, on which we focus in this paper.

For now let us give a concrete definition; an abstract one will follow later once we introduce the necessary language.

Definition A.2. A linear Lie group $G \subset \mathrm{GL}_n(\mathbb{R})$ is reductive if it is also closed under transpose: for every $g \in G$ we have that $g^T \in G$.

Most of the groups that are of interest in the natural sciences are reductive Lie groups. This includes well-known groups such as the symplectic group $\mathrm{Sp}_{2n}(\mathbb{R})$, relevant in Hamiltonian dynamics, and the orthogonal group $\mathrm{O}(n)$, which parametrizes rotations in $\mathbb{R}^n$.

# A.1.2 Linear representations

Fix a Lie group $G$. A linear representation of $G$ is a pair $(\pi, V)$ where $V$ is a complex vector space and $\pi$ is an action of $G$ on $V$ by linear maps. In other words, for each $g \in G$ we have a complex-linear map $\pi(g) \colon V \to V$ such that $\pi(g_1g_2) = \pi(g_1)\pi(g_2)$ and such that $\pi(1_G) = \mathrm{Id}_V$ (in other words, $\pi \colon G \to \mathrm{GL}(V)$ is a group homomorphism).

For an example, let the familiar rotation group $\mathrm{O}(3)$ act on the usual round sphere $S^2$. Then $\mathrm{O}(3)$ acts on the space of functions on $S^2$. This space is infinite-dimensional, but it turns out we can approximate it by a sum of finite-dimensional pieces. Indeed, for each degree $d \geq 0$ let $V_d$ be the space of polynomials in the variables $x, y, z$ which are homogeneous of degree $d$ (for example
Letting O(3) act on $\mathbb{R}^3$ by linear change of variable, it acts on each $V_{d}$ since such change of variable does not change the degree of homogeneity. Now let $W_{d} \subset V_{d}$ be the subspace of harmonic polynomials, those polynomials $p$ that satisfy $\frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial y^2} + \frac{\partial^2 p}{\partial z^2} = 0$ . Then $W_{d}$ is itself O(3) invariant since the Laplace operator is rotation-invariant, making $W_{d}$ a subrepresentation of $V_{d}$ . It is a (not immediately obvious) fact that each $W_{d}$ is in fact an irreducible representation: it has no subrepresentations of its own other than itself and the zero subspace. Further, if we interpret the harmonic polynomials in $W_{d}$ as functions on the unit sphere $S^2 \subset \mathbb{R}^3$ we can view $W_{d}$ as a space of functions on the sphere. It then turns out that the spaces $W_{d}$ are linearly independent of each other, and that their sum $\bigoplus_{d \geq 0} W_{d}$ is dense in any reasonable space of functions on the sphere (the space of continuous functions, the space of smooth functions, or the space of square-integrable functions, each with its respective notion of convergence). In other words, we have morally decomposed the space of functions on $S^2$ as the sum of O(3)-invariant subrepresentations, each of which is irreducible. + +Decomposing a representation into its irreducible constituents is a major goal of representation theory (as well as a computational goal of this paper). When this is possible we can understand a general representations $V$ of a group $G$ by first enumerating its irreducible representations and then counting how many times each irreducible occurs as summand in $V$ , known as the (this is very much analogous to understanding general positive integers via their prime factorization – here the irreducible representations play the role of the prime numbers). 
Reductive groups are particularly amenable to this approach because their representations indeed do decompose as direct sums in this fashion. It is a fact that a linear Lie group $G$ is reductive if and only if each of its finite-dimensional representations is isomorphic to a direct sum of irreducible representations.

Before we delve further into representation theory, we need a brief detour into the structure theory of Lie groups.

# A.1.3 Lie algebras

As Lie groups are manifolds, understanding their representation theory usually leads to advanced geometrical and topological results. We will now describe how these difficult questions can be transformed into simpler linear algebra questions using the Lie algebras associated with Lie groups.

A Lie algebra is a vector space $\mathfrak{g}$ over a field $F$ together with a map $[\cdot, \cdot]: \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$ called the Lie bracket, satisfying the following axioms:

- The Lie bracket is bilinear, that is, for all $x,y,z\in \mathfrak{g}$ and $\alpha ,\beta \in F$, $[\alpha x + \beta y,z] = \alpha [x,z] + \beta [y,z]$ and $[x,\alpha y + \beta z] = \alpha [x,y] + \beta [x,z]$.
- The Lie bracket is antisymmetric, that is, for all $x,y\in \mathfrak{g}$, $[x,y] = -[y,x]$.
- The Lie bracket satisfies the Jacobi identity, that is, for all $x, y, z \in \mathfrak{g}$, $[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0$.

Lie algebras can alternatively be seen as the tangent space of a Lie group at the identity:

Theorem A.3. Let $G$ be a Lie group viewed as a smooth manifold. The tangent space $T_{e}G$ at the identity is a Lie algebra, called the Lie algebra of $G$ and denoted $\mathfrak{g}$. Moreover, in the case of connected complex Lie groups, the category of representations of $G$, $\operatorname{Rep}(G)$, is equivalent to the category of representations of $\mathfrak{g}$, $\operatorname{Rep}(\mathfrak{g})$.

This theorem is at the heart of the representation theory of Lie groups.
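For matrix Lie algebras with the commutator bracket, these axioms are straightforward to verify numerically (a NumPy sketch; variable names ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def bracket(x, y):
    # The commutator bracket, which makes a space of matrices a Lie algebra
    return x @ y - y @ x

x, y, z = (rng.normal(size=(4, 4)) for _ in range(3))

# bilinearity (in the first slot), antisymmetry, and the Jacobi identity
assert np.allclose(bracket(2 * x + 3 * y, z), 2 * bracket(x, z) + 3 * bracket(y, z))
assert np.allclose(bracket(x, y), -bracket(y, x))
jacobi = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
assert np.allclose(jacobi, 0)
```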
As Lie algebras are vector spaces, the functorial equivalence between the two categories of representations translates geometrical and topological questions into simpler linear algebra questions.

A classification of Lie algebras was carried out by Cartan and later extended in work of Gelfand. In the case of complex Lie algebras, this classification is complete. Two major classes of Lie algebras emerged from Cartan's work: semi-simple Lie algebras and nilpotent Lie algebras.

Definition A.4 (Central series). A central series of a Lie algebra $\mathfrak{g}$ is a sequence of ideals $(\mathfrak{g}_i)_{i\in \mathbb{N}}$ such that $\mathfrak{g}_0 = \mathfrak{g}$ and $\mathfrak{g}_{i + 1} = [\mathfrak{g}_i,\mathfrak{g}]$.

Definition A.5. A Lie algebra is called simple if it contains no non-trivial ideals, and semi-simple if it is a direct sum of simple Lie algebras (equivalently, if it contains no non-zero abelian ideals). A Lie algebra is called nilpotent if its central series terminates at zero.

An important result of this theory is that any finite-dimensional Lie algebra over a field of characteristic zero decomposes as a semi-simple subalgebra together with its maximal solvable ideal (the Levi decomposition). Therefore, understanding arbitrary representations of a Lie algebra largely boils down to understanding representations of semi-simple and solvable algebras.

Most reductive Lie groups of interest have a semi-simple Lie algebra as their associated Lie algebra. These algebras are also the ones for which the representation theory is best understood.

The study of Lie algebras divides into two subfields. The structure theory studies the classification of Lie algebras in terms of their eigenspaces (as vector spaces). The representation theory builds on the structure theory to classify representations of semi-simple Lie algebras. We will give a very short introduction to both of them.

# A.1.4 Structure theory of semi-simple Lie algebras

Let $\mathfrak{g}$ be a semi-simple complex Lie algebra. Let $\mathfrak{h}$ be a maximal set of mutually commuting elements of $\mathfrak{g}$.
Maximal means here that $\mathfrak{h}$ is not strictly contained in any other such set. We call $\mathfrak{h}$ the Cartan algebra of $\mathfrak{g}$. As the elements of $\mathfrak{h}$ mutually commute, they can be simultaneously diagonalized and hence share the same eigenspaces.

Therefore one can decompose $\mathfrak{g}$ in terms of the eigenspaces of the elements of $\mathfrak{h}$. Let $\alpha \in \mathfrak{h}^*$ be a function in the dual of the Cartan algebra such that $\forall h_i \in \mathfrak{h}, \alpha(h_i) = \alpha_i$, with $\alpha_i$ the eigenvalue associated to $h_i$. We call these linear functionals the roots of $\mathfrak{g}$. Let $R$ be the set of all such functions, called the root space. One therefore has the following decomposition of the Lie algebra:

$$
\mathfrak{g} = \mathfrak{h} \oplus \bigoplus_{\alpha \in R} \mathfrak{g}_{\alpha} \tag{24}
$$

The structure theory of semi-simple complex Lie algebras is the study of the root space $R$ and the associated eigenspaces $\mathfrak{g}_{\alpha}$. One can classify complex semi-simple Lie algebras based on the type of their root spaces. For real semi-simple Lie algebras, the structure theory essentially reduces to the complex case with a few extra steps.

# A.1.5 Representation theory of complex semi-simple Lie algebras and Lie groups

The representation theory of complex reductive Lie algebras is completely understood. Every finite-dimensional representation is the direct sum of irreducible representations. Every irreducible representation is uniquely determined by a highest weight $\lambda$, which is an element of the dual of the Cartan algebra $\mathfrak{h}$. The dimension of each irreducible representation can be computed using the Weyl dimension formula,

$$
\dim V_{\lambda} = \frac{\prod_{\alpha \in R^{+}} (\lambda + \rho, \alpha)}{\prod_{\alpha \in R^{+}} (\rho, \alpha)} \tag{25}
$$

where $R^{+}$ is a subset of the root space $R$, called the positive roots, and $\rho$ is the half sum of the positive roots.
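As a worked instance of (25), consider $\mathfrak{sl}(3,\mathbb{C})$. Assuming the standard facts (not stated above) that its positive roots are $\alpha_1$, $\alpha_2$, $\alpha_1 + \alpha_2$ with $\rho = \alpha_1 + \alpha_2$, and writing the highest weight as $\lambda = p\omega_1 + q\omega_2$ in fundamental weights normalized so that $(\omega_i, \alpha_j) = \delta_{ij}$, the formula reduces to $(p+1)(q+1)(p+q+2)/2$:

```python
# Weyl dimension formula (25) specialized to sl(3, C).
# The pairings over the three positive roots a1, a2, a1+a2 reduce to
# (lambda + rho, alpha) = p+1, q+1, p+q+2 and (rho, alpha) = 1, 1, 2.
from fractions import Fraction

def weyl_dim_sl3(p, q):
    """Dimension of the sl(3, C) irrep with Dynkin labels (p, q)."""
    num = [p + 1, q + 1, p + q + 2]   # numerator pairings, alpha in R+
    den = [1, 1, 2]                   # denominator pairings, alpha in R+
    d = Fraction(1)
    for n, m in zip(num, den):
        d *= Fraction(n, m)
    return int(d)

assert weyl_dim_sl3(0, 0) == 1   # trivial representation
assert weyl_dim_sl3(1, 0) == 3   # fundamental representation
assert weyl_dim_sl3(1, 1) == 8   # adjoint representation
```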
Obtaining an explicit basis for each representation is much harder. Gelfand and Tsetlin proposed a basis for the general case of the $SU(N)$ groups. This basis was recently generalized to other complex semi-simple algebras by Molev (1999).

# A.1.6 Representations of real semi-simple Lie groups

The representation theory of real semi-simple Lie groups can be elucidated by examining the complex semi-simple case through the application of the so-called Weyl unitary trick. This technique establishes a connection between the representations of real Lie groups and those of the complexification of their universal covering groups.

Definition A.6. Given a Lie algebra $\mathfrak{g}$, there exists a unique simply connected group $\hat{G}$ possessing $\mathfrak{g}$ as its Lie algebra. A group homomorphism $\psi$ exists from this unique simply connected Lie group to any group $G$ sharing the same Lie algebra. We refer to $\hat{G}$ as the universal covering Lie group, and it is unique up to isomorphism.

An essential observation is that the irreducible representations of $G$ form a subset of the irreducible representations of $\hat{G}$. Understanding the irreducible representations of $\hat{G}$ therefore suffices. Take the case of the $\mathfrak{su}(2)$ Lie algebra, for instance. The universal covering group is $\mathrm{SU}(2)$, corresponding to the special unitary two-by-two matrices. The group $\mathrm{SO}(3)$ of special orthogonal matrices also has $\mathfrak{su}(2)$ as its Lie algebra. Consequently, the irreducible representations of $\mathrm{SO}(3)$ are a subset of those of $\mathrm{SU}(2)$. Representations of $\mathrm{SU}(2)$ can be indexed by a half-integer, known as the quantum number $j$. The integer representations are also irreducible representations of $\mathrm{SO}(3)$, with the non-integer ones being referred to as the spin representations.
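A small sketch of this indexing (our own illustration): the irreducible representation with quantum number $j$ has dimension $2j + 1$, and only the integer-$j$ ones descend to $\mathrm{SO}(3)$:

```python
# SU(2) irreps indexed by the half-integer quantum number j.
# dim = 2j + 1; integer j gives a genuine SO(3) representation,
# half-odd-integer j gives a spin representation.
from fractions import Fraction

def su2_irreps(j_max):
    """List of (j, dimension, is_SO3_rep) for j = 0, 1/2, 1, ..., j_max."""
    out = []
    j = Fraction(0)
    while j <= j_max:
        out.append((j, int(2 * j + 1), j.denominator == 1))
        j += Fraction(1, 2)
    return out

for j, dim, is_so3 in su2_irreps(2):
    kind = "SO(3) rep" if is_so3 else "spin rep"
    print(f"j = {j}: dim {dim} ({kind})")
```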
Since the representation theory of complex Lie groups is well understood, an effective approach to understanding the representation theory of a real Lie group involves studying the complexification of its universal cover.

Definition A.7. For a universal covering Lie group $\hat{G}$ with Lie algebra $\mathfrak{g}$, the universal covering Lie group $\hat{G}_{\mathbb{C}}$ with associated Lie algebra $\mathfrak{g}_{\mathbb{C}} = \mathfrak{g} \otimes \mathbb{C}$ is termed the complexification of $\hat{G}$.

The group of interest $G$, its universal cover $\hat{G}$, and the complexification of the universal cover $\hat{G}_{\mathbb{C}}$ have closely related irreducible representations. One therefore obtains the following chain of inclusions of the respective representations of the groups,

$$
\operatorname{Rep}(G) \subset \operatorname{Rep}(\hat{G}) \subset \operatorname{Rep}(\hat{G}_{\mathbb{C}}) \tag{26}
$$

Let $K$ be the maximal compact subgroup of $\hat{G}_{\mathbb{C}}$ and $\rho_{\lambda}$ a finite-dimensional irreducible representation of $K$, with $\lambda$ as its highest weight. It is possible to analytically continue $\rho_{\lambda}$ into an irreducible representation of $\hat{G}_{\mathbb{C}}$, denoted $\rho_{\lambda}^{\hat{G}_{\mathbb{C}}}$, and also into its analytic conjugate $\bar{\rho}_{\lambda}^{\hat{G}_{\mathbb{C}}}$. Every finite-dimensional representation of $\hat{G}_{\mathbb{C}}$ can be constructed in the tensor product form $\rho_{\lambda}^{\hat{G}_{\mathbb{C}}}\otimes \bar{\rho}_{\lambda^{\prime}}^{\hat{G}_{\mathbb{C}}}$.
Due to the fact that finite-dimensional representations of the universal cover and of the complexification of a semi-simple Lie group are isomorphic, one can express:

$$
\rho_{\lambda}^{\hat{G}_{\mathbb{C}}} \otimes \bar{\rho}_{\lambda^{\prime}}^{\hat{G}_{\mathbb{C}}} := \rho_{(\lambda, \lambda^{\prime})}^{\hat{G}} := \rho_{(\lambda, \lambda^{\prime})}^{G} \tag{27}
$$

As universal covering groups have more representations than the groups they cover, this mapping is not an isomorphism for all $\lambda$. To illustrate this, consider the example of the Lorentz group $SO(1,3)$.

The universal cover of $\mathrm{SO}(1,3)$ is $\mathrm{SL}(2,\mathbb{C})$, and the maximal compact subgroup of $\mathrm{SL}(2,\mathbb{C})$ is $\mathrm{SU}(2)$. The irreducible representations of $\mathrm{SU}(2)$ are indexed by $l\in \mathbb{N} / 2$ and correspond to the Wigner D matrices $D^{l}(g), g\in \mathrm{SU}(2)$. The $\mathrm{SU}(2)$ group elements are parameterized by Euler angles that can be analytically continued to $\mathrm{SL}(2,\mathbb{C})$ as follows:

$$
\alpha = \phi + i \kappa, \quad \beta = \theta + i \epsilon, \quad \gamma = \psi + i \xi
$$

$$
\phi \in [0, 2\pi), \quad \theta \in [0, \pi], \quad \psi \in [0, 2\pi), \quad \kappa, \epsilon, \xi \in \mathbb{R} \tag{28}
$$

This enables the construction of the fundamental representation of $\mathrm{SL}(2,\mathbb{C})$, resulting in representations of the Lorentz group corresponding to the product of Wigner D matrices:

$$
D^{k/2}(\alpha, \beta, \gamma) \otimes \bar{D}^{l/2}(\alpha, \beta, \gamma) \tag{29}
$$

The irreducible representations of $\mathrm{SO}(1,3)$ are indexed by a pair of integers $(l,k)$ corresponding to the associated $SU(2)$ representations in the tensor product.

# A.1.7 Gelfand-Tsetlin Basis

The enumeration of finite-dimensional representations of complex Lie groups is well understood.
However, the derivation of explicit matrix representations in specific bases and the tensor product decomposition of representations within these bases remain challenging and subject to ongoing research. The Gelfand-Tsetlin basis provides an explicit structure for irreducible representations, initially designed for the $SU(N)$ group Gelfand and Tsetlin (1950); Alex et al. (2011), and subsequently generalized to encompass other classical Lie groups Molev (1999).

Central to the concept of GT patterns is the application of branching rules, whereby representations of a larger group are constructed by enumerating induced representations on its subgroups. Consider, for instance, the $SU(N)$ case, where irreducible representations are characterized by the highest weight vector $\lambda$, a vector of length $N$ terminated by a zero, $\lambda = (\lambda_{1},\dots,\lambda_{N - 1},0)$. This highest weight vector may spawn several induced representations of the subgroup $SU(N - 1)$, each described by a vector of length $N - 1$. This process can be iteratively executed in a descending chain:

$$
SU(N) \rightarrow SU(N - 1) \rightarrow \dots \rightarrow SU(2) \rightarrow U(1) \tag{30}
$$

This chain commences with the group of interest and culminates with the smallest subgroup.
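To make this enumeration concrete, the following sketch counts the admissible triangular arrays for $SU(3)$, assuming the standard interleaving (betweenness) condition between adjacent rows; the count reproduces the dimension of the corresponding irreducible representation:

```python
# Counting admissible GT patterns for SU(3). Adjacent rows must interleave:
# lam[k][i] >= lam[k-1][i] >= lam[k][i+1]. The number of admissible arrays
# with a fixed top row equals the dimension of the irrep it labels.

def count_gt_patterns_su3(top):
    """Number of admissible GT patterns with top row (l1, l2, l3), l1 >= l2 >= l3."""
    l1, l2, l3 = top
    count = 0
    # middle row (m1, m2) interleaves the top row: l1 >= m1 >= l2 >= m2 >= l3
    for m1 in range(l2, l1 + 1):
        for m2 in range(l3, l2 + 1):
            # bottom entry m interleaves the middle row: m1 >= m >= m2
            count += m1 - m2 + 1
    return count

assert count_gt_patterns_su3((0, 0, 0)) == 1   # trivial representation
assert count_gt_patterns_su3((1, 0, 0)) == 3   # fundamental representation
assert count_gt_patterns_su3((2, 1, 0)) == 8   # adjoint representation
```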
From this branching chain, triangular arrays known as GT patterns can be constructed, with each row corresponding to a vector of a representation ranging from $SU(N)$ down to $U(1)$:

$$
\left( \begin{array}{ccccc} \lambda_{N,1} & \lambda_{N,2} & \dots & \lambda_{N,N-1} & 0 \\ \lambda_{N-1,1} & \lambda_{N-1,2} & \dots & \lambda_{N-1,N-1} & \\ \vdots & \vdots & \ddots & & \\ \lambda_{2,1} & \lambda_{2,2} & & & \\ \lambda_{1,1} & & & & \end{array} \right) \tag{31}
$$

Adjacent rows within these patterns must adhere to a so-called "snake rule" to form an admissible GT pattern:

$$
\lambda_{k,1} \geq \lambda_{k-1,1} \geq \lambda_{k,2} \geq \lambda_{k-1,2} \geq \dots \geq \lambda_{k,k-1} \geq \lambda_{k-1,k-1} \geq \lambda_{k,k} \tag{32}
$$

The dimension of a specific irreducible representation corresponds to the count of such admissible arrays, with the top row representing the highest weight vector. Basis states sharing the same highest weight vector are related through ladder operators. One key attribute of GT patterns is that the action of ladder operators on a GT pattern is known analytically, enabling the construction of the matrix representation of every representation using only the highest weight representation. Furthermore, constructing the highest weight matrix representation amounts to solving a linear system when the ladder operators are known analytically. Extending this methodology to other classical Lie groups (such as $\mathrm{O}(n)$, $\mathrm{Sp}(n)$, $\mathrm{SL}(n)$) presents challenges, as the ladder operators' entries become increasingly complex and recursive.

# A.2 Example of one particle basis for $\mathbf{O}(3)$ and $\mathrm{SO}(1,3)$ groups

In the paper, the one-particle basis $\phi$ is left abstract, as it depends on the specific type of input point cloud and the group under consideration. We now describe two concrete examples, one involving isometry equivariance, the other Lorentz equivariance.
# A.2.1 Isometry equivariance

In the first example we consider a point cloud comprised of atoms, for example representing a molecule. Here, the point states are denoted by $x_{i} = (r_{i},Z_{i})\in \mathbb{R}^{3}\times \mathbb{Z}$, where $r_i$ corresponds to the position in 3D and $Z_{i}$, its atomic number, is a categorical variable. The one-particle basis must form a complete basis for $\mathbb{R}^3\times \mathbb{Z}$ which is both rotationally equivariant and translationally invariant. The translation is typically handled by referring to the atom's state in relation to a central value or another point. For simplicity, we can assume that $r_i$ is the particle's position relative to the point cloud's centre of mass, ensuring translational invariance.

The space $\mathbb{R}^3$ can be represented as the product $\mathbb{R}\times S^2$, with $S^2$ being the sphere. As such, we choose a basis for $\mathbb{R}$ such as Bessel functions, Chebyshev polynomials or a Fourier basis, denoted $R_{n}$. For $S^2$, we utilise the spherical harmonics $Y_{lm}$, which form a basis of functions on the sphere. Consequently, the one-particle basis can be expanded as:

$$
\phi_{nlm}\left(x_{i}\right) = R_{n}\left(r_{i}\right) Y_{lm}\left(\hat{r}_{i}\right) f\left(Z_{i}\right) \tag{33}
$$

where $r_i$ corresponds to the norm of $\pmb{r}_i$ and $\hat{r}_i = \frac{\pmb{r}_i}{r_i}$.

# A.2.2 Lorentz equivariance

The case of the Lorentz group is more complex than the $O(3)$ case. Consider a point cloud made up of elementary particles, represented by their 4-momenta $x_{i} = (E_{i},\pmb{p}_{i}) \in \mathbb{R}^{4}$. Although the 4-momentum is a vector, the natural metric in this space is not the Euclidean metric but rather a pseudo-metric known as the Minkowski metric. This pseudo-metric encapsulates the unique role that time plays in special relativity.
In natural units, the pseudo-norm $\eta$ of a 4-vector $u = (u_0,u_1,u_2,u_3)$ is defined as

$$
\eta(u, u) = u_{0}^{2} - u_{1}^{2} - u_{2}^{2} - u_{3}^{2} \tag{34}
$$

To construct the one-particle basis, it is necessary to expand a four-vector in terms of the basis of representations of the Lorentz group. As outlined in Appendix A.1.6, irreducible representations of the Lorentz group are characterised by a tuple of integers $(l,k)$. The 4-vectors correspond to the fundamental irreducible representation, namely the $(1,1)$ representation. Analogous structures to the spherical harmonics can be obtained by examining the symmetric powers of the irreducible representations. These are analogous to harmonic polynomials on Minkowski space, much as spherical harmonics are harmonic polynomials on the sphere $S^2$. We thus define the Minkowski spherical harmonics as the following symmetric tensor product,

$$
Y_{lm}\left(\mathbf{p}_{i}\right) = \operatorname{Sym}^{l} \mathbf{p}_{i} \tag{35}
$$

These spherical harmonics form a harmonic polynomial basis on Minkowski space. Therefore, one can create the one-particle basis in a manner akin to the $O(3)$ case, as follows:

$$
\phi_{nlm}\left(\mathbf{p}_{i}\right) = R_{n}\left(p_{i}\right) Y_{lm}\left(\hat{p}_{i}\right) \tag{36}
$$

where $p_i = \eta(\pmb{p}_i, \pmb{p}_i)$ and $\hat{p}_i = \frac{\pmb{p}_i}{p_i + \epsilon}$, with $\epsilon$ a small constant that prevents divergence when $p_i$ is zero, which can occur since the Minkowski norm is not positive definite.

# A.3 Extended Related Work

Convolutional Neural Networks (CNNs), which are translation equivariant, initiated the utilization of data symmetry in machine learning architectures. Over time, CNNs have been extended to include other symmetries as well.
Central to all these generalizations is the group averaging operation,

$$
\operatorname{Avg}(f)(x) = \int_{g \in G} f(g \cdot x)\, dg, \tag{37}
$$

where $x$ denotes the input signal or feature, $f$ is the convolution kernel, $G$ represents the group of interest, and $dg$ is an invariant measure on $G$. This transformation is essential, as it converts any convolution into a group invariant convolution. The feasibility of this approach largely depends on the computational simplicity of the integral. The most straightforward instance occurs for finite groups, as in G-convolutions (Cohen and Welling, 2016b), where the integral simplifies to a sum,

$$
\operatorname{Avg}(f)(x) = \sum_{g \in G} f(g \cdot x). \tag{38}
$$

In the context of $G$ being a compact group, the invariant measure $dg$ is unique and referred to as the Haar measure of the group, allowing the integral to be computed numerically, e.g., on a grid. Steerable CNNs (Cohen and Welling, 2016a) and Spherical CNNs (Cohen et al., 2018) extended CNNs to $O(2)$ and $O(3)$ equivariance, respectively, and LieConv (Finzi et al., 2020) to more general Lie groups. A general theory for convolution on any compact group and symmetric space was developed by Kondor and Trivedi (2018), and further extended to equivariant convolutions on Riemannian manifolds by Weiler et al. (2021). However, this approach has several limitations:

- The direct computation of the integral can be numerically unstable and inefficient, even for relatively small groups like $O(3)$.
- In the case of non-compact groups, a unique invariant measure is absent, and the integral diverges.
- Across these methods, the convolution kernel $f$ is usually constrained to a two-body operator.

In the case of compact groups, the integral over the group may be calculated by alternative means.
There exists a linear operator, called the Clebsch-Gordan operator $\mathcal{C}$, such that

$$
\operatorname{Avg}(f)(x) = \mathcal{C}(f)(x) \tag{39}
$$

Therefore, the complex integral over the group becomes a linear operation. The form of this operator depends on the basis in which $f$ is expanded. In the case of $G = O(3)$, if this basis is carefully chosen, the entries of this operator are known analytically. This approach is numerically stable and more efficient, and was taken by numerous works, including ACE (Dusson et al., 2022), Cormorant (Anderson et al., 2019), NequIP (Batzner et al., 2022), and MACE (Batatia et al., 2022b). The central aim of our work is to show that this approach can also be generalized to all reductive Lie groups, even non-compact ones, and to provide tools to do so.

# A.4 Proof of (7)

This statement follows closely the arguments by Dusson et al. (2022); Drautz (2020) and others.

$$
\begin{aligned}
\sum_{j_1, \dots, j_n} \sum_{\boldsymbol{k}} w_{\boldsymbol{k}} \prod_{s} \phi_{k_s}\left(x_{j_s}\right) &= \sum_{\boldsymbol{k}} w_{\boldsymbol{k}} \sum_{j_1, \dots, j_n} \prod_{s} \phi_{k_s}\left(x_{j_s}\right) \\
&= \sum_{\boldsymbol{k}} w_{\boldsymbol{k}} \prod_{s=1}^{n} \sum_{j} \phi_{k_s}(x_j) \\
&= \sum_{\boldsymbol{k}} w_{\boldsymbol{k}} \prod_{s=1}^{n} A_{k_s} \\
&= \sum_{\boldsymbol{k}} w_{\boldsymbol{k}} \boldsymbol{A}_{\boldsymbol{k}}.
\end{aligned}
$$

# A.5 Custom notation and indexing

We briefly contrast our notation for Clebsch-Gordan coefficients (10) with the standard notation.
By means of example, consider the group $SO(3)$, in which case the Clebsch-Gordan equations are written as

$$
\sum_{m_1^{\prime} m_2^{\prime}} C_{l_1 m_1^{\prime} l_2 m_2^{\prime}}^{LM} \rho_{m_1^{\prime} m_1}^{l_1}(g)\, \rho_{m_2^{\prime} m_2}^{l_2}(g) = \sum_{M^{\prime}} \rho_{M M^{\prime}}^{L}(g)\, C_{l_1 m_1 l_2 m_2}^{L M^{\prime}}. \tag{40}
$$

In this setting, our index $\alpha$ simply enumerates all possible such coefficients. One can often assign a natural meaning to this index; e.g., for the group $SO(3)$ it is given by the pair of angular quantum numbers $(l_1,l_2)$. Specifically, in this case we obtain

$$
C_{l_1 m_1 l_2 m_2}^{\boldsymbol{\alpha}, LM} = \left\{ \begin{array}{ll} C_{l_1 m_1 l_2 m_2}^{LM}, & \text{if } \boldsymbol{\alpha} = (l_1, l_2), \\ 0, & \text{otherwise}, \end{array} \right. \tag{41}
$$

where $C_{l_1m_1l_2m_2}^{LM}$ are the Clebsch-Gordan coefficients in the classical notation. Thus, the additional index $\alpha$ is not really required in the case of $SO(3)$, nor for our other main example, $SO(1,3)$. Our notation is still useful to organize the computations of equivariant models, especially when additional channels are present, which is usually the case. Moreover, it allows for easy generalization to other groups where such a simple identification is not possible (Steinberg, 1961).
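The sparsity pattern (41) can be mirrored directly in a data structure. A minimal sketch (our own illustration, not the lie-nn API), storing the exactly known $\frac{1}{2} \otimes \frac{1}{2} \to 0 \oplus 1$ coefficients under the key $\boldsymbol{\alpha} = (\frac{1}{2}, \frac{1}{2})$ and returning zero for every other $\boldsymbol{\alpha}$:

```python
# Sparse storage of Clebsch-Gordan coefficients per coupling alpha = (l1, l2),
# using the standard SU(2) values for 1/2 x 1/2 -> 0 + 1.
from math import sqrt
from fractions import Fraction

H = Fraction(1, 2)  # spin one-half

# C[(l1, l2)][(L, M)][(m1, m2)] = classical C^{LM}_{l1 m1 l2 m2}
C = {
    (H, H): {
        (1, 1):  {(H, H): 1.0},
        (1, 0):  {(H, -H): 1 / sqrt(2), (-H, H): 1 / sqrt(2)},
        (1, -1): {(-H, -H): 1.0},
        (0, 0):  {(H, -H): 1 / sqrt(2), (-H, H): -1 / sqrt(2)},
    }
}

def cg(alpha, L, M, m1, m2):
    """C^{alpha, LM}_{l1 m1 l2 m2}: zero unless alpha matches, cf. (41)."""
    return C.get(alpha, {}).get((L, M), {}).get((m1, m2), 0.0)

# The coefficients form an orthogonal change of basis on the 4-dim product space
basis_LM = [(1, 1), (1, 0), (1, -1), (0, 0)]
basis_mm = [(H, H), (H, -H), (-H, H), (-H, -H)]
U = [[cg((H, H), L, M, m1, m2) for (m1, m2) in basis_mm] for (L, M) in basis_LM]
for i in range(4):
    for j in range(4):
        dot = sum(U[i][k] * U[j][k] for k in range(4))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12
```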
# A.6 Equivariance of G-cluster expansion

The equivariance of the G-cluster expansion is easily verified by applying a transformation $g$ to the input,

$$
\begin{aligned}
\boldsymbol{B}_{\boldsymbol{\alpha}}^{K} \circ g &= \sum_{\boldsymbol{k}} \mathcal{C}_{\boldsymbol{k}}^{\boldsymbol{\alpha}, K} \boldsymbol{A}_{\boldsymbol{k}} \circ g \\
&= \sum_{\boldsymbol{k}} \mathcal{C}_{\boldsymbol{k}}^{\boldsymbol{\alpha}, K} \left(\sum_{\boldsymbol{k}^{\prime}} \prod_{t} \rho_{k_t, k_t^{\prime}}(g) \boldsymbol{A}_{\boldsymbol{k}^{\prime}}\right) \\
&= \sum_{\boldsymbol{k}^{\prime}} \left(\sum_{\boldsymbol{k}} \mathcal{C}_{\boldsymbol{k}}^{\boldsymbol{\alpha}, K} \prod_{t} \rho_{k_t, k_t^{\prime}}(g)\right) \boldsymbol{A}_{\boldsymbol{k}^{\prime}} \\
&= \sum_{\boldsymbol{k}^{\prime}} \left(\sum_{K^{\prime}} \rho_{K K^{\prime}}(g)\, \mathcal{C}_{\boldsymbol{k}^{\prime}}^{\boldsymbol{\alpha}, K^{\prime}}\right) \boldsymbol{A}_{\boldsymbol{k}^{\prime}} \\
&= \sum_{K^{\prime}} \rho_{K K^{\prime}}(g) \boldsymbol{B}_{\boldsymbol{\alpha}}^{K^{\prime}}.
\end{aligned} \tag{42}
$$

# A.7 Completeness of the basis and Universality of G-MACE

We explain in which sense the basis $B_{\alpha}^{K}$ is a complete basis, and briefly sketch how to prove this claim. The argument is contained almost entirely in (Dusson et al., 2022) and only requires a single modification, namely Step 3 below, using a classical argument from representation theory. We will therefore give only a very brief summary and explain the necessary change.

We start with an arbitrary equivariant property $\Phi^V$ embedded in a space $V$ carrying a representation; the actual target property $\Phi$ is then given as a linear mapping from $V$ to $Z$. For technical reasons, we require that only finitely many entries $\Phi_K^V$ may be non-zero, but this is consistent with common usage.
For example, if $G = O(3)$ and $\Phi$ is a scalar, then $\Phi_0^V = \Phi$, while all other $\Phi_{LM}^V \equiv 0$. If $\Phi$ is a covariant vector, then $\Phi_{LM}^V$ is non-zero if and only if $L = 1$; and so forth. For other groups, the labeling may differ, but the principle remains the same.

1. Convergence of the cluster expansion. The first step in our parameterisation is to approximate $\Phi^V$ in terms of a truncated many-body expansion (4). How fast this expansion converges is highly application-dependent. Rigorous results in this direction in the context of learning interatomic potentials can be found in (Bachmayr et al., 2021; Thomas et al., 2022). A generic statement can be made if the number of input particles is limited by an upper bound, in which case the expansion becomes exact for a finite $N$. This case leads to the uniform density result stated in Theorem 4.1. We adopt this setting for the time being and return to the pointwise convergence setting below.

In the uniform convergence setting we also require that the domain $\Omega$ is compact.

Throughout the remainder of this section we may therefore assume that an $N$ can be chosen, as well as smooth components $\varphi^{(n)}$, such that the resulting model $\Phi^{V,N}$ approximates $\Phi^V$ to within a target accuracy $\epsilon$,

$$
\left| \Phi_{K}^{V, N}(\pmb{x}) - \Phi_{K}^{V}(\pmb{x}) \right| \leq \epsilon \qquad \forall \pmb{x} \in \mathrm{msets}(\Omega).
$$

2. The density of the embedding. As already stated in the main text, if the components $\varphi_K^{(n)}$ are smooth, and the embedding $\{\phi_k\}_k$ is dense in the space of one-particle functions (5), then it follows that the $\varphi_K^{(n)}$ can be expanded in terms of the tensor product basis $\phi_{\boldsymbol{k}} \coloneqq \bigotimes_{s=1}^n \phi_{k_s}$ to within arbitrary accuracy.
The precise statement is the following standard result of approximation theory: if $\operatorname{span}\{\phi_k\}_k$ is dense in $C(\Omega)$, then the span of the tensor products $\phi_{\boldsymbol{k}} = \bigotimes_{s=1}^n \phi_{k_s}$ is dense in $C(\Omega^n)$. That is, for any $\epsilon > 0$, there exist approximants $p_K^{(n)}$ such that

$$
\left\| \varphi_{K}^{(n)} - p_{K}^{(n)} \right\|_{\infty} \leq \epsilon.
$$

3. The density of the symmetrized basis. The next and crucial step is to show that, if the $\varphi_K^{(n)}$ are equivariant, then the $p_K^{(n)}$ may be chosen equivariant as well without loss of accuracy. If the group $G$ is compact, then the representations $\rho$ can be chosen unitary (Broecker, 1985). In that case, the argument from (Dusson et al., 2022) can be used almost verbatim: let

$$
\bar{p}^{(n)}(\boldsymbol{x}) := \int_{G} \rho(g)^{-1} p^{(n)}(g\boldsymbol{x})\, H(dg),
$$

where $H$ is the normalized Haar measure. Then $\bar{p}^{(n)}$ is equivariant by construction and

$$
\begin{aligned}
\left| \varphi^{(n)}(\boldsymbol{x}) - \bar{p}^{(n)}(\boldsymbol{x}) \right| &= \left| \int_{G} \rho(g)^{-1} \left(\varphi^{(n)}(g\boldsymbol{x}) - p^{(n)}(g\boldsymbol{x})\right) H(dg) \right| \\
&\leq \int_{G} \left| \varphi^{(n)}(g\boldsymbol{x}) - p^{(n)}(g\boldsymbol{x}) \right| H(dg) \\
&\leq \int_{G} \| \varphi^{(n)} - p^{(n)} \|_{\infty}\, H(dg) \leq \epsilon.
\end{aligned}
$$

If the group is not compact, then one can apply "Weyl's Unitary Trick" (see (Bourbaki, 1989), Ch. 3): first, one complexifies the group (if it is real) and then constructs a maximal compact subgroup $K_{\mathbb{C}}$ of the complexification. This new group $K_{\mathbb{C}}$ has the same representations as $G$ and, because it is compact, those representations may again be chosen unitary. Therefore, symmetrizing $p^{(n)}$ with respect to $K_{\mathbb{C}}$ results in an approximant that is not only equivariant w.r.t.
$K_{\mathbb{C}}$ but also equivariant w.r.t. $G$.

4. The density of the basis $B_{\alpha}^{K}$. As the last step, one can readily observe that the symmetrization and cluster expansion steps can be exchanged. That is, first symmetrizing and then employing the steps (7) results in the same model. Letting $\epsilon \rightarrow 0$ in the foregoing argument while fixing the number of particles $\# x$ results in all errors vanishing. Note that this will in particular require taking $N \to \infty$.

5. Pointwise convergence. To obtain density in the sense of pointwise convergence, we first introduce the canonical cluster expansion without self-interacting terms,

$$
\Phi_{K}(\boldsymbol{x}) = \sum_{n = 0}^{\infty} \sum_{j_1 < \dots < j_n} v_{K}^{(n)}(x_{j_1}, \ldots, x_{j_n}).
$$

The difference here is that the summation is only over genuine sub-clusters. Because of this restriction, the series is finite for all multi-set inputs $\pmb{x}$. In other words, it converges in the pointwise sense.

One can easily see that the $v_{K}^{(n)}$ can be chosen (explicitly) to make this expansion exact. After truncating the expansion at finite $n \leq N$ and then expanding the potentials $v_{K}^{(n)}$, one can exactly transform the canonical cluster expansion into the self-interacting cluster expansion. This procedure is detailed in (Dusson et al., 2022; Drautz, 2020).

The arguments up to this point establish the claimed universality for the linear ACE model. The corresponding universality of the TRACE model follows immediately from (Darby et al., 2022). Since a single layer of the MACE model is a TRACE model, this completes the proof of Theorem 4.1.

# A.8 Product of groups

Let $G_{1}$ and $G_{2}$ be two reductive Lie groups (or finite groups). Let $A_{1}$ and $A_{2}$ be the structure constants of $G_{1}$ and $G_{2}$, and let $(d\rho_{1},\rho_{1})$ and $(d\rho_{2},\rho_{2})$ be their continuous and discrete generators.
One can define a representation of the direct product group $G_{1}\times G_{2}$ as

$$
\rho := \left(A_1 \mid A_2,\; n_1 n_2,\; \left\{ d\rho_1\left(X_i\right) \otimes I_2 + I_1 \otimes d\rho_2\left(\tilde{X}_j\right) \right\}_{i,j},\; \left\{ \rho_1\left(h_1\right) \otimes \rho_2\left(h_2\right) \right\}_{h_1 \in \mathbf{H}_1,\, h_2 \in \mathbf{H}_2}\right) \tag{43}
$$

The following essential property holds: if $\rho_{1}$ and $\rho_{2}$ are irreducible representations of $G_{1}$ and $G_{2}$, then $\rho_{1} \otimes \rho_{2}$ is an irreducible representation of $G_{1} \times G_{2}$. Moreover, for reductive Lie groups, all the irreps of $G_{1} \times G_{2}$ are of this form. Therefore, one can construct all the irreps of $G_{1} \times G_{2}$ this way. This is of particular interest in the case of equivariant message passing networks on point clouds, where the group of interest is $G \times S_{n}$.

Let us give a non-trivial application of lie-nn: computing invariants for the product group $O(3) \times S_3$. We consider a set of three vectors,

$$
a = \left[ \begin{array}{c} a_x \\ a_y \\ a_z \end{array} \right] \quad b = \left[ \begin{array}{c} b_x \\ b_y \\ b_z \end{array} \right] \quad c = \left[ \begin{array}{c} c_x \\ c_y \\ c_z \end{array} \right]
$$

A given vector belongs to the product representation of the $l = 1$ vector of the $O(3)$ group and the natural representation of $S_{3}$. In lie-nn, we can define the product representation as follows:

```python
rep = lie.group_product(lie.finite.Sn_natural(3), lie.irreps.O3(l=1, p=-1))
```

If one wants to know the unique permutation-invariant vector that one can construct from the three vectors $a$, $b$ and $c$, one can use the following code:

```python
qs = lie.infer_change_of_basis(
    lie.group_product(lie.finite.Sn_trivial(3), lie.irreps.O3(l=1, p=-1)),
    rep,
)
```

As expected, the output is the sum of the three vectors, $a + b + c$, corresponding to the only permutation-invariant vector that can be constructed from the three vectors.

Consider now the case where we want to extract a permutation-invariant scalar from the set of products of two vectors. In this case, we can use the following code:

```python
qs = lie.infer_change_of_basis(
    lie.group_product(lie.finite.Sn_trivial(3), lie.irreps.O3(l=0, p=1)),
    lie.tensor_product(rep, rep),
)
```

In this code, we first construct the tensor product representation and then seek the permutation-invariant scalar in it. The result is two distinct scalars,

$$
e_1 = a_x^2 + a_y^2 + a_z^2 + b_x^2 + b_y^2 + b_z^2 + c_x^2 + c_y^2 + c_z^2
$$

$$
e_2 = a_x b_x + a_x c_x + a_y b_y + a_y c_y + a_z b_z + a_z c_z + b_x c_x + b_y c_y + b_z c_z
$$

Finally, we consider a case that would be very hard to do by hand: extracting a permutation-invariant $l = 2$ vector from the tensor product of the original set of three vectors.

```python
qs = lie.infer_change_of_basis(
    lie.group_product(lie.finite.Sn_trivial(3), lie.irreps.O3(l=2, p=1)),
    lie.tensor_product(rep, rep),
)
```

The computation tells us that there exist two such vectors,

$$
v_1 = \left[ \begin{array}{c}
-\sqrt{2} a_x a_z - \sqrt{2} b_x b_z - \sqrt{2} c_x c_z \\
-\sqrt{2} a_x a_y - \sqrt{2} b_x b_y - \sqrt{2} c_x c_y \\
\frac{\sqrt{6} a_x^2}{6} - \frac{\sqrt{6} a_y^2}{3} + \frac{\sqrt{6} a_z^2}{6} + \frac{\sqrt{6} b_x^2}{6} - \frac{\sqrt{6} b_y^2}{3} + \frac{\sqrt{6} b_z^2}{6} + \frac{\sqrt{6} c_x^2}{6} - \frac{\sqrt{6} c_y^2}{3} + \frac{\sqrt{6} c_z^2}{6} \\
-\sqrt{2} a_y a_z - \sqrt{2} b_y b_z - \sqrt{2} c_y c_z \\
\frac{\sqrt{2} a_x^2}{2} - \frac{\sqrt{2} a_z^2}{2} + \frac{\sqrt{2} b_x^2}{2} - \frac{\sqrt{2} b_z^2}{2} + \frac{\sqrt{2} c_x^2}{2} - \frac{\sqrt{2} c_z^2}{2}
\end{array} \right]
$$

$$
v_2 = \left[ \begin{array}{c}
-a_x b_z - a_x c_z - a_z b_x - a_z c_x - b_x c_z - b_z c_x \\
-a_x b_y - a_x c_y - a_y b_x - a_y c_x - b_x c_y - b_y c_x \\
\frac{\sqrt{3} a_x b_x}{3} + \frac{\sqrt{3} a_x c_x}{3} - \frac{2\sqrt{3} a_y b_y}{3} - \frac{2\sqrt{3} a_y c_y}{3} + \frac{\sqrt{3} a_z b_z}{3} + \frac{\sqrt{3} a_z c_z}{3} + \frac{\sqrt{3} b_x c_x}{3} - \frac{2\sqrt{3} b_y c_y}{3} + \frac{\sqrt{3} b_z c_z}{3} \\
-a_y b_z - a_y c_z - a_z b_y - a_z c_y - b_y c_z - b_z c_y \\
a_x b_x + a_x c_x - a_z b_z - a_z c_z + b_x c_x - b_z c_z
\end{array} \right]
$$

# A.9 Symmetric Tensor products

The permutation group is an important concept in the context of tensor products.
It can be useful to focus on a subset of the full tensor product space that exhibits certain permutation equivariance. For example, the spherical harmonics are defined as the permutation-invariant part of a tensor product.

The symmetric tensor product can be thought of as a change of basis, or projector, from the tensor product to the symmetric part of the tensor product. In the case of a tensor product of correlation order four we have

$$
S_{\nu} = B_{\nu; i j k l} x_{i} y_{j} z_{k} w_{l} \tag{44}
$$

where $B$ is the change of basis that satisfies

$$
B_{\nu; i j k l} = B_{\nu; \sigma(i j k l)} \quad \forall \sigma \in S_{4} \tag{45}
$$

We propose in lie-nn a new algorithm to calculate $B$ . The symmetric tensor product is calculated using a tree structure, starting at the leaves and progressing towards the trunk. The leaves are the bases of the individual indices, which are combined and constrained at each step to impose symmetry.

# A.10 Computing the irreps from input representations

For some groups, the computation of the generators $X$ can become a very involved task. However, in most applications the data itself is already given in the form of a representation. One approach, proposed by Finzi et al. (2021), is to work not in the space of irreps but in the space of polynomials of the input representation. This approach has the advantage of requiring little prior knowledge of the group. However, it is also much less efficient than using irreps. An alternative is to consider polynomials of the input representation, which are reducible, and then compute the block diagonalisation that projects down to the irrep subspaces. One can then work directly with polynomials in this subspace and compute Clebsch-Gordan coefficients numerically. We provide routines in lie-nn to carry out these operations from any given input representation.
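The symmetry constraint of Eq. (45) can also be imposed numerically by averaging an arbitrary change-of-basis tensor over all permutations of $S_4$ . The following is a minimal NumPy sketch of this brute-force alternative to the tree-based algorithm; names and shapes are illustrative, not the lie-nn API:

```python
from itertools import permutations

import numpy as np

def symmetrize(B):
    """Project B[v, i, j, k, l] onto the S_4-symmetric part of its last four indices."""
    perms = list(permutations(range(1, 5)))  # permute indices i, j, k, l; axis 0 is v
    return sum(np.transpose(B, (0, *p)) for p in perms) / len(perms)

rng = np.random.default_rng(0)
B = symmetrize(rng.standard_normal((2, 3, 3, 3, 3)))

# Eq. (45): B is invariant under every permutation sigma of (i, j, k, l) ...
for p in permutations(range(1, 5)):
    assert np.allclose(B, np.transpose(B, (0, *p)))

# ... so the contraction S_v = B[v; ijkl] x_i y_j z_k w_l is symmetric in x, y, z, w.
x, y, z, w = rng.standard_normal((4, 3))
S1 = np.einsum('vijkl,i,j,k,l->v', B, x, y, z, w)
S2 = np.einsum('vijkl,i,j,k,l->v', B, w, z, y, x)
assert np.allclose(S1, S2)
```

This averaging scales as $|S_4| = 24$ transposes of the full tensor, which is why a structured tree construction is preferable for higher correlation orders.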
# A.11 Details of numerical experiments

# A.11.1 Jet Tagging

Dataset The dataset (Butter et al., 2019) was generated using Pythia, Delphes, and FastJet (with cuts on the jet kinematics of $\Delta \eta = 2$ , $R = 0.8$ ) to simulate the response of the ATLAS detector at the Large Hadron Collider (LHC). The dataset is released under the "Creative Commons Attribution 4.0" license. The entire dataset contains 2 million jets with balanced 60/20/20 splits for training, validation, and testing.

Model The model uses 3 layers of the $G$ -MACE architecture to generate the Lorentz-group-equivariant representation of each jet. For the one-particle basis, we use a product of radial features on the Minkowski distances and $SO(1,3)$ spherical harmonics. The radial features are computed by passing a logarithmic radial basis, as in (Bogatskiy et al., 2022), into a [64, 64, 64, 512] MLP using SiLU nonlinearities on the outputs of the hidden layers. The internal representations used are $(0,0)$ and $(1,1)$ . We use 72 channels for each representation. For the embedding and readout, we use architectures similar to LorentzNet.

Training Models were trained on an NVIDIA A100 GPU in single-GPU training. Typical training time for the dataset is up to 72 hours. Models were trained with the AMSGrad variant of Adam, with default parameters $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , and $\epsilon = 10^{-8}$ . We used a learning rate of 0.0035 and a batch size of 64. The model was trained for 80 epochs, with 2 epochs of linear learning-rate warmup followed by a phase of cosine annealing LR scheduling.

# A.11.2 3D shape recognition

Dataset ModelNet10 (Wu et al., 2015) is a synthetic dataset of 3D object point clouds containing 4,899 pre-aligned shapes from 10 categories. The dataset is split into 3,991 $(80\%)$ shapes for training and 908 $(20\%)$ shapes for testing. We were unable to find a license.
Model The model uses a three-layer encoder architecture following that of PointNet++. The encoder successively maps the full point cloud into sub-point clouds of sizes [1024, 256, 128]. Each PointNet layer maps a point cloud of size $N^t$ to one of size $N^{t+1}$ . We compute the node features as the sum of the PointNet output and the MACE output,

$$
h^{(t + 1)} = \mathrm{PointNet}\left(xyz^{(t)}, h^{(t)}\right) + \mathrm{MACE}\left(xyz^{(t)}, h^{(t)}\right) \tag{46}
$$

Training Models were trained on an NVIDIA A100 GPU in single-GPU training. The typical training time for the dataset is up to 12 hours. Models were trained with the AMSGrad variant of Adam, with default parameters $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , and $\epsilon = 10^{-8}$ .

# A.12 Limitations and Future Work

The spectrum of potential applications of the present method is very large. In this paper, we focus on a subset of applications that have known benchmarks and baselines. A broader range of groups is implemented in the lie-nn library. Future work should focus on applying this architecture to tasks with domain-specific knowledge.

# A.13 Computational cost

# A.13.1 Clebsch-Gordan generation

Generation time The Clebsch-Gordan generation time depends on the size of the representations. In Figure 3, we plot the generation time as a function of the size of the representations for different groups. The generation time is found to range from a matter of milliseconds for smaller representations to a few seconds for larger ones. The generation of CGs constitutes a preprocessing step, separate from the model inference computations. Therefore, the results can be stored for subsequent use, ensuring that this phase does not affect the model's overall performance.
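Because the coefficients are generated once and reused across inference, a simple memoization layer captures the storage pattern described above. A minimal sketch, where `generate_cg` is a hypothetical stand-in for the actual generation routine (not the lie-nn API):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def generate_cg(rep1, rep2, rep3):
    """Stand-in for an expensive Clebsch-Gordan computation, keyed by the three irreps."""
    # A real implementation would solve the equivariance constraint for this triple;
    # here we only record that the expensive path actually ran.
    generate_cg.calls += 1
    return ("CG", rep1, rep2, rep3)

generate_cg.calls = 0
generate_cg("l=1", "l=1", "l=2")   # computed once ...
generate_cg("l=1", "l=1", "l=2")   # ... then served from the cache
assert generate_cg.calls == 1
```

Persisting the cache to disk (rather than memory) follows the same pattern, keyed on the representation triple.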
![](images/28e09fee4f64798a80b4d0c31153228d7214204c8c0e221a4bcb9ddb2696bdc8.jpg)
Figure 3: Generation time in seconds of Clebsch-Gordan coefficients as a function of the size of the representations for different groups. The representation size is computed as the product of the three representation sizes involved in the Clebsch-Gordan coefficient.

![](images/7a5a3620e2b09246f061a43c319485bc5eee07b3bc8908e9b66d72abf46e13c3.jpg)

![](images/4dc6d0e3a6c7da1a49788015eeb7209babf3846a612fcb743c71ac6859794bbb.jpg)

Sparsity One distinguishing feature of the Clebsch-Gordan coefficients is their sparsity. This sparsity comes from the selection rules, which are unique to each group. These selection rules also give rise to a sparsity structure that can be exploited to achieve the best efficiency. In Figure 4 we compare the sparsity percentage as a function of the size of the representations for different groups.

![](images/5902d8fa393cf1dec5d727604bdc29d06b789884c1ecd7013694a3101d65aba5.jpg)
Figure 4: Sparsity, measured as the percentage of non-zero entries of the Clebsch-Gordan coefficients, as a function of the size of the representations for different groups. The representation size is computed as the product of the three representation sizes involved in the Clebsch-Gordan coefficient.

![](images/ec411c2d7f6680d77031eed8b9f9fc77382ce782927ad9edd2db47fe9b027a99.jpg)

![](images/ee4542ea77308e85f625fe72bc7ee3b053944dab7f8848646e343ccac472f114.jpg)

# A.13.2 G-MACE computational cost

The computational bottleneck of traditional equivariant MPNNs is the equivariant tensor product on the edges of the graph used to form the message. In $G$ -MACE, the edge operation is a pooling operation used to compute the features $A_{k}$ . The correlations $A_{k}$ are computed through the tensor product of the product basis, an operation that is carried out on the nodes. In typical models, the correlations are the bottleneck.
Since the number of nodes is orders of magnitude smaller than the number of edges in most applications we envision, it is a significant computational advantage to organize the computation in this way. We return to reviewing the cost of the product basis below.

![](images/04c3673675c7359614e76635b872e0e7382fec2273b9a9b311ef91c17a2562aa.jpg)
Figure 5: Inference time for a single jet for a two-layer LorentzMACE model as a function of the correlation order in the product basis and the maximal angular resolution. Timings made on an NVIDIA A100 and averaged over 100 runs.

Obtaining the equivariant basis $B^K$ involves only linear operations. The Clebsch-Gordan coefficients are very sparse, with a well-defined structural sparsity, which can be easily exploited to construct highly efficient low-level code on both CPUs and GPUs.

Thus, we return to the product basis $A_{k}$ , which is normally the computational bottleneck. It has structural sparsity due to its symmetry under permutation of the $k$ -tuples and due to the structural sparsity of the Clebsch-Gordan coefficients. Despite this sparsity, it was shown in Kaliuzhnyi and Ortner (2022) that in theory the product basis can be evaluated recursively with $O(1)$ operations per feature; however, this requires effective use of sparse tensor formats, which is challenging to optimize for GPU architectures since a naive implementation relies on random memory access patterns. Instead, the current MACE code employs a similar recursive evaluation algorithm, described in Batatia et al. (2022a), which uses full tensor formats but avoids the random memory access. For the most common case of 3-correlation models, we estimate that this code reaches within a factor of 3-5 of the optimal performance of a hypothetical sparse tensor implementation.
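The dense-versus-sparse trade-off discussed above can be made concrete with a toy contraction: storing only the non-zero entries of a synthetic sparse tensor in coordinate (COO) form reproduces the dense result while touching a small fraction of the entries. A minimal NumPy sketch (an illustration of the idea, not the MACE kernels):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "Clebsch-Gordan-like" tensor C[i, j, k] that is ~95% zero,
# mimicking the sparsity enforced by selection rules.
C = rng.standard_normal((8, 8, 8)) * (rng.random((8, 8, 8)) < 0.05)

# Dense contraction: out[k] = sum_ij C[i, j, k] * x[i] * y[j]
x, y = rng.standard_normal((2, 8))
dense = np.einsum('ijk,i,j->k', C, x, y)

# Sparse contraction over the non-zero entries only (COO format).
idx = np.argwhere(C != 0)                      # (nnz, 3) list of index triples
vals = C[idx[:, 0], idx[:, 1], idx[:, 2]]      # matching non-zero values
sparse = np.zeros(8)
for (i, j, k), v in zip(idx, vals):
    sparse[k] += v * x[i] * y[j]

assert np.allclose(dense, sparse)
```

The Python loop stands in for the random memory access pattern mentioned above: on a GPU, those scattered reads and writes are precisely what makes the sparse path hard to optimize despite its lower operation count.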
\ No newline at end of file diff --git a/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/images.zip b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..d73a8c0574cb6cbdacc62f730f451175acef7a81 --- /dev/null +++ b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e3ede0bf417019070b4af1d407948bd76049bc7d454ec7dd9675138e5d7ce748 +size 665457 diff --git a/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/layout.json b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..763eb5415eb33cbe0a7fe9c7d44ce01cd7b64516 --- /dev/null +++ b/ageneralframeworkforequivariantneuralnetworksonreductiveliegroups/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a11a1577ac24ec9de51fda8993861d975a2180556988d0bf85c1d31626c2b303 +size 1154941 diff --git a/ageneralframeworkforrobustginvarianceingequivariantnetworks/82242b93-819a-46d6-b002-4a1eda647e4f_content_list.json b/ageneralframeworkforrobustginvarianceingequivariantnetworks/82242b93-819a-46d6-b002-4a1eda647e4f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3a1d160511f3c074eb4dbeacca59aafd7a040a37 --- /dev/null +++ b/ageneralframeworkforrobustginvarianceingequivariantnetworks/82242b93-819a-46d6-b002-4a1eda647e4f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:888c3b6e00ea96f13f1487455f4863f282daf3be40485369eaac9c05f0a3091c +size 145151 diff --git a/ageneralframeworkforrobustginvarianceingequivariantnetworks/82242b93-819a-46d6-b002-4a1eda647e4f_model.json b/ageneralframeworkforrobustginvarianceingequivariantnetworks/82242b93-819a-46d6-b002-4a1eda647e4f_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..acb8722604f58ee64f4d8daf9449bb8adffed853 --- /dev/null +++ b/ageneralframeworkforrobustginvarianceingequivariantnetworks/82242b93-819a-46d6-b002-4a1eda647e4f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c028c2b7b2d30bbe1560999ef4bd1f4cbc9c0adf5f14302f34541b15213b3d21 +size 172730 diff --git a/ageneralframeworkforrobustginvarianceingequivariantnetworks/82242b93-819a-46d6-b002-4a1eda647e4f_origin.pdf b/ageneralframeworkforrobustginvarianceingequivariantnetworks/82242b93-819a-46d6-b002-4a1eda647e4f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..16df9ac0443418e3cb16fc5944b104bc214a6db6 --- /dev/null +++ b/ageneralframeworkforrobustginvarianceingequivariantnetworks/82242b93-819a-46d6-b002-4a1eda647e4f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f086fb7008b9944fbfcbb9c3cf65c02f1af6e0d9db691163f86daccaaa6a9ce +size 3206118 diff --git a/ageneralframeworkforrobustginvarianceingequivariantnetworks/full.md b/ageneralframeworkforrobustginvarianceingequivariantnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..192ddf69be3c07ce58d70e273b461662418d379b --- /dev/null +++ b/ageneralframeworkforrobustginvarianceingequivariantnetworks/full.md @@ -0,0 +1,683 @@ +# A General Framework for Robust $G$ -Invariance in $G$ -Equivariant Networks + +Sophia Sanborn sanborn@ucsb.edu + +Nina Miolane +ninamiolane@ucsb.edu + +Department of Electrical and Computer Engineering +UC Santa Barbara + +# Abstract + +We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks ( $G$ -CNNs), which we call the $G$ -triple-correlation ( $G$ -TC) layer. The approach leverages the theory of the triple-correlation on groups, which is the unique, lowest-degree polynomial invariant map that is also complete. 
Many commonly used invariant maps—such as the max—are incomplete: they remove both group and signal structure. A complete invariant, by contrast, removes only the variation due to the actions of the group, while preserving all information about the structure of the signal. The completeness of the triple correlation endows the $G$ -TC layer with strong robustness, which can be observed in its resistance to invariance-based adversarial attacks. In addition, we observe that it yields measurable improvements in classification accuracy over standard Max $G$ -Pooling in $G$ -CNN architectures. We provide a general and efficient implementation of the method for any discretized group, which requires only a table defining the group's product structure. We demonstrate the benefits of this method for $G$ -CNNs defined on both commutative and non-commutative groups— $SO(2)$ , $O(2)$ , $SO(3)$ , and $O(3)$ (discretized as the cyclic $C8$ , dihedral $D16$ , chiral octahedral $O$ and full octahedral $O_h$ groups)—acting on $\mathbb{R}^2$ and $\mathbb{R}^3$ on both $G$ -MNIST and $G$ -ModelNet10 datasets.

# 1 Introduction

The pooling operation is central to the convolutional neural network (CNN). It was originally introduced in the first CNN architecture—Fukushima's 1980 Neocognitron [17]—and has remained a fixture of the model ever since. The Neocognitron was directly inspired by the canonical model of the visual cortex as a process of hierarchical feature extraction and local pooling [25, 1]. In both the neuroscience model and the CNN, pooling is intended to serve two purposes. First, it facilitates the local-to-global coarse-graining of structure in the input. Second, it facilitates invariance to local changes—resulting in network activations that remain similar under small perturbations of the input. In this way, CNNs construct hierarchical, multi-scale features that have increasingly large extent and increasing invariance.
The pooling operation in traditional CNNs, typically a local max or average, has remained largely unchanged over the last forty years. The variations that have been proposed in the literature [40, 56] mostly tackle its coarse-graining purpose, improve computational efficiency, or reduce overfitting, but do not seek to enhance its properties with respect to invariance. Both max and avg operations are reasonable choices to fulfill the goal of coarse-graining within CNNs and $G$ -CNNs. However, they are excessively imprecise and lossy with respect to the goal of constructing robust representations of objects that are invariant only to irrelevant visual changes.

![](images/a0009e5dc6a66e39500cf7c5d458abd2b548a3877916221d9c06abf47131b9e0.jpg)
Figure 1: Achieving Robust $G$ -Invariance in $G$ -CNNs with the $G$ -Triple-Correlation. The output of a $G$ -Convolutional layer is equivariant to the actions of $G$ on the domain of the signal. To identify signals that are equivalent up to group action, the layer can be followed by a $G$ -Invariant map that eliminates this equivariance. In $G$ -CNNs, Max $G$ -Pooling is commonly used for this purpose. Taking the maximum of the $G$ -Convolutional equivariant output is indeed invariant to the actions of the group. However, it is also lossy: many non-equivalent output vectors have the same maximum. Our method, the $G$ -Triple-Correlation, is the lowest-order polynomial invariant map that is complete [46]. As a complete invariant, it preserves all information about the signal structure, removing only the action of the group. Our approach thus provides a new foundation for achieving robust $G$ -Invariance in $G$ -CNNs.

Indeed, the max and avg operations are invariant to many natural image transformations such as translations and rotations, but also
This excessive invariance has been implicated in failure modes such as vulnerability to adversarial perturbations [20, 26], and a bias towards textures rather than objects [4]. To overcome these challenges and enable robust and selective invariant representation learning, there is a need for novel computational primitives that selectively parameterize invariant maps for natural transformations. + +Many of the transformations that occur in visual scenes are due to the actions of groups. The appreciation of this fact has led to the rise of group-equivariant convolutional networks (G-CNNs) [8] and the larger program of Geometric Deep Learning [6]. While this field has leveraged the mathematics of group theory to attain precise generalized group-equivariance in convolutional network layers, the pooling operation has yet to meet its group theoretic grounding. Standardly, invariance to a group $G$ is achieved with a simple generalization of max pooling: Max $G$ -Pooling [8]—see Fig. 1 (top-right). However, this approach inevitably suffers from the lossiness of the max operation. + +Here, we unburden the pooling operation of the dual duty of invariance and coarse-graining, by uncoupling these operations into two steps that can be performed with precision. We retain the standard max and avg pooling for coarse-graining, but introduce a new method for robust $G$ -invariance via the group-invariant triple correlation —see Fig. 1 (bottom-right). The group-invariant triple correlation is the lowest-order complete operator that can achieve exact invariance [32]. As such, we propose a general framework for robust $G$ -Invariance in $G$ -Equivariant Networks. We show the advantage of this approach over standard max $G$ -pooling in several $G$ -CNN architectures. Our extensive experiments demonstrate improved scores in classification accuracy in traditional benchmark datasets as well as improved adversarial robustness. 
# 2 Background

We first cover the fundamentals of group-equivariant neural networks—also known as $G$ -CNNs, or $G$ -Equivariant Networks—before introducing the framework for $G$ -Invariant Pooling.

# 2.1 Mathematical Prerequisites

The construction of $G$ -CNNs requires mathematical prerequisites from group theory, which we recall here. The interested reader can find details in [23].

Groups. A group $(G,\cdot)$ is a set $G$ with a binary operation $\cdot$ , which we can generically call the product. The notation $g_{1}\cdot g_{2}$ denotes the product of two elements in the set; however, it is standard to omit the operator and write simply $g_{1}g_{2}$ — a convention we adopt here. Concretely, a group $G$ may define a class of transformations. For example, we can consider the group of two-dimensional rotations in the plane—the special orthogonal group $SO(2)$ —or the group of two-dimensional rotations and translations in the plane—the special Euclidean group $SE(2)$ . Each element of the group $g \in G$ defines a particular transformation, such as one rotation by $30^{\circ}$ or one rotation by $90^{\circ}$ . The binary operation $\cdot$ provides a means for combining two particular transformations—for example, first rotating by $30^{\circ}$ and then rotating by $90^{\circ}$ . In mathematics, for a set of transformations $G$ to be a group under the operation $\cdot$ , the four axioms of closure, associativity, identity and inverse must hold. These axioms are recalled in Appendix A.

Group Actions on Spaces. We now detail how a transformation $g$ can transform elements of a space, for example how a rotation by $30^{\circ}$ indeed rotates a vector in the plane by $30^{\circ}$ . We say that the transformations $g$ act on (the elements of) a given space. Specifically, consider a space $X$ , such as the plane. A group action is a function $L: G \times X \to X$ that maps $(g, x)$ pairs to elements of $X$ .
We say a group $G$ acts on a space $X$ if the following properties of the action $L$ hold:

1. Identity: The identity $e$ of the group $G$ "does nothing", i.e., it maps any element $x \in X$ to itself. This can be written as: $L(e, x) = x$ .
2. Compatibility: Two elements $g_1, g_2 \in G$ can be combined before or after the map $L$ to yield the same result, i.e., $L(g_1, L(g_2, x)) = L(g_1 g_2, x)$ . For example, rotating a 2D vector by $30^\circ$ and then by $40^\circ$ yields the same result as rotating that vector by $70^\circ$ in a single step.

For simplicity, we will use the shortened notation $L_{g}(x)$ to denote $L(g,x)$ , the action of the transformation $g$ on the element $x$ .

Some group actions $L$ have additional properties and turn the spaces $X$ on which they operate into homogeneous spaces. Homogeneous spaces play an important role in the definition of the $G$ -convolution in $G$ -CNNs, so we recall their definition here. We say that $X$ is a homogeneous space for a group $G$ if $G$ acts transitively on $X$ —that is, if for every pair $x_{1}, x_{2} \in X$ there exists an element $g \in G$ such that $L_{g}(x_{1}) = x_{2}$ . The concept can be clearly illustrated by considering the surface of a sphere, the space $S^2$ . The sphere $S^2$ is a homogeneous space for $SO(3)$ , the group of orthogonal $3 \times 3$ matrices with determinant one that define 3-dimensional rotations. Indeed, for every pair of points on the sphere, one can define a 3D rotation matrix that takes one to the other.

Group Actions on Signal Spaces. We have introduced essential concepts from group theory, where a group $G$ can act on any abstract space $X$ . Moving towards building $G$ -CNNs, we now introduce how groups act on spaces of signals, such as images. Formally, a signal is a map $f: \Omega \to \mathbb{R}^c$ , where $\Omega$ is called the domain of the signal and $c$ denotes the number of channels.
The space of signals itself is denoted $L_2(\Omega, \mathbb{R}^c)$ . For example, $\Omega = \mathbb{R}^2$ or $\mathbb{R}^3$ for 2D and 3D images. Gray-scale images have one channel ( $c = 1$ ) and color images have the 3 red-green-blue channels ( $c = 3$ ).

Any action of a group of transformations $G$ on a domain $\Omega$ yields an action of that same group on the space of signals defined on that domain, i.e., on $L_{2}(\Omega, \mathbb{R}^{c})$ . For example, knowing that the group of 2D rotations $SO(2)$ acts on the plane $\Omega = \mathbb{R}^2$ allows us to define how $SO(2)$ rotates 2D gray-scale images in $L_{2}(\mathbb{R}^{2}, \mathbb{R}^{c})$ . Concretely, the action $L$ of a group $G$ on the domain $\Omega$ yields the following action of $G$ on $L_{2}(\Omega, \mathbb{R}^{c})$ :

$$
L_{g}[f](u) = f\left(L_{g^{-1}}(u)\right), \quad \text{for all } u \in \Omega \text{ and all } g \in G. \tag{1}
$$

We use the same notation $L_{g}$ to refer to the action of the transformation $g$ on either an element $u$ of the domain or on a signal $f$ defined on that domain, distinguishing the signal case with brackets $[\cdot]$ . We note that the domain of a signal can be the group itself: $\Omega = G$ . In what follows, we will also consider actions on real signals defined on a group, i.e., on signals such as $\Theta : G \to \mathbb{R}$ .

Invariance and Equivariance. The concepts of group-invariance and equivariance are at the core of what makes $G$ -CNNs desirable for computer vision applications. We recall their definitions here. A function $\psi : X \mapsto Y$ is $G$ -invariant if $\psi(x) = \psi(L_g(x))$ for all $g \in G$ and $x \in X$ . This means that group actions on the input space have no effect on the output.
Applied to the group of rotations acting on the space of 2D images $X = L_2(\Omega, \mathbb{R}^c)$ with $\Omega = \mathbb{R}^2$ , this means that a $G$ -invariant function $\psi$ produces an output that stays the same for any rotated version of a given signal. For example, whether the image contains the color red is invariant with respect to any rotation of that image. A function $\psi : X \mapsto Y$ is $G$ -equivariant if $\psi(L_g(x)) = L_g'(\psi(x))$ for all $g \in G$ and $x \in X$ , where $L$ and $L'$ are two different actions of the group $G$ , on the spaces $X$ and $Y$ respectively. This means that a group action on the input space results in a corresponding group action of the same group element $g$ on the output space. For example, consider $\psi$ that represents a neural network performing a foreground-background segmentation of an image. It is desirable for $\psi$ to be equivariant to the group of 2D rotations. This equivariance ensures that, if the input image $f$ is rotated by $30^\circ$ , then the output segmentation $\psi(f)$ rotates by $30^\circ$ as well.

# 2.2 $G$ -Equivariant Networks

$G$ -CNNs are built from the following fundamental building blocks: $G$ -convolution, spatial pooling, and $G$ -pooling. The $G$ -convolution is equivariant to the action of the group $G$ , while $G$ -pooling achieves $G$ -invariance. Spatial pooling achieves coarse-graining. We review the group-specific operations here. The interested reader can find additional details in [8, 10], which include the definitions of these operations using the group-theoretic framework of principal bundles and associated vector bundles.

# 2.2.1 $G$ -Convolution

In plain language, a standard translation-equivariant convolutional neural network layer sweeps filters across a signal (typically, an image), translating the filter and then taking an inner product with the signal to determine the similarity between a local region and the filter.
$G$ -CNNs [8] generalize this idea, replacing translation with the action of other groups that define symmetries in a machine learning task—for example, rotating a filter, to determine the presence of a feature in various orientations. + +Consider a signal $f$ defined on a domain $\Omega$ on which a group $G$ acts. A neural network filter is a map $\phi : \Omega \to \mathbb{R}^c$ defined with the same domain $\Omega$ and codomain $\mathbb{R}^c$ as the signal. A $G$ -convolutional layer is defined by a set of filters $\{\phi_1, \dots, \phi_K\}$ . For a given filter $k$ , the layer performs a $G$ -convolution with the input signal $f$ : + +$$ +\Theta_ {k} (g) = \left(\phi_ {k} * f\right) (g) = \int_ {u \in \Omega} \phi_ {k} \left(L _ {g ^ {- 1}} (u)\right) f (u) d u, \quad \forall g \in G, \tag {2} +$$ + +by taking the dot product in $\mathbb{R}^c$ of the signal with a transformed version of the filter. In practice, the domain $\Omega$ of the signal is discretized, such that the $G$ -convolutional layer becomes: + +$$ +\Theta_ {k} (g) = \sum_ {u \in \Omega} \phi_ {k} \left(L _ {g ^ {- 1}} (u)\right) f (u), \quad \forall g \in G. \tag {3} +$$ + +The output of one filter $k$ is therefore a map $\Theta_k: G \to \mathbb{R}$ , while the output of the whole layer with $K$ filters is $\Theta: G \to \mathbb{R}^K$ defined as $\Theta(g) = [\Theta_1(g), \dots, \Theta_K(g)]$ for all $g \in G$ . The $G$ -convolution therefore outputs a signal $\Theta$ whose domain has necessarily become the group $\Omega = G$ and whose number of channels is the number of convolutional filters $K$ . + +The $G$ -convolution is equivariant to the action of the group on the domain of the signal $f$ [8]. That is, the action of $g$ on the domain of $f$ results in a corresponding action on the output of the layer. 
Specifically, consider a filter $\phi_k$ , we have: + +$$ +\phi_ {k} * L _ {g} [ f ] = L _ {g} ^ {\prime} \left[ \phi_ {k} * f \right], \quad \forall g \in G, \tag {4} +$$ + +where $L_{g}$ and $L_{g}^{\prime}$ represent the actions of the same group element $g$ on the functions $f$ and $\phi_k * f$ respectively. This property applies for the $G$ -convolutions of the first layer and of the next layers [8]. + +# 2.2.2 $G$ -Pooling + +Invariance to the action of the group is achieved by pooling over the group $(G$ -Pooling) [8]. The pooling operation is typically performed after the $G$ -convolution, so that we restrict its definition to signals $\Theta$ defined over a group $G$ . In $G$ -pooling, a max typically is taken over the group elements: + +$$ +\mu_ {k} = \max _ {g \in G} \Theta_ {k} (g). \tag {5} +$$ + +$G$ -pooling extracts a single real scalar value $\mu_{k}$ from the full feature vector $\Theta_{k}$ , which has $|G|$ values, with $|G|$ the size of the (discretized) group $G$ as shown in Fig. 1. When the group $G$ is a grid + +discretizing $\mathbb{R}^n$ , max $G$ -Pooling is equivalent to the standard spatial max pooling used in translation-equivariant CNNs, and it can be used to achieve coarse-graining. More generally, $G$ -Pooling is $G$ -invariant, as shown in [8]. However, we argue here that it is excessively $G$ -invariant. Although it achieves the objective of invariance to the group action, it also loses substantial information. As illustrated in Fig. 1, many different signals $\Theta$ may yield same result $\mu$ through the $G$ -pooling operation, even if these signals do not share semantic information. This excessive invariance creates an opportunity for adversarial susceptibility. Indeed, inputs $f$ can be designed with the explicit purpose of generating a $\mu_k$ that will fool a neural network and yield an unreasonable classification result. For this reason, we introduce our general framework for robust, selective $G$ -invariance. 
+ +# 3 The $G$ -Triple-Correlation Layer for Robust $G$ -Invariance + +We propose a $G$ -Invariant layer designed for $G$ -CNNs that is complete—that is, it preserves all information about the input signal except for the group action. Our approach leverages the theory of the triple correlation on groups [32] and applies it to the design of robust neural network architectures. Its theoretical foundations in signal processing and invariant theory allows us to generally define the unique $G$ -invariant maps of lowest polynomial order that are complete, hence providing a general framework for selective, robust $G$ -invariance in $G$ -CNNs [46]. + +# 3.1 The $G$ -Triple-Correlation Layer + +The $G$ -Triple-Correlation $(G\text{-TC})$ on a real signal $\Theta : G \to \mathbb{R}$ is the integral of the signal multiplied by two independently transformed copies of it [32]: + +$$ +\tau_ {\Theta} \left(g _ {1}, g _ {2}\right) = \int_ {g \in G} \Theta (g) \Theta \left(g g _ {1}\right) \Theta \left(g g _ {2}\right) d g. \tag {6} +$$ + +This definition holds for any locally compact group $G$ on which we can define the Haar measure $dg$ used for integration purposes [28]. This definition above is applicable to the $G$ -CNNs where $\Theta$ is a collection of scalar signals over the group. We show in Appendix B that we can extend the definition to steerable $G$ -CNNs where $\Theta$ can be an arbitrary field [9]. + +In the equation above, the $G$ -TC is computed for a pair of group elements $g_{1}, g_{2}$ . In practice, we sweep over all pairs in the group. Appendix C illustrates the triple correlation on three concrete groups. Importantly, the $G$ -triple-correlation is invariant to the action of the group $G$ on the signal $\Theta$ [28], as shown below. + +Proposition 1. Consider a signal $\Theta : G \mapsto \mathbb{R}^c$ . 
The $G$-Triple-Correlation $\tau$ is $G$-invariant:

$$
\tau_{L_g[\Theta]} = \tau_{\Theta}, \quad \forall g \in G, \tag{7}
$$

where $L_{g}$ denotes the action of a transformation $g$ on the signal $\Theta$.

The proof is recalled in Appendix D. We propose to achieve $G$-invariance in a $G$-CNN by applying the $G$-Triple-Correlation ($G$-TC) to the output $\Theta$ of a $G$-convolutional layer. Specifically, we apply the $G$-TC to each real scalar-valued signal $\Theta_{k}$ that comes from the $G$-convolution of filter $\phi_{k}$, for $k \in \{1, \dots, K\}$. We omit the subscript $k$ below for clarity of notation. In practice, we use the triple correlation on discretized groups, where the integral is replaced with a summation:

$$
T_{\Theta}(g_1, g_2) = \sum_{g \in G} \Theta(g) \Theta(g g_1) \Theta(g g_2), \tag{8}
$$

for $\Theta$ a scalar-valued function defined over $G$. While it may seem that the layer must compute $T_{\Theta}(g_1, g_2)$ for all pairs of group elements $(g_1, g_2)$, the real scalars $\Theta(g g_1)$ and $\Theta(g g_2)$ commute, so only half of the pairs are required. We will see that the number of computations can be reduced further when the group $G$ possesses additional properties such as commutativity.

We note that the triple correlation is the spatial dual of the bispectrum, which has demonstrated robustness properties in the context of deep learning with bispectral neural networks [42]. The goal of bispectral neural networks is to learn an unknown group $G$ from data. The bispectral layer proposed in [42] assumes an MLP architecture. Our work is the first to generalize the use of bispectral invariants to convolutional networks. Here, we assume that the group $G$ is known in advance, and exploit the theoretical properties of the triple correlation to achieve robust invariance.
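For a discretized group, Eq. (8) is directly computable. The following sketch is a minimal illustration of ours (assuming the cyclic group $C_8$, with the group product realized as index addition modulo 8): it checks the invariance of Proposition 1 numerically, and also that a generic signal outside the orbit receives a different $G$-TC, in contrast to max $G$-pooling.

```python
import numpy as np

# Our illustration (assumption: cyclic group C_8; the group product g*g1 is
# index addition mod n, and the action L_h[theta] is a cyclic shift).

def triple_correlation(theta):
    """Discretized G-TC of Eq. (8) for C_n, computed for all pairs (g1, g2)."""
    n = len(theta)
    g = np.arange(n)
    return np.array([[np.dot(theta, theta[(g + g1) % n] * theta[(g + g2) % n])
                      for g2 in range(n)]
                     for g1 in range(n)])

rng = np.random.default_rng(0)
theta = rng.normal(size=8)

# Proposition 1: the G-TC is unchanged by any group action on the signal.
for h in range(8):
    assert np.allclose(triple_correlation(np.roll(theta, h)),
                       triple_correlation(theta))

# Selectivity: a generic signal outside the orbit of theta yields a
# different G-TC (the completeness property discussed in Section 3.2).
other = rng.normal(size=8)
assert not np.allclose(triple_correlation(other), triple_correlation(theta))
```

The invariance check follows directly from a change of variables $g \mapsto g h^{-1}$ in the sum of Eq. (8), which is exact on the discretized group.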
One path for future extension may be to combine our approach with the learning approach of [42], to parameterize and learn the group $G$ that defines a $G$-Equivariant and $G$-Invariant layer.

# 3.2 Selective Invariance through Completeness

We show here that the proposed $G$-triple-correlation is guaranteed to preserve all information aside from any equivariant component due to the group action on the input domain. This crucial property distinguishes our proposed layer from standard $G$-Pooling methods, which collapse signals and lose substantial information about the input (Figure 1). In contrast with these excessively invariant $G$-pooling methods, we show here that our $G$-TC layer is instead selectively $G$-invariant thanks to its completeness property [54, 29, 31], defined here:

Proposition 2. Every integrable function with compact support on $G$ is completely identified—up to group action—by its $G$-triple-correlation. We say that the $G$-triple-correlation is complete.

Mathematically, an operator $\mathcal{T}$ is complete for a group action $L$ if the following holds: for every pair of signals $\Theta_{1}$ and $\Theta_{2}$, if $\mathcal{T}(\Theta_1) = \mathcal{T}(\Theta_2)$, then the signals are equal up to the group action; that is, there exists a group element $h$ such that $\Theta_{2} = L_{h}[\Theta_{1}]$.

The proof of the completeness of the $G$-triple-correlation is valid only under a precise set of assumptions [32] (Theorem 2). As we seek to integrate the $G$-triple-correlation into neural networks to enhance their robustness, we investigate here the scope of these assumptions. First, the assumptions are not restrictive on the type of groups $G$ that can be used. Indeed, the proof only requires the groups to be Tatsuuma duality groups, and the groups of interest in this paper meet this condition.
This includes all locally compact commutative groups; all compact groups, including the groups of rotations (the special orthogonal groups $SO(n)$); and the groups of rotations and translations (the special Euclidean groups $SE(n)$). Second, the assumptions are not restrictive on the types of signals. Indeed, the signal only needs to be such that all of its Fourier transform coefficients are invertible. For example, when the Fourier transform coefficients are scalar values, this means that we require these scalars to be non-zero. In practical applications on real image data with noise, there is probability zero that the Fourier transform coefficients of the input signal will be exactly 0 (scalar case) or non-invertible (matrix case), because the group of invertible matrices is dense in the space of matrices. Therefore, this condition is also satisfied in the applications of interest, and more generally we expect the completeness property of our $G$-TC layer to hold in practical neural network applications.

# 3.3 Uniqueness

The two subsections above establish that our $G$-Triple-Correlation layer is selectively $G$-invariant. Here, we note that our proposed layer is the lowest-degree polynomial layer that can achieve this goal. In invariant theory, the $G$-Triple-Correlation is known to be the only third-order polynomial invariant (up to change of basis) [46]. Moreover, it is the lowest-degree polynomial invariant that is also complete. It thus provides a unique and minimal-complexity solution to the problem of robust invariance within this function class.

# 3.4 Computational Complexity

The $G$-Triple-Correlation enjoys symmetries that we can leverage to avoid computing it for every pair of group elements (which would require $|G|^2$ computations), hence making the feedforward pass more efficient. We summarize these symmetries here.

Proposition 3. Consider two transformations $g_1, g_2 \in G$.
The $G$-Triple-Correlation of a real signal $\Theta$ has the following symmetry:

$$
T_{\Theta}(g_1, g_2) = T_{\Theta}(g_2, g_1).
$$

If $G$ is commutative, the $G$-Triple-Correlation of a real signal has the following additional symmetries:

$$
T_{\Theta}(g_1, g_2) = T_{\Theta}(g_1^{-1}, g_2 g_1^{-1}) = T_{\Theta}(g_2 g_1^{-1}, g_1^{-1}) = T_{\Theta}(g_2^{-1}, g_1 g_2^{-1}) = T_{\Theta}(g_1 g_2^{-1}, g_2^{-1}).
$$

The proofs are given in [39] for the group of translations; we extend them to any locally compact group $G$ in Appendix E. In practice, these symmetries mean that, even though there are theoretically $|G|^{2}$ computations, this number immediately reduces to $\frac{|G|(|G| + 1)}{2}$, and it reduces further if the group $G$ of interest is commutative. In addition, more subtle symmetries can be exploited to reduce the computational cost to $|G| + 1$ terms, i.e., linear in $|G|$, for one-dimensional cyclic groups [34], by considering the spectral dual of the $G$-TC: the bispectrum. We provide a computational approach that extends this reduction to more general, non-commutative groups in Appendix F. The theory supporting this approach has yet to be extended to the general case; thus, there is an opportunity for new theoretical work that further increases the computational efficiency of the $G$-Triple-Correlation.

# 4 Related Work

The Triple Correlation. The triple correlation has a long history in signal processing [48, 5, 39]. It originally emerged from the study of the higher-order statistics of non-Gaussian random processes, but its invariance properties with respect to translation have been leveraged in texture statistics [53] and data analysis in neuroscience [13], as well as in early multi-layer perceptron architectures in the 1990s [12, 33].
The triple correlation was extended to groups beyond translations in [32], and its completeness with respect to general compact groups was established in [30]. To the best of our knowledge, the triple correlation has not previously been introduced as a method for achieving invariance in convolutional networks, for either translation or more general groups.

Pooling in CNNs. Pooling in CNNs typically has the dual objective of coarse-graining and achieving local invariance. While invariance is one desideratum for the pooling mechanism, the machinery of group theory is rarely employed in the computation of the invariant map itself. As noted in the introduction, max and average pooling are by far the most common methods employed in CNNs and $G$-CNNs. However, some approaches beyond strict max and average pooling have been explored. Soft pooling addresses the lack of smoothness of the max function by using a smooth approximation of it instead, with methods including polynomial pooling [49] and learned-norm pooling [22], among many others [15, 14, 43, 44, 3, 45, 11, 35]. Stochastic pooling [57] reduces overfitting in CNNs by introducing randomness into the pooling, yielding mixed pooling [55] and max-pooling dropout [51], among others [47, 58, 21].

Geometrically-Aware Pooling. Some approaches have been adopted to encode spatial or structural information about the feature maps, including spatial pyramid pooling [24], part-based pooling [59], geometric $L_{p}$-norm pooling [16], and pooling regions defined as concentric circles [41]. In all of these cases, the pooling computation is still defined by a max. These geometric pooling approaches are reminiscent of the Max $G$-Pooling for $G$-CNNs introduced by [8] and defined in Section 2.2.2, albeit without the explicit use of group theory.

Higher-Order Pooling.
Average pooling computes first-order statistics (the mean) by pooling from each channel separately and does not account for interactions between feature maps coming from different channels. Second-order pooling mechanisms have thus been proposed to consider correlations between features across channels [38, 19], but higher orders are not investigated. Our approach computes a third-order polynomial invariant; however, it looks for higher-order correlations within the group rather than across channels, and thus treats channels separately. In principle, these approaches could be combined.

# 5 Experiments & Results

# Implementation

We implement the $G$-TC Layer for arbitrary discretized groups with an efficient implementation built on top of the ESCNN library [7, 50], which provides a general implementation of $E(n)$-Equivariant Steerable Convolutional Layers. The method is flexibly defined, requiring the user only to provide a Cayley table that defines the group's product structure. The code is publicly available at https://github.com/sophiaas/gtc-invariance. Here, we demonstrate the approach on the groups $SO(2)$, $O(2)$, $SO(3)$, and $O(3)$, discretized as the groups $C_n$ (cyclic), $D_n$ (dihedral), $O$ (chiral octahedral), and $O_h$ (full octahedral), respectively. ESCNN provides implementations of $G$-Conv layers on all of these $E(n)$ subgroups.

![](images/89995f271c4745e962dddfc93f77fa126f097adacc98c5d6a1bc1fd6cf7da971.jpg)

![](images/673d633f91407da6b2b424f4e331374c68a3f4a96326ee5f7f55370e400426f8.jpg)

![](images/59c80e1d074aebeae6cf9ac583cda4d702b0e90289a946241fbb96beb3ec896f.jpg)

![](images/b316eb93f462b49fe48e59456f9b9db14d2737c953ff6940d9f348b9b9c1c799.jpg)

![](images/39366cab0891cb79b83f95dc573db69f68fb6675b8bca4e4b8cab96751ab25aa.jpg)

![](images/81bd4b35b88a7b625f8c3ab9896e06fd70753fe6aef5226a5cba6361028d3e9e.jpg)

Figure 2: Datasets.
The $O(2)$-MNIST (top) and $O(3)$-ModelNet10 (bottom) datasets are generated by applying a random (rotation, reflection) pair to each element of the original datasets. Although we visualize the continuous group here, in practice we discretize the group $O(3)$ as the full octahedral group $O_h$ to reduce computational complexity. The $SO(2)$ and $SO(3)$ datasets are generated similarly, by applying a random rotation to each datapoint.

![](images/5f2f30c37f8402cca6a01db503c933fb9370b05513d52af420e098f9c7d85972.jpg)

![](images/ff0a8ca4b078536620ffc88430f62c626710f289cfc0abcfec2a4b5f2ad9ba57.jpg)

# Experimental Design

We examine the performance of the $G$-TC over Max $G$-Pooling in $G$-Equivariant Networks defined on these groups and trained on $G$-invariant classification tasks. For the groups $SO(2)$ and $O(2)$ acting on $\mathbb{R}^2$, we use the MNIST dataset of handwritten digits [37], and for the groups $SO(3)$ and $O(3)$ acting on $\mathbb{R}^3$, we use the voxelized ModelNet10 database of 3D objects [52]. We generate the $G$-MNIST and $G$-ModelNet10 datasets by transforming the domain of each signal in the dataset by a randomly sampled group element $g \in G$ (Figure 2).

In these experiments, we train pairs of models in parameter-matched architectures in which only the $G$-Pooling method differs. Note that the purpose of these experiments is to compare differences in performance between models using Max $G$-Pooling vs. the $G$-TC—not to achieve SOTA accuracy. Thus, we do not optimize the models for overall performance. Rather, we fix a simple architecture and set of hyperparameters and examine the change in performance that arises from replacing Max $G$-Pooling with the $G$-TC Layer (Figure 3).

![](images/f667a2d1afcb081a09d84cb7d40d7a8d9e9f67d6e5b7b1b5e67c9e553b90084e.jpg)

Figure 3: Models.
We compare two simple architectures composed of a single $G$-Conv block followed by either a Max $G$-Pool layer or a $G$-TC Layer, and then an MLP Classifier.

![](images/46d1fef225f82d7c602699e74a94d7616a4827e6c923ca8a09f8855320fd377e.jpg)

To isolate the effects of the $G$-Pooling method, all models consist of a single $G$-Conv block followed by $G$-Pooling (Max or TC) and an MLP Classifier. Notably, while many $G$-Conv models in the literature use the semi-direct product of $G$ with $\mathbb{R}^n$—i.e., incorporating the actions of the group $G$ into a standard translational convolutional model—here we perform only the pure $G$-Conv, without translation. Thus, we use filters of the same size as the input in all models. The $G$-Conv block is composed of a $G$-Conv layer, a batch norm layer, and an optional nonlinearity. For the Max $G$-Pool model, ReLU is used as the nonlinearity. Given the third-order nonlinearity of the TC, we omit the nonlinearity in the $G$-Conv block of the TC model. The $G$-TC layer increases the dimensionality of the output of the $G$-Conv block; consequently, the input dimension of the first layer of the MLP is larger, and its weight matrix contains more parameters than in the Max $G$-Pool model. To compensate, we increase the dimension of the output of the first MLP layer in the Max model to match the overall number of parameters.
The completeness evaluation is inspired by a recent paper that incorporates the bispectrum—the spectral dual of the triple correlation—into a neural network architecture trained to yield $G$-invariant representations for $G$-transformed data [42]. In that work, two inputs are considered "perceptually distinct" if they are not in the same group orbit. The authors find that all inputs optimized to yield the same representation in the bispectral model are identical up to the group action. By contrast, many metameric stimuli can be found for $E(2)$-CNN [50], a $G$-Equivariant CNN that uses Max $G$-Pooling. Given the duality of the bispectrum and the triple correlation, we expect to observe similar completeness for $G$-CNNs using the $G$-TC Layer.

# 5.1 Classification Performance

We train $G$-TC and Max $G$-Pooling models on the $SO(2)$- and $O(2)$-MNIST and chiral ($O$) and full ($O_h$) octahedral voxelized ModelNet10 training datasets and examine their classification performance on the test set. Full training details, including hyperparameters, are provided in Appendix G. Table 1 shows the test classification accuracy obtained by the Max-$G$ and $G$-TC architectures on each dataset. Accuracy is averaged over four random seeds, with confidence intervals showing the standard deviation. We find that the model equipped with the $G$-TC obtains a significant improvement in overall classification performance—an increase of 1.3, 0.89, 1.84, and 3.49 percentage points on $SO(2)$-MNIST, $O(2)$-MNIST, $O$-ModelNet10, and $O_h$-ModelNet10, respectively.
| Method | Accuracy (C8-CNN on SO(2)-MNIST) | Parameters | Accuracy (D16-CNN on O(2)-MNIST) | Parameters |
| --- | --- | --- | --- | --- |
| Max G-Pool | 95.23% ± 0.15 | 32,915 | 92.17% ± 0.23 | 224,470 |
| G-TC | 96.53% ± 0.16 | 35,218 | 93.06% ± 0.09 | 221,074 |

| Method | Accuracy (O-CNN on O-ModelNet10) | Parameters | Accuracy (Oh-CNN on Oh-ModelNet10) | Parameters |
| --- | --- | --- | --- | --- |
| Max G-Pool | 72.17% ± 0.95 | 500,198 | 71.73% ± 0.23 | 1,826,978 |
| G-TC | 74.01% ± 0.48 | 472,066 | 75.22% ± 0.62 | 1,817,602 |
Table 1: Classification Accuracy & Parameter Counts for Models Trained on G-MNIST and G-ModelNet10. Confidence intervals reflect the standard deviation over four random seeds per model. The model equipped with G-TC rather than Max G-Pooling obtains significantly improved classification performance on all datasets.

# 5.2 Completeness

Following the analysis of [42], we next evaluate the completeness of the models trained on the $G$-MNIST dataset. Figure 4 shows inputs optimized to yield the same pre-classifier representation as a set of target images. In line with the findings of [42], we find that all inputs yielding identical representations and classifications in the $G$-TC model are within the same group orbit. Notably, the optimized images are identical to the targets, up to the group action. This reflects exactly the completeness of the $G$-TC: the $G$-TC preserves all signal structure up to the group action. Thus, any rotated version of a target will yield the same $G$-TC Layer output. By contrast, many "metameric" misclassified stimuli can be found for the Max $G$-Pool model, a consequence of the lossiness of this pooling operation.

![](images/b535272b9c5a5ea08496715731beedc6b50cd87a54de7a43a56a4d25594d58a5.jpg)
Figure 4: Optimized Model Metamers. For each model, 100 targets from the MNIST dataset were randomly selected, and 100 inputs were randomly initialized and optimized to yield identical pre-classifier model representations. All inputs optimized for the $G$-TC model converge to the orbit of the target. By contrast, metamers that bear no semantic relationship to the targets are found for every target in the Max $G$-Pooling model.

![](images/87e894ef74a72586fd99210860488592c2197a5f84139984d8acac0541f471fe.jpg)

# 6 Discussion

In this work, we introduced a new method for achieving robust group invariance in group-equivariant convolutional neural networks.
Our approach, the $G$-TC Layer, is built on the triple correlation on groups, the lowest-degree polynomial that is a complete group-invariant map [32, 46]. Our method inherits its completeness, which provides measurable gains in robustness and classification performance as compared to the ubiquitous Max $G$-Pooling.

This improved robustness comes at a cost: the $G$-TC Layer increases the dimension of the output of a $G$-convolutional layer from $|G|$ to $\frac{|G|(|G| + 1)}{2}$. While the size of the discretized groups used in $G$-CNNs is typically small, this increase in computational cost may nonetheless deter practitioners from its use. However, there is a path to a further reduction in computational complexity provided that we consider its spectral dual: the bispectrum. In [34], an algorithm is provided that exploits more subtle symmetries of the bispectrum to demonstrate that only $|G| + 1$ terms are needed to provide a complete signature of signal structure for one-dimensional cyclic groups. In Appendix F, we extend the computational approach of [34] to more general groups and provide a path to a substantial reduction in the complexity of the $G$-TC Layer, thus expanding its practical utility. Novel mathematical work that grounds our proposed computations in group theory is required to quantify the exact complexity reduction that we provide.

As geometric deep learning is applied to increasingly complex data from the natural sciences [18, 2, 27], we expect robustness to play a critical role in its success. Our work is the first to introduce the general group-invariant triple correlation as a new computational primitive for geometric deep learning. We expect the mathematical foundations and experimental successes that we present here to provide a basis for rethinking the problems of invariance and robustness in deep learning architectures.
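The pair counting behind the $\frac{|G|(|G|+1)}{2}$ figure can be checked numerically. The sketch below (our own illustration, again using the cyclic group $C_8$ with index addition modulo 8 as the group product) builds the full $|G| \times |G|$ table of Eq. (8) and verifies the symmetries of Proposition 3 that justify the reduction.

```python
import numpy as np

# Our illustration (assumption: cyclic group C_n, group product = index
# addition mod n). Build the full |G| x |G| triple-correlation table of
# Eq. (8) and check the symmetries of Proposition 3.

def triple_correlation(theta):
    n = len(theta)
    g = np.arange(n)
    return np.array([[np.dot(theta, theta[(g + g1) % n] * theta[(g + g2) % n])
                      for g2 in range(n)]
                     for g1 in range(n)])

theta = np.random.default_rng(1).normal(size=8)
n = len(theta)
T = triple_correlation(theta)

# T(g1, g2) = T(g2, g1): only the upper triangle, |G|(|G|+1)/2 entries, is needed.
assert np.allclose(T, T.T)
assert n * (n + 1) // 2 == 36  # for |G| = 8, versus |G|^2 = 64

# For commutative G there are further symmetries, e.g. T(g1, g2) = T(g1^-1, g2 g1^-1),
# which for C_n reads T[g1, g2] = T[-g1 mod n, (g2 - g1) mod n].
for g1 in range(n):
    for g2 in range(n):
        assert np.isclose(T[g1, g2], T[(-g1) % n, (g2 - g1) % n])
```

The further linear-cost reduction mentioned above operates on the bispectrum rather than on this spatial table, so it is not reproduced here.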
# Acknowledgments

The authors thank Christopher Hillar, Bruno Olshausen, and Christian Shewmake for many conversations on the bispectrum and triple correlation, which have helped shape the ideas in this work. Thanks also to the members of the UCSB Geometric Intelligence Lab and to four anonymous reviewers for feedback on earlier versions. Lastly, the authors acknowledge financial support from the UC Noyce Initiative: UC Partnerships in Computational Transformation, NIH R01 1R01GM144965-01, and NSF Grant 2134241.

# References

[1] Edward H Adelson and James R Bergen. "Spatiotemporal energy models for the perception of motion". In: JOSA A 2.2 (1985), pp. 284-299.
[2] Kenneth Atz, Francesca Grisoni, and Gisbert Schneider. "Geometric deep learning on molecular representations". In: Nature Machine Intelligence 3.12 (2021), pp. 1023-1032.
[3] Florentin Bieder, Robin Sandkühler, and Philippe C Cattin. "Comparison of methods generalizing max- and average-pooling". In: arXiv preprint arXiv:2103.01746 (2021).
[4] Wieland Brendel and Matthias Bethge. "Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet". In: arXiv preprint arXiv:1904.00760 (2019).
[5] D Brillinger. "Some history of higher-order statistics and spectra". In: Stat. Sin. 1 (1991), pp. 465-476.
[6] Michael M Bronstein et al. "Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges". In: arXiv preprint arXiv:2104.13478 (2021).
[7] Gabriele Cesa, Leon Lang, and Maurice Weiler. "A Program to Build E(N)-Equivariant Steerable CNNs". In: International Conference on Learning Representations. 2022. URL: https://openreview.net/forum?id=WE4qe9xlnQw.
[8] Taco Cohen and Max Welling. "Group equivariant convolutional networks". In: International conference on machine learning. PMLR. 2016, pp. 2990-2999.
[9] Taco S Cohen and Max Welling. "Steerable CNNs". In: arXiv preprint arXiv:1612.08498 (2016).
[10] Taco S. Cohen, Mario Geiger, and Maurice Weiler.
“A general theory of equivariant CNNs on homogeneous spaces”. In: Advances in Neural Information Processing Systems 32.NeurIPS (2019). ISSN: 10495258. +[11] Wojciech Czaja et al. "Maximal function pooling with applications". In: Excursions in Harmonic Analysis, Volume 6: In Honor of John Benedetto's 80th Birthday (2021), pp. 413-429. +[12] Anastasios Delopoulos, Andreas Tirakis, and Stefanos Kollias. "Invariant image classification using triple-correlation-based neural networks". In: IEEE Transactions on Neural Networks 5.3 (1994), pp. 392-408. +[13] Sarita S Deshpande, Graham A Smith, and Wim van Drongelen. "Third-order motifs are sufficient to fully and uniquely characterize spatiotemporal neural network activity". In: Scientific Reports 13.1 (2023), p. 238. +[14] Hayoung Eom and Heeyoul Choi. Alpha-Integration Pooling for Convolutional Neural Networks. 2020. arXiv: 1811.03436 [cs.LG]. +[15] Joan Bruna Estrach, Arthur Szlam, and Yann LeCun. "Signal recovery from pooling representations". In: International conference on machine learning. PMLR. 2014, pp. 307-315. +[16] Jiashi Feng et al. "Geometric Lp-norm feature pooling for image classification". In: CVPR 2011. IEEE. 2011, pp. 2609-2704. +[17] K. Fukushima. “Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position”. In: Biological Cybernetics 36 (1980), pp. 193–202. +[18] Pablo Gainza et al. "Deciphering interaction fingerprints from protein molecular surfaces using geometric deep learning". In: Nature Methods 17.2 (2020), pp. 184-192. +[19] Zilin Gao et al. "Global second-order pooling convolutional networks". In: Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition. 2019, pp. 3024-3033. +[20] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples". In: arXiv preprint arXiv:1412.6572 (2014). + +[21] Benjamin Graham. "Fractional max-pooling". 
In: arXiv preprint arXiv:1412.6071 (2014). +[22] Caglar Gulcehre et al. "Learned-norm pooling for deep feedforward and recurrent neural networks". In: Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2014, Nancy, France, September 15-19, 2014. Proceedings, Part I 14. Springer, 2014, pp. 530-546. +[23] Brian C Hall. "Lie groups, Lie algebras, and representations". In: Quantum Theory for Mathematicians. Springer, 2013, pp. 333-366. +[24] Kaiming He et al. "Spatial pyramid pooling in deep convolutional networks for visual recognition". In: IEEE transactions on pattern analysis and machine intelligence 37.9 (2015), pp. 1904-1916. +[25] David H Hubel and Torsten N Wiesel. "Receptive fields of single neurones in the cat's striate cortex". In: The Journal of physiology 148.3 (1959), pp. 574-591. +[26] Jorn-Henrik Jacobsen et al. "Excessive invariance causes adversarial vulnerability". In: arXiv preprint arXiv:1811.00401 (2018). +[27] Xiangyang Ju et al. "Performance of a geometric deep learning pipeline for HL-LHC particle tracking". In: The European Physical Journal C 81 (2021), pp. 1-14. +[28] R. Kakarala. “A group theoretic approach to the triple correlation”. In: IEEE Workshop on higher order statistics. 1993, pp. 28-32. +[29] Ramakrishna Kakarala. “A group-theoretic approach to the triple correlation”. In: [1993 Proceedings] IEEE Signal Processing Workshop on Higher-Order Statistics. IEEE. 1993, pp. 28–32. +[30] Ramakrishna Kakarala. "Completeness of bispectrum on compact groups". In: arXiv preprint arXiv:0902.0196 1 (2009). +[31] Ramakrishna Kakarala. "The bispectrum as a source of phase-sensitive invariants for Fourier descriptors: a group-theoretic approach". In: Journal of Mathematical Imaging and Vision 44.3 (2012), pp. 341-353. +[32] Ramakrishna Kakarala. "Triple correlation on groups". PhD thesis. University of California, Irvine, 1992. +[33] Stefanos D Kollias. 
"A multiresolution neural network approach to invariant image recognition". In: Neurocomputing 12.1 (1996), pp. 35-57. +[34] R. Kondor. Group theoretical methods in machine learning. Columbia University, PhD Thesis, 2008. +[35] Ashwani Kumar. "Ordinal pooling networks: for preserving information over shrinking feature maps". In: arXiv preprint arXiv:1804.02702 (2018). +[36] Y LeCun, C Cortes, and C Burges. "The MNIST Dataset of Handwritten Digits (Images)". In: NYU: New York, NY, USA (1999). +[37] Yann LeCun. "The MNIST database of handwritten digits". In: http://yann.lecun.com/exdb/mnist/ (1998). +[38] Tsung-Yu Lin, Aruni RoyChowdhury, and Subhransu Maji. "Bilinear CNN models for fine-grained visual recognition". In: Proceedings of the IEEE international conference on computer vision. 2015, pp. 1449-1457. +[39] Chrysostomos L Nikias and Jerry M Mendel. "Signal processing with higher-order spectra". In: IEEE Signal processing magazine 10.3 (1993), pp. 10-37. +[40] Rajendran Nirthika et al. "Pooling in convolutional neural networks for medical image analysis: a survey and an empirical study". In: Neural Computing and Applications 34.7 (Feb. 2022), pp. 5321-5347. DOI: 10.1007/s00521-022-06953-8. URL: https://doi.org/10.1007/s00521-022-06953-8. +[41] Kunlun Qi et al. "Concentric circle pooling in deep convolutional networks for remote sensing scene classification". In: Remote Sensing 10.6 (2018), p. 934. +[42] Sophia Sanborn et al. "Bispectral Neural Networks". In: International Conference on Learning Representations (2023). +[43] Arash Shahriari and Fatih Porikli. "Multipartite pooling for deep convolutional neural networks". In: arXiv preprint arXiv:1710.07435 (2017). +[44] Zenglin Shi, Yangdong Ye, and Yunpeng Wu. "Rank-based pooling for deep convolutional neural networks". In: Neural Networks 83 (2016), pp. 21-31. + +[45] Alexandros Stergiou, Ronald Poppe, and Grigorios Kalliatakis. "Refining activation downsampling with SoftPool". 
In: Proceedings of the IEEE/CVF international conference on computer vision. 2021, pp. 10357-10366. +[46] Bernd Sturmfels. Algorithms in invariant theory. Springer Science & Business Media, 2008. +[47] Zhiqiang Tong, Kazuyuki Aihara, and Gouhei Tanaka. “A hybrid pooling method for convolutional neural networks”. In: Neural Information Processing: 23rd International Conference, ICONIP 2016, Kyoto, Japan, October 16–21, 2016, Proceedings, Part II 23. Springer. 2016, pp. 454–461. +[48] J. Tukey. "The spectral representation and transformation properties of the higher moments of stationary time series". In: Reprinted in The Collected Works of John W. Tukey 1 (1953), pp. 165-184. +[49] Zhen Wei et al. "Building detail-sensitive semantic segmentation networks with polynomial pooling". In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, pp. 7115-7123. +[50] Maurice Weiler and Gabriele Cesa. "General E(2)-Equivariant Steerable CNNs". In: Conference on Neural Information Processing Systems (NeurIPS). 2019. +[51] Haibing Wu and Xiaodong Gu. "Max-pooling dropout for regularization of convolutional neural networks". In: Neural Information Processing: 22nd International Conference, ICONIP 2015, Istanbul, Turkey, November 9-12, 2015, Proceedings, Part I 22. Springer. 2015, pp. 46-54. +[52] Zhirong Wu et al. "3d shapenets: A deep representation for volumetric shapes". In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2015, pp. 1912-1920. +[53] John I Yellott. "Implications of triple correlation uniqueness for texture statistics and the Julesz conjecture". In: JOSA A 10.5 (1993), pp. 777-793. +[54] JI Yellott Jr and GJ Iverson. "Uniqueness theorems for generalized autocorrelations". In: Journal of the Optical Society of America 9 (1992), pp. 388-404. +[55] Dingjun Yu et al. "Mixed pooling for convolutional neural networks". 
In: Rough Sets and Knowledge Technology: 9th International Conference, RSKT 2014, Shanghai, China, October 24-26, 2014, Proceedings 9. Springer. 2014, pp. 364-375.
[56] Afia Zafar et al. "A Comparison of Pooling Methods for Convolutional Neural Networks". In: Applied Sciences 12.17 (Aug. 2022), p. 8643. DOI: 10.3390/app12178643. URL: https://doi.org/10.3390/app12178643.
[57] Matthew D Zeiler and Rob Fergus. "Stochastic pooling for regularization of deep convolutional neural networks". In: arXiv preprint arXiv:1301.3557 (2013).
[58] Shuangfei Zhai et al. "S3pool: Pooling with stochastic spatial sampling". In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2017, pp. 4970-4978.
[59] Ning Zhang, Ryan Farrell, and Trevor Darrell. "Pose pooling kernels for sub-category recognition". In: 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE. 2012, pp. 3665-3672.

# Appendices

# A Group Axioms

For a set of transformations $G$ to be a group under the operation $\cdot$, these four axioms must hold:

1. Closure: The product of any two elements of the group is also an element of the group, i.e., for all $g_1, g_2 \in G$, $g_1 g_2 \in G$.
2. Associativity: The grouping of elements under the operation does not change the outcome, so long as the order of elements is preserved, i.e., $(g_{1}g_{2})g_{3} = g_{1}(g_{2}g_{3})$.
3. Identity: There exists a "do-nothing" identity element $e$ such that the product of $e$ with any other element $g$ returns $g$, i.e., $ge = eg = g$ for all $g \in G$.
4. Inverse: For every element $g$, there exists an inverse element $g^{-1}$ such that the product of $g$ and $g^{-1}$ returns the identity, i.e., $gg^{-1} = g^{-1}g = e$.

# B The Case of Steerable $G$-CNNs

We consider the framework of Steerable $G$-CNNs defined in [9].
Consider a group $G$ that is the semi-direct product $G = \mathbb{Z}^2 \ltimes H$ of the group of translations $\mathbb{Z}^2$ and a group $H$ of transformations that fixes the origin $0 \in \mathbb{Z}^2$. Consider the feature map $\Theta : \mathbb{Z}^2 \to \mathbb{R}^K$ that is the output of a steerable $G$ -CNN. Here, $\Theta$ is a field that transforms according to a representation $\pi$ induced by a representation $\rho$ of $H$ on the fiber $\mathbb{R}^K$. + +The $G$ -TC can be defined on $\Theta$ by replacing the regular representation by the induced representation. Specifically, replace any $\Theta(g)$, i.e., the scalar value of the feature map at group element $g$, by $\pi(g)(\Theta)(x)$, i.e., the vector value of the feature map at $x$ after a group element $g$ has acted on it via the representation $\pi$: + +$$
\tau_{\Theta}(g_1, g_2) = \int_G \pi(g)(\Theta)(x)^{\dagger} \cdot \pi(g_1 g)(\Theta)(x) \cdot \pi(g_2 g)(\Theta)(x) \, dg.
$$ + +Instead of computing the $G$ -TC for each scalar coordinate $k$ of $\Theta$ as in the main text, we directly compute it as a vector. The formulation does not depend on the choice of $x$ by homogeneity of the domain $\mathbb{Z}^2$ for the group $G$. Importantly, this $G$ -TC is invariant to the action of the induced representation; see Appendix D. + +# C The $G$ -Triple Correlation: Concrete Examples + +We show how to compute the $G$ -Triple Correlation ( $G$ -TC) on three concrete groups. We start with the $G$ -TC for the group $\mathbb{R}^2$ of 2D translations. + +Example 1. Consider the group of $2D$ translations $G = (\mathbb{R}^2, +)$ with the addition as the group law. Consider a signal $\Theta$ that is a real function defined over $\mathbb{R}^2$ and can therefore be identified with an image.
For any $x_1, x_2 \in \mathbb{R}^2$ , the $G$ -TC is given by: + +$$ +\tau_ {\Theta} \left(x _ {1}, x _ {2}\right) = \int_ {x \in \mathbb {R} ^ {2}} \Theta (x) \Theta \left(x + x _ {1}\right) \Theta \left(x + x _ {2}\right) d x. \tag {9} +$$ + +Next, we consider the special orthogonal group $SO(2)$ of 2D rotations. + +Example 2. Consider the group of $2D$ rotations $G = (SO(2),\cdot)$ where $SO(2)$ is parameterized by $[0,2\pi]$ and the composition of rotations $\cdot$ is the addition of angles modulo $2\pi$ : + +$$ +\theta_ {1} \cdot \theta_ {2} \equiv \theta_ {1} + \theta_ {2} [ 2 \pi ]. \tag {10} +$$ + +For a real signal $\Theta$ defined over $G$ , we have: + +$$ +\tau_ {\Theta} \left(\theta_ {1}, \theta_ {2}\right) = \int_ {\theta \in S O (2)} \Theta (\theta) \Theta \left(\theta + \theta_ {1}\right) \Theta \left(\theta + \theta_ {2}\right) d \theta , \tag {11} +$$ + +for any $\theta_1, \theta_2 \in SO(2)$ and the addition is taken modulo $2\pi$ . + +Finally, we compute the $G$ -TC for the special euclidean group $SE(2)$ of 2D rotations and translations, i.e., of 2D rigid-body transformations. + +Example 3. Consider the group of 2D rigid body transformations $G = SE(2) = (SO(2) \times \mathbb{R}^2, \cdot)$ equipped with the group composition law: + +$$ +\left(\theta_ {1}, x _ {1}\right) \cdot \left(\theta_ {2}, x _ {2}\right) \equiv \left(\theta_ {1} + \theta_ {2}, R _ {\theta_ {1}}. x _ {2} + x _ {1}\right), \tag {12} +$$ + +where $R_{\theta_1} = \begin{bmatrix} \cos \theta_1 & -\sin \theta_1 \\ \sin \theta_1 & \cos \theta_1 \end{bmatrix}$ is the 2D rotation matrix associated with rotation $\theta_1$ and the addition of angles $\theta_1 + \theta_2$ is taken modulo $2\pi$ . 
+ +For a real signal defined on $SE(2)$ we have: + +$$ +\begin{array}{l} \tau_ {\Theta} \left(\left(\theta_ {1}, x _ {1}\right), \left(\theta_ {2}, x _ {2}\right)\right) = \int_ {\left(\theta , x\right) \in S E (2)} \Theta (\theta , x) \Theta \left(\left(\theta , x\right) \cdot \left(\theta_ {1}, x _ {1}\right)\right) \Theta \left(\left(\theta , x\right) \cdot \left(\theta_ {2}, x _ {2}\right)\right) d \theta d x \\ = \int_ {(\theta , x) \in S E (2)} \Theta (\theta , x) \Theta (\theta + \theta_ {1}, R _ {\theta}. x _ {1} + x) \Theta (\theta + \theta_ {2}, R _ {\theta}. x _ {2} + x) d \theta d x, \\ \end{array} +$$ + +for any $\theta_1, \theta_2 \in SO(2)$ and $x_1, x_2 \in \mathbb{R}^2$ . + +# D Invariance of the $G$ -Triple Correlation + +Consider a real signal $\Theta$ defined over a group $G$ . The $G$ -Triple Correlation is invariant to group actions on the domain of the signal $\Theta$ as shown in [28]. + +Proposition 4. Consider two real signals $\Theta_1, \Theta_2$ defined over a group $G$ . If there exists $h \in G$ such that one signal is obtained from the other by a group action, i.e., $\Theta_2 = L_h[\Theta_1]$ , then $\tau_{\Theta_1} = \tau_{\Theta_2}$ . + +We recall the proof of [28] below. + +Proof. Consider two real signals $\Theta_1, \Theta_2$ defined over a group $G$ , such that $\Theta_2 = L_h[\Theta_1]$ for a group action $L_h$ of group element $h$ . We show that this implies that $\tau_{\Theta_1} = \tau_{\Theta_2}$ . 
+ +Taking $g_{1}, g_{2} \in G$ , we have: + +$$ +\begin{array}{l} \tau_ {\Theta_ {2}} (g _ {1}, g _ {2}) = \int_ {g \in G} \Theta_ {2} (g) \Theta_ {2} (g g _ {1}) \Theta_ {2} (g g _ {2}) d g \\ = \int_ {g \in G} L _ {h} [ \Theta_ {1} ] (g) L _ {h} [ \Theta_ {1} ] (g g _ {1}) L _ {h} [ \Theta_ {1} ] (g g _ {2}) d g \\ = \int_ {g \in G} \Theta_ {1} (h g) \Theta_ {1} (h g g _ {1}) \Theta_ {1} (h g g _ {2}) d g \\ = \int_ {g \in G} \Theta_ {1} (g) \Theta_ {1} \left(g g _ {1}\right) \Theta_ {1} \left(g g _ {2}\right) d g \\ = \tau_ {\Theta_ {1}} (g _ {1}, g _ {2}). \\ \end{array} +$$ + +where we use the change of variable $hg \to g$ . + +This proves the invariance of the $G$ -TC with respect to group actions on the signals. + +![](images/efde1d05a28682efc380aa10cf443f56b86951435fb9dec5de2195cef76ffe80.jpg) + +# E Symmetries of the $G$ -Triple Correlation + +The $G$ -Triple Correlation ( $G$ -TC) enjoys some symmetries that we can leverage to avoid computing it for each pair of group elements (which would represent $|G|^2$ computations), hence making the feedforward pass more efficient. + +These symmetries are given in the main text. We recall them here for completeness. + +Proposition 5. Consider two transformations $g_1, g_2 \in G$ . The $G$ -Triple Correlation of a signal $\Theta$ has the following symmetry: + +$$ +\left(s 1\right) \quad T _ {\Theta} \left(g _ {1}, g _ {2}\right) = T _ {\Theta} \left(g _ {2}, g _ {1}\right). +$$ + +If $G$ is commutative, the $G$ -Triple Correlation of a real signal has the following additional symmetries: + +$$ +\left(s 2\right) \quad T _ {\Theta} \left(g _ {1}, g _ {2}\right) = T _ {\Theta} \left(g _ {1} ^ {- 1}, g _ {2} g _ {1} ^ {- 1}\right) = T _ {\Theta} \left(g _ {2} g _ {1} ^ {- 1}, g _ {1} ^ {- 1}\right) = T _ {\Theta} \left(g _ {1} g _ {2} ^ {- 1}, g _ {2} ^ {- 1}\right) = T _ {\Theta} \left(g _ {2} ^ {- 1}, g _ {1} g _ {2} ^ {- 1}\right). +$$ + +Our proof extends the proof given in [39] for the group of translations. 
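Before giving the formal proof, these identities can be sanity-checked numerically on a small commutative group. The sketch below is our own illustration, not taken from the paper's codebase: it evaluates the discrete $T_\Theta$ on the cyclic group $\mathbb{Z}_n$ (addition modulo $n$, so $g^{-1} = -g \bmod n$) and verifies (s1) and the first identity of (s2) for every pair of group elements.

```python
def tc_zn(theta, g1, g2):
    """Triple correlation of a real signal on Z_n: (1/|G|) sum_g theta(g) theta(g+g1) theta(g+g2)."""
    n = len(theta)
    return sum(theta[g] * theta[(g + g1) % n] * theta[(g + g2) % n]
               for g in range(n)) / n

theta = [0.5, -1.0, 2.0, 1.5, 0.0]   # an arbitrary real signal on Z_5
n = len(theta)
for g1 in range(n):
    for g2 in range(n):
        # (s1): swapping the two arguments leaves T unchanged.
        assert abs(tc_zn(theta, g1, g2) - tc_zn(theta, g2, g1)) < 1e-9
        # (s2), first identity: T(g1, g2) = T(g1^{-1}, g2 g1^{-1}); in Z_n, g^{-1} = -g mod n.
        assert abs(tc_zn(theta, g1, g2)
                   - tc_zn(theta, (-g1) % n, (g2 - g1) % n)) < 1e-9
```

For a non-commutative group, only (s1) is expected to survive this check, consistent with the proposition above.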
+ +Proof. Consider two transformations $g_1, g_2 \in G$. + +Symmetry (s1) relies on the fact that $\Theta(gg_2)$ and $\Theta(gg_1)$ are scalar values that commute: + +$$
T_{\Theta}(g_2, g_1) = \frac{1}{|G|} \sum_{g \in G} \Theta(g) \Theta(g g_2) \Theta(g g_1) = \frac{1}{|G|} \sum_{g \in G} \Theta(g) \Theta(g g_1) \Theta(g g_2) = T_{\Theta}(g_1, g_2).
$$ + +For symmetry (s2), we assume that $G$ is commutative. We prove the first equality: + +$$
\begin{array}{l} T_{\Theta}(g_1^{-1}, g_2 g_1^{-1}) = \frac{1}{|G|} \sum_{g \in G} \Theta(g) \Theta(g g_1^{-1}) \Theta(g g_2 g_1^{-1}) \\ = \frac{1}{|G|} \sum_{g' \in G} \Theta(g' g_1) \Theta(g') \Theta(g' g_1 g_2 g_1^{-1}) \quad (\text{with } g' = g g_1^{-1} \text{, i.e., } g = g' g_1) \\ = \frac{1}{|G|} \sum_{g' \in G} \Theta(g' g_1) \Theta(g') \Theta(g' g_2) \quad (G \text{ commutative: } g_2 g_1^{-1} = g_1^{-1} g_2) \\ = \frac{1}{|G|} \sum_{g \in G} \Theta(g g_1) \Theta(g) \Theta(g g_2) \\ = \frac{1}{|G|} \sum_{g \in G} \Theta(g) \Theta(g g_1) \Theta(g g_2) \quad (\Theta \text{ takes on real values that commute}) \\ = T_{\Theta}(g_1, g_2). \end{array}
$$ + +The second equality of symmetry (s2) follows using (s1). The third and fourth equalities of symmetry (s2) follow from the same argument. + +This result and its proof are also valid for the extension of the $G$ -TC that we propose in Appendix B. They naturally emerge by replacing the regular representation by the induced representation in the proof above.
+ +Specifically, consider a signal $\Theta_{2} = \pi (g_{0})[\Theta_{1}]$ obtained from the action of $g_{0} = (h_0, t_0)$ on a signal $\Theta_{1}$. We want to show that $\tau_{\Theta_1} = \tau_{\Theta_2}$. The key ingredient of the proof is the change of variable within the integral $\int_{G}$, which follows the semi-direct product structure of $G$: + +$$
h' = h h_0 \quad \text{and} \quad t' = \phi(h) t_0 + t,
$$ + +where $\phi(h)$ is a matrix representing $h$ that acts on $\mathbb{Z}^2$ via matrix multiplication: e.g., a rotation matrix $R = \phi(r)$ in the case of $SE(n)$. This concludes the adaptation of the proof for the steerable case. + +# F Algorithmic Reduction + +In this section, we show that we can reduce the complexity of the $G$ -Triple Correlation of a real signal. This computational reduction requires that we consider, instead, the spectral dual of the $G$ -TC, the bispectrum [31]. In what follows, we consider a signal $\Theta$ defined over a finite group $G$. The signal can be real- or complex-valued. + +# F.1 Reduction for Commutative Groups + +Consider a commutative group $G$. The bispectrum for a signal $\Theta$ is defined over a pair of irreducible representations $\rho_1, \rho_2$ of the group $G$ as: + +$$
\beta(\Theta)_{\rho_1, \rho_2} = \mathcal{F}(\Theta)_{\rho_1}^{\dagger} \mathcal{F}(\Theta)_{\rho_2}^{\dagger} \mathcal{F}(\Theta)_{\rho_1 \rho_2} \quad \in \mathbb{C}, \tag{13}
$$ + +where $\mathcal{F}(\Theta)$ is the Fourier transform of $\Theta$, which generalizes the classical Fourier transform to signals defined over a group. We note that, in group theory, the irreducible representations (irreps) of commutative groups map to scalar values. Hence, the bispectrum is a complex scalar in this case. + +For a discrete commutative group, the number of irreps is equal to the size of the group.
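To make Eq. (13) concrete in the discrete case: for the cyclic group $\mathbb{Z}_n$, the irreps are the Fourier modes $\rho_k(g) = e^{-2\pi i k g / n}$, the Fourier transform is the ordinary DFT, and the product of irreps $\rho_{k_1}\rho_{k_2}$ is $\rho_{(k_1+k_2) \bmod n}$. The pure-Python sketch below is our own illustration (all names are ours); it computes this bispectrum and checks its invariance to the group action, i.e., to cyclic shifts.

```python
import cmath

def dft(theta):
    """DFT over Z_n: F_k = sum_g theta(g) e^{-2 pi i k g / n} (the irreps of Z_n)."""
    n = len(theta)
    return [sum(theta[g] * cmath.exp(-2j * cmath.pi * k * g / n) for g in range(n))
            for k in range(n)]

def bispectrum_zn(theta):
    """Eq. (13) for G = Z_n: beta_{k1,k2} = conj(F_k1) conj(F_k2) F_{(k1+k2) mod n}."""
    n = len(theta)
    F = dft(theta)
    return [[F[k1].conjugate() * F[k2].conjugate() * F[(k1 + k2) % n]
             for k2 in range(n)] for k1 in range(n)]

theta = [1.0, 3.0, -2.0, 0.5, 2.0]
shifted = theta[2:] + theta[:2]          # the same signal, cyclically shifted by 2
b1, b2 = bispectrum_zn(theta), bispectrum_zn(shifted)
# The bispectrum is invariant to the group action (here, cyclic shifts).
assert all(abs(b1[i][j] - b2[i][j]) < 1e-8 for i in range(5) for j in range(5))
```

The shift phases of the three Fourier factors cancel exactly, which is why the assertion holds for any shift and any signal.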
[KondorThesis] proved that, for cyclic groups, it is enough to compute $|G| + 1$ bispectral coefficients to fully describe the signal $\Theta$ up to group action, instead of the $|G|^2$ that would otherwise be required from its definition. + +# F.2 Reduction for Non-Commutative Groups + +Consider a non-commutative group $G$ . The bispectrum of a signal $\Theta$ is defined over a pair of irreducible representations $\rho_1, \rho_2$ of $G$ as: + +$$ +\beta (\Theta) _ {\rho_ {1}, \rho_ {2}} = [ \mathcal {F} (\Theta) _ {\rho_ {1}} \otimes \mathcal {F} (\Theta) _ {\rho_ {2}} ] ^ {\dagger} C _ {\rho_ {1}, \rho_ {2}} \Big [ \bigoplus_ {\rho \in \rho_ {1} \otimes \rho_ {2}} \mathcal {F} (\Theta) _ {\rho} \Big ] C _ {\rho_ {1}, \rho_ {2}} ^ {\dagger} \qquad \in \mathbb {C} ^ {D _ {1} D _ {2} \times D _ {1} D _ {2}}, +$$ + +where $\otimes$ is the tensor product, and $\oplus$ is a direct sum over irreps. The unitary Clebsch-Gordan matrix $C_{\rho_1,\rho_2}$ is analytically defined for each pair of representations $\rho_1,\rho_2$ as: + +$$ +\left(\rho_ {1} \otimes \rho_ {2}\right) (g) = C _ {\rho_ {1}, \rho_ {2}} ^ {\dagger} \left[ \bigoplus_ {\rho \in \rho_ {1} \otimes \rho_ {2}} \rho (g) \right] C _ {\rho_ {1}, \rho_ {2}}. \tag {14} +$$ + +We note that the irreps of non-commutative groups map to matrices, hence the bispectrum is a complex matrix in this case. + +We provide an algorithmic approach reducing the computational complexity of the bispectrum for non-commutative finite groups. We show that we can recover the signal $\Theta$ from a small subset of its bispectral coefficients. That is, we can recover $\Theta$ from coefficients $\beta (\Theta)_{\rho_1,\rho_2}$ computed for a few, well-chosen irreps $\rho_{1},\rho_{2}$ . In practice, we only need to compute a few bispectral coefficients to have a complete invariant of the signal $\Theta$ — hence reducing the computational complexity of the layer. 
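To see why a few bispectral coefficients suffice, consider again the commutative case, where all irreps are one-dimensional and the matrix manipulations reduce to scalar arithmetic. The sketch below is our own illustration on $\mathbb{Z}_n$, anticipating the recovery argument developed next: it recovers the DC Fourier coefficient of a real signal from $\beta_{\rho_0,\rho_0}$ via a cube root, and the squared modulus of every other coefficient from $\beta_{\rho,\rho_0}$.

```python
import cmath

def dft(theta):
    n = len(theta)
    return [sum(theta[g] * cmath.exp(-2j * cmath.pi * k * g / n) for g in range(n))
            for k in range(n)]

theta = [2.0, 1.0, 3.0, 0.5]                 # a real signal on Z_4
F = dft(theta)

# beta_{0,0} = conj(F_0) conj(F_0) F_0; for this real signal F_0 = sum(theta) > 0,
# so taking the cube root of the modulus recovers F_0 exactly.
beta_00 = F[0].conjugate() * F[0].conjugate() * F[0]
F0_rec = abs(beta_00) ** (1 / 3)
assert abs(F0_rec - sum(theta)) < 1e-9

# beta_{k,0} / conj(F_0) = conj(F_k) F_k = |F_k|^2: the one-dimensional analogue
# of the matrix identity used below to recover the remaining coefficients.
for k in range(4):
    beta_k0 = F[k].conjugate() * F[0].conjugate() * F[k]
    assert abs(beta_k0 / F[0].conjugate() - abs(F[k]) ** 2) < 1e-9
```

As in the text, the phase of each $F_k$ is recovered only up to the global unidentifiability of the group action.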
+ +Generalizing [28], we will show that a subset of bispectral coefficients allows us to recover the Fourier transform of the signal $\Theta$ for every irreducible representation $\rho$ of the group. This will show that we can recover the signal $\Theta$ itself by applying the inverse Fourier transform. + +We first show relationships between the bispectral coefficients and the Fourier coefficients of the signal $\Theta$ in the following Lemma. We denote by $\rho_0$ the trivial representation of the group $G$, which is the representation that sends every group element to the scalar 1. + +Lemma 1. Consider $\rho_0$ the trivial representation of the group $G$. Consider $\rho$ any other irreducible representation, of dimension $D$. The bispectral coefficients read: + +$$
\beta_{\rho_0, \rho_0} = |\mathcal{F}(\Theta)_{\rho_0}|^2 \mathcal{F}(\Theta)_{\rho_0} \in \mathbb{C}
$$ + +$$
\beta_{\rho, \rho_0} = \mathcal{F}(\Theta)_{\rho_0}^{\dagger} \mathcal{F}(\Theta)_{\rho}^{\dagger} \mathcal{F}(\Theta)_{\rho} \in \mathbb{C}^{D \times D}.
$$ + +Here, and in what follows, we denote by $\beta$ the bispectrum of $\Theta$, i.e., we omit the argument $\Theta$ for clarity of notation. + +Proof. For $\rho_0$ the trivial representation and $\rho$ an irrep, the Clebsch-Gordan (CJ) matrix $C_{\rho \rho_0}$ is the identity matrix, and the matrix $C_{\rho_0 \rho_0}$ is the scalar 1.
+ +We compute the bispectral coefficient $\beta_{\rho_0, \rho_0}$: + +$$
\begin{array}{l} \beta_{\rho_0, \rho_0} = (\mathcal{F}(\Theta)_{\rho_0} \otimes \mathcal{F}(\Theta)_{\rho_0})^{\dagger} C_{\rho_0 \rho_0} \left[ \bigoplus_{\rho \in \rho_0 \otimes \rho_0} \mathcal{F}(\Theta)_{\rho} \right] C_{\rho_0 \rho_0}^{\dagger} \\ = (\mathcal{F}(\Theta)_{\rho_0} \otimes \mathcal{F}(\Theta)_{\rho_0})^{\dagger} C_{\rho_0 \rho_0} \mathcal{F}(\Theta)_{\rho_0} C_{\rho_0 \rho_0}^{\dagger} \quad (\rho_0 \otimes \rho_0 = \rho_0 \text{, which is irreducible}) \\ = (\mathcal{F}(\Theta)_{\rho_0}^2)^{\dagger} C_{\rho_0 \rho_0} \mathcal{F}(\Theta)_{\rho_0} C_{\rho_0 \rho_0}^{\dagger} \quad (\mathcal{F}(\Theta)_{\rho_0} \text{ is a scalar, for which the tensor product is multiplication}) \\ = |\mathcal{F}(\Theta)_{\rho_0}|^2 \mathcal{F}(\Theta)_{\rho_0} \quad (\text{CJ matrices are equal to } 1). \end{array}
$$ + +For any irreducible representation $\rho$ of dimension $D$, we have: + +$$
\begin{array}{l} \beta_{\rho, \rho_0} = (\mathcal{F}(\Theta)_{\rho} \otimes \mathcal{F}(\Theta)_{\rho_0})^{\dagger} C_{\rho \rho_0} \left[ \bigoplus_{\rho' \in \rho \otimes \rho_0} \mathcal{F}(\Theta)_{\rho'} \right] C_{\rho \rho_0}^{\dagger} \\ = \mathcal{F}(\Theta)_{\rho_0}^{\dagger} \mathcal{F}(\Theta)_{\rho}^{\dagger} C_{\rho \rho_0} \mathcal{F}(\Theta)_{\rho} C_{\rho \rho_0}^{\dagger} \quad (\mathcal{F}(\Theta)_{\rho_0} \text{ is a scalar and } \rho \otimes \rho_0 = \rho) \\ = \mathcal{F}(\Theta)_{\rho_0}^{\dagger} \mathcal{F}(\Theta)_{\rho}^{\dagger} \mathcal{F}(\Theta)_{\rho} \quad (\text{CJ matrices are identity matrices}). \end{array}
$$ + +![](images/43edac9166ab7ece2f2821b868aca41141d8eb2b5b4f85a167569d18c53b6968.jpg) + +Next, we summarize our main result. + +Proposition 6. We can recover the Fourier coefficients of a signal $\Theta$ from only $L$ bispectral coefficients, where $L$ is a number computed from the Kronecker product table of the group $G$. + +In the proof, we propose an algorithmic method that iteratively computes bispectral coefficients until the Fourier coefficients of the signal are all recovered. We note that, for arbitrary groups and their representations, Clebsch-Gordan (CJ) matrices are not known in general, yet they can be computed numerically. Thus, the proof below assumes that the CJ matrices are given for the group $G$ of interest. + +# Proof. Algorithmic Approach. + +First, we show how we can recover the first Fourier coefficient (DC component), i.e., the Fourier transform of the signal at the trivial representation $\rho_0$, from a single bispectral coefficient. + +$$
\mathcal{F}(\Theta)_{\rho_0} = \hat{f}_{\rho_0} = \int_G \Theta(g) \rho_0(g) \, dg = \int_G \Theta(g) \, dg \in \mathbb{C}. \tag{15}
$$ + +Using Lemma 1, we can recover this Fourier component from the bispectral coefficient $\beta_{\rho_0, \rho_0}$, as: + +$$
|\mathcal{F}(\Theta)_{\rho_0}| = (|\beta_{\rho_0, \rho_0}|)^{1/3}, \quad \arg(\mathcal{F}(\Theta)_{\rho_0}) = \arg(\beta_{\rho_0, \rho_0}). \tag{16}
$$ + +Next, consider an irreducible representation $\rho_1$ of dimension $D$. We seek to recover the Fourier coefficient $\mathcal{F}(\Theta)_{\rho_1}$. This Fourier coefficient is a matrix in $\mathbb{C}^{D \times D}$.
Using Lemma 1, we can recover it from a single bispectral coefficient: + +$$
\mathcal{F}(\Theta)_{\rho}^{\dagger} \mathcal{F}(\Theta)_{\rho} = \frac{\beta_{\rho, \rho_0}}{\mathcal{F}(\Theta)_{\rho_0}^{\dagger}} \in \mathbb{C}^{D \times D}, \tag{17}
$$ + +since we have already recovered the Fourier coefficient $\mathcal{F}(\Theta)_{\rho_0}$. The matrix $\mathcal{F}(\Theta)_{\rho}^{\dagger}\mathcal{F}(\Theta)_{\rho}$ is Hermitian and positive semi-definite, and thus admits a square root, which we denote $\mathcal{F}(\Theta)_{\rho}^{\prime}$: + +$$
\mathcal{F}(\Theta)_{\rho}^{\prime} = \left( \frac{\beta_{\rho, \rho_0}}{\mathcal{F}(\Theta)_{\rho_0}^{\dagger}} \right)^{1/2}. \tag{18}
$$ + +The square root $\mathcal{F}(\Theta)_{\rho}'$ corresponds to $\mathcal{F}(\Theta)_{\rho}$ only up to a matrix factor. This non-identifiability is similar to the one arising in the commutative case [28]. Specifically, consider the singular value decomposition (SVD) of $\mathcal{F}(\Theta)_{\rho}$: + +$$
\begin{array}{l} \mathcal{F}(\Theta)_{\rho} = U \Sigma V^{\dagger} \\ \Rightarrow \mathcal{F}(\Theta)_{\rho}^{\dagger} \mathcal{F}(\Theta)_{\rho} = (U \Sigma V^{\dagger})^{\dagger} U \Sigma V^{\dagger} = V \Sigma^2 V^{\dagger} \\ \Rightarrow \mathcal{F}(\Theta)_{\rho}^{\prime} = V \Sigma V^{\dagger}. \end{array}
$$ + +Thus, we have $\mathcal{F}(\Theta)_{\rho} = U V^{\dagger} \mathcal{F}(\Theta)_{\rho}'$, where $U, V$ are unitary matrices that come from the (unknown) SVD of the (unknown) $\mathcal{F}(\Theta)_{\rho}$. By recovering $\mathcal{F}(\Theta)_{\rho}$ as $\mathcal{F}(\Theta)_{\rho}'$, we fix $U V^{\dagger} = I$. This mirrors the way in which [28] fixed $\phi = 0$ (a rotation of angle 0, i.e., the identity) in the commutative case. + +Next, we seek to find the remaining Fourier coefficients of the signal $\Theta$ from a limited subset of the bispectral coefficients.
To this aim, we denote by $\mathcal{R}$ the set of irreducible representations of the group $G$. We recall that, for a finite group $G$, the set $\mathcal{R}$ is also finite, with its size equal to the number of conjugacy classes of $G$. + +We consider the following bispectral coefficient: + +$$
\begin{array}{l} \beta_{\rho_1, \rho_1} = (\mathcal{F}(\Theta)_{\rho_1} \otimes \mathcal{F}(\Theta)_{\rho_1})^{\dagger} C_{\rho_1 \rho_1} \left[ \bigoplus_{\rho \in \rho_1 \otimes \rho_1} \mathcal{F}(\Theta)_{\rho} \right] C_{\rho_1 \rho_1}^{\dagger} \\ = (\mathcal{F}(\Theta)_{\rho_1} \otimes \mathcal{F}(\Theta)_{\rho_1})^{\dagger} C_{\rho_1 \rho_1} \left[ \bigoplus_{\rho \in \mathcal{R}} \mathcal{F}(\Theta)_{\rho}^{n_{\rho, \rho_1}} \right] C_{\rho_1 \rho_1}^{\dagger}, \end{array}
$$ + +where $n_{\rho, \rho_1}$ is the multiplicity of the irrep $\rho$ in the decomposition of $\rho_1 \otimes \rho_1$. This multiplicity is known, as it only depends on the group, not on the signal $\Theta$. + +We get an equation that allows us to recover additional Fourier coefficients of the signal $\Theta$: + +$$
\bigoplus_{\rho \in \mathcal{R}} \mathcal{F}(\Theta)_{\rho}^{n_{\rho, \rho_1}} = C_{\rho_1 \rho_1}^{-1} \left( \mathcal{F}(\Theta)_{\rho_1} \otimes \mathcal{F}(\Theta)_{\rho_1} \right)^{-\dagger} \beta_{\rho_1, \rho_1} C_{\rho_1 \rho_1}^{-\dagger}, \tag{19}
$$ + +where everything on the right-hand side is known. Therefore, every Fourier coefficient $\mathcal{F}(\Theta)_{\rho}$ that appears in the decomposition of $\rho_1 \otimes \rho_1$ into irreps $\rho$ can be computed, by reading off the elements of the block-diagonal matrix defined by the direct sum. We recover the Fourier coefficients $\mathcal{F}(\Theta)_{\rho}$ for which $n_{\rho, \rho_1} \neq 0$.
+ +We assume that this procedure provides at least one other Fourier coefficient, for an irrep $\rho_2$, which we fix. We can then compute the following bispectral coefficient: + +$$
\beta_{\rho_1, \rho_2} = (\mathcal{F}(\Theta)_{\rho_1} \otimes \mathcal{F}(\Theta)_{\rho_2})^{\dagger} C_{\rho_1 \rho_2} \left[ \bigoplus_{\rho \in \rho_1 \otimes \rho_2} \mathcal{F}(\Theta)_{\rho}^{n_{\rho, \rho_2}} \right] C_{\rho_1 \rho_2}^{\dagger},
$$ + +to get a new equation: + +$$
\bigoplus_{\rho \in \mathcal{R}} \mathcal{F}(\Theta)_{\rho}^{n_{\rho, \rho_2}} = C_{\rho_1 \rho_2}^{-1} \left( \mathcal{F}(\Theta)_{\rho_1} \otimes \mathcal{F}(\Theta)_{\rho_2} \right)^{-\dagger} \beta_{\rho_1, \rho_2} C_{\rho_1 \rho_2}^{-\dagger}, \tag{20}
$$ + +where everything on the right-hand side is known. Thus, every Fourier coefficient $\mathcal{F}(\Theta)_{\rho}$ that appears in the decomposition of $\rho_1 \otimes \rho_2$ into irreps can be recovered, by reading off the elements of the block-diagonal matrix. We get the $\mathcal{F}(\Theta)_{\rho}$ for which $n_{\rho, \rho_2} \neq 0$. We iterate this procedure to recover more Fourier coefficients. + +# Number of bispectral coefficients. + +Now, we show that our procedure can indeed recover all of the Fourier coefficients of the signal $\Theta$. Additionally, we show that it only requires a limited number $L$ of bispectral coefficients, where $L$ depends on the group $G$. Specifically, it depends on the Kronecker product table of $G$, which is the $|\mathcal{R}| \times |\mathcal{R}|$ table of the decomposition of the tensor product of two irreducible representations into a direct sum of irreps. In this table, the element at row $i$ and column $j$ lists the multiplicities of the irreps that appear in the decomposition of $\rho_i \otimes \rho_j$.
+ +Any entry of the Kronecker table, i.e., any multiplicity $m_{k} = n_{\rho_{k}}$ of $\rho_{k}$ in the decomposition $\tilde{\rho} = \rho_{i}\otimes \rho_{j}$, can be computed with a procedure inspired by [9] and described below. + +The procedure relies on the character $\chi_{\tilde{\rho}}(g) = \mathrm{Tr}(\tilde{\rho}(g))$ of the representation $\tilde{\rho}$ to be decomposed. From group theory, we know that the characters of irreps $\rho_i, \rho_j$ are orthogonal, in the following sense: + +$$
\left\langle \chi_{\rho_i}, \chi_{\rho_j} \right\rangle \equiv \frac{1}{|G|} \sum_{h \in G} \chi_{\rho_i}(h) \overline{\chi_{\rho_j}(h)} = \delta_{ij}. \tag{21}
$$ + +Thus, we can obtain the multiplicity of irrep $\rho_{k}$ in $\tilde{\rho}$ by computing the inner product with the $k$ -th character: + +$$
\left\langle \chi_{\tilde{\rho}}, \chi_{\rho_k} \right\rangle = \left\langle \chi_{\oplus_l m_l \rho_l}, \chi_{\rho_k} \right\rangle = \left\langle \sum_l m_l \chi_{\rho_l}, \chi_{\rho_k} \right\rangle = \sum_l m_l \left\langle \chi_{\rho_l}, \chi_{\rho_k} \right\rangle = m_k,
$$
| | $A_1$ | $A_2$ | $B_1$ | $B_2$ | $E$ |
| --- | --- | --- | --- | --- | --- |
| $A_1$ | (1,0,0,0,0) | (0,1,0,0,0) | (0,0,1,0,0) | (0,0,0,1,0) | (0,0,0,0,1) |
| $A_2$ | (0,1,0,0,0) | (1,0,0,0,0) | (0,0,0,1,0) | (0,0,1,0,0) | (0,0,0,0,1) |
| $B_1$ | (0,0,1,0,0) | (0,0,0,1,0) | (1,0,0,0,0) | (0,1,0,0,0) | (0,0,0,0,1) |
| $B_2$ | (0,0,0,1,0) | (0,0,1,0,0) | (0,1,0,0,0) | (1,0,0,0,0) | (0,0,0,0,1) |
| $E$ | (0,0,0,0,1) | (0,0,0,0,1) | (0,0,0,0,1) | (0,0,0,0,1) | (1,1,1,1,0) |
+ +Table 2: Kronecker table for the dihedral group $D_4$, which has 5 irreps called $A_1, A_2, B_1, B_2$ and $E$. + +using the fact that the trace of a direct sum equals the sum of the traces (i.e., $\chi_{\rho \oplus \rho^{\prime}} = \chi_{\rho} + \chi_{\rho^{\prime}}$). Thus, we can determine the Kronecker product table of interest. For example, the Kronecker product table for the dihedral group $D_4$ is shown in Table 2. + +The Kronecker product table shows us how many bispectral coefficients we need to complete our algorithmic procedure. Our procedure essentially uses a breadth-first search algorithm on the space of irreducible representations, starting with $\rho_{1}$ and using the tensor product with $\rho_{1}$ as the mechanism to explore the space. Whether this procedure succeeds in including all the irreps in $\mathcal{R}$ might depend on the group and its Kronecker (tensor) product table. Specifically, consider all the irreps that appear in the row corresponding to $\rho_{1}$ in the Kronecker product table. If these irreps do not cover the set $\mathcal{R}$ of all possible irreps, then the approach will need more than the tensor products of the form $\rho_{1} \otimes \rho_{j}$ to get all of the Fourier coefficients of the signal $\Theta$. + +□ + +We observe in our experiments that this procedure does indeed succeed in computing all of the Fourier coefficients of the signal $\Theta$ for most of the groups of interest. We provide detailed examples of these computations on our GitHub repository, for the dihedral groups $D_4$, $D_{16}$ and for the chiral octahedral group $O$. The procedure does not succeed in the case of the full octahedral group $O_h$. + +For the dihedral group $D_4$, which has $|\mathcal{R}| = 5$ irreps, our approach allows us to recover a signal on $D_4$ from only 3 bispectral coefficients, instead of $5^2 = 25$.
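The entries of Table 2 can be reproduced mechanically from the character table of $D_4$ using the inner product of Eq. (21). The pure-Python sketch below is our own illustration: the character table and conjugacy-class sizes hardcoded here are the standard ones for $D_4$ (classes $\{e\}$, $\{r^2\}$, $\{r, r^3\}$, $\{s, r^2 s\}$, $\{rs, r^3 s\}$), and the multiplicities are read off with a rounded character inner product.

```python
# Character table of D_4: rows are the irreps A1, A2, B1, B2, E;
# columns are the 5 conjugacy classes {e}, {r^2}, {r, r^3}, {s, r^2 s}, {rs, r^3 s}.
class_sizes = [1, 1, 2, 2, 2]                      # sizes sum to |D_4| = 8
chars = {
    "A1": [1,  1,  1,  1,  1],
    "A2": [1,  1,  1, -1, -1],
    "B1": [1,  1, -1,  1, -1],
    "B2": [1,  1, -1, -1,  1],
    "E":  [2, -2,  0,  0,  0],
}
irreps = ["A1", "A2", "B1", "B2", "E"]

def multiplicities(rho_i, rho_j):
    """Multiplicity vector of rho_i (x) rho_j, via the character inner product."""
    prod = [chars[rho_i][c] * chars[rho_j][c] for c in range(5)]  # chi of the tensor product
    return tuple(round(sum(class_sizes[c] * prod[c] * chars[rho_k][c]
                           for c in range(5)) / 8)
                 for rho_k in irreps)

# Reproduces the last row of Table 2: E (x) E = A1 + A2 + B1 + B2, and E (x) A1 = E.
assert multiplicities("E", "E") == (1, 1, 1, 1, 0)
assert multiplicities("E", "A1") == (0, 0, 0, 0, 1)
```

Evaluating `multiplicities` over all pairs of irreps regenerates Table 2 row by row.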
For the dihedral group $D_{16}$, which has $|\mathcal{R}| = 11$ irreps, we recover the signal from 9 bispectral coefficients instead of $11^2 = 121$. For the octahedral group, which has $|\mathcal{R}| = 5$ irreps, we use 4 bispectral coefficients instead of $5^2 = 25$. This represents a substantial complexity reduction. More theoretical work is needed to prove that our approach applies to a wide range of discrete groups, or to further generalize it for groups such as $O_h$. + +# G Training and Implementation Details + +The code to implement all models and experiments in this paper can be found at github.com/sophiaas/gtc-invariance. + +For all experiments, we use near-identical, parameter-matched architectures in which only the type of invariant map differs. To isolate the effects of the invariant map, all models are composed of a single $G$-Conv block followed by either Max $G$-Pooling or the $G$-TC layer, and an MLP Classifier. Here, we perform only pure $G$-Conv, without translation. Thus, we use filters the same size as the input in all models. The $G$-Conv block is composed of a $G$-Conv layer, a batch norm layer, and an optional nonlinearity. For the Max $G$-Pool model, ReLU is used as the nonlinearity. Given the third-order nonlinearity of the $G$-TC, we omit the nonlinearity in the $G$-Conv block in the $G$-TC Model. The $G$-TC layer increases the dimensionality of the output of the $G$-Conv block; consequently, the input dimension of the first layer of the MLP is larger and the weight matrix contains more parameters than for the Max $G$-Pool model. To compensate for this, we increase the dimension of the output of the first MLP layer in the Max Model, to match the overall number of parameters.
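The parameter matching described above amounts to solving for the Max model's first-layer width so that the two first-layer weight matrices have the same number of entries. A small sketch of that arithmetic (our own illustration with made-up dimensions; the paper's actual layer sizes are listed per experiment below):

```python
def matched_width(d_in_tc, d_in_max, h_tc):
    """Width h_max such that a (d_in_max x h_max) weight matrix has (about) as
    many parameters as the TC model's (d_in_tc x h_tc) first MLP layer."""
    return round(d_in_tc * h_tc / d_in_max)

# Hypothetical sizes: the G-TC layer inflates the MLP input 10x relative to max-pooling.
d_tc, d_max, h_tc = 1920, 192, 64
h_max = matched_width(d_tc, d_max, h_tc)
assert d_max * h_max == d_tc * h_tc   # parameter counts match exactly in this example
```

With real layer sizes the match is generally approximate, since widths are rounded to integers.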
+ +All models are trained with a cross-entropy loss, using the Adam optimizer, a learning rate of 0.00005, weight decay of 0.00001, betas of [0.9, 0.999], epsilon of $10^{-8}$, a reduce-on-plateau learning rate scheduler with a factor of 0.5, patience of 2 epochs, and a minimum learning rate of 0.0.0001. Each model is trained with four random seeds [0, 1, 2, 3], and results are averaged across seeds. + +# G.1 MNIST Experiments: $SO(2)$ and $O(2)$ on $\mathbb{R}^2$ + +# G.1.1 Datasets + +The $SO(2)$-MNIST dataset is generated by applying a random 2D planar rotation to each digit in the MNIST [36] training and test datasets. This results in training and test sets with the standard sizes of 60,000 and 10,000. For $O(2)$-MNIST, each image is randomly flipped and rotated—i.e., transformed by a random element of the group $O(2)$. A random $20\%$ of the training dataset is set aside for model validation and is used to tune hyperparameters. The remaining $80\%$ is used for training. Images are additionally downsized with interpolation to $16 \times 16$ pixels. + +# G.1.2 Models and Training + +# C8-CNN + +TC. The $G$-Conv block consists of a $C8$-Conv block using 24 filters followed by Batch Norm. Next, the $G$-TC Layer is applied. The output is raveled and passed to a three-layer MLP using 1d Batch Norm and an ELU nonlinearity after each linear layer, with all three layers having output dimension 64. Finally, a linear layer is applied, with output dimension 10, for the 10 object categories. + +Max. The $G$-Conv block consists of a $C8$-Conv block using 24 filters followed by Batch Norm and a ReLU nonlinearity. Next, a Max $G$-Pooling Layer is applied. The output is raveled and passed to a three-layer MLP using 1d Batch Norm and an ELU nonlinearity after each linear layer. The first layer has output dimension 275, to compensate for the difference in output size of the $G$-TC Layer. The remaining two layers have output dimension 64.
Finally, a linear layer is applied, with output dimension 10, for the 10 object categories. + +# D16-CNN + +TC. The $G$-Conv block consists of a $D16$-Conv block using 24 filters followed by Batch Norm. Next, the $G$-TC Layer is applied. The output is raveled and passed to a three-layer MLP using 1d Batch Norm and an ELU nonlinearity after each linear layer, with all three layers having output dimension 64. Finally, a linear layer is applied, with output dimension 10, for the 10 object categories. + +Max. The $G$-Conv block consists of a $D16$-Conv block using 24 filters followed by Batch Norm and a ReLU nonlinearity. Next, a Max $G$-Pooling Layer is applied. The output is raveled and passed to a three-layer MLP using 1d Batch Norm and an ELU nonlinearity after each linear layer. The first layer has output dimension 2,380, to compensate for the difference in output size of the $G$-TC Layer. The remaining two layers have output dimension 64. Finally, a linear layer is applied, with output dimension 10, for the 10 object categories. + +# G.2 ModelNet10 Experiments: $O$ and $O_h$ acting on $\mathbb{R}^3$ + +# G.2.1 Datasets + +The ModelNet10 dataset is downsampled and voxelized to a $10 \times 10 \times 10$ grid. For the $O$-ModelNet10 dataset, each datapoint is transformed by a random cubic rotation. For the $O_h$-ModelNet10 dataset, each datapoint is transformed by a random cubic rotation and flip. The standard training and testing sets are used. A random $20\%$ of the training dataset is set aside for model validation and is used to tune hyperparameters. The remaining $80\%$ is used for training. + +# G.2.2 Models and Training + +# O-CNN + +TC. The $G$-Conv block consists of an $O$-Conv block using 24 filters followed by an IID 3D Batch Norm layer. Next, the $G$-TC Layer is applied.
The output is raveled and passed to a three-layer MLP with 1d Batch Norm and an ELU nonlinearity after each linear layer, with all three layers having output dimension 64. Finally, a linear layer is applied, with output dimension 10, for the 10 object categories. + +Max. The $G$ -Conv block consists of an $O$ -Conv block using 24 filters followed by an IID 3D Batch Norm Layer and a ReLU nonlinearity. Next, a Max $G$ -Pooling Layer is applied. The output is raveled and passed to a three-layer MLP with 1d Batch Norm and an ELU nonlinearity after each linear layer. The first layer has output dimension 5,420, to compensate for the difference in output size of the $G$ -TC Layer. The remaining two layers have output dimension 64. Finally, a linear layer is applied, with output dimension 10, for the 10 object categories. + +# $O_{h}$ -CNN + +TC. The $G$ -Conv block consists of an $O_h$ -Conv block using 24 filters followed by an IID 3D Batch Norm Layer. Next, the $G$ -TC Layer is applied. The output is raveled and passed to a three-layer MLP, with all three layers having output dimension 64. Finally, a linear layer is applied, with output dimension 10, for the 10 object categories. + +Max. The $G$ -Conv block consists of an $O_h$ -Conv block using 24 filters followed by an IID 3D Batch Norm Layer and a ReLU nonlinearity. Next, a Max $G$ -Pooling Layer is applied. The output is raveled and passed to a three-layer MLP. The first layer has output dimension 20,000, to compensate for the difference in output size of the $G$ -TC Layer. The remaining two layers have output dimension 64. Finally, a linear layer is applied, with output dimension 10, for the 10 object categories.
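The architectural contrast above (TC vs. Max) comes down to how the group axis is reduced to a $G$-invariant output. A minimal sketch for the cyclic group $C8$ acting on its regular representation by cyclic shifts; the triple-correlation form of the $G$-TC layer here is an assumed textbook definition for abelian groups, not the exact implementation used in these experiments:

```python
# Sketch of two invariant reductions over the group axis, for the cyclic
# group C8 acting on its regular representation by cyclic shifts.
# Max G-Pooling is standard; the triple-correlation form of the G-TC
# layer is an assumed abelian-group definition, not the paper's code.

def shift(f, t):
    """Act on a C8 feature vector f by group element t (cyclic shift)."""
    n = len(f)
    return [f[(g + t) % n] for g in range(n)]

def max_g_pool(f):
    """Max G-Pooling: collapse the group axis to one invariant scalar."""
    return max(f)

def g_triple_correlation(f):
    """Triple correlation on C8: T(g1, g2) = sum_g f(g) f(g+g1) f(g+g2).
    It keeps |G|^2 invariant entries per filter instead of 1, which is
    why the Max variants above widen their first MLP layer 'to
    compensate for the difference in output size'."""
    n = len(f)
    return [sum(f[g] * f[(g + g1) % n] * f[(g + g2) % n] for g in range(n))
            for g1 in range(n) for g2 in range(n)]
```

Both reductions are unchanged when the input features are transformed by any group element, which is the invariance property the two heads rely on.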
\ No newline at end of file diff --git a/ageneralframeworkforrobustginvarianceingequivariantnetworks/images.zip b/ageneralframeworkforrobustginvarianceingequivariantnetworks/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c12b4d8c9f6686b2292d93c7119b725b723d2539 --- /dev/null +++ b/ageneralframeworkforrobustginvarianceingequivariantnetworks/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a977e8f2c325009661fe5fe3597744980a9de9de938b70a4cd2fc6a1adc4cd7 +size 801492 diff --git a/ageneralframeworkforrobustginvarianceingequivariantnetworks/layout.json b/ageneralframeworkforrobustginvarianceingequivariantnetworks/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..672a3d4d644e23d7ae3be15bbf9e2b320781e1e2 --- /dev/null +++ b/ageneralframeworkforrobustginvarianceingequivariantnetworks/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f68f55bbcae50b3d06988fee0f31e8c615b034e468884ca2ec4df47b24084979 +size 1178145 diff --git a/ageneraltheoryofcorrectincorrectandextrinsicequivariance/02231e40-dbf6-4835-849f-646ca1c5dc6e_content_list.json b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/02231e40-dbf6-4835-849f-646ca1c5dc6e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4b048671d39008cb6926f643fccc342c3848328c --- /dev/null +++ b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/02231e40-dbf6-4835-849f-646ca1c5dc6e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bca422dec58a8ead1ac228a400ad0ff209be4384c7a8da9f640a6972c4b9b4f6 +size 150999 diff --git a/ageneraltheoryofcorrectincorrectandextrinsicequivariance/02231e40-dbf6-4835-849f-646ca1c5dc6e_model.json b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/02231e40-dbf6-4835-849f-646ca1c5dc6e_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..a17b7a27d895132555d794fca2db8c7955a3a09c --- /dev/null +++ b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/02231e40-dbf6-4835-849f-646ca1c5dc6e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b154681e7bde5f95aca08a55d006daa153c95b62786037007e18b733c1687e68 +size 184293 diff --git a/ageneraltheoryofcorrectincorrectandextrinsicequivariance/02231e40-dbf6-4835-849f-646ca1c5dc6e_origin.pdf b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/02231e40-dbf6-4835-849f-646ca1c5dc6e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..03da1efbce766cd5a8f7724169487f4fbfe60807 --- /dev/null +++ b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/02231e40-dbf6-4835-849f-646ca1c5dc6e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a604f2bc62f58cb9319acedfab6b59b201ff3a2004c2031577bba8f5b0b964d7 +size 5959549 diff --git a/ageneraltheoryofcorrectincorrectandextrinsicequivariance/full.md b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1534b9c8ea877a8b08291854af7ae53ab756527f --- /dev/null +++ b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/full.md @@ -0,0 +1,751 @@ +# A General Theory of Correct, Incorrect, and Extrinsic Equivariance + +Dian Wang $^{1}$ Xupeng Zhu $^{1}$ Jung Yeon Park $^{1}$ Mingxi Jia $^{2*}$ Guanang Su $^{3*}$ Robert Platt $^{1}$ Robin Walters $^{1}$ + +$^{1}$ Northeastern University $^{2}$ Brown University $^{3}$ University of Minnesota $\{\text{wang.dian,zhu.xup, park.jungy,r.platt,r.walters}\} \text{@northeastern.edu}$ mingxi_jia@brown.edu su000265@umn.edu + +# Abstract + +Although equivariant machine learning has proven effective at many tasks, success depends heavily on the assumption that the ground truth function is symmetric over the entire domain matching the symmetry in an equivariant neural 
network. A missing piece in the equivariant learning literature is the analysis of equivariant networks when symmetry exists only partially in the domain. In this work, we present a general theory for such a situation. We propose pointwise definitions of correct, incorrect, and extrinsic equivariance, which allow us to quantify continuously the degree of each type of equivariance a function displays. We then study the impact of various degrees of incorrect or extrinsic symmetry on model error. We prove error lower bounds for invariant or equivariant networks in classification or regression settings with partially incorrect symmetry. We also analyze the potentially harmful effects of extrinsic equivariance. Experiments validate these results in three different environments. + +# 1 Introduction + +Equivariant neural networks [9, 10] have proven to be an effective way to improve generalization and sample efficiency in many machine learning tasks. This is accomplished by encoding task-level symmetry into the structure of the network architecture so that the model does not need to explicitly learn the symmetry from the data. However, encoding a fixed type of symmetry like this can be limiting when the model symmetry does not exactly match the symmetry of the underlying function being modeled, i.e., when there is a symmetry mismatch. For example, consider the digit image classification task. Is it helpful to model this problem using + +![](images/eed57e33f7297b6130fa67e1be0f16f419c5ab5c519c23731780021da61dcaee.jpg) +Figure 1: An example of correct, incorrect, and extrinsic equivariance. The ground truth function $f(x)$ is shown in black and its probability density function $p(x)$ is shown in orange. If we model $f(x)$ using a $G$ -invariant network where $G$ is a reflection group that negates $x$ , different $f(x)$ and $p(x)$ will lead to correct, incorrect, and extrinsic equivariance. See Section 3 for details. 
+ +a model that is invariant to 180-degree rotation of the image? For some digits, the label is invariant (e.g., 0 and 8). However, for other digits, the label changes under rotation (e.g., 6 and 9), suggesting that a rotationally symmetric model would be inappropriate here. However, recent work [58] suggests that this is not necessarily the case - symmetric models are sometimes helpful even when a symmetry mismatch exists between the problem and model. This raises the question - do the advantages obtained by using a symmetric model outweigh the errors introduced by the symmetry mismatch? + +This paper makes four main contributions towards this problem. First, this paper extends the definitions for types of model symmetry with respect to true symmetry introduced in Wang et al. [58]. They classify models as having correct, incorrect, or extrinsic equivariance (see Figure 1), where correct means the ground truth function has the same symmetry as the equivariant model, incorrect means the ground truth function disagrees with the symmetry in the model, and extrinsic means the model symmetry transforms in-distribution data to out-of-distribution data. We generalize this system into a continuum of equivariance types to reflect the fact that a single task may have different proportions of correct, incorrect, and extrinsic symmetry across its domain. For example, in the digit classification task, 0 has correct equivariance, 6 has incorrect equivariance, and 4 has extrinsic equivariance. + +Our second contribution is to introduce an analytical lower bound on model error in classification tasks resulting from incorrect model symmetry. This result can help guide model selection by quantifying error resulting from incorrect equivariance constraints. Our result generalizes that of Wang et al. [58] by removing the simplifying assumption that data density over the domain is group invariant. 
We prove that the minimum error of an invariant classifier can be realized by assigning all data points in the same group orbit the label with the majority of the data density (Theorem 4.3). + +Our third contribution is to develop new lower bounds on the $L_{2}$ error for regression tasks in terms of the variance of the function to be modeled over the orbit of the symmetry group. Like our classification bound, this bound can assist in model selection in situations with symmetry mismatch. + +Fourth, in contrast to Wang et al. [58] who show benefits of extrinsic equivariance, we theoretically demonstrate its potential harm. We perform experiments documenting the error rate across the correct-extrinsic continuum. Finally, we perform empirical studies illustrating the ideas of the paper and showing that the lower bounds obtained in our analysis appear tight in practice. This suggests our analysis can assist practitioners in selecting symmetry groups appropriate for a given problem setting. Our code is available at https://github.com/pointW/ext_theory. + +# 2 Related Work + +Equivariant Learning. Originally used for exploiting symmetry in image domains [9, 10], equivariant learning has been very successful in various tasks including molecular dynamics [2, 4], particle physics [6], fluid dynamics [59], trajectory prediction [53], pose estimation [31, 26, 32], shape completion [7], robotics [48, 65, 21, 55, 49, 41, 23, 22, 45] and reinforcement learning [52, 54, 56, 39, 63]. However, most prior work assumes that the symmetry of the ground truth function is perfectly known and matches the model symmetry. Wang et al. [58] go further and define correct, incorrect, and extrinsic equivariance to classify the relationship between model symmetry and domain symmetry. However, they do not discuss the possible combinations of the three categories, and limit their theory to a compact group and invariant classification.
Our work extends [58] and allows for a continuum of equivariance types and analyzes error bounds in a more general setup. + +Symmetric Representation Learning. Various works have proposed learning symmetric representations, using transforming autoencoders [20], restricted Boltzmann machines [50], and equivariant descriptors [47]. In particular, [30] shows that convolutional neural networks implicitly learn representations that are equivariant to rotations, flips, and translations, suggesting that symmetric representations are important inductive biases. Other works have considered learning symmetry-aware features using disentanglement [43], projection mapping [25], equivariance constraints [35], separation into invariant and equivariant parts [61] or subgroups [34]. Park et al. [42] propose learning a symmetric encoder that maps to equivariant features and Dangovski et al. [13] learn features that are sensitive and insensitive to different group representations. Other works assume no prior knowledge of symmetry and learn it from data [3, 64, 14, 40]. In particular, Moskalev et al. [40] estimate the difference between the true latent symmetry and the learned symmetry. Similarly, our work considers a gap between the true symmetry and model symmetry and theoretically analyzes its effects on error. + +Theory of Equivariant Learning. There are several lines of work on the theory of equivariant learning. Kondor and Trivedi [27] prove that convolutions are sufficient and necessary for equivariance of scalar fields on compact groups, later generalized to the steerable case by Cohen et al. [11]. Certain equivariant networks have been proved to be universal in that such networks can approximate any $G$ -equivariant function [36, 62]. Another line of work has considered equivariant networks in terms of generalization error. Abu-Mostafa [1] shows that an invariant model has a VC dimension less than
Other works have studied the generalization error of invariant classifiers by decomposing the input space [51, 46]. Elesedy and Zaidi [17] quantify a generalization benefit for equivariant linear models using the notion of symmetric and anti-symmetric spaces. A PAC-Bayes approach was used for generalization bounds of equivariant models [5, 33]. Our work is complementary to these and quantifies approximation error for equivariant model classes. + +Data Augmentation. Some methods use data augmentation [29, 28] to encourage the network to learn invariance with respect to transformations defined by the augmentation function [8]. Recent works have explored class-specific [19, 44] and instance-specific [38] data augmentation methods to further boost training by avoiding the potential error caused by a uniform augmentation function. Those methods can be viewed as applying data augmentation where pointwise correct or extrinsic invariance exists, while avoiding incorrect invariance. + +# 3 Preliminaries + +Problem Statement. Consider a function $f\colon X\to Y$ . Let $p:X\to \mathbb{R}$ be the probability density function of the domain $X$ . We assume that there is no distribution shift during testing, i.e., $p$ is always the underlying distribution during training and testing. The goal for a model class $\{h:X\to Y\}$ is to fit the function $f$ by minimizing an error function $\mathrm{err}(h)$ . We assume the model class $\{h\}$ is arbitrarily expressive except that it is constrained to be equivariant with respect to a group $G$ . Let $\mathbb{1}$ be an indicator function that equals 1 if the condition is satisfied and 0 otherwise.
In classification, $\mathrm{err}(h)$ is the classification error rate; for regression tasks, the error function is an $L_{2}$ norm, + +$$ +\operatorname {e r r} _ {\mathrm {c l s}} (h) = \mathbb {E} _ {x \sim p} [ \mathbb {1} (f (x) \neq h (x)) ], \quad \operatorname {e r r} _ {\mathrm {r e g}} (h) = \mathbb {E} _ {x \sim p} [ | | h (x) - f (x) | | _ {2} ^ {2} ]. \tag {1} +$$ + +**Equivariant Function.** A function $f: X \to Y$ is equivariant with respect to a symmetry group $G$ if it commutes with the group transformation $g \in G$ , $f(gx) = gf(x)$ , where $g$ acts on $x \in X$ through the representation $\rho_X(g)$ ; $g$ acts on $y \in Y$ through the representation $\rho_Y(g)$ . + +# 3.1 Correct, Incorrect, and Extrinsic Equivariance. + +Consider a model $h$ which is equivariant with respect to a group $G$ . Since real-world data rarely exactly conforms to model assumptions, in practice there may often be a gap between the symmetry of the model and the ground truth function. Wang et al. [58] propose a three-way classification which describes the relationship between the symmetry of $f$ and the symmetry of $h$ . In this system, $h$ has correct equivariance, incorrect equivariance, or extrinsic equivariance with respect to $f$ . + +Definition 3.1 (Correct Equivariance). For all $x \in X, g \in G$ where $p(x) > 0$ , if $p(gx) > 0$ and $f(gx) = gf(x)$ , $h$ has correct equivariance with respect to $f$ . + +Definition 3.2 (Incorrect Equivariance). If there exist $x \in X, g \in G$ such that $p(x) > 0, p(gx) > 0$ , but $f(gx) \neq gf(x)$ , $h$ has incorrect equivariance with respect to $f$ . + +Definition 3.3 (Extrinsic Equivariance). For all $x \in X, g \in G$ where $p(x) > 0$ , if $p(gx) = 0$ , $h$ has extrinsic equivariance with respect to $f$ . + +Example 3.4. Consider a binary classification task where $X = \mathbb{R}$ and $Y = \{0,1\}$ .
If the model $h$ is invariant to a reflection group $G$ where the group element $g \in G$ acts on $x \in X$ by $gx = -x$ , Figure 1 shows examples when correct, incorrect, or extrinsic equivariance is satisfied. + +# 3.2 Pointwise Equivariance Type. + +Although Definitions 3.1-3.3 are self-contained, they do not consider the mixture of different equivariance types in a single function. In other words, an equivariant model can have correct, incorrect, and extrinsic equivariance in different subsets of the domain. To overcome this issue, we define pointwise correct, incorrect, and extrinsic equivariance, which is a generalization of the prior work. + +Definition 3.5 (Pointwise Correct Equivariance). For $g \in G$ and $x \in X$ where $p(x) \neq 0$ , if $p(gx) \neq 0$ and $f(gx) = gf(x)$ , $h$ has correct equivariance with respect to $f$ at $x$ under transformation $g$ . + +![](images/8441d78ca64ec2bdd8532b9c378dd65642470e989ffea35e095024d6b1ffed62.jpg) +Figure 2: Example of pointwise correct, incorrect, and extrinsic equivariance in a binary classification task. $f(x)$ is in black and $p(x)$ is in orange. $G$ is a reflection group that negates $x$ . + +Definition 3.6 (Pointwise Incorrect Equivariance). For $g \in G$ and $x \in X$ where $p(x) \neq 0$ , if $p(gx) \neq 0$ and $f(gx) \neq gf(x)$ , $h$ has incorrect equivariance with respect to $f$ at $x$ under transformation $g$ . + +Definition 3.7 (Pointwise Extrinsic Equivariance). For $g \in G$ and $x \in X$ where $p(x) \neq 0$ , if $p(gx) = 0$ , $h$ has extrinsic equivariance with respect to $f$ at $x$ under transformation $g$ . + +Notice that the definitions of pointwise correct, incorrect, and extrinsic equivariance are mutually exclusive, i.e., a pair $(x,g)$ can only have one of the three properties. The pointwise definitions are generalizations of the global Definitions 3.1-3.3. For example, when pointwise correct equivariance holds for all $x\in X$ and $g\in G$ , Definition 3.1 is satisfied.
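On discrete data, the pointwise definitions can be checked mechanically: each pair $(x, g)$ falls into exactly one category from $p$ , $f$ , and the group action alone. A minimal sketch for the invariant case (where $g$ acts trivially on $Y$ ), with a toy reflection group and densities that are illustrative assumptions rather than data from the paper:

```python
def pointwise_type(x, g, p, f, act):
    """Classify the pair (x, g) per the pointwise definitions (invariant
    case, so the condition f(gx) = g f(x) reduces to f(gx) = f(x))."""
    if p(x) == 0:
        return "undefined"      # p(x) = 0: neither definition applies
    gx = act(g, x)
    if p(gx) == 0:
        return "extrinsic"      # gx is out of distribution
    if f(gx) == f(x):
        return "correct"
    return "incorrect"

# Toy setup in the spirit of Figure 2: G = {e, r} with r: x -> -x.
act = lambda g, x: -x if g == "r" else x
p = lambda x: 1.0 if x in (-2, -1, 1, 3) else 0.0   # hypothetical support
f = lambda x: 1 if x > 0 else 0                     # hypothetical labels
```

For example, `pointwise_type(1, "r", p, f, act)` returns `"incorrect"` since both $1$ and $-1$ are in the support but carry different labels, while `pointwise_type(3, "r", p, f, act)` returns `"extrinsic"` because $p(-3) = 0$; the three categories are mutually exclusive by construction.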
+ +Example 3.8 (Example of Pointwise Correct, Incorrect, and Extrinsic Equivariance). Consider the same binary classification task in Example 3.4. Figure 2 shows $f(x)$ , $p(x)$ , and four subsets of $X$ where pointwise correct, incorrect, or extrinsic equivariance holds. For $x$ in the correct section (green), $p(x) > 0$ , $p(gx) > 0$ , $f(x) = f(gx)$ . For $x$ in the incorrect sections (red), $p(x) > 0$ , $p(gx) > 0$ , $f(x) \neq f(gx)$ . For $x$ in the extrinsic section (blue), $p(x) > 0$ , $p(gx) = 0$ . + +Definition 3.9 (Correct, Incorrect, and Extrinsic Sets). The Correct Set $C \subseteq X \times G$ is a subset of $X \times G$ where pointwise correct equivariance holds for all $(x, g) \in C$ . Similarly, the Incorrect Set $I$ and the Extrinsic Set $E$ are subsets where incorrect equivariance or extrinsic equivariance holds for all elements in the subset. Denote $U \subseteq X \times G$ as the Undefined Set where $\forall (x, g) \in U, p(x) = 0$ . By definition we have $X \times G = C \amalg I \amalg E \amalg U$ , where $\amalg$ denotes a disjoint union. + +# 4 Approximation Error Lower Bound from Incorrect Equivariance + +Studying the theoretical error lower bound of an equivariant network is essential for model selection, especially when incorrect equivariance exists. Wang et al. [58] prove an error lower bound for an incorrect equivariant network, but their setting is limited to a classification task in the global situation of Definition 3.2 with a discrete group and an invariant density function. In this section, we find the lower bound of $\mathrm{err}(h)$ for an equivariant model $h$ in a general setting. To calculate such a lower bound, we first define the fundamental domain $F$ of $X$ . Let $d$ be the dimension of a generic orbit of $G$ in $X$ and $n$ the dimension of $X$ . Let $\nu$ be the $(n - d)$ dimensional Hausdorff measure in $X$ . + +Definition 4.1 (Fundamental Domain).
A closed subset $F$ of $X$ is called a fundamental domain of $G$ in $X$ if $X$ is the union of conjugates $^2$ of $F$ , i.e., $X = \bigcup_{g \in G} gF$ , and the intersection of any two conjugates has 0 measure under $\nu$ . + +We assume further that the set of all $x$ which lie in any pairwise intersection $\bigcup_{g_1F \neq g_2F} (g_1F \cap g_2F)$ has measure 0 under $\nu$ . Let $Gx = \{gx : g \in G\}$ be the orbit of $x$ ; then $X$ can be written as the union of the orbits of all points in the fundamental domain $F$ as $X = \bigcup_{x \in F} Gx$ . + +# 4.1 Lower Bound for Classification + +We first show the lower bound of the error $\mathrm{err}_{\mathrm{cls}}(h)$ (Equation 1) given the invariant constraint in $h$ : $h(gx) = h(x), g \in G$ . In this section, the codomain $Y$ of $f$ is a finite set of possible labels. Since $h$ is $G$ -invariant, $h$ has the same output for all inputs in an orbit $Gx$ . We call the label that causes the minimal error inside the orbit the majority label $^3$ , and define the error in the orbit as the total dissent. + +Definition 4.2 (Total Dissent). For the orbit $Gx$ of $x \in X$ , the total dissent $k(Gx)$ is the integrated probability density of the elements in the orbit $Gx$ having a different label than the majority label + +$$ +k (G x) = \min _ {y \in Y} \int_ {G x} p (z) \mathbb {1} (f (z) \neq y) d z. \tag {2} +$$ + +We can also lift the integral to $G$ itself by introducing a factor $\alpha(x, g)$ to account for the Jacobian of the action map and size of the stabilizer of $x$ . (See Appendix A.) + +$$ +k (G x) = \min _ {y \in Y} \int_ {G} p (g x) \mathbb {1} (f (g x) \neq y) \alpha (x, g) d g. \tag {3} +$$ + +Theorem 4.3. $\operatorname{err}(h)$ is lower bounded by $\int_{F} k(Gx) dx$ . + +Proof.
Rewriting the error function of Equation 1, we have + +$$ +\operatorname {e r r} (h) = \int_ {X} p (x) \mathbb {1} (f (x) \neq h (x)) d x = \int_ {x \in F} \int_ {z \in G x} p (z) \mathbb {1} (f (z) \neq h (z)) d z d x, \tag {4} +$$ + +using iterated integration (Appendix B) and Definition 4.1. We assume the measure of $F \cap gF$ is 0. Since $h(z)$ can only have a single label in orbit $Gx$ , we can lower bound the inside integral as + +$$ +\int_ {z \in G x} p (z) \mathbb {1} (f (z) \neq h (z)) d z \geq \min _ {y \in Y} \int_ {z \in G x} p (z) \mathbb {1} (f (z) \neq y) d z = k (G x). +$$ + +We obtain the claim by integrating over $F$ . Notice that this is a tight lower bound assuming universal approximation. That is, there exists $h$ which realizes this lower bound. + +We can express the total dissent in terms of the Incorrect Set $I$ (Definition 3.9). + +Proposition 4.4. $k(Gx) = \min_{x' \in (Gx)^+} \int_G p(gx') \mathbb{1}((x', g) \in I) \alpha(x', g) dg$ , where $(Gx)^+ = \{x_0 \in Gx | p(x_0) > 0\}$ . + +Proof. Consider Equation 3, since the minimum over $y$ is obtained for $y = f(x')$ for some $x' \in Gx$ such that $p(x') > 0$ (i.e., $x' \in (Gx)^+$ ), + +$$ +k (G x) = \min _ {x ^ {\prime} \in (G x) ^ {+}} \int_ {G} p (g x) \mathbb {1} \left(f (g x) \neq f \left(x ^ {\prime}\right)\right) \alpha (x, g) d g. +$$ + +Since $x' \in Gx$ , then $Gx' = Gx$ and we have $k(Gx) = k(Gx')$ . Thus, + +$$ +\begin{array}{l} k (G x) = \min _ {x ^ {\prime} \in (G x) ^ {+}} \int_ {G} p (g x ^ {\prime}) \mathbb {1} \left(f \left(g x ^ {\prime}\right) \neq f \left(x ^ {\prime}\right)\right) \alpha \left(x ^ {\prime}, g\right) d g \\ = \min _ {x ^ {\prime} \in (G x) ^ {+}} \int_ {G} p \left(g x ^ {\prime}\right) \mathbb {1} \left(\left(x ^ {\prime}, g\right) \in I\right) \alpha \left(x ^ {\prime}, g\right) d g. 
\\ \end{array} +$$ + +![](images/1f50669c9af8d0f5210e0832e6d0be3db5302cbfe60327b82e300a7329eb0f47.jpg) + +Example 4.5 (Lower bound example for a binary classification task using Proposition 4.4). + +Let $f\colon X\to \{0,1\}$ be a binary classification function on $X = \{x_0,x_1,x_2,x_3\}$ . Let $G = C_{2} = \{e,r\}$ be the cyclic group of order two that permutes the elements in $X$ . Figure 3 shows $X$ , the label for each $x\in X$ , and how $e,r\in G$ act on $x\in X$ . $\{x_0,x_3\}$ forms a fundamental domain $F$ , and there are two orbits: $Gx_{0} = \{x_{0},x_{1}\}$ and $Gx_{2} = \{x_{2},x_{3}\}$ . Since both $X$ and $G$ are discrete and $g\in G$ acts on $X$ through permutation, the lower bound can be written as $\mathrm{err}(h)\geq \sum_{x\in F}\min_{x'\in (Gx)^+}\sum_{g\in G}p(gx')\mathbb{1}((x',g)\in I)$ . We can then calculate $\sum_{g\in G}p(gx')\mathbb{1}((x',g)\in I)$ for $x^{\prime}\in X$ : $x_0:0.4,x_1:0.3,x_2:0,x_3:0$ . Taking the min over each orbit we have $k(Gx_0) = 0.3,k(Gx_2) = 0$ . Taking the sum over $F = \{x_0,x_3\}$ we obtain $\mathrm{err}(h)\geq 0.3$ . + +![](images/4526afceea496beed8db556e1218c3c988846f9f80c9fad0995bab94af8f760f.jpg) + +![](images/2c942079f17f2cc494972f46d160cb16e82300af527fb226ff34643a9c0c9304.jpg) +Figure 3: An example binary classification task. The circles are elements of $X$ . The arrows show how $g \in G$ acts on $x \in X$ . The arrow color shows whether $(x, g) \in I$ . +Figure 4: An example multi-class classification task. Color indicates the label. The fundamental domain $F$ is a vertical line. For a point $x \in F$ , the orbit $Gx$ is a horizontal line. + +![](images/d57e5de228c94160e6dc2587f52276309532aa25e9212cf2274ee4ccd95d7049.jpg) + +Example 4.6 (Lower bound example for a multi-class classification task using Proposition 4.4). Consider a multi-class classification task $f: \mathbb{R}^2 \to Y$ with $n = |Y|$ classes.
For $x = (u, v) \in [0,1]^2$ , $p(u, v) = 1$ ; otherwise $p(u, v) = 0$ , i.e., the support of $p$ is the unit square. Let $G$ denote the group of translations in the $u$ -direction and $h$ a $G$ -invariant network. For the data distribution illustrated in Figure 4, we compute the lower bound for $\mathrm{err}(h)$ . Consider a fundamental domain $F$ (brown line in Figure 4). In the blue area, there is one label across the orbit (i.e., the horizontal line), meaning $\forall g\in G,(x',g)\in C$ , so the integral in Proposition 4.4 equals 0. For points in the yellow area, the majority label is yellow. This means that for $g\in G$ such that $gx$ is in yellow, $(x,g)\in C$ ; for other $g\in G,(x,g)\in I$ . Consequently, the integral in Proposition 4.4 equals the combined length of the green and pink segments. Taking the integral over $F$ (Theorem 4.3), the lower bound equals the green and pink area $(I$ in Figure 4). We define the correct ratio $(c)$ as the blue area's height and the majority label ratio $(m)$ as the yellow area's length. Adjusting $c$ and $m$ interpolates between incorrect and correct equivariance, leading to $\mathrm{err}(h)\geq \mathrm{area}(I) = (1 - c)\times (1 - m)$ . Appendix H.2 shows an experiment where the empirical result matches our analysis. + +Lower Bound When $G$ is Finite and the Action of $G$ is Density Preserving. In this section, we consider the lower bound in Theorem 4.3 when $G$ is finite and the action of $G$ is density preserving, i.e., $p(gx) = p(x)$ . Let $(Gx)_y = \{z \in Gx | f(z) = y\}$ be a subset of $Gx$ with label $y$ . Define $\mathcal{Q}(x) = (\max_{y \in Y} |(Gx)_y|) / |Gx|$ , which is the fraction of data in the orbit $Gx$ that has the majority label. Denote $Q = \{\mathcal{Q}(x) : x \in X\}$ the set of all possible values for $\mathcal{Q}$ . Consider a partition of $X = \coprod_{q \in Q} X_q$ where $X_q = \{x \in X : \mathcal{Q}(x) = q\}$ . Define $c_q = \mathbb{P}(x \in X_q) = |X_q| / |X|$ . + +Proposition 4.7.
The error lower bound $\mathrm{err}(h) \geq 1 - \sum_q q c_q$ from Wang et al. [58] (Proposition 4.1) is a special case of Theorem 4.3. + +Proof in Appendix C. The proposition shows Theorem 4.3 is a strict generalization of [58, Prop 4.1]. + +# 4.2 Lower Bound for Invariant Regression + +In this section, we give a lower bound of the error function $\mathrm{err}_{\mathrm{reg}}(h)$ (Equation 1) in a regression task given that $h$ is invariant, i.e., $h(gx) = h(x)$ for all $g \in G$ . Assume $Y = \mathbb{R}^n$ . Denote by $p(Gx) = \int_{z \in Gx} p(z) dz$ the probability of the orbit $Gx$ . Denote by $q(z) = \frac{p(z)}{p(Gx)}$ the normalized probability density of the orbit $Gx$ such that $\int_{Gx} q(z) dz = 1$ . Let $\mathbb{E}_{Gx}[f]$ be the mean of the function $f$ on the orbit $Gx$ and $\mathbb{V}_{Gx}[f]$ the variance of $f$ on the orbit $Gx$ , defined as + +$$ +\mathbb {E} _ {G x} [ f ] = \int_ {G x} q (z) f (z) d z = \frac {\int_ {G x} p (z) f (z) d z}{\int_ {G x} p (z) d z}, \qquad \mathbb {V} _ {G x} [ f ] = \int_ {G x} q (z) | | \mathbb {E} _ {G x} [ f ] - f (z) | | _ {2} ^ {2} d z. +$$ + +Theorem 4.8. $\operatorname{err}(h) \geq \int_{F} p(Gx) \mathbb{V}_{Gx}[f] dx$ . + +Proof. The error function (Equation 1) can be written as: + +$$ +\operatorname {e r r} (h) = \int_ {X} p (x) | | f (x) - h (x) | | _ {2} ^ {2} d x = \int_ {x \in F} \int_ {z \in G x} p (z) | | f (z) - h (z) | | _ {2} ^ {2} d z d x. +$$ + +Denote $e(x) = \int_{Gx} p(z) \| f(z) - h(z) \|_2^2 dz$ . Since $h$ is $G$ -invariant, there exists $c \in \mathbb{R}^n$ such that $h(z) = c$ for all $z \in Gx$ . Then $e(x)$ can be written as $e(x) = \int_{Gx} p(z) \| f(z) - c \|_2^2 dz$ . Taking the derivative of $e(x)$ with respect to $c$ and setting it to 0 gives the minimizer $c^*$ of $e(x)$ , $c^* = \frac{\int_{Gx} p(z) f(z) dz}{\int_{Gx} p(z) dz} = \mathbb{E}_{Gx}[f]$ .
Substituting $c^*$ into $e(x)$ we have + +$$ +e (x) \geq \int_ {G x} p (G x) \frac {p (z)}{p (G x)} | | \mathbb {E} _ {G x} [ f ] - f (z) | | _ {2} ^ {2} d z = p (G x) \mathbb {V} _ {G x} [ f ]. +$$ + +We can obtain the claim by taking the integral of $e(x)$ over the fundamental domain $F$ . + +![](images/1183fbfc46e9881b7f583771207b90fac7b050da722022135fd5381280ac5d66.jpg) + +# 4.3 Lower Bound for Equivariant Regression + +We now prove a lower bound for $\operatorname{err}(h)$ in a regression task given the model $h$ is equivariant, that is, $h(\rho_X(g)x) = \rho_Y(g)h(x)$ where $g \in G$ , $\rho_X$ and $\rho_Y$ are group representations associated with $X$ and $Y$ . We will denote $\rho_X(g)x$ and $\rho_Y(g)y$ by $gx$ and $gy$ , leaving the representation implicit. Assume $Y = \mathbb{R}^n$ and $\alpha(x,g)$ is the same as in equation 3. Let $\mathrm{Id}$ be the identity. Define a matrix $Q_{Gx} \in \mathbb{R}^{n \times n}$ and $q(gx) \in \mathbb{R}^{n \times n}$ so that $\int_G q(gx) dg = \mathrm{Id}$ by + +$$ +Q _ {G x} = \int_ {G} p (g x) \rho_ {Y} (g) ^ {T} \rho_ {Y} (g) \alpha (x, g) d g, \quad q (g x) = Q _ {G x} ^ {- 1} p (g x) \rho_ {Y} (g) ^ {T} \rho_ {Y} (g) \alpha (x, g). \tag {5} +$$ + +![](images/51afd26a45f3f770e738d8d948b801942a0f5a44c9955126af81da39f5216938.jpg) +(a) + +![](images/3f6f2432ca112c97ec52e7fe29c59a536b1051867853bfa30295649e1e940fdd.jpg) +(b) +Figure 5: An example regression task. (a) The value of $f(x)$ and the transformation rule (purple) with respect to group $G = C_4$ for all $x \in X$ . The four points belong to a single orbit. (b) When using an invariant network, the minimal error (red) is obtained when the invariant network outputs the mean value (green) of the orbit. (c) For an equivariant network, the minimizer (green) can be obtained by taking the mean of the $G$ -stabilized $f(x)$ (inversely transformed) (blue) for all $x$ in the orbit with respect to the transformation rule in the orbit. 
(d) The minimal error of an equivariant network.

![](images/a5de3b9676a4a9b822d1261e9c96758597eb5a7549286f61843ba5c7856d39bf.jpg)
(c)

![](images/070ab46be27db98be89c2e517544ed3c6441ff4f69d5513c2a19df6076132416.jpg)
(d)

Here, for simplicity, we assume $Q_{Gx}$ is an invertible matrix (see Appendix D for the general case).

If $f$ is equivariant, $g^{-1}f(gx)$ is constant for all $g\in G$ . Define $\mathbf{E}_G[f,x]$ by

$$
\mathbf{E}_G[f, x] = \int_{G} q(gx)\, g^{-1} f(gx)\, dg. \tag{6}
$$

Theorem 4.9. The error of $h$ has the lower bound $\operatorname{err}(h) \geq \int_{F} \int_{G} p(gx) \| f(gx) - g\mathbf{E}_{G}[f, x] \|_{2}^{2} \alpha(x, g) dg\, dx$ .

See Appendix D for the proof. Intuitively, $\mathbf{E}_G[f,x]$ is the minimizer obtained by taking the mean of all inversely transformed $f(x)$ for all $x$ in the orbit; see Figures 5c and 5d and Example 4.11 below.

Corollary 4.10. Denote $p(Gx) = \int_{Gx} p(z) dz$ and $q_x : g \mapsto q(gx)$ . Define the $G$ -stabilized $f$ as $f_x : g \mapsto g^{-1} f(gx)$ . When $\rho_Y$ is an orthogonal representation $\rho_Y : G \to \mathrm{O}(n) \subset GL(n)$ , $q_x$ is a probability density function on $G$ . Denote the variance of $f_x$ as $\mathbb{V}_G[f_x]$ , where $g \sim q_x$ . The error has the lower bound $\operatorname{err}(h) \geq \int_F p(Gx) \mathbb{V}_G[f_x] dx$ .

See Appendix E for the proof. Notice that Corollary 4.10 is a generalization of Theorem 4.8: Theorem 4.8 can be recovered by taking $\rho_Y(g) = \mathrm{Id}$ (see the proof in Appendix F).

Example 4.11 (Lower bound example of a regression task). Consider a regression problem where $X = \{x_0, x_1, x_2, x_3\}$ and $Y = \mathbb{R}^2$ . Assume $p$ is the uniform density. The cyclic group $G = C_4 = \{e, r, r^2, r^3\}$ (where $e$ is the identity and $r$ is rotation by $\pi/2$ ) acts on $X$ through $x_1 = rx_0$ ; $x_2 = rx_1$ ; $x_3 = rx_2$ ; $x_0 = rx_3$ (i.e., there is only one orbit $Gx = X$ ).
$g \in G$ acts on $y \in Y$ through $\rho_Y(g) = \begin{pmatrix} \cos g & -\sin g \\ \sin g & \cos g \end{pmatrix}$ . Figure 5a shows the output of $f(x)$ for all $x \in X$ . First, consider a $G$ -invariant network $h$ . Since there is only one orbit, Theorem 4.8 simplifies to $\operatorname{err}(h) \geq \mathbb{V}_X[f]$ , the variance of $f$ over $X$ . This can be calculated by first taking the mean of $f(x)$ and then calculating the mean square error (MSE) from all $x$ to the mean (Figure 5b). Second, consider a $G$ -equivariant network $h$ . Since $G$ is discrete, $gx$ permutes the order of $X$ , $\rho_Y$ is an orthogonal representation, and there is only one orbit, Corollary 4.10 can be written as $\operatorname{err}(h) \geq \mathbb{V}_G[f_x]$ , the variance of the $G$ -stabilized $f$ . First, to calculate $\mathbb{E}_G[f_x]$ , let $x = x_0$ ; we stabilize $f$ by computing $g^{-1}f(gx)$ for all $g \in G$ and then take the mean (Figure 5c). We can then find $\mathbb{V}_G[f_x]$ by calculating the MSE between $f(x)$ and the transformed mean $g\mathbb{E}_G[f_x]$ (Figure 5d). Appendix H.3 shows an experiment in this example's environment.

# 5 Harmful Extrinsic Equivariance

Wang et al. [58] demonstrate that extrinsic equivariance, where the symmetry imposed on the model leads to out-of-distribution data with respect to the input distribution, can lead to a higher performance on the original training data. In this section, we argue that this is not necessarily true in all cases, and there can exist scenarios where extrinsic equivariance can even be harmful to the learning problem.

Consider a binary classification task where the domain is discrete and contains only a set of four points $S \subset \mathbb{R}^3$ , and their labels are in $\{-1, +1\}$ as shown in Figure 6a. We consider the probability density $p$ to be uniform for this domain, i.e., $p(x) = 1/4$ for the four points $S$ , and $p = 0$ elsewhere.
This domain is used for both model training and testing, so there is no distribution shift. We consider two model classes: $\mathcal{F}_N$ , the set of all linear models, and $\mathcal{F}_E$ , the set of all linear models that are invariant with respect to the cyclic group $C_2 = \{1, g\}$ , where $g(x_1, x_2, x_3) = (x_1, x_2, -x_3)$ . $\mathcal{F}_N$ corresponds to an unconstrained or non-equivariant model class, and $\mathcal{F}_E$ corresponds to an extrinsically equivariant class for this domain. For the labeling shown in Figure 6, the hyperplane $x_3 = 0$ correctly classifies all samples and is contained in $\mathcal{F}_N$ . However, a function $f_e \in \mathcal{F}_E$ is equivalent to a linear classifier on $\mathbb{R}^2$ and effectively sees the data as in Figure 6b. This exclusive-or problem does not admit a linear solution (a linear classifier can be correct on at most 3 of the 4 points).

Concretely, we can compute the empirical Rademacher complexity, a standard measure of model class expressivity, for the nonequivariant and extrinsically equivariant model classes and show that $\mathcal{F}_E$ has lower complexity than $\mathcal{F}_N$ . Recall that the empirical Rademacher complexity is defined as $\Re_S(\mathcal{F}) = \mathbb{E}_{\sigma}\left[\sup_{f\in \mathcal{F}}\frac{1}{m}\sum_{i = 1}^{m}\sigma_if(x^i)\right]$ , where $S$ is the set of $m$ samples, $\sigma = (\sigma_1,\dots ,\sigma_m)^\top$ with $\sigma_i\in \{-1, +1\}$ are independent uniform Rademacher random variables, and $x^{i}$ is the $i$ -th sample. As there exists some linear function $f_{n}\in \mathcal{F}_{N}$ that fully classifies $S$ for any combination of labels, $\Re_S(\mathcal{F}_N) = 1$ .
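The combinatorics behind these complexities can be checked by brute force. The sketch below treats each linear model as a sign classifier and uses the canonical XOR square as a stand-in for the flattened $C_2$-invariant view of $S$ (the paper's exact coordinates are not reproduced here, and `best_linear_accuracy` is our own helper); it confirms that exactly 2 of the 16 labelings, the exclusive-or pattern and its negation, cannot be realized by a linear classifier, and that any labeling achieves at least 3 correct points.

```python
import itertools
import math

import numpy as np

# A stand-in for the flattened view of S: the canonical XOR square.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def best_linear_accuracy(labels, n_angles=720):
    """Max number of points a 2D linear classifier sign(w.x - b) can label
    correctly, found by sweeping directions and per-direction thresholds."""
    best = 0
    for k in range(n_angles):
        t = math.pi * k / n_angles
        proj = pts @ np.array([math.cos(t), math.sin(t)])
        # Candidate thresholds just below/above each projected point.
        for b in np.concatenate([proj - 1e-6, proj + 1e-6]):
            pred = np.where(proj > b, 1, -1)
            best = max(best, int((pred == labels).sum()),
                       int((-pred == labels).sum()))
    return best

xor_labels = np.array([1, -1, -1, 1])  # labels by parity: the XOR pattern
print(best_linear_accuracy(xor_labels))  # 3: at most 3 of 4 points

# Of the 16 labelings, 14 are realizable by a linear classifier; the two
# exceptions are the XOR pattern and its negation.
realizable = sum(best_linear_accuracy(np.array(l)) == 4
                 for l in itertools.product([-1, 1], repeat=4))
print(realizable)  # 14
```

The count of 14 realizable dichotomies matches Cover's function-counting theorem for 4 points in general position in the plane.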
For the extrinsic equivariance case, of the 16 possible label combinations, there are two cases where $f_{e}\in \mathcal{F}_{E}$ can classify at most 3 out of 4 points correctly, and thus $\Re_S(\mathcal{F}_E) = \frac{31}{32} < \Re_S(\mathcal{F}_N)$ (see Appendix G for the calculations). This illustrates that in certain cases, extrinsic equivariance can lead to lower model expressivity than no equivariance and thus be harmful to learning.

![](images/88aa150bb6705b6309254b41e82309cbc0bb040a031391f5d16c75324dcfdb34.jpg)
(a) Data

![](images/3de372d7cead1388c4060b7c012d83769c7774b04d671c9e29ad4cc514c73a26.jpg)
(b) Transformed data (c) $C_2$-invariant view

Figure 6: An example dataset where extrinsic equivariance increases the problem difficulty. The samples are of the form $x = (x_{1},x_{2},x_{3})$ and the labels are shown as different shapes. A $C_2$-equivariant linear model transforms the original data (a) into (b), which is equivalent to viewing the data as in (c). The original task has an easy solution (e.g. hyperplane at $x_{3} = 0$ ), while the $C_2$ -invariant view is the classic exclusive-or problem.

# 6 Experiments

We perform experiments to validate our theoretical analysis on both the lower bounds (Section 4) and the harmful extrinsic equivariance (Section 5). We find that our bounds accurately predict empirical model error. In addition to the experiments in this section, Appendix H.2 shows an experiment verifying our classification bound (Theorem 4.3) and Appendix H.3 shows an experiment verifying our regression bounds (Theorems 4.8 and 4.9). The experiment details are in Appendix I.

# 6.1 Swiss Roll Experiment

We first perform an experiment in a vertically separated Swiss Roll data distribution; see Figure 7a.
This example, similar to that in Section 5, demonstrates that a $C_2$ -invariant model effectively "flattens" the $z$ -dimension of the data so it must learn the decision boundary between two spirals (Figure 7b), whereas the non-equivariant model only needs to learn a horizontal plane to separate the classes, a significantly easier task. Besides the extrinsic data distribution, we consider two other data distributions shown in Figure 7c and Figure 7d, where a $C_2$ -invariant model will observe incorrect and correct equivariance due to the mismatched and matched data labels in the two $z$ planes. + +We combine data from all three distributions in various proportions to test the performance of a $z$ -invariant network (INV) with a baseline unconstrained network (MLP). Let $c$ be the correct ratio, the proportion of data from the correct distribution. Define the incorrect ratio $i$ and extrinsic ratio $e$ similarly. We consider all $c, i, e$ that are multiples of 0.125 such that $c + i + e = 1$ . Figure 7ef shows some example data distributions. Relative to INV, this mixed data distribution has partial correct, incorrect, and extrinsic equivariance, which is not fully captured in prior work [58]. Based on Proposition 4.4, we have $k(Gx) = 0.5$ for $x$ drawn from the incorrect distribution, and $k(Gx) = 0$ otherwise. Since the data is evenly distributed, we can calculate the error lower bound $\mathrm{err}(h) \geq 0.5i$ . 
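The bound $\mathrm{err}(h) \geq 0.5i$ follows from the discrete form of $k(Gx)$ (Equation 8). A minimal sketch, assuming each $C_2$ orbit puts equal mass on the two $z$-planes (the helper names `k_orbit` and `err_lower_bound` are ours, for illustration only):

```python
def k_orbit(masses, labels):
    """Discrete k(Gx) from Equation 8: the smallest total mass of orbit
    points whose ground-truth label disagrees with a single guess y."""
    return min(
        sum(m for m, l in zip(masses, labels) if l != y)
        for y in set(labels)
    )

# A C2 orbit {x, gx} from the incorrect distribution: the two z-planes
# carry opposite labels and equal mass, so half the orbit mass is lost.
print(k_orbit([0.5, 0.5], [0, 1]))  # 0.5
# An orbit from the correct (or extrinsic) distribution loses nothing.
print(k_orbit([0.5, 0.5], [1, 1]))  # 0

def err_lower_bound(i):
    # Integrating k(Gx) over orbits: only incorrect-distribution orbits
    # contribute, each losing half its mass, giving err(h) >= 0.5 * i.
    return 0.5 * i

print(err_lower_bound(0.5))  # 0.25
```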
![](images/2bece5bef403530c93760280ec6952a834a23a63e0f23d44dbe19872402d8a2d.jpg)
(a) Swiss Roll Data

![](images/c21db18829393d77ce699e20d951eb330e5209bb92f6bad27bcbf81bb4841001.jpg)
(b) $C_2$ -Invariant View

![](images/12533920a326f9d8257749b46483e2ab9d4a6ea93cee0edf3bd5410dd29d19be.jpg)
(c) Incorrect

![](images/8b67244c3c7be883d25ea4dce0af3f14d94416ae8debc2ddd5e753f2b6a6405e.jpg)
(d) Correct

![](images/cc95280d00d2d1cc9761feb321b924de2d9b17ab4bb82e610ea81c453aa163e1.jpg)
(e) $c = .5, i = .5, e = 0$

![](images/c5ca116b4bb8bb5984831f8f4ee09a30fbb5bfb66da92d5b46175b12f6ce12d1.jpg)
(f) $c = .5, i = 0, e = .5$

Figure 7: (a) (b) The Swiss Roll data distribution that leads to harmful extrinsic equivariance. (c) (d) The correct and incorrect data distributions in the Swiss Roll experiment; here the spirals overlap with mismatched and matched labels, respectively. (e) (f) Example data distributions with different correct ratio $(c)$ , incorrect ratio $(i)$ , and extrinsic ratio $(e)$ values.

Results. Figure 8a shows the test success rate of INV compared with MLP when $e$ and $c$ vary with $i = 0$ . When $e$ increases, the performance of INV decreases while the performance of MLP shows an inverse trend, demonstrating that extrinsic equivariance is harmful in this experiment. Figure 8b shows the performance of INV and MLP when $c$ and $i$ vary while $e = 0$ . The green line shows the upper bound of the test success rate $(1 - 0.5i)$ . The experimental result matches our theoretical analysis quite closely. Notice that when $c$ increases, there is a bigger gap between the performance of the network and its theoretical upper bound, since classification in the correct distribution is a harder task. Appendix H.1 shows the complete results of this experiment.

![](images/6e80edbc0b7f550da44797f3c8ae95a34b8cc455604342f6fd5f08e3b91ec535.jpg)
(a)
Figure 8: Result of the Swiss Roll experiment.
(a) Test success rate of an invariant network (red) and an unconstrained MLP (blue) with different extrinsic and correct ratios when the incorrect ratio is 0. (b) Same as (a), but with different correct and incorrect ratios when the extrinsic ratio is 0. Averaged over 10 runs.

![](images/bfcfcaf8a800504986851bb21cadf562864b2639b7984bb1dfc568809d896547.jpg)
(b)

# 6.2 Digit Classification Experiment

In this experiment, we apply our theoretical analysis to a realistic digit classification task using both the printed digit dataset [16] and the MNIST handwritten digit dataset [15]. We compare a $D_{4}$ -invariant network $(D_{4})$ with an unconstrained CNN. In printed digit classification, $D_{4}$ exhibits incorrect equivariance for 6 and 9 under a $\pi$ rotation. Using Theorem 4.3, we can calculate an error lower bound of $10\%$ for $D_{4}$ . However, as shown in Table 1 (top), the experimental results indicate that the actual performance is slightly better than predicted by the theory. We hypothesize that this discrepancy arises because a rotated 9 differs slightly from a 6 in some fonts. We conduct a similar experiment using the MNIST handwritten digit dataset (Table 1 bottom), where $D_{4}$ achieves even better performance in classifying 6 and 9. This improvement is likely due to the more distinguishable handwriting of these digits, although $D_{4}$ still underperforms the CNN since the incorrect equivariance persists. It is important to note that there is a significant decrease in performance for $D_{4}$ when classifying 2/5 and 4/7 compared to the CNN. This is because a vertical flip results in incorrect equivariance when classifying handwritten 2/5, and a similar issue arises for 4/7 under a $\pi /2$ rotation followed by a vertical flip (notice that Weiler and Cesa [60] make a similar observation).
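The $10\%$ bound, and the 50s in the "Print D4 Upper Bound" row of Table 1, can be reproduced with a short calculation. The sketch below assumes a uniform class distribution ($p = 0.1$ per digit) and that $6 \leftrightarrow 9$ under a $\pi$ rotation is the only label collision, with rotated 6s matching 9s of equal density; the helper `accuracy_upper_bound` is illustrative, not the paper's code.

```python
def accuracy_upper_bound(class_p, confused_pairs):
    """Theorem 4.3 specialized to orbits that mix two classes evenly:
    the best invariant guess keeps the larger class and loses the rest."""
    err = 0.0
    for a, b in confused_pairs:
        # Orbit mass p(a) + p(b); keeping the majority class loses min(...).
        err += min(class_p[a], class_p[b])
    return 1.0 - err

p = {d: 0.1 for d in range(10)}          # assumed uniform class prior
overall = accuracy_upper_bound(p, [(6, 9)])
print(overall)  # 0.9, i.e. the 90% overall upper bound
# Per class: digits 6 and 9 can each be classified correctly at best
# half of the time (the 50s in Table 1); the other digits are unaffected.
```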
These experiments demonstrate that our theory is useful not only for calculating the performance bounds of an equivariant network beforehand, but also for explaining the suboptimal performance of an equivariant network, thereby potentially assisting in model selection.

| Digit | Overall | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Print CNN | 96.62 | 99.81 | 98.27 | 97.06 | 96.12 | 98.04 | 92.9 | 94.93 | 97.85 | 93.65 | 96.13 |
| Print D4 | 92.5 | 99.71 | 98.62 | 96.94 | 95.98 | 97.91 | 93.52 | 63.08 | 98.42 | 95.88 | 76.17 |
| Print D4 Upper Bound | 90 | 100 | 100 | 100 | 100 | 100 | 100 | 50 | 100 | 100 | 50 |
| MNIST CNN | 98.21 | 99.51 | 99.61 | 98.62 | 98.83 | 98.08 | 98.47 | 97.99 | 97.04 | 96.98 | 96.81 |
| MNIST D4 | 96.15 | 98.93 | 99.21 | 91.84 | 98.28 | 95.49 | 95.04 | 93.71 | 95.67 | 97.73 | 95.34 |

Table 1: $D_{4}$ -invariant network compared with an unconstrained CNN in printed and MNIST handwritten digit classification tasks. Bold indicates a $> 1\%$ difference between the two models.

# 6.3 Robotic Experiment

In this experiment, we evaluate our theory on behavior cloning for robotic manipulation. We first perform an experiment where the problem is a mixture of correct and incorrect equivariance for a $D_{1}$ -equivariant policy network $(D_{1})$ , where the robot's action will flip when the state is flipped horizontally. Specifically, the environment contains two possible tasks (Figure 9 left). Stacking requires the robot to stack a green triangle on top of a blue cube. Here flip equivariance is correct. Sorting requires the robot to push the red cube to the left and the yellow triangle to the right. Here flip equivariance is incorrect because the robot should not sort the objects in the opposite way when the state is flipped (in other words, $D_{1}$ cannot distinguish left and right). We vary the probability $c$ of the stacking task (correct ratio) in the task distribution, and compare the performance of $D_{1}$ against a standard CNN policy. If we view the sorting task as a binary classification task, we can calculate an upper bound on the performance of $D_{1}$ using Theorem 4.3: $0.5 + 0.5c$ . Figure 10 shows the result.

![](images/b6b6ef29d8378d72da2b1f9d46f77932f9bee628890c528ae823331d0dfa40d0.jpg)
Stacking with Correct Equi.
Probability of Stacking (correct equi.): $c$

![](images/568eb4538f3b6339f4ea98da37a43f1a12f471cd1011d356fa85700214ccac59.jpg)
Sorting with Incorrect Equi.
Probability of Sorting (incorrect equi.): $1 - c$

![](images/27f1f2bc4836081ee6b8ffdb3e9d2db98eb6acb197b7a5750b32381578bbfe7f.jpg)
Sorting with Extrinsic Equi.

![](images/3d7f54385af580a0869261edf73436413cba1bca1f5cae9d49eadab70d06f219.jpg)
Figure 9: Left: an environment containing both correct (Stacking) and incorrect (Sorting) equivariance for a $D_{1}$ -equivariant (horizontal flip) policy net. Right: an environment with harmful extrinsic equivariance for the same policy.

![](images/08040e39b1b1130d2d4bed67f5eea72a5fee2b6479c9eaba6059dca76317bc88.jpg)
Figure 10: Result of the robotic experiment with different correct ratios. Averaged over 4 runs.
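The $0.5 + 0.5c$ ceiling plotted in Figure 10 is a direct instance of Theorem 4.3; a minimal sketch (`d1_success_upper_bound` is our own helper, not the paper's code):

```python
def d1_success_upper_bound(c):
    """With probability c the task is Stacking (correct equivariance, no
    forced error); with probability 1 - c it is Sorting, where a flipped
    state demands the same sorting outcome but the D1 policy must output
    the mirrored action, so it errs on half of that mass (k(Gx) = 0.5)."""
    return c + 0.5 * (1.0 - c)  # = 0.5 + 0.5 * c

for c in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(c, d1_success_upper_bound(c))
```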
Notice that the performance of $D_{1}$ closely matches the theoretical upper bound, while the performance of CNN remains relatively stable for all Stacking-Sorting distributions. + +We further evaluate $D_{1}$ in a sorting task with harmful extrinsic equivariance. Here, the goal for the robot is the same as sorting above (push the red cube left and the yellow triangle right), however, the left and right sides of the workspace can now be differentiated by gray-scale colors. The shades of gray are discretized evenly into $n$ bins, where the left side's color is randomly sampled from the odd-numbered bins, and the right side's color is randomly sampled from the even-numbered bins (Figure 9 right). The different color distributions of the left and right sides make $D_{1}$ extrinsically equivariant, but it needs to learn the color distribution to distinguish left and right (while CNN can distinguish left and right directly). We set $n = 10$ , and $D_{1}$ achieves $71.5 \pm 1.6\%$ test success rate, while CNN achieves $99.5 \pm 0.5\%$ , demonstrating that the $D_{1}$ extrinsic equivariance is harmful in this task. See Appendix I.5 for the details of the robot experiment. + +# 7 Discussion + +This paper presents a general theory for when the symmetry of the ground truth function and equivariant network are mismatched. We define pointwise correct, incorrect, and extrinsic equivariance, generalizing prior work [58] to include continuous mixtures of the three extremes. We prove error lower bounds for equivariant networks applied to asymmetric tasks including classification, invariant regression, and equivariant regression without the assumption of invariant data density. Our work discusses the potential disadvantage of extrinsic equivariance, and provides experiments that validate our theoretical analysis. The major limitation of this paper is that our theoretical lower bounds require domain knowledge like the density function over the domain. 
In future work, we will develop easy-to-apply model selection tools using our theory. Another future direction is theoretically understanding when extrinsic equivariance is helpful or harmful and analyzing the effect of extrinsic equivariance on the decision boundary of an equivariant network. + +# Acknowledgments + +This work is supported in part by NSF 1724257, NSF 1724191, NSF 1763878, NSF 1750649, NSF 2107256, NSF 2134178, NSF 2312171, and NASA 80NSSC19K1474. + +# References + +[1] Yaser S Abu-Mostafa. Hints and the vc dimension. Neural Computation, 5(2):278-288, 1993. +[2] Brandon Anderson, Truong Son Hy, and Risi Kondor. Cormorant: Covariant molecular neural networks. Advances in neural information processing systems, 32, 2019. +[3] Fabio Anselmi, Georgios Evangelopoulos, Lorenzo Rosasco, and Tomaso Poggio. Symmetry-adapted representation learning. Pattern Recognition, 86:201-208, 2019. +[4] Kenneth Atz, Francesca Grisoni, and Gisbert Schneider. Geometric deep learning on molecular representations. Nature Machine Intelligence, 3(12):1023-1032, 2021. +[5] Arash Behboodi, Gabriele Cesa, and Taco Cohen. A pac-bayesian generalization bound for equivariant networks. arXiv preprint arXiv:2210.13150, 2022. +[6] Alexander Bogatskiy, Brandon Anderson, Jan Offermann, Marwah Roussi, David Miller, and Risi Kondor. Lorentz group equivariant neural network for particle physics. In International Conference on Machine Learning, pages 992-1002. PMLR, 2020. +[7] Evangelos Chatzipantazis, Stefanos Pertigkiozoglou, Edgar Dobriban, and Kostas Daniilidis. SE(3)-equivariant attention networks for shape reconstruction in function space. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=RDy3IbvjMqT. +[8] Shuxiao Chen, Edgar Dobriban, and Jane H Lee. A group-theoretic framework for data augmentation. The Journal of Machine Learning Research, 21(1):9885-9955, 2020. +[9] Taco Cohen and Max Welling. 
Group equivariant convolutional networks. In International conference on machine learning, pages 2990-2999. PMLR, 2016. +[10] Taco S. Cohen and Max Welling. Steerable CNNs. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=rJQKYt511. +[11] Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant cnns on homogeneous spaces. Advances in neural information processing systems, 32, 2019. +[12] Erwin Coumans and Yunfei Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016-2021. +[13] Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, and Marin Soljacic. Equivariant self-supervised learning: Encouraging equivariance in representations. In International Conference on Learning Representations, 2021. +[14] Nima Dehmamy, Robin Walters, Yanchen Liu, Dashun Wang, and Rose Yu. Automatic symmetry discovery with lie algebra convolutional network. Advances in Neural Information Processing Systems, 34:2503-2515, 2021. +[15] Li Deng. The mnist database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141-142, 2012. +[16] Kshitij Dhama. Printed numerical digits image dataset. https://github.com/kaydee0502/printed-digits-dataset, 2021. GitHub Repository. +[17] Bryn Elesedy and Sheheryar Zaidi. Provably strict generalisation benefit for equivariant models. In International Conference on Machine Learning, pages 2959-2969. PMLR, 2021. + +[18] Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. In International Conference on Machine Learning, pages 3165-3176. PMLR, 2020. +[19] Søren Hauberg, Oren Freifeld, Anders Boesen Lindbo Larsen, John Fisher, and Lars Hansen. 
Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation. In Artificial intelligence and statistics, pages 342-350. PMLR, 2016. +[20] Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. Transforming auto-encoders. In International conference on artificial neural networks, pages 44–51. Springer, 2011. +[21] Haojie Huang, Dian Wang, Robin Walters, and Robert Platt. Equivariant transporter network. In Robotics: Science and Systems, 2022. +[22] Haojie Huang, Dian Wang, Xupeng Zhu, Robin Walters, and Robert Platt. Edge grasp network: A graph-based SE(3)-invariant approach to grasp detection. In International Conference on Robotics and Automation (ICRA), 2023. +[23] Mingxi Jia, Dian Wang, Guanang Su, David Klee, Xupeng Zhu, Robin Walters, and Robert Platt. Seil: Simulation-augmented equivariant imitation learning. In International Conference on Robotics and Automation (ICRA), 2023. +[24] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +[25] David Klee, Ondrej Biza, Robert Platt, and Robin Walters. 12i: Image to icosahedral projection for SO(3) object reasoning from single-view images. arXiv preprint arXiv:2207.08925, 2022. +[26] David Klee, Ondrej Biza, Robert Platt, and Robin Walters. Image to sphere: Learning equivariant features for efficient pose prediction. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=2bDpAtr7PI. +[27] Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In International Conference on Machine Learning, pages 2747-2755. PMLR, 2018. +[28] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012. +[29] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 
Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. +[30] Karel Lenc and Andreae Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 991-999, 2015. +[31] Xiaolong Li, Yijia Weng, Li Yi, Leonidas J Guibas, A Abbott, Shuran Song, and He Wang. Leveraging se (3) equivariance for self-supervised category-level object pose estimation from point clouds. Advances in Neural Information Processing Systems, 34:15370-15381, 2021. +[32] Xueyi Liu, Ji Zhang, Ruizhen Hu, Haibin Huang, He Wang, and Li Yi. Self-supervised category-level articulated object pose estimation with part-level SE(3) equivariance. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=20GtJ6hIaPA. +[33] Clare Lyle, Mark van der Wilk, Marta Kwiatkowska, Yarin Gal, and Benjamin Bloem-Reddy. On the benefits of invariance in neural networks. arXiv preprint arXiv:2005.00178, 2020. +[34] Kaitlin Maile, Dennis George Wilson, and Patrick Forre. Equivariance-aware architectural optimization of neural networks. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=a6rCdfABJXg. +[35] Giovanni Luca Marchetti, Gustaf Tegnér, Anastasiia Varava, and Danica Kragic. Equivariant representation learning via class-posedecomposition. arXiv preprint arXiv:2207.03116, 2022. + +[36] Haggai Maron, Ethan Fetaya, Nimrod Segol, and Yaron Lipman. On the universality of invariant networks. In International conference on machine learning, pages 4363-4371. PMLR, 2019. +[37] Haggai Maron, Or Litany, Gal Chechik, and Ethan Fetaya. On learning sets of symmetric elements. In International conference on machine learning, pages 6734-6744. PMLR, 2020. 
[38] Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh, Adam Foster, and Hyunjik Kim. Learning instance-specific augmentations by capturing local invariances. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 24720-24736. PMLR, 23-29 Jul 2023. URL https://proceedings.mlr.press/v202/miao23a.html.
[39] Arnab Kumar Mondal, Vineet Jain, Kaleem Siddiqi, and Siamak Ravanbakhsh. EqR: Equivariant representations for data-efficient reinforcement learning. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 15908-15926. PMLR, 17-23 Jul 2022. URL https://proceedings.mlr.press/v162/mondal22a.html.
[40] Artem Moskalev, Anna Sepliarskaia, Ivan Sosnovik, and Arnold W.M. Smeulders. LieGG: Studying learned lie group generators. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=9sKZ60VtRmi.
[41] Chuer Pan, Brian Okorn, Harry Zhang, Ben Eisner, and David Held. TAX-Pose: Task-specific cross-pose estimation for robot manipulation. In Conference on Robot Learning, pages 1783–1792. PMLR, 2023.
[42] Jung Yeon Park, Ondrej Biza, Linfeng Zhao, Jan Willem van de Meent, and Robin Walters. Learning symmetric representations for equivariant world model. In International Conference on Machine Learning, 2022. URL https://arxiv.org/abs/2204.11371.
[43] Robin Quessard, Thomas Barrett, and William Clements. Learning disentangled representations and group structure of dynamical environments. Advances in Neural Information Processing Systems, 33:19727-19737, 2020.
[44] Cédric Rommel, Thomas Moreau, Joseph Paillard, and Alexandre Gramfort. Cadda: Class-wise automatic differentiable data augmentation for eeg signals. In International Conference on Learning Representations, 2021.
[45] Hyunwoo Ryu, Hong-in Lee, Jeong-Hoon Lee, and Jongeun Choi. Equivariant descriptor fields: SE(3)-equivariant energy-based models for end-to-end visual robotic manipulation learning. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=dnjZSPGmY50.
[46] Akiyoshi Sannai, Masaaki Imaizumi, and Makoto Kawano. Improved generalization bounds of group invariant/equivariant deep networks via quotient feature spaces. In Uncertainty in Artificial Intelligence, pages 771-780. PMLR, 2021.
[47] Uwe Schmidt and Stefan Roth. Learning rotation-aware features: From invariant priors to equivariant descriptors. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2050–2057. IEEE, 2012.
[48] Anthony Simeonov, Yilun Du, Andrea Tagliasacchi, Joshua B Tenenbaum, Alberto Rodriguez, Pulkit Agrawal, and Vincent Sitzmann. Neural descriptor fields: SE(3)-equivariant object representations for manipulation. In 2022 International Conference on Robotics and Automation (ICRA), pages 6394-6400. IEEE, 2022.
[49] Anthony Simeonov, Yilun Du, Yen-Chen Lin, Alberto Rodriguez Garcia, Leslie Pack Kaelbling, Tomás Lozano-Pérez, and Pulkit Agrawal. SE(3)-equivariant relational rearrangement with neural descriptor fields. In Conference on Robot Learning, pages 835-846. PMLR, 2023.

[50] Kihyuk Sohn and Honglak Lee. Learning invariant representations with local transformations. In John Langford and Joelle Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML-12), ICML '12, pages 1311-1318, New York, NY, USA, July 2012. Omnipress. ISBN 978-1-4503-1285-1.
[51] Jure Sokolic, Raja Giryes, Guillermo Sapiro, and Miguel Rodrigues.
Generalization error of invariant classifiers. In Artificial Intelligence and Statistics, pages 1094-1103. PMLR, 2017. +[52] Elise van der Pol, Daniel Worrall, Herke van Hoof, Frans Oliehoek, and Max Welling. Mdp homomorphic networks: Group symmetries in reinforcement learning. Advances in Neural Information Processing Systems, 33, 2020. +[53] Robin Walters, Jinxi Li, and Rose Yu. Trajectory prediction using equivariant continuous convolution. arXiv preprint arXiv:2010.11344, 2020. +[54] Dian Wang, Robin Walters, Xupeng Zhu, and Robert Platt. Equivariant $Q$ learning in spatial action spaces. In 5th Annual Conference on Robot Learning, 2021. URL https://openreview.net/forum?id=IScz42A3iCI. +[55] Dian Wang, Mingxi Jia, Xupeng Zhu, Robin Walters, and Robert Platt. On-robot learning with equivariant models. In 6th Annual Conference on Robot Learning, 2022. URL https://openreview.net/forum?id=K8W60bPZQyh. +[56] Dian Wang, Robin Walters, and Robert Platt. SO(2)-equivariant reinforcement learning. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=7F9c0hdvfk_. +[57] Dian Wang, Colin Kohler, Xupeng Zhu, Mingxi Jia, and Robert Platt. Bulletarm: An open-source robotic manipulation benchmark and learning framework. In Robotics Research, pages 335-350. Springer, 2023. +[58] Dian Wang, Jung Yeon Park, Neel Sortur, Lawson L.S. Wong, Robin Walters, and Robert Platt. The surprising effectiveness of equivariant models in domains with latent symmetry. In International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=P4MUGRM4Acu. +[59] Rui Wang, Robin Walters, and Rose Yu. Incorporating symmetry into deep dynamics models for improved generalization. arXiv preprint arXiv:2002.03061, 2020. +[60] Maurice Weiler and Gabriele Cesa. General e (2)-equivariant steerable cnns. Advances in Neural Information Processing Systems, 32, 2019. 
[61] Robin Winter, Marco Bertolini, Tuan Le, Frank Noe, and Djork-Arné Clevert. Unsupervised learning of group invariant and equivariant representations. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=471pv23LDPr.
[62] Dmitry Yarotsky. Universal approximations of invariant maps by neural networks. Constructive Approximation, 55(1):407-474, 2022.
[63] Linfeng Zhao, Xupeng Zhu, Lingzhi Kong, Robin Walters, and Lawson L.S. Wong. Integrating symmetry into differentiable planning with steerable convolutions. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=n7CPzMPKQ1.
[64] Allan Zhou, Tom Knowles, and Chelsea Finn. Meta-learning symmetries by reparameterization. arXiv preprint arXiv:2007.02933, 2020.
[65] Xupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su, Robin Walters, and Robert Platt. Sample efficient grasp learning using equivariant models. In Robotics: Science and Systems, 2022.

# A Integrals on the Group

**Fundamental Domains** In this paper, we are interested in cases in which the group $G$ is not necessarily discrete but may have positive dimension. We do not assume the fundamental domain has non-empty interior, and thus "domain" is a misnomer. In this case, the conjugates $gF$ of the fundamental domain have measure 0, and the condition that their intersections have measure 0 is vacuous. Instead, we assume a stronger condition: that the union of all pairwise intersections $\bigcup_{g_1 \neq g_2} (g_1F \cap g_2F)$ has measure 0. We also require that $F$ and the orbits $Gx$ are differentiable manifolds, such that integrals over $X$ may be evaluated as $\int_X f(x) dx = \int_F \int_{Gy} f(z) dz\, dy$ , similar to Equation 8 from [18].

**Reparameterization** Consider the integral

$$
\int_{Gx} f(z)\, dz.
\tag {7}
$$

Denote the identification of the orbit $Gx$ with the coset space $G / G_x$ with respect to the stabilizer $G_x = \{g : gx = x\}$ by $a_x \colon G / G_x \to Gx$. Then the integral can be written

$$
\int_ {G / G _ {x}} f (\bar {g} x) \left| \frac {\partial a _ {x} (\bar {g})}{\partial \bar {g}} \right| d \bar {g}.
$$

We can also lift the integral to $G$ itself:

$$
\begin{array}{l} \int_ {G / G _ {x}} f (\bar {g} x) \left| \frac {\partial a _ {x} (\bar {g})}{\partial \bar {g}} \right| d \bar {g} = \left(\int_ {G _ {x}} d h\right) ^ {- 1} \left(\int_ {G _ {x}} d h\right) \int_ {G / G _ {x}} f (\bar {g} x) \left| \frac {\partial a _ {x} (\bar {g})}{\partial \bar {g}} \right| d \bar {g} \\ = \left(\int_ {G _ {x}} d h\right) ^ {- 1} \int_ {G / G _ {x}} \int_ {G _ {x}} f (\bar {g} h x) \left| \frac {\partial a _ {x} (\bar {g})}{\partial \bar {g}} \right| d h d \bar {g} \\ = \left(\int_ {G _ {x}} d h\right) ^ {- 1} \int_ {G} f (g x) \left| \frac {\partial a _ {x} (\bar {g})}{\partial \bar {g}} \right| d g. \\ \end{array}
$$

Define $\alpha(g, x) = \left( \int_{G_x} dh \right)^{-1} \left| \frac {\partial a _ {x} (\bar {g})}{\partial \bar {g}} \right|$. Then

$$
\int_ {G x} f (z) d z = \int_ {G} f (g x) \alpha (g, x) d g.
$$

# B Iterated Integral

Let $X$ be an $n$-dimensional space. Definition 4.2 (Equation 2) defines $k(Gx)$ as an integral over $Gx \subseteq X$, which is an $m$-dimensional sub-manifold of $X$. In Theorem 4.3, Equation 4 rewrites the error function (Equation 1) as an iterated integral over the orbit $Gx$ and then the fundamental domain $F$ using Definition 4.1. In the discrete group case, $m$ is 0 and Equation 2 is an integral of a 0-form over a 0-manifold, i.e., a sum:

$$
k (G x) = \min _ {y \in Y} \sum_ {z \in G x} p (z) \mathbb {1} (f (z) \neq y) = \min _ {y \in Y} \sum_ {g \in G} p (g x) \mathbb {1} (f (g x) \neq y) \tag {8}
$$

# C Proof of Proposition 4.7

Proof.
Consider the integral of the probability density over $Gx$. For a given $y$, it can be split into two parts:

$$
\begin{array}{l} \int_ {G x} p (z) d z = \int_ {G x} p (z) \mathbb {1} (f (z) = y) d z \\ + \int_ {G x} p (z) \mathbb {1} (f (z) \neq y) d z. \\ \end{array}
$$

We can then rewrite $k(Gx)$ in Equation 2 as:

$$
k (G x) = \min _ {y \in Y} \left[ \int_ {G x} p (z) d z - \int_ {G x} p (z) \mathbb {1} (f (z) = y) d z \right]. \tag {9}
$$

Letting $(Gx)_y = \{x' \in Gx \mid f(x') = y\} = f^{-1}(y) \cap Gx$, Equation 9 can be written as:

$$
k (G x) = \min _ {y \in Y} \left[ \int_ {G x} p (z) d z - \int_ {(G x) _ {y}} p (z) d z \right]
$$

$$
= \int_ {G x} p (z) d z - \max _ {y \in Y} \int_ {(G x) _ {y}} p (z) d z.
$$

Theorem 4.3 can then be rewritten as:

$$
\begin{array}{l} \operatorname {e r r} (h) \geq \int_ {F} \Big (\int_ {G x} p (z) d z - \max _ {y \in Y} \int_ {(G x) _ {y}} p (z) d z \Big) d x \\ \geq \int_ {F} \int_ {G x} p (z) d z d x - \int_ {F} \max _ {y \in Y} \int_ {(G x) _ {y}} p (z) d z d x \\ \geq 1 - \int_ {F} \max _ {y \in Y} | (G x) _ {y} | p (x) d x. \tag {10} \\ \end{array}
$$

The first term in Equation 10 uses the fact that $X = \bigcup_{x \in F} Gx$, so the integral of the probability of the orbits of all points in the fundamental domain equals the integral of the probability over the input domain $X$, which is 1. The second term of Equation 10 uses $p(gx) = p(x)$, so the integral of $p(z)$ over $(Gx)_y$ becomes $p(x)$ times the measure of the integration domain, which is the size $|(Gx)_y|$ of $(Gx)_y$.

Now consider a partition of $F = \coprod_{q} F_{q}$ where $F_{q} = \{x \in F : (\max_{y \in Y} |(Gx)_{y}|) / |Gx| = q\}$. We can rewrite Equation 10 as:

$$
\operatorname {e r r} (h) \geq 1 - \int_ {F} q | G x | p (x) d x \tag {11}
$$

$$
\geq 1 - \sum_ {q} \int_ {F _ {q}} q | G x | p (x) d x \tag {12}
$$

$$
\geq 1 - \sum_ {q} q \int_ {F _ {q}} | G x | p (x) d x. \tag {13}
$$

Equation 11 uses the definition of $q$.
Equation 12 separates the integral over $F$ according to the partition of $F$. Equation 13 moves $q$ out of the integral because it is constant inside the integral. From the definition of $c_{q}$, we have:

$$
\begin{array}{l} c _ {q} = \mathbb {P} (x \in X _ {q}) \\ = \int_ {X _ {q}} p (x) d x \\ = \int_ {F _ {q}} \int_ {G x} p (z) d z d x \tag {14} \\ = \int_ {F _ {q}} | G x | p (x) d x. \tag {15} \\ \end{array}
$$

Equation 14 uses $X_{q} = \bigcup_{x\in F_{q}}Gx$. Equation 15 uses $p(x) = p(gx)$. Now we can write Equation 13 as:

$$
\operatorname {e r r} (h) \geq 1 - \sum_ {q} q c _ {q}.
$$

□

# D Proof of Theorem 4.9

Define $q(gx)\in \mathbb{R}^{n\times n}$ such that

$$
Q _ {G x} q (g x) = p (g x) \rho_ {Y} (g) ^ {T} \rho_ {Y} (g) \alpha (x, g). \tag {16}
$$

In particular, $q(gx)$ exists when $Q_{Gx}$ is full rank. It follows that $Q_{Gx}\int_{G}q(gx)dg = Q_{Gx}$. Moreover, $Q_{Gx}$ and $q(gx)$ are symmetric matrices.

Proof. The error function (Equation 1) can be written

$$
\begin{array}{l} \operatorname {e r r} (h) = \mathbb {E} _ {x \sim p} [ | | f (x) - h (x) | | _ {2} ^ {2} ] \\ = \int_ {X} p (x) | | f (x) - h (x) | | _ {2} ^ {2} d x \\ = \int_ {x \in F} \int_ {g \in G} p (g x) | | f (g x) - h (g x) | | _ {2} ^ {2} \alpha (x, g) d g d x. \\ \end{array}
$$

Denote $e(x) = \int_{G} p(gx) ||f(gx) - h(gx)||_2^2 \alpha(x, g) dg$. Since $h$ is $G$-equivariant, for each $x \in F$ the value $c = h(x) \in \mathbb{R}^n$ of $h$ at $x$ determines the value of $h$ across the whole orbit: $h(gx) = gh(x) = gc$ for $g \in G$. Then $e(x)$ can be written

$$
\begin{array}{l} e (x) = \int_ {G} p (g x) | | f (g x) - g c | | _ {2} ^ {2} \alpha (x, g) d g \\ = \int_ {G} p (g x) | | g (g ^ {- 1} f (g x) - c) | | _ {2} ^ {2} \alpha (x, g) d g \\ = \int_ {G} (g ^ {- 1} f (g x) - c) ^ {T} p (g x) g ^ {T} g \alpha (x, g) (g ^ {- 1} f (g x) - c) d g \\ = \int_ {G} (g ^ {- 1} f (g x) - c) ^ {T} Q _ {G x} q (g x) (g ^ {- 1} f (g x) - c) d g.
\tag {17} \\ \end{array}
$$

Taking the derivative of $e(x)$ with respect to $c$ we have

$$
\begin{array}{l} \frac {\partial e (x)}{\partial c} = \int_ {G} \left(\left(Q _ {G x} q (g x)\right) ^ {T} + \left(Q _ {G x} q (g x)\right)\right) \left(c - g ^ {- 1} f (g x)\right) d g \\ = \int_ {G} 2 Q _ {G x} q (g x) \left(c - g ^ {- 1} f (g x)\right) d g. \\ \end{array}
$$

Setting $\partial e(x) / \partial c = 0$ we can find an equation for the $c^*$ which minimizes $e(x)$:

$$
\begin{array}{l} Q _ {G x} \int_ {G} q (g x) d g \cdot c ^ {*} = Q _ {G x} \int_ {G} q (g x) g ^ {- 1} f (g x) d g \\ Q _ {G x} c ^ {*} = Q _ {G x} \mathbf {E} _ {G} [ f, x ]. \tag {18} \\ \end{array}
$$

Substituting $c^*$ into Equation 17 we have

$$
\begin{array}{l} e (x) \geq \int_ {G} \left(g ^ {- 1} f (g x) - c ^ {*}\right) ^ {T} Q _ {G x} q (g x) \left(g ^ {- 1} f (g x) - c ^ {*}\right) d g \\ = \int_ {G} (g ^ {- 1} f (g x)) ^ {T} Q _ {G x} q (g x) (g ^ {- 1} f (g x)) \\ - \left(c ^ {* T} Q _ {G x} q (g x) g ^ {- 1} f (g x)\right) ^ {T} \tag {19} \\ - c ^ {* T} Q _ {G x} q (g x) g ^ {- 1} f (g x) \\ + c ^ {* T} Q _ {G x} q (g x) c ^ {*} d g. \\ \end{array}
$$

The term $\int_{G}c^{*T}Q_{Gx}q(gx)g^{-1}f(gx)dg$ can be simplified as

$$
\int_ {G} c ^ {* T} Q _ {G x} q (g x) g ^ {- 1} f (g x) d g = \int_ {G} \mathbf {E} _ {G} ^ {T} [ f, x ] Q _ {G x} q (g x) g ^ {- 1} f (g x) d g. \tag {20}
$$

Notice that $Q_{Gx}$ and $q(gx)$ are symmetric matrices:

$$
\begin{array}{l} \int_ {G} c ^ {* T} Q _ {G x} q (g x) c ^ {*} d g = \int_ {G} c ^ {* T} q (g x) Q _ {G x} c ^ {*} d g \\ = \int_ {G} \mathbf {E} _ {G} ^ {T} [ f, x ] Q _ {G x} q (g x) \mathbf {E} _ {G} [ f, x ] d g.
\\ \end{array}
$$

Thus Equation 19 becomes

$$
\begin{array}{l} e (x) \geq \int_ {G} \left(g ^ {- 1} f (g x)\right) ^ {T} Q _ {G x} q (g x) \left(g ^ {- 1} f (g x)\right) \\ - \left(\mathbf {E} _ {G} ^ {T} [ f, x ] Q _ {G x} q (g x) g ^ {- 1} f (g x)\right) ^ {T} \\ - \mathbf {E} _ {G} ^ {T} [ f, x ] Q _ {G x} q (g x) g ^ {- 1} f (g x) \\ + \mathbf {E} _ {G} ^ {T} [ f, x ] Q _ {G x} q (g x) \mathbf {E} _ {G} [ f, x ] d g \\ = \int_ {G} p (g x) | | f (g x) - g \mathbf {E} _ {G} [ f, x ] | | _ {2} ^ {2} \alpha (x, g) d g. \\ \end{array}
$$

Taking the integral over the fundamental domain $F$ we have

$$
\begin{array}{l} \operatorname {e r r} (h) = \int_ {F} e (x) d x \\ \geq \int_ {F} \int_ {G} p (g x) | | f (g x) - g \mathbf {E} _ {G} [ f, x ] | | _ {2} ^ {2} \alpha (x, g) d g d x. \tag {21} \\ \end{array}
$$

![](images/5a27285b827189e173d5ee1cb8dd0ef1626e056ed6541077c71dacf2f8afab2f.jpg)

# E Proof of Corollary 4.10

Proof. When $\rho_{Y}$ is an orthogonal representation, we have $\rho_{Y}(g)^{T}\rho_{Y}(g) = I_{n}$, i.e., the identity matrix. Then $q(gx)$ can be written as $q(gx) = s(gx)\mathrm{Id}$ where $s(gx)$ is a scalar. Since $\int_{G}q(gx)dg = \mathrm{Id}$, we can re-define $q(gx)$ to drop $\mathrm{Id}$ and keep only the scalar; then $q_{x}(g)$ can be viewed as a probability density function of $g$ because now $\int_{G}q_{x}(g) dg = 1$.

With $q_{x}(g)$ being a probability density function, $\mathbf{E}_G[f,x]$ (Equation 6) naturally becomes the mean $\mathbb{E}_G[f_x]$ where $g\sim q_{x}$.
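For intuition, this weighted-mean characterization can be checked numerically. The sketch below assumes a toy setup (the finite group $C_4$ acting on $\mathbb{R}^2$ by rotation, with randomly chosen weights and outputs; it is an illustration, not part of the paper's development) and verifies that the weighted mean of the back-rotated outputs minimizes the weighted squared error:

```python
import numpy as np

# Toy sketch: G = C4 acting on R^2 by rotation (an orthogonal representation),
# so rho(g)^T rho(g) = I and q_x reduces to a probability mass function.
def rho(k):
    t = k * np.pi / 2
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

rng = np.random.default_rng(0)
q = rng.random(4)
q /= q.sum()                      # q_x(g): a pmf over the 4 group elements
fgx = rng.standard_normal((4, 2)) # arbitrary values f(gx), one per g

# E_G[f_x] = sum_g q_x(g) g^{-1} f(gx): weighted mean of back-rotated outputs.
mean = sum(q[k] * rho(k).T @ fgx[k] for k in range(4))

def e(c):
    # Weighted squared error sum_g q_x(g) ||g^{-1} f(gx) - c||^2.
    return sum(q[k] * np.sum((rho(k).T @ fgx[k] - c) ** 2) for k in range(4))

# The weighted mean minimizes e: any perturbation increases the objective.
for _ in range(100):
    assert e(mean) <= e(mean + 0.1 * rng.standard_normal(2)) + 1e-12
```

This is just the familiar fact that a weighted centroid minimizes a weighted sum of squared distances; the orthogonality of $\rho_Y$ is what reduces the general quadratic form of Theorem 4.9 to this case.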
Now consider $e(x) = \int_{G} p(gx) || f(gx) - g\mathbf{E}_{G}[f_{x}]||_{2}^{2}\alpha (x,g)dg$ from Theorem 4.9; it can be written as

$$
\begin{array}{l} e (x) = \int_ {G} p (g x) | | f (g x) - g \mathbb {E} _ {G} [ f _ {x} ] | | _ {2} ^ {2} \alpha (x, g) d g \\ = \int_ {G} p (g x) | | g \left(g ^ {- 1} f (g x) - \mathbb {E} _ {G} [ f _ {x} ]\right) | | _ {2} ^ {2} \alpha (x, g) d g \\ = \int_ {G} p (g x) \left(g ^ {- 1} f (g x) - \mathbb {E} _ {G} [ f _ {x} ]\right) ^ {T} \rho_ {Y} (g) ^ {T} \rho_ {Y} (g) \left(g ^ {- 1} f (g x) - \mathbb {E} _ {G} [ f _ {x} ]\right) \alpha (x, g) d g. \\ \end{array}
$$

Since $\rho_Y(g)^T\rho_Y(g) = I_n$, we have

$$
\begin{array}{l} e (x) = \int_ {G} p (g x) \left(g ^ {- 1} f (g x) - \mathbb {E} _ {G} [ f _ {x} ]\right) ^ {T} \left(g ^ {- 1} f (g x) - \mathbb {E} _ {G} [ f _ {x} ]\right) \alpha (x, g) d g \\ = \int_ {G} p (g x) | | g ^ {- 1} f (g x) - \mathbb {E} _ {G} [ f _ {x} ] | | _ {2} ^ {2} \alpha (x, g) d g. \tag {22} \\ \end{array}
$$

From Equation 5 we have $p(gx)\alpha(x, g) = Q_{Gx}q(gx)$. Substituting into Equation 22 we have

$$
e (x) = \int_ {G} Q _ {G x} q (g x) | | g ^ {- 1} f (g x) - \mathbb {E} _ {G} [ f _ {x} ] | | _ {2} ^ {2} d g.
$$

Since $Q_{Gx} = \int_G p(gx)\alpha (x,g)dg$ when $\rho_Y(g)^T\rho_Y(g) = I_n$, we have

$$
\begin{array}{l} e (x) = Q _ {G x} \int_ {G} q _ {x} (g) | | g ^ {- 1} f (g x) - \mathbb {E} _ {G} [ f _ {x} ] | | _ {2} ^ {2} d g \\ = Q _ {G x} \mathbb {V} _ {G} [ f _ {x} ]. \tag {23} \\ \end{array}
$$

Now consider $Q_{Gx}$ (Equation 5); when $\rho_Y(g)^T\rho_Y(g) = I_n$, it can be written

$$
\begin{array}{l} Q _ {G x} = \int_ {G} p (g x) \alpha (x, g) d g \\ = \int_ {G x} p (z) d z \\ = p (G x). \\ \end{array}
$$

Replacing $Q_{Gx}$ with $p(Gx)$ in Equation 23 and then taking the integral of $e(x)$ over the fundamental domain gives the result.

# F Lower Bound of Equivariant Regression when $\rho_{Y} = \mathrm{Id}$

Proposition F.1.
When $\rho_{Y} = \mathrm{Id}$ , the error of $h$ has lower bound $\mathrm{err}(h) \geq \int_{F} p(Gx) \mathbb{V}_{Gx}[f] dx$ , which is the same as Theorem 4.8. + +Proof. Consider Equation 5, when $\rho_Y(g) = \mathrm{Id}$ , we have + +$$ +Q _ {G x} = \int_ {G} p (g x) \alpha (x, g) d g. +$$ + +Exchange the integration variable using $z = gx$ we have + +$$ +Q _ {G x} = \int_ {G x} p (z) d z. \tag {24} +$$ + +Consider $\mathbb{E}_G[f_x] = \int_G q_x(g)g^{-1}f(gx)dg$ . When $\rho_Y(g) = \mathrm{Id}$ , it becomes + +$$ +\mathbb {E} _ {G} [ f _ {x} ] = \int_ {G} q (g x) f (g x) d g. +$$ + +Substituting $q(gx)$ with Equation 5, considering $\rho_{Y}(g) = \mathrm{Id}$ , we have + +$$ +\mathbb {E} _ {G} [ f _ {x} ] = \int_ {G} Q _ {G x} ^ {- 1} p (g x) f (g x) \alpha (x, g) d g. +$$ + +Exchange the integration variable using $z = g x$ we have + +$$ +\mathbb {E} _ {G} [ f _ {x} ] = \int_ {G x} Q _ {G x} ^ {- 1} p (z) f (z) d z. +$$ + +Substituting Equation 24 we have + +$$ +\begin{array}{l} \mathbb {E} _ {G} [ f _ {x} ] = \int_ {G x} \frac {p (z)}{\int_ {G x} p (z) d z} f (z) d z \\ = \mathbb {E} _ {G x} [ f ]. \\ \end{array} +$$ + +Similarly, we can proof $\mathbb{V}_G[f_x] = \mathbb{V}_{Gx}[f]$ , thus when $\rho_Y = \mathrm{Id}$ , Corollary 4.10 is Theorem 4.8. + +![](images/5b0f7c4b4d4d713a9850f6c7e1c7d660a7318f8c0d65c621e13e2c107a688794.jpg) + +Non-equivariant model class $C_2$ -equivariant model class + +
| $\sigma^T$ | $\sup_{f_N \in \mathcal{F}_N} \frac{1}{m}\sum_{i=1}^{m} \sigma_i f_N(x_i)$ | $\sup_{f_E \in \mathcal{F}_E} \frac{1}{m}\sum_{i=1}^{m} \sigma_i f_E(x_i)$ |
| --- | --- | --- |
| $[-1,-1,-1,-1]$ | 1 | 1 |
| $[-1,-1,-1,+1]$ | 1 | 1 |
| $[-1,-1,+1,-1]$ | 1 | 1 |
| $[-1,-1,+1,+1]$ | 1 | 0.75 |
| $[-1,+1,-1,-1]$ | 1 | 1 |
| $[-1,+1,-1,+1]$ | 1 | 1 |
| $[-1,+1,+1,-1]$ | 1 | 1 |
| $[-1,+1,+1,+1]$ | 1 | 1 |
| $[+1,-1,-1,-1]$ | 1 | 1 |
| $[+1,-1,-1,+1]$ | 1 | 1 |
| $[+1,-1,+1,-1]$ | 1 | 1 |
| $[+1,-1,+1,+1]$ | 1 | 1 |
| $[+1,+1,-1,-1]$ | 1 | 0.75 |
| $[+1,+1,-1,+1]$ | 1 | 1 |
| $[+1,+1,+1,-1]$ | 1 | 1 |
| $[+1,+1,+1,+1]$ | 1 | 1 |
| $R_S$ | 1 | 31/32 |
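The empirical Rademacher complexity tabulated above averages, over all $2^m$ sign vectors $\sigma$, the supremum of the correlation between $\sigma$ and the hypothesis class. A minimal sketch of this computation (with toy hypothesis classes standing in for $\mathcal{F}_N$ and $\mathcal{F}_E$; it does not reproduce the suprema over the actual linear model classes in the table):

```python
import itertools
import numpy as np

# Empirical Rademacher complexity R_S = E_sigma[ sup_f (1/m) sum_i sigma_i f(x_i) ]
# computed by enumerating all 2^m sign vectors, as in the table above.
# The hypothesis classes below are toy stand-ins, not the paper's model classes.
def rademacher(S, hypotheses):
    m = len(S)
    total = 0.0
    for sigma in itertools.product([-1.0, 1.0], repeat=m):
        total += max(np.dot(sigma, [h(x) for x in S]) / m for h in hypotheses)
    return total / 2**m

S = [-2.0, -1.0, 1.0, 2.0]
# A richer class: both constant functions plus the sign function and its negation...
F_big = [lambda x: 1.0, lambda x: -1.0, np.sign, lambda x: -np.sign(x)]
# ...versus a more constrained class containing only the two constant functions.
F_small = [lambda x: 1.0, lambda x: -1.0]

# Constraining the class can only shrink each supremum, hence lower R_S.
assert rademacher(S, F_small) <= rademacher(S, F_big)
```

The same monotonicity is what the table illustrates: the $C_2$-equivariant class is a subset of the unconstrained linear class, so its supremum is no larger for every $\sigma$, giving $31/32 \le 1$.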
![](images/cd0e3b7a6244af49329200e84bbaaf2d30481e4651c2cd21ab9f9803a913a765.jpg)
(a) Correct

![](images/205159d42484345728454131f2103db38988fdb79c96305fe144be29cf2b278a.jpg)
(b) Incorrect

![](images/ce0d1ebef276afd9d0e05a6d07aa34e1646e983681232f44fd5b9f2c53cd7fb6.jpg)
(c) Extrinsic

![](images/9b82980f24f1d3011b8620f6994a2cbff818a9ec0be92118b986333ffff571f5.jpg)
(d) Extrinsic Invariant View

Figure 11: The correct, incorrect, and extrinsic data distribution in the Swiss Roll experiment.

# G Rademacher Complexity of Harmful Extrinsic Equivariance Example

Let $S = \{x^{1}, x^{2}, x^{3}, x^{4}\}$, where the labels are $y^{1}, y^{2} = +1$ and $y^{3}, y^{4} = -1$. We consider two model classes: $\mathcal{F}_N$, the set of all linear models, and $\mathcal{F}_E$, the set of all linear models equivariant to $C_2$; we compute their empirical Rademacher complexity on $S$.

For the data $S$, an extrinsically equivariant linear model class has lower empirical Rademacher complexity than its unconstrained linear counterpart, demonstrating that extrinsic equivariance can be harmful to learning.

# H Additional Experiments

# H.1 Swiss Roll Experiment

Figure 11 and Figure 12 show the actual data distribution for the Swiss Roll experiment in Section 6.1. In the incorrect distribution, the data in the two $z$-planes form two spirals with different labels but the same shape. The equivariance is incorrect because if we translate one spiral to the other spiral's plane, they will overlap but their labels are different. In the correct distribution, there are two different 'dashed' spirals copied into two $z$-planes. The equivariance is correct because after a $z$-translation, both the data and their labels exactly overlap. In all three cases, we assume the data has a uniform distribution. Figure 13b shows the ternary plot of MLP for all different $c$, $ir$, $er$ values, where the performance of MLP decreases as the correct ratio increases.
Figure 13a shows an inverse trend: the performance of INV increases as the correct ratio increases. Moreover, both extrinsic and incorrect equivariance harm the performance of INV, but incorrect equivariance is more devastating because the error is limited by a theoretical lower bound.

![](images/98fb4ab1399543ca51ed999df8b766e417e05fb0c4b54d86a555275391ee9ee3.jpg)
(a) $c = .5, i = .5, e = 0$

![](images/89814ed904f483327e9fdfdc54aa699b472dc44126891de05f8d7a9b064b7644.jpg)
(b) $c = .5, i = 0, e = .5$

![](images/03a0310caf3a7df2faa0c84cf2686bd7c8957db5cc267d60e695ecc10eaf6e70.jpg)
(c) $c = 0, i = .5, e = .5$

![](images/607979869ada3a63e1c73e05877849195a4bfcb9124ed1a53c2a62e9d9d8b740.jpg)
(d) $c = .5, i = .25, e = .25$

Figure 12: Data distribution example with different correct ratio $(c)$, incorrect ratio $(ir)$, and extrinsic ratio $(er)$ values.

![](images/048b0e6b7a2ceca35586f799e79e5e229e2ff70ed96a564b3fca966f187332b2.jpg)
(a)

![](images/bb93f88b9e7e007d32e2f64b2f681644b818cb5c56069467e7db2acd953e0c4a.jpg)
(b)

Figure 13: The ternary plot of the invariant network (a) and unconstrained network (b) with different correct, incorrect, and extrinsic ratios.

# H.2 Square Experiment

We consider the environment shown in Example 4.6. We vary $m \in \{0.2, 0.4, 0.6, 0.8, 1\}$ and $c \in \{0, 0.2, 0.4, 0.6, 0.8, 1\}$. We train a $u$-invariant network and evaluate its test performance against the theoretical lower bound $\mathrm{err}(h) \geq (1 - c) \times (1 - m)$. Figure 14 shows the test error of the trained network compared with the theoretical lower bound. The largest difference is below $3\%$, demonstrating the correctness of our theory.

# H.3 Regression Experiment

In this experiment, we validate our theoretical error lower bounds for invariant and equivariant regression (Theorems 4.8 and 4.9) in an environment similar to Example 4.11.
Consider a regression task \( f: \mathbb{R} \times \mathcal{X} \to \mathbb{R}^2 \) given by \( (\theta, x) \mapsto y \), where \( \mathcal{X} = \{x_0, x_1, x_2, x_3\} \). The group \( g \in G = C_4 = \{e, r, r^2, r^3\} \) acts on \( (\theta, x) \) by \( g(\theta, x) = (\theta, gx) \) through permutation: \( x_1 = rx_0; x_2 = rx_1; x_3 = rx_2; x_0 = rx_3 \). Let $r^k\in G$ act on $y$ by $\rho_Y(g) = \left( \begin{array}{cc}\cos g & -\sin g\\ \sin g & \cos g \end{array} \right)$ where $g = k\pi /2$. Note that fixing a single value of $\theta$ gives Example 4.11; in other words, this experiment has infinitely many orbits where each orbit is similar to Example 4.11.

We generate a random polynomial function $f$ that is not equivariant, i.e., $\exists (\theta, x)$ s.t. $g \cdot f(\theta, x) \neq \rho_Y(g)y$. Then we try to fit $f$ using a $G$-invariant network and a $G$-equivariant network. We measure their error against the theoretical lower bounds given by Theorems 4.8 and 4.9. As shown in Table 2, both the invariant network and the equivariant network achieve an error rate nearly the same as our theoretical bound. The empirical error is slightly higher than the theoretical error due to the neural network fitting error. Please refer to I.4 for more experiment details.

|  | Invariant Network | Equivariant Network |
| --- | --- | --- |
| Empirical/Theoretical | 1.002 ±0.000 | 1.001 ±0.000 |

Table 2: Empirical $\operatorname{err}(h)$ divided by theoretical $\operatorname{err}(h)$ for invariant regression and equivariant regression. Results are averaged over 100 runs with different $f$ for each regression. Empirical regression error matches theoretical error.

![](images/bd0f30e1c3e91a1a7b4ff99be41c50bcf88c40e836e57ab3597791e677ed80c8.jpg)
Figure 14: Result of the square experiment in terms of the $L_{1}$ distance between the network error and the theoretical lower bound in percentage. Each cell corresponds to an experiment with a particular correct ratio (c) and majority label ratio (m). Results are averaged over 10 runs.

# I Experiment Details

This section describes the details of our experiments. All experiments are performed using a single Nvidia RTX 2080 Ti graphics card.

# I.1 Swiss Roll Experiment

In the Swiss Roll Experiment in Section 6.1, we use a three-layer MLP for the unconstrained network.
For the $z$ -invariant network, we use a network with two DSS [37] layers to implement the $z$ -invariance, each containing two FC layers. We train the networks using the Adam [24] optimizer with a learning rate of $10^{-3}$ . The batch size is 128. In each run, there are 200 training data, 200 validation data, and 200 test data randomly sampled from the data distribution. The network is trained for a minimal of 1000 epochs and a maximum of 10000 epochs, where the training is terminated after there is no improvement in the classification success rate in the validation set for a consecutive of 1000 epochs. We report the test success rate of the epoch model with the highest validation success rate. + +# I.2 Square Experiment + +In the Square Experiment in Section H.2, we use a network with two DSS [37] layers to implement the horizontal invariance, where each layer contains two FC layers. We train the networks using the Adam [24] optimizer with a learning rate of $10^{-3}$ . The batch size is 128. In each run, there are 1000 training data, 200 validation data, and 200 test data randomly sampled from the data distribution. The network is trained for a minimal of 1000 epochs and a maximum of 10000 epochs, where the training is terminated after there is no improvement in the classification success rate in the validation set for a consecutive of 1000 epochs. We report the test success rate of the epoch model with the highest validation success rate. + +# I.3 Digit Classification Experiment + +In the Digit Classification Experiment in Section 6.2, we use two similar five-layer convolutional networks for the $D_{4}$ -invariant network and the CNN, where the $D_{4}$ -invariant network is implemented using the e2cnn package [60]. Both networks have the similar amount of trainable parameters. 
We train the networks using the Adam [24] optimizer with a learning rate of $5 \times 10^{-5}$ and weight decay of $10^{-5}$. The batch size is 256. In each run, there are 5000 training samples, 1000 validation samples, and 1000 test samples randomly sampled from the data distribution. The network is trained for a minimum of 50 epochs and a maximum of 1000 epochs, where the training is terminated after there is no improvement in the classification success rate on the validation set for 50 consecutive epochs. We report the test success rate of the model from the epoch with the highest validation success rate.

![](images/355eefaf4fce4687233940fbb148e88a28e1f8685a30725e25487d29f81a3982.jpg)
Figure 15: The robotic experiment setup and the $D_{1}$-equivariant policy network.

# I.4 Regression Experiment

In the regression experiment, we validate our theoretical error lower bounds for invariant and equivariant regression (Theorems 4.8 and 4.9) by comparing the empirical network fitting error with the theoretical fitting error of a function $f$. Specifically, the function $f$ maps a distance $\theta$ and an index $x$ pair to a vector $y$:

$$
f: \mathbb {R} \times \mathcal {X} \rightarrow \mathbb {R} ^ {2}, \text { given by } (\theta , x) \mapsto y \tag {25}
$$

where $\mathcal{X} = \{x_0, x_1, x_2, x_3\}$. The group $g \in G = C_4 = \{e, r, r^2, r^3\}$ acts on $(\theta, x)$ by $g(\theta, x) = (\theta, gx)$ through permuting the index $x$: $x_1 = rx_0; x_2 = rx_1; x_3 = rx_2; x_0 = rx_3$. Let $r^k \in G$ act on the vector $y$ by the rotation $\rho_Y(g) = \begin{pmatrix} \cos g & -\sin g \\ \sin g & \cos g \end{pmatrix}$ where $g = k\pi / 2$.

We construct the function $f$ in the following way: for each $x \in \mathcal{X}$, choose $l_x: \mathbb{R} \to \mathbb{R}^2$ and define $f(\theta, x) = l_x(\theta)$. Notice that when $l_{gx}(\theta) = \rho_Y(g)l_x(\theta)$, $f$ is $G$-equivariant.
We define $l_x(\theta) = (p_x(\theta), q_x(\theta))$ where $p_x$ and $q_x$ are cubic polynomials of $\theta$; i.e., $p_x$ with coefficients $a, b, c, d$ is $p_x(\theta) = a\theta^3 + b\theta^2 + c\theta + d$. We choose $p_x$ and $q_x$ with different coefficients for each $x$ such that $f$ is not equivariant, i.e., $l_{gx}(\theta) \neq \rho_Y(g)l_x(\theta)$. For each run, we generate a function $f$, sample data $(\theta, x)$, and evaluate the data to obtain $y$. Then we train neural networks using the L2 loss until convergence. Finally, we sample another set of data to evaluate the empirical L2 error as well as the theoretical L2 error.

# I.5 Robotic Experiment

In the robotic manipulation experiment, the state $s$ is defined as a top-down RGBD image of the workspace centered at the gripper's position (Figure 15, middle). The action $a = (x,y,z,\theta ,\lambda)$ is defined as the change of position $(x,y,z)$ and top-down orientation $(\theta)$ of the gripper, with the
In the Stacking (correct equivariance) and Sorting (incorrect equivariance) experiment, we gather a total of 400 episodes of demonstrations, where $400c$ of them are Stacking and the rest $400(1 - c)$ are Sorting. In evaluation, the task follows the same distribution, where $100c$ of the evaluation episodes are Stacking and the rest are Sorting. Notice that the agent can distinguish the Stacking and Sorting tasks because the object colors are different for the two tasks (green and blue for stacking, yellow and red for sorting). In the Sorting (extrinsic equivariance) experiment, we also use 400 episodes of demonstrations. + +Specifically, in Sorting, the cube and the triangle are initially placed randomly, within a distance of $\pm 1.5cm$ from the horizontal mid-line of the workspace. The objective is to push the triangle at least $9cm$ toward left and to push the cube at least $9cm$ toward right, while ensuring that both objects remain within the boundaries of workspace. In Stacking, two blocks are randomly initialized on the floor of the workspace. The goal is to pick up the triangle and place it on top of the cube. The workspace has a size of $30cm \times 30cm \times 25cm$ . 
\ No newline at end of file diff --git a/ageneraltheoryofcorrectincorrectandextrinsicequivariance/images.zip b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e14eaaef032bde7d93727f98ef99def3074666ad --- /dev/null +++ b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a563e626d8f10ca28ca68c26eb9626d6e5ac4127d25bdca7bad05a41880e8b76 +size 899009 diff --git a/ageneraltheoryofcorrectincorrectandextrinsicequivariance/layout.json b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b03870934fe229c4683eea17a788976e07476fa0 --- /dev/null +++ b/ageneraltheoryofcorrectincorrectandextrinsicequivariance/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:861b43860ecf7d2d0e198f282aa446b3c5f7549e225f83e82cf30587797bab2d +size 1235489 diff --git a/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/a5ae146d-cb3d-46d0-ae82-b1391fd04e76_content_list.json b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/a5ae146d-cb3d-46d0-ae82-b1391fd04e76_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..af2ba2e436ebc63f262145d2f740b96bde57f769 --- /dev/null +++ b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/a5ae146d-cb3d-46d0-ae82-b1391fd04e76_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d48a52023bd3028b81fbd07abe6eeab3033950ab07102eee9bb2adc8da1da1b9 +size 229473 diff --git a/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/a5ae146d-cb3d-46d0-ae82-b1391fd04e76_model.json b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/a5ae146d-cb3d-46d0-ae82-b1391fd04e76_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..808a989e086334d83fdae05f320a34879deebb29 --- /dev/null +++ b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/a5ae146d-cb3d-46d0-ae82-b1391fd04e76_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58cc32c82a1b706547f5a70e57971cb4bb5ab7acce49757c69b6d73dfddbc612 +size 266389 diff --git a/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/a5ae146d-cb3d-46d0-ae82-b1391fd04e76_origin.pdf b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/a5ae146d-cb3d-46d0-ae82-b1391fd04e76_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a132f21d40a714e137ff8b6001a10be52034a1fc --- /dev/null +++ b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/a5ae146d-cb3d-46d0-ae82-b1391fd04e76_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e34c879d8c161953a4273c24b7f2601881de3a8191133e2d0ec5c52d6ff7ffd9 +size 5084450 diff --git a/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/full.md b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..485117c7add7e691d63e68e192fd29c36cbfdf05 --- /dev/null +++ b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/full.md @@ -0,0 +1,1104 @@ +# A Graph-Theoretic Framework for Understanding Open-World Semi-Supervised Learning + +Yiyou Sun, Zhenmei Shi, Yixuan Li + +Department of Computer Sciences + +University of Wisconsin, Madison + +{sunyiyou,zhmeishi,sharonli}@cs.wisc.edu + +# Abstract + +Open-world semi-supervised learning aims at inferring both known and novel classes in unlabeled data, by harnessing prior knowledge from a labeled set with known classes. Despite its importance, there is a lack of theoretical foundations for this problem. 
This paper bridges the gap by formalizing a graph-theoretic framework tailored for the open-world setting, where the clustering can be theoretically characterized by graph factorization. Our graph-theoretic framework illuminates practical algorithms and provides guarantees. In particular, based on our graph formulation, we apply the algorithm called Spectral Open-world Representation Learning (SORL), and show that minimizing our loss is equivalent to performing spectral decomposition on the graph. Such equivalence allows us to derive a provable error bound on the clustering performance for both known and novel classes, and analyze rigorously when labeled data helps. Empirically, SORL can match or outperform several strong baselines on common benchmark datasets, which is appealing for practical usage while enjoying theoretical guarantees. Our code is available at https://github.com/deeplearning-wisc/sorl. + +# 1 Introduction + +Machine learning models in the open world inevitably encounter data from both known and novel classes [2, 15, 16, 65, 79]. Traditional supervised machine learning models are trained on a closed set of labels, and thus can struggle to effectively cluster new semantic concepts. On the other hand, open-world semi-supervised learning approaches, such as those discussed in studies [7, 63, 69], enable models to distinguish both known and novel classes, making them highly desirable for real-world scenarios. As shown in Figure 1, the learner has access to a labeled training dataset $\mathcal{D}_l$ (from known classes) as well as a large unlabeled dataset $\mathcal{D}_u$ (from both known and novel classes). By optimizing feature representations jointly from both labeled and unlabeled data, the learner aims to create meaningful cluster structures that correspond to either known or novel classes. 
With the explosive growth of data generated in various domains, open-world semi-supervised learning has emerged as a crucial problem in the field of machine learning.

![](images/fc3ba87d2715f96a0fa8c711c099d477fa4144adc1d301479bffe7cba3eb89d3.jpg)
Figure 1: Open-world Semi-supervised Learning aims to correctly cluster samples in the novel class and classify samples in the known classes by utilizing knowledge from the labeled data. An open question is "what is the role of the label information in shaping representations for both known and novel classes?" This paper aims to provide a formal understanding.

Motivation. Different from self-supervised learning [5, 8, 11, 12, 23, 26, 68, 77], open-world semi-supervised learning allows harnessing the power of the labeled data for possible knowledge sharing and transfer to unlabeled data, and from known classes to novel classes. In this joint learning process, we argue that interesting intricacies can arise—the labeled data provided may be beneficial or unhelpful to the resulting clusters. We exemplify the nuances in Figure 1. In one scenario, when the model learns the labeled known classes (e.g., traffic light) by pushing red and green lights closer, such a relationship might transfer to help cluster green and red apples into a coherent cluster. Alternatively, when the connection between the labeled data and the novel class (e.g., flower) is weak, the benefits might be negligible. We argue—perhaps obviously—that a formalized understanding of the intricate phenomenon is needed.

Theoretical significance. To date, theoretical understanding of open-world semi-supervised learning is still in its infancy. In this paper, we aim to fill this critical gap by analyzing this important learning problem from a rigorous theoretical standpoint. Our exposition gravitates around the open question: what is the role of labeled data in shaping representations for both known and novel classes?
To answer this question, we formalize a graph-theoretic framework tailored for the open-world setting, where the vertices are all the data points and connected sub-graphs form classes (either known or novel). The edges are defined by a combination of supervised and self-supervised signals, which reflects the availability of both labeled and unlabeled data. Importantly, this graph facilitates the understanding of open-world semi-supervised learning from a spectral analysis perspective, where the clustering can be theoretically characterized by graph factorization. Based on the graph-theoretic formulation, we derive a formal error bound by contrasting the clustering performance for all classes, before and after adding the labeling information. Our Theorem 4.2 reveals a sufficient condition for improved clustering performance for a class. Under the K-means measure, the unlabeled samples in one class can be better clustered if their overall connection to the labeled data is stronger than their self-clusterability.

Practical significance. Our graph-theoretic framework also illuminates practical algorithms with provable guarantees. In particular, based on our graph formulation, we present the algorithm called Spectral Open-world Representation Learning (SORL) adapted from Sun et al. [64]. Minimizing this loss is equivalent to performing spectral decomposition on the graph (Section 3.2), which brings two key benefits: (1) it allows us to analyze the representation space and the resulting clustering performance in closed form; (2) practically, it enables end-to-end training in the context of deep networks. We show that our learning algorithm leads to strong empirical performance while enjoying theoretical guarantees. The learning objective can be effectively optimized using stochastic gradient descent on modern neural network architectures, making it desirable for real-world applications.
# 2 Problem Setup

We formally describe the data setup and learning goal of open-world semi-supervised learning [7].

Data setup. We consider the empirical training set $\mathcal{D}_l\cup \mathcal{D}_u$ as a union of labeled and unlabeled data.

1. The labeled set $\mathcal{D}_l = \{\bar{x}_i, y_i\}_{i=1}^n$ , with $y_i \in \mathcal{Y}_l$ . The label set $\mathcal{Y}_l$ is known.
2. The unlabeled set $\mathcal{D}_u = \{\bar{x}_i\}_{i=1}^m$ , where each sample $\bar{x}_i$ can come from either known or novel classes. Note that we do not have access to the labels in $\mathcal{D}_u$ . For mathematical convenience, we denote the underlying label set as $\mathcal{Y}_{\mathrm{all}}$ , where $\mathcal{Y}_l \subset \mathcal{Y}_{\mathrm{all}}$ . We denote by $C = |\mathcal{Y}_{\mathrm{all}}|$ the total number of classes.

The data setup has practical value for real-world applications. For example, the labeled set is common in supervised learning, and the unlabeled set can be gathered for free from the model's operating environment or the internet. We use $\mathcal{P}_l$ and $\mathcal{P}$ to denote the marginal distributions of labeled data and all data in the input space, respectively. Further, we let $\mathcal{P}_{l_i}$ denote the distribution of labeled samples with class label $i\in \mathcal{Y}_l$ .

Learning target. Under this setting, our goal is to learn distinguishable representations for both known and novel classes simultaneously. The representation quality will be measured using classic metrics, such as K-means clustering accuracy, which we define mathematically in Section 4.2.2. Unlike classic semi-supervised learning [86], we place no assumption on the unlabeled data and allow its semantic space to cover both known and novel classes. The problem is also referred to as open-world representation learning [63], which emphasizes the role of good representations in distinguishing both known and novel classes.

Theoretical analysis goal.
We aim to comprehend the role of label information in shaping representations for both known and novel classes. It's important to note that our theoretical approach aims to understand the perturbation in the clustering performance by labeling existing, previously unlabeled data points within the dataset. By contrasting the clustering performance before and after labeling these instances, we uncover the underlying structure and relations that the labels may reveal. This analysis provides invaluable insights into how labeling information can be effectively leveraged to enhance the representations of both known and novel classes. + +# 3 A Spectral Approach for Open-world Semi-Supervised Learning + +In this section, we formalize and tackle the open-world semi-supervised learning problem from a graph-theoretic view. Our fundamental idea is to formulate it as a clustering problem—where similar data points are grouped into the same cluster, by way of possibly utilizing helpful information from the labeled data $\mathcal{D}_l$ . This clustering process can be modeled by a graph, where the vertices are all the data points and classes form connected sub-graphs. Specifically, utilizing our graph formulation, we present the algorithm — Spectral Open-world Representation Learning (SORL) in Section 3.2. The process of minimizing the corresponding loss is fundamentally analogous to executing a spectral decomposition on the graph. + +# 3.1 A Graph-Theoretic Formulation + +We start by formally defining the augmentation graph and adjacency matrix. For clarity, we use $\bar{x}$ to indicate the natural sample (raw inputs without augmentation). Given an $\bar{x}$ , we use $\mathcal{T}(x|\bar{x})$ to denote the probability of $x$ being augmented from $\bar{x}$ . For instance, when $\bar{x}$ represents an image, $\mathcal{T}(\cdot |\bar{x})$ can be the distribution of common augmentations [11] such as Gaussian blur, color distortion, and random cropping. 
The augmentation allows us to define a general population space $\mathcal{X}$ , which contains all the original images along with their augmentations. In our case, $\mathcal{X}$ is composed of augmented samples from both labeled and unlabeled data, with cardinality $|\mathcal{X}| = N$ . We further denote $\mathcal{X}_l$ as the set of samples (along with augmentations) from the labeled data part. + +We define the graph $G(\mathcal{X}, w)$ with vertex set $\mathcal{X}$ and edge weights $w$ . To define edge weights $w$ , we decompose the graph connectivity into two components: (1) self-supervised connectivity $w^{(u)}$ by treating all points in $\mathcal{X}$ as entirely unlabeled, and (2) supervised connectivity $w^{(l)}$ by adding labeled information from $\mathcal{P}_l$ to the graph. We proceed to define these two cases separately. + +First, by assuming all points as unlabeled, two samples $(x,x^{+})$ are considered a positive pair if: + +Unlabeled Case (u): $x$ and $x^{+}$ are augmented from the same image $\bar{x} \sim \mathcal{P}$ . + +For any two augmented data $x, x' \in \mathcal{X}$ , $w_{xx'}^{(u)}$ denotes the marginal probability of generating the pair: + +$$ +w _ {x x ^ {\prime}} ^ {(u)} \triangleq \mathbb {E} _ {\bar {x} \sim \mathcal {P}} \mathcal {T} (x | \bar {x}) \mathcal {T} \left(x ^ {\prime} | \bar {x}\right), \tag {1} +$$ + +which can be viewed as self-supervised connectivity [11, 23]. However, different from self-supervised learning, we have access to the labeled information for a subset of nodes, which allows adding additional connectivity to the graph. Accordingly, the positive pair can be defined as: + +Labeled Case (l): $x$ and $x^{+}$ are augmented from two labeled samples $\bar{x}_l$ and $\bar{x}_l'$ with the same known class $i$ . In other words, both $\bar{x}_l$ and $\bar{x}_l'$ are drawn independently from $\mathcal{P}_{l_i}$ . 
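In a fully discrete setting, the self-supervised weight in Eq. (1) reduces to a single matrix product. The sketch below is purely illustrative (the names `T` and `p_source` are ours): `T[s, x]` stands in for $\mathcal{T}(x \mid \bar{x}_s)$ over finite sets of source images and augmented views, and `p_source` for the marginal $\mathcal{P}$.

```python
import numpy as np

def self_supervised_edges(T, p_source):
    """Discrete form of Eq. (1): w^(u)_{xx'} = sum_s p(s) T(x|s) T(x'|s),
    i.e. the probability that views x and x' are augmented from the same source."""
    # T: (num_sources, num_views) augmentation probabilities, each row sums to 1
    # p_source: (num_sources,) marginal probability of drawing each source image
    return T.T @ np.diag(p_source) @ T
```

Since each row of `T` sums to one, the entries of the returned matrix sum to one as well, matching the interpretation of $w^{(u)}$ as a joint distribution over positive pairs.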
Considering both case (u) and case (l), the overall edge weight for any pair of data $(x,x^{\prime})$ is given by:

$$
w_{xx^{\prime}} = \eta_{u} w_{xx^{\prime}}^{(u)} + \eta_{l} w_{xx^{\prime}}^{(l)}, \quad \text{where } w_{xx^{\prime}}^{(l)} \triangleq \sum_{i \in \mathcal{Y}_{l}} \mathbb{E}_{\bar{x}_{l} \sim \mathcal{P}_{l_{i}}} \mathbb{E}_{\bar{x}_{l}^{\prime} \sim \mathcal{P}_{l_{i}}} \mathcal{T}(x | \bar{x}_{l}) \mathcal{T}\left(x^{\prime} | \bar{x}_{l}^{\prime}\right), \tag{2}
$$

and $\eta_u, \eta_l$ modulate the importance between the two cases. The magnitude of $w_{xx'}$ indicates the "positiveness" or similarity between $x$ and $x'$ . We then use $w_x = \sum_{x' \in \mathcal{X}} w_{xx'}$ to denote the total edge weight connected to a vertex $x$ .

Remark: A graph perturbation view. With the graph connectivity defined above, we can now define the adjacency matrix $A \in \mathbb{R}^{N \times N}$ with entries $A_{xx'} = w_{xx'}$ . Importantly, the adjacency matrix can be decomposed into two parts:

$$
A = \eta_{u} A^{(u)} + \underbrace{\eta_{l} A^{(l)}}_{\text{perturbation by adding labels}}, \tag{3}
$$

which can be regarded as the self-supervised adjacency matrix $A^{(u)}$ perturbed by additional labeling information encoded in $A^{(l)}$ . This graph perturbation view serves as a critical foundation for our theoretical analysis of the clustering performance in Section 4. As a standard technique in graph theory [14], we use the normalized adjacency matrix of $G(\mathcal{X}, w)$ :

$$
\tilde{A} \triangleq D^{-\frac{1}{2}} A D^{-\frac{1}{2}}, \tag{4}
$$

where $D \in \mathbb{R}^{N \times N}$ is a diagonal matrix with $D_{xx} = w_x$ . The normalization balances the degree of each node, reducing the influence of vertices with very large degrees.
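Given the two connectivity matrices, Eqs. (3) and (4) amount to a weighted sum followed by symmetric normalization. A minimal sketch (the function name and inputs are our own illustration):

```python
import numpy as np

def normalized_adjacency(A_u, A_l, eta_u, eta_l):
    """Eq. (3): A = eta_u * A^(u) + eta_l * A^(l);
    Eq. (4): A~ = D^{-1/2} A D^{-1/2}, with D_xx = w_x (the row sums of A)."""
    A = eta_u * A_u + eta_l * A_l
    w = A.sum(axis=1)                            # total edge weight w_x per vertex
    d_inv_sqrt = 1.0 / np.sqrt(w)
    return A * np.outer(d_inv_sqrt, d_inv_sqrt)  # D^{-1/2} A D^{-1/2}
```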
The normalized adjacency matrix defines the probability of $x$ and $x'$ being considered as a positive pair from the perspective of augmentation, which helps derive the learning loss as we show next.

# 3.2 SORL: Spectral Open-World Representation Learning

We present an algorithm called Spectral Open-world Representation Learning (SORL), which can be derived from a spectral decomposition of $\tilde{A}$ . The algorithm has both practical and theoretical value. First, it enables efficient end-to-end training in the context of modern neural networks. More importantly, it allows drawing a theoretical equivalence between the learned representations and the top- $k$ singular vectors of $\tilde{A}$ . Such equivalence facilitates a theoretical understanding of the clustering structure encoded in $\tilde{A}$ . Specifically, we consider the low-rank matrix approximation:

$$
\min_{F \in \mathbb{R}^{N \times k}} \mathcal{L}_{\mathrm{mf}}(F, A) \triangleq \left\| \tilde{A} - F F^{\top} \right\|_{F}^{2} \tag{5}
$$

According to the Eckart-Young-Mirsky theorem [17], the minimizer of this loss function is $F_{k} \in \mathbb{R}^{N \times k}$ such that $F_{k}F_{k}^{\top}$ contains the top- $k$ components of the SVD of $\tilde{A}$ .

Now, if we view each row $\mathbf{f}_x^\top$ of $F$ as a scaled version of a learned feature embedding $f: \mathcal{X} \mapsto \mathbb{R}^k$ , then $\mathcal{L}_{\mathrm{mf}}(F, A)$ can be written as a form of contrastive learning objective. We formalize this connection in Theorem 3.1 below.

Theorem 3.1. We define $\mathbf{f}_x = \sqrt{w_x} f(x)$ for some function $f$ . Recall that $\eta_u, \eta_l$ are the coefficients defined in Eq. (2).
Then minimizing the loss function $\mathcal{L}_{\mathrm{mf}}(F, A)$ is equivalent to minimizing the following loss function for $f$ , which we term Spectral Open-world Representation Learning (SORL): + +$$ +\mathcal {L} _ {S O R L} (f) \triangleq - 2 \eta_ {l} \mathcal {L} _ {1} (f) - 2 \eta_ {u} \mathcal {L} _ {2} (f) + \eta_ {l} ^ {2} \mathcal {L} _ {3} (f) + 2 \eta_ {l} \eta_ {u} \mathcal {L} _ {4} (f) + \eta_ {u} ^ {2} \mathcal {L} _ {5} (f), \tag {6} +$$ + +where + +$$ +\mathcal{L}_{1}(f) = \sum_{i\in \mathcal{Y}_{l}}\underset { \begin{array}{c}\bar{x}_{l}\sim \mathcal{P}_{l_{i}},\bar{x}_{l}^{\prime}\sim \mathcal{P}_{l_{i}},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{l}),x^{+}\sim \mathcal{T}(\cdot |\bar{x}_{l}^{\prime}) \end{array} }{\mathbb{E}}\left[f(x)^{\top}f\left(x^{+}\right)\right],\\ \mathcal{L}_{2}(f) = \underset { \begin{array}{c} \bar{x}_{u}\sim \mathcal{P},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{u}),x^{+}\sim \mathcal{T}(\cdot |\bar{x}_{u}) \end{array} }{\mathbb{E}}\left[f(x)^{\top}f\left(x^{+}\right)\right], +$$ + +$$ +\mathcal{L}_{3}(f) = \sum_{i,j\in \mathcal{Y}_{l}}\underset { \begin{array}{c}\bar{x}_{l}\sim \mathcal{P}_{l_{i}},\bar{x}_{l}^{\prime}\sim \mathcal{P}_{l_{j}},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{l}),x^{-}\sim \mathcal{T}(\cdot |\bar{x}_{l}^{\prime}) \end{array} }{\mathbb{E}}\left[ \big(f(x)^{\top}f\left(x^{-}\right)\big)^{2}\right], +$$ + +$$ +\mathcal{L}_{4}(f) = \sum_{i\in \mathcal{Y}_{l}}\underset { \begin{array}{c}\bar{x}_{l}\sim \mathcal{P}_{l_{i}},\bar{x}_{u}\sim \mathcal{P},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{l}),x^{-}\sim \mathcal{T}(\cdot |\bar{x}_{u}) \end{array} }{\mathbb{E}}\left[ \big(f(x)^{\top}f\left(x^{-}\right)\big)^{2}\right],\\ \mathcal{L}_{5}(f) = \underset { \begin{array}{c}\bar{x}_{u}\sim \mathcal{P},\bar{x}_{u}^{\prime}\sim \mathcal{P},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{u}),x^{-}\sim \mathcal{T}(\cdot |\bar{x}_{u}^{\prime}) \end{array} }{\mathbb{E}}\left[ \big(f(x)^{\top}f\left(x^{-}\right)\big)^{2}\right]. 
+$$ + +Proof. (sketch) We can expand $\mathcal{L}_{\mathrm{mf}}(F,A)$ and obtain + +$$ +\mathcal {L} _ {\mathrm {m f}} (F, A) = \sum_ {x, x ^ {\prime} \in \mathcal {X}} \left(\frac {w _ {x x ^ {\prime}}}{\sqrt {w _ {x} w _ {x ^ {\prime}}}} - \mathbf {f} _ {x} ^ {\top} \mathbf {f} _ {x ^ {\prime}}\right) ^ {2} = c o n s t + \sum_ {x, x ^ {\prime} \in \mathcal {X}} \left(- 2 w _ {x x ^ {\prime}} f (x) ^ {\top} f (x ^ {\prime}) + w _ {x} w _ {x ^ {\prime}} (f (x) ^ {\top} f (x ^ {\prime})) ^ {2}\right) +$$ + +The form of $\mathcal{L}_{\mathrm{SORL}}(f)$ is derived from plugging $w_{xx'}$ (defined in Eq. (1)) and $w_x$ . Full proof is in Appendix A. + +Interpretation of $\mathcal{L}_{\mathrm{SORL}}(f)$ . At a high level, $\mathcal{L}_1$ and $\mathcal{L}_2$ push the embeddings of positive pairs to be closer while $\mathcal{L}_3$ , $\mathcal{L}_4$ and $\mathcal{L}_5$ pull away the embeddings of negative pairs. In particular, $\mathcal{L}_1$ samples two random augmentation views of two images from labeled data with the same class label, and $\mathcal{L}_2$ samples two views from the same image in $\mathcal{X}$ . For negative pairs, $\mathcal{L}_3$ uses two augmentation views from two samples in $\mathcal{X}_l$ with any class label. $\mathcal{L}_4$ uses two views of one sample in $\mathcal{X}_l$ and another one in $\mathcal{X}$ . $\mathcal{L}_5$ uses two views from two random samples in $\mathcal{X}$ . This training objective, though bearing similarities to NSCL [64], operates within a distinct problem domain. Accordingly, we derive novel theoretical analysis uniquely tailored to our problem setting, which we present next. + +# 4 Theoretical Analysis + +So far we have presented a spectral approach for open-world semi-supervised learning based on graph factorization. Under this framework, we now formally analyze: how does the labeling information shape the representations for known and novel classes? 
# 4.1 An Illustrative Example

We consider a toy example that helps illustrate the core idea of our theoretical findings. Specifically, the example aims to distinguish 3D objects with different shapes, as shown in Figure 2. These images are generated by a 3D rendering software [31] with user-defined properties including color, shape, size, position, etc. We are interested in contrasting the representations (in the form of singular vectors), when the label information is either incorporated in training or not.

Data design. Suppose the training samples come from three types, $\mathcal{X}_{\square}, \mathcal{X}_{\bigcirc}, \mathcal{X}_{\bigtriangledown}$ . Let $\mathcal{X}_{\square}$ be the sample space with known class, and $\mathcal{X}_{\bigcirc}, \mathcal{X}_{\bigtriangledown}$ be the sample spaces with novel classes. Further, the two novel classes are constructed to have different relationships with the known class. Specifically, $\mathcal{X}_{\bigcirc}$ shares some similarity with $\mathcal{X}_{\square}$ in color (red and blue); whereas another novel class $\mathcal{X}_{\bigtriangledown}$ has no obvious similarity with the known class. Without any labeling information, it can be difficult to distinguish $\mathcal{X}_{\bigcirc}$ from $\mathcal{X}_{\square}$ since samples share common colors. We aim to verify the hypothesis that adding labeling information to $\mathcal{X}_{\square}$ has a larger (beneficial) impact on clustering $\mathcal{X}_{\bigcirc}$ than $\mathcal{X}_{\bigtriangledown}$ .

![](images/32f2819c7ac0c6e80f65b335f1c73eedb6c62667fe44bc4afb0968cc6dc5b437.jpg)
Figure 2: An illustrative example for theoretical analysis. We consider a 6-node graph with one known class (cube) and two novel classes (sphere, cylinder). (a) The augmentation probabilities between nodes are defined by their color and shape in Eq. (7). (b) The adjacency matrix can then be calculated by the equations in Sec. 3.1, where we let $\tau_0 = 0$ , $\eta_u = 6$ , $\eta_l = 4$ . The calculation details are in Appendix B. The magnitude order follows $\tau_1 \gg \tau_c > \tau_s > 0$ .

Augmentation graph. Based on the data design, we formally define the augmentation graph, which encodes the probability of augmenting a source image $\bar{x}$ to the augmented view $x$ :

$$
\mathcal{T}(x \mid \bar{x}) = \left\{ \begin{array}{ll} \tau_{1} & \text{if } \operatorname{color}(x) = \operatorname{color}(\bar{x}), \operatorname{shape}(x) = \operatorname{shape}(\bar{x}); \\ \tau_{c} & \text{if } \operatorname{color}(x) = \operatorname{color}(\bar{x}), \operatorname{shape}(x) \neq \operatorname{shape}(\bar{x}); \\ \tau_{s} & \text{if } \operatorname{color}(x) \neq \operatorname{color}(\bar{x}), \operatorname{shape}(x) = \operatorname{shape}(\bar{x}); \\ \tau_{0} & \text{if } \operatorname{color}(x) \neq \operatorname{color}(\bar{x}), \operatorname{shape}(x) \neq \operatorname{shape}(\bar{x}). \end{array} \right. \tag{7}
$$

With Eq. (7) and the definition of the adjacency matrix in Section 3.1, we can derive the analytic form of $A^{(u)}$ and $A$ , as shown in Figure 2(b). We refer readers to Appendix B for the detailed derivation. The two matrices allow us to contrast the connectivity changes in the graph, before and after the labeling information is added.

Insights. We are primarily interested in analyzing the difference of the representation space derived from $A^{(u)}$ and $A$ . We visualize the top-3 eigenvectors of the normalized adjacency matrices $\tilde{A}^{(u)}$ and $\tilde{A}$ in Figure 3(a), where the results are based on the magnitude order $\tau_{1} \gg \tau_{c} > \tau_{s} > 0$ .
Our key takeaway is: adding labeling information to the known class $\mathcal{X}_{\square}$ helps better distinguish both the known class itself and the novel class $\mathcal{X}_{\bigcirc}$ , which has a stronger connection/similarity with $\mathcal{X}_{\square}$ .

Qualitative analysis. Our theoretical insight can also be verified empirically, by learning representations on over 10,000 samples using the loss defined in Section 3.2. Due to the space limitation, we include experimental details in Appendix E.1. In Figure 3(b), we visualize the learned features through UMAP [43]. Indeed, we observe that samples become more concentrated around different shape classes after adding labeling information to the cube class.

![](images/fbb4d8113439e3082f879d88b17226892e96df6c26c6562257227a8e14a84f4d.jpg)
Figure 3: Visualization of representation space for the toy example. (a) Theoretically contrasting the features formed by the top-3 eigenvectors of $\tilde{A}^{(u)}$ and $\tilde{A}$ , respectively. (b) UMAP visualization of the features learned without (left) and with labeled information (right). Details are in Appendix B (eigenvector calculation) and Appendix E.1 (visualization setting).

![](images/4f175cbe9769bacf6f3b174df3eac1afe0f574e12450a19fb0f8cad5165ce201.jpg)

# 4.2 Main Theory

The toy example offers an important insight that the added labeled information is more helpful for the class with a stronger connection to the known class. In this section, we formalize this insight by extending the toy example to a more general setting. As a roadmap, we derive the result through three steps: (1) derive the closed-form solution of the learned representations; (2) define the clustering performance by the K-means measure; (3) contrast the resulting clustering performance before and after adding labels. We start by deriving the representations.

# 4.2.1 Learned Representations in Analytic Form

Representation without labels.
To obtain the representations, one can train the neural network $f: \mathcal{X} \mapsto \mathbb{R}^k$ using the spectral loss defined in Equation 6. We assume that the optimizer is capable of obtaining the representation $Z^{(u)} \in \mathbb{R}^{N \times k}$ that minimizes the loss, where each row vector $\mathbf{z}_i = f(x_i)^\top$ . Recall that Theorem 3.1 allows us to derive a closed-form solution for the learned feature space by the spectral decomposition of the adjacency matrix, which is $\tilde{A}^{(u)}$ in the case without labeling information. Specifically, we have $F_k^{(u)} = \sqrt{D^{(u)}} Z^{(u)}$ , where $F_k^{(u)}F_k^{(u)\top}$ contains the top- $k$ components of the SVD of $\tilde{A}^{(u)}$ and $D^{(u)}$ is the diagonal matrix defined based on the row sums of $A^{(u)}$ . We further define the top- $k$ singular vectors of $\tilde{A}^{(u)}$ as $V_{k}^{(u)} \in \mathbb{R}^{N \times k}$ , so we have $F_{k}^{(u)} = V_{k}^{(u)} \sqrt{\Sigma_{k}^{(u)}}$ , where $\Sigma_{k}^{(u)}$ is a diagonal matrix of the top- $k$ singular values of $\tilde{A}^{(u)}$ . By equating the two forms of $F_{k}^{(u)}$ , the closed-form solution of the learned feature space is given by $Z^{(u)} = [D^{(u)}]^{-\frac{1}{2}} V_{k}^{(u)} \sqrt{\Sigma_{k}^{(u)}}$ .

Representation perturbation by adding labels. We now analyze how the representation is "perturbed" as a result of adding label information. We consider $|\mathcal{Y}_l| = 1$ to facilitate a better understanding of our key insight. We can rewrite $A$ in Eq. 3 as:

$$
A(\delta) \triangleq \eta_{u} A^{(u)} + \delta \mathfrak{l}\mathfrak{l}^{\top},
$$

where we replace $\eta_{l}$ with $\delta$ to make the perturbation more apparent, and define $\mathfrak{l} \in \mathbb{R}^{N}$ , $(\mathfrak{l})_x = \mathbb{E}_{\bar{x}_l \sim \mathcal{P}_{l_1}} \mathcal{T}(x|\bar{x}_l)$ . Note that $\mathfrak{l}$ can be interpreted as the vector of "the semantic connection of sample $x$ to the labeled data".
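As an illustrative sketch (the function name is ours), the closed form $Z^{(u)} = [D^{(u)}]^{-\frac{1}{2}} V_k^{(u)} \sqrt{\Sigma_k^{(u)}}$ can be computed directly from an adjacency matrix; here we assume a symmetric PSD input so that singular vectors coincide with eigenvectors:

```python
import numpy as np

def closed_form_embedding(A, k):
    """Closed-form features Z = D^{-1/2} V_k sqrt(Sigma_k), where (V_k, Sigma_k)
    are the top-k singular pairs of A~ = D^{-1/2} A D^{-1/2}."""
    w = A.sum(axis=1)                        # row sums give the degrees w_x
    d_inv_sqrt = 1.0 / np.sqrt(w)
    A_tilde = A * np.outer(d_inv_sqrt, d_inv_sqrt)
    U, s, _ = np.linalg.svd(A_tilde)         # symmetric PSD: U holds the singular vectors
    return d_inv_sqrt[:, None] * (U[:, :k] * np.sqrt(s[:k]))
```

Multiplying back by $\sqrt{D}$ recovers $F_k = V_k \sqrt{\Sigma_k}$, so $F_k F_k^\top$ is the rank-$k$ approximation of $\tilde{A}$, as in Section 3.2.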
One can easily extend to $r$ classes by letting $\mathfrak{l} \in \mathbb{R}^{N \times r}$ .

Here we treat the adjacency matrix as a function of the perturbation. In a similar manner as above, we can derive the normalized adjacency matrix $\tilde{A}(\delta)$ and the feature representation $Z(\delta)$ in closed form. The details are included in Appendix C.4.

# 4.2.2 Evaluation Target

With the learned representations, we can evaluate their quality by the clustering performance. Our theoretical analysis of the clustering performance connects well to the empirical evaluation strategy in the literature [75] using $K$ -means clustering accuracy/error. Formally, we define the ground-truth partition of clusters by $\Pi = \{\pi_1,\pi_2,\dots,\pi_C\}$ , where $\pi_{i}$ is the set of samples' indices with underlying label $y_{i}$ and $C$ is the total number of classes (including both known and novel). We further let $\pmb{\mu}_{\pi} = \mathbb{E}_{i\in \pi}\mathbf{z}_i$ be the center of features in $\pi$ , and the average of all feature vectors be $\pmb{\mu}_{\Pi} = \mathbb{E}_{j\in [N]}\mathbf{z}_j$ .

The clustering performance of K-means depends on two measurements: an intra-class measure and an inter-class measure. Specifically, we let the intra-class measure be the sum of squared Euclidean distances from the samples' features to their corresponding cluster center, and we measure the inter-class separation via the (size-weighted) squared distances between the cluster centers and the global center:

$$
\mathcal{M}_{\text{intra-class}}(\Pi, Z) \triangleq \sum_{\pi \in \Pi} \sum_{i \in \pi} \|\mathbf{z}_{i} - \boldsymbol{\mu}_{\pi}\|^{2}, \quad \mathcal{M}_{\text{inter-class}}(\Pi, Z) \triangleq \sum_{\pi \in \Pi} |\pi| \|\boldsymbol{\mu}_{\pi} - \boldsymbol{\mu}_{\Pi}\|^{2}. \tag{8}
$$

Strong clustering results translate into low $\mathcal{M}_{\text{intra-class}}$ and high $\mathcal{M}_{\text{inter-class}}$ .
Thus we define the K-means measure as:

$$
\mathcal{M}_{kms}(\Pi, Z) \triangleq \mathcal{M}_{\text{intra-class}}(\Pi, Z) / \mathcal{M}_{\text{inter-class}}(\Pi, Z). \tag{9}
$$

We also formally show in Theorem 4.1 that the K-means clustering error is asymptotically equivalent to the K-means measure defined above.

Theorem 4.1. (Relationship between the $K$ -means measure and the $K$ -means error.) We define $\xi_{\pi \to \pi'}$ as the index set of samples that are from class division $\pi$ but are closer to $\pmb{\mu}_{\pi'}$ than to $\pmb{\mu}_{\pi}$ . In other words, $\xi_{\pi \to \pi'} = \{i : i \in \pi, \| \mathbf{z}_i - \pmb{\mu}_{\pi} \|_2 \geq \| \mathbf{z}_i - \pmb{\mu}_{\pi'} \|_2\}$ . Assuming $|\xi_{\pi \to \pi'}| > 0$ , we define below the clustering error ratio from $\pi$ to $\pi'$ as $\mathcal{E}_{\pi \to \pi'}$ , and the overall cluster error ratio $\mathcal{E}_{\Pi, Z}$ as the harmonic mean of $\mathcal{E}_{\pi \to \pi'}$ over all class pairs:

$$
\mathcal{E}_{\Pi, Z} = C(C - 1) \Big/ \left(\sum_{\substack{\pi \neq \pi^{\prime}\\ \pi, \pi^{\prime} \in \Pi}} \frac{1}{\mathcal{E}_{\pi \to \pi^{\prime}}}\right), \quad \text{where } \mathcal{E}_{\pi \to \pi^{\prime}} = \frac{|\xi_{\pi \to \pi^{\prime}}|}{|\pi^{\prime}| + |\pi|}.
$$

The $K$ -means measure $\mathcal{M}_{kms}(\Pi, Z)$ has the same order as the harmonic mean of the cluster error ratios between all cluster pairs (proof in Appendix C.3):

$$
\mathcal{E}_{\Pi, Z} = O\left(\mathcal{M}_{kms}(\Pi, Z)\right).
$$

The K-means measure $\mathcal{M}_{kms}(\Pi, Z)$ also has a convenient matrix form, shown in Appendix C.2, which facilitates theoretical analysis. Our analysis revolves around contrasting the resulting clustering performance before and after adding labels, as we show next.
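Concretely, Eqs. (8) and (9) translate into a few lines of code. The sketch below is our own illustration, assuming the rows of `Z` are the embeddings $\mathbf{z}_i$ and `labels` encodes the ground-truth partition $\Pi$:

```python
import numpy as np

def kmeans_measure(Z, labels):
    """K-means measure (Eq. 9): intra-class spread (Eq. 8, left term)
    divided by inter-class separation (Eq. 8, right term)."""
    mu_all = Z.mean(axis=0)                  # global center mu_Pi
    intra, inter = 0.0, 0.0
    for c in np.unique(labels):
        Zc = Z[labels == c]                  # features of cluster pi_c
        mu_c = Zc.mean(axis=0)               # cluster center mu_pi
        intra += ((Zc - mu_c) ** 2).sum()
        inter += len(Zc) * ((mu_c - mu_all) ** 2).sum()
    return intra / inter
```

Lower values indicate tighter, better-separated clusters, matching the intuition behind the measure.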
+ +# 4.2.3 Perturbation in Clustering Performance + +With the evaluation target defined above, our main analysis will revolve around analyzing "how the extra label information help reduces $\mathcal{M}_{kms}(\Pi, Z)$ ". Formally, we investigate the following error difference, as a result of added label information: + +$$ +\Delta_ {k m s} (\delta) = \mathcal {M} _ {k m s} (\Pi , Z) - \mathcal {M} _ {k m s} (\Pi , Z (\delta)), +$$ + +where the closed-form solution is given by the following theorem. Positive $\Delta_{kms}(\delta)$ means improved clustering, as a result of adding labeling information. + +Theorem 4.2. (Main result.) Denote $V_{\emptyset}^{(u)} \in \mathbb{R}^{N \times (N - k)}$ as the null space of $V_{k}^{(u)}$ and $\tilde{A}_{k}^{(u)} = V_{k}^{(u)}\Sigma_{k}^{(u)}V_{k}^{(u)\top}$ as the rank- $k$ approximation for $\tilde{A}^{(u)}$ . Given $\delta, \eta_1 > 0$ and let $\mathcal{G}_k$ as the spectral gap between $k$ -th and $k + 1$ -th singular values of $\tilde{A}^{(u)}$ , we have: + +$$ +\Delta_ {k m s} (\delta) = \delta \eta_ {1} \mathrm {T r} \left(\Upsilon \left(V _ {k} ^ {(u)} V _ {k} ^ {(u) \top} \mathfrak {U} ^ {\top} (I + V _ {\emptyset} ^ {(u)} V _ {\emptyset} ^ {(u) \top}) - 2 \tilde {A} _ {k} ^ {(u)} d i a g (\mathfrak {l})\right)\right) + O (\frac {1}{\mathcal {G} _ {k}} + \delta^ {2}), +$$ + +where $\text{diag}(\cdot)$ converts the vector to the corresponding diagonal matrix and $\Upsilon \in \mathbb{R}^{N \times N}$ is a matrix encoding the ground-truth clustering structure in the way that $\Upsilon_{xx'} > 0$ if $x$ and $x'$ has the same label and $\Upsilon_{xx'} < 0$ otherwise. The concrete form and the proof are in Appendix C.4. + +Theorem 4.2 is more general but less intuitive to understand. To gain a better insight, we introduce Theorem 4.3 which provides more direct implications. We provide the justification of the assumptions and the formal proof in Appendix C.5. + +Theorem 4.3. (Intuitive result.) 
Assume that the spectral gap $\mathcal{G}_k$ is sufficiently large and that $\mathfrak{l}$ lies in the linear span of $V_k^{(u)}$ . We also assume $\forall \pi_c \in \Pi, \forall i \in \pi_c, \mathfrak{l}_{(i)} =: \mathfrak{l}_{\pi_c}$ , which represents the connection of class $c$ to the labeled data. Given $\delta, \eta_1, \eta_2 > 0$ , we have:

$$
\Delta_{kms}(\delta) \geq \delta \eta_{1} \eta_{2} \sum_{\pi_{c} \in \Pi} |\pi_{c}| \mathfrak{l}_{\pi_{c}} \Delta_{\pi_{c}}(\delta),
$$

where

$$
\Delta_{\pi_{c}}(\delta) = \underbrace{\left(\mathfrak{l}_{\pi_{c}} - \frac{1}{N}\right)}_{\text{connection from class } c \text{ to the labeled data}} - 2\left(1 - \frac{|\pi_{c}|}{N}\right)\Big(\underbrace{\mathbb{E}_{i \in \pi_{c}} \mathbb{E}_{j \in \pi_{c}} \mathbf{z}_{i}^{\top} \mathbf{z}_{j}}_{\text{intra-class similarity}} - \underbrace{\mathbb{E}_{i \in \pi_{c}} \mathbb{E}_{j \notin \pi_{c}} \mathbf{z}_{i}^{\top} \mathbf{z}_{j}}_{\text{inter-class similarity}}\Big).
$$

Implications. In Theorem 4.3, we define the class-wise perturbation of the K-means measure as $\Delta_{\pi_c}(\delta)$ . This way, we can interpret the effect of adding labels for a specific class $c$ . If we desire $\Delta_{\pi_c}(\delta)$ to be large, the sufficient condition is that

connection of class $c$ to the labeled data $>$ intra-class similarity $-$ inter-class similarity.

We use the examples in Figure 1 to epitomize the core idea. Specifically, our unlabeled samples consist of three underlying classes: traffic lights (known), apples (novel), and flowers (novel).
(a) For unlabeled traffic lights from known classes, which are strongly connected to the labeled data, adding labels to traffic lights can largely improve the clustering performance; (b) For novel classes like apples, it may also help when they have a strong connection to the traffic lights and their intra-class similarity is not as strong (due to different colors); (c) However, labeled data may offer little improvement in clustering the flower class, due to the minimal connection to the labeled data and the fact that the flowers' self-clusterability is already strong.

# 5 Empirical Validation of Theory

Beyond theoretical insights, we show empirically that SORL is effective on the standard benchmark image classification datasets CIFAR-10/100 [35]. Following the seminal work ORCA [7], the classes are divided into $50\%$ known and $50\%$ novel classes. We then use $50\%$ of the samples from the known classes as the labeled dataset, and the rest as the unlabeled set. We follow the evaluation strategy in [7] and report the following metrics: (1) classification accuracy on known classes, (2) clustering accuracy on the novel data, and (3) overall accuracy on all classes. More experiment details are in Appendix E.2.

Table 1: Main Results. Mean and std are estimated on five different runs. Baseline numbers are from [7, 63].
| Method | CIFAR-10 All | CIFAR-10 Novel | CIFAR-10 Known | CIFAR-100 All | CIFAR-100 Novel | CIFAR-100 Known |
| --- | --- | --- | --- | --- | --- | --- |
| FixMatch [37] | 49.5 | 50.4 | 71.5 | 20.3 | 23.5 | 39.6 |
| $\mathbf{DS^{3}L}$ [21] | 40.2 | 45.3 | 77.6 | 24.0 | 23.7 | 55.1 |
| CGDL [62] | 39.7 | 44.6 | 72.3 | 23.6 | 22.5 | 49.3 |
| DTC [22] | 38.3 | 39.5 | 53.9 | 18.3 | 22.9 | 31.3 |
| RankStats [82] | 82.9 | 81.0 | 86.6 | 23.1 | 28.4 | 36.4 |
| SimCLR [11] | 51.7 | 63.4 | 58.3 | 22.3 | 21.2 | 28.6 |
| ORCA [7] | $88.3^{\pm 0.3}$ | $87.5^{\pm 0.2}$ | $89.9^{\pm 0.4}$ | $47.2^{\pm 0.7}$ | $41.0^{\pm 1.0}$ | $66.7^{\pm 0.2}$ |
| GCD [69] | $87.5^{\pm 0.5}$ | $86.7^{\pm 0.4}$ | $90.1^{\pm 0.3}$ | $46.8^{\pm 0.5}$ | $43.4^{\pm 0.7}$ | $69.7^{\pm 0.4}$ |
| OpenCon [63] | $90.4^{\pm 0.6}$ | $91.1^{\pm 0.1}$ | $89.3^{\pm 0.2}$ | $52.7^{\pm 0.6}$ | $47.8^{\pm 0.6}$ | $69.1^{\pm 0.3}$ |
| SORL (Ours) | $93.5^{\pm 1.0}$ | $92.5^{\pm 0.1}$ | $94.0^{\pm 0.2}$ | $56.1^{\pm 0.3}$ | $52.0^{\pm 0.2}$ | $68.2^{\pm 0.1}$ |
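The clustering accuracy on novel classes reported in Table 1 is commonly computed by finding the best one-to-one matching between predicted cluster indices and ground-truth labels, typically via the Hungarian algorithm [36]. Below is a minimal brute-force sketch of the metric (our own illustration, feasible only for a small number of classes; production code would use an assignment solver such as `scipy.optimize.linear_sum_assignment`):

```python
from itertools import permutations

def cluster_accuracy(y_true, y_pred):
    """Best accuracy over all one-to-one relabelings of predicted
    cluster ids (exhaustive stand-in for the Hungarian algorithm)."""
    k = max(max(y_true), max(y_pred)) + 1
    best = 0
    for perm in permutations(range(k)):
        hits = sum(1 for t, p in zip(y_true, y_pred) if perm[p] == t)
        best = max(best, hits)
    return best / len(y_true)
```

For example, a prediction that consistently swaps two cluster ids still scores perfectly, since the metric is invariant to how clusters are named.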
SORL achieves competitive performance. Our proposed loss SORL is amenable to theoretical analysis, which is the primary goal of this work. Beyond theory, we show that SORL is equally strong in empirical performance. In particular, SORL displays competitive performance compared to existing methods, as evidenced in Table 1. Our comparison covers an extensive collection of very recent algorithms developed for this problem, including ORCA [7], GCD [69], and OpenCon [63]. We also compare methods in related problem domains: (1) semi-supervised learning [21, 37, 62], (2) novel class discovery [22, 82], and (3) the common representation learning method SimCLR [11]. In particular, on CIFAR-100, we improve upon the best baseline OpenCon by $3.4\%$ in terms of overall accuracy. Our result further validates that analyzing SORL is appealing for both theoretical and empirical reasons.

# 6 Broader Impact

From a theoretical perspective, our graph-theoretic framework can facilitate and deepen the understanding of other representation learning methods that commonly involve the notion of positive/negative pairs. In Appendix D, we exemplify how our framework can potentially be generalized to other common contrastive loss functions [11, 34, 68] and to baseline methods tailored for the open-world semi-supervised learning problem (e.g., GCD [69], OpenCon [63]). Hence, we believe our theoretical framework has broader utility and significance.

From a practical perspective, our work can directly impact and benefit many real-world applications, where unlabeled data are produced at an incredible rate today. Major companies exhibit a strong need to make their machine learning systems and services amenable to the open-world setting but lack fundamental and systematic knowledge. Hence, our research advances the understanding of open-world machine learning and helps the industry improve ML systems by discovering insights and structures from unlabeled data.
# 7 Related Work

Semi-supervised learning. Semi-supervised learning (SSL) is a classic problem in machine learning. SSL typically assumes the same class space between labeled and unlabeled data, and hence remains closed-world. A rich line of empirical works [9, 13, 21, 29, 37, 38, 39, 42, 48, 50, 53, 54, 74, 76, 78] and theoretical efforts [3, 46, 47, 51, 60, 61, 73] have been made to address this problem. An important class of SSL methods represents data as graphs and predicts labels by aggregating the labels of proximal nodes [1, 18, 30, 71, 80, 84, 85]. Different from classic SSL, we allow the semantic space of the unlabeled data to cover both known and novel classes. Accordingly, we contribute a graph-theoretic framework tailored to the open-world setting and reveal new insights on how the labeled data can benefit the clustering performance on both known and novel classes.

Open-world semi-supervised learning. The learning setting that considers both labeled and unlabeled data with a mixture of known and novel classes was first proposed in [7] and has inspired a proliferation of follow-up works [49, 52, 63, 69, 81] advancing empirical success. Most works put emphasis on learning high-quality embeddings [49, 63, 69, 81]. In particular, Sun and Li [63] employ contrastive learning with both supervised and self-supervised signals, which aligns with our theoretical setup in Sec. 3.1. Different from prior works, our paper focuses on advancing theoretical understanding. To the best of our knowledge, we are the first to theoretically investigate the problem from a graph-theoretic perspective and provide a rigorous error bound.

Spectral graph theory. Spectral graph theory is a classic research area [10, 14, 33, 40, 44, 70] that aims to partition a graph by studying the eigenspace of its adjacency matrix. Spectral graph theory is also widely applied in machine learning [1, 6, 45, 56, 58, 64, 86]. Recently, HaoChen et al.
[23] derive a spectral contrastive loss from the factorization of the graph's adjacency matrix, which facilitates theoretical study in unsupervised domain adaptation [24, 57]. In these works, the graph's formulation is exclusively based on unlabeled data. Sun et al. [64] later expand this spectral contrastive loss approach to learning environments that encompass both labeled data from known classes and unlabeled data from novel ones. In this paper, our adaptation of the loss function from [64] is tailored to the open-world semi-supervised learning challenge, considering known-class samples within the unlabeled data.

Theory for self-supervised learning. A proliferation of works in self-supervised representation learning demonstrates empirical success [5, 8, 11, 12, 23, 26, 68, 77], with theoretical foundations providing provable guarantees on the representations learned by contrastive learning for linear probing [4, 41, 55, 59, 66, 67]. From the graph view, HaoChen et al. [23, 24] and Shen et al. [57] model the pairwise relation by the augmentation probability and provide error analyses of the downstream tasks. The existing body of work has mostly focused on unsupervised learning. In this paper, we systematically investigate how label information can change the representation manifold and affect the downstream clustering performance on both known and novel classes.

# 8 Conclusion

In this paper, we present a graph-theoretic framework for open-world semi-supervised learning. The framework facilitates the understanding of how representations change as a result of adding labeling information to the graph. Specifically, we learn representations through Spectral Open-world Representation Learning (SORL). Minimizing this objective is equivalent to factorizing the graph's adjacency matrix, which allows us to analyze the clustering error difference between having vs. excluding labeled data.
Our main results suggest that the clustering error can be significantly reduced if the connectivity to the labeled data is stronger than the classes' self-clusterability. Our framework is also empirically appealing, since it achieves competitive performance on par with existing baselines. Nevertheless, we acknowledge two limitations to the practical application of our theoretical construct:

- The augmentation graph serves as a potent theoretical tool for elucidating the success of modern representation learning methods. However, it is challenging to ensure that current augmentation strategies, such as cropping and color jittering, can transform two dissimilar images into identical ones.
- The utilization of Theorems 4.1 and 4.2 necessitates explicit knowledge of the adjacency matrix of the augmentation graph, a requirement that can be intractable in practice.

In light of these limitations, we encourage further research to enhance the practicality of these theoretical findings. We also hope our framework and insights can inspire the broader representation learning community to understand the role of the labeling prior.

# Acknowledgement

Research is supported by the AFOSR Young Investigator Program under award number FA9550-23-1-0184, National Science Foundation (NSF) Award No. IIS-2237037 & IIS-2331669, Office of Naval Research under grant number N00014-23-1-2643, and faculty research awards/gifts from Google and Meta. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of the sponsors. The authors would also like to thank Tengyu Ma, Xuefeng Du, and Yifei Ming for their helpful suggestions and feedback.

# References

[1] Andreas Argyriou, Mark Herbster, and Massimiliano Pontil. Combining graph laplacians for semi-supervised learning. Advances in Neural Information Processing Systems, 18, 2005.
[2] Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert D Nowak, and Yixuan Li. Feed two birds with one scone: Exploiting wild data for both out-of-distribution generalization and detection. In International Conference on Machine Learning, pages 1454-1471. PMLR, 2023.
[3] Maria-Florina Balcan and Avrim Blum. A pac-style model for learning from labeled and unlabeled data. In *Colt*, pages 111–126. Springer, 2005.
[4] Randall Balestriero and Yann LeCun. Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods. Advances in Neural Information Processing Systems, 2022.
[5] Adrien Bardes, Jean Ponce, and Yann Lecun. Vicreg: Variance-invariance-covariance regularization for self-supervised learning. In ICLR 2022-10th International Conference on Learning Representations, 2022.
[6] Avrim Blum. Learning from labeled and unlabeled data using graph mincuts. In Proc. 18th International Conference on Machine Learning, 2001.
[7] Kaidi Cao, Maria Brbic, and Jure Leskovec. Open-world semi-supervised learning. In Proceedings of the International Conference on Learning Representations, 2022.
[8] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Proceedings of Advances in Neural Information Processing Systems, 2020.
[9] Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien, editors. Semi-Supervised Learning. The MIT Press, 2006.
[10] Jeff Cheeger. A lower bound for the smallest eigenvalue of the laplacian. In *Problems in analysis*, pages 195-200. Princeton University Press, 2015.
[11] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the international conference on machine learning, pages 1597-1607. PMLR, 2020.
[12] Xinlei Chen and Kaiming He.
Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15750–15758, 2021. +[13] Yanbei Chen, Xiatian Zhu, Wei Li, and Shaogang Gong. Semi-supervised learning under class distribution mismatch. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 3569-3576, 2020. +[14] Fan RK Chung. Spectral graph theory, volume 92. American Mathematical Soc., 1997. +[15] Xuefeng Du, Gabriel Gozum, Yifei Ming, and Yixuan Li. Siren: Shaping representations for detecting out-of-distribution objects. Advances in Neural Information Processing Systems, 35: 20434-20449, 2022. +[16] Xuefeng Du, Yiyou Sun, Xiaojin Zhu, and Yixuan Li. Dream the impossible: Outlier imagination with diffusion models. Advances in Neural Information Processing Systems, 2023. +[17] Carl Eckart and Gale Young. The approximation of one matrix by another of lower rank. Psychometrika, 1(3):211-218, 1936. +[18] Rob Fergus, Yair Weiss, and Antonio Torralba. Semi-supervised learning in gigantic image collections. Advances in neural information processing systems, 22, 2009. + +[19] Enrico Fini, Enver Sangineto, Stéphane Lathuilière, Zhun Zhong, Moin Nabi, and Elisa Ricci. A unified objective for novel class discovery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9284-9292, 2021. +[20] Anne Greenbaum, Ren-cang Li, and Michael L Overton. First-order perturbation theory for eigenvalues and eigenvectors. SIAM review, 62(2):463-482, 2020. +[21] Lan-Zhe Guo, Zhen-Yu Zhang, Yuan Jiang, Yu-Feng Li, and Zhi-Hua Zhou. Safe deep semi-supervised learning for unseen-class unlabeled data. In Proceedings of the international conference on machine learning, volume 119 of Proceedings of Machine Learning Research, pages 3897-3906. PMLR, 13-18 Jul 2020. +[22] Kai Han, Andrea Vedaldi, and Andrew Zisserman. Learning to discover novel visual categories via deep transfer clustering. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.
[23] Jeff Z HaoChen, Colin Wei, Adrien Gaidon, and Tengyu Ma. Provable guarantees for self-supervised deep learning with spectral contrastive loss. Advances in Neural Information Processing Systems, 34:5000-5011, 2021.
[24] Jeff Z HaoChen, Colin Wei, Ananya Kumar, and Tengyu Ma. Beyond separability: Analyzing the linear transferability of contrastive representations to related subpopulations. Advances in Neural Information Processing Systems, 2022.
[25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
[26] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9729-9738, 2020.
[27] Yen-Chang Hsu, Zhaoyang Lv, and Zsolt Kira. Learning to cluster in order to transfer across domains and tasks. Proceedings of the International Conference on Learning Representations, 2018.
[28] Yen-Chang Hsu, Zhaoyang Lv, Joel Schlosser, Phillip Odom, and Zsolt Kira. Multi-class classification without multi-class labels. Proceedings of the International Conference on Learning Representations, 2019.
[29] Junkai Huang, Chaowei Fang, Weikai Chen, Zhenhua Chai, Xiaolin Wei, Pengxu Wei, Liang Lin, and Guanbin Li. Trash to treasure: Harvesting OOD data with cross-modal matching for open-set semi-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8310-8319, 2021.
[30] Tony Jebara, Jun Wang, and Shih-Fu Chang. Graph construction and b-matching for semi-supervised learning. In Proceedings of the 26th annual international conference on machine learning, pages 441-448, 2009.
+[31] Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In CVPR, 2017. +[32] Antony Joseph and Bin Yu. Impact of regularization on spectral clustering. The Annals of Statistics, 44(4):1765-1791, 2016. +[33] Ravi Kannan, Santosh Vempala, and Adrian Vetta. On clusterings: Good, bad and spectral. Journal of the ACM (JACM), 51(3):497-515, 2004. +[34] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. Advances in Neural Information Processing Systems, 33:18661-18673, 2020. + +[35] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. +[36] Harold W Kuhn. The hungarian method for the assignment problem. Naval research logistics quarterly, 2(1-2):83-97, 1955. +[37] Alex Kurakin, Chun-Liang Li, Colin Raffel, David Berthelot, Ekin Dogus Cubuk, Han Zhang, Kihyuk Sohn, Nicholas Carlini, and Zizhao Zhang. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In Proceedings of Advances in Neural Information Processing Systems, 2020. +[38] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. Proceedings of the International Conference on Learning Representations, 2017. +[39] Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, page 896, 2013. +[40] James R Lee, Shayan Oveis Gharan, and Luca Trevisan. Multiway spectral partitioning and higher-order cheeger inequalities. Journal of the ACM (JACM), 61(6):1-30, 2014. +[41] Jason D Lee, Qi Lei, Nikunj Saunshi, and Jiacheng Zhuo. Predicting what you already know helps: Provable self-supervised learning. 
Advances in Neural Information Processing Systems, 34:309-323, 2021. +[42] Wei Liu, Junfeng He, and Shih-Fu Chang. Large graph construction for scalable semi-supervised learning. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 679–686. CiteSeer, 2010. +[43] Leland McInnes, John Healy, Nathaniel Saul, and Lukas Grossberger. Umap: Uniform manifold approximation and projection. The Journal of Open Source Software, 3(29):861, 2018. +[44] Frank McSherry. Spectral partitioning of random graphs. In Proceedings 42nd IEEE Symposium on Foundations of Computer Science, pages 529-537. IEEE, 2001. +[45] Andrew Ng, Michael Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. Advances in neural information processing systems, 14, 2001. +[46] Partha Niyogi. Manifold regularization and semi-supervised learning: Some theoretical analyses. Journal of Machine Learning Research, 14(5), 2013. +[47] Samet Oymak and Talha Cihad Gulcu. A theoretical characterization of semi-supervised learning with self-training for gaussian mixture models. In International Conference on Artificial Intelligence and Statistics, pages 3601-3609. PMLR, 2021. +[48] Jongjin Park, Sukmin Yun, Jongheon Jeong, and Jinwoo Shin. Opencos: Contrastive semi-supervised learning for handling open-set unlabeled data. In Computer Vision-ECCV 2022 Workshops: Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part II, pages 134-149. Springer, 2023. +[49] Nan Pu, Zhun Zhong, and Nicu Sebe. Dynamic conceptual contrastive learning for generalized category discovery. 2023. +[50] Sylvestre-Alvise Rebuffi, Sebastien Ehrhardt, Kai Han, Andrea Vedaldi, and Andrew Zisserman. Semi-supervised learning with scarce annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 762-763, 2020. +[51] Philippe Rigollet. Generalization error bounds in semi-supervised classification under the cluster assumption. 
Journal of Machine Learning Research, 8(7), 2007.
[52] Mamshad Nayeem Rizve, Navid Kardan, Salman Khan, Fahad Shahbaz Khan, and Mubarak Shah. OpenLDN: Learning to discover novel classes for open-world semi-supervised learning. Proceedings of the European Conference on Computer Vision, 2022.
[53] Kuniaki Saito, Donghyun Kim, and Kate Saenko. Openmatch: Open-set consistency regularization for semi-supervised learning with outliers. Proceedings of Advances in Neural Information Processing Systems, 2021.
[54] Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. Proceedings of Advances in Neural Information Processing Systems, 29, 2016.
[55] Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, and Hrishikesh Khandeparkar. A theoretical analysis of contrastive unsupervised representation learning. In International Conference on Machine Learning, pages 5628-5637. PMLR, 2019.
[56] Uri Shaham, Kelly Stanton, Henry Li, Boaz Nadler, Ronen Basri, and Yuval Kluger. Spectralnet: Spectral clustering using deep neural networks. In 6th International Conference on Learning Representations, ICLR 2018, 2018.
[57] Kendrick Shen, Robbie M Jones, Ananya Kumar, Sang Michael Xie, Jeff Z HaoChen, Tengyu Ma, and Percy Liang. Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation. In International Conference on Machine Learning, pages 19847-19878. PMLR, 2022.
[58] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on pattern analysis and machine intelligence, 22(8):888-905, 2000.
[59] Zhenmei Shi, Jiefeng Chen, Kunyang Li, Jayaram Raghuram, Xi Wu, Yingyu Liang, and Somesh Jha. The trade-off between universality and label efficiency of representations from contrastive learning. In *The Eleventh International Conference on Learning Representations*, 2023.
[60] Aarti Singh, Robert Nowak, and Jerry Zhu. Unlabeled data: Now it helps, now it doesn't. Advances in neural information processing systems, 21, 2008.
[61] Nataliya Sokolovska, Olivier Cappé, and François Yvon. The asymptotics of semi-supervised learning in discriminative probabilistic models. In Proceedings of the 25th international conference on Machine learning, pages 984-991, 2008.
[62] Xin Sun, Zhenning Yang, Chi Zhang, Keck-Voon Ling, and Guohao Peng. Conditional gaussian distribution learning for open set recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 13480-13489, 2020.
[63] Yiyou Sun and Yixuan Li. Opencon: Open-world contrastive learning. Transactions on Machine Learning Research, 2023.
[64] Yiyou Sun, Zhenmei Shi, Yingyu Liang, and Yixuan Li. When and how does known class help discover unknown ones? Provable understanding through spectral analysis. In Proceedings of the 40th International Conference on Machine Learning, Proceedings of Machine Learning Research, pages 33014-33043. PMLR, 2023.
[65] Leitian Tao, Xuefeng Du, Xiaojin Zhu, and Yixuan Li. Non-parametric outlier synthesis. International Conference on Learning Representations, 2023.
[66] Christopher Tosh, Akshay Krishnamurthy, and Daniel Hsu. Contrastive estimation reveals topic posterior information to linear models. J. Mach. Learn. Res., 22:281-1, 2021.
[67] Christopher Tosh, Akshay Krishnamurthy, and Daniel Hsu. Contrastive learning, multi-view redundancy, and linear models. In Algorithmic Learning Theory, pages 1179-1206. PMLR, 2021.
[68] Aaron Van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv e-prints, pages arXiv-1807, 2018.
[69] Sagar Vaze, Kai Han, Andrea Vedaldi, and Andrew Zisserman. Generalized category discovery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022.
[70] Ulrike Von Luxburg. A tutorial on spectral clustering.
Statistics and computing, 17(4):395-416, 2007.
[71] Fei Wang and Changshui Zhang. Label propagation through linear neighborhoods. In Proceedings of the 23rd international conference on Machine learning, pages 985-992, 2006.
[72] Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929-9939. PMLR, 2020.
[73] Larry Wasserman and John Lafferty. Statistical analysis of semi-supervised regression. Advances in Neural Information Processing Systems, 20, 2007.
[74] Fan Yang, Kai Wu, Shuyi Zhang, Guannan Jiang, Yong Liu, Feng Zheng, Wei Zhang, Chengjie Wang, and Long Zeng. Class-aware contrastive semi-supervised learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022.
[75] Muli Yang, Yuehua Zhu, Jiaping Yu, Aming Wu, and Cheng Deng. Divide and conquer: Compositional experts for generalized novel class discovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14268-14277, 2022.
[76] Qing Yu, Daiki Ikami, Go Irie, and Kiyoharu Aizawa. Multi-task curriculum framework for open-set semi-supervised learning. In Proceedings of the European Conference on Computer Vision, pages 438-454. Springer, 2020.
[77] Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised learning via redundancy reduction. In International Conference on Machine Learning, pages 12310-12320. PMLR, 2021.
[78] Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. S4l: Self-supervised semi-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1476-1485, 2019.
[79] Jingyang Zhang, Jingkang Yang, Pengyun Wang, Haoqi Wang, Yueqian Lin, Haoran Zhang, Yiyou Sun, Xuefeng Du, Kaiyang Zhou, Wayne Zhang, et al. Openood v1.5: Enhanced benchmark for out-of-distribution detection.
arXiv preprint arXiv:2306.09301, 2023. +[80] Kai Zhang, James T Kwok, and Bahram Parvin. Prototype vector machine for large scale semi-supervised learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1233-1240, 2009. +[81] Sheng Zhang, Salman Khan, Zhiqiang Shen, Muzammal Naseer, Guangyi Chen, and Fahad Khan. Promptcal: Contrastive affinity learning via auxiliary prompts for generalized novel category discovery. 2022. +[82] Bingchen Zhao and Kai Han. Novel visual category discovery with dual ranking statistics and mutual knowledge distillation. Proceedings of Advances in Neural Information Processing Systems, 34, 2021. +[83] Zhun Zhong, Enrico Fini, Subhankar Roy, Zhiming Luo, Elisa Ricci, and Nicu Sebe. Neighborhood contrastive learning for novel class discovery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10867-10875, 2021. +[84] Dengyong Zhou, Thomas Hofmann, and Bernhard Scholkopf. Semi-supervised learning on directed graphs. Advances in neural information processing systems, 17, 2004. +[85] Xiaojin Zhu. Learning from labeled and unlabeled data with label propagation. Tech. Report, 2002. +[86] Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International conference on Machine learning (ICML-03), pages 912–919, 2003. + +# Appendix + +# A Technical Details of Spectral Open-world Representation Learning + +Theorem A.1. (Recap of Theorem 3.1) We define $\mathbf{f}_x = \sqrt{w_x} f(x)$ for some function $f$ . Recall $\eta_u, \eta_l$ are two hyper-parameters defined in Eq. (1). 
Then minimizing the loss function $\mathcal{L}_{\mathrm{mf}}(F, A)$ is equivalent to minimizing the following loss function for $f$ , which we term Spectral Open-world Representation Learning (SORL): + +$$ +\begin{array}{l} \mathcal {L} _ {S O R L} (f) \triangleq - 2 \eta_ {l} \mathcal {L} _ {1} (f) - 2 \eta_ {u} \mathcal {L} _ {2} (f) \tag {10} \\ + \eta_ {l} ^ {2} \mathcal {L} _ {3} (f) + 2 \eta_ {l} \eta_ {u} \mathcal {L} _ {4} (f) + \eta_ {u} ^ {2} \mathcal {L} _ {5} (f), \\ \end{array} +$$ + +where + +$$ +\mathcal{L}_{1}(f) = \sum_{i\in \mathcal{Y}_{l}}\underset { \begin{array}{c}\bar{x}_{l}\sim \mathcal{P}_{l_{i}},\bar{x}_{l}^{\prime}\sim \mathcal{P}_{l_{i}},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{l}),x^{+}\sim \mathcal{T}(\cdot |\bar{x}_{l}^{\prime}) \end{array} }{\mathbb{E}}\left[f(x)^{\top}f\left(x^{+}\right)\right], +$$ + +$$ +\mathcal{L}_{2}(f) = \underset { \begin{array}{c}\bar{x}_{u}\sim \mathcal{P},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{u}),x^{+}\sim \mathcal{T}(\cdot |\bar{x}_{u}) \end{array} }{\mathbb{E}}\left[f(x)^{\top}f\left(x^{+}\right)\right], +$$ + +$$ +\mathcal{L}_{3}(f) = \sum_{i,j\in \mathcal{Y}_{l}}\underset { \begin{array}{c}\bar{x}_{l}\sim \mathcal{P}_{l_{i}},\bar{x}_{l}^{\prime}\sim \mathcal{P}_{l_{j}},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{l}),x^{-}\sim \mathcal{T}(\cdot |\bar{x}_{l}^{\prime}) \end{array} }{\mathbb{E}}\left[ \left(f(x)^{\top}f\left(x^{-}\right)\right)^{2}\right], +$$ + +$$ +\mathcal{L}_{4}(f) = \sum_{i\in \mathcal{Y}_{l}}\underset { \begin{array}{c}\bar{x}_{l}\sim \mathcal{P}_{l_{i}},\bar{x}_{u}\sim \mathcal{P},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{l}),x^{-}\sim \mathcal{T}(\cdot |\bar{x}_{u}) \end{array} }{\mathbb{E}}\left[ \left(f(x)^{\top}f\left(x^{-}\right)\right)^{2}\right], +$$ + +$$ +\mathcal{L}_{5}(f) = \underset { \begin{array}{c}\bar{x}_{u}\sim \mathcal{P},\bar{x}_{u}^{\prime}\sim \mathcal{P},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{u}),x^{-}\sim \mathcal{T}(\cdot |\bar{x}_{u}^{\prime}) \end{array} }{\mathbb{E}}\left[ 
\left(f(x)^{\top}f\left(x^{-}\right)\right)^{2}\right]. +$$ + +Proof. We can expand $\mathcal{L}_{\mathrm{mf}}(F,A)$ and obtain + +$$ +\begin{array}{l} \mathcal {L} _ {\mathrm {m f}} (F, A) = \sum_ {x, x ^ {\prime} \in \mathcal {X}} \left(\frac {w _ {x x ^ {\prime}}}{\sqrt {w _ {x} w _ {x ^ {\prime}}}} - \mathbf {f} _ {x} ^ {\top} \mathbf {f} _ {x ^ {\prime}}\right) ^ {2} \\ = \operatorname {c o n s t} + \sum_ {x, x ^ {\prime} \in \mathcal {X}} \left(- 2 w _ {x x ^ {\prime}} f (x) ^ {\top} f \left(x ^ {\prime}\right) + w _ {x} w _ {x ^ {\prime}} \left(f (x) ^ {\top} f \left(x ^ {\prime}\right)\right) ^ {2}\right), \\ \end{array} +$$ + +where $\mathbf{f}_x = \sqrt{w_x} f(x)$ is a re-scaled version of $f(x)$ . At a high level, we follow the proof in [23], while the specific form of loss varies with the different definitions of positive/negative pairs. The form of $\mathcal{L}_{\mathrm{SORL}}(f)$ is derived from plugging $w_{xx'}$ and $w_x$ . + +Recall that $w_{xx'}$ is defined by + +$$ +w _ {x x ^ {\prime}} = \eta_ {l} \sum_ {i \in \mathcal {Y} _ {l}} \mathbb {E} _ {\bar {x} _ {l} \sim \mathcal {P} _ {l _ {i}}} \mathbb {E} _ {\bar {x} _ {l} ^ {\prime} \sim \mathcal {P} _ {l _ {i}}} \mathcal {T} (x | \bar {x} _ {l}) \mathcal {T} (x ^ {\prime} | \bar {x} _ {l} ^ {\prime}) + \eta_ {u} \mathbb {E} _ {\bar {x} _ {u} \sim \mathcal {P}} \mathcal {T} (x | \bar {x} _ {u}) \mathcal {T} (x ^ {\prime} | \bar {x} _ {u}), +$$ + +and $w_{x}$ is given by + +$$ +\begin{array}{l} w _ {x} = \sum_ {x ^ {\prime}} w _ {x x ^ {\prime}} \\ = \eta_ {l} \sum_ {i \in \mathcal {Y} _ {l}} \mathbb {E} _ {\bar {x} _ {l} \sim \mathcal {P} _ {l _ {i}}} \mathbb {E} _ {\bar {x} _ {l} ^ {\prime} \sim \mathcal {P} _ {l _ {i}}} \mathcal {T} (x | \bar {x} _ {l}) \sum_ {x ^ {\prime}} \mathcal {T} \left(x ^ {\prime} | \bar {x} _ {l} ^ {\prime}\right) + \eta_ {u} \mathbb {E} _ {\bar {x} _ {u} \sim \mathcal {P}} \mathcal {T} (x | \bar {x} _ {u}) \sum_ {x ^ {\prime}} \mathcal {T} \left(x ^ {\prime} | \bar {x} 
_ {u}\right) \\ = \eta_{l} \sum_{i \in \mathcal{Y}_{l}} \mathbb{E}_{\bar{x}_{l} \sim \mathcal{P}_{l_{i}}} \mathcal{T}(x | \bar{x}_{l}) + \eta_{u} \mathbb{E}_{\bar{x}_{u} \sim \mathcal{P}} \mathcal{T}(x | \bar{x}_{u}). \\ \end{array}
$$

Plugging in $w_{xx'}$, we have

$$
\begin{array}{l} - 2 \sum_{x, x^{\prime} \in \mathcal{X}} w_{x x^{\prime}} f(x)^{\top} f\left(x^{\prime}\right) \\ = - 2 \sum_{x, x^{+} \in \mathcal{X}} w_{x x^{+}} f(x)^{\top} f\left(x^{+}\right) \\ = - 2 \eta_{l} \sum_{i \in \mathcal{Y}_{l}} \mathbb{E}_{\bar{x}_{l} \sim \mathcal{P}_{l_{i}}} \mathbb{E}_{\bar{x}_{l}^{\prime} \sim \mathcal{P}_{l_{i}}} \sum_{x, x^{\prime} \in \mathcal{X}} \mathcal{T}(x | \bar{x}_{l}) \mathcal{T}\left(x^{\prime} | \bar{x}_{l}^{\prime}\right) f(x)^{\top} f\left(x^{\prime}\right) \\ - 2 \eta_{u} \mathbb{E}_{\bar{x}_{u} \sim \mathcal{P}} \sum_{x, x^{\prime}} \mathcal{T}(x | \bar{x}_{u}) \mathcal{T}\left(x^{\prime} | \bar{x}_{u}\right) f(x)^{\top} f\left(x^{\prime}\right) \\ = - 2\eta_{l} \sum_{i\in \mathcal{Y}_{l}}\underset { \begin{array}{c}\bar{x}_{l}\sim \mathcal{P}_{l_{i}}, \bar{x}_{l}^{\prime}\sim \mathcal{P}_{l_{i}},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{l}),x^{+}\sim \mathcal{T}(\cdot |\bar{x}_{l}^{\prime}) \end{array} }{\mathbb{E}}\left[f(x)^{\top}f\left(x^{+}\right)\right] \\ - 2\eta_{u}\underset { \begin{array}{c}\bar{x}_{u}\sim \mathcal{P},\\ x\sim \mathcal{T}(\cdot |\bar{x}_{u}),x^{+}\sim \mathcal{T}(\cdot |\bar{x}_{u}) \end{array} }{\mathbb{E}}\left[f(x)^{\top}f\left(x^{+}\right)\right] \\ = - 2 \eta_{l} \mathcal{L}_{1}(f) - 2 \eta_{u} \mathcal{L}_{2}(f).
\\ \end{array}
$$

Plugging in $w_{x}$ and $w_{x^{\prime}}$, we have

$$
\begin{array}{l} \sum_{x, x^{\prime} \in \mathcal{X}} w_{x} w_{x^{\prime}} \left(f(x)^{\top} f(x^{\prime})\right)^{2} \\ = \sum_{x, x^{\prime} \in \mathcal{X}} \left(\eta_{l} \sum_{i \in \mathcal{Y}_{l}} \mathbb{E}_{\bar{x}_{l} \sim \mathcal{P}_{l_{i}}} \mathcal{T}(x \mid \bar{x}_{l}) + \eta_{u} \mathbb{E}_{\bar{x}_{u} \sim \mathcal{P}} \mathcal{T}(x \mid \bar{x}_{u})\right) \\ \quad \cdot \left(\eta_{l} \sum_{j \in \mathcal{Y}_{l}} \mathbb{E}_{\bar{x}_{l}^{\prime} \sim \mathcal{P}_{l_{j}}} \mathcal{T}(x^{\prime} \mid \bar{x}_{l}^{\prime}) + \eta_{u} \mathbb{E}_{\bar{x}_{u}^{\prime} \sim \mathcal{P}} \mathcal{T}(x^{\prime} \mid \bar{x}_{u}^{\prime})\right) \left(f(x)^{\top} f(x^{\prime})\right)^{2} \\ = \eta_{l}^{2} \sum_{x, x^{\prime} \in \mathcal{X}} \sum_{i \in \mathcal{Y}_{l}} \mathbb{E}_{\bar{x}_{l} \sim \mathcal{P}_{l_{i}}} \mathcal{T}(x \mid \bar{x}_{l}) \sum_{j \in \mathcal{Y}_{l}} \mathbb{E}_{\bar{x}_{l}^{\prime} \sim \mathcal{P}_{l_{j}}} \mathcal{T}(x^{\prime} \mid \bar{x}_{l}^{\prime}) \left(f(x)^{\top} f(x^{\prime})\right)^{2} \\ \quad + 2 \eta_{u} \eta_{l} \sum_{x, x^{\prime} \in \mathcal{X}} \sum_{i \in \mathcal{Y}_{l}} \mathbb{E}_{\bar{x}_{l} \sim \mathcal{P}_{l_{i}}} \mathcal{T}(x \mid \bar{x}_{l}) \, \mathbb{E}_{\bar{x}_{u} \sim \mathcal{P}} \mathcal{T}(x^{\prime} \mid \bar{x}_{u}) \left(f(x)^{\top} f(x^{\prime})\right)^{2} \\ \quad + \eta_{u}^{2} \sum_{x, x^{\prime} \in \mathcal{X}} \mathbb{E}_{\bar{x}_{u} \sim \mathcal{P}} \mathcal{T}(x \mid \bar{x}_{u}) \, \mathbb{E}_{\bar{x}_{u}^{\prime} \sim \mathcal{P}} \mathcal{T}(x^{\prime} \mid \bar{x}_{u}^{\prime}) \left(f(x)^{\top} f(x^{\prime})\right)^{2} \\ = \eta_{l}^{2} \sum_{i \in \mathcal{Y}_{l}} \sum_{j \in \mathcal{Y}_{l}} \underset{\substack{\bar{x}_{l} \sim \mathcal{P}_{l_{i}}, \bar{x}_{l}^{\prime} \sim \mathcal{P}_{l_{j}}, \\ x \sim \mathcal{T}(\cdot \mid \bar{x}_{l}), x^{\prime} \sim \mathcal{T}(\cdot \mid \bar{x}_{l}^{\prime})}}{\mathbb{E}} \left[\left(f(x)^{\top} f(x^{\prime})\right)^{2}\right] + 2 \eta_{u} \eta_{l} \sum_{i \in \mathcal{Y}_{l}} \underset{\substack{\bar{x}_{l} \sim \mathcal{P}_{l_{i}}, \bar{x}_{u} \sim \mathcal{P}, \\ x \sim \mathcal{T}(\cdot \mid \bar{x}_{l}), x^{\prime} \sim \mathcal{T}(\cdot \mid \bar{x}_{u})}}{\mathbb{E}} \left[\left(f(x)^{\top} f(x^{\prime})\right)^{2}\right] \\ \quad + \eta_{u}^{2} \underset{\substack{\bar{x}_{u} \sim \mathcal{P}, \bar{x}_{u}^{\prime} \sim \mathcal{P}, \\ x \sim \mathcal{T}(\cdot \mid \bar{x}_{u}), x^{\prime} \sim \mathcal{T}(\cdot \mid \bar{x}_{u}^{\prime})}}{\mathbb{E}} \left[\left(f(x)^{\top} f(x^{\prime})\right)^{2}\right] \\ = \eta_{l}^{2} \mathcal{L}_{3}(f) + 2 \eta_{u} \eta_{l} \mathcal{L}_{4}(f) + \eta_{u}^{2} \mathcal{L}_{5}(f). \end{array}
$$

# B Technical Details for Toy Example

# B.1 Calculation Details for Figure 2.

We first recap the toy example, which illustrates the core idea of our theoretical findings. Specifically, the example aims to distinguish 3D objects with different shapes, as shown in Figure 2. These images are generated by 3D rendering software [31] with user-defined properties including color, shape, size, position, etc.

Data design. Suppose the training samples come from three types, $\mathcal{X}_{\square}, \mathcal{X}_{\bigcirc}, \mathcal{X}_{\bigodot}$ . Let $\mathcal{X}_{\square}$ be the sample space of the known class, and $\mathcal{X}_{\bigcirc}, \mathcal{X}_{\bigodot}$ be the sample spaces of the novel classes. Further, the two novel classes are constructed to have different relationships with the known class.
Specifically, we construct the toy dataset with 6 elements as shown in Figure 4(a).

Augmentation graph. Based on the data design, we formally define the augmentation graph, which encodes the probability of augmenting a source image $\bar{x}$ into the augmented view $x$:

$$
\mathcal{T}(x \mid \bar{x}) = \left\{ \begin{array}{ll} \tau_{1} & \text{if } \operatorname{color}(x) = \operatorname{color}(\bar{x}), \operatorname{shape}(x) = \operatorname{shape}(\bar{x}); \\ \tau_{c} & \text{if } \operatorname{color}(x) = \operatorname{color}(\bar{x}), \operatorname{shape}(x) \neq \operatorname{shape}(\bar{x}); \\ \tau_{s} & \text{if } \operatorname{color}(x) \neq \operatorname{color}(\bar{x}), \operatorname{shape}(x) = \operatorname{shape}(\bar{x}); \\ \tau_{0} & \text{if } \operatorname{color}(x) \neq \operatorname{color}(\bar{x}), \operatorname{shape}(x) \neq \operatorname{shape}(\bar{x}). \end{array} \right. \tag{11}
$$

According to the definition above, the corresponding augmentation matrix $T$, with each element given by $\mathcal{T}(\cdot \mid \cdot)$, is shown in Figure 4(b). We proceed by showing the details of deriving $A^{(u)}$ and $A$ from $T$.

![](images/85590647d2bdfe45628542003332eb40ad45593814d3db1d11aaf51ccaa988af.jpg)
(a) Definition of Augmentation Probability

![](images/aa99b6a038bfe5d6413cdf154d763807ca4cd554ef95449cc5cc30f675ec079f.jpg)
(b) Augmentation Graph
Figure 4: An illustrative example for theoretical analysis. We consider a 6-node graph with one known class (cube) and two novel classes (sphere, cylinder). (a) The augmentation probabilities between nodes are defined by their color and shape in Eq. (11). (b) The augmentation matrix $T$ derived from Eq. (11), where we let $\tau_0 = 0$.

Derivation details for $A^{(u)}$ and $A$.
Recall that each element of $A^{(u)}$ is formed by $w_{xx'}^{(u)} = \mathbb{E}_{\bar{x} \sim \mathcal{P}} \mathcal{T}(x|\bar{x}) \mathcal{T}(x'|\bar{x})$ . In this toy example, one can then see that $A^{(u)} = \frac{1}{6} TT^\top$ , since the augmentation matrix $T$ is defined such that each element $T_{x\bar{x}} = \mathcal{T}(x|\bar{x})$ . Note that $T$ is explicitly given in Figure 4(b); then, if we let $\eta_u = 6$ , we have the closed form:

$$
\eta_u A^{(u)} = T^2 = \left[\begin{array}{cccccc} \tau_1^2 + \tau_s^2 + \tau_c^2 & 2\tau_1\tau_s & 2\tau_1\tau_c & 2\tau_c\tau_s & 0 & 0 \\ 2\tau_1\tau_s & \tau_1^2 + \tau_s^2 + \tau_c^2 & 2\tau_c\tau_s & 2\tau_1\tau_c & 0 & 0 \\ 2\tau_1\tau_c & 2\tau_c\tau_s & \tau_1^2 + \tau_s^2 + \tau_c^2 & 2\tau_1\tau_s & 0 & 0 \\ 2\tau_c\tau_s & 2\tau_1\tau_c & 2\tau_1\tau_s & \tau_1^2 + \tau_s^2 + \tau_c^2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2\tau_1^2 & 2\tau_1^2 \\ 0 & 0 & 0 & 0 & 2\tau_1^2 & 2\tau_1^2 \end{array}\right].
$$

We then derive the second part $A^{(l)}$ , whose elements are given by:

$$
w_{xx'}^{(l)} \triangleq \sum_{i \in \mathcal{Y}_l} \mathbb{E}_{\bar{x}_l \sim \mathcal{P}_{l_i}} \mathbb{E}_{\bar{x}_l' \sim \mathcal{P}_{l_i}} \mathcal{T}(x | \bar{x}_l) \mathcal{T}(x' | \bar{x}_l').
$$

Such a form can be simplified, as in Section 4, by defining $\mathfrak{l} \in \mathbb{R}^N$ with $(\mathfrak{l})_x = \mathbb{E}_{\bar{x}_l \sim \mathcal{P}_{l_1}} \mathcal{T}(x|\bar{x}_l)$ and by letting $|\mathcal{Y}_l| = 1$ .
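As a sanity check, these closed forms can be verified numerically. The following numpy sketch (with illustrative values for $\tau_1, \tau_c, \tau_s$, with $\tau_0 = 0$, and a node ordering consistent with Figure 4; all variable names are ours) builds a symmetric $T$ consistent with Eq. (11), and checks that $\eta_u A^{(u)} = T^2$ for $\eta_u = 6$ and that the label part built from $\mathfrak{l}$ is rank one:

```python
import numpy as np

# Illustrative augmentation probabilities (tau_0 = 0 as in Figure 4);
# they satisfy tau_1 + tau_c + tau_s = 1.
t1, tc, ts = 0.8, 0.12, 0.08

# A symmetric augmentation matrix T over the 6 nodes consistent with
# Eq. (11): nodes 1-2 are the known class, nodes 5-6 the isolated novel class.
T = np.array([
    [t1, ts, tc, 0., 0., 0.],
    [ts, t1, 0., tc, 0., 0.],
    [tc, 0., t1, ts, 0., 0.],
    [0., tc, ts, t1, 0., 0.],
    [0., 0., 0., 0., t1, t1],
    [0., 0., 0., 0., t1, t1],
])

# Unlabeled part: A^(u) = (1/6) T T^T, so eta_u * A^(u) = T^2 for eta_u = 6.
A_u = T @ T.T / 6.0
assert np.allclose(6.0 * A_u, T @ T)

# Label part: l averages the columns of the two known-class nodes,
# and the outer product l l^T is a rank-one matrix.
l = 0.5 * (T[:, 0] + T[:, 1])
A_l = np.outer(l, l)
assert np.linalg.matrix_rank(A_l) == 1

# Full adjacency used for Figure 2 (eta_u = 6, eta_l = 4) stays symmetric.
A = 6.0 * A_u + 4.0 * A_l
assert np.allclose(A, A.T)
```

With these values one can also check individual entries against the closed form, e.g. that $(T^2)_{12} = 2\tau_1\tau_s$.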
In this toy example, the known class has only two elements, so $\mathfrak{l} = \frac{1}{2}(T_{:,1} + T_{:,2})$ (the average of $T$'s first and second columns), and we have:

$$
A^{(l)} = \mathfrak{l}\mathfrak{l}^{\top} = \frac{1}{4}\left[\begin{array}{cccccc} (\tau_1+\tau_s)^2 & (\tau_1+\tau_s)^2 & \tau_c(\tau_1+\tau_s) & \tau_c(\tau_1+\tau_s) & 0 & 0 \\ (\tau_1+\tau_s)^2 & (\tau_1+\tau_s)^2 & \tau_c(\tau_1+\tau_s) & \tau_c(\tau_1+\tau_s) & 0 & 0 \\ \tau_c(\tau_1+\tau_s) & \tau_c(\tau_1+\tau_s) & \tau_c^2 & \tau_c^2 & 0 & 0 \\ \tau_c(\tau_1+\tau_s) & \tau_c(\tau_1+\tau_s) & \tau_c^2 & \tau_c^2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right].
$$

Finally, if we let $\eta_l = 4$ and $A = \eta_u A^{(u)} + \eta_l A^{(l)}$ , we obtain the full results in Figure 2.

# B.2 Calculation Details for Figure 3.

In this section, we present the analysis of the eigenvectors and their order for the toy example shown in Figure 2. Theorem B.1 presents the spectral analysis for the adjacency matrix with additional label information, while Theorem B.2 presents the spectral analysis for the unlabeled case.

Theorem B.1.
Let

$$
\eta_u A^{(u)} = \left[\begin{array}{cccccc} \tau_1^2 + \tau_s^2 + \tau_c^2 & 2\tau_1\tau_s & 2\tau_1\tau_c & 2\tau_c\tau_s & 0 & 0 \\ 2\tau_1\tau_s & \tau_1^2 + \tau_s^2 + \tau_c^2 & 2\tau_c\tau_s & 2\tau_1\tau_c & 0 & 0 \\ 2\tau_1\tau_c & 2\tau_c\tau_s & \tau_1^2 + \tau_s^2 + \tau_c^2 & 2\tau_1\tau_s & 0 & 0 \\ 2\tau_c\tau_s & 2\tau_1\tau_c & 2\tau_1\tau_s & \tau_1^2 + \tau_s^2 + \tau_c^2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2\tau_1^2 & 2\tau_1^2 \\ 0 & 0 & 0 & 0 & 2\tau_1^2 & 2\tau_1^2 \end{array}\right],
$$

$$
A = \eta_u A^{(u)} + \left[\begin{array}{cccccc} (\tau_1+\tau_s)^2 & (\tau_1+\tau_s)^2 & \tau_c(\tau_1+\tau_s) & \tau_c(\tau_1+\tau_s) & 0 & 0 \\ (\tau_1+\tau_s)^2 & (\tau_1+\tau_s)^2 & \tau_c(\tau_1+\tau_s) & \tau_c(\tau_1+\tau_s) & 0 & 0 \\ \tau_c(\tau_1+\tau_s) & \tau_c(\tau_1+\tau_s) & \tau_c^2 & \tau_c^2 & 0 & 0 \\ \tau_c(\tau_1+\tau_s) & \tau_c(\tau_1+\tau_s) & \tau_c^2 & \tau_c^2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right],
$$

and we assume that $1 \gg \frac{\tau_c}{\tau_1} > \frac{\tau_s}{\tau_1} > 0$, $\frac{4}{9}\tau_c \leq \tau_s \leq \tau_c$, and $\tau_1 + \tau_c + \tau_s = 1$.

Let $\lambda_1, \lambda_2, \lambda_3$ and $v_1, v_2, v_3$ be the largest three eigenvalues and their corresponding eigenvectors of $D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$, the normalized adjacency matrix of $A$.
Then the concrete forms of $\lambda_1, \lambda_2, \lambda_3$ and $v_1, v_2, v_3$ can be approximated by:

$$
\hat{\lambda}_1 = 1, \quad \hat{\lambda}_2 = 1, \quad \hat{\lambda}_3 = 1 - \frac{16}{3}\frac{\tau_c}{\tau_1},
$$

$$
\hat{v}_1 = [0, 0, 0, 0, 1, 1],
$$

$$
\hat{v}_2 = [\sqrt{3}, \sqrt{3}, 1, 1, 0, 0],
$$

$$
\hat{v}_3 = [1, 1, -\sqrt{3}, -\sqrt{3}, 0, 0].
$$

Note that the approximation gap can be tightly bounded. Specifically, for $i \in \{1,2,3\}$, we have $|\lambda_i - \hat{\lambda}_i| \leq O((\frac{\tau_c}{\tau_1})^2)$ and $\|\sin(U,\hat{U})\|_F \leq O(\frac{\tau_c}{\tau_1})$, where $U = [v_1, v_2, v_3]$ and $\hat{U} = [\hat{v}_1, \hat{v}_2, \hat{v}_3]$.

Proof. By $\tau_1 + \tau_c + \tau_s = 1$ and $1 \gg \frac{\tau_c}{\tau_1} > \frac{\tau_s}{\tau_1} > 0$, we define the following approximations, each accurate up to error $O\left(\left(\frac{\tau_c}{\tau_1}\right)^2\right)$:

$$
A \approx \widehat{A} = \tau_1^2 \left[\begin{array}{cccccc} 2 + 2\frac{\tau_s}{\tau_1} & 1 + 4\frac{\tau_s}{\tau_1} & 3\frac{\tau_c}{\tau_1} & \frac{\tau_c}{\tau_1} & 0 & 0 \\ 1 + 4\frac{\tau_s}{\tau_1} & 2 + 2\frac{\tau_s}{\tau_1} & \frac{\tau_c}{\tau_1} & 3\frac{\tau_c}{\tau_1} & 0 & 0 \\ 3\frac{\tau_c}{\tau_1} & \frac{\tau_c}{\tau_1} & 1 & 2\frac{\tau_s}{\tau_1} & 0 & 0 \\ \frac{\tau_c}{\tau_1} & 3\frac{\tau_c}{\tau_1} & 2\frac{\tau_s}{\tau_1} & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 2 \\ 0 & 0 & 0 & 0 & 2 & 2 \end{array}\right].
$$

$$
D \approx \widehat{D} = \tau_1^2 \, \mathrm{diag}\left(\left[3\left(1 + 2\frac{\tau_s}{\tau_1} + \frac{4}{3}\frac{\tau_c}{\tau_1}\right), 3\left(1 + 2\frac{\tau_s}{\tau_1} + \frac{4}{3}\frac{\tau_c}{\tau_1}\right), 1 + 2\frac{\tau_s}{\tau_1} + 4\frac{\tau_c}{\tau_1}, 1 + 2\frac{\tau_s}{\tau_1} + 4\frac{\tau_c}{\tau_1}, 4, 4\right]\right),
$$

$$
D^{-\frac{1}{2}} \approx \widehat{D}^{-\frac{1}{2}} = \frac{1}{\tau_1} \, \mathrm{diag}\left(\left[\frac{1}{\sqrt{3}}\left(1 - \frac{\tau_s}{\tau_1} - \frac{2}{3}\frac{\tau_c}{\tau_1}\right), \frac{1}{\sqrt{3}}\left(1 - \frac{\tau_s}{\tau_1} - \frac{2}{3}\frac{\tau_c}{\tau_1}\right), 1 - \frac{\tau_s}{\tau_1} - 2\frac{\tau_c}{\tau_1}, 1 - \frac{\tau_s}{\tau_1} - 2\frac{\tau_c}{\tau_1}, \frac{1}{2}, \frac{1}{2}\right]\right),
$$

$$
\begin{array}{l} D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \approx \widehat{D}^{-\frac{1}{2}} \widehat{A} \widehat{D}^{-\frac{1}{2}} \\ = \left[\begin{array}{cccccc} \frac{2}{3}\left(1 - \frac{\tau_s}{\tau_1} - \frac{4}{3}\frac{\tau_c}{\tau_1}\right) & \frac{1}{3}\left(1 + 2\frac{\tau_s}{\tau_1} - \frac{4}{3}\frac{\tau_c}{\tau_1}\right) & \sqrt{3}\frac{\tau_c}{\tau_1} & \frac{1}{\sqrt{3}}\frac{\tau_c}{\tau_1} & 0 & 0 \\ \frac{1}{3}\left(1 + 2\frac{\tau_s}{\tau_1} - \frac{4}{3}\frac{\tau_c}{\tau_1}\right) & \frac{2}{3}\left(1 - \frac{\tau_s}{\tau_1} - \frac{4}{3}\frac{\tau_c}{\tau_1}\right) & \frac{1}{\sqrt{3}}\frac{\tau_c}{\tau_1} & \sqrt{3}\frac{\tau_c}{\tau_1} & 0 & 0 \\ \sqrt{3}\frac{\tau_c}{\tau_1} & \frac{1}{\sqrt{3}}\frac{\tau_c}{\tau_1} & 1 - 2\frac{\tau_s}{\tau_1} - 4\frac{\tau_c}{\tau_1} & 2\frac{\tau_s}{\tau_1} & 0 & 0 \\ \frac{1}{\sqrt{3}}\frac{\tau_c}{\tau_1} & \sqrt{3}\frac{\tau_c}{\tau_1} & 2\frac{\tau_s}{\tau_1} & 1 - 2\frac{\tau_s}{\tau_1} - 4\frac{\tau_c}{\tau_1} & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{array}\right]. \end{array}
$$

And we have

$$
\left\| D^{-\frac{1}{2}} A D^{-\frac{1}{2}} - \widehat{D}^{-\frac{1}{2}} \widehat{A} \widehat{D}^{-\frac{1}{2}} \right\|_2 \leq \left\| D^{-\frac{1}{2}} A D^{-\frac{1}{2}} - \widehat{D}^{-\frac{1}{2}} \widehat{A} \widehat{D}^{-\frac{1}{2}} \right\|_F \leq O\left(\left(\frac{\tau_c}{\tau_1}\right)^2\right).
$$

Let $\hat{\lambda}_a, \dots, \hat{\lambda}_f$ be the six eigenvalues of $\widehat{D}^{-\frac{1}{2}} \widehat{A} \widehat{D}^{-\frac{1}{2}}$, and $\hat{v}_a, \dots, \hat{v}_f$ the corresponding eigenvectors. By direct calculation, we have

$$
\hat{\lambda}_a = 1, \quad \hat{\lambda}_b = 1, \quad \hat{\lambda}_c = 1 - \frac{16}{3}\frac{\tau_c}{\tau_1}, \quad \hat{\lambda}_d = 0,
$$

with corresponding eigenvectors

$$
\hat{v}_a = [0, 0, 0, 0, 1, 1],
$$

$$
\hat{v}_b = [\sqrt{3}, \sqrt{3}, 1, 1, 0, 0],
$$

$$
\hat{v}_c = [1, 1, -\sqrt{3}, -\sqrt{3}, 0, 0],
$$

$$
\hat{v}_d = [0, 0, 0, 0, 1, -1].
$$

For the remaining two eigenvectors, by symmetry, they have the form

$$
\hat{v}_e = \left[\alpha\left(\frac{\tau_s}{\tau_1}, \frac{\tau_c}{\tau_1}\right), -\alpha\left(\frac{\tau_s}{\tau_1}, \frac{\tau_c}{\tau_1}\right), \beta\left(\frac{\tau_s}{\tau_1}, \frac{\tau_c}{\tau_1}\right), -\beta\left(\frac{\tau_s}{\tau_1}, \frac{\tau_c}{\tau_1}\right), 0, 0\right],
$$

$$
\hat{v}_f = \left[\beta\left(\frac{\tau_s}{\tau_1}, \frac{\tau_c}{\tau_1}\right), -\beta\left(\frac{\tau_s}{\tau_1}, \frac{\tau_c}{\tau_1}\right), -\alpha\left(\frac{\tau_s}{\tau_1}, \frac{\tau_c}{\tau_1}\right), \alpha\left(\frac{\tau_s}{\tau_1}, \frac{\tau_c}{\tau_1}\right), 0, 0\right],
$$

where $\alpha, \beta$ are some real functions. Then, by solving

$$
\widehat{D}^{-\frac{1}{2}} \widehat{A} \widehat{D}^{-\frac{1}{2}} \hat{v}_e = \hat{\lambda}_e \hat{v}_e,
$$

$$
\widehat{D}^{-\frac{1}{2}} \widehat{A} \widehat{D}^{-\frac{1}{2}} \hat{v}_f = \hat{\lambda}_f \hat{v}_f,
$$

we get

$$
\hat{\lambda}_e = \frac{1}{9}\left(\sqrt{\left(3 - 12\frac{\tau_s}{\tau_1} - 16\frac{\tau_c}{\tau_1}\right)^2 + 108\left(\frac{\tau_c}{\tau_1}\right)^2} - 24\frac{\tau_s}{\tau_1} - 20\frac{\tau_c}{\tau_1} + 6\right),
$$

$$
\hat{\lambda}_f = \frac{1}{9}\left(-\sqrt{\left(3 - 12\frac{\tau_s}{\tau_1} - 16\frac{\tau_c}{\tau_1}\right)^2 + 108\left(\frac{\tau_c}{\tau_1}\right)^2} - 24\frac{\tau_s}{\tau_1} - 20\frac{\tau_c}{\tau_1} + 6\right).
$$

Now, we show that $\hat{\lambda}_c > \hat{\lambda}_e$ .
By $\frac{\tau_c}{\tau_1} \ll 1$ and $\frac{4}{9}\tau_c \leq \tau_s \leq \tau_c$, we have

$$
\begin{array}{l} \hat{\lambda}_c \geq \hat{\lambda}_e \Leftrightarrow 3 + 24\frac{\tau_s}{\tau_1} - 28\frac{\tau_c}{\tau_1} \geq \sqrt{\left(3 - 12\frac{\tau_s}{\tau_1} - 16\frac{\tau_c}{\tau_1}\right)^2 + 108\left(\frac{\tau_c}{\tau_1}\right)^2} \\ \Leftrightarrow 36\left(\frac{\tau_s}{\tau_1}\right)^2 + 35\left(\frac{\tau_c}{\tau_1}\right)^2 - 144\frac{\tau_s}{\tau_1}\frac{\tau_c}{\tau_1} + 18\frac{\tau_s}{\tau_1} - 6\frac{\tau_c}{\tau_1} \geq 0. \end{array}
$$

Thus, we have $1 = \hat{\lambda}_a = \hat{\lambda}_b > \hat{\lambda}_c > \hat{\lambda}_e > \hat{\lambda}_f > \hat{\lambda}_d = 0$. Moreover, we also have

$$
\hat{\lambda}_c - \hat{\lambda}_e = 1 - \frac{16}{3}\frac{\tau_c}{\tau_1} - \frac{1}{9}\left(\sqrt{\left(3 - 12\frac{\tau_s}{\tau_1} - 16\frac{\tau_c}{\tau_1}\right)^2 + 108\left(\frac{\tau_c}{\tau_1}\right)^2} - 24\frac{\tau_s}{\tau_1} - 20\frac{\tau_c}{\tau_1} + 6\right) \geq \Omega\left(\frac{\tau_c}{\tau_1}\right).
$$

Let $\hat{\lambda}_1 = \hat{\lambda}_a, \hat{\lambda}_2 = \hat{\lambda}_b, \hat{\lambda}_3 = \hat{\lambda}_c$. Then, by Weyl's theorem, for $i \in \{1, 2, 3\}$, we have

$$
\left|\lambda_i - \hat{\lambda}_i\right| \leq \left\| D^{-\frac{1}{2}} A D^{-\frac{1}{2}} - \widehat{D}^{-\frac{1}{2}} \widehat{A} \widehat{D}^{-\frac{1}{2}} \right\|_2 \leq O\left(\left(\frac{\tau_c}{\tau_1}\right)^2\right).
$$

By the Davis–Kahan theorem, we have

$$
\|\sin(U, \hat{U})\|_F \leq \frac{O\left(\left(\frac{\tau_c}{\tau_1}\right)^2\right)}{\Omega\left(\frac{\tau_c}{\tau_1}\right)} \leq O\left(\frac{\tau_c}{\tau_1}\right).
$$

This finishes the proof.

Theorem B.2.
Recall that $\eta_u A^{(u)}$ is defined in Theorem B.1. Assume $1 \gg \frac{\tau_c}{\tau_1} > \frac{\tau_s}{\tau_1} > 0$ and $\tau_1 + \tau_c + \tau_s = 1$. Let $\lambda_1^{(u)}, \lambda_2^{(u)}, \lambda_3^{(u)}$ and $v_1^{(u)}, v_2^{(u)}, v_3^{(u)}$ be the largest three eigenvalues and their corresponding eigenvectors of $D^{(u)-\frac{1}{2}}(\eta_u A^{(u)}) D^{(u)-\frac{1}{2}}$, the normalized adjacency matrix of $\eta_u A^{(u)}$. Let

$$
\hat{\lambda}_1^{(u)} = 1, \quad \hat{\lambda}_2^{(u)} = 1, \quad \hat{\lambda}_3^{(u)} = 1 - 4\frac{\tau_s}{\tau_1},
$$

$$
\hat{v}_1^{(u)} = [0, 0, 0, 0, 1, 1],
$$

$$
\hat{v}_2^{(u)} = [1, 1, 1, 1, 0, 0],
$$

$$
\hat{v}_3^{(u)} = [1, -1, 1, -1, 0, 0].
$$

Let $U^{(u)} = [v_1^{(u)}, v_2^{(u)}, v_3^{(u)}]$ and $\hat{U}^{(u)} = [\hat{v}_1^{(u)}, \hat{v}_2^{(u)}, \hat{v}_3^{(u)}]$. Then, for $i \in \{1, 2, 3\}$, we have $|\lambda_i^{(u)} - \hat{\lambda}_i^{(u)}| \leq O((\frac{\tau_c}{\tau_1})^2)$ and $\|\sin(U^{(u)}, \hat{U}^{(u)})\|_F \leq O(\frac{\tau_c^2}{\tau_1(\tau_c - \tau_s)})$.

Proof. Similar to the proof of Theorem B.1, up to error $O\left(\left(\frac{\tau_c}{\tau_1}\right)^2\right)$ we have the following approximation,

$$
\widehat{\eta_u A^{(u)}} = \tau_1^2 \left[\begin{array}{cccccc} 1 & 2\frac{\tau_s}{\tau_1} & 2\frac{\tau_c}{\tau_1} & 0 & 0 & 0 \\ 2\frac{\tau_s}{\tau_1} & 1 & 0 & 2\frac{\tau_c}{\tau_1} & 0 & 0 \\ 2\frac{\tau_c}{\tau_1} & 0 & 1 & 2\frac{\tau_s}{\tau_1} & 0 & 0 \\ 0 & 2\frac{\tau_c}{\tau_1} & 2\frac{\tau_s}{\tau_1} & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 2 \\ 0 & 0 & 0 & 0 & 2 & 2 \end{array}\right].
$$

$$
\widehat{D^{(u)}} = \tau_1^2 \, \mathrm{diag}\left(\left[1 + 2\frac{\tau_s}{\tau_1} + 2\frac{\tau_c}{\tau_1}, 1 + 2\frac{\tau_s}{\tau_1} + 2\frac{\tau_c}{\tau_1}, 1 + 2\frac{\tau_s}{\tau_1} + 2\frac{\tau_c}{\tau_1}, 1 + 2\frac{\tau_s}{\tau_1} + 2\frac{\tau_c}{\tau_1}, 4, 4\right]\right),
$$

$$
\widehat{D^{(u)}}^{-\frac{1}{2}} = \frac{1}{\tau_1} \, \mathrm{diag}\left(\left[1 - \frac{\tau_s}{\tau_1} - \frac{\tau_c}{\tau_1}, 1 - \frac{\tau_s}{\tau_1} - \frac{\tau_c}{\tau_1}, 1 - \frac{\tau_s}{\tau_1} - \frac{\tau_c}{\tau_1}, 1 - \frac{\tau_s}{\tau_1} - \frac{\tau_c}{\tau_1}, \frac{1}{2}, \frac{1}{2}\right]\right),
$$

$$
\widehat{D^{(u)}}^{-\frac{1}{2}} \, \widehat{\eta_u A^{(u)}} \, \widehat{D^{(u)}}^{-\frac{1}{2}} = \left[\begin{array}{cccccc} 1 - 2\frac{\tau_s}{\tau_1} - 2\frac{\tau_c}{\tau_1} & 2\frac{\tau_s}{\tau_1} & 2\frac{\tau_c}{\tau_1} & 0 & 0 & 0 \\ 2\frac{\tau_s}{\tau_1} & 1 - 2\frac{\tau_s}{\tau_1} - 2\frac{\tau_c}{\tau_1} & 0 & 2\frac{\tau_c}{\tau_1} & 0 & 0 \\ 2\frac{\tau_c}{\tau_1} & 0 & 1 - 2\frac{\tau_s}{\tau_1} - 2\frac{\tau_c}{\tau_1} & 2\frac{\tau_s}{\tau_1} & 0 & 0 \\ 0 & 2\frac{\tau_c}{\tau_1} & 2\frac{\tau_s}{\tau_1} & 1 - 2\frac{\tau_s}{\tau_1} - 2\frac{\tau_c}{\tau_1} & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{array}\right].
$$

Let $\hat{\lambda}_1^{(u)}, \ldots, \hat{\lambda}_6^{(u)}$ be the six eigenvalues of $\widehat{D^{(u)}}^{-\frac{1}{2}} \, \widehat{\eta_u A^{(u)}} \, \widehat{D^{(u)}}^{-\frac{1}{2}}$, and $\hat{v}_1^{(u)}, \ldots, \hat{v}_6^{(u)}$ the corresponding eigenvectors.
By direct calculation, we have

$$
\hat{\lambda}_1^{(u)} = 1, \quad \hat{\lambda}_2^{(u)} = 1, \quad \hat{\lambda}_3^{(u)} = 1 - 4\frac{\tau_s}{\tau_1}, \quad \hat{\lambda}_4^{(u)} = 1 - 4\frac{\tau_c}{\tau_1}, \quad \hat{\lambda}_5^{(u)} = 1 - 4\frac{\tau_s}{\tau_1} - 4\frac{\tau_c}{\tau_1}, \quad \hat{\lambda}_6^{(u)} = 0,
$$

with corresponding eigenvectors

$$
\begin{array}{l} \hat{v}_1^{(u)} = [0, 0, 0, 0, 1, 1], \\ \hat{v}_2^{(u)} = [1, 1, 1, 1, 0, 0], \\ \hat{v}_3^{(u)} = [1, -1, 1, -1, 0, 0], \\ \hat{v}_4^{(u)} = [1, 1, -1, -1, 0, 0], \\ \hat{v}_5^{(u)} = [1, -1, -1, 1, 0, 0], \\ \hat{v}_6^{(u)} = [0, 0, 0, 0, 1, -1]. \end{array}
$$

Then, by Weyl's theorem, for $i \in \{1, 2, 3\}$, we have

$$
\left|\lambda_i^{(u)} - \hat{\lambda}_i^{(u)}\right| \leq \left\| D^{(u)-\frac{1}{2}} \eta_u A^{(u)} D^{(u)-\frac{1}{2}} - \widehat{D^{(u)}}^{-\frac{1}{2}} \, \widehat{\eta_u A^{(u)}} \, \widehat{D^{(u)}}^{-\frac{1}{2}} \right\|_2 \leq O\left(\left(\frac{\tau_c}{\tau_1}\right)^2\right).
$$

By the Davis–Kahan theorem, we have

$$
\|\sin(U^{(u)}, \hat{U}^{(u)})\|_F \leq \frac{O\left(\left(\frac{\tau_c}{\tau_1}\right)^2\right)}{4\left(\frac{\tau_c}{\tau_1} - \frac{\tau_s}{\tau_1}\right)} \leq O\left(\frac{\tau_c^2}{\tau_1(\tau_c - \tau_s)}\right).
$$

This finishes the proof.

# C Technical Details for Main Theory

# C.1 Notation

We let $\mathbf{1}_n$, $\mathbf{0}_n$ be the $n$-dimensional vectors with all entries equal to 1 or 0, respectively; $\mathbf{1}_{m\times n}$, $\mathbf{0}_{m\times n}$ are defined similarly for $m$-by-$n$ matrices. $I_n$ is the identity matrix of shape $n \times n$. For any matrix $V$, $V_{(i,j)}$ denotes the value at the $i$-th row and $j$-th column of $V$. If the matrix is subscripted, like $V_k$, we use a comma in between, as in $V_{k,(i,j)}$.
Similarly, $\mathbf{v}_{(i)}$ and $\mathbf{v}_{k,(i)}$ denote the $i$-th entries of the vectors $\mathbf{v}$ and $\mathbf{v}_k$, respectively. $[n]$ abbreviates the set $\{1, 2, \dots, n\}$.

# C.2 Matrix Form of K-means and the Derivative

Recall that we defined the K-means clustering measure of features in Sec. 4:

$$
\mathcal{M}_{kms}(\Pi, Z) = \sum_{\pi \in \Pi} \sum_{i \in \pi} \|\mathbf{z}_i - \boldsymbol{\mu}_\pi\|^2 / \sum_{\pi \in \Pi} |\pi| \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_\Pi\|^2, \tag{12}
$$

where the numerator measures the intra-class distance:

$$
\mathcal{M}_{\text{intra}}(\Pi, Z) = \sum_{\pi \in \Pi} \sum_{i \in \pi} \|\mathbf{z}_i - \boldsymbol{\mu}_\pi\|^2, \tag{13}
$$

and the denominator measures the inter-class distance:

$$
\mathcal{M}_{\text{inter}}(\Pi, Z) = \sum_{\pi \in \Pi} |\pi| \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_\Pi\|^2. \tag{14}
$$

We next show how to convert the intra-class and inter-class measures into matrix form, which is convenient for analysis.

Intra-class measure. Note that the $K$-means intra-class measure can be rewritten in matrix form:

$$
\mathcal{M}_{intra}(\Pi, Z) = \|Z - H_\Pi Z\|_F^2,
$$

where $H_\Pi$ is the matrix that maps $Z$ to the mean vectors of the clusters defined by $\Pi$. Without loss of generality, we assume $Z$ is ordered according to the partition in $\Pi$: the first $|\pi_1|$ vectors are in $\pi_1$, the next $|\pi_2|$ vectors are in $\pi_2$, etc.
Then $H_\Pi$ is given by:

$$
H_\Pi = \left[\begin{array}{cccc} \frac{1}{|\pi_1|}\mathbf{1}_{|\pi_1| \times |\pi_1|} & \mathbf{0} & \ldots & \mathbf{0} \\ \mathbf{0} & \frac{1}{|\pi_2|}\mathbf{1}_{|\pi_2| \times |\pi_2|} & \ldots & \mathbf{0} \\ \ldots & \ldots & \ldots & \ldots \\ \mathbf{0} & \mathbf{0} & \ldots & \frac{1}{|\pi_k|}\mathbf{1}_{|\pi_k| \times |\pi_k|} \end{array}\right].
$$

Going further, since $H_\Pi$ is symmetric and idempotent ($H_\Pi^2 = H_\Pi$), we have:

$$
\begin{array}{l} \mathcal{M}_{intra}(\Pi, Z) = \|Z - H_\Pi Z\|_F^2 \\ = \operatorname{Tr}\left(\left(I - H_\Pi\right)^2 Z Z^\top\right) \\ = \operatorname{Tr}\left(\left(I - 2H_\Pi + H_\Pi^2\right) Z Z^\top\right) \\ = \operatorname{Tr}\left(\left(I - H_\Pi\right) Z Z^\top\right). \end{array}
$$

Inter-class measure. The inter-class measure can be equivalently given by:

$$
\mathcal{M}_{inter}(\Pi, Z) = \left\| H_\Pi Z - \frac{1}{N}\mathbf{1}_{N \times N} Z \right\|_F^2,
$$

where $H_\Pi$ is defined as above. And we can similarly derive:

$$
\begin{array}{l} \mathcal{M}_{inter}(\Pi, Z) = \left\| H_\Pi Z - \frac{1}{N}\mathbf{1}_{N \times N} Z \right\|_F^2 \\ = \operatorname{Tr}\left(\left(H_\Pi - \frac{1}{N}\mathbf{1}_{N \times N}\right)^2 Z Z^\top\right) \\ = \operatorname{Tr}\left(\left(H_\Pi^2 - \frac{2}{N} H_\Pi \mathbf{1}_{N \times N} + \frac{1}{N^2}\mathbf{1}_{N \times N}^2\right) Z Z^\top\right) \\ = \operatorname{Tr}\left(\left(H_\Pi - \frac{1}{N}\mathbf{1}_{N \times N}\right) Z Z^\top\right). \end{array}
$$

# C.3 K-means Measure Has the Same Order as K-means Error

Theorem C.1. (Recap of Theorem 4.1) We define $\xi_{\pi \to \pi'}$ as the index set of samples that are from class $\pi$ but are closer to $\boldsymbol{\mu}_{\pi'}$ than to $\boldsymbol{\mu}_\pi$ .
In other words, $\xi_{\pi \to \pi'} = \{i : i \in \pi, \|\mathbf{z}_i - \boldsymbol{\mu}_\pi\|_2 \geq \|\mathbf{z}_i - \boldsymbol{\mu}_{\pi'}\|_2\}$. Assuming $|\xi_{\pi \to \pi'}| > 0$, we define below the clustering error ratio from $\pi$ to $\pi'$ as $\mathcal{E}_{\pi \to \pi'}$, and the overall clustering error ratio $\mathcal{E}_{\Pi,Z}$ as the harmonic mean of $\mathcal{E}_{\pi \to \pi'}$ over all class pairs:

$$
\mathcal{E}_{\Pi,Z} = C(C-1) \Big/ \left(\sum_{\substack{\pi \neq \pi' \\ \pi, \pi' \in \Pi}} \frac{1}{\mathcal{E}_{\pi \to \pi'}}\right), \quad \text{where } \mathcal{E}_{\pi \to \pi'} = \frac{|\xi_{\pi \to \pi'}|}{|\pi'| + |\pi|}.
$$

The $K$-means measure $\mathcal{M}_{kms}(\Pi, Z)$ has the same order as the harmonic mean of the clustering error ratios between all cluster pairs:

$$
\mathcal{E}_{\Pi,Z} = O\left(\mathcal{M}_{kms}(\Pi, Z)\right).
$$

Proof. For $i \in \xi_{\pi \to \pi'}$, we have the following inequality:

$$
4\|\mathbf{z}_i - \boldsymbol{\mu}_\pi\|_2^2 \geq 2\|\mathbf{z}_i - \boldsymbol{\mu}_\pi\|_2^2 + 2\|\mathbf{z}_i - \boldsymbol{\mu}_{\pi'}\|_2^2 \geq \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_{\pi'}\|_2^2.
$$

Then we have:

$$
\begin{array}{l} \mathcal{M}_{intra}(\Pi, Z) = \sum_{\pi \in \Pi} \sum_{i \in \pi} \|\mathbf{z}_i - \boldsymbol{\mu}_\pi\|_2^2 \\ \geq \sum_{i \in \pi} \|\mathbf{z}_i - \boldsymbol{\mu}_\pi\|_2^2 \\ \geq \sum_{i \in \xi_{\pi \rightarrow \pi'}} \|\mathbf{z}_i - \boldsymbol{\mu}_\pi\|_2^2 \\ \geq \frac{1}{4} \sum_{i \in \xi_{\pi \rightarrow \pi'}} \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_{\pi'}\|_2^2 \\ = \frac{1}{4} |\xi_{\pi \rightarrow \pi'}| \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_{\pi'}\|_2^2.
\\ \end{array}
$$

Note that the inter-class measure can be decomposed into a sum of cluster-center distances:

$$
\begin{array}{l} \mathcal{M}_{inter}(\Pi, Z) = \sum_{\pi \in \Pi} |\pi| \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_\Pi\|_2^2 \\ = \sum_{\pi \in \Pi} \frac{|\pi|}{N^2} \left\| \left(\sum_{\pi' \in \Pi} |\pi'|\right) \boldsymbol{\mu}_\pi - \sum_{\pi' \in \Pi} |\pi'| \boldsymbol{\mu}_{\pi'} \right\|_2^2 \\ \leq \frac{C}{N^2} \sum_{\pi \in \Pi} |\pi| \sum_{\pi' \in \Pi} |\pi'|^2 \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_{\pi'}\|_2^2 \\ = \frac{C}{N^2} \sum_{\pi \neq \pi'} |\pi| |\pi'| (|\pi'| + |\pi|) \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_{\pi'}\|_2^2, \end{array}
$$

where the inequality uses $\|\sum_{i=1}^{C} a_i\|^2 \leq C \sum_{i=1}^{C} \|a_i\|^2$, and $\sum_{\pi \neq \pi'}$ enumerates all unordered pairs of distinct classes in $\Pi$ .
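The intra- and inter-class quantities manipulated in this proof are the same ones given in matrix form in Section C.2, and those trace identities are easy to check numerically. A minimal numpy sketch (with hypothetical cluster sizes and random features, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 4, 5]                 # hypothetical cluster sizes |pi_1|, |pi_2|, |pi_3|
N, d = sum(sizes), 2
Z = rng.standard_normal((N, d))   # feature matrix, rows ordered by cluster

# Block-diagonal averaging matrix H_Pi (each block is (1/|pi|) * ones).
H = np.zeros((N, N))
i = 0
for s in sizes:
    H[i:i + s, i:i + s] = 1.0 / s
    i += s

# Intra-class: sum of squared distances to cluster means equals the trace form.
means = H @ Z                     # row i holds the mean of i's cluster
M_intra = np.sum((Z - means) ** 2)
assert np.isclose(M_intra, np.trace((np.eye(N) - H) @ Z @ Z.T))

# Inter-class: sum_pi |pi| * ||mu_pi - mu_Pi||^2 equals its trace form.
ones = np.full((N, N), 1.0 / N)   # (1/N) * 1_{N x N}
M_inter = np.sum((means - ones @ Z) ** 2)
assert np.isclose(M_inter, np.trace((H - ones) @ Z @ Z.T))
```

Both identities collapse to a single trace because $H_\Pi$ is symmetric and idempotent, so $(I - H_\Pi)^2 = I - H_\Pi$ and $(H_\Pi - \frac{1}{N}\mathbf{1}_{N\times N})^2 = H_\Pi - \frac{1}{N}\mathbf{1}_{N\times N}$.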
Combining the above, we have:

$$
\begin{array}{l} C(C-1) \Big/ \left(\sum_{\pi \neq \pi'} \frac{(|\pi'| + |\pi|)}{|\xi_{\pi \rightarrow \pi'}|}\right) = C(C-1) \Big/ \left(\sum_{\pi \neq \pi'} \frac{(|\pi'| + |\pi|) \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_{\pi'}\|_2^2}{|\xi_{\pi \rightarrow \pi'}| \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_{\pi'}\|_2^2}\right) \\ \leq C(C-1) \Big/ \left(\sum_{\pi \neq \pi'} \frac{|\pi'| |\pi| (|\pi'| + |\pi|) \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_{\pi'}\|_2^2}{N^2 |\xi_{\pi \rightarrow \pi'}| \|\boldsymbol{\mu}_\pi - \boldsymbol{\mu}_{\pi'}\|_2^2}\right) \\ \leq C(C-1) \Big/ \left(\frac{\mathcal{M}_{inter}(\Pi, Z)}{4C \mathcal{M}_{intra}(\Pi, Z)}\right) \\ = O\left(\mathcal{M}_{kms}(\Pi, Z)\right). \end{array}
$$

# C.4 Proof of Theorem 4.2

![](images/c3bd405216220562c7fc45e1f32c3c277b62c1f65c99aa27c0f3f558f97707e2.jpg)

We start by providing more details to supplement Sec. 4.2.1.

Matrix perturbation by adding labels. Recall that we defined in Eq. 3 the adjacency matrix as the unlabeled one $A^{(u)}$ plus the perturbation coming from the label information $A^{(l)}$:

$$
A = \eta_u A^{(u)} + \eta_l A^{(l)}.
$$

We study the perturbation from two aspects: (1) the direction of the perturbation, which is given by $A^{(l)}$; (2) the perturbation magnitude $\eta_l$. We first consider the perturbation direction $A^{(l)}$, and recall its concrete form defined in Eq. 2:

$$
A_{xx'}^{(l)} = w_{xx'}^{(l)} \triangleq \sum_{i \in \mathcal{Y}_l} \mathbb{E}_{\bar{x}_l \sim \mathcal{P}_{l_i}} \mathbb{E}_{\bar{x}_l' \sim \mathcal{P}_{l_i}} \mathcal{T}(x | \bar{x}_l) \mathcal{T}(x' | \bar{x}_l').
$$

For simplicity, we consider $|\mathcal{Y}_l| = 1$ in this theoretical analysis. We then observe that $A^{(l)}$ is a rank-1 matrix and can be written as

$$
A^{(l)} = \mathfrak{l}\mathfrak{l}^\top,
$$

where $\mathfrak{l} \in \mathbb{R}^{N \times 1}$ with $(\mathfrak{l})_x = \mathbb{E}_{\bar{x}_l \sim \mathcal{P}_{l_1}} \mathcal{T}(x | \bar{x}_l)$. We also define $D_l \triangleq \mathrm{diag}(\mathfrak{l})$.

The perturbation function of the representation. We then consider a more general form of the adjacency matrix:

$$
A(\delta) \triangleq \eta_u A^{(u)} + \delta \mathfrak{l}\mathfrak{l}^\top,
$$

where we treat the adjacency matrix as a function of the "labeling perturbation" degree $\delta$. It is clear that $A(0) = \eta_u A^{(u)}$, the scaled adjacency matrix of the unlabeled case, and that $A(\eta_l) = A$. When we let the adjacency matrix be a function of $\delta$, the normalized form and the derived feature representation are also functions of $\delta$. We proceed by defining these terms.

Without loss of generality, we let $\mathrm{diag}(\mathbf{1}_N^\top A(0)) = I_N$, which means every node in the unlabeled graph has equal degree. We then have:

$$
D(\delta) \triangleq \mathrm{diag}\left(\mathbf{1}_N^\top A(\delta)\right) = I_N + \delta D_l.
$$

The normalized adjacency matrix is given by:

$$
\tilde{A}(\delta) \triangleq D(\delta)^{-\frac{1}{2}} A(\delta) D(\delta)^{-\frac{1}{2}}.
$$

The feature representation $Z(\delta)$ is derived from the top-$k$ SVD components of $\tilde{A}(\delta)$.
Specifically, we have:

$$
Z (\delta) Z (\delta)^{\top} = D (\delta)^{- \frac{1}{2}} \tilde{A}_{k} (\delta) D (\delta)^{- \frac{1}{2}} = D (\delta)^{- \frac{1}{2}} \sum_{j = 1}^{k} \lambda_{j} (\delta) \Phi_{j} (\delta) D (\delta)^{- \frac{1}{2}},
$$

where $\tilde{A}_k(\delta)$ denotes the top- $k$ SVD components of $\tilde{A}(\delta)$ , which can be further written as $\tilde{A}_k(\delta) = \sum_{j=1}^k \lambda_j(\delta) \Phi_j(\delta)$ . Here $\lambda_j(\delta)$ is the $j$ -th singular value and $\Phi_j(\delta)$ is the $j$ -th singular projector $(\Phi_j(\delta) = v_j(\delta) v_j(\delta)^\top)$ defined by the $j$ -th singular vector $v_j(\delta)$ . For brevity, when $\delta = 0$ we drop the argument $(0)$ , since these quantities coincide with their unperturbed counterparts. For example, we let

$$
\tilde{A} (0) = \tilde{A}^{(u)}, Z (0) = Z^{(u)}, \lambda_{i} (0) = \lambda_{i}^{(u)}, v_{i} (0) = v_{i}^{(u)}, \Phi_{i} (0) = \Phi_{i}^{(u)}.
$$

Theorem C.2. (Recap of Th. 4.2) Denote by $V_{\emptyset}^{(u)} \in \mathbb{R}^{N \times (N - k)}$ the null space of $V_{k}^{(u)}$ and by $\tilde{A}_{k}^{(u)} = V_{k}^{(u)}\Sigma_{k}^{(u)}V_{k}^{(u)\top}$ the rank- $k$ approximation of $\tilde{A}^{(u)}$ .
Given $\delta, \eta_1 > 0$ , and letting $\mathcal{G}_k$ be the spectral gap between the $k$ -th and $(k+1)$ -th singular values of $\tilde{A}^{(u)}$ , we have:

$$
\Delta_{kms} (\delta) = \delta \eta_{1} \mathrm{Tr} \left(\Upsilon \left(V_{k}^{(u)} V_{k}^{(u) \top} \mathfrak{l} \mathfrak{l}^{\top} (I + V_{\emptyset}^{(u)} V_{\emptyset}^{(u) \top}) - 2 \tilde{A}_{k}^{(u)} \mathrm{diag} (\mathfrak{l})\right)\right) + O \left(\frac{1}{\mathcal{G}_{k}} + \delta^{2}\right),
$$

where $\mathrm{diag}(\cdot)$ converts the vector to the corresponding diagonal matrix and $\Upsilon \in \mathbb{R}^{N \times N}$ is a matrix encoding the ground-truth clustering structure in the sense that $\Upsilon_{xx'} > 0$ if $x$ and $x'$ have the same label and $\Upsilon_{xx'} < 0$ otherwise.

Proof. As shown in Sec. C.2, we can also write the K-means measure as a function of the perturbation:

$$
\mathcal{M}_{kms} (\delta) = \frac{\mathrm{Tr} ((I - H_{\Pi}) Z (\delta) Z (\delta)^{\top})}{\mathrm{Tr} ((H_{\Pi} - \frac{1}{N} \mathbf{1}_{N \times N}) Z (\delta) Z (\delta)^{\top})}.
$$

The proof is directly given by the following Lemma C.3.

Lemma C.3. Let $\eta_1, \eta_2$ be two real values and $\Upsilon = (1 + \eta_2)H_{\Pi} - I - \frac{\eta_2}{N}\mathbf{1}_N\mathbf{1}_N^\top$ . With the spectral gap $\mathcal{G}_k = \frac{\lambda_k^{(u)}}{\lambda_{k+1}^{(u)}}$ , the derivative of the $K$ -means measure evaluated at $\delta = 0$ is:

$$
\left. \left[ \mathcal{M}_{kms} (\delta) \right]^{\prime} \right|_{\delta = 0} = - \eta_{1} \operatorname{Tr} \left(\Upsilon \left(V_{k}^{(u)} V_{k}^{(u) \top} \mathfrak{l} \mathfrak{l}^{\top} - 2 \tilde{A}_{k}^{(u)} D_{l} + V_{k}^{(u)} V_{k}^{(u) \top} \mathfrak{l} \mathfrak{l}^{\top} V_{\emptyset}^{(u)} V_{\emptyset}^{(u) \top}\right)\right) + O \left(\frac{1}{\mathcal{G}_{k}}\right).
$$

The proof for Lemma C.3 is lengthy; we postpone it to Sec. C.6.

# C.5 Proof of Theorem 4.3

We start by justifying the assumptions made in Theorem 4.3.

Assumption C.4. We assume the spectral gap $\mathcal{G}_k$ is large. Such an assumption is commonly used in theoretical works based on spectral analysis [32, 57].

Assumption C.5. We assume $\mathfrak{l}$ lies in the linear span of $V_{k}^{(u)}$ , i.e., $V_{k}^{(u)}V_{k}^{(u)\top}\mathfrak{l} = \mathfrak{l}$ and $V_{\emptyset}^{(u)\top}\mathfrak{l} = 0$ . The goal of this assumption is to simplify $(V_{k}^{(u)}V_{k}^{(u)\top}\mathfrak{l}\mathfrak{l}^{\top} + V_{k}^{(u)}V_{k}^{(u)\top}\mathfrak{l}\mathfrak{l}^{\top}V_{\emptyset}^{(u)}V_{\emptyset}^{(u)\top})$ to $\mathfrak{l}\mathfrak{l}^{\top}$ .

Assumption C.6. For any $\pi_c \in \Pi$ and all $i, j \in \pi_c$ , $\mathfrak{l}_{(i)} = \mathfrak{l}_{(j)} =: \mathfrak{l}_{\pi_c}$ . Recall that $\mathfrak{l}_{(i)}$ measures the connection between the $i$ -th sample and the labeled data. Here we can view $\mathfrak{l}_{\pi_c}$ as the connection between class $c$ and the labeled data.

Theorem C.7. (Recap of Theorem 4.3.) With Assumptions C.4, C.5 and C.6, and given $\delta, \eta_1, \eta_2 > 0$ , we have:

$$
\Delta_{kms} (\delta) \geq \delta \eta_{1} \eta_{2} \sum_{\pi_{c} \in \Pi} | \pi_{c} | \mathfrak{l}_{\pi_{c}} \Delta_{\pi_{c}} (\delta),
$$

where

$$
\Delta_{\pi_{c}} (\delta) = \left(\mathfrak{l}_{\pi_{c}} - \frac{1}{N}\right) - 2 \left(1 - \frac{| \pi_{c} |}{N}\right) \left(\mathbb{E}_{i \in \pi_{c}} \mathbb{E}_{j \in \pi_{c}} \mathbf{z}_{i}^{\top} \mathbf{z}_{j} - \mathbb{E}_{i \in \pi_{c}} \mathbb{E}_{j \notin \pi_{c}} \mathbf{z}_{i}^{\top} \mathbf{z}_{j}\right).
$$

Proof. The result follows directly from Lemma C.8 after plugging in the definition of $\Delta_{kms}(\delta)$ .

Lemma C.8.
With Assumptions C.4, C.5 and C.6, the derivative of the $K$ -means measure admits the upper bound:

$$
\left. [ \mathcal{M}_{kms} (\delta) ]^{\prime} \right|_{\delta = 0} \leq - \eta_{1} \eta_{2} \sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} \left((\mathfrak{l}_{\pi} - \frac{1}{N}) - 2 (\boldsymbol{\mu}_{\pi}^{\top} \boldsymbol{\mu}_{\pi} - \boldsymbol{\mu}_{\pi}^{\top} \boldsymbol{\mu}_{\Pi})\right).
$$

Proof. By Assumptions C.4, C.5 and C.6 and Theorem 4.2, we have

$$
\begin{array}{l} \frac{1}{\eta_{1}} [ \mathcal{M}_{kms} (\delta) ]^{\prime} \Big|_{\delta = 0} = - \mathrm{Tr} \left(\Upsilon \left(V_{k}^{(u)} V_{k}^{(u) \top} \mathfrak{l} \mathfrak{l}^{\top} - 2 \tilde{A}_{k}^{(u)} D_{l}\right)\right) \\ = - \operatorname{Tr} \left(\Upsilon \left(\mathfrak{l} \mathfrak{l}^{\top} - 2 \tilde{A}_{k}^{(u)} D_{l}\right)\right) \\ = - \operatorname{Tr} \left(\left((1 + \eta_{2}) H_{\Pi} - I - \frac{\eta_{2}}{N} \mathbf{1}_{N} \mathbf{1}_{N}^{\top}\right) \left(\mathfrak{l} \mathfrak{l}^{\top} - 2 \tilde{A}_{k}^{(u)} D_{l}\right)\right) \\ = (1 + \eta_{2}) \mathcal{M}_{H}^{\prime} + \mathcal{M}_{I}^{\prime} + \eta_{2} \mathcal{M}_{\mathbf{1}}^{\prime}, \\ \end{array}
$$

where

$$
\begin{array}{l} \mathcal{M}_{H}^{\prime} = - \operatorname{Tr} \left(H_{\Pi} \left(\mathfrak{l} \mathfrak{l}^{\top} - 2 \tilde{A}_{k}^{(u)} D_{l}\right)\right) \\ = - \sum_{\pi \in \Pi} \left(| \pi | \left(\mathbb{E}_{i \in \pi} \mathfrak{l}_{(i)}\right)^{2} - \frac{2}{| \pi |} \sum_{i \in \pi} \sum_{j \in \pi} \mathfrak{l}_{(i)} \tilde{A}_{k, (i, j)}^{(u)}\right) \\ = - \sum_{\pi \in \Pi} \left(| \pi | \mathfrak{l}_{\pi}^{2} - 2 | \pi | \mathfrak{l}_{\pi} \mathbb{E}_{(i, j) \in \pi \times \pi} \mathbf{z}_{i}^{\top} \mathbf{z}_{j}\right) \\ = - \sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} \left(\mathfrak{l}_{\pi} - 2 \boldsymbol{\mu}_{\pi}^{\top} \boldsymbol{\mu}_{\pi}\right), \\ \end{array}
$$

$$
\begin{array}{l} \mathcal{M}_{I}^{\prime} = \operatorname{Tr} \left(\left(\mathfrak{l} \mathfrak{l}^{\top} - 2 \tilde{A}_{k}^{(u)} D_{l}\right)\right) \\ = \sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} \left(\mathfrak{l}_{\pi} - 2 \mathbb{E}_{i \in \pi} \mathbf{z}_{i}^{\top} \mathbf{z}_{i}\right), \\ \end{array}
$$

and

$$
\begin{array}{l} \mathcal{M}_{\mathbf{1}}^{\prime} = \operatorname{Tr} \left(\frac{1}{N} \mathbf{1}_{N} \mathbf{1}_{N}^{\top} \left(\mathfrak{l} \mathfrak{l}^{\top} - 2 \tilde{A}_{k}^{(u)} D_{l}\right)\right) \\ = \frac{1}{N} - 2 \sum_{\pi \in \Pi} \sum_{i \in \pi} \mathfrak{l}_{(i)} \mathbb{E}_{j \in [N]} \mathbf{z}_{i}^{\top} \mathbf{z}_{j} \\ = \frac{1}{N} - 2 \sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} \boldsymbol{\mu}_{\pi}^{\top} \boldsymbol{\mu}_{\Pi}. \\ \end{array}
$$

We observe that

$$
\begin{array}{l} \mathcal{M}_{I}^{\prime} + \mathcal{M}_{H}^{\prime} = - \sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} \left(\mathfrak{l}_{\pi} - 2 \boldsymbol{\mu}_{\pi}^{\top} \boldsymbol{\mu}_{\pi}\right) + \sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} \left(\mathfrak{l}_{\pi} - 2 \mathbb{E}_{i \in \pi} \mathbf{z}_{i}^{\top} \mathbf{z}_{i}\right) \\ = 2 \sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} \left(\left\| \mathbb{E}_{i \in \pi} \mathbf{z}_{i} \right\|_{2}^{2} - \mathbb{E}_{i \in \pi} \| \mathbf{z}_{i} \|_{2}^{2}\right) \\ \leq 0, \\ \end{array}
$$

where the last inequality is by Jensen's inequality. We then have

$$
\begin{array}{l} \frac{1}{\eta_{1} \eta_{2}} [ \mathcal{M}_{kms} (\delta) ]^{\prime} \bigg|_{\delta = 0} \leq \mathcal{M}_{H}^{\prime} + \mathcal{M}_{\mathbf{1}}^{\prime} \\ = - \sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} \left(\mathfrak{l}_{\pi} - 2 \boldsymbol{\mu}_{\pi}^{\top} \boldsymbol{\mu}_{\pi}\right) + \frac{1}{N} - 2 \sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} \boldsymbol{\mu}_{\pi}^{\top} \boldsymbol{\mu}_{\Pi} \\ = \frac{1}{N} - \sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} \left(\mathfrak{l}_{\pi} - 2 \left(\boldsymbol{\mu}_{\pi}^{\top} \boldsymbol{\mu}_{\pi} - \boldsymbol{\mu}_{\pi}^{\top} \boldsymbol{\mu}_{\Pi}\right)\right) \\ = - \sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} \left((\mathfrak{l}_{\pi} - \frac{1}{N}) - 2 (\boldsymbol{\mu}_{\pi}^{\top} \boldsymbol{\mu}_{\pi} - \boldsymbol{\mu}_{\pi}^{\top} \boldsymbol{\mu}_{\Pi})\right), \\ \end{array}
$$

where the last equality uses $\sum_{\pi \in \Pi} | \pi | \mathfrak{l}_{\pi} = \sum_{x} \mathfrak{l}_{(x)} = 1$ .

# C.6 Proof of Lemma C.3

Notation Recap: We define $\tilde{A}_k(\delta)$ as the top- $k$ SVD components of $\tilde{A}(\delta)$ , which can be further written as $\tilde{A}_k(\delta) = \sum_{j=1}^k \lambda_j(\delta) \Phi_j(\delta)$ . Here $\lambda_j(\delta)$ is the $j$ -th singular value and $\Phi_j(\delta)$ is the $j$ -th singular projector $(\Phi_j(\delta) = v_j(\delta) v_j(\delta)^\top)$ defined by the $j$ -th singular vector $v_j(\delta)$ . For brevity, when $\delta = 0$ we drop the argument $(0)$ , since these quantities coincide with their unperturbed counterparts. For example, we let

$$
\tilde{A} (0) = \tilde{A}^{(u)}, Z (0) = Z^{(u)}, \lambda_{i} (0) = \lambda_{i}^{(u)}, v_{i} (0) = v_{i}^{(u)}, \Phi_{i} (0) = \Phi_{i}^{(u)}.
$$

Proof.
By the derivative rule, we have, + +$$ +\begin{array}{l} \mathcal {M} _ {k m s} ^ {\prime} (\delta) = \frac {1}{\mathcal {M} _ {i n t e r} (\Pi , Z)} \mathcal {M} _ {i n t r a} ^ {\prime} (\delta) - \frac {\mathcal {M} _ {i n t r a} (\Pi , Z)}{\mathcal {M} _ {i n t e r} (\Pi , Z) ^ {2}} \mathcal {M} _ {i n t e r} ^ {\prime} (\delta) \\ = \eta_ {1} \mathcal {M} _ {i n t r a} ^ {\prime} (\delta) - \eta_ {1} \eta_ {2} \mathcal {M} _ {i n t e r} ^ {\prime} (\delta) \\ = \eta_ {1} \left(\mathrm {T r} ((I _ {\Pi} - H _ {\Pi}) [ Z (\delta) Z (\delta) ^ {\top} ] ^ {\prime}) - \eta_ {2} \mathrm {T r} ((H _ {\Pi} - \frac {1}{N} \mathbf {1} _ {N \times N}) [ Z (\delta) Z (\delta) ^ {\top} ] ^ {\prime})\right) \\ = \eta_ {1} \left(\operatorname {T r} \left(\left(I _ {\Pi} + \frac {\eta_ {2}}{N} \mathbf {1} _ {N \times N} - (\eta_ {2} + 1) H _ {\Pi}\right) [ Z (\delta) Z (\delta) ^ {\top} ] ^ {\prime}\right)\right) \\ = - \eta_ {1} \left(\operatorname {T r} \left(\Upsilon \left[ Z (\delta) Z (\delta) ^ {\top} \right] ^ {\prime}\right)\right) \\ = - \eta_ {1} \sum_ {j = 1} ^ {k} \operatorname {T r} (\Upsilon [ D (\delta) ^ {- \frac {1}{2}} \lambda_ {j} (\delta) \Phi_ {j} (\delta) D (\delta) ^ {- \frac {1}{2}} ] ^ {\prime}), \\ \end{array} +$$ + +where we let $\eta_{1} = \frac{1}{\mathcal{M}_{inter}(\Pi,Z)}$ , $\eta_{2} = \frac{\mathcal{M}_{intra}(\Pi,Z)}{\mathcal{M}_{inter}(\Pi,Z)}$ and $\Upsilon = (1 + \eta_2)H_{\Pi} - I_{\Pi} - \frac{\eta_2}{N}\mathbf{1}_N\mathbf{1}_N^\top$ . We proceed by showing the calculation of $[D(\delta)^{-\frac{1}{2}}]^{\prime}$ , $[\lambda_j(\delta)]'$ and $[\Phi_j(\delta)]'$ . + +Since $D(\delta) = I + \delta D_{l}$ , then $[D(\delta)^{-\frac{1}{2}}]^{\prime}\bigg|_{\delta = 0} = -\frac{1}{2} D_{l}$ . To calculate $[\lambda_j(\delta)]'$ and $[\Phi_j(\delta)]'$ , we first need: + +$$ +\begin{array}{l} \left. 
\left[ \tilde {A} (\delta) \right] ^ {\prime} \right| _ {\delta = 0} = \left[ D (\delta) ^ {- \frac {1}{2}} A (\delta) D (\delta) ^ {- \frac {1}{2}} \right] ^ {\prime} \\ = \left[ D (\delta) ^ {- \frac {1}{2}} \right] ^ {\prime} \tilde {A} ^ {(u)} + \left[ A (\delta) \right] ^ {\prime} + \tilde {A} ^ {(u)} \left[ D (\delta) ^ {- \frac {1}{2}} \right] ^ {\prime} \\ = - \frac {1}{2} D _ {l} \tilde {A} ^ {(u)} + \mathfrak {U} ^ {\top} - \frac {1}{2} \tilde {A} ^ {(u)} D _ {l}. \\ \end{array} +$$ + +Then, according to Equation (3) in [20], we have: + +$$ +\begin{array}{l} \left. \left[ \lambda_ {j} (\delta) \right] ^ {\prime} \right| _ {\delta = 0} = \operatorname {T r} \left(\Phi_ {j} ^ {(u)} [ \tilde {A} (\delta) ] ^ {\prime}\right) \\ = \operatorname {T r} \left(\Phi_ {j} ^ {(u)} \left(- \frac {1}{2} D _ {l} \tilde {A} ^ {(u)} + \mathfrak {U} ^ {\top} - \frac {1}{2} \tilde {A} ^ {(u)} D _ {l}\right)\right) \\ = \operatorname {T r} \left(\left(- \frac {\lambda_ {j} ^ {(u)}}{2} D _ {l} \Phi_ {j} ^ {(u)} + \Phi_ {j} ^ {(u)} \mathbb {U} ^ {\top} - \frac {\lambda_ {j} ^ {(u)}}{2} \Phi_ {j} ^ {(u)} D _ {l}\right)\right) \\ = \operatorname {T r} \left(\Phi_ {j} ^ {(u)} \left(\mathfrak {U} ^ {\top} - \lambda_ {j} ^ {(u)} D _ {l}\right)\right). \\ \end{array} +$$ + +According to Equation (10) in [20], we have: + +$$ +\begin{array}{l} \left. 
\left[ \Phi_ {j} (\delta) \right] ^ {\prime} \right| _ {\delta = 0} = \left(\lambda_ {j} ^ {(u)} I _ {N} - \tilde {A} ^ {(u)}\right) ^ {\dagger} [ \tilde {A} (\delta) ] ^ {\prime} \Phi_ {j} ^ {(u)} + \Phi_ {j} ^ {(u)} [ \tilde {A} (\delta) ] ^ {\prime} \left(\lambda_ {j} ^ {(u)} I _ {N} - \tilde {A} ^ {(u)}\right) ^ {\dagger} \\ = \sum_ {i \neq j} ^ {N} \frac {1}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} \left(\Phi_ {i} ^ {(u)} [ \tilde {A} (\delta) ] ^ {\prime} \Phi_ {j} ^ {(u)} + \Phi_ {j} ^ {(u)} [ \tilde {A} (\delta) ] ^ {\prime} \Phi_ {i} ^ {(u)}\right) \\ = \sum_ {i \neq j} ^ {N} \frac {1}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} (\Phi_ {i} ^ {(u)} (- \frac {1}{2} D _ {l} \tilde {A} ^ {(u)} + \mathfrak {U} ^ {\top} - \frac {1}{2} \tilde {A} ^ {(u)} D _ {l}) \Phi_ {j} ^ {(u)} + \Phi_ {j} ^ {(u)} (... \Phi_ {i} ^ {(u)}) \\ = \sum_ {i \neq j} ^ {N} \frac {1}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} (\Phi_ {i} ^ {(u)} (\mathfrak {U} ^ {\top} - \frac {\lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l}) \Phi_ {j} ^ {(u)} + \Phi_ {j} ^ {(u)} (\mathfrak {U} ^ {\top} - \frac {\lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l}) \Phi_ {i} ^ {(u)}). 
\\ \end{array} +$$ + +Now we calculate the derivative of the $K$ -means loss: + +$$ +\begin{array}{l} \frac {1}{\eta_ {1}} \left[ \mathcal {M} _ {k m s} (\delta) \right] ^ {\prime} \Big | _ {\delta = 0} = - \sum_ {j = 1} ^ {k} \left[ \operatorname {T r} \left(\Upsilon D (\delta) ^ {- \frac {1}{2}} \lambda_ {j} (\delta) \Phi_ {j} (\delta) D (\delta) ^ {- \frac {1}{2}}\right) \right] ^ {\prime} \Big | _ {\delta = 0} \\ = - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\Upsilon \left([ D (\delta) ^ {- \frac {1}{2}} ] ^ {\prime} \lambda_ {j} ^ {(u)} \Phi_ {j} ^ {(u)} + \lambda_ {j} ^ {(u)} \Phi_ {j} ^ {(u)} [ D (\delta) ^ {- \frac {1}{2}} ] ^ {\prime} + [ \lambda_ {j} (\delta) ] ^ {\prime} \Phi_ {j} ^ {(u)} + \lambda_ {j} ^ {(u)} [ \Phi_ {j} (\delta) ] ^ {\prime}\right)\right) \\ = \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\Upsilon \left(\frac {\lambda_ {j} ^ {(u)}}{2} D _ {l} \Phi_ {j} ^ {(u)} + \frac {\lambda_ {j} ^ {(u)}}{2} \Phi_ {j} ^ {(u)} D _ {l} - [ \lambda_ {j} (\delta) ] ^ {\prime} \Phi_ {j} ^ {(u)} - \lambda_ {j} ^ {(u)} [ \Phi_ {j} (\delta) ] ^ {\prime}\right)\right) \\ = \mathcal {M} _ {a} ^ {\prime} + \mathcal {M} _ {b} ^ {\prime} + \mathcal {M} _ {c} ^ {\prime}, \\ \end{array} +$$ + +where + +$$ +\mathcal {M} _ {a} ^ {\prime} = \sum_ {j = 1} ^ {k} \frac {\lambda_ {j} ^ {(u)}}{2} \mathrm {T r} \left(\Upsilon \left(D _ {l} \Phi_ {j} ^ {(u)} + \Phi_ {j} ^ {(u)} D _ {l}\right)\right), +$$ + +$$ +\begin{array}{l} \mathcal {M} _ {b} ^ {\prime} = - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\Upsilon [ \lambda_ {j} (\delta) ] ^ {\prime} \Phi_ {j} ^ {(u)}\right) = - \sum_ {j = 1} ^ {k} \operatorname {T r} \left((\mathbb {U} ^ {\top} - \lambda_ {j} ^ {(u)} D _ {l}) \Phi_ {j} ^ {(u)}\right) \operatorname {T r} \left(\Upsilon \Phi_ {j} ^ {(u)}\right) \\ = - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\left(\mathbb {U} ^ {\top} - \lambda_ {j} ^ {(u)} D _ {l}\right) \Phi_ {j} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}\right), \\ \end{array} +$$ + +$$ 
\begin{array}{l} \mathcal{M}_{c}^{\prime} = - \sum_{j = 1}^{k} \operatorname{Tr} \left(\Upsilon \lambda_{j}^{(u)} [ \Phi_{j} (\delta) ]^{\prime}\right) \\ = - \sum_{j = 1}^{k} \operatorname{Tr} \left(\sum_{i \neq j}^{N} \frac{\lambda_{j}^{(u)}}{\lambda_{j}^{(u)} - \lambda_{i}^{(u)}} \left(\Upsilon \Phi_{i}^{(u)} \left(\mathfrak{l} \mathfrak{l}^{\top} - \frac{\lambda_{j}^{(u)} + \lambda_{i}^{(u)}}{2} D_{l}\right) \Phi_{j}^{(u)} + \Upsilon \Phi_{j}^{(u)} \left(\mathfrak{l} \mathfrak{l}^{\top} - \frac{\lambda_{j}^{(u)} + \lambda_{i}^{(u)}}{2} D_{l}\right) \Phi_{i}^{(u)}\right)\right) \\ = - \sum_{j = 1}^{k} \operatorname{Tr} \left(\sum_{i \neq j}^{N} \frac{\lambda_{j}^{(u)}}{\lambda_{j}^{(u)} - \lambda_{i}^{(u)}} \left((\Phi_{j}^{(u)} \Upsilon \Phi_{i}^{(u)} + \Phi_{i}^{(u)} \Upsilon \Phi_{j}^{(u)}) (\mathfrak{l} \mathfrak{l}^{\top} - \frac{\lambda_{j}^{(u)} + \lambda_{i}^{(u)}}{2} D_{l})\right)\right) \\ = - \sum_{j = 1}^{k} \operatorname{Tr} \left(\sum_{i \neq j, i \leq k} \frac{\lambda_{j}^{(u)}}{\lambda_{j}^{(u)} - \lambda_{i}^{(u)}} \left(\left(\Phi_{j}^{(u)} \Upsilon \Phi_{i}^{(u)} + \Phi_{i}^{(u)} \Upsilon \Phi_{j}^{(u)}\right) \left(\mathfrak{l} \mathfrak{l}^{\top} - \frac{\lambda_{j}^{(u)} + \lambda_{i}^{(u)}}{2} D_{l}\right)\right)\right) \\ \left. \right.
- \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i = k + 1} ^ {N} \frac {\lambda_ {j} ^ {(u)}}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} \left((\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}) (\mathfrak {U} ^ {\top} - \frac {\lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l})\right)\right) \\ = - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i < j} \left(\frac {\lambda_ {j} ^ {(u)}}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} + \frac {\lambda_ {i} ^ {(u)}}{\lambda_ {i} ^ {(u)} - \lambda_ {j} ^ {(u)}}\right) \left(\left(\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}\right) \left(\mathbb {U} ^ {\top} - \frac {\lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l}\right)\right)\right) \\ - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i = k + 1} ^ {N} \frac {\lambda_ {j} ^ {(u)}}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} \left((\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}) (\mathbb {U} ^ {\top} - \frac {\lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l})\right)\right) \\ \end{array} +$$ + +$$ +\begin{array}{l} = - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i < j} \left(\left(\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}\right) \left(\mathbb {U} ^ {\top} - \frac {\lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l}\right)\right)\right) \\ \left. \right. 
- \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i = k + 1} ^ {N} \frac {\lambda_ {j} ^ {(u)}}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} \left((\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}) (\mathbb {U} ^ {\top} - \frac {\lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l})\right)\right) \\ = - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i \neq j, i \leq k} \frac {1}{2} \left(\left(\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}\right) \left(\mathfrak {U} ^ {\top} - \frac {\lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l}\right)\right)\right) \\ - \sum_ {j = 1} ^ {k} \mathrm {T r} \left(\sum_ {i = k + 1} ^ {N} \frac {\lambda_ {j} ^ {(u)}}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} \left((\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}) (\mathfrak {U} ^ {\top} - \frac {\lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l})\right)\right). 
\\ \end{array} +$$ + +Thus, we have: + +$$ +\begin{array}{l} \mathcal {M} _ {b} ^ {\prime} + \mathcal {M} _ {c} ^ {\prime} = - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i = 1} ^ {k} \frac {1}{2} \left(\left(\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}\right) \left(\mathrm {I} \Uparrow^ {\top} - \frac {\lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l}\right)\right)\right) \\ - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i = k + 1} ^ {N} \frac {\lambda_ {j} ^ {(u)}}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} \left((\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}) (\mathfrak {U} ^ {\top} - \frac {\lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l})\right)\right), \\ \end{array} +$$ + +$$ +\begin{array}{l} \mathcal {M} _ {a} ^ {\prime} = \sum_ {j = 1} ^ {k} \frac {\lambda_ {j} ^ {(u)}}{2} \operatorname {T r} \left(\Upsilon \left(D _ {l} \Phi_ {j} ^ {(u)} + \Phi_ {j} ^ {(u)} D _ {l}\right)\right) \\ = \sum_ {j = 1} ^ {k} \frac {\lambda_ {j} ^ {(u)}}{2} \operatorname {T r} \left(\left(\Phi_ {j} ^ {(u)} \Upsilon + \Upsilon \Phi_ {j} ^ {(u)}\right) D _ {l}\right) \\ = \sum_ {j = 1} ^ {k} \frac {\lambda_ {j} ^ {(u)}}{2} \operatorname {T r} \left(\left(\Phi_ {j} ^ {(u)} \Upsilon \sum_ {i = 1} ^ {N} \Phi_ {i} ^ {(u)} + \sum_ {i = 1} ^ {N} \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}\right) D _ {l}\right) \\ = \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i = 1} ^ {N} \frac {\lambda_ {j} ^ {(u)}}{2} \left(\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}\right) D _ {l}\right). 
\\ \end{array} +$$ + +Then $\left[\mathcal{M}_{kms - all}(\delta)\right]'\bigg|_{\delta = 0} / \eta_1$ is given by: + +$$ +\begin{array}{l} \mathcal {M} _ {a} ^ {\prime} + \mathcal {M} _ {b} ^ {\prime} + \mathcal {M} _ {c} ^ {\prime} = - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i = 1} ^ {k} \frac {1}{2} \left((\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}) (\mathfrak {U} ^ {\top} - \frac {3 \lambda_ {j} ^ {(u)} + \lambda_ {i} ^ {(u)}}{2} D _ {l})\right)\right) \\ - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i = k + 1} ^ {N} \frac {\lambda_ {j} ^ {(u)}}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} \left((\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}) (\mathfrak {U} ^ {\top} - \lambda_ {j} ^ {(u)} D _ {l})\right)\right) \\ = - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i = 1} ^ {k} \frac {1}{2} \left(\left(\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}\right) \left(\mathfrak {U} ^ {\top} - 2 \lambda_ {j} ^ {(u)} D _ {l}\right)\right)\right) \\ - \sum_ {j = 1} ^ {k} \operatorname {T r} \left(\sum_ {i = k + 1} ^ {N} \frac {\lambda_ {j} ^ {(u)}}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} \left((\Phi_ {j} ^ {(u)} \Upsilon \Phi_ {i} ^ {(u)} + \Phi_ {i} ^ {(u)} \Upsilon \Phi_ {j} ^ {(u)}) (\mathfrak {U} ^ {\top} - \lambda_ {j} ^ {(u)} D _ {l})\right)\right) \\ = - \sum_ {j = 1} ^ {k} \sum_ {i = 1} ^ {k} v _ {i} ^ {(u) \top} \Upsilon v _ {j} ^ {(u)} \cdot v _ {i} ^ {(u) \top} \left(\mathbb {U} ^ {\top} - 2 \lambda_ {j} ^ {(u)} D _ {l}\right) v _ {j} ^ {(u)} \\ - \sum_ {j = 1} ^ {k} \sum_ {i = k + 1} ^ {N} \frac {2 \lambda_ {j} ^ {(u)}}{\lambda_ {j} ^ {(u)} - \lambda_ {i} ^ {(u)}} v _ {i} ^ {(u) \top} \Upsilon v _ {j} ^ {(u)} \cdot v _ {i} ^ {(u) \top} (\mathfrak {U} ^ {\top} - \lambda_ {j} ^ {(u)} D _ {l}) v _ {j} ^ {(u)}. 
\\ \end{array} +$$ + +We can represent $\frac{\lambda_j^{(u)}}{\lambda_j^{(u)} - \lambda_i^{(u)}} = 1 + \sum_{p=1}^{\infty} \left( \frac{\lambda_i^{(u)}}{\lambda_j^{(u)}} \right)^p$ . Denote the residual term as: + +$$ +\mathcal {M} _ {e} ^ {\prime} = - \sum_ {j = 1} ^ {k} \sum_ {i = k + 1} ^ {N} \sum_ {p = 1} ^ {\infty} 2 (\frac {\lambda_ {i} ^ {(u)}}{\lambda_ {j} ^ {(u)}}) ^ {p} v _ {i} ^ {(u) \top} \Upsilon v _ {j} ^ {(u)} \cdot v _ {i} ^ {(u) \top} (\mathbb {U} ^ {\top} - \lambda_ {j} ^ {(u)} D _ {l}) v _ {j} ^ {(u)} = O (\frac {1}{\mathcal {G} _ {k}}). +$$ + +We then have: + +$$ +\begin{array}{l} \left. \frac {1}{\eta_ {1}} \left[ \mathcal {M} _ {k m s - a l l} (\delta) \right] ^ {\prime} \right| _ {\delta = 0} \\ = - \operatorname {T r} \left(V _ {k} ^ {(u) \top} \Upsilon V _ {k} ^ {(u)} \cdot V _ {k} ^ {(u) \top} \mathfrak {U} ^ {\top} V _ {k} ^ {(u)}\right) + 2 \operatorname {T r} \left(V _ {k} ^ {(u) \top} \Upsilon V _ {k} ^ {(u)} \cdot \Sigma_ {k} ^ {(u)} V _ {k} ^ {(u) \top} D _ {l} V _ {k} ^ {(u)}\right) \\ - 2 \operatorname {T r} (V _ {\varnothing} ^ {(u) \top} \Upsilon V _ {k} ^ {(u)} \cdot V _ {k} ^ {(u) \top} \mathbb {1} ^ {\top} V _ {\varnothing} ^ {(u)}) + 2 \operatorname {T r} (V _ {\varnothing} ^ {(u) \top} \Upsilon V _ {k} ^ {(u)} \cdot \Sigma_ {k} ^ {(u)} V _ {k} ^ {(u) \top} D _ {l} V _ {\varnothing} ^ {(u)}) + \mathcal {M} _ {e} ^ {\prime} \\ = - \operatorname {T r} \left(\Upsilon V _ {k} ^ {(u)} V _ {k} ^ {(u) \top} \mathfrak {U} ^ {\top} V _ {k} ^ {(u)} V _ {k} ^ {(u) \top}\right) + 2 \operatorname {T r} \left(\Upsilon \tilde {A} _ {k} ^ {(u)} D _ {l} V _ {k} ^ {(u)} V _ {k} ^ {(u) \top}\right) \\ - 2 \operatorname {T r} (\Upsilon V _ {k} ^ {(u)} V _ {k} ^ {(u) \top} \mathfrak {U} ^ {\top} (I _ {N} - V _ {k} ^ {(u)} V _ {k} ^ {(u) \top})) + 2 \operatorname {T r} (\Upsilon \tilde {A} _ {k} ^ {(u)} D _ {l} (I _ {N} - V _ {k} ^ {(u)} V _ {k} ^ {(u) \top})) + \mathcal {M} _ {e} ^ {\prime} \\ = - 2 \operatorname {T r} \left(\Upsilon V _ {k} ^ 
{(u)} V _ {k} ^ {(u) \top} \mathfrak {U} ^ {\top}\right) + 2 \operatorname {T r} \left(\Upsilon \tilde {A} _ {k} ^ {(u)} D _ {l}\right) + \operatorname {T r} \left(\Upsilon V _ {k} ^ {(u)} V _ {k} ^ {(u) \top} \mathfrak {U} ^ {\top} V _ {k} ^ {(u)} V _ {k} ^ {(u) \top}\right) + \mathcal {M} _ {e} ^ {\prime} \\ = - 2 \operatorname {T r} \left(\Upsilon \left(V _ {k} ^ {(u)} V _ {k} ^ {(u) \top} \mathfrak {U} ^ {\top} - \tilde {A} _ {k} ^ {(u)} D _ {l} - \frac {1}{2} V _ {k} ^ {(u)} V _ {k} ^ {(u) \top} \mathfrak {U} ^ {\top} V _ {k} ^ {(u)} V _ {k} ^ {(u) \top}\right)\right) + \mathcal {M} _ {e} ^ {\prime} \\ = - \operatorname {T r} \left(\Upsilon \left(V _ {k} ^ {(u)} V _ {k} ^ {(u) ^ {\top}} \mathfrak {U} ^ {\top} - 2 \tilde {A} _ {k} ^ {(u)} D _ {l} + V _ {k} ^ {(u)} V _ {k} ^ {(u) ^ {\top}} \mathfrak {U} ^ {\top} V _ {\emptyset} V _ {\emptyset} ^ {\top}\right)\right) + O (\frac {1}{\mathcal {G} _ {k}}). \\ \end{array} +$$ + +# D Analysis on Other Contrastive Losses + +In this section, we discuss the extension of our graphic-theoretic analysis to one of the most common contrastive loss functions – SimCLR [11]. SimCLR loss is an extended version of InfoNCE loss [68] that achieves great empirical success and inspires a proliferation of follow-up works [5, 8, 12, 26, 34, 69, 77]. Specifically, SupCon [34] extends SimCLR to the supervised setting. GCD [69] and OpenCon [63] further leverage the SupCon and SimCLR losses, and are tailored to the open-world representation learning setting considering both labeled and unlabeled data. 
At a high level, we consider a general form of the SimCLR loss and its extensions (including SupCon, GCD, OpenCon) as:

$$
\mathcal{L}_{\mathrm{gnl}} (f; \mathcal{P}_{+}) = - \frac{1}{\tau} \underset{(x, x^{+}) \sim \mathcal{P}_{+}}{\mathbb{E}} [ f (x)^{\top} f (x^{+}) ] + \underset{x \sim \mathcal{P}}{\mathbb{E}} \left[ \log \left(\underset{x^{\prime} \sim \mathcal{P}}{\mathbb{E}} e^{f \left(x^{\prime}\right)^{\top} f (x) / \tau}\right) \right], \tag{15}
$$

where we let $\mathcal{P}_{+}$ be the distribution of positive pairs defined in Section 3.1. In SimCLR [11], the positive pairs are purely sampled in the unlabeled case $(u)$ while SupCon [34] considers the labeled case $(l)$ . With both labeled and unlabeled data, GCD [69] and OpenCon [63] sample positive pairs in both cases.

In this section, we investigate an alternative form that eases the theoretical analysis (also applied in [72]):

$$
\begin{array}{l} \widehat{\mathcal{L}}_{\mathrm{gnl}} (f; \mathcal{P}_{+}) = - \frac{1}{\tau} \underset{(x, x^{+}) \sim \mathcal{P}_{+}}{\mathbb{E}} \left[ f (x)^{\top} f \left(x^{+}\right) \right] + \log \left(\underset{\substack{x, x^{\prime} \sim \mathcal{P} \\ x \neq x^{\prime}}}{\mathbb{E}} e^{f \left(x^{\prime}\right)^{\top} f (x) / \tau}\right) \quad (16) \\ \geq \mathcal{L}_{\mathrm{gnl}} (f; \mathcal{P}_{+}), \quad (17) \\ \end{array}
$$

which serves as an upper bound of $\mathcal{L}_{\mathrm{gnl}}(f;\mathcal{P}_{+})$ according to Jensen's inequality.

A graph-theoretic view. Recall that in Section 3.1 we define the graph $G(\mathcal{X}, w)$ with vertex set $\mathcal{X}$ and edge weights $w$ .
Each entry of the adjacency matrix $A$ is given by $w_{xx'}$ , the marginal probability of generating the pair $(x, x')$ for any two augmented data $x, x' \in \mathcal{X}$ :

$$
w_{x x^{\prime}} = \eta_{u} w_{x x^{\prime}}^{(u)} + \eta_{l} w_{x x^{\prime}}^{(l)},
$$

and $w_{x}$ measures the degree of node $x$ :

$$
w_{x} = \sum_{x^{\prime}} w_{x x^{\prime}}.
$$

One can view the difference between SimCLR and its variants in the following way: (1) SimCLR [11] corresponds to $\eta_{l} = 0$ , where there is no labeled case; (2) SupCon [34] corresponds to $\eta_{u} = 0$ , where only the labeled case is considered; (3) GCD [69] and OpenCon [63] correspond to the case where $\eta_{u},\eta_{l}$ are both non-zero due to the availability of both labeled and unlabeled data.

With the defined marginal probability of sampling positive pairs $w_{xx'}$ and the marginal probability of sampling a single sample $w_x$ , we have:

$$
\begin{array}{l} \widehat{\mathcal{L}}_{\mathrm{gnl}}\left(Z; G(\mathcal{X},w)\right) = -\frac{1}{\tau}\sum_{x,x^{\prime}\in \mathcal{X}} w_{xx^{\prime}} f(x)^{\top} f\left(x^{\prime}\right) + \log \left(\sum_{\substack{x,x^{\prime}\in \mathcal{X}\\ x\neq x^{\prime}}} w_{x} w_{x^{\prime}} e^{f(x^{\prime})^{\top} f(x) / \tau}\right) \\ = - \frac{1}{\tau} \operatorname{Tr} (Z^{\top} A Z) + \log \operatorname{Tr} \left((D \mathbf{1}_{N} \mathbf{1}_{N}^{\top} D - D^{2}) \exp (\frac{1}{\tau} Z Z^{\top})\right).
\\ \end{array}
$$

When $\tau$ is large:

$$
\begin{array}{l} \widehat{\mathcal{L}}_{\mathrm{gnl}} \left(Z; G (\mathcal{X}, w)\right) \approx - \frac{1}{\tau} \operatorname{Tr} (Z^{\top} A Z) + \log \operatorname{Tr} \left((D \mathbf{1}_{N} \mathbf{1}_{N}^{\top} D - D^{2}) (\mathbf{1}_{N} \mathbf{1}_{N}^{\top} + \frac{1}{\tau} Z Z^{\top})\right) \\ = - \frac{1}{\tau} \operatorname{Tr} \left(Z^{\top} A Z\right) + \log \left(1 + \frac{\frac{1}{\tau} \operatorname{Tr} \left(Z^{\top} \left(D \mathbf{1}_{N} \mathbf{1}_{N}^{\top} D - D^{2}\right) Z\right)}{\operatorname{Tr} (D)^{2} - \operatorname{Tr} \left(D^{2}\right)}\right) + \text{const} \\ \approx - \frac{1}{\tau} \operatorname{Tr} \left(Z^{\top} A Z\right) + \frac{\frac{1}{\tau} \operatorname{Tr} \left(Z^{\top} \left(D \mathbf{1}_{N} \mathbf{1}_{N}^{\top} D - D^{2}\right) Z\right)}{\operatorname{Tr} (D)^{2} - \operatorname{Tr} \left(D^{2}\right)} + \text{const} \\ = - \frac{1}{\tau} \operatorname{Tr} \left(Z^{\top} \left(A - \frac{D \mathbf{1}_{N} \mathbf{1}_{N}^{\top} D - D^{2}}{\operatorname{Tr} (D)^{2} - \operatorname{Tr} (D^{2})}\right) Z\right) + \text{const}. \\ \end{array}
$$

If we further impose the constraint $Z^{\top}Z = I$ , minimizing $\widehat{\mathcal{L}}_{\mathrm{gnl}}(Z;G(\mathcal{X},w))$ boils down to an eigenvalue problem: $Z$ is formed by the top- $k$ eigenvectors of the matrix $A - \frac{D \mathbf{1}_{N} \mathbf{1}_{N}^{\top} D - D^{2}}{\operatorname{Tr}(D)^{2} - \operatorname{Tr}(D^{2})}$ . Recall that our main analysis for Theorem 4.2 and Theorem 4.3 is based on the insight that the feature space is formed by the top- $k$ eigenvectors of the normalized adjacency matrix $D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ . Viewed in this light, the same analysis could be applied to the SimCLR loss as well, differing only in the concrete matrix form. We do not include the details in this paper but leave them as future work.
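To make the two eigenvalue problems concrete, they can be instantiated numerically. The sketch below is illustrative only: the random weight matrix, the sizes `N, k`, and all variable names are our own assumptions, and we use a symmetric eigendecomposition in place of the SVD (the matrices involved are symmetric).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric weight matrix standing in for the pair probabilities w_{xx'}.
N, k = 12, 3
W = rng.random((N, N))
A = (W + W.T) / 2.0
d = A.sum(axis=1)                 # node degrees w_x
D = np.diag(d)

def top_k_eigvecs(M, k):
    """Eigenvectors of a symmetric matrix M for its k largest eigenvalues."""
    _, vecs = np.linalg.eigh(M)   # eigenvalues returned in ascending order
    return vecs[:, ::-1][:, :k]

# Embedding from the normalized adjacency matrix D^{-1/2} A D^{-1/2}.
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
Z_spectral = top_k_eigvecs(D_inv_sqrt @ A @ D_inv_sqrt, k)

# Embedding from the large-tau surrogate matrix
# A - (D 1 1^T D - D^2) / (Tr(D)^2 - Tr(D^2)).
ones = np.ones((N, N))
M_simclr = A - (D @ ones @ D - D @ D) / (np.trace(D) ** 2 - np.trace(D @ D))
Z_simclr = top_k_eigvecs(M_simclr, k)

# Both embeddings satisfy the orthonormality constraint Z^T Z = I.
assert np.allclose(Z_spectral.T @ Z_spectral, np.eye(k))
assert np.allclose(Z_simclr.T @ Z_simclr, np.eye(k))
```

The two pipelines differ only in which symmetric matrix is eigendecomposed, mirroring the claim that the analysis carries over with a different concrete matrix form.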
# E Additional Experiments Details

# E.1 Experimental Details of Toy Example

Recap of setup. In Section 4.1 we consider a toy example that helps illustrate the core idea of our theoretical findings. Specifically, the example aims to cluster 3D objects of different colors and shapes, generated by a 3D rendering software [31] with user-defined properties including color, shape, size, position, etc. Suppose the training samples come from three shapes, $\mathcal{X}_{\square}, \mathcal{X}_{\bigcirc}, \mathcal{X}_{\bigtriangledown}$. Let $\mathcal{X}_{\square}$ be the sample space with the known class, and $\mathcal{X}_{\bigcirc}, \mathcal{X}_{\bigtriangledown}$ be the sample spaces with novel classes. Further, the two novel classes are constructed to have different relationships with the known class. Specifically, the toy dataset contains elements of 5 unique types:

$$
\mathcal{X} = \mathcal{X}_{\square} \cup \mathcal{X}_{\bigcirc} \cup \mathcal{X}_{\bigtriangledown},
$$

where

$$
\mathcal{X}_{\square} = \{x_{\square}, x_{\square}\},
$$

$$
\mathcal{X}_{\bigcirc} = \{x_{\bigcirc}, x_{\bigcirc}\},
$$

$$
\mathcal{X}_{\bigtriangledown} = \{x_{\bigtriangledown}\}.
$$

Experimental details for Figure 3(b). We rendered 2500 samples for each type of data. In total, we have 12500 samples. For the known class $\mathcal{X}_{\square}$, we randomly select $50\%$ as labeled data and treat the rest as unlabeled. For training, we use the same data augmentation strategy as in SimSiam [12]. We use ResNet-18 and train the model for 40 epochs (sufficient for convergence) with a fixed learning rate of 0.005, using SORL defined in Eq. (6). We set $\eta_l = 0.04$ and $\eta_u = 1$. Our visualization is produced with a PyTorch implementation of UMAP [43], with parameters (n_neighbors=30, min_dist=1.5, spread=2, metric=euclidean).

# E.2 Experimental Details for Benchmarks

Hardware and software.
We run all experiments with Python 3.7 and PyTorch 1.7.1, using NVIDIA GeForce RTX 2080Ti and A6000 GPUs. + +Training settings. For a fair comparison, we use ResNet-18 [25] as the backbone for all methods. Similar to [7], we pre-train the backbone using the unsupervised Spectral Contrastive Learning [23] for 1200 epochs. The configuration for the pre-training stage is consistent with [23]. Note that the pre-training stage does not incorporate any label information. At the training stage, we follow the same practice in [7, 63], and train our model $f(\cdot)$ by only updating the parameters of the last block of ResNet. In addition, we add a trainable two-layer MLP projection head that projects the feature from the penultimate layer to an embedding space $\mathbb{R}^k$ ( $k = 1000$ ). We use the same data augmentation strategies as SimSiam [12, 23]. For CIFAR-10, we set $\eta_l = 0.25$ , $\eta_u = 1$ with training epoch 100, and we evaluate using features extracted from the layer preceding the projection. For CIFAR-100, we set $\eta_l = 0.0225$ , $\eta_u = 3$ with 400 training epochs and assess based on the projection layer's features. We use SGD with momentum 0.9 as an optimizer with cosine annealing (lr=0.05), weight decay 5e-4, and batch size 512. + +Evaluation settings. At the inference stage, we evaluate the performance in a transductive manner (evaluate on $\mathcal{D}_u$ ). We run a semi-supervised K-means algorithm as proposed in [69]. We follow the evaluation strategy in [7] and report the following metrics: (1) classification accuracy on known classes, (2) clustering accuracy on the novel data, and (3) overall accuracy on all classes. The accuracy of the novel classes is measured by solving an optimal assignment problem using the Hungarian algorithm [36]. When reporting accuracy on all classes, we solve optimal assignments using both known and novel classes. 
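For reference, the cosine-annealed learning-rate schedule used with SGD above can be sketched with the standard library alone (our own sketch; we assume the annealing horizon equals the number of training epochs, and in practice PyTorch's CosineAnnealingLR plays this role):

```python
import math

def cosine_annealing_lr(t, t_max, lr_max=0.05, lr_min=0.0):
    """Cosine annealing: start at lr_max and decay smoothly to lr_min at t = t_max."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / t_max))

# Schedule over 100 epochs, matching the lr=0.05 setting above (horizon assumed = #epochs).
schedule = [cosine_annealing_lr(t, 100) for t in range(101)]
assert schedule[0] == 0.05                  # starts at the base learning rate
assert abs(schedule[50] - 0.025) < 1e-9     # halfway point: lr_max / 2
assert schedule[100] < 1e-9                 # anneals to (essentially) zero
```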
\ No newline at end of file diff --git a/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/images.zip b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..26d2414332c9fe6e910408589cc6298d85a0864c --- /dev/null +++ b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e08df0202e662eb0fc0b85bf7cad7478faf261d5d85da50972779a6376002da +size 2005412 diff --git a/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/layout.json b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c4bc54ee567393b5dc050f877249329fe20bdc90 --- /dev/null +++ b/agraphtheoreticframeworkforunderstandingopenworldsemisupervisedlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb0e75f1c4f0566c3aa40c9b20485d14f9e6c003bef4c19e4521007d6947e37d +size 1218200 diff --git a/aguidethroughthezooofbiasedsgd/befb4b32-0f4b-4e8c-b8de-221047e33696_content_list.json b/aguidethroughthezooofbiasedsgd/befb4b32-0f4b-4e8c-b8de-221047e33696_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ee14490c8524e75cf42520699b960917e039a4c8 --- /dev/null +++ b/aguidethroughthezooofbiasedsgd/befb4b32-0f4b-4e8c-b8de-221047e33696_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8bc56dd24594e87086358848c07d816b674a34005d209d6ff4e876834a3809e6 +size 87184 diff --git a/aguidethroughthezooofbiasedsgd/befb4b32-0f4b-4e8c-b8de-221047e33696_model.json b/aguidethroughthezooofbiasedsgd/befb4b32-0f4b-4e8c-b8de-221047e33696_model.json new file mode 100644 index 0000000000000000000000000000000000000000..764b06eb612b38c452f62b951f4f2fcdd983d654 --- /dev/null +++ 
b/aguidethroughthezooofbiasedsgd/befb4b32-0f4b-4e8c-b8de-221047e33696_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8abead0b8230fc49215ae110cd41db1b8eb70ab50a21f2051f302908ef04be63 +size 109729 diff --git a/aguidethroughthezooofbiasedsgd/befb4b32-0f4b-4e8c-b8de-221047e33696_origin.pdf b/aguidethroughthezooofbiasedsgd/befb4b32-0f4b-4e8c-b8de-221047e33696_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..53f88edd60305a8cacbd4f04ae203f02958ec7a3 --- /dev/null +++ b/aguidethroughthezooofbiasedsgd/befb4b32-0f4b-4e8c-b8de-221047e33696_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03fc668c2d86bed0e4b9a4da1b004df9d3dd0bd055ab9e75288dd2ed6e1657fc +size 583814 diff --git a/aguidethroughthezooofbiasedsgd/full.md b/aguidethroughthezooofbiasedsgd/full.md new file mode 100644 index 0000000000000000000000000000000000000000..5c5b82f02a7cdfcff04ad62b8930f73f77a6f5cb --- /dev/null +++ b/aguidethroughthezooofbiasedsgd/full.md @@ -0,0 +1,426 @@ +# A Guide Through the Zoo of Biased SGD

Yury Demidovich

AI Initiative, KAUST

yury.demidovich@kaust.edu.sa

Grigory Malinovsky

AI Initiative, KAUST

grigorii.malinovskii@kaust.edu.sa

Igor Sokolov

AI Initiative, KAUST

igor.sokolov.1@kaust.edu.sa

Peter Richtárik

AI Initiative, KAUST

peter.richtarik@kaust.edu.sa

# Abstract

Stochastic Gradient Descent (SGD) is arguably the most important single algorithm in modern machine learning. Although SGD with unbiased gradient estimators has been studied extensively over at least half a century, SGD variants relying on biased estimators are rare. Nevertheless, there has been an increased interest in this topic in recent years. However, existing literature on SGD with biased estimators (BiasedSGD) lacks coherence since each new paper relies on a different set of assumptions, without any clear understanding of how they are connected, which may lead to confusion.
We address this gap by establishing connections among the existing assumptions, and presenting a comprehensive map of the underlying relationships. Additionally, we introduce a new set of assumptions that is provably weaker than all previous assumptions, and use it to present a thorough analysis of BiasedSGD in both convex and non-convex settings, offering advantages over previous results. We also provide examples where biased estimators outperform their unbiased counterparts or where unbiased versions are simply not available. Finally, we demonstrate the effectiveness of our framework through experimental results that validate our theoretical findings.

# 1 Introduction

Stochastic Gradient Descent (SGD) [Robbins and Monro, 1951] is a widely used and effective algorithm for training various models in machine learning. The current state-of-the-art methods for training deep learning models are all variants of SGD [Goodfellow et al., 2016; Sun, 2020]. The algorithm has been extensively studied in recent theoretical works [Bottou et al., 2018; Gower et al., 2019; Khaled and Richtárik, 2023]. In practice and theory, SGD with unbiased gradient oracles is mostly used. However, there has been a recent surge of interest in SGD with biased gradient oracles, which has been studied in several papers and applied in different domains.

In distributed parallel optimization, where data is partitioned across multiple nodes, communication can be a bottleneck, and techniques such as structured sparsity [Alistarh et al., 2018; Wangni et al., 2018] or asynchronous updates [Niu et al., 2011] are employed to reduce communication costs. Nonetheless, sparsified or delayed SGD updates are no longer unbiased and require additional analysis [Stich and Karimireddy, 2020; Beznosikov et al., 2020].
Zeroth-order methods are often utilized when there is no access to unbiased gradients, e.g., for optimization of black-box functions [Nesterov and Spokoiny, 2017], or for finding adversarial examples in deep learning [Moosavi-Dezfooli et al., 2017; Chen et al., 2017]. Many zeroth-order training methods exploit biased gradient oracles [Nesterov and Spokoiny, 2017; Liu et al., 2018; Bergou et al., 2020; Boucherouite et al., 2022]. Various other techniques, such as smoothing, proximal updates, and preconditioning, operate with inexact gradient estimators [d'Aspremont, 2008; Schmidt et al., 2011; Devolder et al., 2014; Tappenden et al., 2016; Karimireddy et al., 2018].

The aforementioned applications illustrate that SGD can converge even if it performs biased gradient updates, provided that certain "regularity" conditions are satisfied by the corresponding gradient estimators [Bottou et al., 2018; Ajalloeian and Stich, 2020; Beznosikov et al., 2020; Condat et al., 2022]. Moreover, biased estimators may show better performance than their unbiased equivalents in certain settings [Beznosikov et al., 2020].

In this work we study convergence properties and worst-case complexity bounds of stochastic gradient descent (SGD) with a biased gradient estimator (BiasedSGD; see Algorithm 1) for solving general optimization problems of the form

$$
\min _ {x \in \mathbb {R} ^ {d}} f (x),
$$

where the function $f: \mathbb{R}^d \to \mathbb{R}$ is possibly nonconvex and satisfies several smoothness and regularity conditions.

Assumption 0 Function $f$ is differentiable, $L$ -smooth (i.e., $\| \nabla f(x) - \nabla f(y) \| \leq L \| x - y \|$ for all $x, y \in \mathbb{R}^d$ ), and bounded from below by $f^{*} \in \mathbb{R}$ .

We write $g(x)$ for the gradient estimator, which, in general, is biased (i.e., $\mathbb{E}[g(x)]$ is not equal to $\nabla f(x)$ ; $\mathbb{E}[\cdot]$ stands for the expectation with respect to the randomness of the algorithm).
By a gradient estimator we mean a (possibly random) mapping $g: \mathbb{R}^d \to \mathbb{R}^d$ with some constraints. We denote by $\gamma$ an appropriately chosen learning rate, and $x^0 \in \mathbb{R}^d$ is a starting point of the algorithm.

Algorithm 1 Biased Stochastic Gradient Descent (BiasedSGD)

Input: initial point $x^0 \in \mathbb{R}^d$ ; learning rate $\gamma > 0$
1: for $t = 0, 1, 2, \ldots$ do
2: Construct a (possibly biased) estimator $g^{t} \stackrel{\mathrm{def}}{=} g(x^{t})$ of the gradient $\nabla f(x^t)$
3: Compute $x^{t+1} = x^t - \gamma g^t$
4: end for

In the strongly convex case, $f$ has a unique global minimizer, which we denote by $x^{*}$ , and $f(x^{*}) = f^{*}$ . In the nonconvex case, $f$ can have many local minima and/or saddle points. It is theoretically intractable to solve this problem to global optimality [Nemirovsky and Yudin, 1983]. Depending on the assumptions on $f$ , and given some error tolerance $\varepsilon > 0$ , we will seek to find a random vector $x \in \mathbb{R}^d$ such that one of the following inequalities holds: i) $\mathbb{E}\left[f(x) - f^{*}\right] \leq \varepsilon$ (convergence in function values); ii) $\mathbb{E}\|x - x^{*}\|^{2} \leq \varepsilon\left\|x^{0} - x^{*}\right\|^{2}$ (iterate convergence); iii) $\mathbb{E}\|\nabla f(x)\|^{2} \leq \varepsilon^{2}$ (gradient norm convergence).

# 2 Sources of bias

Practical applications of SGD typically involve the training of supervised machine learning models via empirical risk minimization [Shalev-Shwartz and Ben-David, 2014], which leads to optimization problems of a finite-sum structure:

$$
f (x) = \frac {1}{n} \sum_ {i = 1} ^ {n} f _ {i} (x). \tag {1}
$$

In the single-machine setup, $n$ is the number of data points and $f_{i}(x)$ represents the loss of a model $x$ on data point $i$ . In this setting, data access is expensive, so $g(x)$ is usually constructed with subsampling techniques such as minibatching and importance sampling.
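Algorithm 1 above is short enough to state directly in code. The sketch below is our own toy illustration (not the authors' implementation): we run BiasedSGD on $f(x) = \frac{1}{2}\|x\|^2$ with a hypothetical deterministic biased oracle $g(x) = 0.9\,\nabla f(x)$, and the iterates still converge.

```python
import numpy as np

def biased_sgd(g, x0, gamma, T):
    """Algorithm 1: x^{t+1} = x^t - gamma * g(x^t) for a (possibly biased) estimator g."""
    x = np.asarray(x0, dtype=float)
    for _ in range(T):
        x = x - gamma * g(x)
    return x

# Toy problem f(x) = 0.5 * ||x||^2, so grad f(x) = x and x* = 0.
grad_f = lambda x: x
# Hypothetical deterministic biased oracle: shrinks the true gradient by 10%.
g = lambda x: 0.9 * grad_f(x)

x = biased_sgd(g, x0=np.ones(4), gamma=0.1, T=500)
assert np.linalg.norm(x) < 1e-6  # converges to x* despite the bias
```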
Generally, a subset $S \subseteq [n]$ of examples is chosen, and subsequently $g(x)$ is assembled from the information stored in the gradients $\nabla f_{i}(x)$ for $i \in S$ only. This leads to estimators of the form $g(x) = \sum_{i \in S} v_{i} \nabla f_{i}(x)$ , where $v_{i}$ are random variables typically designed to ensure unbiasedness [Gower et al., 2019]. In practice, points might be sampled with unknown probabilities. In this scenario, a reasonable strategy to estimate the gradient is to take an average of all sampled $\nabla f_{i}$ . In general, the estimator obtained is biased, and such sources of bias can be characterized as arising from a lack of information about the subsampling strategy.

![](images/ce0506bfc0670d8d6de11450655bff20440537fc978ff0824507acfab6d2ed92.jpg)
Figure 1: Assumption hierarchy. A single arrow indicates an implication and an absence of a reverse implication. The implications are transitive. A dashed line indicates a mutual absence of implications. Our newly proposed assumption Biased ABC is the most general one.

In the distributed setting, $n$ represents the number of machines, and each $f_{i}$ represents the loss of model $x$ on all the training data stored on machine $i$ . Since communication is typically very expensive, modern gradient-type methods rely on various gradient compression mechanisms that are usually randomized. Given an appropriately chosen compression map $\mathcal{C} : \mathbb{R}^d \to \mathbb{R}^d$ , the local gradients $\nabla f_{i}(x)$ are first compressed to $\mathcal{C}_{i}(\nabla f_{i}(x))$ , where $\mathcal{C}_{i}$ is an independent realization of $\mathcal{C}$ sampled by machine $i$ in each iteration, and subsequently communicated to the master node, which performs aggregation (typically averaging). This gives rise to SGD with the gradient estimator of the form

$$
g (x) = \frac {1}{n} \sum_ {i = 1} ^ {n} \mathcal {C} _ {i} (\nabla f _ {i} (x)).
\tag {2}
$$

Many important compressors performing well in practice are of a biased nature (e.g., Top- $k$ , see Def. 3), which, in general, makes $g(x)$ biased as well.

Biased estimators are capable of absorbing useful information in certain settings, e.g., in the heterogeneous data regime. Unbiased estimators have to be random, otherwise they are equal to the identity mapping. However, greedy deterministic gradient estimators such as Top- $k$ often lead to better practical performance. In [Beznosikov et al., 2020, Section 4] the authors show an advantage of the Top- $k$ compressor over its randomized counterpart Rand- $k$ when the coordinates of the vector that we wish to compress are distributed uniformly or exponentially. In practice, deterministic biased compressors are widely used for low-precision training, and exhibit great performance [Alistarh et al., 2018; Beznosikov et al., 2020].

# 3 Contributions

The most commonly used assumptions for analyzing SGD with biased estimators take the form of various structured bounds on the first and the second moments of $g(x)$ . We argue that assumptions proposed in the literature are often too strong, and may be unrealistic as they do not fully capture how bias and randomness in $g(x)$ arise in practice. In order to retrieve meaningful theoretical insights into the operation of BiasedSGD, it is important to model the bias and randomness both correctly, so that the assumptions we impose are provably satisfied, and accurately, so as to obtain as tight bounds as possible. Our work is motivated by the need for a more accurate and informative analysis of BiasedSGD in the strongly convex and nonconvex settings, which are problems of key importance in optimization research and deep learning. Our results are generic and cover both subsampling and compression-based estimators, among others.
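To make the bias of estimator (2) concrete, here is a small NumPy sketch (our own, with synthetic local gradients): averaging Top-$k$-compressed gradients is deterministic and generally biased, while the rescaled Rand-$k$ compressor is unbiased, so its Monte Carlo average concentrates around the true gradient.

```python
import numpy as np

rng = np.random.default_rng(1)

def top_k(v, k):
    """Deterministic Top-k: keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def rand_k(v, k, rng):
    """Unbiased Rand-k: keep k uniformly chosen entries, rescaled by d/k."""
    d = v.size
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * d / k
    return out

d, n, k = 10, 5, 3
local_grads = rng.normal(size=(n, d))   # synthetic local gradients, one per machine
full_grad = local_grads.mean(axis=0)

# Estimator (2) with Top-k is deterministic and, in general, biased.
g_topk = np.mean([top_k(gi, k) for gi in local_grads], axis=0)
assert not np.allclose(g_topk, full_grad)

# Rand-k is unbiased: its empirical mean approaches the true gradient.
g_randk = np.mean([np.mean([rand_k(gi, k, rng) for gi in local_grads], axis=0)
                   for _ in range(5000)], axis=0)
assert np.allclose(g_randk, full_grad, atol=0.1)
```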
+ +The key contributions of our work are: + +- Inspired by recent developments in the analysis of SGD in the nonconvex setting [Khaled and Richtárik, 2023], the analysis of BiasedSGD [Bottou et al., 2018; Ajalloeian and Stich, 2020], the analysis of biased compressors [Beznosikov et al., 2020], we propose a new assumption, which we call Biased ABC, for modeling the first and the second moments of the stochastic gradient. +- We show in Section 5.2 that Biased ABC is the weakest, and hence the most general, among all assumptions in the existing literature on BiasedSGD we are aware of (see Figure 1), including concepts such as Contractive (CON) [Cordonnier, 2018; Stich et al., 2018; Beznosikov et al., 2020], Absolute (ABS) [Sahu et al., 2021], Bias-Variance Decomposition (BVD) [Condat et al., 2022], Bounded Relative Error Quantization (BREQ) [Khirirat et al., 2018b], Bias-Noise Decomposition (BND) [Ajalloeian and Stich, 2020], Strong Growth 1 (SG1) and Strong Growth 2 (SG2) [Beznosikov et al., 2020], and First and Second Moment Limits (FSML) [Bottou et al., 2018] estimators. +- We prove that unlike the existing assumptions, which implicitly assume that the bias comes from either perturbation or compression, Biased ABC also holds in settings such as subsampling. +- We recover the optimal rates for general smooth nonconvex problems and for problems under the PL condition in the unbiased case and prove that these rates are also optimal in the biased case. +- In the strongly convex case, we establish a similar convergence result in terms of iterate norms as in [Hu et al., 2021a], however, under milder assumptions and not only for the classical version of SGD. Our proof strategy is very different and much simpler. 
+ +# 4 Existing models of biased gradient estimators + +Since application of a gradient compressor to the gradient constitutes a gradient estimator, below we often reformulate known assumptions and results obtained for biased compressors in the more general form of biased gradient estimators. Beznosikov et al. [2020] analyze SGD under the assumption that $f$ is $\mu$ -strongly convex, and propose three different assumptions for compressors. + +Assumption 1 (Strong Growth 1, SG1 - Beznosikov et al. [2020]) Let us say that $g(x)$ belongs to a set $\mathbb{B}^1(\alpha, \beta)$ of biased gradient estimators, if, for some $\alpha, \beta > 0$ , for every $x \in \mathbb{R}^d$ , $g(x)$ satisfies + +$$ +\alpha \| \nabla f (x) \| ^ {2} \leq \mathbb {E} \left[ \| g (x) \| ^ {2} \right] \leq \beta \langle \mathbb {E} [ g (x) ], \nabla f (x) \rangle . \tag {3} +$$ + +Assumption 2 (Strong Growth 2, SG2 - Beznosikov et al. [2020]) Let us say that $g(x)$ belongs to a set $\mathbb{B}^2(\tau, \beta)$ of biased gradient estimators, if, for some $\tau, \beta > 0$ , for every $x \in \mathbb{R}^d$ , $g(x)$ satisfies + +$$ +\left. \max \left\{\tau \| \nabla f (x) \| ^ {2}, \frac {1}{\beta} \mathbb {E} \left[ \| g (x) \| ^ {2} \right] \right\} \leq \langle \mathbb {E} [ g (x) ], \nabla f (x) \rangle . \right. \tag {4} +$$ + +Note that each of Assumptions 1 and 2 imply + +$$ +\mathbb {E} \left[ \| g (x) \| ^ {2} \right] \leq \beta^ {2} \| \nabla f (x) \| ^ {2}. \tag {5} +$$ + +Assumption 3 (Contractive, CON - Beznosikov et al. [2020]) Let us say that $g(x)$ belongs to a set $\mathbb{B}^3(\delta)$ of biased gradient estimators, if for some $\delta > 0$ , for every $x \in \mathbb{R}^d$ , $g(x)$ satisfies + +$$ +\mathbb {E} \left[ \| g (x) - \nabla f (x) \| ^ {2} \right] \leq \left(1 - \frac {1}{\delta}\right) \| \nabla f (x) \| ^ {2}. \tag {6} +$$ + +The last condition is an abstraction of the contractive compression property (see Appendix L). Condat et al. 
[2022] introduce another assumption for biased compressors, influenced by a bias-variance decomposition equation for the second moment:

$$
\mathbb {E} \left[ \| g (x) - \nabla f (x) \| ^ {2} \right] = \| \mathbb {E} [ g (x) ] - \nabla f (x) \| ^ {2} + \mathbb {E} \left[ \| g (x) - \mathbb {E} [ g (x) ] \| ^ {2} \right]. \tag {7}
$$

Let us write the assumption itself.

Assumption 4 (Bias-Variance Decomposition, BVD - Condat et al. [2022]) Let $0 \leq \eta \leq 1$ , $\xi \geq 0$ ; for all $x \in \mathbb{R}^d$ , the gradient estimator $g(x)$ satisfies

$$
\left\| \mathbb {E} [ g (x) ] - \nabla f (x) \right\| ^ {2} \leq \eta \| \nabla f (x) \| ^ {2}, \tag {8}
$$

$$
\mathbb {E} \left[ \| g (x) - \mathbb {E} [ g (x) ] \| ^ {2} \right] \leq \xi \| \nabla f (x) \| ^ {2}. \tag {9}
$$

Khirirat et al. [2018b] proposed another assumption on deterministic compressors.

Assumption 5 (Bounded Relative Error Quantization, BREQ - Khirirat et al. [2018b]) For some $\rho, \zeta \geq 0$ and all $x \in \mathbb{R}^d$ ,

$$
\langle g (x), \nabla f (x) \rangle \geq \rho \| \nabla f (x) \| ^ {2}, \tag {10}
$$

$$
\left\| g (x) \right\| ^ {2} \leq \zeta \left\| \nabla f (x) \right\| ^ {2}. \tag {11}
$$

The restriction below was imposed on the gradient estimator $g(x)$ by Ajalloeian and Stich [2020]. For the purpose of clarity, we rewrote it in the notation adopted in our paper. We refer the reader to Appendix O for the proof of equivalence of these two definitions.

Assumption 6 (Bias-Noise Decomposition, BND - Ajalloeian and Stich [2020]) Let $M, \sigma^2, \varphi^2$ be nonnegative constants, and let $0 \leq m < 1$ . For all $x \in \mathbb{R}^d$ , $g(x)$ satisfies

$$
\mathbb {E} \left[ \| g (x) - \mathbb {E} [ g (x) ] \| ^ {2} \right] \leq M \| \mathbb {E} [ g (x) ] \| ^ {2} + \sigma^ {2}, \tag {12}
$$

$$
\left\| \mathbb {E} [ g (x) ] - \nabla f (x) \right\| ^ {2} \leq m \| \nabla f (x) \| ^ {2} + \varphi^ {2}.
\tag {13}
$$

The following assumption was introduced by Sahu et al. [2021] (see also the work of Danilova and Gorbunov [2022]).

Assumption 7 (Absolute Estimator, ABS - Sahu et al. [2021]) For all $x \in \mathbb{R}^d$ , there exists $\Delta \geq 0$ such that

$$
\mathbb {E} \left[ \| g (x) - \nabla f (x) \| ^ {2} \right] \leq \Delta^ {2}. \tag {14}
$$

This condition is tightly related to the contractive compression property (see Appendix M). Further, Bottou et al. [2018] proposed the following restriction on a stochastic gradient estimator.

Assumption 8 (First and Second Moment Limits, FSML - Bottou et al. [2018]) There exist constants $0 < q \leq u$ , $U \geq 0$ , $Q \geq 0$ , such that, for all $x \in \mathbb{R}^d$ ,

$$
\langle \nabla f (x), \mathbb {E} [ g (x) ] \rangle \geq q \| \nabla f (x) \| ^ {2}, \tag {15}
$$

$$
\left\| \mathbb {E} [ g (x) ] \right\| \leq u \| \nabla f (x) \|, \tag {16}
$$

$$
\mathbb {E} \left[ \| g (x) - \mathbb {E} [ g (x) ] \| ^ {2} \right] \leq U \| \nabla f (x) \| ^ {2} + Q. \tag {17}
$$

Our first theorem, described informally below and stated and proved formally in the appendix, provides the required counterexamples of problems and estimators for the diagram in Figure 1.

Theorem 1 (Informal) The assumptions connected by dashed lines in Figure 1 are mutually non-implicative.

The result says that some pairs of assumptions are in a certain sense unrelated: neither implies the other. In the next section, we introduce a new assumption, and provide deeper connections between all assumptions.

# 5 New approach: biased ABC assumption

# 5.1 Brief history

Several existing restrictions on the first moment of the estimator were very briefly outlined in the previous section (see (3), (8), (10), (13), (15)).
Khaled and Richtárik [2023] recently introduced a very general and accurate Expected Smoothness assumption (we will call it the ABC-assumption in this paper) on the second moment of the unbiased estimator. Let us note that Polyak and Tsypkin [1973] explored a related assumption during their analysis of pseudogradient algorithms. They succeeded in establishing an asymptotic convergence bound for a variant of gradient descent in the unbiased scenario. In contrast, our study focuses on non-asymptotic convergence rates in the biased setting. We generalize the restrictions (3), (10), (15) on the first moment and combine them with the ABC-assumption to develop our Biased ABC framework.

Assumption 9 (Biased ABC) There exist constants $A, B, C, b, c \geq 0$ such that the gradient estimator $g(x)$ for every $x \in \mathbb{R}^d$ satisfies

$$
\langle \nabla f (x), \mathbb {E} [ g (x) ] \rangle \geq b \| \nabla f (x) \| ^ {2} - c, \tag {18}
$$

$$
\mathbb {E} \left[ \| g (x) \| ^ {2} \right] \leq 2 A \left(f (x) - f ^ {*}\right) + B \| \nabla f (x) \| ^ {2} + C. \tag {19}
$$

The term $A\left(f(x) - f^{*}\right)$ in (19) naturally emerges when we bound expressions of the form $\sum_{i=1}^{n} q_{i} \left\| \nabla f_{i}(x) \right\|^{2}$ , $q_{i} \geq 0$ , $i \in [n]$ : while such an expression cannot be confined solely by the norm of the overall gradient $B \left\| \nabla f(x) \right\|^{2}$ , nor by a constant $C$ , smoothness suffices to bound it by $A\left(f(x) - f^{*}\right)$ . Further, there exist quadratic stochastic optimization problems where the second moment of a stochastic gradient is precisely equal to $2(f(x) - f^{*})$ (see Richtárik and Takáč [2020]).

Concerning the challenges in verifying the Biased ABC assumption, it is worth mentioning that in Machine Learning, loss functions are commonly bounded from below by $f^{*} = 0$ . In Tables 2 and 8, we provide the constants that validate the fulfillment of our assumption by a wide range of practical estimators.
Furthermore, Claims 2-4 can aid in determining these constants for various sampling schemes.

# 5.2 Biased ABC as the weakest assumption

As discussed in Section 4, there exists a Zoo of assumptions on the stochastic gradients in the literature on BiasedSGD. Our second theorem, described informally below and stated and proved formally in the appendix, says that our new Biased ABC assumption is the least restrictive of all the assumptions reviewed in Section 4.

Theorem 2 (Informal) Assumption 9 (Biased ABC) is the weakest among Assumptions 1 - 9.

Inequality (8) of BVD and inequality (13) of BND show that one can impose the restriction on the first moment by bounding the norm of the bias. We choose inequality (18), which bounds the inner product between the estimator and the gradient, on purpose: this approach turns out to be more general on its own. In the proof of Theorem 2-ix (see (46) and (47)) we show that (13) implies (18). Below we show, via a counterexample, that the reverse implication does not hold.

Claim 1 There exists a finite-sum minimization problem for which a gradient estimator that satisfies inequality (18) of Assumption 9 does not satisfy inequality (13) of Assumption 6.

The relationships among Assumptions 1-9 are depicted in Figure 1 based on the results of Theorem 1 and Theorem 2. It is evident from Figure 1 that Assumption 6 (BND) and Assumption 8 (FSML) are mutually non-implicative and represent the most general assumptions among those proposed in Assumptions 1-8.

The most significant difference between our Assumption 9 (Biased ABC) and Assumptions 6 and 8 is the inclusion of the term $A\left(f(x) - f^{*}\right)$ in the bound on the second moment of the estimator.
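As a quick numerical sanity check (ours, not from the paper), inequalities (18) and (19) can be verified directly for the deterministic Top-$k$ estimator, which fits the Biased ABC framework with $A = C = c = 0$, $B = 1$ and $b = k/d$ (cf. Table 2):

```python
import numpy as np

rng = np.random.default_rng(2)

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

d, k = 20, 5
for _ in range(1000):
    grad = rng.normal(size=d)   # stand-in for nabla f(x) at a random point
    g = top_k(grad, k)          # deterministic estimator, so E[g(x)] = g(x)
    sq = float(grad @ grad)
    # (18): <nabla f(x), E[g(x)]> >= b ||nabla f(x)||^2 - c, with b = k/d, c = 0.
    assert grad @ g >= (k / d) * sq - 1e-12
    # (19): E||g(x)||^2 <= 2A(f(x) - f*) + B ||nabla f(x)||^2 + C, with A = 0, B = 1, C = 0.
    assert g @ g <= sq + 1e-12
```

The first assertion holds because the $k$ largest squared entries account for at least a $k/d$ fraction of $\|\nabla f(x)\|^2$; the second because Top-$k$ only zeroes out coordinates.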
| Assumption | A | B | C | b | c |
| --- | --- | --- | --- | --- | --- |
| Asm 1 (SG1) [Beznosikov et al., 2020] | 0 | $\beta^2$ | 0 | $\alpha/\beta$ | 0 |
| Asm 2 (SG2) [Beznosikov et al., 2020] | 0 | $\beta^2$ | 0 | $\tau$ | 0 |
| Asm 3 (CON) [Beznosikov et al., 2020] | 0 | $2(2 - 1/\delta)$ | 0 | $\frac{1}{2\delta}$ | 0 |
| Asm 4 (BVD) [Condat et al., 2022] | 0 | $2(1 + \xi + \eta)$ | 0 | $\frac{1-\eta}{2}$ | 0 |
| Asm 5 (BREQ) [Khirirat et al., 2018b] | 0 | $\zeta$ | 0 | $\rho$ | 0 |
| Asm 6 (BND) [Ajalloeian and Stich, 2020] | 0 | $2(M+1)(m+1)$ | $2(M+1)\varphi^2 + \sigma^2$ | $\frac{1-m}{2}$ | $\frac{\varphi^2}{2}$ |
| Asm 7 (ABS) [Sahu et al., 2021] | 0 | 2 | $2\Delta^2$ | $\frac{1}{2}$ | $\frac{\Delta^2}{2}$ |
| Asm 8 (FSML) [Bottou et al., 2018] | 0 | $U + u^2$ | $Q$ | $q$ | 0 |
The rationale behind this inclusion was detailed in Section 5.1. In general, expressions of the form $\sum_{i=1}^{n} q_i \|\nabla f_i(x)\|^2$ , where $q_i \geq 0$ for $i \in [n]$ , often arise in sampling schemes. We present two practical settings with sampling schemes (see Definitions 1 and 2) that can be described within the Biased ABC framework. These settings, in general, fall outside of the BND and FSML frameworks.

In Section D.2 (see Proof of Theorem 2, parts viii and ix) we present an example of a setting with a minimization problem and a gradient estimator that justifies the introduction of this term: the BND and FSML frameworks do not capture the proposed setting, while Biased ABC does capture it.

In Table 1 we provide a representation of each of Assumptions 1-8 in our Biased ABC framework (based on the results of Theorem 13). Note that the constants in Table 1 are too pessimistic: given an estimator satisfying one of these assumptions, directly computing its Biased ABC constants might lead to much more accurate results. In Table 2 we give a description of popular gradient estimators in terms of the Biased ABC framework. Finally, in Table 3 we list several popular estimators and indicate which of Assumptions 1-9 they satisfy.

Table 1: Summary of known assumptions on biased stochastic gradients. Estimators satisfying any of them belong to our general Biased ABC framework with parameters $A$ , $B$ , $C$ , $b$ and $c$ provided in this table. For proofs, we refer the reader to Theorem 13.
| Estimator | Def | A | B | C | b | c |
| --- | --- | --- | --- | --- | --- | --- |
| Biased independent sampling [This paper] | Def. 1 | $\max_i\{L_i\} / \min_i\{p_i\}$ | 0 | $2A\Delta^* + s^2$ | $\min_i\{p_i\}$ | 0 |
| Top-$k$ [Aji and Heafield, 2017] | Def. 3 | 0 | 1 | 0 | $k/d$ | 0 |
| Rand-$k$ [Stich et al., 2018] | Def. 4 | 0 | $d/k$ | 0 | 1 | 0 |
| Biased Rand-$k$ [Beznosikov et al., 2020] | Def. 5 | 0 | $k/d$ | 0 | $k/d$ | 0 |
| Adaptive random sparsification [Beznosikov et al., 2020] | Def. 6 | 0 | 1 | 0 | $1/d$ | 0 |
| General unbiased rounding [Beznosikov et al., 2020] | Def. 7 | 0 | $\sup_{k\in\mathbb{Z}}\left(\frac{a_{k+1}}{4a_k} + \frac{a_k}{4a_{k+1}} + \frac{1}{2}\right)$ | 0 | 1 | 0 |
| Natural compression [Horváth et al., 2022] | Def. 9 | 0 | $9/8$ | 0 | 1 | 0 |
| Scaled integer rounding [Sapio et al., 2021] | Def. 15 | 0 | 2 | $2d/\chi^2$ | $1/2$ | $d/(2\chi^2)$ |
+ +Table 2: Summary of popular estimators with respective parameters $A$ , $B$ , $C$ , $b$ and $c$ , satisfying our general Biased ABC framework. Constants $L_{i}$ are from Assumption 13, $\Delta^{*}$ is defined in (26). For more estimators, see Table 8. + +# 6 Convergence of biased SGD under the biased ABC assumption + +Convergence rates of theorems below are summarized in Table 4 and compared to their counterparts. + +
| Estimator \ Assumption | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Biased independent sampling [This paper] |  |  |  |  |  |  |  |  |  |
| Top-$k$ sparsification [Aji and Heafield, 2017] |  |  |  |  |  |  |  |  |  |
| Rand-$k$ [Stich et al., 2018] |  |  |  |  |  |  |  |  |  |
| Biased Rand-$k$ [Beznosikov et al., 2020] |  |  |  |  |  |  |  |  |  |
| Adaptive random sparsification [Beznosikov et al., 2020] |  |  |  |  |  |  |  |  |  |
| General unbiased rounding [Beznosikov et al., 2020] |  |  |  |  |  |  |  |  |  |
| Natural compression [Horváth et al., 2022] |  |  |  |  |  |  |  |  |  |
| Scaled integer rounding [Sapio et al., 2021] |  |  |  |  |  |  |  |  |  |
Table 3: Coverage of popular estimators by known frameworks. For more estimators, see Table 9.

# 6.1 General nonconvex case

Theorem 3 Let Assumptions 0 and 9 hold. Let $\delta^0 \stackrel{\mathrm{def}}{=} f(x^0) - f^*$ , and choose the stepsize such that $0 < \gamma \leq \frac{b}{LB}$ . Then the iterates $\{x^t\}_{t \geq 0}$ of BiasedSGD (Algorithm 1) satisfy

$$
\min _ {0 \leq t \leq T - 1} \mathbb {E} \left[ \left\| \nabla f (x ^ {t}) \right\| ^ {2} \right] \leq \frac {2 (1 + L A \gamma^ {2}) ^ {T}}{b \gamma T} \delta^ {0} + \frac {L C \gamma}{b} + \frac {c}{b}. \tag {20}
$$

While one can notice the possibility of an exponential blow-up in (20), by carefully controlling the stepsize we can still guarantee the convergence of BiasedSGD. In Corollaries 5 and 6 (see the appendix) we retrieve the results of Theorem 2 and Corollary 1 from [Khaled and Richtárik, 2023] for the unbiased case. In Corollary 7 (see the appendix) we obtain a result that is worse than that in [Ajalloeian and Stich, 2020, Theorem 4] by a multiplicative factor and an extra additive term, but under milder conditions (cf. Biased ABC and BND in Figure 1; see also Claim 1). If we set $A = c = 0$ , we recover the result of [Bottou et al., 2018, Theorem 4.8] (see Corollary 8 in the appendix).

# 6.2 Convergence under PL-condition

One of the popular generalizations of strong convexity in the literature is the Polyak-Lojasiewicz assumption [Polyak, 1963; Karimi et al., 2016; Lei et al., 2019]. First, we define this condition.

Assumption 10 (Polyak-Lojasiewicz) There exists $\mu > 0$ such that $\| \nabla f(x)\|^2 \geq 2\mu (f(x) - f^*)$ for all $x \in \mathbb{R}^d$ .

We now formulate a theorem that establishes the convergence of BiasedSGD for functions satisfying this assumption and Assumption 9.

Theorem 4 Let Assumptions 0, 9 and 10 hold. Choose a stepsize such that

$$
0 < \gamma < \min \left\{\frac {\mu b}{L (A + \mu B)}, \frac {1}{\mu b} \right\}.
\tag {21} +$$ + +Letting $\delta^0\stackrel {def}{=}f(x^0) - f^*$ , for every $T\geq 1$ , we have + +$$ +\mathbb {E} \left[ f (x ^ {T}) - f ^ {*} \right] \leq \left(1 - \gamma \mu b\right) ^ {T} \delta^ {0} + \frac {L C \gamma}{2 \mu b} + \frac {c}{\mu b}. \tag {22} +$$ + +When $c = 0$ , the last term in (22) disappears, and we recover the best known rates under the Polyak-Lojasiewicz condition [Karimi et al., 2016], but under milder conditions (see Corollary 10 in the appendix). Further, if we set $A = 0$ , we obtain a result that is slightly weaker than the one obtained by Ajalloeian and Stich [2020, Theorem 6], but under milder assumptions (cf. Biased ABC and BND in Figure 1; see also Claim 1). + +# 6.3 Strongly convex case + +Assumption 11 Let $f$ be $\mu$ -strongly-convex and continuously differentiable. + +
| Theorem | Convergence rate | Compared to | Rate we compare to | Match? |
| --- | --- | --- | --- | --- |
| Thm 3 | $\mathcal{O}\left(\frac{\delta^0 L}{\varepsilon^2}\max\left\{B,\frac{12\delta^0 A}{\varepsilon^2},\frac{2C}{\varepsilon^2}\right\}\right)$ | Khaled and Richtárik [2023], Thm 2 | $\mathcal{O}\left(\frac{\delta^0 L}{\varepsilon^2}\max\left\{B,\frac{12\delta^0 A}{\varepsilon^2},\frac{2C}{\varepsilon^2}\right\}\right)$ | ✓ |
| Thm 3 | $\mathcal{O}\left(\max\left\{\frac{8(M+1)(m+1)}{(1-m)^2\varepsilon},\frac{16(M+1)\varphi^2+2\sigma^2}{(1-m)^2\varepsilon^2}\right\}L\delta^0\right)$ | Ajalloeian and Stich [2020], Thm 4 | $\mathcal{O}\left(\max\left\{\frac{M+1}{(1-m)\varepsilon},\frac{2\sigma^2}{(1-m)^2\varepsilon^2}\right\}L\delta^0\right)$ | up to constants |
| Thm 3 | $\mathcal{O}\left(\max\left\{\frac{8Q}{\varepsilon^2 q^2},\frac{4(U+u^2)}{\varepsilon^2 q^2}\right\}L\delta^0\right)$ | Bottou et al. [2018], Thm 4.8 | $\mathcal{O}\left(\max\left\{\frac{8Q}{\varepsilon^2 q^2},\frac{4(U+u^2)}{\varepsilon^2 q^2}\right\}L\delta^0\right)$ | ✓ |
| Thm 4 | $\tilde{\mathcal{O}}\left(\max\left\{\frac{2(M+1)(m+1)}{1-m},\frac{2(M+1)\varphi^2+\sigma^2}{\varepsilon\mu(1-m)+2\varphi^2}\right\}\frac{\kappa}{1-m}\right)$ | Ajalloeian and Stich [2020], Thm 6 | $\tilde{\mathcal{O}}\left(\max\left\{(M+1),\frac{\sigma^2}{\varepsilon\mu(1-m)+\varphi^2}\right\}\frac{\kappa}{1-m}\right)$ | up to constants |
| Thm 4 | $\tilde{\mathcal{O}}\left(\max\left\{2,\frac{L(U+u^2)}{q^2\mu},\frac{LQ}{\varepsilon\mu^2 q^2}\right\}\right)$ | Bottou et al. [2018], Thm 4.6 | $\tilde{\mathcal{O}}\left(\max\left\{2,\frac{L(U+u^2)}{q^2\mu},\frac{LQ}{\varepsilon\mu^2 q^2}\right\}\right)$ | ✓ |
| Thm 4 | $\tilde{\mathcal{O}}\left(\left(\frac{\beta^2}{\alpha}\right)^2\frac{L}{\mu}\right)$ | Beznosikov et al. [2020], Thm 12 | $\tilde{\mathcal{O}}\left(\frac{\beta^2}{\alpha}\frac{L}{\mu}\right)$ | up to a factor |
| Thm 4 | $\tilde{\mathcal{O}}\left(\left(\frac{\beta}{\tau}\right)^2\frac{L}{\mu}\right)$ | Beznosikov et al. [2020], Thm 13 | $\tilde{\mathcal{O}}\left(\frac{\beta}{\tau}\frac{L}{\mu}\right)$ | up to a factor |
| Thm 4 | $\tilde{\mathcal{O}}\left(\delta^2\frac{L}{\mu}\right)$ | Beznosikov et al. [2020], Thm 14 | $\tilde{\mathcal{O}}\left(\delta\frac{L}{\mu}\right)$ | up to a factor |
Table 4: Complexity comparison. We examine whether we can achieve the same convergence rate as obtained under stronger assumptions. In most cases, we ensure the same rate, albeit with inferior multiplicative factors due to the broader scope of the analysis. The notation $\tilde{\mathcal{O}}(\cdot)$ hides a logarithmic factor of $\log \frac{2\delta^0}{\varepsilon}$.

Since Assumption 10 is more general than Assumption 11, Theorem 4 can be applied to functions that satisfy Assumption 11. If we set $A = c = 0$, we recover [Bottou et al., 2018, Theorem 4.6] (see Corollary 13 in the appendix). If $A = C = c = 0$, we retrieve results comparable to those in [Beznosikov et al., 2020, Theorems 12-14], up to a multiplicative factor (see Corollary 14 in the appendix). Due to $\mu$-strong convexity, our result (22) also implies iterate convergence, since $\mathbb{E}\left[\left\| x^T - x^* \right\|^2\right] \leq \frac{2}{\mu} \mathbb{E}\left[f(x^T) - f(x^*)\right]$. However, in this case an additional factor of $\frac{2}{\mu}$ arises. Below we present a stronger result, at the cost of imposing a stricter condition on the control variables from Assumption 9.

Assumption 12 Let $A, B, C$ and $b$ be parameters from Assumption 9. Let $\mu$ be a strong convexity constant. Let $L$ be a smoothness constant. Suppose that $A + L(B + 1 - 2b) < \mu$ holds.

Under Assumptions 9 and 12 we establish a result similar to the one obtained by Hu et al. [2021a, Theorem 1]. The authors impose an upper bound of $\frac{1}{\kappa}$ on a constant playing a role analogous to that of $B + 1 - 2b$ in Assumptions 9 and 12 with $A = 0$. However, unlike us, the authors consider only the finite-sum case, which makes our result more general. Moreover, only a biased version of SGD with a simple sampling strategy is analyzed by Hu et al. [2021a]. Our results apply to a larger variety of gradient estimators and are obtained under milder assumptions. Finally, our proof strategy is different and much simpler.
Theorem 5 Let Assumptions 0, 9, 11 and 12 hold. For every positive $s$ satisfying $A + L(B + 1 - 2b) < s < \mu$, choose a stepsize $\gamma$ such that

$$
0 < \gamma \leq \min\left\{\frac{1 - \frac{1}{s}(A + L(B + 1 - 2b))}{A + LB}, \frac{1}{\mu - s}\right\}. \tag{23}
$$

Then the iterates of BiasedSGD (Algorithm 1) for every $T \geq 1$ satisfy

$$
\mathbb{E}\left[\left\|x^T - x^*\right\|^2\right] \leq (1 - \gamma(\mu - s))^T \left\|x^0 - x^*\right\|^2 + \frac{\gamma C + \frac{C + 2c}{s}}{\mu - s}. \tag{24}
$$

In the standard result for (unbiased) SGD, the convergence neighborhood term has the form $\frac{\gamma C}{\mu}$, and it can be controlled by adjusting the stepsize. However, due to the generality of our analysis in the biased case, in (24) we obtain an extra uncontrollable neighborhood term of the form $\frac{C + 2c}{s(\mu - s)}$.

When $A = C = c = 0$, $B = 1$, $b = 1$, and $s \to 0$, we recover exactly the classical result for GD.

# 7 Experiments

Datasets, Hardware, and Code Implementation. The experiments use the publicly available LibSVM datasets [Chang and Lin, 2011], specifically splice, a9a, and w8a. All algorithms were implemented in Python 3.8 and executed on a machine with 48 cores of an Intel(R) Xeon(R) Gold 6246 CPU @ 3.30GHz.

Experiment: Problem Setting. To validate our theoretical findings, we conducted a series of numerical experiments on a binary classification problem.
Specifically, we employed logistic regression with a non-convex regularizer:

$$
\min_{x \in \mathbb{R}^d} \left[ f(x) \stackrel{\text{def}}{=} \frac{1}{n} \sum_{i=1}^{n} f_i(x) \right], \quad \text{where} \quad f_i(x) \stackrel{\text{def}}{=} \log\left(1 + \exp\left(-y_i a_i^\top x\right)\right) + \lambda \sum_{j=1}^{d} \frac{x_j^2}{1 + x_j^2},
$$

and $(a_i, y_i) \in \mathbb{R}^d \times \{-1, 1\}$, $i = 1, \dots, n$, represent the training data samples. In all experiments, we fix the regularization parameter to $\lambda = 1$. We use datasets from the open LibSVM library [Chang and Lin, 2011]. We examine the performance of the proposed BiasedSGD method with biased independent sampling without replacement (we call it BiasedSGD-ind) in various settings (see Definition 1). The primary goal of these numerical experiments is to demonstrate the alignment of our theoretical findings with the observed experimental results. To assess the performance of the methods throughout the optimization process, we monitor the metric $\|\nabla f(x^t)\|^2$, recomputed every 10 iterations. The algorithms are terminated after completing 5000 iterations. For each method, we use the largest theoretical stepsize. Specifically, for BiasedSGD-ind, the stepsize is determined according to Corollary 4 and Claim 2 with $\gamma = \min\left\{\frac{1}{\sqrt{LAK}},\frac{b}{LB},\frac{c}{LC}\right\}$, where $c = 0$, $A = \max_i\frac{L_i}{\min_i p_i}$, $B = 0$, $C = 2A\Delta^{*} + s^{2}$, $b = \min_{i}p_{i}$ and $s = 0$.

More experimental details are provided in Appendix A.

![](images/376ba8442d467372b048d1c4a7b2c7e3daab5cd1eb8c4e7368a41d04f202aeb3.jpg)
Figure 2: The performance of BiasedSGD-ind with different choices of probabilities.
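To make the setup concrete, the gradient of $f_i$ above and a biased independent-sampling step can be sketched as follows. This is a minimal sketch on synthetic random data, not the paper's implementation: the dataset, the uniform probability $p_i = 0.5$, the stepsize, and the iteration count are placeholder choices used only to illustrate the idea behind BiasedSGD-ind.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a LibSVM dataset (the paper uses splice, a9a, w8a).
n, d, lam = 200, 20, 1.0
A_data = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)

def grad_fi(x, i):
    """Gradient of f_i(x) = log(1 + exp(-y_i a_i^T x)) + lam * sum_j x_j^2/(1+x_j^2)."""
    margin = -y[i] * (A_data[i] @ x)
    sigma = 1.0 / (1.0 + np.exp(-margin))      # sigmoid of the margin
    logistic_grad = -sigma * y[i] * A_data[i]
    reg_grad = lam * 2.0 * x / (1.0 + x ** 2) ** 2
    return logistic_grad + reg_grad

def biased_sgd_ind(x0, p, gamma, T):
    """Sketch of BiasedSGD-ind: each index i enters the batch independently
    with probability p_i, and the batch gradient is averaged *without*
    1/p_i reweighting, which makes the estimator biased in general."""
    x = x0.copy()
    for _ in range(T):
        mask = rng.random(n) < p
        if not mask.any():
            continue  # empty batch: skip this step
        g = np.mean([grad_fi(x, i) for i in np.flatnonzero(mask)], axis=0)
        x = x - gamma * g
    return x

x_final = biased_sgd_ind(np.zeros(d), p=np.full(n, 0.5), gamma=0.1, T=500)
grad_norm_sq = np.linalg.norm(
    np.mean([grad_fi(x_final, i) for i in range(n)], axis=0)) ** 2
print(grad_norm_sq)  # the tracked metric ||∇f(x)||^2
```

Because the sampled average is not reweighted by $1/p_i$, the gradient estimator is biased whenever the inclusion probabilities are non-uniform, which is exactly the regime the Biased ABC analysis is designed to cover.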
![](images/b641555c509633d3bec8042dfe54f392a411a9dd69b67c6282e40203b661b9a5.jpg)

![](images/e1ed5bae8372a69702a5b58c43d0fbe0833eabcde4ec670f6ed9d8e932ccbe08.jpg)

Experiment: The impact of the parameter $p$ on the convergence behavior. In the first experiment, we investigate how the convergence of BiasedSGD-ind is affected as we increase the probabilities $p_i$, while keeping them equal for all data samples. According to Corollary 4, larger $p_i$ values (resulting in a larger expected batch size) allow for a larger stepsize, which, in turn, improves the overall convergence. This behavior is evident in Figure 2. The experiment visualized in Figure 2 varies the probability parameter $p$ over the set $\{0.01, 0.1, 0.5\}$. This choice directly influences the value of $A$, and consequently the theoretical stepsize $\gamma$, which for BiasedSGD-ind equals $\frac{1}{\sqrt{LAK}}$. A comprehensive compilation of these parameters is presented in Table 7.

# 8 Conclusion

In this work, we consolidate various recent assumptions regarding the convergence of BiasedSGD and elucidate the implication relationships among them. Moreover, we introduce a weaker assumption, referred to as Biased ABC. We also demonstrate that Biased ABC encompasses stochastic gradient oracles that previous assumptions excluded. Under this assumption, we prove the convergence of BiasedSGD in multiple scenarios: strongly convex, non-convex, and under the PL-condition. The convergence rates we obtain match those of prior work up to a constant factor, owing to the broader setting, and in some cases coincide with the rates obtained under stricter assumptions. Furthermore, we examine the most widely used estimators in the SGD literature, represent them within the context of our Biased ABC framework, and analyze their compatibility with all previous frameworks.
# Acknowledgements

This work of all authors was supported by the KAUST Baseline Research Scheme (KAUST BRF). The work of Y. Demidovich and P. Richtárik was supported by the KAUST Extreme Computing Research Center (KAUST ECRC), and the work of P. Richtárik was supported by the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI).

# References

Ahmad Ajalloeian and Sebastian U Stich. Analysis of SGD with biased gradient estimators. arXiv preprint arXiv:2008.00051, 2020.
Alham Fikri Aji and Kenneth Heafield. Sparse communication for distributed gradient descent. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 440-445, Copenhagen, Denmark, 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1045.
Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, and Cedric Renggli. The convergence of sparsified gradient methods. In Advances in Neural Information Processing Systems (NeurIPS), volume 31, pages 5973-5983, 2018.
El Houcine Bergou, Eduard Gorbunov, and Peter Richtárik. Stochastic three points method for unconstrained smooth minimization. SIAM Journal on Optimization, 30(4):2726-2749, 2020.
Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, and Mher Safaryan. On biased compression for distributed learning. arXiv preprint arXiv:2002.12410, 2020.
Léon Bottou, Frank Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223-311, 2018.
Soumia Boucherouite, Grigory Malinovsky, Peter Richtárik, and EL Bergou. Minibatch stochastic three points method for unconstrained smooth minimization. arXiv preprint arXiv:2209.07883, 2022.
Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1-27, 2011.
Congliang Chen, Li Shen, Haozhi Huang, and Wei Liu.
Quantized Adam with error feedback. ACM Transactions on Intelligent Systems and Technology (TIST), 12(5):1-26, 2021a. +Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 15-26, 2017. +Tianyi Chen, Yuejiao Sun, and Wotao Yin. Closing the gap: Tighter analysis of alternating stochastic gradient methods for bilevel problems. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 25294-25307. Curran Associates, Inc., 2021b. +Tianyi Chen, Yuejiao Sun, and Wotao Yin. Solving stochastic compositional optimization is nearly as easy as solving stochastic optimization. IEEE Transactions on Signal Processing, 69:4937-4948, 2021c. doi: 10.1109/TSP.2021.3092377. +Tianyi Chen, Yuejiao Sun, and Wotao Yin. A single-timescale stochastic bilevel optimization method. arXiv preprint arXiv:2102.04671, 2021d. +Laurent Condat, Kai Yi, and Peter Richtárik. EF-BV: A unified theory of error feedback and variance reduction mechanisms for biased and unbiased compression in distributed optimization. Advances in Neural Information Processing Systems (NeurIPS), 2022. +Jean-Baptiste Cordonnier. Convex optimization using sparsified stochastic gradient descent with memory. Technical report, 2018. + +Marina Danilova and Eduard Gorbunov. Distributed methods with absolute compression and error compensation. In *Mathematical Optimization Theory and Operations Research: Recent Trends: 21st International Conference, MOTOR* 2022, Petrozavodsk, Russia, July 2-6, 2022, Revised Selected Papers, pages 163-177. Springer, 2022. +Alexandre d'Aspremont. Smooth optimization with approximate gradient. SIAM Journal on Optimization, 19(3):1171-1183, 2008. 
Olivier Devolder, François Glineur, and Yuri Nesterov. First-order methods of smooth convex optimization with inexact oracle. Mathematical Programming, 146(1-2):37-75, 2014.
Aritra Dutta, El Houcine Bergou, Ahmed M Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, and Panos Kalnis. On the discrepancy between the theoretical analysis and practical implementations of compressed communication for distributed deep learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3817-3824, 2020.
Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, and Peter Richtárik. EF21 with bells & whistles: practical algorithmic extensions of modern error feedback. arXiv preprint arXiv:2110.03294, 2021.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. The MIT Press, 2016.
Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko, and Peter Richtárik. Linearly converging error compensated SGD. Advances in Neural Information Processing Systems, 33:20889-20900, 2020.
Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, and Peter Richtárik. SGD: General analysis and improved rates. In International Conference on Machine Learning (ICML), pages 5200-5209. PMLR, 2019.
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning, pages 1737-1746. PMLR, 2015.
Samuel Horváth, Chen-Yu Ho, Ludovit Horváth, Atal Narayan Sahu, Marco Canini, and Peter Richtárik. Natural compression for distributed deep learning. In *Mathematical and Scientific Machine Learning*, pages 129-141. PMLR, 2022.
Bin Hu, Peter Seiler, and Laurent Lessard. Analysis of biased stochastic gradient descent using sequential semidefinite programs. Mathematical Programming, 187:383-408, 2021a.
Yifan Hu, Xin Chen, and Niao He. Sample complexity of sample average approximation for conditional stochastic optimization. SIAM J.
on Optimization, 30(3):2103-2133, 2020a. ISSN 1052-6234. doi: 10.1137/19M1284865. URL https://doi.org/10.1137/19M1284865.
Yifan Hu, Siqi Zhang, Xin Chen, and Niao He. Biased stochastic first-order methods for conditional stochastic optimization and applications in meta learning. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 2759-2770. Curran Associates, Inc., 2020b. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1cdf14d1e3699d61d237cf76ce1c2dca-Paper.pdf.
Yifan Hu, Xin Chen, and Niao He. On the bias-variance-cost tradeoff of stochastic optimization. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 22119-22131. Curran Associates, Inc., 2021b. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/b986700c627db479a4d9460b75de7222-Paper.pdf.
Kaiyi Ji, Junjie Yang, and Yingbin Liang. Theoretical convergence of multi-step model-agnostic meta-learning. Journal of Machine Learning Research, 23(1), 2022. ISSN 1532-4435.
Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. Machine Learning and Knowledge Discovery in Databases, pages 795-811, 2016.
Sai Praneeth Karimireddy, Sebastian Stich, and Martin Jaggi. Adaptive balancing of gradient and update computation times using global geometry and approximate subproblems. In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 2018.
Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian U. Stich, and Martin Jaggi. Error feedback fixes SignSGD and other gradient compression schemes. In International Conference on Machine Learning (ICML), volume 97, pages 3252-3261, 2019.
Ahmed Khaled and Peter Richtárik. Better theory for SGD in the nonconvex world.
Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=AU4qHN2VkS. Survey Certification.
Sarit Khirirat, Hamid Reza Feyzmahdavian, and Mikael Johansson. Distributed learning with compressed gradients. arXiv preprint arXiv:1806.06573, 2018a.
Sarit Khirirat, Mikael Johansson, and Dan Alistarh. Gradient compression for communication-limited convex optimization. In 2018 IEEE Conference on Decision and Control (CDC), pages 166-171. IEEE, 2018b.
Sarit Khirirat, Sindri Magnússon, and Mikael Johansson. Compressed gradient methods with Hessian-aided error compensation. IEEE Transactions on Signal Processing, 69:998-1011, 2020.
Sarit Khirirat, Sindri Magnusson, and Mikael Johansson. Eco-Fedsplit: Federated learning with error-compensated compression. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5952-5956. IEEE, 2022.
Yunwen Lei, Ting Hu, Guiying Li, and Ke Tang. Stochastic Gradient Descent for nonconvex learning without bounded gradient assumptions. IEEE Transactions on Neural Networks and Learning Systems, pages 1-7, 2019. ISSN 2162-2388.
Rémi Leluc and François Portier. SGD with coordinate sampling: Theory and practice. Journal of Machine Learning Research, 23(342):1-47, 2022.
Daniel Levy, Yair Carmon, John C. Duchi, and Aaron Sidford. Large-scale methods for distributionally robust optimization. In Proceedings of the 34th International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546.
Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Paishun Ting, Shiyu Chang, and Lisa Amini. Zeroth-order stochastic variance reduction for nonconvex optimization. Advances in Neural Information Processing Systems (NeurIPS), 31:3727-3737, 2018.
Konstantin Mishchenko, Bokun Wang, Dmitry Kovalev, and Peter Richtárik. IntSGD: Adaptive floatless compression of stochastic gradients.
arXiv preprint arXiv:2102.08374, 2021. +Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. +Arkadi Nemirovsky and David Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, New York, 1983. +Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2):527-566, 2017. +Feng Niu, Benjamin Recht, Christopher Re, and Stephen Wright. Hogwild: A lock-free approach to parallelizing Stochastic Gradient Descent. In Advances in Neural Information Processing Systems (NeurIPS), volume 24, pages 693-701, 2011. +Boris Polyak. Gradient methods for minimizing functionals. U.S.S.R. Computational Mathematics and Mathematical Physics, 3(4):864-878, 1963. +Boris Polyak. Introduction to Optimization. Optimization Software, Inc., 1987. + +Boris Polyak and Y.Z. Tsypkin. Pseudogradient adaptation and training algorithms. Automation and Remote Control, 34:377-397, 01 1973. +Peter Richtárik and Martin Takáč. Stochastic reformulations of linear systems: Algorithms and convergence theory. SIAM J. Matrix Anal. Appl., 41(2):487-524, 2020. ISSN 0895-4798. +Peter Richtárik, Igor Sokolov, and Ilyas Fatkhullin. EF21: A new, simpler, theoretically better, and practically faster error feedback. In Advances in Neural Information Processing Systems (NeurIPS), 2021. +Peter Richtárik, Igor Sokolov, Elnur Gasanov, Ilyas Fatkhullin, Zhize Li, and Eduard Gorbunov. 3PC: Three point compressors for communication-efficient distributed training and a better theory for lazy aggregation. In International Conference on Machine Learning (ICML), pages 18596-18648. PMLR, 2022. +Herbert Robbins and Sutton Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400-407, 1951. 
Atal Sahu, Aritra Dutta, Ahmed M Abdelmoniem, Trambak Banerjee, Marco Canini, and Panos Kalnis. Rethinking gradient sparsification as total error minimization. Advances in Neural Information Processing Systems (NeurIPS), 34:8133-8146, 2021.
Amedeo Sapio, Marco Canini, Chen-Yu Ho, Jacob Nelson, Panos Kalnis, Changhoon Kim, Arvind Krishnamurthy, Masoud Moshref, Dan Ports, and Peter Richtárik. Scaling distributed machine learning with in-network aggregation. In 18th USENIX Symposium on Networked Systems Design and Implementation (NSDI 21), pages 785-808, 2021.
Mark Schmidt, Nicolas Roux, and Francis Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. Advances in Neural Information Processing Systems (NeurIPS), 24:1458-1466, 2011.
Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
Sebastian U Stich and Sai Praneeth Karimireddy. The error-feedback framework: Better rates for SGD with delayed gradients and compressed updates. The Journal of Machine Learning Research, 21(1):9613-9648, 2020.
Sebastian U. Stich, J.-B. Cordonnier, and Martin Jaggi. Sparsified SGD with memory. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
Nikko Ström. Scalable distributed DNN training using commodity GPU cloud computing. In *Interspeech* 2015, 2015.
Ruo-Yu Sun. Optimization for deep learning: An overview. Journal of the Operations Research Society of China, 8(2):249-294, 2020.
Hanlin Tang, Xiangru Lian, Chen Yu, Tong Zhang, and Ji Liu. DoubleSqueeze: Parallel Stochastic Gradient Descent with double-pass error-compensated compression. In Proceedings of the 36th International Conference on Machine Learning (ICML), 2019.
Rachael Tappenden, Peter Richtárik, and Jacek Gondzio. Inexact coordinate descent: Complexity and preconditioning. Journal of Optimization Theory and Applications, 170:144-176, 2016.
Jie Wang, Rui Gao, and Yao Xie.
Sinkhorn distributionally robust optimization. arXiv preprint arXiv:2109.11926, 2021. +Jianqiao Wangni, Jialei Wang, Ji Liu, and Tong Zhang. Gradient sparsification for communication-efficient distributed optimization. In Advances in Neural Information Processing Systems (NeurIPS), volume 31, pages 1306-1316, 2018. \ No newline at end of file diff --git a/aguidethroughthezooofbiasedsgd/images.zip b/aguidethroughthezooofbiasedsgd/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7c35d296a6f47f1a9a8b7cb6f6fb00683622f2fe --- /dev/null +++ b/aguidethroughthezooofbiasedsgd/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57f7e35df01e5a3d67c5c6fdcda4b8b5e4cfc4a246fb2fbb1bc982741799a973 +size 457566 diff --git a/aguidethroughthezooofbiasedsgd/layout.json b/aguidethroughthezooofbiasedsgd/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..925fd9bc42362553b2adc8efc7435f3c7d674141 --- /dev/null +++ b/aguidethroughthezooofbiasedsgd/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7471624285451b94a10402b344b31cc0f850051df3883d97ebf204db86574c5d +size 511942 diff --git a/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/2b9dc9ec-1715-46bf-9f91-51079821d9c1_content_list.json b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/2b9dc9ec-1715-46bf-9f91-51079821d9c1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3a00238b698da4875e142fd53ff07d1429160b6e --- /dev/null +++ b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/2b9dc9ec-1715-46bf-9f91-51079821d9c1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15880eba909f38370b1e50da0ae3eed0d20865b8867116fe9999ae4a629b3921 +size 187777 diff --git a/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/2b9dc9ec-1715-46bf-9f91-51079821d9c1_model.json 
b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/2b9dc9ec-1715-46bf-9f91-51079821d9c1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0803d4794f2bdf1325def71c086059f4d0619796 --- /dev/null +++ b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/2b9dc9ec-1715-46bf-9f91-51079821d9c1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7560e4c733f6b941b59e46251aefce0916d1ffd90081a32b757a8071ff85f7a +size 213564 diff --git a/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/2b9dc9ec-1715-46bf-9f91-51079821d9c1_origin.pdf b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/2b9dc9ec-1715-46bf-9f91-51079821d9c1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..078c9f8dd1d7ed079bb633c7542d13c8d3c42e1c --- /dev/null +++ b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/2b9dc9ec-1715-46bf-9f91-51079821d9c1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:773f4206c6c23682ef31dc4f5a6dd70c2e46026881071de19fd7e5fc1afdf0c6 +size 12459816 diff --git a/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/full.md b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7d06e83cc35c445a18e11d48fd1048b4a53ade4e --- /dev/null +++ b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/full.md @@ -0,0 +1,728 @@ +# A Heat Diffusion Perspective on Geodesic Preserving Dimensionality Reduction + +Guillaume Huguet $^{1*}$ Alexander Tong $^{1*}$ Edward De Brouwer $^{2*}$ + +Yanlei Zhang $^{1}$ Guy Wolf $^{1}$ Ian Adelstein $^{2\dagger}$ Smita Krishnaswamy $^{2\dagger}$ + +1Université de Montréal; Mila - Quebec AI Institute 2 Yale University + +# Abstract + +Diffusion-based manifold learning methods have proven useful in representation learning and 
dimensionality reduction of modern high-dimensional, high-throughput, noisy datasets. Such datasets are especially common in fields like biology and physics. While it is thought that these methods preserve the underlying manifold structure of data by learning a proxy for geodesic distances, no specific theoretical links have been established. Here, we establish such a link via results in Riemannian geometry explicitly connecting heat diffusion to manifold distances. In this process, we also formulate a more general heat kernel based manifold embedding method that we call heat geodesic embeddings. This novel perspective clarifies the choices available in manifold learning and denoising. Results show that our method outperforms existing state-of-the-art methods in preserving ground-truth manifold distances and cluster structure in toy datasets. We also showcase our method on single-cell RNA-sequencing datasets with both continuum and cluster structure, where our method enables interpolation of withheld timepoints of data. Finally, we show that parameters of our more general method can be configured to give results similar to PHATE (a state-of-the-art diffusion based manifold learning method) as well as SNE (an attraction/repulsion neighborhood based method that forms the basis of t-SNE).

# 1 Introduction

The advent of high-throughput and high-dimensional data in various fields of science has made dimensionality reduction and visualization techniques an indispensable part of exploratory analysis. Diffusion-based manifold learning methods, based on the data diffusion operator, first defined in [5], have proven especially useful due to their ability to handle noise and density variations while preserving structure. As a result, diffusion-based dimensionality reduction methods, such as PHATE [22], T-PHATE [3], and diffusion maps [5], have emerged as methods for analyzing high-throughput noisy data in various situations.
While these methods are surmised to learn manifold geodesic distances, no specific theoretical links have been established. Here, we establish such a link by using Varadhan's formula [34] and a parabolic Harnack inequality [17, 25], which relate manifold distances to heat diffusion directly. This lens gives new insight into existing dimensionality reduction methods, including when they preserve geodesics, and suggests a new method for dimensionality reduction that explicitly preserves geodesics, which we call heat geodesic embeddings. Furthermore, based on our understanding of other methods [22, 5], we introduce theoretically justified parameter choices that allow our method greater versatility in terms of distance denoising and emphasis on local versus global distances.

Generally, data diffusion operators are created by first computing distances between datapoints, transforming these distances into affinities by pointwise application of a kernel function (such as a Gaussian kernel), and then row-normalizing, with or without first applying degree normalization, to obtain a Markovian diffusion operator $P$ [5, 9, 14, 21, 33]. The entries $P(x,y)$ then contain the probabilities of diffusing (or taking a random walk) from one datapoint to another. Diffusion maps and PHATE use divergences between these diffusion or random-walk probability distributions $P(x,\cdot)$ and $P(y,\cdot)$ to design a diffusion-based distance that may not directly relate to manifold distance. Our framework directly utilizes a heat kernel based distance, and offers a more comprehensive perspective from which to study these diffusion methods. By configuring parameters in our framework, we show how we can navigate a continuum of embedding methods from PHATE [22] to Stochastic Neighbor Embedding (SNE) [11].

In summary, our contributions are as follows:

- We define the heat-geodesic dissimilarity based on Varadhan's formula and the two-sided heat kernel bounds.
+- Based on this dissimilarity, we present a versatile geodesic-preserving method for dimensionality reduction which we call heat geodesic embedding. +- We establish a relationship between diffusion-based distances and the heat-geodesic dissimilarity. +- We establish connections between our method and popular dimensionality reduction techniques such as PHATE and SNE, shedding light on their geodesic preservation and denoising properties based on modifications of the computed dissimilarity and distance preservation losses. +- We empirically demonstrate the advantages of Heat Geodesic Embedding in preserving manifold geodesic distances in several experiments showcasing more faithful manifold distances in the embedding space, as well as our ability to interpolate data within the manifold. + +![](images/bafd22dab50a831c1ef968cb146b51eb17497b9568363327c1c73d67fa8dfa4b.jpg) +Figure 1: Embeddings of the Swiss roll (top) and Tree (bottom) datasets for different manifold learning methods. Our HeatGeo method correctly unrolls the Swiss roll while t-SNE and UMAP create undesirable artificial clusters. + +# 2 Preliminaries + +First, we introduce fundamental notions that form the basis of our manifold learning methods: Varadhan's formula [34] on a manifold, diffusion processes on graphs, efficient heat kernel approximations, and multidimensional scaling [4, 12, 16]. + +Varadhan's formula Varadhan's formula is a powerful tool in differential geometry that establishes a connection between the heat kernel and the shortest path (geodesic) distance on a Riemannian manifold. Its versatility has led to widespread applications in machine learning [6, 10, 15, 27-29]. Let $(M,g)$ be a closed Riemannian manifold, and $\Delta$ the Laplace-Beltrami operator on $M$ . The heat kernel $h_t(x,y)$ on $M$ is the minimal positive fundamental solution of the heat equation $\frac{\partial u}{\partial t} = \Delta u$ with initial condition $h_0(x,y) = \delta_x(y)$ . 
In a $d$ -dimensional Euclidean space the heat kernel is $h_t(x,y) = (4\pi t)^{-d/2} e^{-d(x,y)^2/4t}$ , so that $-4t \log h_t(x,y) = 2dt \log (4\pi t) + d^2(x,y)$ and we observe the following limiting behavior:

$$
\lim _ {t \rightarrow 0} - 4 t \log h _ {t} (x, y) = d ^ {2} (x, y). \tag {1}
$$

Varadhan [34] (see also [20]) proved that eq. 1 (now Varadhan's formula) holds more generally on complete Riemannian manifolds $M$ , where $d(x,y)$ is the geodesic distance on $M$ , and the convergence is uniform over compact subsets of $M$ . A related result for complete Riemannian manifolds that satisfy the parabolic Harnack inequality (which includes convex domains in Euclidean space and Riemannian manifolds with non-negative Ricci curvature) is the two-sided heat kernel bound [25, 17], showing that for any $\epsilon \in (0,1)$ there exist constants $c(\epsilon)$ and $C(\epsilon)$ such that

$$
\frac {c (\epsilon)}{V (x , \sqrt {t})} \exp \left(- \frac {d (x , y) ^ {2}}{4 (1 + \epsilon) t}\right) \leq h _ {t} (x, y) \leq \frac {C (\epsilon)}{V (x , \sqrt {t})} \exp \left(- \frac {d (x , y) ^ {2}}{4 (1 - \epsilon) t}\right), \tag {2}
$$

where $V(x, \sqrt{t})$ is the volume of a ball of radius $\sqrt{t}$ centered at $x$ . We denote this relation by $h_t(x,y) \simeq V(x, \sqrt{t})^{-1} \exp(-d(x,y)^2 / t)$ and note that it again recovers eq. 1 in the $t \to 0$ limit, which is unsurprising as Varadhan's result holds more generally. More important for our purposes is that $h_t(x,y) \simeq V(x, \sqrt{t})^{-1} \exp(-d(x,y)^2 / t)$ holds for $t > 0$ , which allows us to approximate geodesic distances $d(x,y)$ from a diffusion-based estimation of the heat kernel $h_t(x,y)$ and the volume $V(x, \sqrt{t})$ . In Appendix C.3, we provide examples using inequality (2).

Graph construction and diffusion Our construction starts by creating a graph from a point cloud dataset $X$ of size $n$ .
We use a kernel function $\kappa: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^+$ , such that the (weighted) adjacency matrix is $W_{ij} := \kappa(x_i, x_j)$ for all $x_i, x_j \in X$ . The kernel function could be a Gaussian kernel, or constructed from a nearest neighbor graph. The resulting graph $\mathcal{G}$ is characterized by the set of nodes (an ordering of the observations), the adjacency matrix, and the set of edges, i.e. pairs of nodes with non-zero weights. The graph Laplacian is an operator acting on signals on $\mathcal{G}$ such that it mimics the negative of the Laplace operator. The combinatorial graph Laplacian matrix is defined as $L := Q - W$ and its normalized version as $L = I_n - Q^{-1/2}WQ^{-1/2}$ , where $Q$ is a diagonal degree matrix with $Q_{ii} := \sum_j W_{ij}$ . The Laplacian is symmetric positive semi-definite, and has an eigen-decomposition $L = \Psi \Lambda \Psi^T$ . Throughout the presentation, we assume that $Q_{ii} > 0$ for all $i \in \{1, \dots, n\}$ . The Laplacian allows us to define the heat equation on $\mathcal{G}$ , with respect to an initial signal $f_0 \in \mathbb{R}^n$ on $\mathcal{G}$ :

$$
\frac {\partial}{\partial t} \boldsymbol {f} (t) + \boldsymbol {L} \boldsymbol {f} (t) = \mathbf {0}, \quad \text {s.t.} \quad \boldsymbol {f} (0) = \boldsymbol {f} _ {0}, \quad t \in \mathbb {R} ^ {+}. \tag {3}
$$

The solution of the above differential equation is obtained with the matrix exponential $\pmb{f}(t) = e^{-t\pmb{L}}\pmb{f}_0$ , and we define the heat kernel on the graph as $\pmb{H}_t \coloneqq e^{-t\pmb{L}}$ . By eigendecomposition, we have $\pmb{H}_t = \Psi e^{-t\Lambda}\Psi^T$ . The matrix $\pmb{H}_t$ is a diffusion matrix that characterizes how a signal propagates through the graph according to the heat equation.

Other diffusion matrices on graphs have also been investigated in the literature.
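
The constructions above (Gaussian-kernel graph, degree matrix, normalized Laplacian, and heat kernel $\pmb{H}_t = e^{-t\pmb{L}}$) can be sketched in a few lines of Python. This is a minimal small-$n$ illustration of ours, not the paper's implementation; the function name and the fixed Gaussian bandwidth are our own choices, and the exact matrix exponential stands in for the Chebyshev approximation used at scale:

```python
import numpy as np
from scipy.linalg import expm
from scipy.spatial.distance import pdist, squareform

def heat_kernel(X, t=1.0, bandwidth=1.0):
    """Exact graph heat kernel H_t = exp(-t L) for a Gaussian-kernel graph,
    with L the symmetrically normalized Laplacian I - Q^{-1/2} W Q^{-1/2}.

    Small-n illustration only: expm costs O(n^3); at scale the Chebyshev
    approximation of eq. (4) is used instead."""
    D2 = squareform(pdist(X, "sqeuclidean"))       # pairwise squared distances
    W = np.exp(-D2 / (2.0 * bandwidth**2))         # Gaussian affinities
    np.fill_diagonal(W, 0.0)                       # no self-loops
    q = W.sum(axis=1)                              # degrees Q_ii (positive here)
    A = (W / np.sqrt(q)[:, None]) / np.sqrt(q)[None, :]   # Q^{-1/2} W Q^{-1/2}
    L = np.eye(len(X)) - A
    return expm(-t * L)                            # H_t = Psi e^{-t Lambda} Psi^T
```

For point clouds beyond a few thousand samples the `expm` call dominates the cost, which is exactly what the polynomial approximations of eq. (4) avoid.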
The transition matrix $\pmb{P} \coloneqq \pmb{Q}^{-1}\pmb{W}$ characterizing a random walk on the graph is another common diffusion matrix used for manifold learning such as PHATE [22] and diffusion maps [5]. It is a stochastic matrix that converges to a stationary distribution $\pi_{i} \coloneqq Q_{ii} / \sum_{j}Q_{jj}$ , under mild assumptions.

Fast computation of heat diffusion Exact computation of the (discrete) heat kernel $H_{t}$ is computationally costly, requiring a full eigendecomposition in $O(n^{3})$ time. Fortunately, multiple fast approximations have been proposed, including using orthogonal polynomials or the Euler backward method. In this work, we use Chebyshev polynomials, as they have been shown to converge faster than other polynomials on this problem [13].

Chebyshev polynomials are defined by the recursive relation $\{T_k\}_{k \in \mathbb{N}}$ with $T_0(y) = 1$ , $T_1(y) = y$ and $T_k(y) = 2yT_{k-1}(y) - T_{k-2}(y)$ for $k \geq 2$ . Assuming that the largest eigenvalue is less than two (which holds for the normalized Laplacian), we approximate the heat kernel with the truncated polynomials of order $K$

$$
\boldsymbol {H} _ {t} \approx p _ {K} (\boldsymbol {L}, t) := \frac {b _ {t , 0}}{2} + \sum_ {k = 1} ^ {K} b _ {t, k} T _ {k} (\boldsymbol {L} - \boldsymbol {I} _ {n}), \tag {4}
$$

where the $K + 1$ scalar coefficients $\{b_{t,i}\}$ depend on time and are evaluated with the Bessel function. Computing $p_K(L,t)f$ requires $K$ matrix-vector products and $K + 1$ Bessel function evaluations. The expensive part of the computation is the matrix-vector products, which can be efficient if the Laplacian matrix is sparse. Interestingly, we note that the evaluations of $T_{k}$ do not depend on the diffusion time.
Thus, computing multiple approximations of the heat kernel $\{p_K(L,t)\}_{t\in \mathcal{T}}$ only necessitates reweighting the truncated polynomials $\{T_k\}_{k\in [1,\dots,K]}$ with the corresponding $|\mathcal{T}|$ sets of Bessel coefficients. The overall complexity is dominated by the truncated polynomial computation, which takes $O(K(E + n))$ time, where $E$ is the number of non-zero values in $\pmb{L}$ .

Another possible approximation uses the Euler backward method. It requires solving $K$ systems of linear equations, since $\pmb{f}(t) = (\pmb{I}_n + (t / K)\pmb{L})^{-K}\pmb{f}(0)$ , which can be efficient for sparse matrices using the Cholesky decomposition [10, 28]. We quantify the differences between the heat kernel approximations in Appendix C.

Metric multidimensional scaling Given a dissimilarity function $d$ between data points, metric multidimensional scaling (MDS) [16] finds an embedding $\phi$ such that the difference between the given dissimilarity and the Euclidean distance in the embedded space is minimal across all data points. Formally, for a given function $d:\mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}^+$ , MDS minimizes the following objective:

$$
L (\boldsymbol {X}) = \left(\sum_ {i j} \left(d \left(x _ {i}, x _ {j}\right) - \left\| \phi \left(x _ {i}\right) - \phi \left(x _ {j}\right) \right\| _ {2}\right) ^ {2}\right) ^ {1 / 2}. \tag {5}
$$

In metric MDS, the solution is usually found by the SMACOF algorithm [30], or stochastic gradient descent [37].

# 3 Related Work

We review state-of-the-art embedding methods and contextualize them with respect to Heat Geodesic Embedding. A formal theoretical comparison of all methods is given in Section 5. Given a set of high-dimensional datapoints, the objective of embedding methods is to create a map that embeds the observations in a lower dimensional space, while preserving distances or similarities.
Different methods vary by their choice of distance or dissimilarity functions, as shown below.

Diffusion maps In diffusion maps [5], an embedding in $k$ dimensions is defined via the first $k$ non-trivial right eigenvectors of the $t$ -steps random walk $\pmb{P}^t$ weighted by their eigenvalues. The embedding preserves the diffusion distance $DM_{\pmb{P}}(x_i, x_j) \coloneqq \| (\delta_i \pmb{P}^t - \delta_j \pmb{P}^t)(1 / \pi)\|_2$ , where $\delta_i$ is a vector such that $(\delta_i)_j = 1$ if $j = i$ and 0 otherwise, and $\pi$ is the stationary distribution of $\pmb{P}$ . Intuitively, $DM_{\pmb{P}}(x_i, x_j)$ considers all the $t$ -steps paths between $x_i$ and $x_j$ . A larger diffusion time can be seen as a low frequency graph filter, i.e. keeping only information from the low frequency transitions such as the stationary distribution. For this reason, using diffusion with $t > 1$ helps denoise the relationships between observations.

PHATE This diffusion-based method preserves the potential distance [22] $PH_{P} \coloneqq \| -\log \delta_{i}P^{t} + \log \delta_{j}P^{t}\|_{2}$ , and justifies this approach using the log transformation to prevent nearest neighbors from dominating the distances. An alternative approach is suggested using a square root transformation. Part of our contributions is to justify the log transformation from a geometric point of view. The embedding is defined using multidimensional scaling, presented in Section 2.

SNE, t-SNE, UMAP Well-known attraction/repulsion methods such as SNE [11], t-SNE [32], and UMAP [19] define an affinity matrix with entries $p_{ij}$ in the ambient space, and another affinity matrix with entries $q_{ij}$ in the embedded space. To define the embedding, a loss between the two affinity matrices is minimized.
Specifically, the loss function is $D_{\mathrm{KL}}(p||q)\coloneqq \sum_{ij}p_{ij}\log p_{ij} / q_{ij}$ in SNE and t-SNE, whereas UMAP is equivalent to adding $D_{\mathrm{KL}}(1 - p||1 - q)$ for unnormalized densities [2]. While these methods preserve affinities, they do not preserve any type of distance in the embedding.

# 4 Heat-Geodesic Embedding

In this section, we present our Heat Geodesic Embedding, which is summarized in Alg. 1. We start by introducing the heat-geodesic dissimilarity, then present a robust transformation, and a heuristic to choose the optimal diffusion time. Proofs not present in the main text are given in Appendix A.

We consider the discrete case, where we have a set of $n$ points $\{x_i\}_{i=1}^n \eqqcolon \mathbf{X}$ in a high dimensional Euclidean space $x_i \in \mathbb{R}^d$ . From this point cloud, we want to define a map $\phi: \mathbb{R}^d \to \mathbb{R}^k$ that embeds the observations in a lower dimensional space. An important property of our embedding is that we preserve manifold geodesic distances in a low dimensional space.

Heat-geodesic dissimilarity Inspired by Varadhan's formula and the Harnack inequalities, we define a heat-geodesic dissimilarity based on heat diffusion on graphs. From observations (data-points) in $\mathbb{R}^d$ , we define an undirected graph $\mathcal{G}$ , and compute its heat kernel $H_{t} = e^{-tL}$ , where $L$ is the combinatorial or symmetrically normalized graph Laplacian (the heat kernel is thus symmetric). Following inequality (2), we can rearrange the terms to isolate the geodesic distance; inspired by this observation, we define the following dissimilarity.

Definition 4.1.
For a diffusion time $t > 0$ and tunable parameter $\sigma > 0$ , we define the heat-geodesic dissimilarity between $x_{i}, x_{j} \in X$ as + +$$ +d _ {t} \left(x _ {i}, x _ {j}\right) := \left[ - 4 t \log \left(\boldsymbol {H} _ {t}\right) _ {i j} - \sigma 4 t \log \left(\boldsymbol {V} _ {t}\right) _ {i j} \right] ^ {1 / 2} +$$ + +where $\pmb{H}_t$ is the heat kernel on the graph $\mathcal{G}$ , and $(\pmb{V}_t)_{ij} \coloneqq 2[(\pmb{H}_t)_{ii} + (\pmb{H}_t)_{jj}]^{-1}$ . + +Here the log is applied elementwise, and the term $-4t\log (H_t)_{ij}$ corresponds to the geodesic approximation when $t\to 0$ as in Varadhan's formula. In practice one uses a fixed diffusion time $t > 0$ , so we add a symmetric volume correction term as in the Harnack inequality, ensuring that $d_{t}(x_{i},x_{j})$ is symmetric. From Sec. 2, we have $h_t(x,x)\simeq V(x,\sqrt{t})^{-1}$ , and we use the diagonal of $\pmb{H}_{t}$ to approximate the inverse of the volume. With this volume correction term and $\sigma = 1$ , the dissimilarity is such that $d_{t}(x_{i},x_{i}) = 0$ for all $t > 0$ . When $\sigma = 0$ or the manifold has uniform volume growth (as in the constant curvature setting) we show that the heat-geodesic dissimilarity is order preserving: + +Proposition 4.2. When $\sigma = 0$ or the manifold has uniform volume growth, i.e. $(\pmb{H}_t)_{ii} = (\pmb{H}_t)_{jj}$ , and the heat kernel is pointwise monotonically decreasing w.r.t. a norm $|\cdot|$ in ambient space, we have for triples $x_i, x_j, x_k \in \pmb{X}$ that $|x_i - x_j| > |x_i - x_k|$ implies $d_t(x_i, x_j) > d_t(x_i, x_k)$ , i.e. the heat-geodesic dissimilarity is order preserving. + +Proof. When $\sigma = 0$ or the manifold has uniform volume growth we need only consider the $-4t\log (H_t)_{ij}$ terms. The assumption of pointwise monotonicity of the heat kernel entails that $|x_i - x_j| > |x_i - x_k|$ implies $H_{t}(x_{i},x_{j}) < H_{t}(x_{i},x_{k})$ . 
We are able to conclude that $-4t\log H_t(x_i,x_j) > - 4t\log H_t(x_i,x_k)$ and thus $d_{t}(x_{i},x_{j}) > d_{t}(x_{i},x_{k})$ .

Denoising distances with triplet computations We note that both diffusion maps and PHATE compute a triplet distance between datapoints, i.e., rather than using the direct diffusion probability between datapoints, they use a distance between corresponding rows of a diffusion operator. In particular, diffusion maps use Euclidean distance, and PHATE uses an M-divergence. Empirically, we notice that this step acts as a denoiser for distances. We formalize this observation in the following proposition. We denote the triplet distance by $D_{\mathrm{T}}$ . The triplet distance compares the distances relative to other points. Intuitively, this is a denoising step, since the effect of the noise is spread across the entire set of points. For a reference dissimilarity like the heat-geodesic, it is defined as $D_{\mathrm{T}}(x_i,x_j)\coloneqq \| d_t(x_i,\cdot) - d_t(x_j,\cdot)\| _2$ . For linear perturbations of the form $d_{t}(x_{i},x_{j}) + \epsilon$ , where $\epsilon \in \mathbb{R}$ , the effect of $\epsilon$ on $D_{\mathrm{T}}(x_i,x_j)$ is less severe than on $d_{t}(x_{i},x_{j})$ . Our embedding is based on a linear combination of the heat-geodesic dissimilarity and its triplet distance, $(1 - \rho)d_{t} + \rho D_{\mathrm{T}}$ , where $\rho \in [0,1]$ .

Proposition 4.3. Denote the perturbed triplet distance by $\widetilde{D_{\mathrm{T}}} (x_i,x_j) = ||\tilde{d}_t(x_i,\cdot) - \tilde{d}_t(x_j,\cdot)||_2$ where $\tilde{d}_t(x_i,x_j)\coloneqq d_t(x_i,x_j) + \epsilon$ and $\tilde{d}_t(x_i,x_k)\coloneqq d_t(x_i,x_k)$ for $k\neq j$ .
Then the triplet distance $D_{\mathrm{T}}$ is robust to perturbations, i.e., for all $\epsilon >0$

$$
\left(\frac {\widetilde {D _ {\mathrm {T}}} (x _ {i} , x _ {j})}{D _ {\mathrm {T}} (x _ {i} , x _ {j})}\right) ^ {2} \leq \left(\frac {d _ {t} (x _ {i} , x _ {j}) + \epsilon}{d _ {t} (x _ {i} , x _ {j})}\right) ^ {2}.
$$

Optimal diffusion time Varadhan's formula suggests a small value of diffusion time $t$ to approximate geodesic distance on a manifold. However, in the discrete data setting, geodesics are based on graph constructions, which in turn rely on nearest neighbors. Thus, small $t$ can lead to disconnected graphs. Additionally, increasing $t$ can serve as a way of denoising the kernel (which is often computed from noisy data) as it implements a low-pass filter over the eigenvalues, providing the additional advantage of adding noise tolerance. By computing a sequence of heat kernels $(\pmb{H}_t)_t$ and evaluating their entropy $H(\pmb{H}_t) \coloneqq -\sum_{ij}(\pmb{H}_t)_{ij}\log (\pmb{H}_t)_{ij}$ , we select $t$ with the knee-point method [26] on the function $t\mapsto H(\pmb{H}_t)$ . We show in Sec. 6.1 that our heuristic for determining the diffusion time automatically leads to better overall results.

Weighted MDS The loss in MDS (eq. 5) is usually defined with uniform weights. Here, we optionally weight the loss by the heat kernel. In Sec. 5, we will show how this modification relates our method to the embedding defined by SNE [11]. For $x_{i}, x_{j} \in X$ , we minimize $(H_{t})_{ij}(d_{t}(x_{i}, x_{j}) - \| \phi(x_{i}) - \phi(x_{j}) \|_{2})^{2}$ . This promotes geodesic preservation of local neighbors, since more weight is given to points with higher affinities.

Heat-geodesic embedding To define a lower dimensional embedding of a point cloud $X$ , we construct a matrix from the heat-geodesic dissimilarity, and then use MDS to create the embedding.
Our embedding defines a map $\phi$ that minimizes $\left(d_t(x_i,x_j) - \| \phi (x_i) - \phi (x_j)\| _2\right)^2$ , for all $x_{i},x_{j}\in X$ . Hence, it preserves the heat-geodesic dissimilarity as the loss decreases to zero. In Alg. 1, we present the main steps of our algorithm using the heat-geodesic dissimilarity. A detailed version is presented in Appendix A.

# Algorithm 1 Heat Geodesic Embedding

1: Input: $N \times d$ dataset matrix $\mathbf{X}$ , denoising parameter $\rho \in [0,1]$ , Harnack regularization $\sigma > 0$ , output dimension $k$ .
2: Returns: $N \times k$ embedding matrix $E$ .
3: $\mathbf{H}_t \gets p_K(\mathbf{L}, t)$ $\triangleright$ Heat approximation
4: $t\gets$ Kneedle $\{H(H_t)\} _t$ $\triangleright$ Knee detection, e.g. [26]
5: $\pmb{D} \gets [-4t\log(\pmb{H}_t)_{ij} - \sigma 4t\log(\pmb{V}_t)_{ij}]^{1/2}$ $\triangleright$ log is applied elementwise
6: $D\gets (1 - \rho)D + \rho D_{\mathrm{T}}$ $\triangleright$ Triplet interpolation step
7: Return $\pmb{E} \gets$ MetricMDS $(\pmb{D}, \| \cdot \|_2, k)$

# 5 Relation to other manifold learning methods

In this section, we elucidate theoretical connections between the Heat Geodesic Embedding and other manifold learning methods. We relate embeddings via the eigenvalues of $\mathbf{H}_t$ or $\mathbf{P}^t$ with Laplacian eigenmaps and diffusion maps. We then present the relation between our method and PHATE and SNE. We provide further analysis in Appendix A. In particular, we introduce a new definition of kernel preserving embeddings; either via kernel-based distances (diffusion maps, PHATE) or via similarities (e.g. t-SNE, UMAP).
In the following, we recall the links between the two methods, and show that a rescaled Laplacian eigenmaps embedding preserves the diffusion distance with the heat kernel $\pmb{H}_t$ .

Lemma 5.1. Rescaling the Laplacian eigenmaps embedding with $x_{i} \mapsto (e^{-2t\lambda_{1}}\psi_{1,i},\ldots ,e^{-2t\lambda_{k}}\psi_{k,i})$ preserves the diffusion distance $DM_{H_t}$ .

Relation to PHATE The potential distance in PHATE (Sec. 3) is defined by comparing the transition probabilities of two $t$ -steps random walks initialized from different vertices. The transition matrix $P^t$ mimics the heat propagation on a graph. The heat-geodesic dissimilarity provides a new interpretation of PHATE. In the following proposition, we show how the heat-geodesic relates to the PHATE potential distance with a linear combination of $t$ -steps random walks.

Proposition 5.2. The PHATE potential distance with the heat kernel $PH_{H_t}$ can be expressed in terms of the heat-geodesic dissimilarity with $\sigma = 0$

$$
P H _ {\boldsymbol {H} _ {t}} = (1 / 4 t) ^ {2} \left\| d _ {t} \left(x _ {i}, \cdot\right) - d _ {t} \left(x _ {j}, \cdot\right) \right\| _ {2} ^ {2},
$$

and it is equivalent to a multiscale random walk distance with kernel $\sum_{k \geq 0}m_t(k)\pmb{P}^k$ , where $m_t(k) := t^k e^{-t} / k!$ .

Proof. We present a simplified version of the proof; more details are available in Appendix A. For $\sigma = 0$ , we have $d_{t}(x_{i},x_{j}) = -4t\log (\pmb{H}_{t})_{ij}$ , and the relation between the PHATE potential and the heat-geodesic follows from the definition

$$
P H _ {\boldsymbol {H} _ {t}} (x _ {i}, x _ {j}) = \sum_ {k} \left(- \log \boldsymbol {H} _ {t} (x _ {i}, x _ {k}) + \log \boldsymbol {H} _ {t} (x _ {j}, x _ {k})\right) ^ {2} = (1 / 4 t) ^ {2} \| d _ {t} (x _ {i}, \cdot) - d _ {t} (x _ {j}, \cdot) \| _ {2} ^ {2}.
$$

Using the heat kernel $H_{t}$ with the random walk Laplacian $L_{rw} = Q^{-1}L = I_{n} - Q^{-1}W$ corresponds to a multiscale random walk kernel. We can write $L_{rw} = S\Lambda S^{-1}$ , where $S := Q^{-1/2}\Psi$ . Since $P = I_{n} - L_{rw}$ , we have $P^{t} = S(I_{n} - \Lambda)^{t}S^{-1}$ . Interestingly, we can relate the eigenvalues of $H_{t}$ and $P$ via the Poisson distribution. The probability mass function of a Poisson distribution with mean $t$ is given by $m_{t}(k) := t^{k}e^{-t}/k!$ . For $t \geq 0$ , we have $e^{-t(1 - \mu)} = \sum_{k \geq 0}m_{t}(k)\mu^{k}$ . With this relationship, we can express $H_{t}$ as a linear combination of the powers $P^{k}$ weighted by the Poisson distribution. Indeed, substituting $\lambda = 1 - \mu$ yields

$$
\boldsymbol {H} _ {t} = \boldsymbol {S} e ^ {- t \Lambda} \boldsymbol {S} ^ {- 1} = \boldsymbol {S} \sum_ {k = 0} ^ {\infty} m _ {t} (k) \left(\boldsymbol {I} _ {n} - \Lambda\right) ^ {k} \boldsymbol {S} ^ {- 1} = \sum_ {k = 0} ^ {\infty} m _ {t} (k) \boldsymbol {P} ^ {k}.
$$

Remark 5.3. In the previous proposition, the same argument holds for the symmetric Laplacian and the affinity matrix $\mathbf{A} \coloneqq \mathbf{Q}^{-1/2}\mathbf{W}\mathbf{Q}^{-1/2}$ used in other methods such as diffusion maps [5]. This is valid since we can write $\mathbf{L}_{sym} = \mathbf{Q}^{-1/2}\Psi\Lambda\Psi^{T}\mathbf{Q}^{-1/2}$ , and $\mathbf{A} = \mathbf{I}_n - \mathbf{L}_{sym}$ .

Remark 5.4. This proposition shows that, as the denoising parameter $\rho \to 1$ , Heat Geodesic Embedding interpolates to the PHATE embedding with a weighted kernel $\sum_{k=0}^{\infty} m_t(k) P^k$ .

Relation to SNE The heat-geodesic method also relates to SNE [11], and its variation t-SNE [18], which uses the Student distribution. In SNE, the similarity between points is encoded via transition probabilities $p_{ij}$ .
The objective is to learn an affinity measure $q$ , that depends on the embedding distances $\| y_i - y_j\| _2$ , such that it minimizes $D_{\mathrm{KL}}(p\| q)$ . Intuitively, points that have a strong affinity in the ambient space should also have a strong affinity in the embedded space. Even though the heat-geodesic minimization is directly on the embedding distances, we can show an equivalence with SNE. In Appendix A, we provide additional comparisons between SNE and our method.

Proposition 5.5. The Heat-Geodesic embedding with $\sigma = 0$ and squared distances minimization weighted by the heat kernel is equivalent to SNE with the heat kernel affinity in the ambient space, and a Gaussian kernel in the embedded space $q_{ij} = \exp (-\| y_i - y_j\|^2 /4t)$ .

# 6 Results

In this section, we show the versatility of our method, showcasing its performance in terms of clustering and preserving the structure of continuous manifolds. We compare the performance of Heat Geodesic Embedding with multiple state-of-the-art baselines on synthetic datasets and real-world datasets. For all models, we perform sample splitting with a 50/50 validation-test split. The validation and test sets each consist of 5 repetitions with different random initializations. The hyper-parameters are selected according to the performance on the validation set. We always report the results on the test set, along with the standard deviations computed over the five repetitions. We use the following methods in our experiments: our Heat Geodesic Embedding, diffusion maps [5], PHATE [22], shortest-path (used in Isomap [31]), which estimates the geodesic distance by computing the shortest path between two nodes in a graph built on the point cloud, $t$ -SNE [32], UMAP [19], and metric MDS with Euclidean distance. Details about each of these methods, and results for different parameters (graph type, heat approximation, etc.) are given in Appendix C.
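
As a bridge between the theory above and the experiments below, steps 3-6 of Alg. 1 can be sketched in plain Python. This is a small-scale illustration under simplifying assumptions of ours: the exact matrix exponential stands in for the Chebyshev approximation $p_K(\boldsymbol{L}, t)$ of eq. (4), a simple farthest-from-chord rule stands in for the Kneedle detector, and the final MetricMDS step is left to any stress-minimizing solver applied to the returned matrix:

```python
import numpy as np
from scipy.linalg import expm

def heat_geodesic_dissimilarity(L, t, sigma=1.0):
    """Step 5: D_ij = [-4t log(H_t)_ij - sigma * 4t log(V_t)_ij]^(1/2),
    with (V_t)_ij = 2 / ((H_t)_ii + (H_t)_jj); expm stands in for p_K(L, t)."""
    H = np.clip(expm(-t * L), 1e-12, None)      # guard the elementwise log
    h = np.diag(H)
    V = 2.0 / (h[:, None] + h[None, :])
    D2 = -4.0 * t * np.log(H) - sigma * 4.0 * t * np.log(V)
    return np.sqrt(np.clip(D2, 0.0, None))      # D2 >= 0 since H_ij <= (H_ii+H_jj)/2

def triplet_interpolation(D, rho):
    """Step 6: blend D with the triplet distance D_T(i, j) = ||D_i. - D_j.||_2."""
    DT = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=-1)
    return (1.0 - rho) * D + rho * DT

def entropy_knee_time(L, times):
    """Step 4 heuristic: knee of t -> H(H_t), the heat-kernel entropy (a crude
    stand-in for Kneedle [26]: the point farthest from the endpoint chord)."""
    ent = []
    for t in times:
        H = np.clip(expm(-t * L), 1e-12, None)
        ent.append(-(H * np.log(H)).sum())
    ent = np.asarray(ent)
    x = (times - times[0]) / (times[-1] - times[0])
    y = (ent - ent[0]) / (ent[-1] - ent[0] + 1e-12)
    return times[int(np.argmax(np.abs(y - x)))]
```

With $\sigma = 1$ the diagonal of the returned matrix is exactly zero, matching the property stated after Definition 4.1.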
+ +Table 1: Pearson and Spearman correlation between the inferred and ground truth distance matrices on the Swiss roll and Tree datasets (higher is better). Best models on average are bolded. + +

| Distance | Swiss roll (Pearson) | Swiss roll (Spearman) | Tree (Pearson) | Tree (Spearman) |
| --- | --- | --- | --- | --- |
| Diffusion distance | 0.476 ± 0.226 | 0.478 ± 0.138 | 0.656 ± 0.054 | 0.653 ± 0.057 |
| PHATE potential | 0.457 ± 0.01 | 0.404 ± 0.024 | 0.766 ± 0.023 | 0.743 ± 0.028 |
| Shortest path | 0.497 ± 0.144 | 0.558 ± 0.134 | 0.780 ± 0.009 | 0.757 ± 0.019 |
| Euclidean | 0.365 ± 0.006 | 0.413 ± 0.005 | 0.735 ± 0.014 | 0.704 ± 0.033 |
| Heat-geodesic (ours) | **0.702 ± 0.086** | **0.700 ± 0.073** | **0.822 ± 0.008** | **0.807 ± 0.016** |
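
The protocol behind Table 1 compares an estimated distance matrix with the ground truth entrywise; a minimal version of that comparison is sketched below. This is our own illustration (the function name is ours), and only the strictly upper-triangular entries are used so the zero diagonal does not inflate the scores:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def distance_matrix_correlations(D_est, D_true):
    """Pearson and Spearman correlation between two n x n distance matrices,
    computed on the strictly upper-triangular entries (i < j)."""
    iu = np.triu_indices_from(np.asarray(D_true), k=1)
    est, true = np.asarray(D_est)[iu], np.asarray(D_true)[iu]
    return pearsonr(est, true)[0], spearmanr(est, true)[0]
```
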

# 6.1 Distance matrix comparison

We start by evaluating the ability of the different distances or dissimilarities to recover the ground truth distance matrix of a point cloud. For this task, we use the Swiss roll and Tree datasets, for which the ground truth geodesic distance is known. The Swiss roll dataset consists of data points sampled on a smooth manifold (see Fig. 1). The Tree dataset is created by connecting multiple high-dimensional Brownian motions in a tree-shaped structure. In Fig. 1, we present embeddings of both datasets. Our method recovers the underlying geometry, while other methods create artificial clusters or apply too much denoising. Because we aim for faithful relative distances between data points, we compare the methods according to the Pearson and Spearman correlations of the estimated distance matrices with respect to ground truth. Results are displayed in Tab. 1. We observe that Heat Geodesic Embedding typically outperforms previous methods in terms of the correlation with the ground truth distance matrix, confirming the theoretical guarantees provided in Secs. 2 & 4. Additional results such as computation time and correlation for different noise levels are available in Appendix C.

Optimal diffusion time In Section 4, we described a heuristic to automatically choose the diffusion time based on the entropy of $H_{t}$ . In Fig. 2, we show that the knee-point of $t \mapsto H(H_{t})$ corresponds to a high correlation with the ground truth distance, while yielding a low approximation error of the distance matrix (measured by the Frobenius norm of the difference between $D$ and the ground truth).

# 6.2 Preservation of the inherent data structure

A crucial evaluation criterion of manifold learning methods is the ability to capture the inherent structure of the data. For instance, clusters in the data should be visible in the resulting low dimensional representation.
Similarly, when the dataset consists of samples taken at different time points, one expects to be able to characterize this temporal evolution in the low dimensional embedding [22]. We thus compare the different embedding methods according to their ability to retain clusters and temporal evolution of the data.

![](images/8e1a81b1c545e804360f82fef258522c4b9de49883cbaffb8900c4fd0c9bb39f.jpg)
Figure 2: Evolution of the correlation between estimated and ground truth distance matrices as a function of the diffusion time $t$ .

![](images/735c616a64a517e4b18b55e86aafc5b756024ec722fc60a942aa578b3e395fc9.jpg)
Figure 3: Embeddings of 2000 differentiating cells from embryoid body [22] over 28 days. UMAP and t-SNE do not capture the continuous manifold representing the cells' evolution.

![](images/0e56e81aa1b2b95917db4998a21ce38dfa67c26d39eb4751003336a34ea18e3b.jpg)
Figure 4: Embeddings on PBMC using the triplet distance with the heat-geodesic for different regularization parameters $\rho$ .

Identifying clusters. We use the PBMC dataset, the Swiss roll, the Tree dataset, MNIST [8], and the COIL-20 [23] dataset. The PBMC dataset consists of single-cell gene expressions from 3000 individual peripheral blood mononuclear cells. Cells are naturally clustered by their cell type. For the Tree dataset, we use the branches as clusters. For the Swiss roll dataset, we sample data points on the manifold according to a mixture of Gaussians and use the mixture component as the ground truth cluster labels. The MNIST and COIL-20 datasets are clustered by digits or objects but may not respect the manifold hypothesis. For each method, we run k-means on the two-dimensional embedding and compare the resulting cluster assignments with ground truth. Tab. 2 reports the results in terms of homogeneity and adjusted mutual information (aMI). Heat Geodesic Embedding is competitive with PHATE and outperforms t-SNE and UMAP on all metrics except on the MNIST and COIL-20 datasets.
Yet, we show in Appendix C that all methods tend to perform equally well when the noise level increases. In Fig. 4, we present the PBMC embeddings of PHATE and HeatGeo, showing that HeatGeo interpolates to PHATE for $\rho \rightarrow 1$ . + +Table 2: Clustering quality metrics for different methods. We report the homogeneity and the adjusted mutual information (aMI). Best models on average are bolded (higher is better). + +

| Method | Swiss roll (Homog.) | Swiss roll (aMI) | Tree (Homog.) | Tree (aMI) | PBMC (Homog.) | PBMC (aMI) | MNIST (Homog.) | MNIST (aMI) | COIL (Homog.) | COIL (aMI) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UMAP | 0.810 ± 0.036 | 0.726 ± 0.045 | 0.678 ± 0.086 | 0.681 ± 0.086 | 0.177 ± 0.037 | 0.148 ± 0.035 | 0.851 ± 0.016 | 0.86 ± 0.015 | 0.871 ± 0.009 | 0.826 ± 0.012 |
| t-SNE | 0.748 ± 0.067 | 0.668 ± 0.068 | 0.706 ± 0.054 | 0.712 ± 0.055 | 0.605 ± 0.019 | 0.544 ± 0.022 | **0.903 ± 0.003** | **0.902 ± 0.003** | **0.907 ± 0.014** | **0.88 ± 0.02** |
| Isomap | 0.806 ± 0.089 | **0.743 ± 0.065** | 0.673 ± 0.045 | 0.691 ± 0.042 | 0.242 ± 0.007 | 0.21 ± 0.007 | 0.742 ± 0.001 | 0.74 ± 0.001 | / | / |
| Non-Metric MDS | 0.003 ± 0.001 | 0.0 ± 0.001 | 0.003 ± 0.001 | 0.001 ± 0.001 | 0.011 ± 0.003 | 0.001 ± 0.003 | 0.019 ± 0.003 | 0.001 ± 0.004 | 0.296 ± 0.02 | 0.005 ± 0.027 |
| PHATE | 0.731 ± 0.035 | 0.652 ± 0.046 | 0.550 ± 0.042 | 0.555 ± 0.042 | **0.798 ± 0.012** | **0.785 ± 0.01** | 0.822 ± 0.01 | 0.835 ± 0.011 | 0.804 ± 0.017 | 0.735 ± 0.021 |
| Diffusion Maps | 0.643 ± 0.053 | 0.585 ± 0.051 | 0.341 ± 0.103 | 0.358 ± 0.093 | 0.026 ± 0.001 | 0.038 ± 0.001 | 0.556 ± 0.002 | 0.622 ± 0.002 | 0.21 ± 0.036 | 0.142 ± 0.024 |
| HeatGeo (ours) | **0.820 ± 0.008** | 0.740 ± 0.018 | **0.784 ± 0.051** | **0.786 ± 0.051** | 0.734 ± 0.009 | 0.768 ± 0.017 | 0.785 ± 0.0 | 0.829 ± 0.001 | 0.849 ± 0.016 | 0.806 ± 0.022 |
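
The homogeneity column of Table 2 measures whether each k-means cluster contains members of a single ground-truth class. As an illustration of the metric, here is a numpy re-implementation of ours (the reported numbers come from the standard library implementations of homogeneity and aMI):

```python
import numpy as np

def homogeneity(labels_true, labels_pred):
    """Homogeneity score 1 - H(C|K) / H(C): 1.0 when every predicted cluster
    contains members of a single true class, 0.0 when the clustering carries
    no information about the classes."""
    y, c = np.unique(np.asarray(labels_true), return_inverse=True)
    z, k = np.unique(np.asarray(labels_pred), return_inverse=True)
    cont = np.zeros((len(y), len(z)))
    np.add.at(cont, (c, k), 1.0)                 # contingency counts
    p_ck = cont / cont.sum()                     # joint p(class, cluster)
    p_c = p_ck.sum(axis=1)
    p_k = p_ck.sum(axis=0)
    h_c = -np.sum(p_c[p_c > 0] * np.log(p_c[p_c > 0]))              # H(C)
    mask = p_ck > 0
    h_c_given_k = -np.sum(p_ck[mask] * np.log((p_ck / p_k)[mask]))  # H(C|K)
    return 1.0 if h_c == 0.0 else 1.0 - h_c_given_k / h_c
```

Note that homogeneity alone rewards over-segmentation (singleton clusters score 1.0), which is why Table 2 also reports the adjusted mutual information.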

Temporal data representation. For this task, we aim at representing data points from populations observed at consecutive points in time. We use single cell gene expression datasets collected across different time points, including the Embryoid Body (EB), IPSC [22], and two from the 2022 NeurIPS multimodal single-cell integration challenge (Cite & Multi). To quantitatively evaluate the quality of the continuous embeddings, we first embed the entire dataset and obfuscate all samples from a particular time point ( $e.g., t = 2$ ). We then estimate the distribution of the missing time point by using displacement interpolation [35] between the adjacent time points ( $e.g., t = 1$ and $t = 3$ ). We report the Earth Mover Distance (EMD) between the predicted distribution and the true distribution. A low EMD suggests that the obfuscated embeddings are naturally located between the previous and later time points, and that the generated embedding captures the temporal evolution of the data adequately. Results are presented in Tab. 3. Heat Geodesic Embedding outperforms other methods on the EB, Multi, and IPSC datasets and is competitive with other approaches on Cite. We show a graphical depiction of the different embeddings for the embryoid body (EB) dataset in Fig. 3.

Table 3: EMD between a linear interpolation of two consecutive time points $t - 1, t + 1$ , and the time point $t$ . Best models on average are bolded (lower is better).
| Method | Cite | EB | Multi | IPSC |
| --- | --- | --- | --- | --- |
| Euclidean | 0.978 ± 0.069 | 1.012 ± 0.039 | 1.212 ± 0.199 | 1.085 ± 0.234 |
| Isomap | 0.978 ± 0.105 | 0.993 ± 0.062 | 1.299 ± 0.307 | 1.026 ± 0.253 |
| Non-metric MDS | 0.81 ± 0.012 | 0.85 ± 0.014 | 0.806 ± 0.015 | 1.013 ± 0.067 |
| UMAP | 0.791 ± 0.045 | 0.942 ± 0.053 | 1.418 ± 0.042 | 0.866 ± 0.058 |
| t-SNE | 0.905 ± 0.034 | 0.964 ± 0.032 | 1.208 ± 0.087 | 1.006 ± 0.026 |
| PHATE | 1.032 ± 0.037 | 1.088 ± 0.012 | 1.254 ± 0.042 | 0.955 ± 0.033 |
| Diffusion Maps | 0.989 ± 0.080 | 0.965 ± 0.077 | 1.227 ± 0.086 | 0.821 ± 0.039 |
| HeatGeo (ours) | 0.890 ± 0.046 | 0.733 ± 0.036 | 0.958 ± 0.044 | 0.365 ± 0.056 |
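The displacement-interpolation evaluation described above can be sketched numerically. This is a minimal illustration rather than the paper's implementation: it assumes equal-size samples with uniform weights, in which case both the exact EMD and the McCann (displacement) interpolation reduce to an optimal assignment via `scipy.optimize.linear_sum_assignment`; the function names are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd(a, b):
    # Exact EMD between two equal-size point clouds with uniform weights:
    # for point masses 1/n, the optimal transport plan is an assignment.
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

def displacement_interpolation(a, b, alpha=0.5):
    # McCann interpolation between equal-size samples: slide each point of
    # `a` a fraction `alpha` of the way along its optimal matching to `b`.
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return (1.0 - alpha) * a[rows] + alpha * b[cols]

# Toy version of the evaluation: hold out time t, interpolate t-1 and t+1.
rng = np.random.default_rng(0)
t_prev = rng.normal(loc=0.0, scale=0.3, size=(100, 2))  # embeddings at t-1
t_next = rng.normal(loc=2.0, scale=0.3, size=(100, 2))  # embeddings at t+1
t_true = rng.normal(loc=1.0, scale=0.3, size=(100, 2))  # held-out time t

t_pred = displacement_interpolation(t_prev, t_next, alpha=0.5)
score = emd(t_pred, t_true)  # lower EMD = better temporal interpolation
```

For unequal sample sizes or non-uniform weights, a full optimal-transport solver would be needed instead of the assignment shortcut.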
# 7 Conclusion and Limitations

The ability to visualize complex high-dimensional data in an interpretable and rigorous way is a crucial tool of scientific discovery. In this work, we took a step in that direction by proposing a general framework for understanding diffusion-based dimensionality reduction methods through the lens of Riemannian geometry. This allowed us to define a novel embedding based on the heat-geodesic dissimilarity, a more direct measure of manifold distance. Theoretically, we showed that our method brings greater versatility than previous approaches and can help gain insight into popular manifold learning methods such as diffusion maps, PHATE, and SNE. Experimentally, we demonstrated that it also results in better geodesic distance preservation and excels both at clustering and at preserving the structure of a continuous manifold. This contrasts with previous methods, which are typically effective at only one of these tasks.

Despite the strong theoretical and empirical properties, our work presents some limitations. For instance, our method is based on a similarity measure, which is a relaxation of a distance metric. Additionally, the Harnack equation suggests that our parameters for the volume correction could be tuned depending on the underlying manifold. We envision that further analysis of this regularization is a fruitful direction for future work.

# Acknowledgments and Disclosure of Funding

This research was enabled in part by compute resources provided by Mila (mila.quebec). It was partially funded and supported by ESP Mérète [G.H.], CIFAR AI Chair [G.W.], NSERC Discovery grant 03267 [G.W.], NIH grants (1F30AI157270-01, R01HD100035, R01GM130847, R01GM135929) [G.W., S.K.], NSF Career grant 2047856 [S.K.], the Chan-Zuckerberg Initiative grants CZF2019-182702 and CZF2019-002440 [S.K.], the Sloan Fellowship FG-2021-15883 [S.K.], and the Novo Nordisk grant GR112933 [S.K.].
The content provided here is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies. The funders had no role in study design, data collection & analysis, decision to publish, or preparation of the manuscript. + +# References + +[1] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373-1396, 2003. +[2] Jan Niklas Böhm, Philipp Berens, and Dmitry Kobak. Attraction-repulsion spectrum in neighbor embeddings. Journal of Machine Learning Research, 23(95):1-32, 2022. +[3] Erica L Busch, Jessie Huang, Andrew Benz, Tom Wallenstein, Guillaume Lajoie, Guy Wolf, Smita Krishnaswamy, and Nicholas B Turk-Browne. Multi-view manifold learning of human brain-state trajectories. Nature Computational Science, pages 1-14, 2023. +[4] J Douglas Carroll and Phipps Arabie. Multidimensional scaling. Measurement, judgment and decision making, pages 179-250, 1998. +[5] Ronald R. Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5-30, 2006. +[6] Keenan Crane, Clarisse Weischedel, and Max Wardetzky. Geodesics in heat: A new approach to computing distance based on heat flow. ACM Transactions on Graphics (TOG), 32(5):1-11, 2013. +[7] Michael Defferrard, Lionel Martin, Rodrigo Pena, and Nathanael Perraudin. Pygsp: Graph signal processing in python. +[8] Li Deng. The mnist database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141-142, 2012. +[9] Laleh Haghverdi, Maren Böttner, F. Alexander Wolf, Florian Buettner, and Fabian J. Theis. Diffusion pseudotime robustly reconstructs lineage branching. Nature Methods, 13(10):845-848, 2016. + +[10] Matthieu Heitz, Nicolas Bonneel, David Coeurjolly, Marco Cuturi, and Gabriel Peyre. Ground metric learning on graphs. Journal of Mathematical Imaging and Vision, 63:89-107, 2021. +[11] Geoffrey E Hinton and Sam Roweis. 
Stochastic neighbor embedding. Advances in neural information processing systems, 15, 2002. +[12] Michael C Hout, Megan H Papesh, and Stephen D Goldinger. Multidimensional scaling. Wiley Interdisciplinary Reviews: Cognitive Science, 4(1):93-103, 2013. +[13] Shih-Gu Huang, Ilwoo Lyu, Anqi Qiu, and Moo K Chung. Fast polynomial approximation of heat kernel convolution on manifolds and its application to brain sulcal and gyral graph pattern analysis. IEEE transactions on medical imaging, 39(6):2201-2212, 2020. +[14] Guillaume Huguet, Alexander Tong, Bastian Rieck, Jessie Huang, Manik Kuchroo, Matthew Hirn, Guy Wolf, and Smita Krishnaswamy. Time-inhomogeneous diffusion geometry and topology. arXiv preprint arXiv:2203.14860, 2022. +[15] Guillaume Huguet, Alexander Tong, María Ramos Zapatero, Guy Wolf, and Smita Krishnaswamy. Geodesic Sinkhorn: optimal transport for high-dimensional datasets. arXiv preprint arXiv:2211.00805, 2022. +[16] Joseph B Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1-27, 1964. +[17] Peter Li and Shing Tung Yau. On the parabolic kernel of the Schrödinger operator. Acta Mathematica, 156(none):153 - 201, 1986. +[18] George C. Linderman and Stefan Steinerberger. Clustering with t-SNE, provably. arXiv:1706.02582 [cs, stat], 2017. +[19] Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426, 2018. +[20] Stanislav A Molchanov. Diffusion processes and riemannian geometry. Russian Mathematical Surveys, 30(1):1, 1975. +[21] Kevin R. Moon, Jay S. Stanley, Daniel Burkhardt, David van Dijk, Guy Wolf, and Smita Krishnaswamy. Manifold learning-based methods for analyzing single-cell RNA-sequencing data. Current Opinion in Systems Biology, 7:36–46, 2018. +[22] Kevin R. Moon, David van Dijk, Zheng Wang, Scott Gigante, Daniel B. Burkhardt, William S. 
Chen, Kristina Yim, Antonia van den Elzen, Matthew J. Hirn, Ronald R. Coifman, Natalia B. Ivanova, Guy Wolf, and Smita Krishnaswamy. Visualizing structure and transitions in high-dimensional biological data. Nat Biotechnol, 37(12):1482-1492, 2019. +[23] Samer A. Nene, Shree K. Nayar, and Hiroshi Murase. Columbia object image library (coil-20). Technical Report CUCS-005-96, Department of Computer Science, Columbia University, February 1996. +[24] Adam Nowak, Peter Sjögren, and Tomasz Z Szarek. Sharp estimates of the spherical heat kernel. Journal de Mathématiques Pures et Appliquées, 129:23-33, 2019. +[25] Laurent Saloff-Coste. The heat kernel and its estimates. Probabilistic approach to geometry, 57:405-436, 2010. +[26] Ville Satopaa, Jeannie Albrecht, David Irwin, and Barath Raghavan. Finding a" kneedle" in a haystack: Detecting knee points in system behavior. In 2011 31st international conference on distributed computing systems workshops, pages 166-171. IEEE, 2011. +[27] Nicholas Sharp, Yousuf Soliman, and Keenan Crane. The vector heat method. ACM Transactions on Graphics (TOG), 38(3):1-19, 2019. +[28] Justin Solomon, Fernando De Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Transactions on Graphics (ToG), 34(4):1-11, 2015. + +[29] Jian Sun, Maks Ovsjanikov, and Leonidas Guibas. A concise and provably informative multiscale signature based on heat diffusion. In Computer graphics forum, volume 28, pages 1383-1392. Wiley Online Library, 2009. +[30] Yoshio Takane, Forrest W Young, and Jan De Leeuw. Nonmetric individual differences multidimensional scaling: An alternating least squares method with optimal scaling features. Psychometrika, 42:7-67, 1977. +[31] Joshua B Tenenbaum, Vin de Silva, and John C Langford. A global geometric framework for nonlinear dimensionality reduction. science, 290(5500):2319-2323, 2000. 
[32] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11), 2008.
[33] David Van Dijk, Roshan Sharma, Juozas Nainys, Kristina Yim, Pooja Kathail, Ambrose J Carr, Cassandra Burdziak, Kevin R Moon, Christine L Chaffer, Diwakar Pattabiraman, et al. Recovering gene interactions from single-cell data using data diffusion. Cell, 174(3):716-729, 2018.
[34] Sathamangalam R Srinivasa Varadhan. On the behavior of the fundamental solution of the heat equation with variable coefficients. Communications on Pure and Applied Mathematics, 20(2):431-455, 1967.
[35] Cédric Villani. Displacement interpolation. Optimal Transport: Old and New, pages 113-162, 2009.
[36] F Alexander Wolf, Philipp Angerer, and Fabian J Theis. Scanpy: large-scale single-cell gene expression data analysis. Genome Biology, 19:1-5, 2018.
[37] Jonathan X Zheng, Samraat Pawar, and Dan FM Goodman. Graph drawing by stochastic gradient descent. IEEE Transactions on Visualization and Computer Graphics, 25(9):2738-2748, 2018.

# Appendix

# A Theory and algorithm details

# A.1 Kernel preserving embeddings

In this section, we attempt to create a generalized framework for dimensionality reduction methods. These methods have often been viewed as disparate or competing, but here we show that many of them are related to one another given the right template for methodology comparison. To do this, we introduce a general definition suited for distance-preserving dimensionality reduction methods. With this definition, we can cast many dimensionality reduction methods within the same framework and easily compare them. We recall that the observations in the ambient space are denoted $x$, and those in the embedded space are denoted $y$. The definition relies on kernel functions $H_{t}^{x}$, $H_{t}^{y}$ defined respectively on the ambient and embedded spaces, and on transformations $T^{x}$, $T^{y}$ applied to the kernels.
We recall that a divergence $f: \mathbb{R} \times \mathbb{R} \to \mathbb{R}^{+}$ is such that $f(a, b) = 0$ if and only if $a = b$, and $f(a, a + \delta)$ is a positive semi-definite quadratic form for infinitesimal $\delta$.

Definition A.1. We define a kernel feature-preserving embedding as an embedding which minimizes a loss $L$ between a transformation $T^x$ of the ambient space kernel $H_t^x$ and its embedded space counterpart

$$
L := f \left(T ^ {x} \left(H _ {t} ^ {x}\right), T ^ {y} \left(H _ {t ^ {\prime}} ^ {y}\right)\right), \tag {6}
$$

where $f$ is any $C^2$ divergence on $\mathbb{R}_{\geq 0}$.

Example 1. We formulate MDS as a kernel feature-preserving embedding. To preserve the Euclidean distance, we take $H_{t}^{x}(x_{i},x_{j}) = \| x_{i} - x_{j}\|_{2}$, $H_{t}^{y}(y_{i},y_{j}) = \| y_{i} - y_{j}\|_{2}$, $f(a,b) = \| a - b\|_{2}$, and $T^x = T^y = I$.

In the following, we present popular dimensionality reduction methods that are kernel feature-preserving embeddings. With this definition, we can distinguish between methods that preserve a kernel via affinities and those that preserve it via distances. For the methods considered in this work, $H_{t}^{x}$ is an affinity kernel, but its construction varies from one method to another. In PHATE and diffusion maps, $H_{t}^{x}$ is a random walk matrix $P$, while in Heat Geodesic Embedding we use the heat kernel $H_{t}$. t-SNE defines $H_{t}^{x}$ as a symmetrized random walk matrix from a Gaussian kernel, while UMAP uses an unnormalized version. Methods such as PHATE and diffusion maps define a new distance matrix from a kernel in the ambient space and preserve these distances in the embedded space. Other methods like t-SNE and UMAP define similarities from a kernel and aim to preserve these similarities between the ambient and embedded spaces via an entropy-based loss. We denote the Kullback-Leibler divergence by $D_{\mathrm{KL}}(a,b) = \sum_{ij}a_{ij}\log [a_{ij} / b_{ij}]$.

Proposition A.2.
The embedding methods HeatGeo, PHATE, Diffusion Maps, SNE, t-SNE, and UMAP are kernel feature-preserving embeddings.

Proof. We assume that the affinity kernel in the ambient space, $H_{t}^{x}$, is given; to complete the proof we need to define $f, H_{t}^{y}, T^{x}, T^{y}$ for all methods.

We start with the distance-preserving embeddings: HeatGeo, PHATE, and Diffusion Maps. For these methods, the kernel in the embedded space is simply $H_{t}^{y}(y_{i},y_{j}) = \| y_{i} - y_{j}\|_{2}$, without transformation, i.e. $T^y = I$. Since they preserve a distance, the loss is $f(T^{x}(H_{t}^{x}),T^{y}(H_{t^{\prime}}^{y})) = \| T^{x}(H_{t}^{x}) - T^{y}(H_{t^{\prime}}^{y})\|_{2}$.

In the Heat Geodesic Embedding we apply a transformation on $H_{t}^{x} = \pmb{H}_{t}$ to define a dissimilarity, hence $T^{x}(H_{t}^{x}) = -t\log H_{t}^{x}$ (for $\sigma = 0$), where $\log$ is applied elementwise.

In PHATE, the potential distance is equivalent to $(T^{x}(H_{t}^{x}))_{ij} = \| -\log (H_{t}^{x})_{i} + \log (H_{t}^{x})_{j}\|_{2}$. In Diffusion Maps, the diffusion distance is $(T^{x}(H_{t}^{x}))_{ij} = \| (H_{t}^{x})_{i} - (H_{t}^{x})_{j}\|_{2}$.

SNE, t-SNE, and UMAP preserve affinities from a kernel. For these three methods, the loss is a divergence between distributions, namely $f = D_{\mathrm{KL}}$. They differ in the affinity kernel and in the transformation used in the embedded space. SNE uses the unnormalized kernel $H_t^y(y_i, y_j) = \exp(-\left(1/t\right)\|y_i - y_j\|_2^2)$, with $T^x = T^y = I$, whereas t-SNE uses $(H_1^y)_{ij} = (1 + \|y_i - y_j\|^2)^{-1}$,

Table 4: Overview of kernel preserving methods.
| Method | $H_t^y(y_i, y_j)$ | $T^x(H_t^x)$ | $T^y(H_t^y)$ | $f$ |
| --- | --- | --- | --- | --- |
| PHATE | $\Vert y_i - y_j\Vert_2$ | $\Vert -\log (H_t^x)_i + \log (H_t^x)_j\Vert_2$ | $H_t^y$ | $\Vert\cdot\Vert_2$ |
| Heat Geodesic | $\Vert y_i - y_j\Vert_2$ | $-t\log (H_t^x)_{ij}$ | $H_t^y$ | $\Vert\cdot\Vert_2$ |
| Diffusion Maps | $\Vert y_i - y_j\Vert_2$ | $\Vert (H_t^x)_i - (H_t^x)_j\Vert_2$ | $H_t^y$ | $\Vert\cdot\Vert_2$ |
| SNE | $\exp(-(1/t)\Vert y_i - y_j\Vert^2)$ | $H_t^x$ | $H_t^y$ | $D_{\mathrm{KL}}$ |
| t-SNE | $(1 + \Vert y_i - y_j\Vert^2)^{-1}$ | $H_t^x$ | $H_t^y$ | $D_{\mathrm{KL}}$ |
| UMAP | $(1 + \Vert y_i - y_j\Vert^2)^{-1}$ | $H_t^x$ | $(H_1^y)_{ij}/(1 - (H_1^y)_{ij})$ | $D_{\mathrm{KL}}$ |
and $T^x = T^y = I$. UMAP defines a pointwise transformation in the embedded space with $(H_1^y)_{ij} = (1 + \| y_i - y_j\|^2)^{-1}$, $(T^y(H_t^y))_{ij} = (H_1^y)_{ij} / (1 - (H_1^y)_{ij})$, and $T^x = I$.

We summarize the choice of kernels and functions in Tab. 4.

![](images/be187d0dd30c936150fd50a7a18e4dc6faf3cf3626c3b51df741f4724f48fb6f.jpg)

# A.2 Proofs

Proposition 4.3. Denote the perturbed triplet distance by $\widetilde{D_{\mathrm{T}}} (x_i,x_j) = \|\tilde{d}_t(x_i,\cdot) - \tilde{d}_t(x_j,\cdot)\|_2$ where $\tilde{d}_t(x_i,x_j)\coloneqq d_t(x_i,x_j) + \epsilon$ and $\tilde{d}_t(x_i,x_k)\coloneqq d_t(x_i,x_k)$ for $k\neq j$. Then the triplet distance $D_{\mathrm{T}}$ is robust to perturbations, i.e., for all $\epsilon >0$,

$$
\left(\frac {\widetilde {D _ {\mathrm {T}}} (x _ {i} , x _ {j})}{D _ {\mathrm {T}} (x _ {i} , x _ {j})}\right) ^ {2} \leq \left(\frac {d _ {t} (x _ {i} , x _ {j}) + \epsilon}{d _ {t} (x _ {i} , x _ {j})}\right) ^ {2}.
$$

Proof of Proposition 4.3. The effect of the noise on the squared distance is $(d_t(x_i, x_j) + \epsilon)^2 / d_t(x_i, x_j)^2 = 1 + (2\epsilon d_t(x_i, x_j) + \epsilon^2) / d_t(x_i, x_j)^2$.
Denoting the perturbed triplet distance by $\widetilde{D_{\mathrm{T}}}$, we have

$$
\frac {\widetilde {D _ {\mathrm {T}}} (x _ {i} , x _ {j}) ^ {2}}{D _ {\mathrm {T}} (x _ {i} , x _ {j}) ^ {2}} = \frac {\sum_ {k \neq i , j} \left(d _ {t} (x _ {i} , x _ {k}) - d _ {t} (x _ {j} , x _ {k})\right) ^ {2} + 2 (d _ {t} (x _ {i} , x _ {j}) + \epsilon) ^ {2}}{D _ {\mathrm {T}} (x _ {i} , x _ {j}) ^ {2}} = 1 + \frac {4 \epsilon d _ {t} (x _ {i} , x _ {j}) + 2 \epsilon^ {2}}{D _ {\mathrm {T}} (x _ {i} , x _ {j}) ^ {2}},
$$

and it remains to show that

$$
\frac {4 \epsilon d _ {t} (x _ {i} , x _ {j}) + 2 \epsilon^ {2}}{D _ {\mathrm {T}} (x _ {i} , x _ {j}) ^ {2}} \leq \frac {2 \epsilon d _ {t} (x _ {i} , x _ {j}) + \epsilon^ {2}}{d _ {t} (x _ {i} , x _ {j}) ^ {2}}.
$$

For $\epsilon > 0$, this gives

$$
\epsilon \geq \frac {4 d _ {t} (x _ {i} , x _ {j}) ^ {3} - 2 d _ {t} (x _ {i} , x _ {j}) D _ {\mathrm {T}} (x _ {i} , x _ {j}) ^ {2}}{D _ {\mathrm {T}} (x _ {i} , x _ {j}) ^ {2} - 2 d _ {t} (x _ {i} , x _ {j}) ^ {2}} = - 2 d _ {t} (x _ {i}, x _ {j}).
$$

For $\epsilon < 0$, we have

$$
\epsilon \leq \frac {4 d _ {t} (x _ {i} , x _ {j}) ^ {3} - 2 d _ {t} (x _ {i} , x _ {j}) D _ {\mathrm {T}} (x _ {i} , x _ {j}) ^ {2}}{D _ {\mathrm {T}} (x _ {i} , x _ {j}) ^ {2} - 2 d _ {t} (x _ {i} , x _ {j}) ^ {2}} = - 2 d _ {t} (x _ {i}, x _ {j}).
$$

Thus $\epsilon \in (-\infty, -2d_t(x_i, x_j)) \cup (0, \infty)$. Since we require the perturbation $\epsilon < d_t(x_i, x_j)$, we choose $\epsilon \in (0, \infty)$.

![](images/e2bb3dc3dfb5697b61ae4e112590e171bee108cc56f4cfe5bbcbaea80e2ecb18.jpg)

Lemma 5.1. Rescaling the Laplacian eigenmaps embedding with $x_{i} \mapsto (e^{-2t\lambda_{1}}\psi_{1,i},\ldots ,e^{-2t\lambda_{k}}\psi_{k,i})$ preserves the diffusion distance $DM_{H_t}$.

Proof of Lemma 5.1.
Since the eigenvectors of $\mathbf{H}_t$ form an orthonormal basis of $\mathbb{R}^n$, and since its first eigenvector is constant, we can write the diffusion distance $\| \delta_i \mathbf{H}_t - \delta_j \mathbf{H}_t \|_2^2 = \sum_{k \geq 0} e^{-2t\lambda_k} (\psi_{ki} - \psi_{kj})^2 = \sum_{k \geq 1} e^{-2t\lambda_k} (\psi_{ki} - \psi_{kj})^2$. In particular, this defines the $k$-dimensional embedding $x \mapsto (e^{-t\lambda_1} \psi_1(x), \ldots, e^{-t\lambda_k} \psi_k(x))$.

Proposition 5.2. The PHATE potential distance with the heat kernel $PH_{H_t}$ can be expressed in terms of the heat-geodesic dissimilarity with $\sigma = 0$,

$$
P H _ {\boldsymbol {H} _ {t}} = (1 / 4 t) ^ {2} \left\| d _ {t} (x _ {i}, \cdot) - d _ {t} (x _ {j}, \cdot) \right\| _ {2} ^ {2},
$$

and it is equivalent to a multiscale random walk distance with kernel $\sum_{k > 0}m_t(k)\pmb{P}^k$, where $m_t(k)\coloneqq t^k e^{-t} / k!$.

Proof of Proposition 5.2. For $\sigma = 0$, we have $d_{t}(x_{i},x_{j}) = -4t\log (H_{t})_{ij}$; the relation between the PHATE potential and the heat-geodesic dissimilarity follows from the definition

$$
\begin{array}{l} P H _ {\boldsymbol {H} _ {t}} = \sum_ {k} \left(- \log \boldsymbol {H} _ {t} (x _ {i}, x _ {k}) + \log \boldsymbol {H} _ {t} (x _ {j}, x _ {k})\right) ^ {2} \\ = (1 / 4 t) ^ {2} \left\| d _ {t} \left(x _ {i}, \cdot\right) - d _ {t} \left(x _ {j}, \cdot\right) \right\| _ {2} ^ {2}. \\ \end{array}
$$

Using the heat kernel $H_{t}$ with the random walk Laplacian $L_{rw} = Q^{-1}L = I_{n} - Q^{-1}W$ corresponds to a multiscale random walk kernel. Recall that we can write $L_{rw}$ in terms of the symmetric Laplacian as $L_{rw} = Q^{-1 / 2}L_sQ^{1 / 2}$, meaning that the two matrices are similar, hence admit the same eigenvalues $\Lambda$. We also know that $L_{s}$ is diagonalizable, since it is symmetric: $L_{s} = Q^{-1 / 2}LQ^{-1 / 2} = \Psi \Lambda \Psi^{T}$.
In particular, we have $L_{rw} = S\Lambda S^{-1}$, where $S\coloneqq Q^{-1 / 2}\Psi$. The random walk matrix can be written as $P = I_n - L_{rw}$, hence its eigenvalues are $(I_n - \Lambda)$, and we can write $P^t = S(I_n - \Lambda)^t S^{-1}$. Similarly, the heat kernel with the random walk Laplacian can be written as $H_{t} = Se^{-t\Lambda}S^{-1}$. Interestingly, we can relate the eigenvalues of $H_{t}$ and $P$ via the Poisson distribution. Denoting the probability mass function of a Poisson distribution by $m_t(k)\coloneqq t^k e^{-t} / k!$, for $t\geq 0$, we have

$$
e ^ {- t (1 - \mu)} = e ^ {- t} \sum_ {k \geq 0} \frac {(t \mu) ^ {k}}{k !} = \sum_ {k \geq 0} m _ {t} (k) \mu^ {k}. \tag {7}
$$

We note that (7) is the probability generating function of a Poisson distribution with parameter $t$, i.e. $\mathbb{E}[\mu^X]$, where $X \sim \mathrm{Poisson}(t)$. With this relationship, we can express $\pmb{H}_t$ as a linear combination of the $\pmb{P}^k$ weighted by the Poisson distribution. Indeed, substituting $\lambda = 1 - \mu$ in (7) links the eigenvalues of $\pmb{H}_t$ and $\pmb{P}$. Writing the heat kernel as a linear combination of random walks weighted by the Poisson distribution, we have

$$
\boldsymbol {H} _ {t} = \boldsymbol {S} e ^ {- t \Lambda} \boldsymbol {S} ^ {- 1} = \boldsymbol {S} \sum_ {k = 0} ^ {\infty} m _ {t} (k) \left(\boldsymbol {I} _ {n} - \Lambda\right) ^ {k} \boldsymbol {S} ^ {- 1} = \sum_ {k = 0} ^ {\infty} m _ {t} (k) \boldsymbol {P} ^ {k}.
$$

Proposition 5.5. The Heat-Geodesic embedding with $\sigma = 0$ and squared distances minimization weighted by the heat kernel is equivalent to SNE with the heat kernel affinity in the ambient space, and a Gaussian kernel in the embedded space $q_{ij} = \exp (-\| y_i - y_j\|^2 /4t)$.

Proof of Proposition 5.5.
The squared MDS loss weighted by the heat kernel corresponds to

$$
\begin{array}{l} \sum_ {i j} h _ {t} (x _ {i}, x _ {j}) (d _ {i j} ^ {2} - \| y _ {i} - y _ {j} \| ^ {2}) ^ {2} = \sum_ {i j} h _ {t} (x _ {i}, x _ {j}) (- t \log h _ {t} (x _ {i}, x _ {j}) - \| y _ {i} - y _ {j} \| ^ {2}) ^ {2} \\ = \sum_ {i j} h _ {t} (x _ {i}, x _ {j}) t ^ {2} (\log h _ {t} (x _ {i}, x _ {j}) - \log \exp (- \| y _ {i} - y _ {j} \| ^ {2} / t)) ^ {2}. \\ \end{array}
$$

If there exists an embedding that attains a zero loss, then it also minimizes $\sum_{ij}h_t(x_i,x_j)(\log h_t(x_i,x_j) - \log \exp (-\| y_i - y_j\|^2 /t)) = D_{\mathrm{KL}}(h_t\| q).$

# A.3 Algorithm details

We present a detailed version of the Heat Geodesic Embedding algorithm in Alg. 2.

For the knee-point detection we use the Kneedle algorithm [26]. It identifies a knee point as a point where the curvature decreases maximally between points (using finite differences). We summarize the four main steps of the algorithm for a function $f(x)$, and we refer to [26] for additional details.

Algorithm 2 Heat Geodesic Embedding
1: Input: $N \times d$ dataset matrix $\mathbf{X}$, denoising parameter $\rho \in [0,1]$, Harnack regularization $\sigma > 0$, output dimension $k$.
2: Returns: $N \times k$ embedding matrix $\mathbf{E}$.
3: $\triangleright$ 1. Calculate the heat operator $\mathbf{H}_t$
4: if $t$ is "auto" then
5: $t \leftarrow$ Kneedle$(\{H(\mathbf{H}_t)\}_t)$
6: $\mathbf{W} \leftarrow$ kernel$(\mathbf{X})$
7: $\mathbf{L} \leftarrow \mathbf{Q} - \mathbf{W}$
8: if Exact then
9: $\mathbf{H}_t \leftarrow \Psi e^{-t\Lambda} \Psi^T$
10: else
11: $\mathbf{H}_t \leftarrow p_K(\mathbf{L}, t)$
12: $\triangleright$ 2. Calculate pairwise distances $\mathbf{D}$
13: $\mathbf{D} \leftarrow -4t \log \mathbf{H}_t$
14: $\mathbf{D} \leftarrow (1 - \rho) \mathbf{D} + \rho D_T$
15: Return $\mathbf{E} \leftarrow$ MetricMDS$(\mathbf{D}, \| \cdot \|_2, k)$

1. Smoothing with a spline to preserve the shape of the function.
2.
Normalize the values, so that the algorithm does not depend on the magnitude of the observations.
3. Computing the set of finite differences for $x$ and $y \coloneqq f(x)$, e.g. $y_{d_i} \coloneqq f(x_i) - x_i$.
4. Evaluating local maxima of the difference curve $y_{d_i}$, and selecting the knee point using a threshold based on the average difference between consecutive $x$.

# B Experiments and datasets details

Our experiments compare our approach with multiple state-of-the-art baselines on synthetic datasets (for which the true geodesic distance is known) and real-world datasets. For all models, we perform sample splitting with a 50/50 validation-test split. The validation and test sets each consist of 5 repetitions with different random initializations. The hyperparameters are selected according to the performance on the validation set. We always report the results on the test set, along with the standard deviations computed over the five repetitions. We use the following state-of-the-art methods in our experiments: our Heat Geodesic Embedding, diffusion maps [5], PHATE [22], Heat-PHATE (a variation of PHATE using the heat kernel), Rand-Geo (a variation of Heat Geodesic Embedding where we use the random walk kernel), Shortest-path (which estimates the geodesic distance by computing the shortest path between two nodes in a graph built on the point clouds), t-SNE [32], and UMAP [19].

# B.1 Datasets

We consider two synthetic datasets: the well-known Swiss roll and the tree dataset. The exact geodesic distance can be computed for these datasets. We additionally consider real-world datasets: PBMC, IPSC [22], EB [22], and two from the 2022 NeurIPS multimodal single-cell integration challenge.

# B.1.1 Swiss Roll

The Swiss roll dataset consists of data points sampled on a smooth manifold inspired by the shape of the famous alpine pastry.
In its simplest form, it is a 2-dimensional surface embedded in $\mathbb{R}^3$ given by

$$
\begin{array}{l} x = t \cdot \cos (t) \\ y = h \\ z = t \cdot \sin (t) \\ \end{array}
$$

where $t \in [T_0, T_1]$ and $h \in [0, W]$. In our experiments we used $T_0 = \frac{3}{2}\pi$, $T_1 = \frac{9}{2}\pi$, and $W = 5$. We use two sampling mechanisms for generating the data points: uniform and clustered. In the first, we sample points uniformly at random in the $[T_0, T_1] \times [0, W]$ plane. In the second, we sample according to a mixture of isotropic multivariate Gaussian distributions in the same plane with equal weights, means $[(7, W/2), (12, W/2)]$, and standard deviations $[1, 1]$. In the clustered case, data samples are given a label $y$ according to the Gaussian mixture component from which they were sampled.

We consider variations of the Swiss roll by projecting the data samples into higher dimensions using a random rotation matrix sampled from the Haar distribution. We use three different ambient dimensions: 3, 10, and 50.

Finally, we add isotropic Gaussian noise to the data points in the ambient space with a standard deviation $\sigma$.

# B.1.2 Tree

The tree dataset is created by generating $K$ branches from a $D$-dimensional Brownian motion that are eventually glued together. Each branch is sampled from a multidimensional Brownian motion $d\mathbf{X}_{\mathbf{k}} = 2\,d\mathbf{W}(t)$ at times $t = 0,1,2,\dots,L - 1$ for $k\in [K]$. The first branch is taken as the main branch, and the remaining branches are glued to the main branch by setting $X_{k} = X_{k} + X_{0}[i_{k}]$, where $i_k$ is a random index of the main branch vector. The total number of samples is thus $L\cdot K$.

In our experiments, we used $L = 500$, $K = 5$, and $D = 5, 10$ (i.e., two versions with different dimensions of the ambient space).

# B.2 Evaluation Metrics

We compare the performance of the different methods according to several metrics.
For synthetic datasets, where the ground-truth geodesic distance is available, we directly compare the estimated distance matrices with the ground-truth geodesic distance matrices. For real-world datasets, we use clustering quality and continuous interpolation as evaluation metrics.

# B.2.1 Distance matrix evaluation

The following methods use an explicit distance matrix: diffusion maps, Heat Geodesic Embedding, Heat-PHATE, PHATE, Rand-Geo, and Shortest Path. For these methods, we compare their ability to recover the ground-truth distance matrix using several metrics. Letting $D$ and $\hat{D}$ be the ground-truth and inferred distance matrices respectively, and $N$ the number of points in the dataset, we use the following metrics.

Pearson $\rho$ We compute the average Pearson correlation between the rows of the distance matrices, $\frac{1}{N}\sum_{i = 1}^{N}r_{D_i,\hat{D}_i}$, where $r_{x,y}$ is the Pearson correlation coefficient between vectors $x$ and $y$, and $D_{i}$ stands for the $i$-th row of $D$.

Spearman $\rho$ We compute the average Spearman correlation between the rows of the distance matrices, $\frac{1}{N}\sum_{i=1}^{N}r_{D_i,\hat{D}_i}$, where $r_{x,y}$ is the Spearman correlation coefficient between vectors $x$ and $y$, and $D_i$ stands for the $i$-th row of $D$.

Frobenius Norm We use $\| D - \hat{D}\| _F$, where $\| A\| _F = \sqrt{\sum_{i = 1}^N\sum_{j = 1}^N|A_{i,j}|^2}$.

Maximum Norm We use $\| D - \hat{D}\|_{\infty}$, where $\| A\|_{\infty} = \max_{i,j}|A_{i,j}|$.

# B.2.2 Embedding evaluation

Some methods produce low-dimensional embeddings without using an explicit distance matrix for the data points. This is the case for UMAP and t-SNE. To compare against these methods, we use the distance matrix obtained by computing the Euclidean distance between the low-dimensional embeddings. We used 2-dimensional embeddings in our experiments.
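The distance-matrix metrics of B.2.1 can be sketched as follows. This is a minimal illustration, not code from the paper; `distance_matrix_metrics` is a name of our choosing:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def distance_matrix_metrics(D, D_hat):
    # Row-wise correlations averaged over the N rows of the two matrices,
    # plus the Frobenius and maximum norms of their difference.
    n = D.shape[0]
    pearson = np.mean([pearsonr(D[i], D_hat[i])[0] for i in range(n)])
    spearman = np.mean([spearmanr(D[i], D_hat[i])[0] for i in range(n)])
    return {
        "pearson": pearson,
        "spearman": spearman,
        "frobenius": np.linalg.norm(D - D_hat, ord="fro"),
        "max": np.abs(D - D_hat).max(),
    }

# Example: a Euclidean distance matrix compared against a rescaled copy.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
metrics = distance_matrix_metrics(D, 2.0 * D)  # correlations stay at 1
```

Note that the correlation metrics are invariant to a global rescaling of the inferred distances, while the Frobenius and maximum norms are not, which is why the two kinds of metrics are reported together.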
For diffusion maps, we obtain these embeddings by using only the first two eigenvectors of the diffusion operator. For Heat Geodesic Embedding, Heat-PHATE, PHATE, Rand-Geo, and Shortest Path, we use multidimensional scaling (MDS) on the originally inferred distance matrix.

**Clustering** We evaluate the ability of Heat Geodesic Embedding to create meaningful embeddings when clusters are present in the data. To this end, we run a k-means clustering on the two-dimensional embeddings obtained with each method and compare the results against the ground-truth labels. For the tree dataset, we use the branches as clusters. For the Swiss roll dataset, we sample data points on the manifold according to a mixture of Gaussians and use the mixture component as the ground-truth cluster label.

**Interpolation** To quantitatively evaluate the quality of the continuous embeddings, we first embed the entire dataset and obfuscate all samples from a particular time point (e.g., $t = 2$). We then estimate the distribution of the missing time point by using displacement interpolation [35] between the adjacent time points (e.g., $t = 1$ and $t = 3$). We report the Earth Mover Distance (EMD) between the predicted distribution and the true distribution. A low EMD suggests that the obfuscated embeddings are naturally located between the previous and later time points, and that the generated embedding captures the temporal evolution of the data adequately.

# B.3 Hyperparameters

In Table 5, we report the values of the hyperparameters used to compute the different embeddings.
| Hyperparameter | Description | Values |
| --- | --- | --- |
| **Heat Geodesic Embedding** | | |
| k | Number of neighbours in k-NN graph | 5, 10, 15 |
| order | Order of the approximation | 30 |
| t | Diffusion time | 0.1, 1, 10, 50, auto |
| Approximation method | Approximation method for heat kernel | Euler, Chebyshev |
| Laplacian | Type of Laplacian | Combinatorial |
| Harnack ρ | Harnack regularization | 0, 0.25, 0.5, 0.75, 1, 1.5 |
| **PHATE** | | |
| n-PCA | Number of PCA components | 50, 100 |
| t | Diffusion time | 1, 5, 10, 20, auto |
| k | Number of neighbours | 10 |
| **Diffusion Maps** | | |
| k | Number of neighbours in k-NN graph | 5, 10, 15 |
| t | Diffusion time | 1, 5, 10, 20 |
| **Shortest Path** | | |
| k | Number of neighbours in k-NN graph | 5, 10, 15 |
| **UMAP** | | |
| k | Number of neighbours | 5, 10, 15 |
| min-dist | Minimum distance | 0.1, 0.5, 0.99 |
| **t-SNE** | | |
| p | Perplexity | 10, 30, 100 |
| early exaggeration | Early exaggeration parameter | 12 |
Table 5: Hyperparameters used in our experiments.

# B.4 Hardware

The experiments were performed on a compute node with 16 Intel Xeon Platinum 8358 processors and 64GB RAM.

# C Additional results

# C.1 HeatGeo weighted

Following Sec. 5, we know that weighting the MDS loss by the heat kernel corresponds to a specific parametrization of SNE, and thus promotes the identification of clusters. In Fig. 5, we show the embeddings of four Gaussian distributions in 10 dimensions (top), and of the PBMC dataset (bottom). The reference embedding is t-SNE, as it also minimizes the KL divergence between the ambient and embedded distributions. We see that weighted HeatGeo forms clusters that are shaped like Gaussians. This is expected, as Prop. 5.5 indicates that this is equivalent to minimizing the $D_{\mathrm{KL}}$ between the heat kernel and a Gaussian affinity kernel.

![](images/396e6ae99fd039b69cb06f49b26d670c10c089da23d69c63ef45dca095e4f840.jpg)
Figure 5: Embeddings of four Gaussian distributions in 10 dimensions (top), and of the PBMC dataset (bottom). Weighted HeatGeo is equivalent to minimizing the $D_{\mathrm{KL}}$ between the heat kernel and a Gaussian affinity kernel, and hence produces clusters shaped like Gaussians.

# C.2 Truncated distance

In Fig. 6, we discretize the interval [0, 51] into 51 nodes, and we compute the heat-geodesic distance of the midpoint with respect to the other points, effectively approximating the Euclidean distance. Using Chebyshev polynomials of degree 20, we see that the impact of the truncation is greater as the diffusion time increases. The backward Euler method does not result in a truncated distance.

![](images/72e659f3f970fe80a6293c06721924f4245f1c282409c251ab9e64e079bf94e0.jpg)
Figure 6: Approximation of the squared Euclidean distance with the heat-geodesic distance for the exact computation, the backward Euler approximation, and Chebyshev polynomials.
For larger diffusion times, the Chebyshev approximation results in a thresholded distance. The Harnack regularization ensures $d_{t}(x,x) = 0$. + +![](images/578c94890891b44e78a6b79f3336d98f51ac1f7aba0ed87bfe6c51459bab956e.jpg) +Figure 7: Impact of the Chebyshev approximation order on the embedding of HeatGeo for the PBMC dataset. + +# C.3 Harnack inequality + +For complete Riemannian manifolds that satisfy the parabolic Harnack inequality (PHI), we have $h_t(x,y) \simeq V^{-1}(x,\sqrt{t}) e^{-d(x,y)^2 / t}$, so that $-t \log h_t(x,y) \simeq t \log V(x,\sqrt{t}) + d^2(x,y)$ [25]. + +$$
h_t(x,x) = \frac{1}{V(x,\sqrt{t})} \tag{8}
$$ + +$$
V(x,\sqrt{t}) = h_t(x,x)^{-1} \tag{9}
$$ + +We then have + +$$
d^2(x,y) \simeq -t \log h_t(x,y) - t \log V(x,\sqrt{t})
$$ + +$$
d^2(x,y) \simeq -t \log h_t(x,y) - t \log h_t(x,x)^{-1}
$$ + +$$
d^2(x,y) \simeq -t \log h_t(x,y) + t \log h_t(x,x)
$$ + +# C.3.1 Case studies for specific manifolds + +The circle - $\mathbb{S}_1$ We now show that our expression for the Heat Geodesic Embedding distance is monotonically increasing with respect to the ground truth geodesic distance $d \in \mathbb{R}^{+}$ for a fixed diffusion time $t$ and for any Harnack regularization in $\mathbb{S}_1$. + +Our expression for the Heat Geodesic Embedding distance is + +$$
\hat{d} = \sqrt{-4t \log(h_t(d)) + 4t \log(h_t(0))}
$$ + +As the square root is monotonic, and $4t \log h_t(0)$ is constant with respect to $d$, we need to show that $f(d) = -\log(h_t(d))$ is monotonically increasing. 
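This monotonicity can also be checked numerically. The sketch below (plain Python, illustrative only and not part of the paper's code) evaluates the wrapped-Gaussian heat kernel on $\mathbb{S}_1$, forms the heat-geodesic estimate $\hat{d} = \sqrt{-4t\log h_t(d) + 4t\log h_t(0)}$, and verifies that it increases with the geodesic distance $d$ on $[0, \pi]$:

```python
import math

def heat_kernel_circle(d, t, m_max=50):
    """Wrapped-Gaussian heat kernel on the unit circle S^1,
    truncating the sum over winding numbers at |m| <= m_max."""
    return sum(
        math.exp(-(d + 2 * math.pi * m) ** 2 / (4 * t)) / math.sqrt(4 * math.pi * t)
        for m in range(-m_max, m_max + 1)
    )

def heat_geodesic_circle(d, t):
    """Heat-geodesic estimate d_hat = sqrt(-4t log h_t(d) + 4t log h_t(0))."""
    return math.sqrt(
        -4 * t * math.log(heat_kernel_circle(d, t))
        + 4 * t * math.log(heat_kernel_circle(0.0, t))
    )

# d_hat should increase monotonically with the true geodesic distance on [0, pi]
t = 0.1
ds = [math.pi * k / 100 for k in range(101)]
vals = [heat_geodesic_circle(d, t) for d in ds]
assert vals[0] == 0.0                                  # Harnack regularization: d(x,x) = 0
assert all(a < b for a, b in zip(vals, vals[1:]))      # strictly increasing
```

For small diffusion times the winding corrections are negligible and the estimate recovers the geodesic distance itself, e.g. `heat_geodesic_circle(math.pi / 2, 0.01)` is numerically $\pi/2$.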
+ +For $\mathbb{S}_1$, we have + +$$
h_t(d) = \sum_{m \in \mathbb{Z}} \frac{1}{\sqrt{4 \pi t}} e^{-\frac{(d + 2 \pi m)^2}{4 t}}
$$ + +As log is monotonically increasing, it suffices to show that $\sum_{m \in \mathbb{Z}} e^{-\frac{(d + 2 \pi m)^2}{4 t}}$ is monotonically decreasing, which is the case since for any $d' > d$ and all $m \in \mathbb{Z}$ we have + +$$
e^{-\frac{(d + 2 \pi m)^2}{4 t}} > e^{-\frac{(d' + 2 \pi m)^2}{4 t}}.
$$ + +In general, one can see that (1) the heat kernel depending only on the geodesic distance and (2) the heat kernel being monotonically decreasing with respect to the geodesic distance are sufficient conditions for preserving the ordering of pairwise distances with Heat Geodesic Embedding. + +The sphere $-\mathbb{S}_n$ The above result can be applied to the higher-dimensional sphere $\mathbb{S}_n$. It is known that the heat kernel on manifolds of constant curvature is a function of the geodesic distance $d$ and time only. For $\mathbb{S}_n$ the heat kernel is given by + +$$
h_t(x,y) = \sum_{l=0}^{\infty} e^{-l(l+n-2)t} \frac{2l+n-2}{n-2} C_l^{\frac{n}{2}-1}(\cos(d))
$$ + +with $C_l^{\frac{n}{2}-1}$ the Gegenbauer polynomials. + +Furthermore, Nowak et al. [24] showed that the heat kernel of the sphere is monotonically decreasing. The distance inferred from Heat Geodesic Embedding thus preserves the ordering of the pairwise distances. + +Euclidean $(\mathbb{R}^3)$ For Euclidean space, the volume of the $\sqrt{t}$-geodesic ball and the heat kernel are given by + +$$
V_{\sqrt{t}} = \frac{4}{3} \pi t^{3/2}
$$ + +$$
h_t(x,y) = \frac{1}{(4 \pi t)^{3/2}} e^{-\frac{d(x,y)^2}{4 t}}. 
+$$ + +Recalling the Harnack inequality, + +$$
\frac{c_1}{V(x, \sqrt{t})} e^{-\frac{d(x,y)^2}{c_2 t}} \leq h_t(x,y) \leq \frac{c_3}{V(x, \sqrt{t})} e^{-\frac{d(x,y)^2}{c_4 t}}
$$ + +With $c_2 = c_4 = 4$, we have + +$$
\frac{c_1}{V(x, \sqrt{t})} \leq \frac{1}{(4 \pi t)^{3/2}} \leq \frac{c_3}{V(x, \sqrt{t})}
$$ + +In this case the bound can be made tight: setting + +$$
\begin{array}{l} c_1 = c_3 = \frac{V(x, \sqrt{t})}{(4 \pi t)^{3/2}} \\ = \frac{\frac{4}{3} \pi t^{3/2}}{(4 \pi t)^{3/2}} \\ = \frac{1}{3 \sqrt{4 \pi}} = \frac{1}{6 \sqrt{\pi}}, \\ \end{array}
$$ + +we recover the exact geodesic distance. + +# C.4 Quantitative results + +# C.4.1 Distance matrix evaluation + +We report the performance of the different methods in terms of the ground truth geodesic distance matrix reconstruction in Table 6 for the Swiss roll dataset and in Table 7 for the Tree dataset. + +# C.4.2 Distance matrix evaluation via two-dimensional embeddings + +We report the performance of the different methods in terms of the ground truth geodesic distance matrix reconstruction in Table 8 for the Swiss roll dataset and in Table 9 for the Tree dataset. + +# C.4.3 Clustering quality evaluation + +In Table 10, we report the clustering quality on the synthetic datasets with different noise levels. + +# C.5 Impact of the different hyperparameters + +We investigate the impact of the different hyperparameters on the quality of the embeddings. In Figure 8, we show the embeddings of HeatGeo for different values of the diffusion time, the number of neighbours, the order, and the Harnack regularization. + +In Figures 9, 10, 11, and 12, we show the impact of different hyperparameters on the Pearson correlation between the estimated distance matrix and the ground truth distance matrix for different methods on the Swiss roll dataset. 
+ +# C.6 Graph construction + +We compare the embeddings of the heat-geodesic distance for different graph constructions. Throughout the paper we used the graph construction from PHATE [22]. In the following we present additional results depending on the choice of kernel used to construct the graph. Specifically, we use a simple nearest neighbor (kNN) graph implemented in [7], the graph from UMAP [19], and the implementation in the package Scanpy [36] for single-cell analysis. In the figure, we present the embeddings of 2500 points of a tree with five branches in 10 dimensions, where the observations are perturbed with standard Gaussian noise. All methods used five nearest neighbors and a diffusion time of 20. In Figure 13, we show the evolution of the Pearson correlation between estimated and ground truth distance matrices for the 10-dimensional Swiss roll dataset for various graph constructions. We note that the results are stable across the different graph construction strategies. 
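All of these constructions start from a k-nearest-neighbour graph. A minimal sketch of that shared step (plain Python; illustrative only, not the PHATE, UMAP, or Scanpy implementation):

```python
import math

def knn_graph(points, k):
    """Symmetrized k-nearest-neighbour adjacency (0/1 weights).

    `points` is a list of coordinate tuples; an edge i-j is kept if j is
    among the k nearest neighbours of i or vice versa (the usual
    symmetrization used by kNN-based embedding methods)."""
    n = len(points)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        dists = sorted(
            (math.dist(points[i], points[j]), j) for j in range(n) if j != i
        )
        for _, j in dists[:k]:
            adj[i][j] = 1
            adj[j][i] = 1  # symmetrize: union of the two neighbourhoods
    return adj

# on 12 evenly spaced points of a circle, 2-NN recovers the cycle graph
points = [(math.cos(2 * math.pi * i / 12), math.sin(2 * math.pi * i / 12))
          for i in range(12)]
A = knn_graph(points, k=2)
assert all(sum(row) == 2 for row in A)
```

The heat kernel and heat-geodesic distance are then computed on the Laplacian of this adjacency, whichever package produced it.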
| data | Noise level | Method | PearsonR | SpearmanR | Norm Fro N2 | Norm inf N2 |
| --- | --- | --- | --- | --- | --- | --- |
| Swiss roll | 0.1 | Diffusion Map | 0.974 ± 0.01 | 0.983 ± 0.007 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Heat-Geo | 0.992 ± 0.003 | 0.995 ± 0.002 | 0.002 ± 0.0 | 0.003 ± 0.0 |
| Swiss roll | 0.1 | Heat-PHATE | 0.99 ± 0.002 | 0.997 ± 0.001 | 0.079 ± 0.002 | 0.1 ± 0.003 |
| Swiss roll | 0.1 | PHATE | 0.621 ± 0.006 | 0.58 ± 0.01 | 0.022 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Rand-Geo | 0.956 ± 0.003 | 0.993 ± 0.001 | 0.009 ± 0.0 | 0.012 ± 0.0 |
| Swiss roll | 0.1 | Shortest Path | 1.0 ± 0.0 | 1.0 ± 0.0 | 0.0 ± 0.0 | 0.001 ± 0.0 |
| Swiss roll | 0.1 | Euclidean | 0.379 ± 0.003 | 0.424 ± 0.003 | 0.014 ± 0.0 | 0.018 ± 0.0 |
| Swiss roll | 0.5 | Diffusion Map | 0.982 ± 0.003 | 0.987 ± 0.002 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.5 | Heat-Geo | 0.994 ± 0.002 | 0.996 ± 0.001 | 0.002 ± 0.0 | 0.004 ± 0.0 |
| Swiss roll | 0.5 | Heat-PHATE | 0.993 ± 0.001 | 0.998 ± 0.0 | 0.064 ± 0.001 | 0.083 ± 0.002 |
| Swiss roll | 0.5 | PHATE | 0.649 ± 0.007 | 0.615 ± 0.006 | 0.023 ± 0.0 | 0.028 ± 0.0 |
| Swiss roll | 0.5 | Rand-Geo | 0.969 ± 0.002 | 0.995 ± 0.001 | 0.009 ± 0.0 | 0.011 ± 0.0 |
| Swiss roll | 0.5 | Shortest Path | 0.999 ± 0.0 | 0.999 ± 0.0 | 0.001 ± 0.0 | 0.002 ± 0.0 |
| Swiss roll | 0.5 | Euclidean | 0.376 ± 0.004 | 0.422 ± 0.004 | 0.013 ± 0.0 | 0.018 ± 0.0 |
| Swiss roll | 1.0 | Diffusion Map | 0.476 ± 0.226 | 0.478 ± 0.138 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 1.0 | Heat-Geo | 0.702 ± 0.086 | 0.7 ± 0.073 | 0.01 ± 0.0 | 0.012 ± 0.0 |
| Swiss roll | 1.0 | Heat-PHATE | 0.623 ± 0.144 | 0.633 ± 0.114 | 0.01 ± 0.002 | 0.019 ± 0.004 |
| Swiss roll | 1.0 | PHATE | 0.457 ± 0.01 | 0.404 ± 0.024 | 0.024 ± 0.0 | 0.028 ± 0.0 |
| Swiss roll | 1.0 | Rand-Geo | 0.521 ± 0.042 | 0.608 ± 0.025 | 0.01 ± 0.0 | 0.014 ± 0.0 |
| Swiss roll | 1.0 | Shortest Path | 0.497 ± 0.144 | 0.558 ± 0.134 | 0.011 ± 0.001 | 0.015 ± 0.002 |
| Swiss roll | 1.0 | Euclidean | 0.365 ± 0.006 | 0.413 ± 0.005 | 0.013 ± 0.0 | 0.019 ± 0.001 |
| Swiss roll high | 0.1 | Diffusion Map | 0.98 ± 0.003 | 0.986 ± 0.001 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll high | 0.1 | Heat-Geo | 0.992 ± 0.003 | 0.996 ± 0.002 | 0.002 ± 0.0 | 0.003 ± 0.0 |
| Swiss roll high | 0.1 | Heat-PHATE | 0.991 ± 0.002 | 0.997 ± 0.001 | 0.079 ± 0.002 | 0.101 ± 0.004 |
| Swiss roll high | 0.1 | PHATE | 0.625 ± 0.013 | 0.582 ± 0.017 | 0.022 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll high | 0.1 | Rand-Geo | 0.956 ± 0.002 | 0.993 ± 0.001 | 0.009 ± 0.0 | 0.012 ± 0.0 |
| Swiss roll high | 0.1 | Shortest Path | 1.0 ± 0.0 | 1.0 ± 0.0 | 0.001 ± 0.0 | 0.002 ± 0.0 |
| Swiss roll high | 0.1 | Euclidean | 0.379 ± 0.002 | 0.424 ± 0.002 | 0.014 ± 0.0 | 0.018 ± 0.0 |
| Swiss roll high | 0.5 | Diffusion Map | 0.98 ± 0.002 | 0.985 ± 0.002 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll high | 0.5 | Heat-Geo | 0.997 ± 0.001 | 0.997 ± 0.0 | 0.005 ± 0.0 | 0.007 ± 0.0 |
| Swiss roll high | 0.5 | Heat-PHATE | 0.995 ± 0.0 | 0.997 ± 0.0 | 0.041 ± 0.001 | 0.054 ± 0.002 |
| Swiss roll high | 0.5 | PHATE | 0.717 ± 0.004 | 0.707 ± 0.005 | 0.026 ± 0.0 | 0.034 ± 0.001 |
| Swiss roll high | 0.5 | Rand-Geo | 0.984 ± 0.0 | 0.996 ± 0.0 | 0.008 ± 0.0 | 0.01 ± 0.0 |
| Swiss roll high | 0.5 | Shortest Path | 0.999 ± 0.0 | 0.998 ± 0.0 | 0.006 ± 0.0 | 0.009 ± 0.0 |
| Swiss roll high | 0.5 | Euclidean | 0.369 ± 0.003 | 0.421 ± 0.003 | 0.013 ± 0.0 | 0.018 ± 0.0 |
| Swiss roll high | 1.0 | Diffusion Map | 0.555 ± 0.155 | 0.526 ± 0.081 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll high | 1.0 | Heat-Geo | 0.705 ± 0.065 | 0.695 ± 0.052 | 0.011 ± 0.0 | 0.012 ± 0.0 |
| Swiss roll high | 1.0 | Heat-PHATE | 0.63 ± 0.106 | 0.625 ± 0.074 | 0.011 ± 0.001 | 0.014 ± 0.002 |
| Swiss roll high | 1.0 | PHATE | 0.473 ± 0.026 | 0.419 ± 0.024 | 0.027 ± 0.0 | 0.039 ± 0.001 |
| Swiss roll high | 1.0 | Rand-Geo | 0.563 ± 0.05 | 0.644 ± 0.033 | 0.01 ± 0.0 | 0.012 ± 0.0 |
| Swiss roll high | 1.0 | Shortest Path | 0.384 ± 0.02 | 0.461 ± 0.017 | 0.011 ± 0.0 | 0.015 ± 0.0 |
| Swiss roll high | 1.0 | Euclidean | 0.349 ± 0.004 | 0.409 ± 0.003 | 0.013 ± 0.0 | 0.018 ± 0.0 |
| Swiss roll very high | 0.1 | Diffusion Map | 0.977 ± 0.005 | 0.984 ± 0.004 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll very high | 0.1 | Heat-Geo | 0.992 ± 0.002 | 0.996 ± 0.001 | 0.002 ± 0.0 | 0.003 ± 0.0 |
| Swiss roll very high | 0.1 | Heat-PHATE | 0.991 ± 0.001 | 0.997 ± 0.001 | 0.079 ± 0.003 | 0.101 ± 0.003 |
| Swiss roll very high | 0.1 | PHATE | 0.631 ± 0.01 | 0.594 ± 0.011 | 0.023 ± 0.0 | 0.028 ± 0.001 |
| Swiss roll very high | 0.1 | Rand-Geo | 0.957 ± 0.002 | 0.994 ± 0.001 | 0.009 ± 0.0 | 0.012 ± 0.0 |
| Swiss roll very high | 0.1 | Shortest Path | 0.999 ± 0.0 | 0.999 ± 0.0 | 0.006 ± 0.0 | 0.007 ± 0.0 |
| Swiss roll very high | 0.1 | Euclidean | 0.378 ± 0.002 | 0.424 ± 0.002 | 0.013 ± 0.0 | 0.018 ± 0.0 |
| Swiss roll very high | 0.5 | Diffusion Map | 0.978 ± 0.002 | 0.984 ± 0.001 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll very high | 0.5 | Heat-Geo | 0.997 ± 0.0 | 0.998 ± 0.0 | 0.008 ± 0.0 | 0.01 ± 0.0 |
| Swiss roll very high | 0.5 | Heat-PHATE | 0.996 ± 0.001 | 0.997 ± 0.0 | 0.016 ± 0.0 | 0.02 ± 0.001 |
| Swiss roll very high | 0.5 | PHATE | 0.815 ± 0.002 | 0.823 ± 0.004 | 0.032 ± 0.0 | 0.049 ± 0.002 |
| Swiss roll very high | 0.5 | Rand-Geo | 0.986 ± 0.0 | 0.996 ± 0.0 | 0.008 ± 0.0 | 0.009 ± 0.0 |
| Swiss roll very high | 0.5 | Shortest Path | 0.998 ± 0.0 | 0.998 ± 0.0 | 0.019 ± 0.001 | 0.027 ± 0.001 |
| Swiss roll very high | 0.5 | Euclidean | 0.361 ± 0.002 | 0.42 ± 0.002 | 0.013 ± 0.0 | 0.018 ± 0.0 |
| Swiss roll very high | 1.0 | Diffusion Map | 0.324 ± 0.061 | 0.399 ± 0.033 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll very high | 1.0 | Heat-Geo | 0.466 ± 0.007 | 0.506 ± 0.006 | 0.011 ± 0.0 | 0.013 ± 0.0 |
| Swiss roll very high | 1.0 | Heat-PHATE | 0.369 ± 0.011 | 0.43 ± 0.019 | 0.011 ± 0.0 | 0.014 ± 0.0 |
| Swiss roll very high | 1.0 | PHATE | 0.377 ± 0.011 | 0.425 ± 0.009 | 0.036 ± 0.0 | 0.062 ± 0.004 |
| Swiss roll very high | 1.0 | Rand-Geo | 0.398 ± 0.009 | 0.516 ± 0.008 | 0.01 ± 0.0 | 0.012 ± 0.0 |
| Swiss roll very high | 1.0 | Shortest Path | 0.367 ± 0.018 | 0.443 ± 0.016 | 0.012 ± 0.0 | 0.015 ± 0.0 |
| Swiss roll very high | 1.0 | Euclidean | 0.336 ± 0.002 | 0.402 ± 0.002 | 0.012 ± 0.0 | 0.018 ± 0.0 |
+ +Table 6: Comparison of the estimated distance matrices with the ground truth geodesic distance matrices on the Swiss roll dataset. Best models on average are bolded (not necessarily significant). + +
| data | Noise level | Method | PearsonR | SpearmanR | Norm Fro N2 | Norm inf N2 |
| --- | --- | --- | --- | --- | --- | --- |
| Tree | 1.0 | Diffusion Map | 0.748 ± 0.125 | 0.733 ± 0.111 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 1.0 | Heat-Geo | 0.976 ± 0.019 | 0.977 ± 0.02 | 0.092 ± 0.011 | 0.135 ± 0.018 |
| Tree | 1.0 | Heat-PHATE | 0.918 ± 0.032 | 0.885 ± 0.04 | 0.03 ± 0.005 | 0.044 ± 0.007 |
| Tree | 1.0 | PHATE | 0.671 ± 0.021 | 0.398 ± 0.052 | 0.051 ± 0.008 | 0.084 ± 0.017 |
| Tree | 1.0 | Rand-Geo | 0.926 ± 0.011 | 0.966 ± 0.019 | 0.076 ± 0.01 | 0.117 ± 0.018 |
| Tree | 1.0 | Shortest Path | 0.965 ± 0.026 | 0.963 ± 0.027 | 0.039 ± 0.008 | 0.06 ± 0.008 |
| Tree | 1.0 | Euclidean | 0.508 ± 0.039 | 0.483 ± 0.052 | 0.092 ± 0.011 | 0.138 ± 0.018 |
| Tree | 5.0 | Diffusion Map | 0.656 ± 0.054 | 0.653 ± 0.057 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 5.0 | Heat-Geo | 0.822 ± 0.008 | 0.807 ± 0.016 | 0.1 ± 0.012 | 0.146 ± 0.019 |
| Tree | 5.0 | Heat-PHATE | 0.765 ± 0.025 | 0.751 ± 0.023 | 0.043 ± 0.006 | 0.08 ± 0.01 |
| Tree | 5.0 | PHATE | 0.766 ± 0.023 | 0.743 ± 0.028 | 0.055 ± 0.007 | 0.093 ± 0.008 |
| Tree | 5.0 | Rand-Geo | 0.806 ± 0.014 | 0.795 ± 0.018 | 0.094 ± 0.011 | 0.139 ± 0.018 |
| Tree | 5.0 | Shortest Path | 0.78 ± 0.009 | 0.757 ± 0.019 | 0.075 ± 0.009 | 0.117 ± 0.014 |
| Tree | 5.0 | Euclidean | 0.735 ± 0.014 | 0.704 ± 0.033 | 0.096 ± 0.011 | 0.141 ± 0.017 |
| Tree | 10.0 | Diffusion Map | 0.538 ± 0.05 | 0.471 ± 0.089 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 10.0 | Heat-Geo | 0.62 ± 0.025 | 0.59 ± 0.033 | 0.1 ± 0.012 | 0.146 ± 0.019 |
| Tree | 10.0 | Heat-PHATE | 0.63 ± 0.018 | 0.588 ± 0.031 | 0.046 ± 0.005 | 0.083 ± 0.012 |
| Tree | 10.0 | PHATE | 0.623 ± 0.016 | 0.583 ± 0.029 | 0.07 ± 0.01 | 0.112 ± 0.017 |
| Tree | 10.0 | Rand-Geo | 0.578 ± 0.043 | 0.558 ± 0.053 | 0.095 ± 0.011 | 0.14 ± 0.018 |
| Tree | 10.0 | Shortest Path | 0.539 ± 0.041 | 0.513 ± 0.055 | 0.072 ± 0.01 | 0.118 ± 0.017 |
| Tree | 10.0 | Euclidean | 0.508 ± 0.039 | 0.483 ± 0.052 | 0.092 ± 0.011 | 0.138 ± 0.018 |
| Tree high | 1.0 | Diffusion Map | 0.754 ± 0.049 | 0.741 ± 0.057 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 1.0 | Heat-Geo | 0.996 ± 0.001 | 0.999 ± 0.001 | 0.242 ± 0.02 | 0.338 ± 0.026 |
| Tree high | 1.0 | Heat-PHATE | 0.927 ± 0.011 | 0.875 ± 0.032 | 0.062 ± 0.003 | 0.084 ± 0.006 |
| Tree high | 1.0 | PHATE | 0.528 ± 0.085 | 0.141 ± 0.061 | 0.209 ± 0.023 | 0.307 ± 0.027 |
| Tree high | 1.0 | Rand-Geo | 0.85 ± 0.014 | 0.944 ± 0.011 | 0.227 ± 0.02 | 0.323 ± 0.025 |
| Tree high | 1.0 | Shortest Path | 0.998 ± 0.001 | 0.999 ± 0.001 | 0.009 ± 0.002 | 0.018 ± 0.005 |
| Tree high | 1.0 | Euclidean | 0.928 ± 0.018 | 0.928 ± 0.024 | 0.24 ± 0.02 | 0.334 ± 0.026 |
| Tree high | 5.0 | Diffusion Map | 0.706 ± 0.124 | 0.705 ± 0.113 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 5.0 | Heat-Geo | 0.97 ± 0.01 | 0.975 ± 0.009 | 0.253 ± 0.021 | 0.353 ± 0.026 |
| Tree high | 5.0 | Heat-PHATE | 0.932 ± 0.022 | 0.919 ± 0.03 | 0.072 ± 0.004 | 0.112 ± 0.008 |
| Tree high | 5.0 | PHATE | 0.913 ± 0.014 | 0.872 ± 0.034 | 0.19 ± 0.017 | 0.278 ± 0.025 |
| Tree high | 5.0 | Rand-Geo | 0.968 ± 0.01 | 0.971 ± 0.009 | 0.245 ± 0.019 | 0.342 ± 0.024 |
| Tree high | 5.0 | Shortest Path | 0.952 ± 0.016 | 0.95 ± 0.019 | 0.137 ± 0.017 | 0.209 ± 0.024 |
| Tree high | 5.0 | Euclidean | 0.882 ± 0.028 | 0.873 ± 0.032 | 0.237 ± 0.02 | 0.333 ± 0.025 |
| Tree high | 10.0 | Diffusion Map | 0.598 ± 0.117 | 0.613 ± 0.103 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 10.0 | Heat-Geo | 0.861 ± 0.039 | 0.87 ± 0.038 | 0.254 ± 0.021 | 0.353 ± 0.026 |
| Tree high | 10.0 | Heat-PHATE | 0.844 ± 0.05 | 0.838 ± 0.051 | 0.168 ± 0.015 | 0.27 ± 0.025 |
| Tree high | 10.0 | PHATE | 0.837 ± 0.052 | 0.838 ± 0.049 | 0.204 ± 0.018 | 0.301 ± 0.024 |
| Tree high | 10.0 | Rand-Geo | 0.845 ± 0.041 | 0.86 ± 0.038 | 0.248 ± 0.02 | 0.346 ± 0.025 |
| Tree high | 10.0 | Shortest Path | 0.779 ± 0.051 | 0.777 ± 0.054 | 0.159 ± 0.018 | 0.257 ± 0.026 |
| Tree high | 10.0 | Euclidean | 0.709 ± 0.054 | 0.699 ± 0.059 | 0.229 ± 0.02 | 0.327 ± 0.026 |
+ +Table 7: Comparison of the estimated distance matrices with the ground truth geodesic distance matrices on the Tree dataset. Best models on average are bolded (not necessarily significant). + +# C.7 Time Complexity + +In Table 12, we present the average computing time for creating the embeddings and the corresponding distance matrices for the different methods. All methods are applied to the Swiss roll dataset in three dimensions with 2000 samples. We present empirical averages and standard deviations over ten repetitions. The experiments were run on an Apple M2 Pro chip with 16GB of RAM. 
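Averages of this form can be gathered with a small harness like the following sketch (plain Python; `workload` is a stand-in for a method's embedding-plus-distance-matrix computation, not the actual benchmarking script used for Table 12):

```python
import statistics
import time

def time_method(fn, repeats=10):
    """Wall-clock mean and standard deviation over `repeats` runs."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# toy workload standing in for "embed 2000 points and build the distance matrix"
workload = lambda: sum(i * i for i in range(50_000))
mean, std = time_method(workload)
print(f"{mean:.4f} ± {std:.4f} s")
```

`time.perf_counter` is preferred over `time.time` here because it is monotonic and has the highest available resolution for interval timing.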
| data | Noise level | Method | PearsonR | SpearmanR | Norm Fro N2 | Norm inf N2 |
| --- | --- | --- | --- | --- | --- | --- |
| Swiss roll | 0.1 | Diffusion Map | 0.974 ± 0.01 | 0.983 ± 0.007 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Heat-Geo | 0.995 ± 0.003 | 0.996 ± 0.002 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Heat-PHATE | 0.99 ± 0.002 | 0.997 ± 0.001 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | PHATE | 0.357 ± 0.007 | 0.357 ± 0.007 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Shortest Path | 0.998 ± 0.0 | 0.997 ± 0.0 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | TSNE | 0.951 ± 0.014 | 0.952 ± 0.01 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | UMAP | 0.765 ± 0.059 | 0.737 ± 0.058 | 0.015 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Euclidean | 0.379 ± 0.002 | 0.42 ± 0.002 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Diffusion Map | 0.983 ± 0.01 | 0.983 ± 0.007 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Heat-Geo | 0.995 ± 0.003 | 0.996 ± 0.002 | 0.018 ± 0.0 | 0.026 ± |
| Swiss roll | 0.1 | Heat-PHATE | 0.999 ± 0.001 | 0.997 ± 0.001 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | PHATE | 0.357 ± 0.007 | 0.357 ± 0.007 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Shortest Path | 0.998 ± 0.0 | 0.997 ± 0.0 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | TSNE | 0.951 ± 0.014 | 0.952 ± 0.01 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | UMAP | 0.765 ± 0.059 | 0.737 ± 0.058 | 0.015 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Euclidean | 0.379 ± 0.002 | | | |
| Swiss roll | 0.1 | Diffusion Map | 0.983 ± 0.01 | 0.983 ± 0.007 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Heat-Geo | 0.995 ± 0.003 | 0.996 ± 0.0 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Heat-PHATE | 0.999 ± 0.001 | 0.997 ± 0.0 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | PHATE | 0.357 ± 0.007 | 0.357 ± 0.007 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Shortest Path | 1.335 ± 0.014 | 1.335 ± 0.014 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | TSNE | 0.951 ± 0.014 | 0.952 ± 0.01 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | UMAP | 0.785 ± 0.024 | 0.785 ± 0.024 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Euclidean | 0.379 ± 0.002 | | | |
| Swiss roll | 0.1 | Diffusion Map | 0.983 ± 0.01 | 0.983 ± 0.007 | 0.018 ± 0.0 | 0.018 ± 0.0 |
| Swiss roll | 0.1 | Heat-Geo | 0.995 ± 0.0 | 0.997 ± 0.0 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Heat-PHATE | 0.999 ± 0.0 | 0.997 ± 0.0 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | PHATE | 0.357 ± 0.007 | 0.357 ± 0.007 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Shortest Path | 1.335 ± 0.014 | 1.335 ± 0.014 | 0.018 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | TSNE | 0.951 ± 0.014 | 0.952 ± 0.01 | 0.012 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | UMAP | 0.765 ± 0.059 | 0.737 ± 0.058 | 0.015 ± 0.0 | 0.026 ± 0.0 |
| Swiss roll | 0.1 | Euclidean | 0.379 ± 0.002 | 0.42 ± 0.002 | 0.018 ± 0.0 | 0.026 ± 0.0 |
+ +Table 8: Comparison of the estimated distance matrices with the ground truth geodesic distance matrices on the Swiss roll dataset, using a two-dimensional embedding. Best models on average are bolded (not necessarily significant). + +
| data | Noise level | Method | PearsonR | SpearmanR | Norm Fro N2 | Norm inf N2 |
| --- | --- | --- | --- | --- | --- | --- |
| Tree | 0.1 | Diffusion Map | 0.748 ± 0.125 | 0.733 ± 0.111 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 0.1 | Heat-Geo | 0.943 ± 0.037 | 0.94 ± 0.037 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 0.1 | Heat-PHATE | 0.872 ± 0.04 | 0.83 ± 0.061 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 0.1 | PHATE | 0.564 ± 0.039 | 0.469 ± 0.052 | 0.113 ± 0.011 | 0.161 ± 0.018 |
| Tree | 0.1 | Rand-Geo | 0.868 ± 0.017 | 0.85 ± 0.019 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 0.1 | Shortest Path | 0.937 ± 0.037 | 0.931 ± 0.041 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 0.1 | TSNE | 0.847 ± 0.034 | 0.824 ± 0.045 | 0.082 ± 0.012 | 0.123 ± 0.022 |
| Tree | 0.1 | UMAP | 0.692 ± 0.058 | 0.671 ± 0.047 | 0.107 ± 0.012 | 0.153 ± 0.019 |
| Tree | 0.1 | Euclidean | 0.809 ± 0.017 | 0.778 ± 0.024 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 0.5 | Diffusion Map | 0.656 ± 0.054 | 0.653 ± 0.057 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 0.5 | Heat-Geo | 0.806 ± 0.019 | 0.787 ± 0.009 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 0.5 | Heat-PHATE | 0.746 ± 0.024 | 0.744 ± 0.031 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 0.5 | PHATE | 0.766 ± 0.023 | 0.746 ± 0.03 | 0.113 ± 0.011 | 0.161 ± 0.018 |
| Tree | 0.5 | Rand-Geo | 0.721 ± 0.024 | 0.694 ± 0.024 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 0.5 | Shortest Path | 0.765 ± 0.01 | 0.738 ± 0.011 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 0.5 | TSNE | 0.795 ± 0.046 | 0.766 ± 0.055 | 0.083 ± 0.012 | 0.128 ± 0.018 |
| Tree | 0.5 | UMAP | 0.783 ± 0.06 | 0.757 ± 0.054 | 0.11 ± 0.011 | 0.157 ± 0.018 |
| Tree | 0.5 | Euclidean | 0.704 ± 0.02 | 0.672 ± 0.038 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 1.0 | Diffusion Map | 0.538 ± 0.05 | 0.471 ± 0.089 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 1.0 | Heat-Geo | 0.613 ± 0.025 | 0.58 ± 0.036 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 1.0 | Heat-PHATE | 0.614 ± 0.02 | 0.571 ± 0.044 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 1.0 | PHATE | 0.615 ± 0.017 | 0.572 ± 0.036 | 0.113 ± 0.011 | 0.161 ± 0.018 |
| Tree | 1.0 | Rand-Geo | 0.487 ± 0.064 | 0.465 ± 0.071 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 1.0 | Shortest Path | 0.542 ± 0.047 | 0.514 ± 0.06 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree | 1.0 | TSNE | 0.583 ± 0.042 | 0.553 ± 0.045 | 0.086 ± 0.011 | 0.135 ± 0.017 |
| Tree | 1.0 | UMAP | 0.595 ± 0.032 | 0.562 ± 0.036 | 0.111 ± 0.011 | 0.158 ± 0.019 |
| Tree | 1.0 | Euclidean | 0.502 ± 0.051 | 0.479 ± 0.064 | 0.113 ± 0.012 | 0.161 ± 0.019 |
| Tree high | 0.1 | Diffusion Map | 0.754 ± 0.049 | 0.741 ± 0.057 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 0.1 | Heat-Geo | 0.956 ± 0.014 | 0.957 ± 0.015 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 0.1 | Heat-PHATE | 0.831 ± 0.082 | 0.764 ± 0.115 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 0.1 | PHATE | 0.484 ± 0.036 | 0.4 ± 0.028 | 0.267 ± 0.02 | 0.369 ± 0.025 |
| Tree high | 0.1 | Rand-Geo | 0.817 ± 0.013 | 0.774 ± 0.022 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 0.1 | Shortest Path | 0.958 ± 0.014 | 0.956 ± 0.017 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 0.1 | TSNE | 0.89 ± 0.039 | 0.866 ± 0.043 | 0.233 ± 0.021 | 0.327 ± 0.026 |
| Tree high | 0.1 | UMAP | 0.8 ± 0.031 | 0.764 ± 0.034 | 0.259 ± 0.021 | 0.36 ± 0.028 |
| Tree high | 0.1 | Euclidean | 0.878 ± 0.042 | 0.859 ± 0.051 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 0.5 | Diffusion Map | 0.706 ± 0.124 | 0.705 ± 0.113 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 0.5 | Heat-Geo | 0.932 ± 0.022 | 0.928 ± 0.023 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 0.5 | Heat-PHATE | 0.923 ± 0.023 | 0.921 ± 0.022 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 0.5 | PHATE | 0.844 ± 0.048 | 0.79 ± 0.07 | 0.267 ± 0.02 | 0.369 ± 0.025 |
| Tree high | 0.5 | Rand-Geo | 0.875 ± 0.042 | 0.855 ± 0.048 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 0.5 | Shortest Path | 0.917 ± 0.025 | 0.91 ± 0.03 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 0.5 | TSNE | 0.922 ± 0.035 | 0.91 ± 0.045 | 0.237 ± 0.021 | 0.334 ± 0.027 |
| Tree high | 0.5 | UMAP | 0.823 ± 0.054 | 0.803 ± 0.041 | 0.261 ± 0.021 | 0.361 ± 0.026 |
| Tree high | 0.5 | Euclidean | 0.819 ± 0.048 | 0.799 ± 0.053 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 1.0 | Diffusion Map | 0.598 ± 0.117 | 0.613 ± 0.103 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 1.0 | Heat-Geo | 0.794 ± 0.066 | 0.805 ± 0.049 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 1.0 | Heat-PHATE | 0.826 ± 0.064 | 0.823 ± 0.067 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 1.0 | PHATE | 0.827 ± 0.059 | 0.82 ± 0.062 | 0.267 ± 0.02 | 0.369 ± 0.025 |
| Tree high | 1.0 | Rand-Geo | 0.71 ± 0.043 | 0.686 ± 0.045 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 1.0 | Shortest Path | 0.771 ± 0.064 | 0.753 ± 0.07 | 0.267 ± 0.021 | 0.369 ± 0.026 |
| Tree high | 1.0 | TSNE | 0.84 ± 0.066 | 0.821 ± 0.074 | 0.238 ± 0.02 | 0.335 ± 0.026 |
| Tree high | 1.0 | UMAP | 0.853 ± 0.051 | 0.839 ± 0.057 | 0.264 ± 0.021 | 0.365 ± 0.026 |
| Tree high | 1.0 | Euclidean | 0.683 ± 0.067 | 0.665 ± 0.07 | 0.267 ± 0.021 | 0.369 ± 0.026 |
+ +Table 9: Comparison of the estimated distance matrices with the ground truth geodesic distance matrices on the Tree dataset, using a two-dimensional embedding. Best models on average are bolded (not necessarily significant). + +
| data | Noise level | Method | Homogeneity | Adjusted Rand Score | Adjusted Mutual Info Score |
| --- | --- | --- | --- | --- | --- |
| Swiss roll | 0.1 | Heat-Geo | 0.82 ± 0.008 | 0.668 ± 0.034 | 0.74 ± 0.018 |
| Swiss roll | 0.1 | Phate | 0.731 ± 0.035 | 0.546 ± 0.044 | 0.652 ± 0.046 |
| Swiss roll | 0.1 | TSNE | 0.748 ± 0.067 | 0.537 ± 0.1 | 0.668 ± 0.068 |
| Swiss roll | 0.1 | UMAP | 0.81 ± 0.036 | 0.611 ± 0.039 | 0.726 ± 0.045 |
| Swiss roll | 0.5 | Heat-Geo | 0.813 ± 0.026 | 0.656 ± 0.049 | 0.733 ± 0.022 |
| Swiss roll | 0.5 | Phate | 0.735 ± 0.048 | 0.543 ± 0.064 | 0.656 ± 0.053 |
| Swiss roll | 0.5 | TSNE | 0.764 ± 0.07 | 0.564 ± 0.097 | 0.684 ± 0.065 |
| Swiss roll | 0.5 | UMAP | 0.826 ± 0.019 | 0.664 ± 0.073 | 0.744 ± 0.032 |
| Swiss roll | 1.0 | Heat-Geo | 0.722 ± 0.051 | 0.548 ± 0.091 | 0.652 ± 0.056 |
| Swiss roll | 1.0 | Phate | 0.482 ± 0.014 | 0.317 ± 0.031 | 0.428 ± 0.021 |
| Swiss roll | 1.0 | TSNE | 0.757 ± 0.037 | 0.562 ± 0.058 | 0.679 ± 0.042 |
| Swiss roll | 1.0 | UMAP | 0.726 ± 0.041 | 0.51 ± 0.077 | 0.65 ± 0.05 |
| Swiss roll high | 0.1 | Heat-Geo | 0.82 ± 0.015 | 0.666 ± 0.033 | 0.739 ± 0.019 |
| Swiss roll high | 0.1 | Phate | 0.705 ± 0.03 | 0.518 ± 0.048 | 0.628 ± 0.04 |
| Swiss roll high | 0.1 | TSNE | 0.757 ± 0.078 | 0.558 ± 0.115 | 0.677 ± 0.08 |
| Swiss roll high | 0.1 | UMAP | 0.796 ± 0.03 | 0.624 ± 0.048 | 0.714 ± 0.037 |
| Swiss roll high | 0.5 | Heat-Geo | 0.805 ± 0.021 | 0.655 ± 0.047 | 0.725 ± 0.035 |
| Swiss roll high | 0.5 | Phate | 0.745 ± 0.04 | 0.562 ± 0.061 | 0.664 ± 0.047 |
| Swiss roll high | 0.5 | TSNE | 0.747 ± 0.075 | 0.538 ± 0.11 | 0.668 ± 0.075 |
| Swiss roll high | 0.5 | UMAP | 0.787 ± 0.041 | 0.573 ± 0.067 | 0.703 ± 0.032 |
| Swiss roll high | 1.0 | Heat-Geo | 0.7 ± 0.045 | 0.534 ± 0.057 | 0.644 ± 0.032 |
| Swiss roll high | 1.0 | Phate | 0.552 ± 0.047 | 0.386 ± 0.056 | 0.496 ± 0.04 |
| Swiss roll high | 1.0 | TSNE | 0.754 ± 0.034 | 0.548 ± 0.068 | 0.675 ± 0.036 |
| Swiss roll high | 1.0 | UMAP | 0.76 ± 0.041 | 0.56 ± 0.077 | 0.68 ± 0.05 |
| Swiss roll very high | 0.1 | Heat-Geo | 0.818 ± 0.033 | 0.668 ± 0.074 | 0.738 ± 0.039 |
| Swiss roll very high | 0.1 | Phate | 0.688 ± 0.043 | 0.497 ± 0.053 | 0.614 ± 0.053 |
| Swiss roll very high | 0.1 | TSNE | 0.741 ± 0.07 | 0.544 ± 0.101 | 0.662 ± 0.075 |
| Swiss roll very high | 0.1 | UMAP | 0.816 ± 0.042 | 0.65 ± 0.069 | 0.733 ± 0.054 |
| Swiss roll very high | 0.5 | Heat-Geo | 0.73 ± 0.045 | 0.605 ± 0.093 | 0.701 ± 0.028 |
| Swiss roll very high | 0.5 | Phate | 0.758 ± 0.034 | 0.55 ± 0.037 | 0.676 ± 0.014 |
| Swiss roll very high | 0.5 | TSNE | 0.77 ± 0.054 | 0.557 ± 0.093 | 0.708 ± 0.031 |
| Swiss roll very high | 0.5 | UMAP | 0.789 ± 0.052 | 0.574 ± 0.101 | 0.707 ± 0.061 |
| Swiss roll very high | 1.0 | Heat-Geo | 0.592 ± 0.033 | 0.427 ± 0.063 | 0.545 ± 0.031 |
| Swiss roll very high | 1.0 | Phate | 0.531 ± 0.042 | 0.377 ± 0.046 | 0.486 ± 0.045 |
| Swiss roll very high | 1.0 | TSNE | 0.738 ± 0.019 | 0.551 ± 0.039 | 0.662 ± 0.025 |
| Swiss roll very high | 1.0 | UMAP | 0.736 ± 0.057 | 0.542 ± 0.102 | 0.66 ± 0.061 |
| Tree | 0.1 | Heat-Geo | 0.784 ± 0.051 | 0.734 ± 0.07 | 0.786 ± 0.051 |
| Tree | 0.1 | Phate | 0.55 ± 0.042 | 0.409 ± 0.064 | 0.555 ± 0.042 |
| Tree | 0.1 | TSNE | 0.706 ± 0.054 | 0.61 ± 0.075 | 0.712 ± 0.055 |
| Tree | 0.1 | UMAP | 0.678 ± 0.086 | 0.584 ± 0.12 | 0.681 ± 0.086 |
| Tree | 0.5 | Heat-Geo | 0.545 ± 0.121 | 0.411 ± 0.154 | 0.577 ± 0.094 |
| Tree | 0.5 | Phate | 0.529 ± 0.111 | 0.404 ± 0.151 | 0.555 ± 0.095 |
| Tree | 0.5 | TSNE | 0.647 ± 0.049 | 0.591 ± 0.065 | 0.65 ± 0.048 |
| Tree | 0.5 | UMAP | 0.645 ± 0.051 | 0.565 ± 0.058 | 0.652 ± 0.05 |
| Tree | 1.0 | Heat-Geo | 0.398 ± 0.07 | 0.3 ± 0.077 | 0.42 ± 0.07 |
| Tree | 1.0 | Phate | 0.418 ± 0.08 | 0.337 ± 0.093 | 0.43 ± 0.075 |
| Tree | 1.0 | TSNE | 0.405 ± 0.077 | 0.378 ± 0.074 | 0.405 ± 0.077 |
| Tree | 1.0 | UMAP | 0.432 ± 0.086 | 0.395 ± 0.098 | 0.432 ± 0.085 |
+ +Table 10: Clustering results on swiss roll (with distribution) and tree. Best models on average are bolded (not necessarily significant). + +# Embeddings of HeatGeo for different hyperparameters + +![](images/c45bc0d7fd77ad28c23670c8719e2a33bd93ad310ea1344a827a59c7d6e7a536.jpg) + +![](images/32dd037d87b232761ab2b3eb2792262ace0e70cc2150fe57023572a2f8c49b77.jpg) + +![](images/6e04ca1eeca05e052aa6fa6227e2d3352d6db2cc623b7085aa5c70ae7cb1ee62.jpg) +Harnack Regularization + +![](images/f9bdb5c0c031931e38b9eef279cd66af6967c81993ccff1b007ecd3400c74947.jpg) + +![](images/febc15220a52c52feeebb5a2085d5265ba3613aabb5a999360b981d001d5a6b1.jpg) + +![](images/c51cd1225460679a36d642680578d8e51ac64e30482170b1792d3044cf195107.jpg) + +![](images/c79d4e6caad01dc626192ca3d5b4e47629d8a69ebe134c9b739ebbbdfec7ac8c.jpg) + +![](images/c88eca8be688fb95edc910a84c72c6ca1d1e31e0d12bb3db110dd55392062079.jpg) +Diffusion Time + +![](images/33e5ff972646cc2d5197148b523eff8b6524f12ed9bcdd712b3f433d01996185.jpg) + +![](images/9705efc1446176eb194f7b97ce018a08700efb2036323b3c6aa368d96a7e7034.jpg) + +![](images/014fd5cc86597ebb86574972d4df88435d8a078ae195df3b1d4452cfb27e1923.jpg) + +![](images/937499c46050ad63ba892ba3ff66dd4d2c9a8228490597396ec459d15eea7044.jpg) + +![](images/99fdd936a7080a94d47310934b0ec306cbd0d73d2da6da82836d7763beb7ea39.jpg) +Number of neighbours + +![](images/c937ab9811fe08d2d798d03a84b12abf515344c5ea53f4b67e3856be1296bf4c.jpg) + +![](images/83c9977b51cc0d817e1ffc77436c7aea5369f6e9d136f253dc1e61b31878ccdf.jpg) + +![](images/82cada10aeb63e7650efe02cc5a353c3afdca61b9e8b761a209918b2d40e789c.jpg) + +![](images/f435cb208f08caa4d9582737483b02644681f3af10c5cd6538ffa20f427f2d3d.jpg) +Order of Heat Kernel Approximation (Euler) + +![](images/be8ba79b0de87ad6e9b74e088d70561c6ed53868ef770f9fda28a8e66ba8f78d.jpg) + +![](images/f4e5735b0a780eda0de04d2006061fcea55265001149d6dda20f3a2a3fdca815.jpg) + +![](images/9c4a9d8ecde2bc2fa7b633f140df192cec4c9f433246a16c308d539f6f801e10.jpg) + 
+![](images/4b4dc344f4f3a9dce5f69d2b90b4efde2801d7fe4e989a778885fc39c16244f5.jpg) +Figure 8: Embeddings of Heat Geodesic Embedding for different choices of hyperparameters on the EB dataset. We evaluate the impact of the Harnack regularization, the diffusion time, the number of neighbours in the kNN, and the order of the approximation for the Euler and Chebyshev approximations. + +![](images/42283ff9211b6c9361ccb6c4534d7dbead17cae921d58e349ef3b3adccdd3480.jpg) +Order of Heat Kernel Approximation (Chebyshev) + +![](images/f136fe006f04a2237cc1eaeb6c69def836293e82db978672b7dee3c0aa68463c.jpg) + +![](images/0f077075484d155f87e75e4a5ec12300568c9f3ca6097bfaa427fdae0949ce97.jpg) + +![](images/c0134f77a954056cdb5d6deb0dc81fec1ca48358caa2fafe9d79e3eca12f11f5.jpg) + +![](images/47bc349393171018d07e640b7098d3e5f614659b58c20e8f6b9e14cea8ce0f08.jpg) +Figure 9: Impact of the diffusion time on the Pearson correlation between the estimated distance matrix and the ground truth distance matrix for different methods on the Swiss roll dataset. + +Pearson $\rho$ as a function of the order of the heat kernel approximation on Swiss roll data + +![](images/9d503a9e4d1fafaf07820c0a764ac6a98c02d317f20c149c77bcdeddaa1bcaf5.jpg) +Figure 10: Impact of the Chebyshev approximation order on the Pearson correlation between the estimated distance matrix and the ground truth distance matrix for different methods on the Swiss roll dataset. + +![](images/731c4cc9ec97f21151b68ca5cd5fe7b0dc8a3011a931539f4f78a7016db00de8.jpg) +Figure 11: Impact of the number of neighbours on the Pearson correlation between the estimated distance matrix and the ground truth distance matrix for different methods on the Swiss roll dataset. + +![](images/8fdeb98a1d7137112cf5dc0db79d19389732c94d2905d7b661d4472eb5c0380d.jpg) +Figure 12: Impact of the Harnack regularization on the Pearson correlation between the estimated distance matrix and the ground truth distance matrix for HeatGeo on the Swiss roll dataset. 
+ +![](images/f2f36d45c70ae2c02ef2d86ee1582fa45bb8b6e8d5eae91f2fcfc8dbcca062d2.jpg) +Pearson $\rho$ as a function of graph type on Swiss roll 10D data +Figure 13: Pearson correlation between estimated and ground truth distance matrices for the 10-dimensional Swiss roll dataset for various graph constructions. Standard deviations are computed over the 5 test folds. + +Table 11: Clustering quality metrics for different methods. We report the homogeneity, the adjusted Rand score, and the adjusted mutual information (aMI). Best models on average are bolded (higher is better). + +
| data | Noise level | Method | Homogeneity | Adjusted Rand Score | Adjusted Mutual Info Score |
| --- | --- | --- | --- | --- | --- |
| Mnist | 0 | Diff-Map | 0.556 ± 0.002 | 0.347 ± 0.002 | 0.622 ± 0.002 |
| Mnist | 0 | Heat-Geo | 0.785 ± 0.0 | 0.695 ± 0.0 | 0.829 ± 0.001 |
| Mnist | 0 | Phate | 0.822 ± 0.01 | 0.72 ± 0.017 | 0.835 ± 0.011 |
| Mnist | 0 | TSNE | 0.903 ± 0.003 | 0.871 ± 0.002 | 0.902 ± 0.003 |
| Mnist | 0 | UMAP | 0.851 ± 0.016 | 0.846 ± 0.005 | 0.86 ± 0.015 |
| Coil | 0 | Diff-Map | 0.21 ± 0.036 | 0.041 ± 0.015 | 0.142 ± 0.024 |
| Coil | 0 | Heat-Geo | 0.849 ± 0.016 | 0.67 ± 0.029 | 0.806 ± 0.022 |
| Coil | 0 | Phate | 0.804 ± 0.017 | 0.615 ± 0.028 | 0.735 ± 0.021 |
| Coil | 0 | TSNE | 0.907 ± 0.014 | 0.79 ± 0.03 | 0.88 ± 0.02 |
| Coil | 0 | UMAP | 0.871 ± 0.009 | 0.725 ± 0.019 | 0.826 ± 0.012 |
| Pbmc | 0 | Diff-Map | 0.026 ± 0.001 | 0.011 ± 0.0 | 0.038 ± 0.001 |
| Pbmc | 0 | Heat-Geo | 0.734 ± 0.009 | 0.724 ± 0.019 | 0.768 ± 0.017 |
| Pbmc | 0 | Phate | 0.798 ± 0.012 | 0.818 ± 0.009 | 0.785 ± 0.01 |
| Pbmc | 0 | TSNE | 0.605 ± 0.019 | 0.437 ± 0.032 | 0.544 ± 0.022 |
| Pbmc | 0 | UMAP | 0.177 ± 0.037 | 0.097 ± 0.033 | 0.148 ± 0.035 |
+ +Table 12: Average computing time for creating the embeddings and the corresponding distance matrices for the different methods. All methods are applied to the Swiss roll dataset in three dimensions with 2000 samples. We present empirical averages and standard deviations over ten repetitions. + +
| Method | Time (s) |
| --- | --- |
| UMAP | 4.21 ± 0.68 |
| t-SNE | 80.15 ± 62.44 |
| Isomap | 2.58 ± 0.03 |
| PHATE | 3.30 ± 0.05 |
| Diff Map | 8.62 ± 0.23 |
| HeatGeo (Backward Euler) | 2.60 ± 0.05 |
| HeatGeo (Chebyshev) | 2.11 ± 0.05 |
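As a final sanity check on the Backward Euler variant timed above: Sec. C.2 notes that backward Euler, unlike a truncated Chebyshev expansion, does not threshold the distance. Spectrally, a $K$-step implicit Euler discretization of the heat equation applies the filter $(1 + t\lambda/K)^{-K}$ to each Laplacian eigenvalue $\lambda$, which converges to the exact heat filter $e^{-t\lambda}$ as $K$ grows. A minimal scalar check (plain Python, illustrative only):

```python
import math

def euler_filter(lam, t, K):
    """K-step backward-Euler approximation of the heat filter exp(-t*lam):
    solving (I + (t/K) L) u_{k+1} = u_k corresponds, per eigenvalue lam of L,
    to the scalar filter (1 + t*lam/K)^(-K)."""
    return (1 + t * lam / K) ** (-K)

# the approximation approaches exp(-t*lam) as the number of implicit steps grows
t, lam = 2.0, 1.5
errors = [abs(euler_filter(lam, t, K) - math.exp(-t * lam)) for K in (1, 4, 16, 64)]
assert all(a > b for a, b in zip(errors, errors[1:]))  # error shrinks with K
```

Because $(1 + t\lambda/K)^{-K}$ is strictly positive for every $\lambda$, no eigenmode is zeroed out, which is why the backward Euler distance is never truncated.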
\ No newline at end of file diff --git a/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/images.zip b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e007ed7c4ee8a39143ee75de7412fcdc5824cebd --- /dev/null +++ b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae89162cd582e1ccfe32cc07c9801563583aa0f14efe8cd185aac61335448ebc +size 3166455 diff --git a/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/layout.json b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4fc1a47f17e8b45d56b2fdbcf4dfe216b7cbc353 --- /dev/null +++ b/aheatdiffusionperspectiveongeodesicpreservingdimensionalityreduction/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0cb4a42d307baeddfaca84f3ecb9e3f162240d7177ebb1242228f5353ab44ee7 +size 1025102 diff --git a/aheavytailedalgebraforprobabilisticprogramming/17eb1129-1399-4166-baa0-ccc9b3efe48a_content_list.json b/aheavytailedalgebraforprobabilisticprogramming/17eb1129-1399-4166-baa0-ccc9b3efe48a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d8c7a09785cdf36070e0ac437d8f37abf02ac212 --- /dev/null +++ b/aheavytailedalgebraforprobabilisticprogramming/17eb1129-1399-4166-baa0-ccc9b3efe48a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8fab0f340a39c84ee9213a345f837aab2b4ce230f7fe8099085901e4c0af1408 +size 70260 diff --git a/aheavytailedalgebraforprobabilisticprogramming/17eb1129-1399-4166-baa0-ccc9b3efe48a_model.json b/aheavytailedalgebraforprobabilisticprogramming/17eb1129-1399-4166-baa0-ccc9b3efe48a_model.json new file mode 100644 index 
# A Heavy-Tailed Algebra for Probabilistic Programming

Feynman Liang

Department of Statistics
University of California, Berkeley
feynman@berkeley.edu

Liam Hodgkinson

School of Mathematics and Statistics
University of Melbourne, Australia
lhodgkinson@unimelb.edu.au

Michael W. Mahoney

ICSI, LBNL, and Department of Statistics
University of California, Berkeley
mmahoney@stat.berkeley.edu

# Abstract

Despite the successes of probabilistic models based on passing noise through neural networks, recent work has identified that such methods often fail to capture tail behavior accurately unless the tails of the base distribution are appropriately calibrated.
To overcome this deficiency, we propose a systematic approach for analyzing the tails of random variables, and we illustrate how this approach can be used during the static analysis (before drawing samples) pass of a probabilistic programming language (PPL) compiler. To characterize how the tails change under various operations, we develop an algebra which acts on a three-parameter family of tail asymptotics and which is based on the generalized Gamma distribution. Our algebraic operations are closed under addition and multiplication; they are capable of distinguishing sub-Gaussians with differing scales; and they handle ratios sufficiently well to reproduce the tails of most important statistical distributions directly from their definitions. Our empirical results confirm that inference algorithms that leverage our heavy-tailed algebra attain superior performance across a number of density modeling and variational inference (VI) tasks.

# 1 Introduction

Within the context of modern probabilistic programming languages (PPLs), recent developments in functional programming [51], programming languages [3], and deep variational inference (VI) [4] combine to facilitate efficient probabilistic modelling and inference. But despite the broadening appeal of probabilistic programming, common pitfalls such as mismatched distribution supports [32] and non-integrable expectations [53, 55, 60] remain uncomfortably commonplace and remarkably challenging to address. In particular, heavy-tailed distributions arise in a wide range of statistical applications and are known to present substantial technical challenges [37, 55, 60]. Recent innovations aiming to improve PPLs have automated verification of distribution constraints [32], tamed noisy gradient estimates [16] as well as unruly density ratios [53, 55], and approximated high-dimensional distributions with non-trivial bulks [39].
To address the issue of heavy-tailed targets, approaches which initialize with non-Gaussian tails have been proposed [25, 33]. However, these methods typically require optimization and/or sampling strategies to estimate the tails of the target distribution, and such strategies are often unstable or fail to allow for a sufficiently wide array of possible tail behaviours.

![](images/a166ffa4636a2a535bdd288c1abc463172da86e2e9aab1518a252b09638d0efd.jpg)
Figure 1: Our heavy-tailed algebra ensures that the tails of density estimators and variational approximations are calibrated to those of the target distribution. Here, a generative model expressed in a PPL (1) is analyzed using the GGA without drawing any samples (2) to compute the tail parameters of the target. A representative distribution with calibrated tails is chosen for the initial approximation (3), and a learnable tail-invariant Lipschitz pushforward (see bottom of Table 1, and Theorem 2) is optimized (4) to correct the bulk approximation.

Motivated by this, we introduce the first procedure for static analysis of a probabilistic program that automates the analysis of target distributions' tails. In addition, we show how tail metadata obtained from this procedure can be leveraged by PPL compilers to generate inference algorithms which mitigate a number of pathologies. For example, importance sampling estimators can exhibit infinite variance if the tail of the approximating density is lighter than the target's; most prominent black-box VI methods are incapable of changing their tail behaviour from an initial proposal distribution [25, 33]; and Markov chain Monte Carlo (MCMC) algorithms may lose ergodicity when the tail of the target density falls outside of a particular family [42]. All of these issues could be avoided if the tail of the target were known before runtime.
To classify tail asymptotics, we propose a three-parameter family of distributions which is closed under most typical operations. This family is based on the generalized Gamma distribution (Equation (2)), and it interpolates between established asymptotics on sub-Gaussian random variables [31] and regularly varying random variables [35]. Algebraic operations on random variables can then be lifted to computations on the tail parameters. This results in a heavy-tailed algebra that we designate the generalized Gamma algebra (GGA). Through analyzing operations like $X + Y$, $X^2$, and $X / Y$ at the level of densities (e.g., additive convolution $p_X \oplus p_Y$), the tail parameters of a target density can be estimated from the parameters of any input distributions using Table 1.

Operationalizing our GGA, we propose a tail-inferential static analysis strategy analogous to traditional type inference. GGA tail metadata can be used to diagnose and address tail-related problems in downstream tasks, such as employing Riemannian-manifold methods [17] to sample heavy tails, or to preemptively detect unbounded expectations. Here, we consider density estimation and VI, where we use the GGA-computed tail of the target density to calibrate our density approximation. When composed with a learnable Lipschitz pushforward map (Section 3.2), the resulting combination is a flexible density approximator with tails provably calibrated to match those of the target.

Contributions. Here are our main contributions.

- We propose the generalized Gamma algebra (GGA) as an example of a heavy-tailed algebra for probability distributions. This extends prior work on classifying tail asymptotics, while including both sub-Gaussian / sub-exponential [31] as well as power-law / Pareto-based tail indices [11]. Composing operations outlined in Table 1, one can compute the GGA tail class for downstream random variables of interest.
- We implement the GGA as an abstract interpretation during the static analysis phase of a PPL compiler. This unlocks the ability to leverage GGA metadata in order to better tailor MCMC and VI algorithms produced by a PPL.
- Finally, we demonstrate that density estimators which combine our GGA tails with neural networks (autoregressive normalizing flows [39] and neural spline flows [15]) simultaneously achieve calibrated tails without sacrificing good bulk approximation.

# 2 The Generalized Gamma Algebra

First, we formulate our heavy-tailed algebra of random variables, which is closed under most standard elementary operations (addition, multiplication, powers). The central class of random variables under consideration are those with tails of the form in Definition 1.

Definition 1. A random variable $X$ is said to have a generalized Gamma tail if the Lebesgue density of $|X|$ satisfies

$$
p_{|X|}(x) \sim c\, x^{\nu} e^{-\sigma x^{\rho}}, \quad \text{as } x \rightarrow \infty, \tag{1}
$$

for some $c > 0$, $\nu \in \mathbb{R}$, $\sigma > 0$ and $\rho \in \mathbb{R}$. Denote the set of all such random variables by $\mathcal{G}$.

Consider the following equivalence relation on $\mathcal{G}$: $X \equiv Y$ if and only if the ratio $p_{|X|}(x) / p_{|Y|}(x)$ is bounded away from $0$ and $+\infty$ for all sufficiently large $x$. The resulting equivalence classes can be represented by their corresponding parameters $\nu, \sigma, \rho$. Hence, we denote the class of random variables $X$ satisfying Equation (1) by $(\nu, \sigma, \rho)$. In the special case where $\rho = 0$, for a fixed $\nu < -1$, each class $(\nu, \sigma, 0)$ for $\sigma > 0$ is equivalent, and is denoted by $\mathcal{R}_{|\nu|}$, representing regularly varying tails. Our algebra operates on these equivalence classes of $\mathcal{G}$, characterizing the change in tail behaviour under various operations.
To incorporate tails which lie outside of $\mathcal{G}$, we let $\mathcal{R}_1$ denote super-heavy tails, i.e., random variables with tails heavier than any random variable in $\mathcal{G}$. All operations remain consistent with this notation. Likewise, we let $\mathcal{L}$ denote super-light tails, which are treated in our algebra as a class where $\rho = +\infty$ (effectively constants).

Equation (1) and the name of the algebra are derived from the generalized Gamma distribution.

Definition 2. Let $\nu \in \mathbb{R}$, $\sigma > 0$, and $\rho \in \mathbb{R} \setminus \{0\}$ be such that $(\nu + 1) / \rho > 0$. A non-negative random variable $X$ is generalized Gamma distributed with parameters $\nu, \sigma, \rho$ if it has Lebesgue density

$$
p_{\nu, \sigma, \rho}(x) = c_{\nu, \sigma, \rho}\, x^{\nu} e^{-\sigma x^{\rho}}, \quad x > 0, \tag{2}
$$

where $c_{\nu, \sigma, \rho} = \rho \sigma^{(\nu + 1) / \rho} / \Gamma((\nu + 1) / \rho)$ is the normalizing constant.

The importance of the generalized Gamma form arises due to a combination of two factors:

(i) The majority of interesting continuous univariate distributions with infinite support satisfy Equation (1), including Gaussians ($\nu = 0$, $\rho = 2$), gamma/exponential/chi-squared ($\nu > -1$, $\rho = 1$), Weibull/Fréchet ($\rho = \nu + 1$), and Student $t$/Cauchy/Pareto ($\mathcal{R}_{|\nu|}$). A notable exception is the log-normal distribution (see Example 8 in Appendix C).
(ii) The set $\mathcal{G}$ is known to be closed under additive convolution, positive powers, and Lipschitz functions. We prove it is closed under multiplicative convolution as well. This covers the majority of elementary operations on independent random variables. Reciprocals, exponentials, and logarithms are the only exceptions; however, we will introduce a few "tricks" to handle these cases as well.
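As a quick numerical sanity check of the normalizing constant $c_{\nu,\sigma,\rho}$ in Definition 2 (a self-contained sketch; the function name is ours, not part of the paper's code), note that the class $(\nu, \sigma, \rho) = (0, 1/2, 2)$ recovers the half-normal density $\sqrt{2/\pi}\, e^{-x^2/2}$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def gen_gamma_pdf(x, nu, sigma, rho):
    # Density of Definition 2: c * x^nu * exp(-sigma * x^rho) for x > 0,
    # with c = rho * sigma^{(nu+1)/rho} / Gamma((nu+1)/rho).
    c = rho * sigma ** ((nu + 1) / rho) / gamma((nu + 1) / rho)
    return c * x ** nu * np.exp(-sigma * x ** rho)

# (nu, sigma, rho) = (0, 1/2, 2): the half-normal density sqrt(2/pi) e^{-x^2/2}.
total, _ = quad(gen_gamma_pdf, 0, np.inf, args=(0.0, 0.5, 2.0))
print(total)  # → 1.0 (up to quadrature error)
```

The same check passes for the exponential case $(\nu, \sigma, \rho) = (0, 1, 1)$, where $c_{0,1,1} = 1$.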
The full list of operations in the GGA is compiled in Table 1 and described in detail in Appendix A. GGA classes for common probability distributions are provided in Appendix B. All operations in the GGA can be proven to exhibit identical behaviour to their corresponding operations on random variables, with the sole exception of reciprocals (marked by $\dagger$), where additional assumptions are required. The asymptotics for operations marked with an asterisk are novel to this work. For further details, refer to Appendix A.

Repeated applications. Provided independence holds, compositions of operations in the GGA remain consistent unless one applies Lipschitz functions, logarithms, or exponentials. If one of these operations is applied, the tail becomes an upper bound, which remains consistent under addition, multiplication, and powers, but not reciprocals. Given that we are working with a fixed class of tails, such behavior is inevitable, and it is possible to perform a sequence of operations for which the computed tail is no longer accurate.

Posterior distributions. A primary application of PPLs is to perform Bayesian inference. To cover this use case, it is necessary to prescribe a procedure to deal with posterior distributions. Consider a setup where a collection of random variables $X_{1},\ldots ,X_{n}$ are dependent on corresponding latent random elements $Z_{1},\ldots ,Z_{n}$ as well as a parameter $\theta$ through functions $f_{i}$ by $X_{i} = f_{i}(\theta ,Z_{i})$. For
simplicity, we assume that each $f_{i} = f_{i,k} \circ f_{i,k-1} \circ \dots \circ f_{i,1}$, where each $f_{ij}$ is an elementary operation in Table 1. To estimate the tail behaviour of $\theta$ conditioned on $X$, we propose an elementary approach involving inverses. For each operation $f_{ij}$, if $f_{ij}$ is a power, reciprocal, or multiplication operation, let $R_{ij}$ be given according to the following:

Powers: $f_{ij}(x) = x^{\beta}$, $R_{ij} \equiv (1 - \beta, 1, 0)$

Reciprocal: $f_{ij}(x) = x^{-1}$, $R_{ij} \equiv (2, 1, 0)$

Multiplication: $f_{ij}(x, y) = xy$, $R_{ij} \equiv (1, 1, 0)$

and otherwise, let $R_{ij} \equiv 1$. Letting $f_{i}^{-1}(x, z)$ denote the inverse of $f_{i}$ in the first argument, we show in Appendix A,

$$
\theta | \boldsymbol{X} = \boldsymbol{x} \equiv \left(\underset{i=1}{\overset{n}{\&}} f_{i}^{-1}(\boldsymbol{x}, Z_{i})\right) \& \left(\underset{i,j=1}{\overset{n,k}{\&}} R_{ij}\right) \& \pi,
$$

where $\pi$ denotes the prior for $\theta$. Since the inverse of a composition of operations is a composition of inverses, the tail of $f_{i}^{-1}(\boldsymbol{x}, Z_{i})$ can be determined by backpropagating through the computation graph for $X_{i}$ and sequentially applying inverse operations. Consequently, the tail behaviour of the posterior distribution for one parameter can be obtained using a single backward pass. Posterior distributions for multiple parameters involve repeating this procedure one parameter at a time, with the other parameters fixed.

Ordering. $(\nu_1, \sigma_1, \rho_1) \leq (\nu_2, \sigma_2, \rho_2) \iff \limsup_{x \to \infty} \frac{x^{\nu_1} e^{-\sigma_1 x^{\rho_1}}}{x^{\nu_2} e^{-\sigma_2 x^{\rho_2}}} < +\infty$.

Addition.
$$
(\nu_1, \sigma_1, \rho_1) \oplus (\nu_2, \sigma_2, \rho_2) \equiv \begin{cases} \max\{(\nu_1, \sigma_1, \rho_1), (\nu_2, \sigma_2, \rho_2)\} & \text{if } \rho_1 \neq \rho_2 \text{ or } \rho_1, \rho_2 < 1 \\ (\nu_1 + \nu_2 + 1, \min\{\sigma_1, \sigma_2\}, 1) & \text{if } \rho_1 = \rho_2 = 1 \\ \left(\nu_1 + \nu_2 + \frac{2 - \rho}{2}, \left(\sigma_1^{-\frac{1}{\rho - 1}} + \sigma_2^{-\frac{1}{\rho - 1}}\right)^{1 - \rho}, \rho\right) & \text{if } \rho = \rho_1 = \rho_2 > 1, \end{cases}
$$
where the maximum (the heavier of the two classes) is taken with respect to the ordering above.

Powers. $(\nu, \sigma, \rho)^{\beta} \equiv \left(\frac{\nu + 1}{\beta} - 1, \sigma, \frac{\rho}{\beta}\right)$ for $\beta > 0$.

Reciprocal.\*†
$$
(\nu, \sigma, \rho)^{-1} \equiv \begin{cases} (-\nu - 2, \sigma, -\rho) & \text{if } (\nu + 1)/\rho > 0 \text{ and } \rho \neq 0 \\ \mathcal{R}_2 & \text{otherwise.} \end{cases}
$$

Scalar multiplication. $c\,(\nu, \sigma, \rho) \equiv (\nu, \sigma |c|^{-\rho}, \rho)$.

Multiplication.\*
$$
(\nu_1, \sigma_1, \rho_1) \otimes (\nu_2, \sigma_2, \rho_2) \equiv \begin{cases} \left(\frac{1}{\mu}\left(\frac{\nu_1}{|\rho_1|} + \frac{\nu_2}{|\rho_2|} + \frac{1}{2}\right), \sigma, -\frac{1}{\mu}\right) & \text{if } \rho_1, \rho_2 < 0 \\ \left(\frac{1}{\mu}\left(\frac{\nu_1}{\rho_1} + \frac{\nu_2}{\rho_2} - \frac{1}{2}\right), \sigma, \frac{1}{\mu}\right) & \text{if } \rho_1, \rho_2 > 0 \\ \mathcal{R}_{|\nu_1|} & \text{if } \rho_1 \leq 0, \rho_2 > 0 \\ \mathcal{R}_{\min\{|\nu_1|, |\nu_2|\}} & \text{if } \rho_1 = \rho_2 = 0, \end{cases}
$$
where $\mu = \frac{1}{|\rho_1|} + \frac{1}{|\rho_2|} = \frac{|\rho_1| + |\rho_2|}{|\rho_1 \rho_2|}$ and $\sigma = \mu (\sigma_1 |\rho_1|)^{\frac{1}{\mu |\rho_1|}} (\sigma_2 |\rho_2|)^{\frac{1}{\mu |\rho_2|}}$.

Product of densities.\*
$$
(\nu_1, \sigma_1, \rho_1) \,\&\, (\nu_2, \sigma_2, \rho_2) \equiv \begin{cases} (\nu_1 + \nu_2, \sigma_2, \rho_2) & \text{if } \rho_1 < \rho_2 \\ (\nu_1 + \nu_2, \sigma_1 + \sigma_2, \rho) & \text{if } \rho = \rho_1 = \rho_2 \\ (\nu_1 + \nu_2, \sigma_1, \rho_1) & \text{otherwise.} \end{cases}
$$

Exponentials.\*†
$$
\exp(\nu, \sigma, \rho) \equiv \begin{cases} \mathcal{R}_{\sigma + 1} & \text{if } \rho \geq 1 \\ \mathcal{R}_1 & \text{otherwise.} \end{cases}
$$

Logarithms.\*†
$$
\log(\nu, \sigma, \rho) \equiv \begin{cases} (0, |\nu| - 1, 1) & \text{if } \nu < -1 \\ \mathcal{L} & \text{otherwise.} \end{cases}
$$

Functions ($L$-Lipschitz). $f(X_1, \ldots, X_n) \leq L \max\{X_1, \ldots, X_n\}$.

Table 1: The Generalized Gamma Algebra. Operations on random variables (e.g., $X_{1} + X_{2}$) are viewed as actions on density functions (e.g., convolution $(\nu_{1},\sigma_{1},\rho_{1}) \oplus (\nu_{2},\sigma_{2},\rho_{2})$) and the tail parameters of the result are analyzed and reported. In this table, \* denotes novel results, and † denotes that additional assumptions are required.
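As a sanity check on the addition row of Table 1: two standard Gaussians lie in the class $(0, 1/2, 2)$, and their sum $\mathcal{N}(0, 2)$ should land in $(0, 1/4, 2)$. A minimal sketch (our own transcription of the rule, with a simplified tie-breaking order for the heavier class, not the paper's code):

```python
def heavier(t1, t2):
    # The heavier (slower-decaying) class: smaller rho, then smaller sigma, then larger nu.
    (n1, s1, r1), (n2, s2, r2) = t1, t2
    if r1 != r2:
        return t1 if r1 < r2 else t2
    if s1 != s2:
        return t1 if s1 < s2 else t2
    return t1 if n1 > n2 else t2

def gga_add(t1, t2):
    """Addition rule of Table 1 for two GGA classes (nu, sigma, rho)."""
    (n1, s1, r1), (n2, s2, r2) = t1, t2
    if r1 != r2 or (r1 < 1 and r2 < 1):
        return heavier(t1, t2)          # the heavier summand dominates the sum
    if r1 == r2 == 1:
        return (n1 + n2 + 1, min(s1, s2), 1.0)
    rho = r1                             # remaining case: rho1 == rho2 > 1
    sigma = (s1 ** (-1 / (rho - 1)) + s2 ** (-1 / (rho - 1))) ** (1 - rho)
    return (n1 + n2 + (2 - rho) / 2, sigma, rho)

# N(0,1) has density ∝ e^{-x^2/2}, i.e., class (0, 1/2, 2); the sum of two
# independent standard normals is N(0,2), density ∝ e^{-x^2/4}: class (0, 1/4, 2).
print(gga_add((0.0, 0.5, 2.0), (0.0, 0.5, 2.0)))  # → (0.0, 0.25, 2.0)
```

The same function reproduces the exponential case: for $\rho_1 = \rho_2 = 1$ the heavier (smaller $\sigma$) rate survives, matching the second branch of the rule.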
# 3 Implementation

# 3.1 Compile-time static analysis

To illustrate an implementation of the GGA for static analysis, we sketch the operation of the PPL compiler at a high level and defer to the code in the Supplementary Materials for details. A probabilistic program is first inspected using Python's built-in ast module and transformed to static single assignment (SSA) form [43]. Next, standard compiler optimizations (e.g., dead code elimination, constant propagation) are applied, and an execution of the optimized program is traced [4, 58] and accumulated in a directed acyclic graph representation. A breadth-first type-checking pass, as seen in Algorithm 1, completes in linear time, and the GGA results may be applied to implement computeGGA() using the following steps:

Algorithm 1: GGA tails static analysis pass
Data: Abstract syntax tree for a PPL program
Result: GGA parameter estimates for all random variables

    frontier ← [rv : Parents(rv) = ∅];
    tails ← {};
    while frontier ≠ ∅ do
        next ← frontier.popLeft();
        tails[next] ← computeGGA(next.op, next.parents);
        frontier ← frontier + next.children();
    end
    return tails

- If a node has no parents, then it is an atomic distribution and its tail parameters are known (Table 5);
- Otherwise, the node is an operation taking its potentially stochastic inputs (parents) to its output. Consult Table 1 for the output GGA tails.

# 3.2 Representative distributions

For each $(\nu, \sigma, \rho)$, we make a carefully defined choice of $p$ on $\mathbb{R}$ such that if $X \sim p$, then $X \equiv (\nu, \sigma, \rho)$. This way, any random variable $f(X)$, where $f$ is 1-Lipschitz, will exhibit the correct tail, and so approximations of this form may be used for VI or density estimation.
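The breadth-first pass of Algorithm 1 above can be sketched in Python as follows (a minimal, illustrative sketch: the `Node` class and the two `compute_gga` rules are our own stand-ins, not the paper's beanmachine implementation):

```python
from collections import deque

class Node:
    """Toy DAG node: an atomic distribution with a known GGA tail, or an operation."""
    def __init__(self, name, op=None, parents=(), tail=None):
        self.name, self.op, self.parents = name, op, list(parents)
        self.tail = tail              # known (nu, sigma, rho) for atomic nodes
        self.children = []
        for p in self.parents:
            p.children.append(self)

def compute_gga(op, parent_tails):
    # Illustrative dispatch covering two rows of Table 1.
    if op == "scale2":                # 2 * X: (nu, sigma * |c|^{-rho}, rho) with c = 2
        (nu, sigma, rho), = parent_tails
        return (nu, sigma * 2.0 ** (-rho), rho)
    if op == "add_eq1":               # X + Y with rho1 = rho2 = 1
        (n1, s1, _), (n2, s2, _) = parent_tails
        return (n1 + n2 + 1, min(s1, s2), 1)
    raise NotImplementedError(op)

def static_tail_analysis(nodes):
    frontier = deque(n for n in nodes if not n.parents)
    tails = {}
    while frontier:
        nxt = frontier.popleft()
        if nxt.parents and any(p.name not in tails for p in nxt.parents):
            frontier.append(nxt)      # a parent is not resolved yet; revisit later
            continue
        if nxt.parents:
            tails[nxt.name] = compute_gga(nxt.op, [tails[p.name] for p in nxt.parents])
        else:
            tails[nxt.name] = nxt.tail
        frontier.extend(nxt.children)
    return tails

# Exponential-like leaves x, y; z = x + y; w = 2 * z.
x = Node("x", tail=(0, 1.0, 1))
y = Node("y", tail=(0, 2.0, 1))
z = Node("z", op="add_eq1", parents=[x, y])
w = Node("w", op="scale2", parents=[z])
print(static_tail_analysis([x, y, z, w]))
# → {'x': (0, 1.0, 1), 'y': (0, 2.0, 1), 'z': (1, 1.0, 1), 'w': (1, 0.5, 1)}
```

The readiness check makes the pass robust to a parent being reached after one of its siblings; in a DAG seeded from the root nodes this always terminates.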
Let $X \equiv (\nu, \sigma, \rho)$ and let $0 < \epsilon \ll 1$ denote a small parameter such that tails $e^{-x^{\epsilon}}$ are deemed to be "very heavy" (we chose $\epsilon = 0.1$). Inspired by ideas from implicit renewal theory [5], our candidate distributions are as follows.

$(\rho \leq 0)$ If $\rho \leq 0$, then $p_X(x) \sim cx^{-|\nu|}$. One such density is that of the Student $t$ distribution with $|\nu| - 1$ degrees of freedom when $\nu < -1$ (generate $X \sim \mathrm{StudentT}(|\nu| - 1)$).

$(\rho > \epsilon)$ For moderately sized $\rho > 0$, we use the symmetrization of the generalized Gamma density (2).

$(\rho \leq \epsilon)$ If $X \equiv (\nu, \sigma, \rho)$ where $\rho$ is small, then $X$ will exhibit much heavier tails, and the generalized Gamma distribution used in the preceding case will become challenging to sample from. In these cases, we expect that the tail of $X$ should be well represented by a power law. The generalized Gamma density (Equation (2)) satisfies $\mathbb{E}X^{r} = \sigma^{-r / \rho}\Gamma \left(\frac{\nu + 1 + r}{\rho}\right) / \Gamma \left(\frac{\nu + 1}{\rho}\right)$ for $r > 0$. Let $\alpha > 0$ be such that $\mathbb{E}X^{\alpha} = 2$. By Markov's inequality, the tail of $X$ satisfies $\mathbb{P}(X > x) \leq 2x^{-\alpha}$. Therefore, we can represent tails of this form by the Student $t$ distribution with $\alpha$ degrees of freedom, whose density decays like $x^{-(\alpha + 1)}$ (generate $X \sim \mathrm{StudentT}(\alpha)$).

# 3.3 Bulk correction by Lipschitz mapping

While a representative distribution will exhibit the desired tails, the target distribution's bulk may be very different from a generalized Gamma and result in poor distributional approximation. To address this, we propose splicing together the tails from a generalized Gamma with a flexible density approximation for the bulk.
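The three-way case analysis of Section 3.2 can be sketched with scipy (an illustrative sketch that ignores the symmetrization step; `representative`, `EPS`, and the mapping onto scipy's `gengamma` parameterization are our own choices):

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq
from scipy.special import gammaln

EPS = 0.1  # threshold below which e^{-x^rho} tails are treated as "very heavy"

def representative(nu, sigma, rho):
    """Pick a sampleable distribution whose tail lies in the class (nu, sigma, rho)."""
    if rho <= 0:
        # Regularly varying tail p(x) ~ c x^{-|nu|}: Student t with |nu| - 1 dof.
        return stats.t(df=abs(nu) - 1)
    if rho > EPS:
        # Moderate rho: the generalized Gamma itself. scipy's gengamma has density
        # f(x; a, c) ∝ x^{ac-1} e^{-x^c}, so a = (nu+1)/rho, c = rho, and
        # scale = sigma^{-1/rho} recovers the e^{-sigma x^rho} factor.
        return stats.gengamma(a=(nu + 1) / rho, c=rho, scale=sigma ** (-1.0 / rho))
    # Tiny rho: match a power law. Choose alpha with E X^alpha = 2, so that by
    # Markov's inequality P(X > x) <= 2 x^{-alpha}; use Student t with alpha dof.
    def log_moment(r):
        return (-r / rho) * np.log(sigma) + gammaln((nu + 1 + r) / rho) - gammaln((nu + 1) / rho)
    alpha = brentq(lambda r: log_moment(r) - np.log(2.0), 1e-6, 50.0)
    return stats.t(df=alpha)
```

For instance, `representative(-3, 1.0, 0.0)` returns a Student $t$ with 2 degrees of freedom, matching the first case above.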
While many combinations are possible, in this work we rely on the Lipschitz operation in the GGA (Theorem 2) and post-compose neural spline flows [15] (which are identity functions outside of a bounded interval and hence 1-Lipschitz) after properly initialized generalized Gamma distributions. Optimizing the parameters of the flow results in good bulk approximation while simultaneously preserving the tail correctness guarantees attained by the GGA.

![](images/0a5231cfc290e8a5c0a6282bced92b0cf7a4cd6697ec6309fe017c6f6286452f.jpg)
Figure 2: The candidate distribution chosen by the GGA calibrates tails to the target, but with incorrect bulk. A Lipschitz normalizing flow corrects the bulk (i) without changing the tail behaviour, as seen by the parallel tail asymptotics (black dashed lines) in (ii).

![](images/08c51de1b729ef368964e494587bfbf35cd3470f0065e0b871df50e1c76fc9d9.jpg)

Example 1. Let $A \in \mathbb{R}^{k \times k}$, $x, y \in \mathbb{R}^k$, with $x_i, y_i, A_{ij} \stackrel{\mathrm{iid}}{\sim} \mathcal{N}(-1,1)$. The distribution of $x^\top Ay = \sum_{i,j} x_i A_{ij} y_j$ is a convolution of normal-powers [21] and lacks a closed-form expression. Using the GGA (Table 1), one can compute its tail parameters to be $\left( \frac{k}{2} - 1, \frac{3}{2}, \frac{2}{3} \right)$. The candidate given by the GGA representative distribution (Section 3.2) is a generalized Gamma distribution with correct tail behaviour, but is a poor approximation otherwise. A learnable Lipschitz bijection is optimized to correct the bulk approximation (Figure 2(i)). From the Lipschitz property, the slope of the tail asymptotics in log-log scale remains the same before and after applying the flow correction (Figure 2(ii)): the tails are guaranteed to remain calibrated.

Example 2. Consider $\sum_{i=1}^{4} X_i^2$ where $X_i \sim \mathrm{StudentT}(i)$. While we are not aware of a closed-form expression for the density, this example is within the scope of our GGA.
Empirical results illustrate that our method (Figure 3(i)) accurately models both the bulk and the tail, whereas Gaussian-based Lipschitz flows (Figure 3(ii)) inappropriately impose tails which decay too rapidly.

# 4 Experiments

We now demonstrate that GGA-based density estimation yields improvements in tail estimation across several metrics. Our experiments consider normalizing flows initialized from (i) the parametric family defined in Section 3.2 against (ii) a normal distribution (the status quo). To further contrast the individual effect of using a GGA base distribution over standard normals against more expressive pushforward maps [15], we also report ablation results where normalizing flows are replaced by affine transforms, as originally proposed in [30]. All experiments are repeated for 100 trials, trained to convergence using the Adam optimizer with a manually tuned learning rate. Additional details are available in Appendix D. All target distributions in this section are expressed as generative PPL programs: Cauchy using a reciprocal normal; Chi2 (chi-squared) using a sum of squared normals; IG (Inverse Gamma) using a reciprocal exponential; normal using a sum of normals; and StudentT using a normal and Cauchy ratio. Doing so tasks the static analyzer with inferring the target's tails and makes the analysis non-trivial.

Our results in the following tables share a consistent narrative where a GGA base distribution rarely hurts and can significantly help with heavy-tailed targets. Standard evaluation metrics such as negative cross-entropy, ELBO, or importance-weighted autoencoder bounds [6] do not evaluate the quality of tail approximations.
Instead, we consider diagnostics which do: namely, an estimated tail exponent

![](images/9d647a4c114c0da555d377c62ea895935885bebbb8f6ccfe4dc0400e571fa015.jpg)
Figure 3: Q-Q plots of density approximations of a heavy-tailed target $\sum_{i=1}^{4} X_i^2$, where $X_i \sim \mathrm{StudentT}(i)$, initialized by our GGA candidate (i) and the Gaussian distribution (ii). While the expressive modeling capability of flows enables good approximation of the distribution bulk, Lipschitz transformations of Gaussians inevitably impose miscalibrated squared-exponential tails which are not sufficiently heavy, as evidenced in (ii).

![](images/a06555077febac145acbde84cd5854d4c3d7d9a41be4d22ec1886d01b45484d4.jpg)
+ +Table 2: Mean and standard errors (100 trials) of tail parameters $\widehat{\alpha }$ (smaller for heavier tails) for various density estimators and targets. + +
| Target | α | Cauchy (α = 2) Flow | GGA Flow | Normal (α = ∞) Flow |
| --- | --- | --- | --- | --- |
| Cauchy | 2 | 2.1 (0.03) | 2.1 (0.07) | 7.7 (2.5) |
| IG | 2 | 1.9 (0.03) | 1.9 (0.092) | 7.3 (1.7) |
| StudentT | 3 | 2.0 (0.06) | 3.3 (0.45) | 7.7 (2.3) |
| Chi2 | ∞ | 2.1 (0.07) | 5.2 (1.6) | 6.8 (2.4) |
| Normal | ∞ | 2.9 (0.6) | 8.2 (4.0) | 8.4 (3.5) |
Table 3: Mean and standard errors of log-likelihoods $E_{p} \log q(X)$ for various density estimators and targets. While larger values imply a better overall approximation (row max bolded), log-likelihood is dominated by bulk approximation, so these results show that our method (GGA Flow) does not sacrifice bulk approximation quality.
| Target | α | Cauchy (α = 2) Flow | GGA Flow | Normal (α = ∞) Flow |
| --- | --- | --- | --- | --- |
| Cauchy | 2 | **-2.53 (0.05)** | -3.22 (0.06) | -1.2 × 10³ (6 × 10³) |
| IG | 2 | -3.55 (0.08) | **-3.26 (0.05)** | -2.6 × 10⁴ (6 × 10³) |
| StudentT | 3 | **-2.12 (0.03)** | -2.75 (0.04) | -2.92 (0.47) |
| Chi2 | ∞ | -2.30 (0.05) | **-2.03 (0.04)** | -2.24 (0.04) |
| Normal | ∞ | -1.53 (0.03) | **-1.41 (0.02)** | -1.42 (0.02) |
Variational Inference. For VI, the bulk is corrected through the ELBO optimization objective $E_{q}\log \frac{p(X)}{q(X)} \approx \frac{1}{N}\sum_{i=1}^{N}\log \frac{p(x_{i})}{q(x_{i})}$, $x_{i} \sim q$. Since the density $p$ must also be evaluated, for simplicity, the experiments in Table 4 use closed-form marginalized densities for the targets. The overall trends again show that the GGA yields consistent improvements; the $\hat{k}$ diagnostic [60] indicates that VI succeeds ($\hat{k} \leq 0.2$) when a GGA with appropriately matched tails is used and fails ($\hat{k} > 1$) when Gaussian tails are erroneously imposed.

Table 4: Pareto $\hat{k}$ diagnostic [60] to assess goodness of fit for VI (mean across 100 trials, standard deviation in parentheses) on targets of varying tail index (smaller $\alpha$ = heavier tails). A value $> 0.2$ is interpreted as potentially problematic, so only values not exceeding it are bolded.
| Target | α | Normal Affine | Normal Flow | GGA Affine | GGA Flow |
| --- | --- | --- | --- | --- | --- |
| Cauchy | α = 2 | 0.62 (0.26) | 0.22 (0.059) | 0.68 (0.038) | **0.091 (0.04)** |
| IG | α = 2 | 8.6 (1.8) | 8.2 (2.3) | 2.0 (0.4) | 2.9 (0.71) |
| StudentT | α = 3 | 1.2 (0.16) | 1.0 (0.43) | 1.5 (0.082) | 1.3 (0.097) |
| Chi2 | α = ∞ | 0.57 (0.081) | 0.61 (0.067) | **0.0093 (0.0067)** | **0.089 (0.044)** |
| Normal | α = ∞ | 0.53 (0.17) | 0.21 (0.067) | 0.4 (0.086) | **0.2 (0.089)** |
Bayesian linear regression. As a practical example of VI applied to posterior distributions, we consider the setting of one-dimensional Bayesian linear regression (BLR) with conjugate priors, defined by the likelihood $y|X,\beta ,\sigma \sim \mathcal{N}(X\beta ,\sigma^2)$ with a Gaussian prior $\beta |\sigma^2 \sim \mathcal{N}(0,\sigma^2)$ on the coefficients, and an inverse-Gamma prior with parameters $a_0$ and $b_{0}$ on the residual variance $\sigma^2$. The posterior distribution for $\beta$ conditioned on $\sigma^2$ and $X,y$ is Gaussian. However, conditional on the pair $(X,y)$, $\sigma^2$ is inverse-Gamma distributed with parameters $a_0 + \frac{n}{2}$ and $b_{0} + \frac{1}{2} (y^{\top}y - \mu^{\top}\Sigma \mu)$, where $\mu = \Sigma^{-1}X^{\top}X\hat{\beta}$ for $\hat{\beta}$ the least-squares estimator, and $\Sigma = X^{\top}X + I$. Since $\sigma^2$ is positive, it is typical for PPL implementations to apply an exponential transformation. Hence, a Lipschitz normalising flow starting from a Gaussian initialization will inappropriately approximate the inverse-Gamma distributed $p(\sigma^2 |X,y)$ with log-normal tails. On the other hand, Lipschitz flows starting from a GGA reference distribution will exhibit the correct tails. We assess this discrepancy in Figure 4 under an affine transformation on four subsampled datasets: super (superconductor critical temperature prediction dataset [23] with $n = 256$ and $d = 154$); who (life expectancy data from the World Health Organisation in the year 2013 [41] with $n = 130$, $d = 18$); air (air quality data [14] with $n = 6941$, $d = 11$); and blog (blog feedback prediction dataset [7] with $n = 1024$, $d = 280$). In Figure 4(i), the GGA-based method fits the targets closely, while in Figure 4(ii), the standard Gaussian approach fails to capture the tail behaviour.
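The conjugate-posterior formulas above can be checked numerically (a sketch on synthetic data; the hyperparameters and data here are illustrative, not one of the paper's datasets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative only).
n, d = 200, 5
X = rng.normal(size=(n, d))
beta_true = rng.normal(size=d)
y = X @ beta_true + 0.5 * rng.normal(size=n)

a0, b0 = 2.0, 2.0                                 # inverse-Gamma prior hyperparameters

# Conjugate posterior for sigma^2 | X, y, following the formulas in the text:
Sigma = X.T @ X + np.eye(d)
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares estimator
mu = np.linalg.solve(Sigma, X.T @ X @ beta_hat)   # mu = Sigma^{-1} X^T X beta_hat
a_n = a0 + n / 2
b_n = b0 + 0.5 * (y @ y - mu @ Sigma @ mu)

# Since X^T X beta_hat = X^T y at the least-squares solution, mu coincides with
# the usual ridge-style posterior mean Sigma^{-1} X^T y:
assert np.allclose(mu, np.linalg.solve(Sigma, X.T @ y))
```

Because $X \Sigma^{-1} X^{\top}$ has spectral norm below one, the data term $y^{\top}y - \mu^{\top}\Sigma\mu$ is nonnegative, so $b_n \geq b_0$ always holds in this sketch.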
![](images/e3f832c33ddbbc8cffd777a0cc01f8a822a26c950193d21dbdc5ff813223d15e.jpg)
(i)

![](images/7fb59283109f0735b8ec2bc0ae8f4f0c394c3067f6905c3fd173d8d31c904379.jpg)
(ii)
Figure 4: Estimated densities for the posterior distribution of $\sigma^2$ in Bayesian linear regression under optimised exponential + affine transformations from (i) the GGA reference, and (ii) the Gaussian reference.

Invariant distribution of SGD. For inputs $X$ and labels $Y$ from a dataset $\mathcal{D}$, the least-squares estimator for linear regression satisfies $\hat{\beta} = \arg\min_{\beta} \frac{1}{2} \mathbb{E}_{X,Y \sim \mathcal{D}} (Y - X\beta)^2$. To solve for this estimator, one can apply stochastic gradient descent (SGD), sampling over independent $X_k, Y_k \sim \mathcal{D}$ to obtain the sequence of iterates

$$
\beta_{k+1} = \left(I - \delta X_{k} X_{k}^{\top}\right) \beta_{k} + \delta Y_{k} X_{k}
$$

for a step size $\delta > 0$. For large $\delta$, the iterates $\beta_{k}$ typically exhibit heavy-tailed fluctuations [24]. In this regard, this sequence of iterates has been used as a simple model for more general stochastic optimization dynamics [22, 24]. In particular, generalization performance has been tied to the heaviness of the tails in the iterates [47]. Here, we use our algebra to predict the tail behaviour in a simple one-dimensional setting where $X_{k} \sim \mathcal{N}(0, \sigma^{2})$ and $Y_{k} \sim \mathcal{N}(0, 1)$. From classical theory [5], it is known that $\beta_{k}$ converges in distribution to a power law with tail exponent $\alpha > 0$ satisfying $\mathbb{E}|1 - \delta X_k^2|^{\alpha} = 1$. In Figure 5, we plot the density of the representative for $\beta_{10^4}$ obtained using our algebra against a kernel density estimate using $10^{6}$ samples when $\sigma \in \{0.4, 0.5, 0.6\}$ and $\delta = 2$. In all cases, the density obtained from the algebra provides a surprisingly close fit.
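The fixed-point condition $\mathbb{E}|1 - \delta X_k^2|^{\alpha} = 1$ and the recursion itself can be sketched numerically as follows (an illustrative Monte Carlo sketch; the sample sizes, bracket, and seed are arbitrary choices of ours):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
sigma, delta = 0.5, 2.0

# Solve E|1 - delta * X^2|^alpha = 1 for the stationary tail exponent alpha,
# with X ~ N(0, sigma^2), using a fixed Monte Carlo sample for the expectation.
x = sigma * rng.normal(size=1_000_000)
m = np.abs(1.0 - delta * x**2)

def g(alpha):
    return np.mean(m**alpha) - 1.0

# A nontrivial root exists when E log|1 - delta*X^2| < 0 (contraction on average)
# while large deviations above 1 occur; bracket it away from the trivial root at 0.
assert np.mean(np.log(m)) < 0 and g(50.0) > 0
alpha = brentq(g, 1e-3, 50.0)

# Simulate the recursion beta_{k+1} = (1 - delta*X_k^2) beta_k + delta*Y_k*X_k.
beta = 0.0
for _ in range(10_000):
    xk, yk = sigma * rng.normal(), rng.normal()
    beta = (1.0 - delta * xk**2) * beta + delta * yk * xk
```

A histogram of many such trajectories then provides the kernel density estimate against which the GGA representative is compared in Figure 5.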
![](images/70ecaccad0e863cda8c72c447d375375ec2372e3b00ba6f34909a13f2e338bd5.jpg)
Figure 5: Kernel density estimate of iterates of SGD (blue) vs. GGA-predicted tail behaviour (orange)

![](images/d5a4501da1d35b736f7ec1c385d4285bb7ed2a46b0865aa888a21ac20059849a.jpg)

![](images/db658fb693c2d6369370ef7fbffc25079d6c441f1d69f95293df10ed507b90b.jpg)

# 5 Related Work

Heavy tails and probabilistic machine learning. For studying heavy tails, methods based on subexponential distributions [18] and generalized Pareto distributions (GPDs) (or, equivalently, regularly varying distributions [49]) have received significant attention historically. For example, [35] presents closure theorems for regularly varying distributions which are special cases of Proposition 1 and Theorem 2. Heavy tails often have a profound impact on probabilistic machine learning methods: in particular, the observation that density ratios $\frac{p(x)}{q(x)}$ tend to be heavy-tailed has resulted in new methods for smoothing importance sampling [53], adaptively modifying divergences [55], and diagnosing VI through the Pareto $\hat{k}$ diagnostic [60]. These works are complementary to our paper, and our reported results include $\hat{k}$ diagnostics for VI and $\hat{\alpha}$ tail index estimates based on the GPD.

Our work considers heavy-tailed targets $p(x)$, which is the same setting as [25, 33]. Whereas those respective works lump the tail parameter in as another variational parameter and may be more generally applicable, the GGA may be applied before samples are drawn and leads to perfectly calibrated tails when applicable.

Probabilistic programming. PPLs can be broadly characterized by the inference algorithms they support, such as Gibbs sampling over Bayes nets [13, 48], stochastic control flow [19, 58], deep stochastic VI [4, 52], or Hamiltonian Monte Carlo [8, 59].
Our implementation target, beanmachine [50], is a declarative PPL selected due to the availability of a PPL compiler and its support for static analysis plugins. Similar to [4, 46], it uses PyTorch [40] for GPU tensors and automatic differentiation. Synthesizing an approximating distribution during PPL compilation (Section 3) is also performed in the Stan language by [30], with normalizing flow extensions in [57]. We compare directly against these related density approximators in Section 4. + +Static analysis. There is a long history of formal methods and probabilistic programming in the literature [26, 29], with much of the research [10] concerned with defining formal semantics and establishing invariants [54] (see [3] for a recent review). Static analysis uses the abstract syntax tree (AST) representation of a program in order to compute invariants (e.g., the return type of a function, the number of classes implementing a trait) without executing the underlying program. It has traditionally been applied in the context of formalizing semantics [29], and has been used to verify probabilistic programs by ensuring termination and bounding random values [44]. As dynamic analysis in a PPL is less reliable due to non-determinism, static analysis techniques for PPLs become essential. As recent examples, [32] proposes a static analyzer for the Pyro PPL [4] to verify distribution supports and avoid -Inf log probabilities. More relevant to our work are applications of static analysis to improve inference. [38] and [12] both employ static analysis to inform the choice of inference method. However, neither work accounts for heavy tails, whereas the primary goal of GGA-based analysis is to ensure tails are properly modelled. + +# 6 Conclusion + +In this work, we have proposed a novel systematic approach for tail-inferential static PPL analysis.
We have done this by defining a heavy-tailed algebra, and by implementing a three-parameter generalized Gamma algebra into a PPL compiler. Initial results are promising, showing that improved inference with simpler approximation families is possible when combined with tail metadata. While already useful, the generalized Gamma algebra and its implementation currently have some notable limitations: + +- The most significant omission from the algebra is the classification of log-normal tails. Addition may be treated using [20], but multiplication with log-normal tails remains elusive. +- Since the algebra assumes independence, handling of dependencies between defined random variables must be conducted externally. This can be addressed using a symbolic package to decompose complex expressions into operations on independent random variables. +- Scale coefficients $\sigma$ for conditional distributions may often be inexact, as exact marginalization in general is NP-hard [28]. Treatment of disintegration using symbolic manipulations is a significant open problem, with some basic developments [9, 45]. +- Compile-time static analysis is only applicable to fixed model structures. Control flow, open-universe models [36], and the PPLs that support them [4] are an important future research direction. + +# References + +[1] Søren Asmussen and Hansjörg Albrecher. *Ruin probabilities*, volume 14. World Scientific, 2010. +[2] Søren Asmussen, Enkelejd Hashorva, Patrick J Laub, and Thomas Taimre. Tail asymptotics of light-tailed Weibull-like sums. Probability and Mathematical Statistics, 37(2):235-256, 2017. +[3] Ryan Bernstein. Static analysis for probabilistic programs. arXiv preprint arXiv:1909.05076, 2019. +[4] Eli Bingham, Jonathan P Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D Goodman. Pyro: Deep universal probabilistic programming. The Journal of Machine Learning Research, 20(1):973-978, 2019.
+[5] Dariusz Buraczewski, Ewa Damek, and Thomas Mikosch. Stochastic models with power-law tails. Springer Series in Operations Research and Financial Engineering. Springer, Cham, 2016. +[6] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015. +[7] Krisztian Buza. Feedback prediction for blogs. In Data analysis, machine learning and knowledge discovery, pages 145-152. Springer, 2013. +[8] Bob Carpenter, Andrew Gelman, Matthew D Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. Stan: A probabilistic programming language. Journal of Statistical Software, 76(1), 2017. +[9] Kenta Cho and Bart Jacobs. Disintegration and Bayesian inversion via string diagrams. Mathematical Structures in Computer Science, 29(7):938-971, 2019. +[10] Guillaume Claret, Sriram K Rajamani, Aditya V Nori, Andrew D Gordon, and Johannes Borgström. Bayesian inference using data flow analysis. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, pages 92-102, 2013. +[11] Aaron Clauset, Cosma Rohilla Shalizi, and Mark EJ Newman. Power-law distributions in empirical data. SIAM Review, 51(4):661-703, 2009. +[12] Marco F Cusumano-Towner, Feras A Saad, Alexander K Lew, and Vikash K Mansinghka. Gen: a general-purpose probabilistic programming system with programmable inference. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 221-236, 2019. +[13] Perry de Valpine, Daniel Turek, Christopher J Paciorek, Clifford Anderson-Bergman, Duncan Temple Lang, and Rastislav Bodik. Programming with models: writing statistical algorithms for general model structures with NIMBLE. Journal of Computational and Graphical Statistics, 26(2):403-413, 2017. + +[14] Saverio De Vito, Ettore Massera, Marco Piga, Luca Martinotto, and Girolamo Di Francia.
On field calibration of an electronic nose for benzene estimation in an urban pollution monitoring scenario. Sensors and Actuators B: Chemical, 129(2):750-757, 2008. +[15] Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. Advances in neural information processing systems, 32, 2019. +[16] SM Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Geoffrey E Hinton, et al. Attend, infer, repeat: Fast scene understanding with generative models. Advances in Neural Information Processing Systems, 29, 2016. +[17] Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian monte carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123-214, 2011. +[18] Charles M Goldie and Claudia Klüppelberg. Subexponential distributions. A practical guide to heavy tails: statistical techniques and applications, pages 435-459, 1998. +[19] Noah Goodman, Vikash Mansinghka, Daniel M Roy, Keith Bonawitz, and Joshua B Tenenbaum. Church: a language for generative models. arXiv preprint arXiv:1206.3255, 2012. +[20] Archil Gulisashvili and Peter Tankov. Tail behavior of sums and differences of log-normal random variables. Bernoulli, 22(1):444-493, 2016. +[21] Rameshwar D Gupta and Ramesh C Gupta. Analyzing skewed data by power normal model. Test, 17(1):197-210, 2008. +[22] Mert Gurbuzbalaban, Umut Simsekli, and Lingjiong Zhu. The heavy-tail phenomenon in SGD. In International Conference on Machine Learning, pages 3964-3975. PMLR, 2021. +[23] Kam Hamidieh. A data-driven statistical model for predicting the critical temperature of a superconductor. Computational Materials Science, 154:346-354, 2018. +[24] Liam Hodgkinson and Michael W. Mahoney. Multiplicative noise and heavy tails in stochastic optimization. In International Conference on Machine Learning, pages 4262-4274. PMLR, 2021. +[25] Priyank Jaini, Ivan Kobyzev, Yaoliang Yu, and Marcus Brubaker. 
Tails of Lipschitz triangular flows. In International Conference on Machine Learning, pages 4673-4681. PMLR, 2020. +[26] Claire Jones and Gordon D Plotkin. A probabilistic powerdomain of evaluations. In Proceedings. Fourth Annual Symposium on Logic in Computer Science, pages 186-187. IEEE Computer Society, 1989. +[27] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +[28] Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques. MIT press, 2009. +[29] Dexter Kozen. Semantics of probabilistic programs. In 20th Annual Symposium on Foundations of Computer Science (sfcs 1979), pages 101-114. IEEE, 1979. +[30] Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M Blei. Automatic differentiation variational inference. Journal of machine learning research, 2017. +[31] Michel Ledoux. The concentration of measure phenomenon. American Mathematical Soc., 2001. +[32] Wonyeol Lee, Hangyeol Yu, Xavier Rival, and Hongseok Yang. Towards verified stochastic variational inference for probabilistic programs. Proceedings of the ACM on Programming Languages, 4(POPL):1-33, 2019. +[33] Feynman Liang, Liam Hodgkinson, and Michael W. Mahoney. Fat-tailed variational inference with anisotropic tail adaptive flows, 2022. + +[34] Arakaparampil M Mathai, Ram Kishore Saxena, and Hans J Haubold. The $H$ -function: theory and applications. Springer Science & Business Media, 2009. +[35] T Mikosch. Regular variation subexponentiality and their applications in probability theory, 1999. +[36] Brian Milch and Stuart Russell. Extending Bayesian networks to the open-universe case. Heuristics, Probability and Causality: A Tribute to Judea Pearl. College Publications, 2010. +[37] J. Nair, A. Wierman, and B. Zwart. The Fundamentals of Heavy Tails: Properties, Emergence, and Estimation. Cambridge University Press, 2022. +[38] Aditya Nori, Chung-Kil Hur, Sriram Rajamani, and Selva Samuel. 
R2: An efficient MCMC sampler for probabilistic programs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28, 2014. +[39] George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, 22(57):1-64, 2021. +[40] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019. +[41] Kumar Rajarshi. Life expectancy (who), 2018. +[42] Gareth O Roberts and Richard L Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, pages 341-363, 1996. +[43] Barry K Rosen, Mark N Wegman, and F Kenneth Zadeck. Global value numbers and redundant computations. In Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, pages 12-27, 1988. +[44] Sriram Sankaranarayanan, Aleksandar Chakarov, and Sumit Gulwani. Static analysis for probabilistic programs: inferring whole program properties from finitely many paths. In Proceedings of the 34th ACM SIGPLAN conference on Programming language design and implementation, pages 447-458, 2013. +[45] Chung-chieh Shan and Norman Ramsey. Exact Bayesian inference by symbolic disintegration. In Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, pages 130–144, 2017. +[46] N. Siddharth, Brooks Paige, Jan-Willem van de Meent, Alban Desmaison, Noah D. Goodman, Pushmeet Kohli, Frank Wood, and Philip Torr. Learning disentangled representations with semi-supervised deep generative models. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5927-5937. 
Curran Associates, Inc., 2017. +[47] Umut Simsekli, Levent Sagun, and Mert Gurbuzbalaban. A tail-index analysis of stochastic gradient noise in deep neural networks. In International Conference on Machine Learning, pages 5827-5837. PMLR, 2019. +[48] David Spiegelhalter, Andrew Thomas, Nicky Best, and Wally Gilks. BUGS 0.5: Bayesian inference using Gibbs sampling manual (version ii). MRC Biostatistics Unit, Institute of Public Health, Cambridge, UK, pages 1-59, 1996. +[49] Nader Tajvidi. Confidence intervals and accuracy estimation for heavy-tailed generalized Pareto distributions. Extremes, 6(2):111-123, 2003. +[50] Nazanin Tehrani, Nimar S Arora, Yucen Lily Li, Kinjal Divesh Shah, David Noursi, Michael Tingley, Narjes Torabi, Eric Lippert, Erik Meijer, et al. Bean machine: A declarative probabilistic programming language for efficient programmable inference. In International Conference on Probabilistic Graphical Models. PMLR, 2020. + +[51] David Tolpin, Jan-Willem van de Meent, Hongseok Yang, and Frank Wood. Design and implementation of probabilistic programming language anglican. In Proceedings of the 28th Symposium on the Implementation and Application of Functional programming Languages, pages 1-12, 2016. +[52] Dustin Tran, Matthew W Hoffman, Dave Moore, Christopher Suter, Srinivas Vasudevan, and Alexey Radul. Simple, distributed, and accelerated probabilistic programming. Advances in Neural Information Processing Systems, 31, 2018. +[53] Aki Vehtari, Daniel Simpson, Andrew Gelman, Yuling Yao, and Jonah Gabry. Pareto smoothed importance sampling. arXiv preprint arXiv:1507.02646, 2015. +[54] Di Wang, Jan Hoffmann, and Thomas Reps. Pmaf: an algebraic framework for static analysis of probabilistic programs. ACM SIGPLAN Notices, 53(4):513-528, 2018. +[55] Dilin Wang, Hao Liu, and Qiang Liu. Variational inference with tail-adaptive f-divergence. Advances in Neural Information Processing Systems, 31, 2018. +[56] George Neville Watson. 
A treatise on the theory of Bessel functions. Cambridge University Press, 1995. +[57] Stefan Webb, Jonathan P. Chen, Martin Jankowiak, and Noah Goodman. Improving automated variational inference with normalizing flows. In ICML Workshop on Automated Machine Learning, 2019. +[58] David Wingate, Andreas Stuhlmüller, and Noah Goodman. Lightweight implementations of probabilistic programming languages via transformational compilation. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 770-778. JMLR Workshop and Conference Proceedings, 2011. +[59] Kai Xu, Hong Ge, Will Tebbutt, Mohamed Tarek, Martin Trapp, and Zoubin Ghahramani. AdvancedHMC.jl: A robust, modular and efficient implementation of advanced HMC algorithms. In Symposium on Advances in Approximate Bayesian Inference, pages 1-10. PMLR, 2020. +[60] Yuling Yao, Aki Vehtari, Daniel Simpson, and Andrew Gelman. Yes, but did it work? Evaluating variational inference. In International Conference on Machine Learning, pages 5581-5590. PMLR, 2018.
\ No newline at end of file diff --git a/aheavytailedalgebraforprobabilisticprogramming/images.zip b/aheavytailedalgebraforprobabilisticprogramming/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a66e276186affae306c2c1668a6c46fae631d446 --- /dev/null +++ b/aheavytailedalgebraforprobabilisticprogramming/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e961d48fff6ea4c1bdd2e68b10490049039a3b9ff19a0af32452bd668f82aa94 +size 373060 diff --git a/aheavytailedalgebraforprobabilisticprogramming/layout.json b/aheavytailedalgebraforprobabilisticprogramming/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..fe9e305af2499a1f9835164578d58fd43d12aa28 --- /dev/null +++ b/aheavytailedalgebraforprobabilisticprogramming/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b98f88fa634cb94d438dcf1c8374ac249e3a1155a14a318955ff55c12e4f18f +size 452815 diff --git a/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/b94b1b47-f982-43b7-8459-04ebf18c440b_content_list.json b/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/b94b1b47-f982-43b7-8459-04ebf18c440b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..add77698632756228a92cc374cbea23c19b20a86 --- /dev/null +++ b/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/b94b1b47-f982-43b7-8459-04ebf18c440b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:555c8ab8bfdf3f11cb1852d35d2e71a668b39243e3ece741cc94c6a93c6a2d9c +size 89606 diff --git a/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/b94b1b47-f982-43b7-8459-04ebf18c440b_model.json b/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/b94b1b47-f982-43b7-8459-04ebf18c440b_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..b3e088da4214d81c8094c18791fff5191703ee20 --- /dev/null +++ b/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/b94b1b47-f982-43b7-8459-04ebf18c440b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8641859e83f44204af4d9924d74f94f1ac3e926ef8592b4d56d2740ba204c2bf +size 113083 diff --git a/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/b94b1b47-f982-43b7-8459-04ebf18c440b_origin.pdf b/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/b94b1b47-f982-43b7-8459-04ebf18c440b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..590c9e2139eeff80dfdcd55e37c0fae32fbf2d67 --- /dev/null +++ b/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/b94b1b47-f982-43b7-8459-04ebf18c440b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d9042c980554d5844fc7c0cb29a86710b655982cf7c37056069f39cfecce201 +size 960124 diff --git a/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/full.md b/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f27ac76de72a07ed46c50906ddadda2bb328080e --- /dev/null +++ b/ahierarchicalspatialtransformerformassivepointsamplesincontinuousspace/full.md @@ -0,0 +1,334 @@ +# A Hierarchical Spatial Transformer for Massive Point Samples in Continuous Space + +Wenchong He Zhe Jiang* Tingsong Xiao Zelin Xu Shigang Chen + +Department of Computer & Information Science & Engineering University of Florida + +{whe2, zhe.jiang, xiaotingsong, zelin.xu, sgchen}@ufl.edu + +Ronald Fick Miles Medina Christine Angelini + +Center for Coastal Solutions + +University of Florida + +{rfick, miles.medina}@ufl.edu, christine.angelini@essie.ufl.edu + +# Abstract + +Transformers are widely used deep learning architectures. 
Existing transformers are mostly designed for sequences (texts or time series), images or videos, and graphs. This paper proposes a novel transformer model for massive (up to a million) point samples in continuous space. Such data are ubiquitous in environmental sciences (e.g., sensor observations), numerical simulations (e.g., particle-laden flow, astrophysics), and location-based services (e.g., POIs and trajectories). However, designing a transformer for massive spatial points is non-trivial due to several challenges, including implicit long-range and multi-scale dependency on irregular points in continuous space, a non-uniform point distribution, the potential high computational costs of calculating all-pair attention across massive points, and the risks of over-confident predictions due to varying point density. To address these challenges, we propose a new hierarchical spatial transformer model, which includes multi-resolution representation learning within a quad-tree hierarchy and efficient spatial attention via coarse approximation. We also design an uncertainty quantification branch to estimate prediction confidence related to input feature noise and point sparsity. We provide a theoretical analysis of computational time complexity and memory costs. Extensive experiments on both real-world and synthetic datasets show that our method outperforms multiple baselines in prediction accuracy and our model can scale up to one million points on one NVIDIA A100 GPU. The code is available at https://github.com/spatialdatasciencegroup/HST + +# 1 Introduction + +Transformers are widely used deep learning architectures. Existing transformers are largely designed for sequences (texts or time series), images or videos, and graphs [1, 2, 3, 4, 5, 6, 7]. This paper proposes a novel transformer model for massive (up to a million) points in continuous space.
Given a set of point samples in continuous space with explanatory features and target response variables, the problem is to learn the spatial latent representation of point samples and to infer the target variable at any new point location. + +Learning transformers for continuous-space points has broad applications. In environmental sciences, researchers are interested in fusing remote sensing spectra with in-situ sensor observations at irregular sample locations to monitor coastal water quality and air quality [8, 9, 10]. In scientific computing, researchers learn a neural network surrogate to speed up numerical simulations of particle-laden flow [11] or astrophysics [12]. For example, in cohesive sediment transport modeling, a transformer surrogate can predict the force and torque of a large number of particles dispersed in the fluid and simulate the transport of suspended sediment [13, 14]. In location-based services, people are interested in analyzing massive spatial point data (e.g., POIs, trajectories) to recommend new locations [15, 16]. + +However, the problem poses several technical challenges. First, implicit long-range and multi-scale dependency exists on irregular points in continuous space. For example, in coastal water quality monitoring, algae blooms in different areas are interrelated following ocean currents and sea surface wind (e.g., Lagrangian particle tracking). Second, point samples can be non-uniformly distributed with varying densities. Some areas can be covered with sufficient point samples while others may have only very sparse samples. Third, because of the varying sample density as well as feature noise and ambiguity, model inference at different locations may exhibit a different degree of confidence. Ignoring this risks over-confident predictions. Finally, learning complex spatial dependency (e.g., all-pair self-attention) across massive (millions) points has a high computational cost.
+ +To address these challenges, we propose a new hierarchical spatial transformer model, which includes multi-resolution representation learning within a quad-tree hierarchy and efficient spatial attention via coarse approximation. We also design an uncertainty quantification branch to estimate prediction confidence related to input feature noise and point sparsity. We provide a theoretical analysis of computational time complexity and memory costs. Extensive experiments on both real-world and synthetic datasets show that our method outperforms multiple baselines in prediction accuracy and our model can scale up to one million points on one NVIDIA A100 GPU. + +# 2 Problem Statement + +A spatial point sample is a data sample drawn from 2D continuous space, denoted as $\mathbf{o}_i = (\mathbf{x}(\mathbf{s}_i),y(\mathbf{s}_i),\mathbf{s}_i)$ , where $1\leq i\leq n$ , $\mathbf{s}_i\in \mathbb{R}^2$ is the 2D spatial location (e.g., latitude and longitude), $\mathbf{x}(\mathbf{s}_i)\in \mathbb{R}^{m\times 1}$ is a vector of $m$ non-spatial explanatory features, and $y(\mathbf{s}_i)$ is a target response variable $(y(\mathbf{s}_i)\in \mathbb{R}$ for regression, $y(\mathbf{s}_i)\in \{0,1\}$ for binary classification). For example, in water quality monitoring, a spatial point sample consists of non-spatial explanatory features from spectral bands of an Earth imagery pixel, the spatial location of that pixel in longitude and latitude, and the ground truth water quality level (e.g., algae count) at that location. + +We aim to learn the target variable as a continuous-space function $y: \mathbb{R}^2 \to \mathbb{R}$ or $\{0,1\}$ . Given a set of point sample observations $\mathcal{O} = \{(\mathbf{x}(\mathbf{s}_i), y(\mathbf{s}_i), \mathbf{s}_i)\}_{i=1}^n$ , where $\mathbf{s}_i$ is irregularly sampled in 2D, our model learns the continuous function $y$ that can be evaluated at any new spatial point $\hat{\mathbf{s}} \notin \{\mathbf{s}_i\}_{i=1}^n$ .
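As a concrete illustration of this setup, the observation sets and leave-one-out training instances can be sketched as follows (hypothetical names; `PointSample`, `make_observation_set`, and `leave_one_out` are ours, not part of the paper):

```python
from typing import NamedTuple
import numpy as np

class PointSample(NamedTuple):
    """One spatial point sample o_i = (x(s_i), y(s_i), s_i)."""
    x: np.ndarray  # m non-spatial explanatory features
    y: float       # target response (real-valued for regression)
    s: np.ndarray  # 2D location, e.g., (longitude, latitude)

def make_observation_set(features, targets, locations):
    """Bundle n samples into an observation set O = {(x(s_i), y(s_i), s_i)}."""
    return [PointSample(x, y, s) for x, y, s in zip(features, targets, locations)]

def leave_one_out(O, i):
    """Hold out sample i as the new (query) location; its target becomes
    the supervision signal when constructing training instances."""
    q = O[i]
    return O[:i] + O[i + 1:], (q.x, q.s), q.y
```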
We define our model as learning the mapping from the observation samples to the continuous target variable function $y(\hat{\mathbf{s}}) = f_{\theta}(\mathcal{O}, \mathbf{x}(\hat{\mathbf{s}}), \hat{\mathbf{s}})$ . Thus, we formulate our problem as follows. + +Input: Multiple training instances $\mathcal{D} = \{\mathcal{O}_j\}_{j=1}^L$ (sets of irregular points in continuous 2D space). +Output: A spatial transformer model $f: \{y(\hat{\mathbf{s}}), u(\hat{\mathbf{s}})\} = f(\mathbf{x}(\hat{\mathbf{s}}), \hat{\mathbf{s}}, \mathcal{O}_j)$ for any $j \in [1, \dots, L]$ , where $\hat{\mathbf{s}}$ is any new sample location for inference, $\mathbf{x}(\hat{\mathbf{s}})$ and $y(\hat{\mathbf{s}})$ are the explanatory features and output target variable for the new sample, respectively, and $u(\hat{\mathbf{s}})$ is the uncertainty score corresponding to the prediction $y(\hat{\mathbf{s}})$ . + +Objective: Minimize prediction errors and maximize the uncertainty quantification performance. + +Constraint: There exists an implicit multi-scale and long-range spatial dependency structure between point samples in continuous space. + +Note that to supervise model training, we can construct the training instances by removing a single point sample from each set $\mathcal{O}_j$ as the new sample location and using its target variable as the ground truth. + +# 3 Related Work + +- Transformer models: Attention-based transformers are a widely used deep learning architecture for sequential data (e.g., texts, time series) [1], images or videos [5, 6], and graphs [7]. One main advantage of transformers is their capability of capturing long-range dependency between samples. One major computational bottleneck is the quadratic cost associated with the all-pair self-attention. Various techniques have been developed to address this bottleneck. For sequential data, sparsity-based methods [17, 18] and low-rank-based methods [19, 20, 21] have been developed.
Sparsity-based methods leverage various attention patterns, such as local attention, dilated window attention, or cross-partition attention, to reduce computation costs. Low-rank-based methods assume the attention matrix can be factorized into low-rank matrices. For vision transformers, patch-based [22, 23] or axial-based [24, 25] attention mechanisms have been proposed to improve computational efficiency. Similarly, for graph data, sampling-based [26, 27, 28] and spectral-based [29, 30] attention mechanisms were proposed to reduce computation complexity. However, these techniques require an explicit graph structure with fixed topology. To the best of our knowledge, existing transformer models cannot be applied to massive point samples in continuous space. + +- Neural operator learning in continuous space: Neural operator learning aims to train a neural network surrogate as the solver of a family of partial differential equation (PDE) instances [31]. The surrogate takes initial or boundary conditions and predicts the solution function. Existing surrogate models include deep convolutional neural networks [2], graph neural operators [3, 32], Fourier neural operators [33, 34], DeepONet [31], NodeFormer [26, 35], and vision transformers [36, 37, 38]. However, existing methods are mostly designed for regular grids or fixed graph topology and thus cannot be applied to irregular spatial points. There are several methods for irregular spatial points in continuous space based on implicit neural representation [39, 40, 41] or fixed-graph transformation [42], but their neural networks only take each sample's individual spatial coordinates without explicitly capturing the spatial dependency between samples. + +- Deep learning for spatial data: Extensive research exists on deep learning for spatial data.
Deep convolutional neural networks (CNNs) [43, 44] are often used for regular grids (e.g., satellite images and global climate models) [45, 46], and graph neural networks (GNNs) [47] are used for irregular grids (e.g., meshes with irregular boundaries) [48, 49, 50] or spatial networks (e.g., river or road networks) [51, 52, 53]. However, CNNs and GNNs only capture local spatial dependency without long-range interactions. In recent years, the transformer architecture [1, 54] has been widely used for spatial representation learning with long-range dependency, but existing transformer models are often designed for regular grids (images, videos) [5, 6, 23, 24, 25, 22] and thus cannot be directly applied to irregular point samples in continuous space. + +# 4 Approach + +This section introduces our proposed hierarchical spatial transformer (HST) model. Figure 1 shows the overall architecture with an encoder and a decoder. The encoder learns a multi-resolution representation of points via spatial pooling within a quadtree hierarchy (quadtree pooling) and conducts efficient hierarchical spatial attention via point coarsening. The intuition is to approximate the representation of faraway key points by the representation of coarse quadtree cells. The decoder makes inferences at a new point location by traversing the quadtree and conducting cross-attention from this point to all other points. The decoder also contains an uncertainty quantification (UQ) branch to estimate the confidence of the model prediction related to feature noise and point sparsity. Note that our model differs from existing tree-based transformers [55, 56] for images or videos, as these methods require a regular grid and cannot be directly applied to irregular points in continuous space.
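The quadtree hierarchy at the core of the encoder and decoder can be illustrated with a minimal recursive construction (our own sketch; the dictionary-based tree and `max_leaf` threshold are illustrative, not the paper's implementation):

```python
import numpy as np

def build_quadtree(points, bounds, max_leaf=2, depth=0, max_depth=12):
    """Recursively partition the rectangle `bounds` = (xmin, ymin, xmax, ymax)
    into four equal cells until a cell holds at most `max_leaf` of the (n, 2)
    `points`; empty cells are dropped.  Cells use half-open intervals, so
    points exactly on the outer max boundary would be dropped (pad `bounds`
    slightly in practice)."""
    if len(points) <= max_leaf or depth == max_depth:
        return {"points": points, "children": []}   # leaf cell (external nodes)
    xmin, ymin, xmax, ymax = bounds
    xmid, ymid = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    children = []
    for b in [(xmin, ymin, xmid, ymid), (xmid, ymin, xmax, ymid),
              (xmin, ymid, xmid, ymax), (xmid, ymid, xmax, ymax)]:
        mask = ((points[:, 0] >= b[0]) & (points[:, 0] < b[2]) &
                (points[:, 1] >= b[1]) & (points[:, 1] < b[3]))
        if mask.any():
            children.append(build_quadtree(points[mask], b, max_leaf,
                                           depth + 1, max_depth))
    return {"points": points, "children": children}  # internal node (cell)
```

Denser subareas simply recurse deeper, which is how the non-uniform point distribution is accommodated.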
+ +# 4.1 Multi-resolution representation learning within a quadtree hierarchy + +Learning a multi-resolution latent representation of point samples is non-trivial due to the non-uniform (irregular) distribution of points in continuous space (instead of a regular grid as in images [22]). To address this challenge, we propose to use a quadtree to establish a multi-scale hierarchy, a continuous spatial positional encoding for point representation, and a spatial pooling within the quadtree hierarchy to learn multi-resolution representations. + +![](images/0c5e16a8141f1f3c377b8a1c88944133837e8c0b7a9f795cb56466d74f8ed65d.jpg) +Figure 1: The overall architecture of our hierarchical spatial transformer model. + +# 4.1.1 Spatial representation of individual points (quadtree external nodes) + +Continuous spatial positional encoding: A positional encoding is a continuous functional mapping $\phi : \mathbb{R}^2 \to \mathbb{R}^{\frac{d}{2}}$ from the 2D continuous space to a $\frac{d}{2}$ -dimensional encoded vector space. The encoding function needs to allow a potentially infinite number of possible locations in continuous space, and the similarity between the positional encodings of two points should reflect their spatial proximity, i.e., nearby samples tend to have a higher dot product similarity in their positional encodings. Common positional encodings based on discrete index numbers for sequence data are insufficient. We propose to use a multi-dimensional continuous space position encoding [57] as follows, + +$$ +\phi(\boldsymbol{s}) \approx [\cos(\Omega_1 \boldsymbol{s}), \sin(\Omega_1 \boldsymbol{s}), \dots, \cos(\Omega_{\frac{d}{2}} \boldsymbol{s}), \sin(\Omega_{\frac{d}{2}} \boldsymbol{s})], \tag{1} +$$ + +where $d$ is the encoding dimension and each $\Omega_{i} \sim \mathcal{N}(\mathbf{0},\Sigma)$ is a $1 \times 2$ projection matrix drawn i.i.d. from a Gaussian distribution with covariance $\Sigma$ .
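A minimal sketch of such a random-feature encoding (our own illustration, not the paper's code; we add a $\sqrt{d/2}$ normalisation and draw the $\Omega_i$ with isotropic covariance $\sigma^{-2} I$ so that the dot product approximates a unit-height Gaussian kernel):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(d, sigma):
    """Random-feature positional encoding phi: R^2 -> R^d in the spirit of
    Eq. (1).  With rows of Omega drawn from N(0, sigma^{-2} I) and a
    1/sqrt(d/2) normalisation, <phi(s1), phi(s2)> approximates the Gaussian
    kernel exp(-||s1 - s2||^2 / (2 sigma^2))."""
    Omega = rng.normal(0.0, 1.0 / sigma, size=(d // 2, 2))

    def phi(s):
        proj = Omega @ np.asarray(s, dtype=float)
        # cos/sin blocks instead of interleaving; dot products are unchanged
        return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(d // 2)

    return phi
```

Larger `d` tightens the kernel approximation; the bandwidth `sigma` plays the role of the hyperparameter controlling spatial proximity.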
The advantage of this encoding is that it satisfies the following property: $\langle \phi(\mathbf{s}_1), \phi(\mathbf{s}_2) \rangle \approx k(\|\mathbf{s}_1 - \mathbf{s}_2\|) = \exp\{-(\mathbf{s}_1 - \mathbf{s}_2)^{\top}\Sigma^{-1}(\mathbf{s}_1 - \mathbf{s}_2)\}$ . Here the hyperparameter $\boldsymbol{\Sigma}$ controls the spatial kernel bandwidth. Next, we use $\phi (\pmb {o}_i)$ to denote the positional encoding $\phi (\mathbf{s}_i)$ for consistency. + +Spatial representation: We propose an initial spatial representation of each individual point by concatenating its continuous-space positional encoding and a non-spatial feature embedding, i.e., + +$$ +\mathbf{h}(\boldsymbol{o}_i) = \left[ \psi(\boldsymbol{o}_i); \phi(\boldsymbol{o}_i) \right] \tag{2} +$$ + +where $\phi (\pmb {o}_i)$ is the positional encoding, $\psi (\pmb {o}_i) = \mathbf{W}\cdot [\mathbf{x}_i; y_i]$ is the non-spatial embedding, $\mathbf{W}\in \mathbb{R}^{\frac{d}{2}\times (m + 1)}$ is the embedding parameter matrix, and $d$ is the dimension of the concatenated representation. We denote the representations of all point samples (external quadtree nodes) in a matrix $\mathbf{H}_o = [\mathbf{h}(\pmb {o}_1),\dots,\mathbf{h}(\pmb {o}_n)]^T\in \mathbb{R}^{n\times d}$ , where each row is the representation of one point. + +# 4.1.2 Spatial representation of coarse cells (quadtree internal nodes) + +To learn a multi-resolution representation of non-uniformly distributed points in continuous space, we propose a spatial pooling operation within a quadtree hierarchy. A quadtree is a spatial index structure designed for a large number of points in 2D continuous space. It recursively partitions a rectangular area into four equal cells until the number of points within a cell falls below a maximum threshold. In a quadtree, an internal node at a higher level represents an area at a coarser spatial scale, and nodes at different levels provide a multi-resolution representation. Another advantage of using a quadtree is that it can handle non-uniform point distributions.
A subarea with denser points will have a deeper tree branch through continued recursive space partitioning.

Formally, given the set of point samples $\mathcal{O} = \{\mathbf{o}_i\}_{i=1}^n$ in continuous space, we construct a quadtree $\mathcal{T}$. The quadtree has two kinds of node sets: an external node set $\mathcal{E}$, which corresponds to the observed spatial point samples, and an internal node set $\mathcal{I}$, which represents the spatial cells. A quadtree has $L$ levels, and all the nodes in level $l$ form a set $\mathcal{R}_l = \{r_1^l, \dots, r_{k_l}^l\}$, where $r_j^l$ is the $j$-th node at level $l$, and $k_l$ is the total number of nodes in level $l$. Given one node $r_j^l$, we denote its sibling node set by $\mathcal{S}(r_j^l)$ (nodes on the same hierarchical level under the same parent node) and its ancestor node set by $\mathcal{A}(r_j^l)$ (nodes on the path from the node $r_j^l$ to the root). We denote the spatial representation of the node $r_j^l$ as $\mathbf{h}(r_j^l)$, with $l \in \{1, \dots, L\}$ and $j \in \{1, \dots, k_l\}$.

For example, in Figure 2(a), there are 11 input point samples, with denser point distribution in the upper left corner and the lower right corner. Assuming a maximum leaf node size of two samples, the corresponding quadtree is shown in Figure 2(b), which has 12 internal nodes (blue) and 11 external nodes (green). There are five different levels, starting with level 0 (the root node). Level 1 has four internal nodes, i.e., $\mathcal{R}_1 = \{r_1^1,\dots,r_4^1\}$.

![](images/9f98c5e41e7fb8b34159c46e5d26131866f02e27bd971a4ecac797d050e34b00.jpg)
(a) Space partitioning

![](images/ac3dc45047546a3b7fe0c980abd13df4a7532af81c729ba619da84134d7bb01e.jpg)
(b) Quadtree hierarchy

![](images/91eaad2eb9720a7d2fddfca5d72f7d83edd5085774530bab30f1cb279b47a874.jpg)
(c) Quadtree pooling
Figure 2: An example of a quadtree and pooling operation.
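The recursive partitioning described above, together with per-cell average pooling of point features, can be sketched as follows. This is a hypothetical minimal implementation, not the paper's code; `max_points`, the unit-square bounds, and the sample coordinates are illustrative.

```python
import numpy as np

def build_quadtree(pts, x0, y0, size, max_points=2, depth=0):
    """Recursively split a square cell until it holds <= max_points samples."""
    node = {"depth": depth, "pts": pts, "children": []}
    if len(pts) <= max_points:
        return node                                   # leaf (external) node
    half = size / 2.0
    for dx in (0, 1):
        for dy in (0, 1):
            cx, cy = x0 + dx * half, y0 + dy * half
            sub = [p for p in pts if cx <= p[0] < cx + half and cy <= p[1] < cy + half]
            if sub:                                   # empty cells are ignored
                node["children"].append(
                    build_quadtree(sub, cx, cy, half, max_points, depth + 1))
    return node

def pool(node, feat):
    """Internal-node representation = average of the point features it covers."""
    node["h"] = np.mean([feat[p] for p in node["pts"]], axis=0)
    for c in node["children"]:
        pool(c, feat)

pts = [(0.1, 0.1), (0.2, 0.2), (0.3, 0.1), (0.9, 0.9)]   # denser lower-left corner
feat = {p: np.array([i, 2.0 * i]) for i, p in enumerate(pts)}
tree = build_quadtree(pts, 0.0, 0.0, 1.0, max_points=2)
pool(tree, feat)
```

The denser lower-left corner produces a deeper branch (depth 2) than the single point in the upper-right corner (a leaf at depth 1), matching the behaviour described above.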
For instance, $r_1^1$ corresponds to the largest quad cell in the upper left corner. It has two non-empty children at level 2, one of which, $r_1^2$, is an internal node, while the other, $r_2^2$, is a leaf node linked to an external node $r_4^3$ (also expressed as $o_5$). The sibling set $\mathcal{S}(r_1^2)$ is $\{r_2^2\}$ (the other two sibling nodes are empty cells and are thus ignored). The set of ancestors $\mathcal{A}(r_1^2)$ is $\{r_1^1,r^0\}$.

Assume the total number of quadtree nodes is $N$, including $n$ leaf nodes (point samples) and $N - n$ internal nodes (coarse cells). We can compute the representation of each internal node by an average pooling of the representations of its children within the quadtree hierarchy (i.e., quadtree pooling).

Formally, the spatial representation $\mathbf{H}_p$ of all quadtree internal nodes can be computed by the sparse matrix multiplication in Equation 3, where $\mathbf{H}_o\in \mathbb{R}^{n\times d}$ is the representation matrix of the $n$ point samples (external nodes), $\mathbf{H}_p\in \mathbb{R}^{(N - n)\times d}$ is the pooled representation matrix of the $N - n$ internal nodes, and $\mathbf{P}\in \mathbb{R}^{(N - n)\times n}$ is a sparse pooling matrix. Each row of $\mathbf{P}$ is normalized, and its non-zero entries indicate the external nodes (point samples) under the corresponding internal node. The computational structure is shown in Figure 2(c). We concatenate the internal node features $\mathbf{H}_p$ and the external node features $\mathbf{H}_o$ to form a representation matrix $\mathbf{H}\in \mathbb{R}^{N\times d}$ for all quadtree nodes.

$$
\mathbf {H} _ {p} = \mathrm {QuadtreePooling} \left(\mathbf {H} _ {o}\right) = \mathbf {P} \mathbf {H} _ {o}. \tag {3}
$$

# 4.2 Efficient Hierarchical Spatial Attention

The goal of the spatial attention layer is to model the implicit long-range spatial dependency between all sample points.
Computing all-pair self-attention across massive points (e.g., a million) is computationally prohibitive due to high time and memory costs. To reduce this computational bottleneck, we propose an efficient hierarchical spatial attention operation based on a coarse approximation of key points. Specifically, instead of computing the attention weight from a query point $o_i$ to all other points as keys, we only compute the weights from $o_i$ to a selective subset of quadtree nodes. Our intuition is that key points that are far away from the query point $o_i$ can be approximated by coarse cells (nodes), and the further away those points are, the coarser the cells (upper-level nodes) we can use to approximate them.

Formally, for each query point (external node), we define its key node set $\mathcal{K}$, which includes the point itself, its own siblings (points), as well as the siblings of its ancestors. That is, $\mathcal{K}(\pmb{o}_i) = \{\pmb{o}_i\} \cup \mathcal{S}(\pmb{o}_i) \cup \bigcup_{r \in \mathcal{A}(\pmb{o}_i)} \mathcal{S}(r)$. For example, in Figure 3, the key set of the external node $\pmb{o}_1$ (also noted as $r_1^4$) is $\{\pmb{o}_1, \pmb{o}_2, r_2^3, r_3^3, r_2^2, r_2^1, r_3^1, r_4^1\}$. In this way, we reduce the number of attention weight calculations from all 11 points to 8 quadtree nodes. In particular, we use one quadtree node, $r_4^1$, to approximate all four points within it ($o_8$ to $o_{11}$), since they are far away from $o_1$. Based on this definition of a key set, the spatial attention operator can be expressed as Equation 4, where $h_i$ is the output representation for $o_i$, $\mathcal{K}_i$ is the key node set of $o_i$, $q_i$ is the query vector of point $o_i$, $k_j$ and $v_j$ are the key vector and value vector of the attended node $r_j$, respectively, and $d$ is the latent dimension of these vectors.

$$
\boldsymbol {h} _ {i} = \sum_ {j \in \mathcal {K} _ {i}} \frac {\exp \left(\boldsymbol {q} _ {i} \boldsymbol {k} _ {j} ^ {T} / \sqrt {d}\right)}{\sum_ {j' \in \mathcal {K} _ {i}} \exp \left(\boldsymbol {q} _ {i} \boldsymbol {k} _ {j'} ^ {T} / \sqrt {d}\right)} \boldsymbol {v} _ {j} \tag {4}
$$

![](images/9b7ba446a97243334bc225d88c7487fe0ce08f591540b21e7a451e9dbc6597e5.jpg)
(a)

![](images/8dffb267e2f831455bf0d1e6dc38894bf9977c008bc57089159e36c532bdc0e3.jpg)
(b)
Figure 3: An example of a quadtree (a) and the selective key node set of $\mathbf{o}_1$ in red boxes (b).

We can express the spatial attention operator in matrix and tensor notation. Assume the spatial representation of all quadtree nodes from the prior layer is $\mathbf{H} \in \mathbb{R}^{N \times d}$ and the representation of the external nodes (point samples) is $\mathbf{H}_o$. The query matrix of all point samples can be computed by an embedding with a learnable parameter matrix $\mathbf{W}_q$, i.e., $\mathbf{Q}_o = \mathbf{W}_q\mathbf{H}_o$. For simplicity, we denote the query matrix $\mathbf{Q}_o$ as $\mathbf{Q}$. For each query point $\mathbf{o}_i$, its keys are a subset of quadtree nodes. We denote their embedded key vectors by a matrix $\mathbf{K}_i \in \mathbb{R}^{d \times |\mathcal{K}_i|}$. Concatenating the key matrices of all queries yields a 3D tensor $\hat{\mathbf{K}} \in \mathbb{R}^{d \times |\mathcal{K}_i| \times n}$. Similarly, the corresponding value matrices can be concatenated into a 3D tensor $\hat{\mathbf{V}} \in \mathbb{R}^{d \times |\mathcal{K}_i| \times n}$. The construction of these 3D tensors can be implemented with the torch.gather() API in PyTorch. The corresponding cross-attention weights can be calculated by matrix and tensor multiplications, as illustrated in Figure 4. We can see the difference between the proposed hierarchical spatial attention and the default all-pair attention.
In the all-pair self-attention (Figure 4 top), we would have to compute the dot-product attention for all point pairs, i.e., $\mathbf{Q}_o^T\cdot \mathbf{K}_o$, whose size is $n\times n$ and can become prohibitively large (e.g., $n = 1{,}000{,}000$ for a million points). In our spatial attention layer, we instead conduct a sparse self-attention, i.e., we only compute the attention weights of a sample point to its key set. In other words, the corresponding key matrix for $\mathbf{o}_i$ is a vertical slice of $\hat{\mathbf{K}}$, denoted $\hat{\mathbf{K}}[:, :, i]\in \mathbb{R}^{d\times |\mathcal{K}|}$, where $|\mathcal{K}|$ is the maximum key node set size ($|\mathcal{K}| \ll n$).

![](images/f8d34fe343abf218c27b56e6a91ed4e4abc629642a4170d1a0cd33bfd7d341a1.jpg)
Figure 4: Sparse spatial attention with selective key set.

Time cost analysis: The computation cost of the proposed spatial attention depends on the uniformness of the spatial point distribution, which determines how balanced the quadtree is. Assume that there are $n$ points and the maximum number of points in each quadtree node is $M$ (the quadtree threshold). We analyze two scenarios. If the quadtree is completely balanced, the tree depth and the key set size of every external node are $O(\log \frac{n}{M})$, so the total computation cost is $O(n \cdot (M + \log \frac{n}{M}))$. In the worst case, the spatial points are highly nonuniform and the quadtree is highly imbalanced: the extreme tree depth is $O\left(\frac{n}{M}\right)$, so the attention computation is $O(n \cdot \left(\frac{n}{M} + M\right))$. However, such a worst-case scenario is unlikely in practice, as it would require samples to be concentrated within a single subgroup of the quadtree at every level. For instance, for water quality monitoring, the sensors are sparsely distributed across a broad area to monitor multiple locations. In practical applications, the complexity is $O(n \cdot (\log \frac{n}{M} + M))$.
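The gather-then-attend pattern described above can be sketched in NumPy as a hypothetical stand-in for the torch.gather()-based implementation; shapes are transposed to row-major (n, k, d) for readability, and `key_idx` plays the role of the per-query key node sets.

```python
import numpy as np

def sparse_attention(Q, K, V, key_idx):
    """Hierarchical sparse attention (Eq. 4).

    Q:       (n, d) query vectors for the n point samples.
    K, V:    (N, d) key/value vectors for all N quadtree nodes.
    key_idx: (n, k) node indices of each query's selective key set,
             so only n*k weights are computed instead of n*N.
    """
    K_hat = K[key_idx]                                  # (n, k, d) gathered keys
    V_hat = V[key_idx]                                  # (n, k, d) gathered values
    d = Q.shape[-1]
    logits = np.einsum("nd,nkd->nk", Q, K_hat) / np.sqrt(d)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                  # softmax over each key set
    return np.einsum("nk,nkd->nd", w, V_hat)            # (n, d) outputs

rng = np.random.default_rng(0)
n, N, k, d = 3, 8, 4, 16
Q, K, V = rng.normal(size=(n, d)), rng.normal(size=(N, d)), rng.normal(size=(N, d))
key_idx = np.array([[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7]])
out = sparse_attention(Q, K, V, key_idx)
```

With the key-set size k fixed by the quadtree depth, the cost is O(n k d) rather than the O(n^2 d) of all-pair attention.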
This practical complexity estimate is validated in our experiments.

Memory cost analysis: We analyze the memory costs of the HST model theoretically. Assume $n$ input point samples, a leaf node size threshold $M$, a batch size $B$, $h$ attention heads, and a hidden dimension $d$. The memory costs of HST are dominated by the hierarchical spatial attention layer, whose memory cost is $O(B \cdot h \cdot d \cdot n \cdot |\mathcal{K}_i|)$ per layer, where $|\mathcal{K}_i| = \log \frac{n}{M} + M$ for a relatively balanced quadtree.

# 4.3 Decoder: inference on a new point with uncertainty quantification

Model inference (prediction): Given a test sample $\mathbf{o}_t = (\mathbf{x}_t, \mathbf{s}_t)$, the decoder module predicts $y_t$ based on the learned spatial representations of the quadtree nodes $\mathbf{H}$. As in the encoder, we use cross-attention between the test sample and its corresponding key node set in the quadtree, as in Equation 4. As shown in Figure 1, we first perform a quadtree traversal for the test point until reaching a leaf node and then identify its key node set. Based on that, we apply hierarchical spatial cross-attention between the test location and its key node set, followed by a dense layer.

Uncertainty quantification (UQ): Due to varying point density as well as feature noise and ambiguity, the model's predictions at new locations may come with different confidence levels. Intuitively, the prediction at a test location surrounded by many input point samples tends to be more confident. Many UQ methods exist for deep learning (e.g., MC dropout and deep ensembles [58, 59, 60, 61]), but few consider the uncertainty due to varying sample density. The most common method for UQ in the continuous spatial domain is the Gaussian process (GP) [62, 63].
The UQ of a GP is based on Equation 5, where $\mathbf{c}_0 \in \mathbb{R}^{n \times 1}$ is the covariance vector between the test sample and all input point samples, $\mathbf{C} \in \mathbb{R}^{n \times n}$ is the covariance matrix of the input samples, and $\sigma_0^2$ is the self-variance. In a GP, the covariance $\mathbf{C}$ is computed with a kernel function that reflects location proximity [64]. Although a GP has good theoretical properties, it is inefficient for massive points due to the expensive inversion of a large covariance matrix, and it is unable to learn a non-linear representation of the samples.

$$
\sigma_ {t} ^ {2} = \sigma_ {0} ^ {2} - \mathbf {c} _ {0} ^ {T} \mathbf {C} ^ {- 1} \mathbf {c} _ {0} \tag {5}
$$

Our proposed spatial transformer framework can be considered a generalization of a GP model. We use the dot-product cross-attention weights to approximate the covariance vector $\mathbf{c}_0$ between the test location and all points, which reflects the dependency among point sample locations based on their non-linear embeddings, i.e., $\mathbf{c}_0 = \mathbf{H}\mathbf{q}_t$, where $\mathbf{q}_t$ is the embedded query vector of the test sample. However, this idea alone does not suffice to approximate the entire covariance matrix $\mathbf{C}$ across all points, since inverting a full covariance matrix is very expensive. To overcome this challenge, we propose to directly approximate the precision matrix $\mathbf{C}^{-1}$ based on an indicator matrix of the selective key sets of all queries, $\mathbf{S} \in \mathbb{R}^{n \times N}$, in which each row indicates the key node set of a query point (and thus specifies the dependency between that query point and all quadtree nodes).
Thus, we use $\mathbf{S}^T \cdot \mathbf{S} / T_u^2$ to approximate the precision matrix $\mathbf{C}^{-1}$, where $T_u^2$ is a hyper-parameter calibrated on independent validation data using the Expected Calibration Error (ECE) [65]. The intuition is that the precision matrix reflects the conditional independence structure among point samples (similar to the selective spatial attention in our model). Since $\mathbf{S}^T \cdot \mathbf{S} = \sum_i \mathbf{s}_i \mathbf{s}_i^T$, where $\mathbf{s}_i$ is the key-set indicator vector of the $i$-th query, we can see that $\mathbf{S}^T \cdot \mathbf{S}$ is a summation of several sparse block-diagonal matrices. This reflects our assumption in the quadtree that each external node (point sample) is conditionally independent of all other nodes given its key node set. Based on the above approximation, our uncertainty quantification method can be expressed by Equation 6, where $\mathbf{K}_t = \mathbf{S}\mathbf{H}$ is the key matrix corresponding to the selective key sets.

$$
\begin{array}{rl} u _ {t} & = \sigma_ {0} ^ {2} - \left(\mathbf {q} _ {t} ^ {T} \mathbf {H} ^ {T}\right) \mathbf {S} ^ {T} \mathbf {S} (\mathbf {H} \mathbf {q} _ {t}) / T _ {u} ^ {2} \\ & = \sigma_ {0} ^ {2} - \mathbf {q} _ {t} ^ {T} (\mathbf {S} \mathbf {H}) ^ {T} (\mathbf {S} \mathbf {H}) \mathbf {q} _ {t} / T _ {u} ^ {2} \\ & = \sigma_ {0} ^ {2} - \left(\mathbf {K} _ {t} \mathbf {q} _ {t}\right) ^ {T} \left(\mathbf {K} _ {t} \mathbf {q} _ {t}\right) / T _ {u} ^ {2} \end{array} \tag {6}
$$

Note that our approach shares similarities with the multipole graph neural operator model (MGNO) [32]. MGNO employs a multi-scale low-rank matrix factorization technique to approximate the full kernel matrix across all samples. A key distinction is that MGNO uses a neighborhood graph structure to approximate the dependency relationships.
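The algebraic shortcut in Equation 6, rewriting $(\mathbf{H}\mathbf{q}_t)^T\mathbf{S}^T\mathbf{S}(\mathbf{H}\mathbf{q}_t)$ as $\|\mathbf{K}_t\mathbf{q}_t\|^2$ with $\mathbf{K}_t=\mathbf{S}\mathbf{H}$, can be checked numerically on toy shapes; all sizes and hyper-parameter values below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, d = 4, 6, 3                      # queries, quadtree nodes, latent dim
S = (rng.random((n, N)) < 0.5).astype(float)   # binary key-set indicator
H = rng.normal(size=(N, d))            # node representations
q_t = rng.normal(size=d)               # embedded test query
sigma0_sq, T_u = 10.0, 2.0             # self-variance and calibration hyper-parameter

c0 = H @ q_t                           # approximate covariance to all nodes
u_direct = sigma0_sq - c0 @ (S.T @ S) @ c0 / T_u**2      # first line of Eq. 6
K_t = S @ H                            # precomputable key matrix
u_fast = sigma0_sq - (K_t @ q_t) @ (K_t @ q_t) / T_u**2  # last line of Eq. 6
```

The second form never materializes the large matrix $\mathbf{S}^T\mathbf{S}$, which is the point of the rewriting.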
In contrast to MGNO's neighborhood-graph approximation, our approach captures long-range interactions among samples in Euclidean space and provides uncertainty quantification for its predictions.

# 5 Experimental Evaluation

The goal is to compare our proposed spatial transformer with baseline methods in prediction accuracy and uncertainty quantification performance. All experiments were conducted on a cluster node with one NVIDIA A100 GPU (80GB GPU memory). The candidate methods for comparison are listed below; the models' hyper-parameters are provided in the supplementary materials.

- Gaussian Process (GP): We used a GP model based on spatial location only, without the explanatory features [62]. The prediction variance was used as the uncertainty measure.
- Deep Gaussian Process (Deep GP): We implemented a hybrid Gaussian process neural network [66] using the samples' explanatory features and locations. The GP variance is used as the uncertainty.
- Spatial graph neural network (Spatial GNN): We first constructed a spatial graph based on each sample's kNN by spatial distance, then trained a GNN model [47]. We used the MC-dropout method [58] to quantify the prediction uncertainty.
- Multipole graph neural operator (MGNO): It belongs to the family of neural operator models [32].
- NodeFormer: An efficient graph transformer model for learning implicit graph structure [26]. We used the code from its official website.
- Galerkin Transformer: It uses a softmax-free attention mechanism to achieve a linearized transformer [35]. We quantify the prediction uncertainty with MC-dropout.
- Hierarchical Spatial Transformer (HST): This is our proposed method, implemented in PyTorch.

The prediction performance is evaluated with the mean squared error (MSE) and the mean absolute error (MAE). Evaluating UQ performance is challenging due to the lack of ground truth for uncertainty.
UQ evaluation metrics: The quantitative evaluation metric for UQ performance is Accuracy versus Uncertainty ($AvU$) [67]. We set an accuracy threshold $T_{ac}$ and an uncertainty threshold $T_{au}$ to group predictions into the four categories shown in Table 1. $n_{\mathrm{AC}}, n_{\mathrm{AU}}, n_{\mathrm{IC}}, n_{\mathrm{IU}}$ denote the number of samples in the categories AC, AU, IC, and IU, respectively. As Equation 7 shows, $AvU$ measures the fraction of samples in the two categories AC and IU. A reliable model should provide a higher $AvU$ measure ($AvU \in [0,1]$). Details on how we choose the thresholds are provided in the supplementary materials.

$$
A v U = \frac {n _ {\mathrm {A C}} + n _ {\mathrm {I U}}}{n _ {\mathrm {A C}} + n _ {\mathrm {A U}} + n _ {\mathrm {I C}} + n _ {\mathrm {I U}}} \tag {7}
$$

However, $AvU$ is usually biased by the accuracy of the model, since models tend to have high confidence in accurate predictions. We therefore propose an evaluation metric that assesses uncertainty performance for accurate and inaccurate predictions separately. Specifically, we compute $AvU_{A}$ for accurate predictions and $AvU_{I}$ for inaccurate predictions:

$$
A v U _ {A} = \frac {n _ {\mathrm {A C}}}{n _ {\mathrm {A C}} + n _ {\mathrm {A U}}}, \quad A v U _ {I} = \frac {n _ {\mathrm {I U}}}{n _ {\mathrm {I C}} + n _ {\mathrm {I U}}} \tag {8}
$$

In our evaluation, we report the harmonic mean of $AvU_{A}$ and $AvU_{I}$, $AvU = \frac{2\, AvU_{A}\, AvU_{I}}{AvU_{A} + AvU_{I}}$, rather than the arithmetic mean, to penalize extreme cases.

Table 1: Accuracy versus Uncertainty (AvU)
| Accuracy \ Uncertainty | Certain | Uncertain |
| --- | --- | --- |
| Accurate | Accurate Certain (AC) | Accurate Uncertain (AU) |
| Inaccurate | Inaccurate Certain (IC) | Inaccurate Uncertain (IU) |
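The AvU bookkeeping of Equations 7-8, with the harmonic mean used in our evaluation, can be sketched as follows; the error/uncertainty values and thresholds are illustrative.

```python
def avu_scores(errors, uncertainties, t_ac, t_au):
    """Split predictions into AC/AU/IC/IU and return (AvU_A, AvU_I, harmonic AvU).

    A prediction is 'accurate' if its error is below t_ac and
    'certain' if its uncertainty is below t_au.
    """
    n_ac = n_au = n_ic = n_iu = 0
    for e, u in zip(errors, uncertainties):
        accurate, certain = e < t_ac, u < t_au
        if accurate and certain:
            n_ac += 1
        elif accurate:
            n_au += 1
        elif certain:
            n_ic += 1
        else:
            n_iu += 1
    avu_a = n_ac / max(n_ac + n_au, 1)          # Eq. 8, accurate side
    avu_i = n_iu / max(n_ic + n_iu, 1)          # Eq. 8, inaccurate side
    harmonic = 2 * avu_a * avu_i / max(avu_a + avu_i, 1e-12)
    return avu_a, avu_i, harmonic

# Toy check: one sample in each of the four categories.
scores = avu_scores([0.1, 0.1, 2.0, 2.0], [0.1, 1.0, 0.1, 1.0], t_ac=1.0, t_au=0.5)
```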
Dataset description: We used three real-world datasets (two water quality datasets collected from the Southwest Florida coastal area and one sea-surface temperature dataset) and one PDE simulation dataset. Red tide dataset: The input data are satellite imagery obtained from the MODIS-Aqua sensor [68] and in-situ red tide data obtained from Florida Fish and Wildlife's (FWC) HAB Monitoring Database [69]. We have 104,100 sensor observations, and we use a sliding window with a sequence length of 400 to generate 103,700 inputs, split into training, validation, and test sets with a ratio of 7:1:2. Turbidity dataset: We used the same satellite imagery features as in the red tide dataset. The ground truth samples measure the turbidity of the coastal water; the dataset contains 13,808 sensor observations. Darcy flow for PDE operator learning: The Darcy flow dataset [33] contains 100 simulated images at $241 \times 241$ resolution. For each image, we subsample 100 sets of point samples from the original image (each set has 400 nodes). Sea Surface Temperature: We used the sea surface temperature dataset of the Atlantic Ocean [70]. We subsampled 400 point samples from the grid pixels.

Table 2: Comparison of model performance on two real-world datasets and one simulation dataset
| Model | Red tide MSE | Red tide MAE | Turbidity MSE | Turbidity MAE | Darcy flow MSE | Darcy flow MAE |
| --- | --- | --- | --- | --- | --- | --- |
| GP | 7.42 ± 0.25 | 2.55 ± 0.18 | 0.42 ± 0.02 | 0.46 ± 0.03 | 0.19 ± 0.03 | 0.33 ± 0.04 |
| Deep GP | 6.23 ± 0.42 | 2.32 ± 0.24 | 0.35 ± 0.04 | 0.42 ± 0.06 | 0.18 ± 0.03 | 0.31 ± 0.05 |
| Spatial GNN | 5.68 ± 0.07 | 2.19 ± 0.04 | 0.34 ± 0.02 | 0.46 ± 0.03 | 0.15 ± 0.02 | 0.26 ± 0.04 |
| All-pair transformer | 5.30 ± 0.12 | 1.98 ± 0.07 | 0.31 ± 0.03 | 0.35 ± 0.04 | 0.10 ± 0.02 | 0.22 ± 0.03 |
| MGNO | 5.41 ± 0.26 | 2.04 ± 0.10 | 0.32 ± 0.03 | 0.38 ± 0.05 | 0.12 ± 0.03 | 0.27 ± 0.03 |
| NodeFormer | 5.34 ± 0.10 | 2.05 ± 0.06 | 0.32 ± 0.04 | 0.38 ± 0.04 | 0.16 ± 0.03 | 0.29 ± 0.04 |
| Galerkin Transformer | 5.44 ± 0.17 | 2.11 ± 0.08 | 0.34 ± 0.03 | 0.42 ± 0.05 | 0.11 ± 0.02 | 0.25 ± 0.03 |
| HST (Our method) | 5.25 ± 0.11 | 1.97 ± 0.07 | 0.30 ± 0.03 | 0.35 ± 0.04 | 0.11 ± 0.02 | 0.23 ± 0.03 |
# 5.1 Comparison on prediction performance

We first compared the overall regression performance of the baseline models and our proposed HST model. The results on the three datasets are summarized in Table 2. The first columns report the performance on the red tide dataset under the MSE and MAE metrics. The GP model performed the worst because it ignores the samples' features. The deep GP model improved the MSE from 7.42 to 6.23 by leveraging the feature representation learning capability of neural networks and by considering point-sample correlation in both spatial and feature proximity. The spatial GNN model performed well because it considers the local neighbor dependency structure and feature representation simultaneously; however, it relies on a proximity-based graph structure and thus ignores the spatial hierarchical structure and long-range dependency. The MGNO baseline performed slightly better than the spatial GNN because its multi-level message-passing mechanism captures long-range dependency structure. Our framework performs better still because of its awareness of multi-scale spatial relationships, which is crucial for inference on point samples. The all-pair transformer also outperformed the fixed-graph GNN model thanks to long-range modeling; however, vanilla transformer models lack a hierarchical attention structure, which may yield suboptimal attention weights. Our model performed best overall, with the lowest MSE of 5.25, because it efficiently models the interaction among point samples in the continuous multi-scale space; the MAE metric shows similar results. On the second, turbidity, dataset, our model consistently achieved the best accuracy. For the PDE operator learning task (Table 2), our framework outperforms all baselines except the all-pair transformer, while our model is much more efficient.
In summary, compared with recent state-of-the-art graph transformer and neural operator learning methods, our framework performs best due to its capability of learning multi-scale spatial representations over massive point samples. Compared with the all-pair transformer, HST reduces the time cost, trading off efficiency against spatial granularity when computing attention between points. More experimental results on the Sea Surface Temperature dataset are provided in the supplementary materials.

Sensitivity analysis: We also conducted a sensitivity analysis of our model with respect to various hyper-parameters on the red tide dataset, including the quadtree leaf node size threshold $M$, the spatial positional encoding length scale $\sigma$, the number of attention layers, and the embedding dimension. The results are summarized in Figure 5, with a detailed analysis in the supplementary material. We can see that our model is generally stable under changes of these hyper-parameters.

![](images/f4a0ebd4f8924a4dc7e322926d455ccb1d94b132b24df3c2d7659e3f72719f98.jpg)
(a) Quadtree leaf node size

![](images/067e73b65a0f41a4b0cc1af038ec29cc143e162b6b61b459dbbc305a42ef7b5a.jpg)
(b) Length scale

![](images/305050fc63b21590c998b046a238dfd1be98d90a9586b2ea35ee64b29663a41f.jpg)
(c) Attention layer

![](images/b5a6e879ffec17e87714f2e4c0ddf1622766a95db761baf68a0e42f2cad53709.jpg)
(d) Embedding dimension
Figure 5: Parameters sensitivity analysis.

# 5.2 Comparison on uncertainty quantification performance (UQ)

We use the $AvU$ of Equation 7 to evaluate the UQ performance of our proposed HST model against the baseline methods; the results are summarized in Table 3, whose entries report the number of samples in the four categories $n_{\mathrm{AC}}, n_{\mathrm{AU}}, n_{\mathrm{IC}}, n_{\mathrm{IU}}$.
We can see that the base GP model had good uncertainty estimation for inaccurate predictions (65% were flagged uncertain), but for accurate predictions its uncertainty estimates remained overly cautious: only 14% were certain. This might be because predictions in sparsely sampled areas are less confident; long-range spatial correlation or feature similarity can improve prediction in such sparse areas, but the GP model is unaware of such dependency. The deep GP model improved the confidence of accurate predictions through feature representation learning but still performed worse than the other models, with an AvU score of 0.38. The spatial GNN model was more confident in accurate predictions but over-confident in inaccurate ones, resulting in a lower $AvU$ score (0.44). In contrast, our uncertainty model improved both the confidence of accurate predictions and the uncertainty of inaccurate predictions, raising the overall $AvU$ score to 0.49, because it models the uncertainty coming from both the feature space and the sample density simultaneously. These results validate the capability of our decoder attention in modeling prediction uncertainty.

Table 3: Comparison on uncertainty quantification performance on Red tide dataset

| Model | Accuracy | Certain | Uncertain | $AvU_A$ / $AvU_I$ | AvU |
| --- | --- | --- | --- | --- | --- |
| GP | Accurate | 2726 | 9144 | 0.23 | 0.33 |
|  | Inaccurate | 3252 | 4919 | 0.60 |  |
| Deep GP | Accurate | 3316 | 8509 | 0.28 | 0.38 |
|  | Inaccurate | 3094 | 5122 | 0.62 |  |
| Spatial GNN | Accurate | 3152 | 3949 | 0.44 | 0.40 |
|  | Inaccurate | 8355 | 4585 | 0.35 |  |
| Galerkin Transformer | Accurate | 4515 | 4925 | 0.47 | 0.38 |
|  | Inaccurate | 7166 | 3435 | 0.32 |  |
| HST | Accurate | 4342 | 5302 | 0.46 | 0.54 |
|  | Inaccurate | 3874 | 6503 | 0.65 |  |

# 5.3 Analysis on computation and memory cost

We evaluate the computation costs on a simulation dataset. The point sample locations are uniformly distributed over the two-dimensional space, and we generate the input features and target labels from a simulated Gaussian process. The simulated number of point samples varies from $10^{2}$ to $10^{6}$, with a training batch size of 1; other simulation parameters and training hyper-parameters are provided in the Appendix. We compare our model with the vanilla all-pair attention transformer in terms of computation time and memory. The computation time per epoch on 64 training samples is shown in Figure 6(a). When the number of point samples increases to 50K, the computation time and memory costs of the vanilla all-pair transformer increase dramatically, and it runs out of memory (OOM) when the number is further increased to 100K.

![](images/78d441bb175be84cb3638ea828bd6947e8181c43909c344bd4407658c4719308.jpg)
(a) Computation cost per epoch for 64 training samples.

![](images/0fefeba0ca136dc5270a69ff29bfbd42c1ea43a5618f123029e9128ee0fbc244.jpg)
(b) Computation cost per epoch versus leaf node size threshold for 5K training samples.

![](images/a4b1496651806ab82b114b6ae4baa1298fe1e408699351c0fde39b8088f1797c.jpg)
(c) Computation cost per epoch versus hidden dimension for 5K training samples.
Figure 6: Computation cost analysis
However, our model scales to 1M point samples and trains with a reasonable time cost (1 hour). We also analyze the effect of the quadtree leaf node size threshold (the maximum number of point samples in a quadtree leaf) using 5K training samples; the computation time is shown in Figure 6(b). As the leaf node size threshold increases, the computation time first decreases and then increases: when the threshold grows from 10 to 25, the quadtree depth decreases, which reduces the total number of query-key pairs, whereas when the threshold grows from 50 to 200, the larger number of point samples per leaf node increases the computation. The computational costs for different hidden feature dimensions are shown in Figure 6(c); we observe that they scale linearly with the hidden feature dimension.
Shigang Chen's work is supported in part by the National Institutes of Health under grant R01 LM014027.

# References

[1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[2] Limin Huang, Yu Jing, Hangyu Chen, Lu Zhang, and Yuliang Liu. A regional wind wave prediction surrogate model based on cnn deep learning network. Applied Ocean Research, 126:103287, 2022.
[3] Anima Anandkumar, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Nikola Kovachki, Zongyi Li, Burigede Liu, and Andrew Stuart. Neural operator: Graph kernel network for partial differential equations. In ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations, 2020.
[4] Tung Nguyen, Johannes Brandstetter, Ashish Kapoor, Jayesh K Gupta, and Aditya Grover. Climax: A foundation model for weather and climate. arXiv preprint arXiv:2301.10343, 2023.
[5] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In International conference on machine learning, pages 4055-4064. PMLR, 2018.
[6] Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, and Han Hu. Video swin transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3202-3211, 2022.
[7] Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J Kim. Graph transformer networks. Advances in neural information processing systems, 32, 2019.
[8] AG Dekker, Žamurović-Nenad, HJ Hoogenboom, and SWM Peters. Remote sensing, ecological water quality modelling and in situ measurements: a case study in shallow lakes. Hydrological Sciences Journal, 41(4):531-547, 1996.
[9] Jin Li and Andrew D Heap. A review of spatial interpolation methods for environmental scientists. Geoscience Australia, Record. Geoscience Australia, Canberra, 2008.
+[10] David W Wong, Lester Yuan, and Susan A Perlin. Comparison of spatial interpolation methods for the estimation of air quality data. Journal of Exposure Science & Environmental Epidemiology, 14(5):404-415, 2004. +[11] B Siddani and S Balachandar. Point-particle drag, lift, and torque closure models using machine learning: Hierarchical approach and interpretability. Physical Review Fluids, 8(1):014303, 2023. +[12] Volker Springel. High performance computing and numerical modelling, 2014. +[13] Eu Gene Chung, Fabián A Bombardelli, and S Geoffrey Schladow. Modeling linkages between sediment resuspension and water quality in a shallow, eutrophic, wind-exposed lake. Ecological Modelling, 220(9-10):1251-1265, 2009. +[14] Zhirui Deng, Qing He, Zeinab Safar, and Claire Chassagne. The role of algae in fine sediment flocculation: In-situ and laboratory measurements. Marine Geology, 413:71-84, 2019. +[15] Song Yang, Jiamou Liu, and Kaiqi Zhao. Getnext: trajectory flow map enhanced transformer for next poi recommendation. In Proceedings of the 45th International ACM SIGIR Conference on research and development in information retrieval, pages 1144-1153, 2022. +[16] Yanjun Qin, Yuchen Fang, Haiyong Luo, Fang Zhao, and Chenxing Wang. Next point-of-interest recommendation with auto-correlation enhanced multi-modal transformer network. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2612-2616, 2022. +[17] Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020. +[18] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019. +[19] Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. 
Set transformer: A framework for attention-based permutation-invariant neural networks. In International conference on machine learning, pages 3744-3753. PMLR, 2019. + +[20] Andrew Jaegle, Felix Gimeno, Andy Brock, Oriol Vinyals, Andrew Zisserman, and Joao Carreira. Perceiver: General perception with iterative attention. In International conference on machine learning, pages 4651-4664. PMLR, 2021. +[21] Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451, 2020. +[22] Shitao Tang, Jiahui Zhang, Siyu Zhu, and Ping Tan. Quadtree attention for vision transformers. arXiv preprint arXiv:2201.02767, 2022. +[23] Zhi Cheng, Xiu Su, Xueyu Wang, Shan You, and Chang Xu. Sufficient vision transformer. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 190–200, 2022. +[24] Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, and Tim Salimans. Axial attention in multidimensional transformers. arXiv preprint arXiv:1912.12180, 2019. +[25] Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, and Cho-Jui Hsieh. Dynamicvit: Efficient vision transformers with dynamic token sparsification. Advances in neural information processing systems, 34:13937-13949, 2021. +[26] Qitian Wu, Wentao Zhao, Zenan Li, David P Wipf, and Junchi Yan. Nodeformer: A scalable graph structure learning transformer for node classification. Advances in Neural Information Processing Systems, 35:27387-27401, 2022. +[27] Jinsong Chen, Kaiyuan Gao, Gaichao Li, and Kun He. Nagphormer: A tokenized graph transformer for node classification in large graphs. In The Eleventh International Conference on Learning Representations, 2022. +[28] Zaixi Zhang, Qi Liu, Qingyong Hu, and Chee-Kong Lee. Hierarchical graph transformer with adaptive node sampling. Advances in Neural Information Processing Systems, 35:21171-21183, 2022. 
[29] Devin Kreuzer, Dominique Beaini, Will Hamilton, Vincent Létourneau, and Prudencio Tossou. Rethinking graph transformers with spectral attention. Advances in Neural Information Processing Systems, 34:21618-21629, 2021.
[30] Qitian Wu, Chenxiao Yang, Wentao Zhao, Yixuan He, David Wipf, and Junchi Yan. Difformer: Scalable (graph) transformers induced by energy constrained diffusion. arXiv preprint arXiv:2301.09474, 2023.
[31] Nikola B Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew M Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces with applications to pdes. J. Mach. Learn. Res., 24(89):1-97, 2023.
[32] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Andrew Stuart, Kaushik Bhattacharya, and Anima Anandkumar. Multipole graph neural operator for parametric partial differential equations. Advances in Neural Information Processing Systems, 33:6755-6766, 2020.
[33] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
[34] John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, and Bryan Catanzaro. Adaptive fourier neural operators: Efficient token mixers for transformers. arXiv preprint arXiv:2111.13587, 2021.
[35] Shuhao Cao. Choose a transformer: Fourier or galerkin. Advances in neural information processing systems, 34:24924-24940, 2021.
[36] Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, et al. Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators. arXiv preprint arXiv:2202.11214, 2022.
[37] Kaifeng Bi, Lingxi Xie, Hengheng Zhang, Xin Chen, Xiaotao Gu, and Qi Tian.
Accurate medium-range global weather forecasting with 3d neural networks. Nature, 619(7970):533-538, 2023.
[38] Tung Nguyen, Johannes Brandstetter, Ashish Kapoor, Jayesh K Gupta, and Aditya Grover. Climax: A foundation model for weather and climate, January 2023. arXiv preprint arXiv:2301.10343, 2023.
[39] Shaowu Pan, Steven L Brunton, and J Nathan Kutz. Neural implicit flow: a mesh-agnostic dimensionality reduction paradigm of spatio-temporal data, 2022.
[40] Yuan Yin, Matthieu Kirchmeyer, Jean-Yves Franceschi, Alain Rakotomamonjy, and Patrick Gallinari. Continuous pde dynamics forecasting with implicit neural representations. arXiv preprint arXiv:2209.14855, 2022.
[41] Peter Yichen Chen, Jinxu Xiang, Dong Heon Cho, Yue Chang, GA Pershing, Henrique Teles Maia, Maurizio Chiaramonte, Kevin Carlberg, and Eitan Grinspun. Crom: Continuous reduced-order modeling of pdes using implicit neural representations. arXiv preprint arXiv:2206.02607, 2022.
[42] Zongyi Li, Nikola Borislavov Kovachki, Chris Choy, Boyi Li, Jean Kossaifi, Shourya Prakash Otta, Mohammad Amin Nabian, Maximilian Stadler, Christian Hundt, Kamyar Azizzadenesheli, et al. Geometry-informed neural operator for large-scale 3d pdes. arXiv preprint arXiv:2309.00583, 2023.
[43] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012.
[44] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
[45] Krishna Karthik Gadiraju, Bharathkumar Ramachandra, Zexi Chen, and Ranga Raju Vatsavai. Multimodal deep learning based crop classification using multispectral and multitemporal satellite imagery. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3234-3242, 2020.
[46] Yumin Liu, Auroop R Ganguly, and Jennifer Dy. Climate downscaling using ynet: A deep convolutional network with skip connections and fusion. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3145-3153, 2020.
[47] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
[48] Wenchong He, Zhe Jiang, Chengming Zhang, and Arpan Man Sainju. Curvanet: Geometric deep learning based on directional curvature for 3d shape analysis. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2214-2224, 2020.
[49] Neng Shi, Jiayi Xu, Skylar W Wurster, Hanqi Guo, Jonathan Woodring, Luke P Van Roekel, and Han-Wei Shen. Gnn-surrogate: A hierarchical and adaptive graph neural network for parameter space exploration of unstructured-mesh ocean simulations. IEEE Transactions on Visualization and Computer Graphics, 28(6):2301-2313, 2022.
[50] Tingsong Xiao, Lu Zeng, Xiaoshuang Shi, Xiaofeng Zhu, and Guorong Wu. Dual-graph learning convolutional networks for interpretable alzheimer's disease diagnosis. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 406-415. Springer, 2022.
[51] Wenchong He, Arpan Man Sainju, Zhe Jiang, and Da Yan. Deep neural network for 3d surface segmentation based on contour tree hierarchy. In Proceedings of the 2021 SIAM International Conference on Data Mining (SDM), pages 253-261. SIAM, 2021.
[52] Wenchong He, Arpan Man Sainju, Zhe Jiang, Da Yan, and Yang Zhou. Earth imagery segmentation on terrain surface with limited training labels: A semi-supervised approach based on physics-guided graph co-training. ACM Transactions on Intelligent Systems and Technology (TIST), 13(2):1-22, 2022.
[53] Xiaoyang Wang, Yao Ma, Yiqi Wang, Wei Jin, Xin Wang, Jiliang Tang, Caiyan Jia, and Jian Yu.
Traffic flow prediction via spatial temporal graph neural network. In Proceedings of the web conference 2020, pages 1082-1092, 2020. +[54] George Zerveas, Srideepika Jayaraman, Dhaval Patel, Anuradha Bhamidipaty, and Carsten Eickhoff. A transformer-based framework for multivariate time series representation learning. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 2114-2124, 2021. +[55] Zhiruo Wang, Haoyu Dong, Ran Jia, Jia Li, Zhiyi Fu, Shi Han, and Dongmei Zhang. Tuta: Tree-based transformers for generally structured table pre-training. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1780-1790, 2021. +[56] Shitao Tang, Jiahui Zhang, Siyu Zhu, and Ping Tan. Quadtree attention for vision transformers. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. +[57] Yang Li, Si Si, Gang Li, Cho-Jui Hsieh, and Samy Bengio. Learnable fourier features for multi-dimensional spatial positional encoding. Advances in Neural Information Processing Systems, 34:15816-15829, 2021. +[58] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pages 1050-1059. PMLR, 2016. +[59] Florian Wenzel, Jasper Snoek, Dustin Tran, and Rodolphe Jenatton. Hyperparameter ensembles for robustness and uncertainty quantification. Advances in Neural Information Processing Systems, 33:6514-6527, 2020. +[60] Dongxia Wu, Liyao Gao, Matteo Chinazzi, Xinyue Xiong, Alessandro Vespignani, Yi-An Ma, and Rose Yu. Quantifying uncertainty in deep spatiotemporal forecasting. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1841-1851, 2021. +[61] Wenchong He and Zhe Jiang. A survey on uncertainty quantification methods for deep neural networks: An uncertainty source perspective. 
arXiv preprint arXiv:2302.13425, 2023. + +[62] Sudipto Banerjee, Bradley P Carlin, and Alan E Gelfand. Hierarchical modeling and analysis for spatial data. Crc Press, 2014. +[63] Wenchong He, Zhe Jiang, Marcus Kriby, Yiqun Xie, Xiaowei Jia, Da Yan, and Yang Zhou. Quantifying and reducing registration uncertainty of spatial vector labels on earth imagery. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 554-564, 2022. +[64] Sudipto Banerjee, Bradley P Carlin, and Alan E Gelfand. Hierarchical modeling and analysis for spatial data. Chapman and Hall/CRC, 2003. +[65] Volodymyr Kuleshov, Nathan Fenner, and Stefano Ermon. Accurate uncertainties for deep learning using calibrated regression. In International conference on machine learning, pages 2796-2804. PMLR, 2018. +[66] John Bradshaw, Alexander G de G Matthews, and Zoubin Ghahramani. Adversarial examples, uncertainty, and transfer testing robustness in gaussian process hybrid deep networks. arXiv preprint arXiv:1707.02476, 2017. +[67] Ranganath Krishnan and Omesh Tickoo. Improving model calibration with accuracy versus uncertainty optimization. Advances in Neural Information Processing Systems, 33:18237-18248, 2020. +[68] [dataset] NASA Goddard Space Flight Center. Moderate-resolution imaging spectroradiometer (modis) aqua ocean level-2 data, 2021. Accessed Aug. 17, 2021. +[69] [dataset] Florida Fish and Wildlife Conservation Commission. Hab monitoring database, 2021. Accessed Aug. 17, 2021. +[70] Emmanuel De Bezenac, Arthur Pajot, and Patrick Gallinari. Deep learning for physical processes: Incorporating prior scientific knowledge. Journal of Statistical Mechanics: Theory and Experiment, 2019(12):124009, 2019. 
# A Hierarchical Training Paradigm for Antibody Structure-sequence Co-design

Fang Wu*

Tsinghua University

Beijing, China

Stan Z. Li

Westlake University

Hangzhou, China

# Abstract

Therapeutic antibodies are an essential and rapidly expanding drug modality. The binding specificity between antibodies and antigens is decided by complementarity-determining regions (CDRs) at the tips of these Y-shaped proteins.
In this paper, we propose a hierarchical training paradigm (HTP) for antibody sequence-structure co-design. HTP consists of four levels of training stages, each corresponding to a specific protein modality within a particular protein domain. Through carefully crafted tasks in the different stages, HTP seamlessly and effectively integrates geometric graph neural networks (GNNs) with large-scale protein language models to excavate evolutionary information from not only geometric structures but also vast antibody and non-antibody sequence databases, which determines ligand binding pose and strength. Empirical experiments show that HTP sets new state-of-the-art performance on the co-design problem as well as on fixed-backbone design. Our research offers a hopeful path to unleash the potential of deep generative architectures and seeks to illuminate the way forward for the antibody sequence and structure co-design challenge.

# 1 Introduction

Antibodies, known as immunoglobulins (Ig), are large Y-shaped proteins that the immune system uses to identify and neutralize foreign objects such as pathogenic bacteria and viruses [1, 2]. They recognize a unique molecule of the pathogen, called an antigen. As illustrated in Figure 1 (a), each tip of the "Y" of an antibody contains a paratope that is specific for one particular epitope on an antigen, allowing these two structures to bind together with precision. Notably, the binding specificity of antibodies is largely determined by their complementarity-determining regions (CDRs). Consequently, unremitting efforts have been made to automate the creation of CDR subsequences under desired constraints of binding affinity, stability, and synthesizability [3]. However, the search space is vast, with up to $20^{L}$ possible combinations for an $L$-length CDR sequence. This makes it infeasible to solve the protein structures and then examine their corresponding binding properties via experimental approaches.
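The combinatorial blow-up above is easy to make concrete. Below is a minimal illustration; the helper `cdr_search_space` is our own and not part of any antibody-design toolkit:

```python
# Size of the CDR design space: 20 canonical amino acids per position,
# so an L-residue CDR admits 20**L candidate sequences.
# (Illustrative helper only; not part of the HTP method itself.)

def cdr_search_space(length: int) -> int:
    """Number of possible amino-acid sequences for a CDR of `length` residues."""
    return 20 ** length

for L in (5, 10, 15):
    print(L, cdr_search_space(L))
# Even L = 10 already gives 20**10 = 10,240,000,000,000 (~1e13) candidates,
# far beyond what experimental screening can enumerate.
```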
As a remedy, a group of computational antibody design mechanisms has been introduced to accelerate this filtering process.

Some prior studies [4, 5] prefer to generate only 1D sequences, which has been considered suboptimal because it lacks valuable geometric information. Meanwhile, since the target structure for antibodies is rarely given as a prerequisite [6], more attention has been paid to co-designing the sequence and structure. One conventional line of research [7-9] resorts to sampling protein sequences and structures on the complex energy landscape constructed by physical and chemical principles, but it is found to be time-consuming and vulnerable to being trapped in local energy optima. Another line [10-12] relies on deep generative models to simultaneously design antibodies' sequences and structures. These models take advantage of the most advanced geometric deep learning (DL) techniques and can seize higher-order interactions among residues directly from the data [13]. Their divergence mainly lies in their generative manner. For instance, early works [10, 11] adopt an iterative fashion, while successors [14, 15] employ full-shot generation. Most utilize the traditional translation framework, while some [12] leverage the denoising diffusion probabilistic model.

![](images/2d008e530d0c34a796b53e2664df165062969524c2cea43f25a2c434455ab17d.jpg)
(a)

![](images/57a5dab12fcccec6ad95b7debc8e93fac55bd0bd0d37208a448a1b361069401b.jpg)
(b)
Figure 1: (a) Schematic structure of an antibody bonded with an antigen (figure modified from Wikipedia). (b) The workflow overview of our hierarchical training paradigm (HTP).

Despite this fruitful progress, the efficacy of existing co-design methods is predominantly limited by the small number of antibody structures. The Structural Antibody Database (SAbDab) [16] and RAbD [9] are two widely used datasets in the field.
After eliminating structures without antigens and removing duplicates, SAbDab comprises only a few thousand complex structures, whereas RAbD consists of 60 complex structures. These numbers are orders of magnitude lower than the data sizes that have inspired major breakthroughs in DL areas [17, 18]. Consequently, deep generative models cannot benefit from large amounts of 3D antibody-antigen complex structures and must remain limited in size, as overfitting may otherwise occur.

To address this issue, in this paper we propose a hierarchical training paradigm (HTP), a novel unified prototype for exploiting multiple biological data resources, aiming to fully release the potential of geometric graph neural networks (GGNNs) [19, 20] for the sequence and structure co-design problem. Explicitly, HTP consists of four distinct training levels: the single-protein sequence level, the antibody sequence level, the protein-protein complex structure level, and the antibody-antigen complex structure level. These stages are ordered by declining data abundance and increasing task specificity, as depicted in Figure 1 (b). Alongside them, we present various pretraining objectives for the first three levels to mine correlated evolutionary patterns, which benefit the co-design task in the final stage. Specifically, we first pretrain the protein language models (PLMs) on a tremendous number of single-protein sequences to obtain general representations and fine-tune them on antibody sequences to capture more condensed semantics. For 3D geometry, we invented a pocket-painting task to exploit the protein-protein complex structures and simulate the CDR generation process. After that, we combine the pretrained PLMs and the pretrained GGNNs to co-design the expected antibodies. Our study provides a promising road to excavate the power of existing deep generative architectures and hopes to shed light on the future development of antibody sequence and structure co-design.
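The four training levels just described can be sketched as a simple schedule. The sketch below uses hypothetical `StubPLM`/`StubGGNN` stand-ins for the real ESM-2 language model and the geometric GNN; only the stage ordering and the level-II CDR masking follow the paper.

```python
# A minimal sketch of the four-level HTP schedule. `StubPLM` and `StubGGNN`
# are hypothetical stand-ins for the real ESM-2 language model and the GGNN.

class StubPLM:
    def __init__(self):
        self.stages = []

    def fit_mlm(self, corpus, mask="random"):
        # Masked language modeling; in level II every CDR residue
        # is masked simultaneously (mask="cdrs").
        self.stages.append((corpus, mask))

    def embed(self, sequence):
        # Per-residue features; the paper uses 640-dim ESM-2 embeddings.
        return [[0.0] * 640 for _ in sequence]


class StubGGNN:
    def __init__(self):
        self.stages = []

    def fit(self, complexes, features):
        # Geometric training (pocket painting / co-design) on 3D complexes,
        # with PLM embeddings supplied as initial node features.
        self.stages.append(complexes)


def train_htp(plm, ggnn, data):
    plm.fit_mlm(data["single_protein_seqs"])               # level I
    plm.fit_mlm(data["antibody_seqs"], mask="cdrs")        # level II
    ggnn.fit(data["pp_complexes"], features=plm.embed)     # level III
    ggnn.fit(data["ab_ag_complexes"], features=plm.embed)  # level IV
    return plm, ggnn
```

The point of the ordering is that each stage starts from the parameters left by the previous, more data-rich, less task-specific one.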
The contributions of our work can be summarized as follows.

- First, we equip GGNNs with large-scale PLMs to bridge the gap between protein databases of different modalities (i.e., 1D sequence and 3D structure) for the antibody co-design problem.
- Second, we design four distinct levels of training tasks to hierarchically incorporate protein data of different domains (i.e., antibodies and non-antibodies) for the antibody co-design challenge. HTP breaks the traditional co-design routine that separates proteins of different domains, and extends beyond antibody-antigen complex structures to broader databases.
- Comprehensive experiments have been conducted to show that each stage of HTP significantly improves the capacity of the DL model to predict more accurate antibody sequences and restore their corresponding 3D structures. To be explicit, HTP brings a rise of $78.56\%$ in the amino acid recovery rate (AAR) and a decline of $41.97\%$ in structure prediction error for sequence-structure co-design. It also leads to an average increase of $26.92\%$ in AAR for the fixed-backbone design problem.

![](images/29af7346e55be2d914a5cbe4430a598e78e6a3396637d75c6ac8b128aaa9ecca.jpg)

![](images/62bbae97ae343a21e7c090dbb627287e5c3254c5568dc3b9de8d168a940262d2.jpg)
Figure 2: The illustration of the first three stages of our hierarchical training mechanism for antibody sequence-structure co-design. In level I, a Transformer-based language model is trained by masked language modeling on a large number of single protein sequences to extract general-purpose representations. In level II, the Transformer-based PLM is then further fine-tuned on specific antibody sequences from databases like OAS or ABCD. CDRs are all masked and require recovery, with the other framework regions preserved. In level III, GGNNs are asked to predict both the sequence and structure of the pseudo-CDR on protein-protein complex structure databases.
The residue features are based on the PLMs obtained in the previous step.

# 2 Methods

This section is organized as follows. Subsection 2.1 describes the background knowledge and mathematical formulation of the co-design problem. Subsection 2.4 introduces the backbone architecture used to encode the geometric structure of protein-protein complexes as well as antibody-antigen complexes in 3D space. Subsections 2.2 to 2.6 concentrate on explaining the four different levels of our HTP.

# 2.1 Preliminary

**Background.** An antibody is a Y-shaped protein with two symmetric sets of chains, each consisting of a heavy chain and a light chain. Each chain has one variable domain (VH/VL) and some constant domains. The variable domain can be further divided into a framework region (FW) and three CDRs. Notably, CDRs in heavy chains contribute the most to the antigen-binding affinity and are the most challenging to characterize.

**Notations and Task Formulation.** We represent each antibody-antigen complex as a heterogeneous graph $\mathcal{G}_{LR}$. It is made up of two spatially aggregated components, i.e., the antibody and the antigen, denoted as $\mathcal{G}_L = \{\mathcal{V}_L,\mathcal{E}_L\}$ and $\mathcal{G}_R = \{\mathcal{V}_R,\mathcal{E}_R\}$, respectively. $\mathcal{G}_L$ and $\mathcal{G}_R$ use residues as nodes, with $N_{L}$ and $N_{R}$ nodes respectively. The node locations $\mathbf{x}_L\in \mathbb{R}^{N_L\times 3}$ and $\mathbf{x}_R\in \mathbb{R}^{N_R\times 3}$ are defined as the corresponding $\alpha$-carbon coordinates, and are associated with initial $\psi_h$-dimensional roto-translation invariant features $\mathbf{h}_L\in \mathbb{R}^{N_L\times \psi_h}$ and $\mathbf{h}_R\in \mathbb{R}^{N_R\times \psi_h}$ (e.g., residue types, electronegativity).
CDRs are subgraphs of $\mathcal{G}_L$ and can be divided into $\mathcal{G}_{HC} = \{\mathcal{V}_{HC},\mathcal{E}_{HC}\}$ and $\mathcal{G}_{LC} = \{\mathcal{V}_{LC},\mathcal{E}_{LC}\}$, which belong to the heavy chain and the light chain, respectively. We assume that $\mathcal{G}_{HC}$ and $\mathcal{G}_{LC}$ have $N_{HC}$ and $N_{LC}$ residues. Besides, it is worth noting that the distance scales of the internal and external interactions are very different. Based on this fact, we strictly distinguish the interactions within and across the two graphs $\mathcal{G}_L$ and $\mathcal{G}_R$ as $\mathcal{E}_L\cup \mathcal{E}_R$ and $\mathcal{E}_{LR}$, respectively. This implementation avoids the underutilization of cross-graph edges' information due to implicit positional relationships between the antibody and the antigen [21].

In this work, we assume that the antigen structure and the antibody framework are known, aiming to design CDRs more effectively. Formally, our goal is to jointly model the distribution of the CDR in the heavy chain given the structure of the remaining antibody-antigen complex as $p(\mathcal{G}_{HC}|\mathcal{G}_{LR} - \mathcal{G}_{HC})$, or in the light chain as $p(\mathcal{G}_{LC}|\mathcal{G}_{LR} - \mathcal{G}_{LC})$. Since the heavy chain plays a more critical role in determining antigen binding affinity, we take $\mathcal{G}_{HC}$ as the target example in the following content, but design both heavy and light chains in the experiment section 3.1.

# 2.2 Single-protein Sequence Level

The idea that biological function and structures are recorded in the statistics of protein sequences selected through evolution has a long history [22]. Unobserved variables that determine the fitness of a protein, such as structure, function, and stability, leave a mark on the distribution of the natural sequences observed [23].
To uncover that information, a group of PLMs has been developed at the scale of evolution, including the ESM series [22] and ProtTrans [24]. They are capable of capturing information about secondary and tertiary structures and can be generalized across a broad range of downstream applications. Recent studies [25] also demonstrate that equipping GGNNs with pretrained language models yields a stronger capacity. Accordingly, we adopt an ESM-2 with 150M parameters to extract per-residue representations, denoted as $\mathbf{h}_L' \in \mathbb{R}^{N_L \times \psi_{PLM}}$ and $\mathbf{h}_R' \in \mathbb{R}^{N_R \times \psi_{PLM}}$, and use them as input node features. Here $\psi_{PLM} = 640$, and we use $\Phi$ to denote the trainable parameter set of the language model.

Notably, ESM-2 supplies plenty of options with different model sizes, ranging from 8M to 15B parameters, and a persistent improvement has been found as the model scale increases. However, the trajectory of improvement becomes relatively smooth after the scale reaches $10^{8}$. Therefore, for the sake of computational efficiency, we select the 150M version of ESM-2, which performs comparably with the 650M-parameter ESM-1b model [22]. As declared by Wu et al. [26], incompatibility exists between the experimental structure and its original amino acid sequence (i.e., the FASTA sequence). For simplicity, we follow Wu et al. [26] and use the fragmentary sequence directly as a substitute for the integral amino acid sequence, forwarding it to the PLMs.

# 2.3 Antibody Sequence Level

Though PLMs have achieved great progress within protein informatics, their protein representations are general-purpose. Noticeably, the residue distribution of antibodies is significantly different from that of non-antibodies. Over the past decades, billions of antibodies have been sequenced [27], which enables the training of a language model specifically for the antibody domain [28].
Several researchers have recognized this problem and present models such as AntiBERTa [29] and AbLang [30] to decipher the biology of disease and aid the discovery of novel therapeutic antibodies. Nonetheless, those pretrained antibody language models have some intrinsic flaws and may not be immediately suitable for our sequence-structure co-design problem.

First and foremost, they are directly pretrained on antibody sequence datasets and fail to exploit the vast amounts of all protein sequences. Second, the existing pretrained antibody language models are all on a small scale. For example, AntiBERTa consists of 12 layers with 86M parameters. Last but not least, both AntiBERTa and AbLang treat the heavy and light chains individually, and AbLang even trains two different models for them. This prevents them from encoding a comprehensive and integral representation of the whole antibody. More importantly, they are pretrained on antibodies and cannot be perfectly generalized to extract representations of antigens, which is necessary for analyzing the interactions between antibodies and antigens.

To avoid their drawbacks, we choose to fine-tune ESM on available antibody sequence datasets and consider both antibody and antigen sequences. Following AbLang [30], we leverage the Observed Antibody Space database (OAS) [31] and its subsequent update [32]. OAS is a project that collects and annotates immune repertoires for use in large-scale analysis. It contains over one billion sequences from over 80 different studies. These repertoires cover diverse immune states, organisms, and individuals. Additionally, Olsen et al. [30] observed in OAS that approximately $80\%$ of sequences lack more than one residue at the N-terminus, nearly $43\%$ of them are missing the first 15 positions, and about $1\%$ contain at least one ambiguous residue per sequence. Notably, OAS contains both unpaired and paired antibody sequences, and we utilize only the paired ones.
To better align with our co-design target, we implement masked language modeling (MLM) and mask residues in all CDRs (i.e., of both VH and VL) simultaneously to increase the task difficulty.

# 2.4 Geometric Graph Neural Networks

To capture the 3D interactions of residues in different chains, we adopt a variant of the equivariant graph neural network (EGNN) [33] to act on this heterogeneous 3D antibody-antigen graph. The architecture has several key improvements. First, it consists of both intra- and inter-graph message-passing schemes to distinguish interactions within the same graph from interactions between the different counterparts. Second, it only updates the coordinates of residues in the CDR, i.e., the part that is to be designed, while the positions of the other parts remain unchanged. Last, it uses features from the pretrained PLMs as the initial node states rather than randomized ones. All modules are E(3)-equivariant.

The $l$-th layer of our backbone is formally defined as follows:

$$
\mathbf{m}_{j \rightarrow i} = \phi_{e}\left(\mathbf{h}_{i}^{(l)}, \mathbf{h}_{j}^{(l)}, d\left(\mathbf{x}_{i}^{(l)}, \mathbf{x}_{j}^{(l)}\right)\right), \quad \forall e_{ij} \in \mathcal{E}_{L} \cup \mathcal{E}_{R}, \tag{1}
$$

$$
\boldsymbol{\mu}_{j \rightarrow i} = a_{j \rightarrow i}\, \mathbf{h}_{j}^{(l)} \cdot \phi_{d}\left(d\left(\mathbf{x}_{i}^{(l)}, \mathbf{x}_{j}^{(l)}\right)\right), \quad \forall e_{ij} \in \mathcal{E}_{LR}, \tag{2}
$$

$$
\mathbf{h}_{i}^{(l+1)} = \phi_{h}\left(\mathbf{h}_{i}^{(l)}, \sum_{j} \mathbf{m}_{j \rightarrow i}, \sum_{j'} \boldsymbol{\mu}_{j' \rightarrow i}\right), \tag{3}
$$

where $d(\cdot,\cdot)$ is the Euclidean distance function.
$\phi_e$ is the edge operation, and $\phi_h$ denotes the node operation that aggregates the intra-graph messages $\mathbf{m}_i = \sum_j \mathbf{m}_{j \rightarrow i}$, the cross-graph messages $\pmb{\mu}_i = \sum_{j'} \pmb{\mu}_{j' \rightarrow i}$, and the node embedding $\mathbf{h}_i^{(l)}$ to acquire the updated node embedding $\mathbf{h}_i^{(l+1)}$. $\phi_d$ operates on the inter-residue distances. $\phi_e$, $\phi_h$, and $\phi_d$ are all multi-layer perceptrons (MLPs). In addition, $a_{j \rightarrow i}$ is an attention weight with trainable MLPs $\phi^q$ and $\phi^k$, and takes the following form:

$$
a_{j \rightarrow i} = \frac{\exp\left(\left\langle \phi^q\left(\mathbf{h}_i^{(l)}\right), \phi^k\left(\mathbf{h}_j^{(l)}\right)\right\rangle\right)}{\sum_{j'} \exp\left(\left\langle \phi^q\left(\mathbf{h}_i^{(l)}\right), \phi^k\left(\mathbf{h}_{j'}^{(l)}\right)\right\rangle\right)}. \tag{4}
$$

As for the coordinate updates, residues located in the CDRs (i.e., $\mathcal{G}_{HC}$) are the sole constituent that needs a spatial transformation. On the contrary, the position of the remaining piece (i.e., $\mathcal{G}_{LR} - \mathcal{G}_{HC}$) is already known. If we changed the coordinates of $\mathcal{G}_{LR} - \mathcal{G}_{HC}$, its conformation could become disorganized and irrational from physical or biochemical perspectives. Therefore, it is reasonable to keep $\mathcal{G}_{LR} - \mathcal{G}_{HC}$ fixed in each layer and only alter $\mathcal{G}_{HC}$. Mathematically,

$$
\mathbf{x}_i^{(l+1)} = \begin{cases} \mathbf{x}_i^{(l)} + \frac{1}{|\mathcal{N}_i|} \sum_{j \in \mathcal{N}_i} \left(\mathbf{x}_i^{(l)} - \mathbf{x}_j^{(l)}\right) \phi_x(i, j), & \text{if } \mathbf{x}_i \in \mathcal{G}_{HC}, \\ \mathbf{x}_i^{(l)}, & \text{otherwise}, \end{cases} \tag{5}
$$

where $\mathcal{N}_i$ denotes the neighbors of node $i$, and we take the mean aggregation to update the coordinates of each movable node. $\phi_x$ varies according to whether the edge $e_{ij}$ represents intra-graph or cross-graph connectivity. In particular, $\phi_x = \phi_m(\mathbf{m}_{i \rightarrow j})$ if $e_{ij} \in \mathcal{E}_L^{(t)} \cup \mathcal{E}_R^{(t)}$; otherwise, $\phi_x = \phi_\mu\left(\pmb{\mu}_{i \rightarrow j}\right)$ when $e_{ij} \in \mathcal{E}_{LR}^{(t)}$, where $\phi_m$ and $\phi_\mu$ are two different functions that handle the two types of messages. $\phi_x$ is then multiplied with the relative coordinates $\mathbf{x}_i^{(l)} - \mathbf{x}_j^{(l)}$ to keep the direction information. In other words, Equation 5 uses the edge embedding $\mathbf{m}_{i \rightarrow j}$ or $\pmb{\mu}_{i \rightarrow j}$ as a weight to sum all relative coordinates $\mathbf{x}_i^{(l)} - \mathbf{x}_j^{(l)}$ and outputs the renewed coordinates $\mathbf{x}_i^{(l+1)}$.

To summarize, the $l$-th layer of our architecture $(l \in [L])$ takes as input the sets of residue embeddings $\left\{\mathbf{h}_L^{(l)}, \mathbf{h}_R^{(l)}\right\}$ and 3D coordinates $\left\{\mathbf{x}_L^{(l)}, \mathbf{x}_R^{(l)}\right\}$. It then outputs a transformation of $\left\{\mathbf{h}_L^{(l)}, \mathbf{h}_R^{(l)}\right\}$ as well as of the coordinates of residues in the CDRs, that is, $\mathbf{x}_{HC}^{(l)}$. Concisely, $\mathbf{h}_L^{(l+1)}, \mathbf{x}_{HC}^{(l+1)}, \mathbf{h}_R^{(l+1)} = \mathrm{GGNN}^{(l)}\left(\mathbf{h}_L^{(l)}, \mathbf{x}_L^{(l)}, \mathbf{h}_R^{(l)}, \mathbf{x}_R^{(l)}\right)$, while the coordinates of the non-CDR parts remain the same as in the previous layer, i.e., $\mathbf{x}_L^{(l+1)} \cup \mathbf{x}_R^{(l+1)} \backslash \mathbf{x}_{HC}^{(l+1)} = \mathbf{x}_L^{(l)} \cup \mathbf{x}_R^{(l)} \backslash \mathbf{x}_{HC}^{(l)}$. We assign $\Theta$ as the trainable parameter set of the whole GGNN architecture.
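One layer of this scheme can be sketched in NumPy. This is a minimal, illustrative sketch only: single weight matrices stand in for the MLPs $\phi_e$, $\phi_h$, and $\phi_x$, only intra-graph messages are handled (so Eq. (2) and the attention of Eq. (4) are omitted), and all names (`ggnn_layer`, `W_e`, `W_h`, `w_x`) are our own, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                  # hidden size
W_e = rng.normal(size=(2 * D + 1, D))  # toy stand-in for phi_e (Eq. 1)
W_h = rng.normal(size=(2 * D, D))      # toy stand-in for phi_h (Eq. 3)
w_x = rng.normal(size=(D,))            # toy stand-in for phi_x (Eq. 5)

def ggnn_layer(h, x, movable, edges):
    """h: (N, D) node states, x: (N, 3) coordinates,
    movable: (N,) bool mask of CDR nodes, edges: list of (i, j) pairs."""
    N = h.shape[0]
    m = np.zeros((N, D))     # aggregated messages for Eq. (3)
    dx = np.zeros((N, 3))    # weighted relative coordinates for Eq. (5)
    deg = np.zeros(N)
    for i, j in edges:
        d = np.linalg.norm(x[i] - x[j])  # E(3)-invariant distance input
        m_ji = np.tanh(np.concatenate([h[i], h[j], [d]]) @ W_e)  # Eq. (1)
        m[i] += m_ji
        # relative vector keeps the direction information (Eq. (5))
        dx[i] += (x[i] - x[j]) * np.tanh(m_ji @ w_x)
        deg[i] += 1
    h_new = np.tanh(np.concatenate([h, m], axis=1) @ W_h)        # Eq. (3)
    x_new = x.copy()
    upd = movable & (deg > 0)
    x_new[upd] += dx[upd] / deg[upd][:, None]  # mean aggregation; non-CDR fixed
    return h_new, x_new
```

Because messages depend on coordinates only through distances and updates move nodes along relative vectors, translating the input translates the output coordinates while leaving the node states unchanged, matching the E(3)-equivariance claimed above.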
![](images/97e48d6f06fe349f8c9c34d2dc5167a3ef48859ef4afa6df91e13186399185c2.jpg)
Figure 3: The final stage of our hierarchical training mechanism for antibody sequence-structure co-design. In level IV, both the pretrained language model and geometric graph neural networks are employed to implement the co-design task with the parameters of the pretrained language model fixed. At last, the in silico antibody is experimentally validated via high-throughput binding quantification.

# 2.5 Protein-protein Complex Structure Level

Apart from empowering PLMs with tremendous numbers of protein sequences, a wealth of protein-protein complex structures remains untapped and can be harnessed to strengthen the geometric backbone architecture. Despite the emergence of several structure-based pretraining methods in the biological domain [21, 34], it is not trivial to apply them to our antibody design problem because all previous studies focus on single protein structures. In order to enable GGNNs to encode the general docking pattern between multiple proteins, we propose the pocket inpainting task.

Specifically, we use the Database of Interacting Protein Structures (DIPS) [35]. DIPS is a much larger protein complex structure dataset than existing antibody-antigen complex structure datasets and is derived from the Protein Data Bank (PDB) [36]. In DIPS, each complex $\mathcal{G}_{LR}$ has two sub-units $\mathcal{G}_L$ and $\mathcal{G}_R$. We calculate the distance between each amino acid of these two substructures and select residues whose minimum distance to the counterpart substructure is less than the threshold $\epsilon_P = 8\AA$ as the pocket $\mathcal{G}_P$.
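The thresholded selection just described can be sketched as follows; this is an illustrative NumPy snippet, and the function name and return convention are our own, not the paper's.

```python
import numpy as np

def pocket_masks(x_L, x_R, eps=8.0):
    """x_L: (N_L, 3) and x_R: (N_R, 3) residue coordinates of the two
    sub-units; returns boolean masks marking the pocket residues of each."""
    # pairwise distances between the two sub-units, shape (N_L, N_R)
    dists = np.linalg.norm(x_L[:, None, :] - x_R[None, :, :], axis=-1)
    # a residue is in the pocket if its nearest residue in the counterpart
    # sub-unit lies closer than the threshold eps_P (8 Angstroms)
    return dists.min(axis=1) < eps, dists.min(axis=0) < eps
```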
Mathematically, the set of pocket nodes $\mathcal{V}_P$ is written as follows:

$$
\mathcal{V}_P = \left\{v_{L,i} \mid \min_{j=1}^{N_R} d\left(\mathbf{x}_{L,i}, \mathbf{x}_{R,j}\right) < \epsilon_P\right\} \cup \left\{v_{R,i} \mid \min_{j=1}^{N_L} d\left(\mathbf{x}_{R,i}, \mathbf{x}_{L,j}\right) < \epsilon_P\right\}, \tag{6}
$$

and our target is then to retrieve $\mathcal{G}_P$ given the leftover $\mathcal{G}_{LR} - \mathcal{G}_P$. Here, we follow the official split based on PDB sequence clustering at a $30\%$ sequence identity level to ensure little contamination between sets. This results in train/val/test splits of 87,303/31,050/15,268 complex samples.

# 2.6 Antibody-antigen Complex Structures Level

After the preceding levels of preparation, it is time to co-design the antibody. Given an antigen $\mathcal{G}_R$ and a fractional antibody $\mathcal{G}_L - \mathcal{G}_{HC}$, we first employ the well-trained PLM $\Phi$ to obtain per-residue node features $\mathbf{h}_R^{(0)}$ and $\mathbf{h}_L^{(0)}$, where the nodes in the unknown part $\mathcal{G}_{HC}$ are tagged with a mask token. Then both features and coordinates are fed into the well-trained geometric encoder to unravel the CDR sequences and structures concurrently, i.e., $\mathbf{h}_L^{(L)}, \mathbf{x}_{HC}^{(L)}, \mathbf{h}_R^{(L)} = \mathrm{EGNN}_{\Theta}\left(\mathbf{h}_L^{(0)}, \mathbf{x}_L^{(0)}, \mathbf{h}_R^{(0)}, \mathbf{x}_R^{(0)}\right)$. Eventually, we use an MLP $\phi_o$ and a Softmax operator as the classifier to output the probability distribution over residue types as $p_i = \text{Softmax}\left(\phi_o\left(\mathbf{h}_i^{(L)}\right)\right) \in \mathbb{R}^{20}$ for $v_i \in \mathcal{V}_{HC}$.

CDR Coordinates Initialization. How to initialize the positions of residues in the CDRs is of great importance to the co-design problem, and there is no consensus among different approaches. HERN [11] tries two kinds of strategies.
One strategy is to randomly initialize all coordinates by adding a small Gaussian noise around the center of the epitope, i.e., $\mathbf{x}_i^{(0)} = \frac{1}{N_R}\sum_{j\in \mathcal{G}_R}\mathbf{x}_j + \epsilon$ with $\epsilon \sim \mathcal{N}(0,1)$. The other is to directly predict the pairwise distance matrix $\mathbf{D} \in \mathbb{R}^{(N_L + N_R)\times (N_L + N_R)}$ between paratope and epitope atoms

Table 1: Results of sequence and structure co-design on SAbDab.
| Model | CDR-H1 AAR (%)↑ | RMSD ↓ | TM-Score ↑ | CDR-H2 AAR (%)↑ | RMSD ↓ | TM-Score ↑ | CDR-H3 AAR (%)↑ | RMSD ↓ | TM-Score ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RAbD | 20.63 ± 1.6 | 3.56 ± 0.05 | 0.9206 ± 0.007 | 27.80 ± 0.8 | 2.85 ± 0.09 | 0.9253 ± 0.010 | 21.73 ± 0.7 | 4.58 ± 0.13 | 0.8916 ± 0.012 |
| C-RGNN | 40.39 ± 3.2 | 1.98 ± 0.02 | 0.9380 ± 0.003 | 33.36 ± 1.7 | 1.32 ± 0.05 | 0.9507 ± 0.005 | 21.89 ± 1.5 | 3.59 ± 0.16 | 0.9187 ± 0.011 |
| MEAN | 43.80 ± 2.5 | 1.84 ± 0.04 | 0.9411 ± 0.008 | 37.18 ± 1.5 | 1.27 ± 0.04 | 0.9522 ± 0.007 | 22.56 ± 1.7 | 3.44 ± 0.18 | 0.9248 ± 0.009 |
| HERN | 48.42 ± 2.7 | 1.69 ± 0.04 | 0.9472 ± 0.005 | 41.53 ± 2.1 | 1.26 ± 0.03 | 0.9531 ± 0.006 | 25.73 ± 1.4 | 3.02 ± 0.11 | 0.9340 ± 0.004 |
| DiffAb | 52.82 ± 0.9 | 1.51 ± 0.01 | 0.9658 ± 0.001 | 45.95 ± 2.3 | 1.24 ± 0.01 | 0.9588 ± 0.002 | 27.04 ± 2.8 | 2.89 ± 0.15 | 0.9417 ± 0.008 |
| HTP | 81.33 ± 1.6 | 0.49 ± 0.02 | 0.9829 ± 0.006 | 67.77 ± 1.3 | 0.53 ± 0.02 | 0.9860 ± 0.004 | 40.98 ± 1.5 | 2.06 ± 0.03 | 0.9621 ± 0.005 |

| Model | CDR-L1 AAR (%)↑ | RMSD ↓ | TM-Score ↑ | CDR-L2 AAR (%)↑ | RMSD ↓ | TM-Score ↑ | CDR-L3 AAR (%)↑ | RMSD ↓ | TM-Score ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RAbD | 35.11 ± 1.0 | 1.88 ± 0.01 | 0.9458 ± 0.002 | 27.82 ± 0.6 | 1.35 ± 0.02 | 0.9611 ± 0.009 | 23.73 ± 0.5 | 2.14 ± 0.06 | 0.9247 ± 0.010 |
| C-RGNN | 41.44 ± 2.5 | 2.06 ± 0.02 | 0.9326 ± 0.008 | 36.71 ± 4.3 | 1.26 ± 0.01 | 0.9652 ± 0.006 | 33.80 ± 4.8 | 1.95 ± 0.06 | 0.9308 ± 0.008 |
| MEAN | 47.69 ± 2.3 | 1.87 ± 0.02 | 0.9461 ± 0.006 | 39.42 ± 3.5 | 1.24 ± 0.01 | 0.9647 ± 0.008 | 35.18 ± 2.6 | 1.84 ± 0.05 | 0.9370 ± 0.005 |
| HERN | 55.24 ± 2.7 | 1.63 ± 0.02 | 0.9502 ± 0.003 | 46.02 ± 4.1 | 1.18 ± 0.02 | 0.9712 ± 0.004 | 37.28 ± 4.1 | 1.77 ± 0.07 | 0.9389 ± 0.002 |
| DiffAb | 62.71 ± 1.2 | 1.48 ± 0.01 | 0.9637 ± 0.002 | 52.10 ± 3.6 | 1.11 ± 0.06 | 0.9780 ± 0.014 | 43.62 ± 2.6 | 1.65 ± 0.05 | 0.9447 ± 0.004 |
| HTP | 91.13 ± 1.0 | 0.67 ± 0.04 | 0.9869 ± 0.005 | 89.80 ± 0.8 | 1.03 ± 0.05 | 0.9846 ± 0.008 | 73.82 ± 1.1 | 0.78 ± 0.04 | 0.9704 ± 0.003 |
and reconstruct atom coordinates from this distance matrix. The results show that the former performs better. DiffAb [12] initializes them from the standard normal distribution as $\mathbf{x}_i^{(0)}\sim \mathcal{N}(\mathbf{0},\mathbf{I}_3)$. MEAN [14] uses the even distribution between the residue right before the CDRs and the one right after them as the initial positions.

All of the above initialization mechanisms have corresponding drawbacks. Specifically, HERN insufficiently considers the context information, since it only characterizes the shape of the epitope and ignores the incomplete antibody structure $\mathcal{G}_{LR} - \mathcal{G}_{HC}$. The initialization from a normal distribution is only suitable for diffusion-based models. Moreover, our empirical experiments show the least training instability when the even distribution of MEAN is adopted, but the residues right before or right after the CDRs can be missing. Here, we propose another way to initialize the residue positions in the CDRs. First, we follow MEAN and select the residues right before and right after the CDRs. If both residues exist, we take their mean coordinates; otherwise, we use only the existing one. After that, we add a small noise, as in HERN, to separate the nodes and introduce some randomization to prevent overfitting, which brings slight improvements in performance.

Loss Function. Two sorts of losses are used for supervision. First, a cross-entropy (CE) loss is employed for sequence prediction as $\mathcal{L}_{seq} = \frac{1}{|\mathcal{V}_{HC}|}\sum_{v_i\in \mathcal{V}_{HC}}\mathrm{CE}(p_i,c_i)$, where $c_{i}$ is the ground-truth residue type for each node. Apart from that, a common RMSD loss is utilized for structure prediction, and we leverage the Huber loss [37] to avoid numerical instability: $\mathcal{L}_{struct} = \text{Huber} \left( \mathbf{x}_{HC}^{(L)}, \mathbf{x}_{HC} \right)$, where the latter denotes the ground-truth coordinates of the target CDR.
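These two supervision terms can be sketched as follows; this is an illustrative NumPy snippet where the Huber parameter `delta`, the small log-stabilizer, and the function name are our own choices, not specified in the paper.

```python
import numpy as np

def co_design_loss(p, labels, x_pred, x_true, lam=1.0, delta=1.0):
    """p: (|V_HC|, 20) predicted residue-type probabilities,
    labels: (|V_HC|,) ground-truth residue types,
    x_pred/x_true: (|V_HC|, 3) predicted/true CDR coordinates."""
    # L_seq: mean cross-entropy over the CDR residues
    l_seq = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    # L_struct: elementwise Huber loss on coordinates for numerical stability
    r = np.abs(x_pred - x_true)
    huber = np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))
    # weighted sum with balance hyperparameter lam
    return l_seq + lam * huber.mean()
```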
The total loss is a weighted sum of the above two, $\mathcal{L} = \mathcal{L}_{seq} + \lambda \mathcal{L}_{struct}$, where $\lambda > 0$ is the balance hyperparameter.

# 3 Experiments

We assess our HTP via two mainstream challenging tasks: sequence-structure co-design in Section 3.1, and antibody sequence design based on antibody backbones in Section 3.2. Our evaluation is conducted on the standard SAbDab database [16]. More experimental setting details and data descriptions are elucidated in Appendix A.

# 3.1 Sequence-structure Co-design

Task and Metrics. In this task, we remove the original CDR from each antibody-antigen complex in the test set and aim to co-design both the sequence and structure of the removed region. Here we set the length of the generated CDR to be identical to that of the original CDR, although in practice the lengths of CDRs can be variable. For quantitative evaluation, we adopt amino acid recovery (AAR) and the root-mean-square deviation (RMSD) of the predicted 3D structure of the CDRs as metrics. AAR is defined as the overlapping rate between the predicted 1D sequences and the ground truths. We also use the TM-Score [38] to calculate the global similarity between the predicted and ground-truth antibody structures; it ranges from 0 to 1 and evaluates how well the CDRs fit into the frameworks.

Table 2: Results of the fix-backbone design task on SAbDab.
| Model | CDR-H1 AAR (%)↑ | Perplexity ↓ | CDR-H2 AAR (%)↑ | Perplexity ↓ | CDR-H3 AAR (%)↑ | Perplexity ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| RosettaFix | 36.29 ± 0.2 | 14.78 ± 0.01 | 37.70 ± 0.3 | 12.30 ± 0.02 | 28.13 ± 0.1 | 24.05 ± 0.14 |
| Structured-TF | 53.24 ± 3.2 | 8.61 ± 0.08 | 49.87 ± 1.2 | 10.27 ± 0.06 | 30.29 ± 0.4 | 19.65 ± 0.11 |
| DiffAb | 59.91 ± 1.2 | 6.44 ± 0.05 | 59.14 ± 1.8 | 6.92 ± 0.08 | 33.30 ± 0.5 | 16.84 ± 0.12 |
| GVP-GNN | 62.72 ± 1.5 | 4.08 ± 0.03 | 62.48 ± 1.7 | 4.77 ± 0.09 | 34.59 ± 0.6 | 15.79 ± 0.13 |
| HTP | 86.01 ± 1.1 | 1.71 ± 0.04 | 64.46 ± 1.3 | 3.29 ± 0.06 | 43.25 ± 1.2 | 9.01 ± 0.10 |

| Model | CDR-L1 AAR (%)↑ | Perplexity ↓ | CDR-L2 AAR (%)↑ | Perplexity ↓ | CDR-L3 AAR (%)↑ | Perplexity ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| RosettaFix | 35.42 ± 0.3 | 15.82 ± 0.01 | 36.76 ± 0.2 | 14.67 ± 0.01 | 32.17 ± 0.1 | 18.01 ± 0.00 |
| Structured-TF | 56.73 ± 3.1 | 7.63 ± 0.10 | 52.11 ± 1.8 | 8.93 ± 0.08 | 43.48 ± 0.7 | 13.88 ± 0.02 |
| DiffAb | 58.82 ± 1.6 | 6.89 ± 0.06 | 55.40 ± 1.2 | 7.16 ± 0.05 | 47.31 ± 0.5 | 10.60 ± 0.02 |
| GVP-GNN | 60.18 ± 1.4 | 5.48 ± 0.05 | 59.66 ± 1.5 | 6.48 ± 0.06 | 51.34 ± 0.6 | 8.27 ± 0.03 |
| HTP | 93.62 ± 1.5 | 1.09 ± 0.08 | 91.46 ± 1.7 | 1.58 ± 0.10 | 80.71 ± 1.1 | 2.66 ± 0.10 |
Baselines. We select a broad range of existing methods for comparison. Rosetta Antibody Design (RAbD) [9] is an antibody design software based on Rosetta energy functions. RefineGNN [10] is an auto-regressive model that is the first to consider 3D geometry for antibody design and is E(3)-invariant. However, its original version is merely conditioned on the framework region rather than the antigen and the remaining part of the antibody. We therefore follow Jin et al. [11], replace its encoder with a message-passing neural network (MPNN) encoder, and use an attention layer to extract information from the antibody-antigen representation. We name this modified model C-RGNN to distinguish it from the original architecture. HERN [11], short for Hierarchical Equivariant Refinement Network, employs a hierarchical MPNN to predict atomic forces and refine a binding complex in an iterative and equivariant manner. Its autoregressive decoder progressively docks generated antibodies and builds a geometric representation of the binding interface to guide the next residue choice. The Multichannel Equivariant Attention Network (MEAN) [14] adopts a multi-round progressive full-shot scheme instead of an autoregressive one to output both 1D sequences and 3D structures. DiffAb [12] is a diffusion-based mechanism that has recently achieved state-of-the-art performance on antibody design. It consists of three diffusion processes for amino acid types, coordinates, and orientations, respectively.

Results and Analysis. We run each model three times with different random seeds and report the mean and standard deviation of each metric in Table 1, where metrics are labeled with $\uparrow/\downarrow$ if higher/lower is better, respectively. Our model outperforms all baselines by a significant margin in terms of AAR, RMSD, and TM-Score.
Specifically, HTP brings an improvement of $53.97\%$, $47.42\%$, and $51.55\%$ in AAR and $67.54\%$, $57.25\%$, and $29.75\%$ in RMSD over the state-of-the-art DiffAb in H1, H2, and H3, respectively. This implies that our HTP might have a higher success rate in designing new antibodies targeting a given antigen. Moreover, it can be expected that the performance of our HTP will further improve as the number of antibodies and antigens with solved 3D structures keeps increasing. In addition, the light chain generally has a higher AAR and a lower RMSD than the heavy chain. For example, our model achieves nearly $90\%$ AAR in CDR-L1 and CDR-L2. This phenomenon is consistent with the fact that the CDRs in the heavy chain are much longer and more variable.

# 3.2 Fix-backbone Sequence Design

Task and Metrics. This problem is more straightforward than the previous co-design task: the backbone structure of the CDRs is given, and only the CDR sequence needs to be designed. Fixed-backbone design is a common setting in protein design [39, 40], also known as inverse folding. We rely on the AAR metric introduced in Section 3.1 to examine the generated antibodies; the RMSD metric is excluded since the backbone structures are fixed. We also report perplexity (PPL) to quantify the uncertainty of model predictions, which is common for evaluating language models in natural language processing. In short, perplexity is the exponential of the cross-entropy loss.

Table 3: Comparison with pretrained antibody-specific language models.
| PLMs | CDR-H3 AAR (%)↑ | RMSD ↓ | TM-Score ↑ |
| --- | --- | --- | --- |
| — | 25.31 ± 0.7 | 2.95 ± 0.02 | 0.9391 ± 0.005 |
| AbLang | 28.46 ± 1.6 | 2.88 ± 0.17 | 0.9435 ± 0.009 |
| AntiBERTa | 34.52 ± 1.2 | 2.41 ± 0.13 | 0.9473 ± 0.006 |
| HTP | 40.98 ± 1.5 | 2.06 ± 0.03 | 0.9621 ± 0.005 |
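For reference, the AAR and perplexity metrics reported throughout these tables can be computed as follows; this is an illustrative sketch, and the function names are ours.

```python
import numpy as np

def aar(pred_seq, true_seq):
    """Amino-acid recovery: fraction of positions where the predicted
    residue matches the ground truth (equal-length sequences)."""
    p, t = np.array(list(pred_seq)), np.array(list(true_seq))
    return float((p == t).mean())

def perplexity(p, labels):
    """Exponential of the mean cross-entropy; p: (N, 20) predicted
    residue-type probabilities, labels: (N,) ground-truth types."""
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    return float(np.exp(ce))
```

For instance, a model that assigns uniform probability over the 20 residue types has a perplexity of 20, which is the natural upper reference point for the PPL values above.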
Baselines. Apart from the baselines already listed for the sequence-structure co-design problem, we compare HTP with three additional approaches. RosettaFix [41] is Rosetta-based software for computational protein sequence design. The Structured Transformer [42] (Structured-TF) is an auto-regressive generative model that can sample CDR sequences given the backbone structure. GVP-GNN [43] extends standard dense layers to operate on collections of Euclidean vectors and performs geometric and relational reasoning on efficient representations of macromolecules.

Results. Table 2 documents the results. Our model achieves the highest AAR and the lowest PPL among all the baseline algorithms. To be specific, HTP leads to an increase of $37.13\%$, $3.01\%$, and $18.47\%$ in AAR over the best baseline, GVP-GNN, in H1, H2, and H3, respectively, and of $28.76\%$, $24.28\%$, and $49.90\%$ in L1, L2, and L3, respectively. This demonstrates that our model is also effective in capturing the conditional probability of sequences given backbone structures. Furthermore, we observe that CDR-H3 is the hardest segment in comparison to the other regions, as its AAR is usually lower than $50\%$. Meanwhile, the average AAR of the light chain is more than $75\%$, indicating that the light chain maintains less diversity than the heavy chain.

# 3.3 Discussion

Comparison with Antibody-specific Language Models. Recently, increasing efforts have been devoted to training large antibody-specific language models. Studies have demonstrated that these models can capture biologically relevant information and generalize to various antibody-related applications, such as paratope position prediction. Here, we further investigate the efficacy of these antibody-specific language models for sequence-structure co-design.
To be precise, we abandon the pretraining stages at the single-protein and antibody sequence levels and directly leverage external antibody-specific language models. As shown in Table 3, AbLang and AntiBERTa provide biologically relevant information that is beneficial for the co-design challenge, with increases of $12.44\%$ and $36.38\%$ in AAR, respectively. However, their improvements are much smaller than those of the PLMs trained through the first two levels of tasks in HTP.

Up- and Downstream Protein. Recently, Wang et al. [44] proposed a joint sequence-structure recovery method based on RoseTTAFold to scaffold functional sites of proteins. They perform the inpainting and fix-backbone sequence design tasks without the immediate up- and downstream protein visible. However, we find that our HTP can generate adequate diversity without the need to mask neighboring residues, that is, the up- and downstream proteins. This difference stems from the fact that our HTP considers the entire antigen and the available antibody as the context to complete the masked CDR, rather than depending only on the tiny context of the up- and downstream protein.

Data Leakage of Language Models. It is undisputed that the evaluation of design methods that use PLMs should be stringent and ensure that the test data were not previously seen by those pretrained models. However, ESM-2 is trained on all protein sequences in the UniRef database (September 2021 version), while our test set includes sequences released after December 2021, as well as structures with any CDR similar to those released after this date (with sequence identity higher than $50\%$). It is quite possible that the training set of ESM-2 includes antibody sequences similar to the test set, leading to an intolerable data leakage problem.

Here, we conducted additional experiments aimed at assessing the contribution of ESM-2 in directly recovering CDR sequences.
Specifically, we abandon the structural information and re-generate CDRs based entirely on sequential information. Towards this end, we first extract residue-level representations via the (fixed-weight) ESM-2 and feed them to a three-layer perceptron to predict the masked CDR-H3, where no antigen sequences are given. The results show that this algorithm only achieves an AAR of $14.63\%$ in recovering CDR-H3, much lower than all baseline methods such as RAbD $(21.73\%)$ and our HTP $(40.98\%)$. This compellingly demonstrates that ESM-2 is not the primary driver of our favorable numerical outcomes and that the experimental benefits brought by HTP are not due to data leakage. This also accords with ATUE's [45] findings that ESM models perform well in tasks with low antibody specificity but can even bring negative impacts in tasks with high antibody specificity. We have also provided adequate evidence in Appendix 4 to show that the other pretraining resources are free from any possible data leakage concern.

# 4 Conclusion

Antibodies are crucial immune proteins produced during an immune response to identify and neutralize the pathogen. Recently, several machine learning-based algorithms have been proposed to simultaneously design the sequences and structures of antibodies conditioned on the 3D structure of the antigen. This paper introduces a novel approach called the hierarchical training paradigm (HTP) to address the co-design problem. It leverages both geometric neural networks and large-scale protein language models and proposes four levels of training stages to efficiently exploit the evolutionary information encoded in the abundant protein sequences and complex binding structures. Extensive experiments show that each stage of HTP significantly contributes to improving the model's capacity to predict more accurate antibody sequences and recover their corresponding 3D structures.
Instead of focusing on the architecture side, our study hopes to shed light on how to better blend protein data of different modalities (i.e., one- and three-dimensional) and domains (i.e., antibodies and non-antibodies) for tackling the sequence-structure co-design challenge, which has long been ignored by existing works.

# 5 Limitations and Future Work

In spite of the promising progress of our HTP, there is still room for future exploration. First, more abundant databases can be exploited in our framework. For example, AntiBodies Chemically Defined (ABCD) [46] is a large antibody sequence database that could be used to improve the capacity of protein language models at the second level. We do not use it in our work because our request for this database has not yet been approved by its authors. Second, we fix the language models during the last two levels of training (i.e., the levels that require complex structure prediction) for simplicity and use them as node feature initializers. It might be beneficial if both the PLM and the geometric encoder were tuned.

# Acknowledgments and Disclosure of Funding

This work was supported by National Key R&D Program of China (No. 2022ZD0115100), National Natural Science Foundation of China Project (No. U21A20427), and Project (No. WU2022A009) from the Center of Synthetic Biology and Integrated Bioengineering of Westlake University. F.W. and S.L. led the research. F.W. contributed technical ideas, developed the method, and performed the analyses. S.L. provided evaluation and suggestions. The authors thank Professor Siqi Sun and Tao You from Shanghai Artificial Intelligence Lab (SAIL) for their efforts in processing the OAS data, and Professor Buyong Ma from Shanghai Jiaotong University for his comments on improving the quality of the paper.
# References

[1] Gary W Litman, Jonathan P Rast, Michael J Shamblott, Robert N Haire, Michele Hulst, William Roess, Ronda T Litman, Kristin R Hinds-Frey, Anna Zilch, and Chris T Amemiya. Phylogenetic diversification of immunoglobulin genes and the antibody repertoire. Molecular biology and evolution, 10(1):60-72, 1993.
[2] Charles A Janeway, Paul Travers, Mark Walport, and Donald J Capra. Immunobiology. Taylor & Francis Group UK: Garland Science, 2001.
[3] Matthew IJ Raybould, Claire Marks, Konrad Krawczyk, Bruck Taddese, Jaroslaw Nowak, Alan P Lewis, Alexander Bujotzek, Jiye Shi, and Charlotte M Deane. Five computational developability guidelines for therapeutic antibody profiling. Proceedings of the National Academy of Sciences, 116(10):4025-4030, 2019.
[4] Ethan C Alley, Grigory Khimulya, Surojit Biswas, Mohammed AlQuraishi, and George M Church. Unified rational protein engineering with sequence-based deep representation learning. Nature methods, 16(12):1315-1322, 2019.
[5] Jung-Eun Shin, Adam J Riesselman, Aaron W Kollasch, Conor McMahon, Elana Simon, Chris Sander, Aashish Manglik, Andrew C Kruse, and Debora S Marks. Protein design and variant prediction using autoregressive generative models. Nature communications, 12(1):1-11, 2021.
[6] Sharon Fischman and Yanay Ofran. Computational design of antibodies. Current opinion in structural biology, 51:156-162, 2018.
[7] Tong Li, Robert J Pantazes, and Costas D Maranas. Optmaven-a new framework for the de novo design of antibody variable region models targeting specific antigen epitopes. PloS one, 9(8):e105954, 2014.
[8] Gideon D Lapidoth, Dror Baran, Gabriele M Pszolla, Christoffer Norn, Assaf Alon, Michael D Tyka, and Sarel J Fleishman. AbDesign: An algorithm for combinatorial backbone design guided by natural conformations and sequences. Proteins: Structure, Function, and Bioinformatics, 83(8):1385-1406, 2015.
+[9] Jared Adolf-Bryfogle, Oleks Kalyuzhniy, Michael Kubitz, Brian D Weitzner, Xiaozhen Hu, Rumiko Adachi, William R Schief, and Roland L Dunbrack Jr. Rosettaantibodydesign (rabd): A general framework for computational antibody design. PLoS computational biology, 14(4): e1006112, 2018. +[10] Wengong Jin, Jeremy Wohlwend, Regina Barzilay, and Tommi Jaakkola. Iterative refinement graph neural network for antibody sequence-structure co-design. arXiv preprint arXiv:2110.04624, 2021. +[11] Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Antibody-antigen docking and design via hierarchical equivariant refinement. arXiv preprint arXiv:2207.06616, 2022. +[12] Shitong Luo, Yufeng Su, Xingang Peng, Sheng Wang, Jian Peng, and Jianzhu Ma. Antigen-specific antibody design and optimization with diffusion-based generative models. bioRxiv, 2022. +[13] Rahmad Akbar, Habib Bashour, Puneet Rawat, Philippe A Robert, Eva Smorodina, Tudor-Stefan Cotet, Karine Flem-Karlsen, Robert Frank, Brij Bhushan Mehta, Mai Ha Vu, et al. Progress and challenges for the machine learning-based design of fit-for-purpose monoclonal antibodies. In Mabs, volume 14, page 2008790. Taylor & Francis, 2022. +[14] Xiangzhe Kong, Wenbing Huang, and Yang Liu. Conditional antibody design as 3d equivariant graph translation. arXiv preprint arXiv:2208.06073, 2022. +[15] Chence Shi, Chuanrui Wang, Jiarui Lu, Bozitao Zhong, and Jian Tang. Protein sequence and structure co-design with equivariant translation. arXiv preprint arXiv:2210.08761, 2022. +[16] James Dunbar, Konrad Krawczyk, Jinwoo Leem, Terry Baker, Angelika Fuchs, Guy Georges, Jiye Shi, and Charlotte M Deane. Sabdab: the structural antibody database. Nucleic acids research, 42(D1):D1140-D1146, 2014. +[17] Pedro Hermosilla and Timo Ropinski. Contrastive representation learning for 3d protein structures. arXiv preprint arXiv:2205.15675, 2022. +[18] Fang Wu, Huiling Qin, Wenhao Gao, Siyuan Li, Connor W Coley, Stan Z Li, Xianyuan Zhan, and Jinbo Xu. 
Instructbio: A large-scale semi-supervised learning paradigm for biochemical problems. arXiv preprint arXiv:2304.03906, 2023. +[19] Fang Wu, Qiang Zhang, Dragomir Radev, Jiyu Cui, Wen Zhang, Huabin Xing, Ningyu Zhang, and Huajun Chen. 3d-transformer: Molecular representation with transformer in 3d space. 2021. + +[20] Fang Wu, Dragomir Radev, and Stan Z Li. Molformer: Motif-based transformer on 3d heterogeneous molecular graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 5312-5320, 2023. +[21] Fang Wu, Shuting Jin, Yinghui Jiang, Xurui Jin, Bowen Tang, Zhangming Niu, Xiangrong Liu, Qiang Zhang, Xiangxiang Zeng, and Stan Z Li. Pre-training of equivariant graph matching networks with conformation flexibility for drug binding. Advanced Science, 9(33):2203796, 2022. +[22] Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C Lawrence Zitnick, Jerry Ma, et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. Proceedings of the National Academy of Sciences, 118(15):e2016239118, 2021. +[23] Sean R. Eddy. Profile hidden markov models. Bioinformatics (Oxford, England), 14(9):755-763, 1998. +[24] Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rihawi, Yu Wang, Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer, Martin Steinegger, et al. Protrans: towards cracking the language of life's code through self-supervised deep learning and high performance computing. arXiv preprint arXiv:2007.06225, 2020. +[25] Fang Wu, Lirong Wu, Dragomir Radev, Jinbo Xu, and Stan Z Li. Integration of pre-trained protein language models into geometric deep learning networks. Communications Biology, 6 (1):876, 2023. +[26] Fang Wu, Yu Tao, Dragomir Radev, and Jinbo Xu. When geometric deep learning meets pretrained protein language models. arXiv preprint arXiv:2212.03447, 2022. +[27] Neha Chaudhary and Duane R Wesemann. 
Analyzing immunoglobulin repertoires. Frontiers in immunology, 9:462, 2018. +[28] Fang Wu, Nicolas Courty, Shuting Jin, and Stan Z Li. Improving molecular representation learning with metric learning-enhanced optimal transport. *Patterns*, 4(4), 2023. +[29] Jinwoo Leem, Laura S Mitchell, James HR Farmery, Justin Barton, and Jacob D Galson. Deciphering the language of antibodies using self-supervised learning. *Patterns*, page 100513, 2022. +[30] Tobias H Olsen, Iain H Moal, and Charlotte M Deane. Ablang: An antibody language model for completing antibody sequences. bioRxiv, 2022. +[31] Aleksandr Kovaltsuk, Jinwoo Leem, Sebastian Kelm, James Snowden, Charlotte M Deane, and Konrad Krawczyk. Observed antibody space: a resource for data mining next-generation sequencing of antibody repertoires. The Journal of Immunology, 201(8):2502-2509, 2018. +[32] Tobias H Olsen, Fergus Boyles, and Charlotte M Deane. Observed antibody space: A diverse database of cleaned, annotated, and translated unpaired and paired antibody sequences. Protein Science, 31(1):141-146, 2022. +[33] Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E (n) equivariant graph neural networks. In International conference on machine learning, pages 9323-9332. PMLR, 2021. +[34] Zuobai Zhang, Minghao Xu, Arian Jamasb, Vijil Chenthamarakshan, Aurelie Lozano, Payel Das, and Jian Tang. Protein representation learning by geometric structure pretraining. arXiv preprint arXiv:2203.06125, 2022. +[35] Raphael Townshend, Rishi Bedi, Patricia Suriana, and Ron Dror. End-to-end learning on 3d protein structure for interface prediction. Advances in Neural Information Processing Systems, 32, 2019. +[36] Helen M Berman, John Westbrook, Zukang Feng, Gary Gilliland, Talapady N Bhat, Helge Weissig, Ilya N Shindyalov, and Philip E Bourne. The protein data bank. *Nucleic acids research*, 28(1):235–242, 2000. + +[37] Peter J Huber. Robust estimation of a location parameter. In *Breakthroughs in statistics*, pages 492-518. 
Springer, 1992. +[38] Yang Zhang and Jeffrey Skolnick. Scoring function for automated assessment of protein structure template quality. Proteins: Structure, Function, and Bioinformatics, 57(4):702-710, 2004. +[39] Doug Tischer, Sidney Lisanza, Jue Wang, Runze Dong, Ivan Anishchenko, Lukas F Milles, Sergey Ovchinnikov, and David Baker. Design of proteins presenting discontinuous functional sites using deep learning. *Biorxiv*, 2020. +[40] Chloe Hsu, Robert Verkuil, Jason Liu, Zeming Lin, Brian Hie, Tom Sercu, Adam Lerer, and Alexander Rives. Learning inverse folding from millions of predicted structures. bioRxiv, 2022. +[41] Andrew Leaver-Fay, Michael Tyka, Steven M Lewis, Oliver F Lange, James Thompson, Ron Jacak, Kristian W Kaufman, P Douglas Renfrew, Colin A Smith, Will Sheffler, et al. Rosetta3: an object-oriented software suite for the simulation and design of macromolecules. In Methods in enzymology, volume 487, pages 545-574. Elsevier, 2011. +[42] John Ingraham, Vikas Garg, Regina Barzilay, and Tommi Jaakkola. Generative models for graph-based protein design. Advances in neural information processing systems, 32, 2019. +[43] Bowen Jing, Stephan Eismann, Patricia Suriana, Raphael JL Townshend, and Ron Dror. Learning from protein structure with geometric vector perceptrons. arXiv preprint arXiv:2009.01411, 2020. +[44] Jue Wang, Sidney Lisanza, David Juergens, Doug Tischer, Joseph L Watson, Karla M Castro, Robert Ragotte, Amijai Saragovi, Lukas F Milles, Minkyung Baek, et al. Scaffolding protein functional sites using deep learning. Science, 377(6604):387-394, 2022. +[45] Danqing Wang, YE Fei, and Hao Zhou. On pre-training language model for antibody. In The Eleventh International Conference on Learning Representations, 2022. +[46] Wanessa C Lima, Elisabeth Gasteiger, Paolo Marcatili, Paula Duek, Amos Bairoch, and Pierre Cosson. The abcd database: a repository for chemically defined antibodies. Nucleic acids research, 48(D1):D261–D264, 2020. 
+[47] Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, et al. Language models of protein sequences at the scale of evolution enable accurate structure prediction. bioRxiv, 2022. +[48] Brian L Hie, Kevin K Yang, and Peter S Kim. Evolutionary velocity with protein language models predicts evolutionary dynamics of diverse proteins. Cell Systems, 13(4):274-285, 2022. +[49] Lei Li, Shuang Chen, Zhichao Miao, Yang Liu, Xu Liu, Zhi-Xiong Xiao, and Yang Cao. Abrsa: a robust tool for antibody numbering. Protein Science, 28(8):1524-1531, 2019. +[50] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +[51] Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Peter Chen, John Canny, Pieter Abbeel, and Yun Song. Evaluating protein transfer learning with tape. Advances in neural information processing systems, 32, 2019. +[52] Liang He, Shizhuo Zhang, Lijun Wu, Huanhuan Xia, Fusong Ju, He Zhang, Siyuan Liu, Yingce Xia, Jianwei Zhu, Pan Deng, et al. Pre-training co-evolutionary protein representation via a pairwise masked language model. arXiv preprint arXiv:2110.15527, 2021. +[53] Amy X Lu, Haoran Zhang, Marzyeh Ghassemi, and Alan Moses. Self-supervised contrastive learning of protein representations by mutual information maximization. BioRxiv, 2020. +[54] Ali Madani, Bryan McCann, Nikhil Naik, Nitish Shirish Keskar, Namrata Anand, Raphael R Eguchi, Po-Ssu Huang, and Richard Socher. Progen: Language modeling for protein generation. arXiv preprint arXiv:2004.03497, 2020. + +[55] Pascal Sturmfels, Jesse Vig, Ali Madani, and Nazneen Fatema Rajani. Profile prediction: An alignment-based pre-training task for protein sequence models. arXiv preprint arXiv:2012.00195, 2020. +[56] Roshan M Rao, Jason Liu, Robert Verkuil, Joshua Meier, John Canny, Pieter Abbeel, Tom Sercu, and Alexander Rives. Msa transformer. 
In International Conference on Machine Learning, pages 8844–8856. PMLR, 2021. +[57] Surojit Biswas, Grigory Khimulya, Ethan C Alley, Kevin M Esvelt, and George M Church. Low-n protein engineering with data-efficient deep learning. Nature methods, 18(4):389-396, 2021. +[58] Tristan Bepler and Bonnie Berger. Learning the protein language: Evolution, structure, and function. Cell systems, 12(6):654-669, 2021. +[59] Zichen Wang, Steven A Combs, Ryan Brand, Miguel Romero Calvo, Panpan Xu, George Price, Nataliya Golovach, Emmanuel O Salawu, Colby J Wise, Sri Priya Ponnapalli, et al. Lm-gvp: A generalizable deep learning framework for protein property prediction from sequence and structure. bioRxiv, 2021. +[60] Jesse Vig, Ali Madani, Lav R Varshney, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. Bertology meets biology: interpreting attention in protein language models. arXiv preprint arXiv:2006.15222, 2020. +[61] José Jiménez, Stefan Doerr, Gerard Martínez-Rosell, Alexander S Rose, and Gianni De Fabritiis. Deepsite: protein-binding site predictor using 3d-convolutional neural networks. Bioinformatics, 33(19):3036–3042, 2017. +[62] Jiyu Cui, Fang Wu, Wen Zhang, Lifeng Yang, Jianbo Hu, Yin Fang, Peng Ye, Qiang Zhang, Xian Suo, Yiming Mo, et al. Direct prediction of gas adsorption via spatial atom interaction learning. Nature Communications, 14(1):7043, 2023. +[63] Fang Wu and Stan Z Li. Diffmd: A geometric diffusion model for molecular dynamics simulations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 5321-5329, 2023. +[64] Fang Wu, Siyuan Li, Lirong Wu, Stan Z Li, Dragomir Radev, and Qiang Zhang. Discovering the representation bottleneck of graph neural networks from multi-order interactions. arXiv preprint arXiv:2205.07266, 2022. +[65] UniProt Consortium. Uniprot: a hub for protein information. *Nucleic acids research*, 43(D1): D204–D212, 2015. 
+[66] Robert D Finn, Alex Bateman, Jody Clements, Penelope Coggill, Ruth Y Eberhardt, Sean R Eddy, Andreas Heger, Kirstie Hetherington, Liisa Holm, Jaina Mistry, et al. Pfam: the protein families database. Nucleic acids research, 42(D1):D222-D230, 2014. +[67] Can Chen, Jingbo Zhou, Fan Wang, Xue Liu, and Dejing Dou. Structure-aware protein self-supervised learning. arXiv preprint arXiv:2204.04213, 2022. +[68] Yuzhi Guo, Jiaxiang Wu, Hehuan Ma, and Junzhou Huang. Self-supervised pre-training for protein embeddings using tertiary structures. 2022. + +# Appendix + +# A Experimental setting + +Data Descriptions for Different Stages of HTP. (1) For the single-protein sequence level, we employ the state-of-the-art ESM-2 [47], which outperforms all tested single-sequence protein language models across a wide range of structure prediction tasks and enables prediction of the atomic resolution structure. It is trained on 86 billion amino acids across 250 million protein sequences that span evolutionary diversity. Specifically, ESM uses UniRef50, September 2021 version. The training dataset was partitioned by randomly selecting $0.5\%$ ( $\approx$ 250,000) sequences to form the validation set. The training set has sequences removed via the procedure described in Hie et al. [48]. ESM-2 runs the MMseqs search to obtain the query and target databases. All train sequences that match a validation sequence with $50\%$ sequence identity in this search are removed from the train set. The details of the ESM series can be found in https://github.com/facebookresearch/esm. + +(2) For the antibody sequence level, we use the Observed Antibody Space database (OAS) [31] and its subsequent update [32] as the pretraining data. It currently contains more than one billion sequences, from more than 80 different studies that cover various immune states, organisms, and individuals, which can be downloaded from its official website at https://opig.stats.ox.ac.uk/webapps/oas/. 
We upload the processed paired data at https://pan.baidu.com/s/181B8gl9Maf0nnNPIw83ZzA?pwd=1212 (password: 1212) as well as the unpaired data at https://pan.baidu.com/s/161gU8fso6rz6-QGfNoCoHQ?pwd=96uF (password: 96uf).

(3) For the protein-protein complex structure level, we use the Database of Interacting Protein Structures (DIPS) [35]. It is a larger protein complex structure dataset than existing antibody-antigen complex structure datasets and is extracted from the Protein Data Bank [36]. We obtain the database from Atom3D on Zenodo (https://zenodo.org/record/4911102), which is a collection of both novel and existing benchmark datasets spanning several key classes of biomolecules. Following Atom3D, we split protein complexes by sequence identity at $30\%$, resulting in train/validation/test sets with 87,303/31,050/15,268 instances.

(4) For the antibody-antigen complex structure level, we select all available antibody-antigen protein complexes from SAbDab [16] at https://opig.stats.ox.ac.uk/webapps/newsabdab/sabdab/, leading to a dataset containing 9,823 structures. CDRs are identified using the antibody numbering program AbRSA [49]. Following the setting in [12], the chosen data points are divided into training and test data based on their release date and CDR sequence identity. To be explicit, the test split contains protein structures released after December 24, 2021, as well as structures with any CDR similar to those released after this date with a sequence identity higher than $50\%$. The antibodies in the test set are further clustered at a CDR sequence identity of $50\%$ to eliminate duplicates, resulting in 20 antibody-antigen structures. The training and validation splits include only complexes not involved in the curation of the test split. After that, we randomly divided the remaining complexes into training and validation sets with a $90\%$/$10\%$ split.

Dataset Sequence Similarity.
In our splitting strategy, the test set of SAbDab includes sequences released after December 2021, as well as structures with any CDR similar to those released after this date (with a sequence identity higher than $50\%$). It is quite possible that the pretraining datasets include antibody sequences similar to the test set. To address this issue, we have conducted a comprehensive analysis of the sequence similarity between the various pretraining data sources and the test set in SAbDab. This analysis covers general protein sequences from protein-protein complexes in DIPS and antibody sequences from OAS. We plot the sequence similarity distributions in Figure 4 and present the statistics of this analysis in Table 4.

Table 4: The statistics of the sequence similarity between our split SAbDab test set and different pretraining datasets.
| Dataset | Mean | Std. | Min. | 25% | 50% | 75% | Max. |
|---------|------|------|------|-----|-----|-----|------|
| DIPS | 0.188 | 0.036 | 0.000 | 0.183 | 0.198 | 0.208 | 0.429 |
| OAS | 0.246 | 0.017 | 0.200 | 0.235 | 0.243 | 0.254 | 0.401 |
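The kind of analysis summarized in Table 4 amounts to a best-match identity search of each test sequence against a pretraining pool. The paper does not specify its alignment tool, so the sketch below uses `difflib`'s ratio as a stand-in identity measure over toy sequences; the `identity` helper and the example data are assumptions, not the paper's pipeline.

```python
# Sketch of the pairwise sequence-similarity analysis behind Table 4.
# difflib's ratio() is a stand-in identity measure; sequences are toy data.
from difflib import SequenceMatcher
from statistics import mean

def identity(a: str, b: str) -> float:
    """Rough sequence identity between two amino-acid strings (0..1)."""
    return SequenceMatcher(None, a, b).ratio()

test_set = ["QVQLVQSGAEVKKPG", "EVQLVESGGGLVQPG"]   # e.g., SAbDab test sequences
pretrain = ["DIQMTQSPSSLSASV", "QVQLQQSGAELARPG"]   # e.g., OAS/DIPS sequences

# For each test sequence, record its best match against the pretraining pool.
best = [max(identity(t, p) for p in pretrain) for t in test_set]
print(f"mean={mean(best):.3f}  max={max(best):.3f}")
```

In the paper's setting, a leakage-free split corresponds to every such best-match value staying below 0.5, as the table reports.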
![](images/94f1403371a09ae5c8db64e776ddb2fe2c5a7b3e6811b2f4f99e5f0a6b8d9df3.jpg)
(a)

![](images/07fe8f874e9c1d77a1d5515a804eebad6bc611a0c4640bfcaf782a7d77556ffe.jpg)
(b)
Figure 4: The distributional plot of sequence similarity between different pre-training resources and the test set in SAbDab.

These statistics show that the highest sequence similarity values across these datasets stay consistently below 0.5. This supports our assertion that neither DIPS nor OAS contains sequences exhibiting significant similarity to the SAbDab test set, and we are therefore confident that our hierarchical training paradigm is free from data leakage concerns.

Implementation Details. HTP is implemented with the PyTorch and PyTorch Geometric packages. For all four training stages, we use an Adam optimizer [50] with a weight decay of 1e-5. All experiments are run on multiple A100 GPUs, each with 80 GB of memory.

(1) For ESM-2 in the single-protein sequence level training, we adopt a middle-size version with 150M parameters, 30 layers, and a hidden dimension of 640. Besides, we append a three-layer perceptron to ESM-2 to predict the residue type for MLM.
(2) For the antibody sequence level training, we use a batch size of 2 to avoid out-of-memory errors and 4 workers to load the data. The number of epochs is 100 and the starting learning rate is 1e-5. Apart from that, we utilize a ReduceLROnPlateau scheduler with a factor of 0.6, patience of 5 epochs, and a minimum learning rate of 1e-7.
(3) For the protein-protein complex structure level training, we use a batch size of 32, 1000 epochs, and 4 workers to speed up data loading.
The starting learning rate is 1e-4, and a ReduceLROnPlateau scheduler is utilized to automatically adjust the learning rate with a factor of 0.6 and patience of 3 epochs. We adopt a distance threshold of $8.0\mathring{\mathrm{A}}$ to determine the connections between graph nodes (i.e., the alpha carbon of each residue). As for the loss weight balance, we set $\lambda = 1$.
(4) For the antibody-antigen complex structure level training, we also adopt the distance threshold of $8.0\mathring{\mathrm{A}}$ to build the graph connections. For the random initialization of the CDR coordinates, we use a noise of $\epsilon = 0.1$. As for the other important hyperparameters, we use a grid search to find the optimal combination. Notably, the geometric neural networks used in the third and fourth levels are matched to each other: if we alter the setting of the GGNNs in the antibody-antigen complex structure level training, we need to retrain them at the protein-protein complex structure level first. The entire hyperparameter search space is depicted in Table 5.

Table 5: Hyperparameters setup for HTP.

| Hyperparameters Search Space | Symbol | Value |
|---|---|---|
| **Training Setup** | | |
| Epochs | - | [100, 500, 1000] |
| Batch size | - | [32, 64, 128] |
| Learning rate | - | [1e-4, 5e-5, 1e-6, 1e-7] |
| Warmup | - | [Yes, No] |
| Warmup epochs | - | [10, 20] |
| Loss balance weight for coordinates and residue types | λ | [0.1, 0.3, 0.5, 0.7] |
| **GNN Architecture** | | |
| Dropout rate | - | [0.1, 0.2] |
| Number of GNN layers | L | [2, 4, 6] |
| Tanh activation function | - | [Yes, No] |
| Coordinate normalization | - | [Yes, No] |
| Hidden dimension of node representations | - | [320, 640] |
| Hidden dimension of edge representations | - | [16, 32, 64] |
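The grid search over the Table 5 space can be sketched with `itertools.product`; only a subset of the axes is shown below, and `train_and_eval` is a hypothetical placeholder for one full HTP training run, not an actual function from the codebase.

```python
# Enumerating a subset of the Table 5 search space (epochs, batch size,
# learning rate, loss weight, GNN depth).
from itertools import product

search_space = {
    "epochs":      [100, 500, 1000],
    "batch_size":  [32, 64, 128],
    "lr":          [1e-4, 5e-5, 1e-6, 1e-7],
    "loss_lambda": [0.1, 0.3, 0.5, 0.7],
    "gnn_layers":  [2, 4, 6],
}

configs = [dict(zip(search_space, vals)) for vals in product(*search_space.values())]
print(len(configs))  # 3 * 3 * 4 * 4 * 3 = 432 configurations

# for cfg in configs:
#     score = train_and_eval(**cfg)   # hypothetical helper: one training run
```

Each configuration would then be trained with the Adam optimizer (weight decay 1e-5) and ReduceLROnPlateau scheduling described above.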
Code Availability. All relevant Python code to reproduce the results in our paper is stored in the GitHub repository at https://github.com/smiles724/HTP.

# B Additional Results

# B.1 Ablation Study

We investigate the effectiveness and necessity of each component of our HTP. As shown in Table 6, removing the protein-protein complex structure level induces performance degradation, where RMSD increases from 2.06 to 2.49. Moreover, we implement a variant of HTP that replaces the features obtained by pretrained PLMs with learnable embedding features, whose performance is worse than HTP. Concretely, AAR declines from 40.98 to 25.31, and RMSD increases from 2.06 to 2.65. In summary, our HTP brings significant relative improvements of $78.56\%$ in AAR, $41.97\%$ in RMSD, and $2.94\%$ in TM-Score. This strongly supports the superiority of our approach over existing naive co-design algorithms that are trained only on antibody-specific structure data.

Table 6: Effects of each module, where SPS stands for the single-protein sequence level, PPCS denotes the protein-protein complex structure level, and AS represents the antibody sequence level. The last row computes the relative improvements of HTP over the primitive baseline without any protein data augmentation.
| # | SPS | AS | PPCS | AAR (%) ↑ | RMSD ↓ | TM-Score |
|---|-----|----|------|-----------|--------|----------|
| 1 | ✗ | ✗ | ✗ | 22.95 ± 0.5 | 3.55 ± 0.01 | 0.9146 ± 0.003 |
| 2 | ✓ | ✗ | ✗ | 33.87 ± 0.8 | 2.77 ± 0.04 | 0.9450 ± 0.006 |
| 3 | ✓ | ✓ | ✗ | 38.42 ± 1.6 | 2.49 ± 0.03 | 0.9538 ± 0.004 |
| 4 | ✗ | ✗ | ✓ | 25.31 ± 0.7 | 2.95 ± 0.02 | 0.9391 ± 0.005 |
| 5 | ✓ | ✓ | ✓ | 40.98 ± 1.5 | 2.06 ± 0.03 | 0.9621 ± 0.005 |
| Imp. | - | - | - | 78.56% | 41.97% | 2.94% |
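The relative improvements reported in the last row of Table 6 follow from rows 1 (no pretraining) and 5 (full HTP); a quick check for the AAR and RMSD columns:

```python
# Reproducing the "Imp." row of Table 6 from rows 1 and 5.
def rel_improvement(base: float, full: float) -> float:
    """Relative change of `full` with respect to `base`, in percent."""
    return abs(full - base) / base * 100.0

aar = rel_improvement(22.95, 40.98)    # AAR: higher is better
rmsd = rel_improvement(3.55, 2.06)     # RMSD: lower is better
print(f"AAR: +{aar:.2f}%  RMSD: -{rmsd:.2f}%")  # → AAR: +78.56%  RMSD: -41.97%
```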
# C Related Work

Antibody Design. The majority of traditional computational approaches for antibody design are based on sampling algorithms over handcrafted and statistical energy functions to iteratively modify protein sequences and structures [8, 9]. These physics-based algorithms are computationally expensive and prone to getting stuck in local energy minima, which motivated the adoption of deep learning in this sub-field. Early work [4, 5] uses pure PLMs to generate protein sequences but disregards the available antigen structures.

To circumvent this, Jin et al. [10] introduce RefineGNN, the first co-design architecture, which aims to neutralize SARS-CoV-2. Later, HERN [11] is proposed as a more general version for paratope docking and design, opening the door to producing antibodies given arbitrary antigen structures. Subsequent efforts either modify the generative style or utilize more advanced deep learning architectures such as denoising diffusion probabilistic models (DDPMs). For example, DiffAb [12] achieves atomic-resolution antibody design with SO(3)-equivariance, while MEAN [14] replaces the autoregressive scheme with a full-shot one to prevent low efficiency and accumulated errors during inference.

Protein Sequence Modeling. Sequence-based protein representation learning is mainly inspired by the field of natural language processing. A large body of early work is focused on modeling individual protein families [51], solving problems such as functional nanobody design [5]. The success of this method then motivated the trend to model large-scale databases of protein sequences by means of unsupervised learning. This line of study targets capturing the biochemical and co-evolutionary knowledge that underlies a large-scale protein sequence corpus via self-supervised pretraining.
Along this line, a number of pretraining objectives have been explored, such as next amino acid prediction [4, 24], masked language modeling (MLM) [51, 22], pairwise MLM [52], contrastive predictive coding [53], conditional generation [54], and position-specific scoring matrix prediction [55]. In addition, another line of work [56, 57] is based on multiple sequence alignment (MSA), leveraging sequences within a protein family to capture the conserved and variable regions of homologous sequences. Notably, some schemes for protein sequence modeling also seek to incorporate structural information in either the pretraining stage [58] or the finetuning stage [59].

Improvements in model scale and architecture are also crucial to the recent achievements of PLMs. Specifically, Rao et al. [51] evaluate various PLMs on a panel of benchmarks and discover that multi-head attention outperforms the Potts model in contact prediction, even when using a single sequence for inference. Concurrently, Vig et al. [60] observe that specific attention heads of pretrained Transformers correlate directly with protein contacts. Others [24] investigate a variety of Transformer variants and demonstrate that large Transformers can obtain state-of-the-art features in various tasks. Apart from that, the latest ESM-2 [47] trains the largest PLM with 15B parameters and shows that, as models are scaled, they learn information enabling protein structure prediction at the resolution of individual atoms.
They excel at capturing complex interactions between sets of amino acids [64] and respect key Euclidean symmetries, e.g., E(3)- or SE(3)-equivariance.

However, compared to protein sequences in databases like UniProt [65] or Pfam [66], the known structures in the PDB are scarce and hard to obtain. Therefore, it becomes urgent to develop structure-based mechanisms to efficiently learn protein representations with much less pretraining data. For instance, Hermosilla and Ropinski [17] use contrastive learning over molecular substructures to help models understand protein structure similarity and functionality. Moreover, Chen et al. [67] propose a self-supervised framework that predicts angles and inter-residue distances. Additionally, Guo et al. [68] present a coordinate denoising score matching method. Wu et al. [21] put forward a prompt-based denoising conformation generative pretraining method based on the trajectories of molecular dynamics simulations. A recent attempt [34] combines contrastive learning and self-prediction with more intriguing augmentation functions. Despite this progress, all of these methods deal with single-protein structures. No preceding studies have considered structure-based pretraining in the setting of multiple proteins. That is, how to pretrain on protein-protein complexes, or more specifically antibody-antigen complexes, remains unexplored.
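Structure encoders like those discussed above typically operate on residue graphs built with a distance cutoff (Appendix A uses 8.0 Å between alpha carbons). A minimal NumPy sketch of that graph construction, with random toy coordinates standing in for real alpha-carbon positions:

```python
# Building a residue graph by connecting alpha carbons closer than a
# distance cutoff (8.0 A in HTP). Coordinates here are random toy data.
import numpy as np

def radius_graph(coords: np.ndarray, cutoff: float = 8.0) -> np.ndarray:
    """Return a (2, E) edge index of residue pairs within `cutoff` of each other."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    src, dst = np.nonzero((dists < cutoff) & ~np.eye(len(coords), dtype=bool))
    return np.stack([src, dst])

rng = np.random.default_rng(0)
ca = rng.uniform(0, 20, size=(10, 3))   # 10 alpha-carbon positions
edges = radius_graph(ca)
print(edges.shape)
```

The resulting edge index is symmetric (every edge appears in both directions), matching the undirected-graph convention of libraries like PyTorch Geometric.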
\ No newline at end of file diff --git a/ahierarchicaltrainingparadigmforantibodystructuresequencecodesign/images.zip b/ahierarchicaltrainingparadigmforantibodystructuresequencecodesign/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6d82ad8a6018a75da4dbe92ecf54b265f7832345 --- /dev/null +++ b/ahierarchicaltrainingparadigmforantibodystructuresequencecodesign/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd1ff1a64f2b20650d80a910695bd98f57bf8daab6b1d61b769ac1c6a8dea0b9 +size 593067 diff --git a/ahierarchicaltrainingparadigmforantibodystructuresequencecodesign/layout.json b/ahierarchicaltrainingparadigmforantibodystructuresequencecodesign/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..58d0861cdcc787a99b09757cc3a9eba4de3053ab --- /dev/null +++ b/ahierarchicaltrainingparadigmforantibodystructuresequencecodesign/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3ee9c6e16ea1275abdf710c1bd67acb96b40ea251dc0df67f1ccf61bfef7dcd +size 498368 diff --git a/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/7ad796be-1911-4bd6-821c-4401c1c1ed7a_content_list.json b/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/7ad796be-1911-4bd6-821c-4401c1c1ed7a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9a3f566205f71f5f39966bae96909baf1e9b0306 --- /dev/null +++ b/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/7ad796be-1911-4bd6-821c-4401c1c1ed7a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c7579b64e5d365d53d066f55354d37fcdbe6bd338c0883d4cdd5511e2fc7e5e +size 79660 diff --git a/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/7ad796be-1911-4bd6-821c-4401c1c1ed7a_model.json b/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/7ad796be-1911-4bd6-821c-4401c1c1ed7a_model.json 
new file mode 100644 index 0000000000000000000000000000000000000000..a7026d6a71a99674490c59b324b389848235ee32 --- /dev/null +++ b/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/7ad796be-1911-4bd6-821c-4401c1c1ed7a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:597dbbcff32377c1886ce6f73879e5bb2d4fd5f2f3731cbd3d0b46480e39fe35 +size 98743 diff --git a/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/7ad796be-1911-4bd6-821c-4401c1c1ed7a_origin.pdf b/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/7ad796be-1911-4bd6-821c-4401c1c1ed7a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8d240c044417ced521fb6e8416b3947976f9f500 --- /dev/null +++ b/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/7ad796be-1911-4bd6-821c-4401c1c1ed7a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45691d8052bb42be2c0b7a9c1fd98e6d528a101dd3752b7c1fc11cf7da7ce930 +size 9785899 diff --git a/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/full.md b/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0fa6dfe88c1fb3158a3e35e0e1223e1860d8c114 --- /dev/null +++ b/ahighresolutiondatasetforinstancedetectionwithmultiviewobjectcapture/full.md @@ -0,0 +1,287 @@ +# A High-Resolution Dataset for Instance Detection with Multi-View Instance Capture + +Qianqian Shen $^{1,*}$ , Yunhan Zhao $^{2,*}$ , Nahyun Kwon $^{3}$ , Jeeun Kim $^{3}$ , Yanan Li $^{1,\dagger}$ , Shu Kong $^{3,4,5,\dagger}$ + +$^{1}$ Zhejiang Lab $^{2}$ UC-Irvine $^{3}$ Texas A&M University + +$^{4}$ Institute of Collaborative Innovation $^{5}$ University of Macau + +shenqq@zhejianglab.com, yunhaz5@ics.uci.edu, {nahyunkwon, jeeun.kim, shu} $@$ tamu.edu, + +liyn@zhejianglab.com, skong@um.edu.mo + +Dataset and open-source code + +# Abstract + +Instance 
detection (InsDet) is a long-standing problem in robotics and computer vision, aiming to detect object instances (predefined by some visual examples) in a cluttered scene. Despite its practical significance, its advancement is overshadowed by Object Detection, which aims to detect objects belonging to some predefined classes. One major reason is that current InsDet datasets are too small in scale by today's standards. For example, the popular InsDet dataset GMU (published in 2016) has only 23 instances, far fewer than the 80 classes of COCO, a well-known object detection dataset published in 2014. We are motivated to introduce a new InsDet dataset and protocol. First, we define a realistic setup for InsDet: training data consists of multi-view instance captures, along with diverse scene images that allow synthesizing training images, with free box annotations, by pasting instance images on them. Second, we release a real-world database, which contains multi-view captures of 100 object instances and high-resolution $(6\mathrm{k}\times 8\mathrm{k})$ testing images. Third, we extensively study baseline methods for InsDet on our dataset, analyze their performance, and suggest future work. Somewhat surprisingly, combining an off-the-shelf class-agnostic segmentation model (Segment Anything Model, SAM) with the self-supervised feature representation DINOv2 performs best, achieving $>10$ AP better than end-to-end trained InsDet models that repurpose object detectors (e.g., FasterRCNN and RetinaNet).

# 1 Introduction

Instance detection (InsDet) requires detecting specific object instances (defined by some visual examples) from a scene image [12]. It is practically important in robotics: e.g., elderly-assistant robots need to fetch specific items (my-cup vs. your-cup) from a cluttered kitchen [40], and micro-fulfillment robots for retail need to pick items from mixed boxes or shelves [4].

Motivation.
InsDet receives much less attention than the related problem of Object Detection (ObjDet), which aims to detect all objects belonging to some predefined classes [28, 37, 29, 48]. Fig. 1 compares the two problems. One major reason is that there are no large-enough InsDet datasets by today's standards. For example, the popular InsDet dataset GMU (published in 2016) [15] has only 23 object instances, while the popular ObjDet dataset COCO (published in 2014) has 80 object classes [28]. Moreover, there are no unified protocols in the InsDet literature; current work mixes multiple datasets to simulate training images and testing scenarios [12]. Note that the training protocol of InsDet does not follow that of ObjDet, which has training images annotated with bounding boxes. Differently, for $\mathrm{InsDet}^2$, its setup should have profile images of instances (cf. right in Fig. 1) and optionally diverse background images not containing such instances [12]. We release a new dataset and present a unified protocol to foster InsDet research.

![](images/55bbc95c1e8afea291ec50cb3b0e212ad97b1010a56ac1a5bc5614b154eb09d8.jpg)

![](images/a615fee1b498088c2513c6f7be6bcfa96ce9d84f5be1c44b1861707e1e1da8d7.jpg)
Figure 1: Object detection (ObjDet) vs. instance detection (InsDet). ObjDet aims to detect all objects belonging to some predefined classes, whereas InsDet requires detecting specific object instances defined by some visual examples. Loosely speaking, InsDet treats a single object instance as a class compared to ObjDet. Please refer to Fig. 2-right for the challenge of InsDet, which is the focus of our work.

Overview of our dataset is presented in Fig. 2.
In our dataset, both the profile images of object instances (3072x3072) and the testing images (6144x8192) are captured at high resolution by a Leica camera (commonly used in today's cellphones). This inexpensive camera is deployable in current or future robot devices. Hence, our dataset simulates real-world scenarios, e.g., robotic navigation in indoor scenes. Even with high-resolution images, objects in testing images appear small, taking up only a tiny region of the high-res images. This demonstrates a clear challenge of InsDet in our dataset. Therefore, our dataset allows studying InsDet methods towards real-time operation on high-resolution imagery (as future work).

Preview of technical insights. On our dataset, we revisit existing InsDet methods [26, 12, 16]. Perhaps the only InsDet framework is cut-paste-learn [12], which cuts instances from their profile images, pastes them on random background images (so being able to derive "free" bounding-box annotations), and trains InsDet detectors on such data by following that of ObjDet (e.g., FasterRCNN [37]). We study this framework, train different detectors, and confirm that the state-of-the-art transformer-based detector DINO [48] performs the best, achieving 27.99 AP, significantly better than the CNN-based detector FasterRCNN (19.52 AP). Further, we present a non-learned method that runs an off-the-shelf proposal detector (SAM [23] in our work) to generate object proposals and uses self-supervised features ($\mathrm{DINO}_f$ [8]$^3$ and $\mathrm{DINOv2}_f$ [33]) to find proposals matched to instances' profile images. Surprisingly, this non-learned method resoundingly outperforms end-to-end learning methods, i.e., SAM+$\mathrm{DINOv2}_f$ achieves 41.61 AP, much better than DINO (27.99 AP) [48].

Contributions. We make three major contributions.

1. We formulate the InsDet problem with a unified protocol and release a challenging dataset consisting of both high-resolution profile images and high-resolution testing images.
2.
We conduct extensive experiments on our dataset and benchmark representative methods following the cut-paste-learn framework [12], showing that stronger detectors perform better.
3. We present a non-learned method that uses an off-the-shelf proposal detector (i.e., SAM [23]) to produce proposals and self-supervised features (e.g., $\mathrm{DINOv2}_f$ [33]) to find instances that are well matched to their profile images. This simple method significantly outperforms the end-to-end InsDet models.

# 2 Related Work

Instance detection (InsDet) is a long-standing problem in computer vision and robotics [49, 12, 32, 3, 15, 21, 4], referring to detecting specific object instances in a scene image. Traditional InsDet methods use keypoint matching [34] or template matching [19]; more recent ones train deep neural networks to approach InsDet [32]. Some others focus on obtaining more training samples by rendering realistic instance examples [22, 21], data augmentation [12], and synthesizing training images by cutting instances as foregrounds and pasting them onto background images [26, 12, 16]. Speaking of InsDet datasets, [15] collects scene images from 9 kitchen scenes with RGB-D cameras and defines 23 instances of interest to annotate with 2D boxes on scene images; [21] creates 3D models of 29 instances from 6 indoor scenes and uses them to synthesize training and testing data; [4] creates 3D mesh models of 100 grocery store objects, renders 80 views of images for each instance, and uses them to synthesize training data.

![](images/54c61f396bf6bbf7fe0d6a64b2cb54e1ce2900e131dc0709d61963b1dba752d4.jpg)

![](images/f190f22c227a76a0a5e36f73fe75d1936e80e4d51a4a21d6313e7d6964da3e27.jpg)

![](images/6a277ec0461bd43524a1b32ccd74c9479aae260a9abad1bcb10b89050c984127.jpg)
Figure 2: Overview of our instance detection dataset. Left: It contains 100 distinct object instances. For each of them, we capture 24 profile photos from multiple views. We paste QR code images beneath objects to allow relative camera estimation (e.g., by COLMAP [41]), just like other existing datasets [20, 5]. Middle: We take photos in random scenes (which do not contain any of the 100 instances) as background images. The background images can be optionally used to synthesize training data, e.g., pasting the foreground instances on them towards box-annotated training images [26, 12, 16] as used in the object detection literature [28]. Right: high-resolution $(6\mathrm{k}\times 8\mathrm{k})$ testing images of cluttered scenes contain diverse instances, including some of the 100 predefined instances and other, uninterested ones. The goal of InsDet is to detect the predefined instances in these testing images. From the zoom-in regions, we see the scene clutter makes InsDet a rather challenging problem.

As for the benchmarking protocol of InsDet, [12] synthesizes training data from BigBird [43] and UW Scenes [25] and tests on the GMU dataset [15]; [21] trains on in-house data and tests on the LM-O [5] and Rutgers APC [38] datasets. Moreover, some works require hardware-demanding setups [4], some synthesize both training and testing data [21, 26], while others mix existing datasets for benchmarking [12]. Given that the modern literature on InsDet lacks a unified benchmarking protocol (till now!), we introduce a more realistic unified protocol along with our InsDet dataset, allowing fair benchmarking of methods and fostering InsDet research.

Object detection (ObjDet) is a fundamental computer vision problem [13, 28, 37], requiring detecting all objects belonging to some predefined categories. The prevalent ObjDet detectors adopt convolutional neural networks (CNNs) as a backbone and a detector head for proposal detection and classification, typically using bounding box regression and a softmax classifier.
Approaches can be grouped into two categories: one-stage detectors [36, 30, 35, 46] and two-stage detectors [17, 6]. One-stage detectors predict candidate detections as bounding boxes with labels at regular spatial positions over feature maps; two-stage detectors first produce detection proposals, then perform classification and bounding-box regression for each proposal. Recently, transformer-based detectors have surpassed CNN-based detectors [7, 51, 48], yielding much better performance on various ObjDet benchmarks. Different from ObjDet, InsDet requires distinguishing individual object instances within a class. Nevertheless, to approach InsDet, the common practice is to repurpose ObjDet detectors by treating unique instances as individual classes. We follow this practice and benchmark various ObjDet methods on our InsDet dataset.

Pretrained models. Pretraining is an effective way to learn features from diverse data. For example, trained on the large-scale ImageNet dataset for image classification [10], a neural network can serve as a powerful feature extractor for various vision tasks [11, 42]. Object detectors trained on the COCO dataset [28] can serve as a backbone allowing finetuning on a target domain to improve detection performance [27]. Such pretraining requires human annotations, which can be costly. Therefore, self-supervised pretraining has attracted increasing attention and achieved remarkable progress [9, 18, 8, 33]. Moreover, the recent literature shows that pretraining on much larger-scale data can yield a foundation model able to perform well across domains and tasks. For example, the Segment Anything Model (SAM) pretrains a class-agnostic proposal detector on web-scale data and shows an impressive ability to detect and segment diverse objects in the wild [23]. In this work, with our high-res InsDet dataset, we explore a non-learned method by using publicly available pretrained models.
We show that such a simple method significantly outperforms end-to-end learned InsDet detectors. + +# 3 Instance Detection: Protocol and Dataset + +In this section, we formulate a realistic unified InsDet protocol and introduce the new dataset. We release our dataset under the MIT License, hoping to contribute to the broader research community. + +# 3.1 The Protocol + +Our InsDet protocol is motivated by real-world indoor robotic applications. In particular, we consider the scenario that assistive robots must locate and recognize instances to fetch them in a cluttered indoor scene [40], where InsDet is a crucial component. Realistically, for a given object instance, the robots should see it only from a few views (at the training stage), and then accurately detect it in a distance in any scenes (at the testing stage). Therefore, we suggest the protocol specifying the training and testing setups below. We refer the readers to Fig. 2 for an illustration of this protocol. + +- Training. There are profile images of each instance captured at different views and diverse background images. The background images can be used to synthesize training images with free 2D-box annotations, as done by the cut-paste-learn methods [26, 12, 16]. +- Testing. InsDet algorithms are required to precisely detect all predefined instances from real-world images of cluttered scenes. + +Evaluation metrics. The InsDet literature commonly uses average precision (AP) at IoU=0.5 [12, 2, 32]; others use different metrics, e.g., AP at IoU=0.75 [21], mean AP [3, 15], and F1 score [4]. As a single metric appears to be insufficient to benchmark methods, we follow the literature of ObjDet that uses multiple metrics altogether [28]. + +- AP averages the precision at IoU thresholds from 0.5 to 0.95 with the step size 0.05. It is the primary metric in the most well-known COCO Object Detection dataset [28]. 
- $\mathbf{AP}_{50}$ and $\mathbf{AP}_{75}$ are the precision averaged over all instances at IoU thresholds 0.5 and 0.75, respectively. In particular, $\mathbf{AP}_{50}$ is the most widely used metric in the InsDet literature.
- AR (average recall) averages the proposal recall at IoU thresholds from 0.5 to 1.0 with the step size 0.05, regardless of the classification accuracy. AR measures the localization performance (excluding classification accuracy) of an InsDet model.

Moreover, we tag hard and easy scenes in the testing images based on the level of clutter and occlusion, as shown by the right panel of Fig. 2.

# 3.2 The Dataset

We introduce a challenging real-world dataset of indoor scenes (motivated by indoor assistive robots), including high-resolution photos of 100 distinct object instances, and high-resolution testing images captured from 14 indoor scenes in which these 100 instances appear. Table 1 summarizes the statistics compared with existing datasets, showing that our dataset is larger in scale and more challenging than existing InsDet datasets. Importantly, object instances are located far from the camera in cluttered scenes; this is realistic because robots must detect objects at a distance before approaching them [1]. Perhaps surprisingly, only a few InsDet datasets exist in the literature. Among them, Grocery [4], which is the latest and, like our dataset, has the most instances, is not publicly available.

Our InsDet dataset contains 100 object instances. When capturing photos for each instance, inspired by prior art [43, 20, 5], we paste a QR code on the tabletop, which enables pose estimation, e.g., using COLMAP [41]. Yet, we note that a more realistic scenario is to hand-hold instances during capture [24], which we leave as future work. In Fig. 3, we plot the per-instance frequency in the testing set. Each instance photo is of $3072 \times 3072$ pixel resolution.
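The metrics of Sec. 3.1 build on box IoU and COCO-style averaging over IoU thresholds. A minimal sketch (illustrative only; `box_iou`, `coco_ap`, and `IOU_THRESHOLDS` are hypothetical helper names, not our evaluation code):

```python
import numpy as np

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# COCO-style AP averages the per-threshold AP over IoU = 0.50, 0.55, ..., 0.95.
IOU_THRESHOLDS = np.linspace(0.50, 0.95, 10)

def coco_ap(ap_per_threshold):
    """Average AP over the ten COCO IoU thresholds."""
    assert len(ap_per_threshold) == len(IOU_THRESHOLDS)
    return float(np.mean(ap_per_threshold))
```

In practice, a full evaluator (e.g., pycocotools) also handles score ranking and per-instance matching; the sketch only pins down the averaging convention.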
For each instance, we capture 24 photos from multiple views. The left panel of Fig. 2 shows some random photos for some instances. For the testing set, we capture high-resolution images ( $6144 \times 8192$ ) in cluttered scenes, where some instances are placed in reasonable locations, as shown in the right panel of Fig. 2. We tag these images as easy or hard based on scene clutter and object occlusion levels. When objects are placed sparsely, we tag the testing images as easy; otherwise, we tag them as hard. Our InsDet dataset also contains 200 high-res background images of indoor scenes (cf. Fig. 2-middle). These indoor scenes are not included in testing images. They allow using the cut-paste-learn framework to synthesize training images [26, 12, 16]. Following this framework, we segment foreground instances using GrabCut [39] + +Table 1: Comparison of our dataset to existing ones. Several datasets are used in the InsDet literature although they are designed for different tasks. For example, BigBird and LM are designed to study algorithms of object recognition and object pose estimation, hence they contain instances that are close to the camera. Naively repurposing them for InsDet leads to saturated performance, impoverishing the exploration space of InsDet. Instead, ours is more challenging as instances are placed far from the camera, simulating realistic scenarios where robots must detect instances at a distance. Importantly, our dataset contains far more instances than other publicly available InsDet datasets. + +
| dataset | for what task | publicly available | #instances | #scenes | published year | resolution |
| --- | --- | --- | --- | --- | --- | --- |
| BigBird [43] | recognition | ✓ | 100 | N/A | 2014 | 1280x1024 |
| RGBD [26] | scene label. | ✓ | 300 | 14 | 2017 | N/A |
| LM [20] | 6D pose est. | ✓ | 15 | 1 | 2012 | 480x640 |
| LM-O [5] | 6D pose est. | ✓ | 20 | 1 | 2017 | 480x640 |
| RU-APC [38] | 3D pose est. | ✓ | 14 | 1 | 2016 | 480x640 |
| GMU [15] | InsDet | ✓ | 23 | 9 | 2016 | 1080x1920 |
| AVD [1] | InsDet | ✓ | 33 | 9 | 2017 | 1080x1920 |
| Grocery [4] | InsDet | ✗ | 100 | 10 | 2021 | unknown |
| Ours | InsDet | ✓ | 100 | 14 | 2023 | 6144x8192 |
+ +![](images/24164cf203750ed1a7869af13fe7f38807c15c7e7d7b26f5c676ef66159b4c06.jpg) +Figure 3: Imbalanced distribution of instances in test-set. Yet, instances have the same number of profile images in training and the metrics average over all instances. So, the evaluation is unbiased. + +
| size | bounding box area |
| --- | --- |
| small | $< 200^2$ |
| medium | $200^2$ – $400^2$ |
| large | $> 400^2$ |
![](images/d2dbe995724ff78a205fdb3f9179e7ba7c1e444317a23e1481a591fd2622e1ad.jpg)
Table 2: Following the spirit of the COCO dataset, we tag objects of different sizes as small, medium, and large, respectively.
Figure 4: Distribution of objects w.r.t their bounding box area in testing images. We split them into small, medium, and large subgroups to allow breakdown analysis.

to paste them on background images. It is worth noting that the recent vision foundation model SAM [23] makes interactive segmentation much more efficient. Yet, SAM was made public only after we collected our dataset. Following the COCO dataset [28], we further tag testing object instances as small, medium, and large according to their bounding box area, as in Table 2. To determine their size tags, we plot the distribution of their sizes in Fig. 4, showing an intuitive way to tag them.

# 4 Methodology

# 4.1 The Strong Baseline: Cut-Paste-Learn

Cut-Paste-Learn serves as a strong baseline that synthesizes training images with 2D-box annotations [12]. This allows one to train InsDet detectors in the same way as normal ObjDet detectors, by simply treating the $K$ unique instances as $K$ distinct classes. It cuts and pastes foreground instances at various aspect ratios and scales on diverse background images, yielding synthetic training images, as shown in Fig. 5. Cut-paste-learn is model-agnostic, allowing one to adopt any state-of-the-art detector architecture. In this work, we study five popular detectors: the two-stage detector FasterRCNN [37], the one-stage anchor-based detector RetinaNet [29], the one-stage anchor-free detectors CenterNet [49] and FCOS [45], and the transformer-based detector DINO [48]. There are multiple factors in the cut-paste-learn framework, such as the number of inserted objects in each background image, their relative size, the number of generated training images, and the blending methods.
We conduct comprehensive ablation studies and report results using the best-tuned choices. We refer interested readers to the supplement for the ablation studies.

![](images/b69576f471bf59f27f826bd6a521ab339b38496fd78683fedab20b102ce4abf9.jpg)
(a) Box

![](images/bf0ae63bf3658630468d7e6b7f7d4dd179d2678fb9289a013ddc26bcebf6ae3f.jpg)
(b) Gaussian blurring
Figure 5: Synthetic training images for cut-paste-learn methods. We use different blending methods to paste object instances on the same background. We recommend that interested readers refer to the supplement for an ablation study using different blending methods.

![](images/e6983e5750c863ff700b2069a398b4a8cb5c4cbaf5534c6d18e6fdea84a9bd84.jpg)
(c) Motion

![](images/6960c41ebb04f764e63cc03daad5a7bc1dc936ef5a1a5273c6166e4d5d4ff8c4.jpg)
(d) Naive pasting

# 4.2 The Simple, Non-Learned Method

We introduce a simple, non-learned InsDet method by exploiting publicly available pretrained models. This method consists of three main steps: (1) proposal generation on testing images, (2) matching proposals and profile images, and (3) selecting the best-matched proposals as the detected instances.

Proposal generation. We use the recently released Segment Anything Model (SAM) [23] to generate proposals. For a proposal, we define a minimum bounding square box encapsulating the masked instance, and then crop the region from the high-resolution testing image. SAM not only achieves high recall (Table 4) on our InsDet dataset but also detects objects not belonging to the instances of interest. So the next step is to find the instances of interest among the proposals.

Feature representation of proposals and profile images. Intuitively, among the pool of proposals, we are interested in those that are well matched to any profile image of any instance. The well-matched ones are more likely to be predefined instances. To match proposals and profile images, we use off-the-shelf features to represent them.
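A compact sketch of the square-box cropping and a greedy proposal–instance matching step (illustrative only, assuming features are precomputed by some extractor; `square_box`, `rank_and_select`, and the 0.3 default threshold are our assumptions, not the exact implementation):

```python
import numpy as np

def square_box(mask):
    """Minimum bounding square box (x1, y1, x2, y2) enclosing a binary mask."""
    ys, xs = np.nonzero(mask)
    x1, x2, y1, y2 = xs.min(), xs.max() + 1, ys.min(), ys.max() + 1
    side = int(max(x2 - x1, y2 - y1))
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    bx, by = int(cx - side / 2), int(cy - side / 2)
    return (bx, by, bx + side, by + side)

def rank_and_select(prop_feats, prof_feats, prof_instance_ids, thresh=0.3):
    """Greedy matching over cosine similarities.

    prop_feats: P x d features of proposal crops; prof_feats: M x d features
    of profile images, each tagged with its instance id. A proposal's score
    for an instance is its max similarity over that instance's profile images.
    Returns (proposal_index, instance_id, score) triplets.
    """
    a = prop_feats / np.linalg.norm(prop_feats, axis=1, keepdims=True)
    b = prof_feats / np.linalg.norm(prof_feats, axis=1, keepdims=True)
    sim = a @ b.T                                  # P x M cosine similarities
    ids = np.asarray(prof_instance_ids)
    inst = np.unique(ids)
    S = np.stack([sim[:, ids == k].max(axis=1) for k in inst], axis=1)  # P x K
    S[S < thresh] = -np.inf                        # drop weak matches
    matches = []
    while np.isfinite(S).any():
        p, q = np.unravel_index(np.argmax(S), S.shape)
        matches.append((int(p), int(inst[q]), float(S[p, q])))
        S[p, :] = -np.inf                          # remove the matched proposal
    return matches
```

Replacing the greedy loop with stable matching [14, 31] gives the alternative selection strategy compared later in the ablations.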
In this work, we study two self-supervised learned models as feature extractors, i.e., $\mathrm{DINO}_f$ [8] and $\mathrm{DINOv2}_f$ [33]. We feed a square crop (of a proposal) or a profile image to the feature extractor to obtain its feature representation. We use cosine similarity over the features as the similarity measure between a proposal and a profile image.

Proposal matching and selection. As each instance has multiple profile images, we need to define the similarity between a proposal and an instance. For a proposal, we compute the cosine similarities of its feature to all the profile images of an instance and use the maximum as its final similarity to this instance. We then filter out proposals and instances whose similarities are lower than a threshold, indicating that they are not matched to any instances or proposals. Finally, we obtain a similarity matrix between all remaining proposals and all remaining instances. Over this matrix, we study two matching algorithms to find the best match (hence the final InsDet results), i.e., Rank & Select and Stable Matching [14, 31]. The former is a greedy algorithm that iteratively selects the best match (highest cosine similarity) between a proposal and an instance and removes the corresponding proposal until no proposal/instance is left. The latter produces an optimal list of matched proposals and instances, such that there exists no pair of an instance and a proposal which both prefer each other to their current correspondence under the matching.

# 5 Experiments

Synthesizing training images for cut-paste-learn baselines. Our baseline method trains state-of-the-art ObjDet detectors on data synthesized using the cut-paste-learn strategy [12]. For evaluating on our InsDet dataset, we generate 19k training examples and 6k validation examples. For each example, a varying number of foreground objects, ranging from 25 to 35, is pasted onto a randomly selected background image.
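The core paste-and-blend step of this synthesis can be sketched in numpy (a minimal illustration under stated assumptions: `paste_instance` is a hypothetical helper, and the crude box-filter softening of the paste mask merely stands in for the blending options used in cut-paste-learn):

```python
import numpy as np

def paste_instance(background, fg_rgb, fg_mask, top_left, blur=1):
    """Paste one segmented instance onto a background image.

    background: HxWx3 uint8; fg_rgb: hxwx3 uint8; fg_mask: hxw in {0, 1};
    top_left: (y, x) paste position. Returns the composited image and the
    (x1, y1, x2, y2) box annotation obtained "for free".
    """
    out = background.astype(np.float32)
    h, w = fg_mask.shape
    y, x = top_left
    alpha = fg_mask.astype(np.float32)
    for _ in range(blur):  # crude 3x3 box blur: softens the paste boundary
        padded = np.pad(alpha, 1)
        alpha = sum(padded[dy:dy + h, dx:dx + w]
                    for dy in range(3) for dx in range(3)) / 9.0
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (alpha[..., None] * fg_rgb
                             + (1.0 - alpha[..., None]) * region)
    return out.astype(np.uint8), (x, y, x + w, y + h)
```

Sampling 25–35 such pastes per background image, at random scales, yields one synthetic training image together with its box annotations.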
The objects are randomly resized with a scale from 0.15 to 0.5. We use four blending options [12], including Gaussian blurring, motion blurring, box blurring, and naive pasting. Fig. 5 shows some random synthetic images. The above factors have a notable impact on the final performance of trained models, and we have conducted a comprehensive ablation study. We refer interested readers to the supplement for the study.

Implementation details. We conduct all the experiments based on open-source implementations, such as Detectron2 [47] (for FasterRCNN and RetinaNet), CenterNet [50], FCOS [44] and DINO [48]. The CNN-based end-to-end detectors are initialized with pretrained weights on COCO [28]. We fine-tune CNN-based models using SGD and the transformer-based model using AdamW with a

Table 3: Benchmarking results on our dataset. We summarize three salient conclusions. (1) End-to-end trained detectors perform better with stronger detector architectures, e.g., the transformer DINO (27.99 AP) outperforms FasterRCNN (19.54 AP). (2) Interestingly, the non-learned method SAM+DINOv2$_f$ performs the best (41.61 AP), significantly better than end-to-end learned detectors including DINO (27.99 AP). (3) All methods have much lower AP on hard testing images or small objects (e.g., SAM+DINOv2$_f$ yields 28.03 AP on hard vs. 47.57 AP on easy), showing that future work should focus on hard situations or small instances.
| method | AP (avg) | AP (hard) | AP (easy) | AP (small) | AP (medium) | AP (large) | $\mathbf{AP}_{50}$ | $\mathbf{AP}_{75}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FasterRCNN [37] | 19.54 | 10.26 | 23.75 | 5.03 | 22.20 | 37.97 | 29.21 | 23.26 |
| RetinaNet [29] | 22.22 | 14.92 | 26.49 | 5.48 | 25.80 | 42.71 | 31.19 | 24.98 |
| CenterNet [49] | 21.12 | 11.85 | 25.70 | 5.90 | 24.15 | 40.38 | 32.72 | 23.60 |
| FCOS [45] | 22.40 | 13.22 | 28.68 | 6.17 | 26.46 | 38.13 | 32.80 | 25.47 |
| DINO [48] | 27.99 | 17.89 | 32.65 | 11.51 | 31.60 | 48.35 | 39.62 | 32.19 |
| SAM + DINO$_f$ | 36.97 | 22.38 | 43.88 | 11.93 | 40.85 | 62.67 | 44.13 | 40.42 |
| SAM + DINOv2$_f$ | 41.61 | 28.03 | 47.57 | 14.58 | 45.83 | 69.14 | 49.10 | 45.95 |
+ +Table 4: Benchmarking results w.r.t average recall (AR) for small, medium and large instances. "AR@max10" means AR within the top-10 ranked detections. In computing AR, we rank detections by using the detection confidence scores of the learning-based methods (e.g., FasterRCNN) or similarity scores in the non-learned methods (e.g., SAM+DINO $_f$ ). AR $_s$ , AR $_m$ , and AR $_l$ are breakdowns of AR for small, medium and large testing object instances. Results show that (1) the non-learned methods that use SAM generally recall more instances than others, and (2) all methods suffer from small instances. In sum, results show that methods yielding higher recall achieve higher AP metrics (cf. Table 3). + +
| method | AR@max10 | AR@max100 | AR$_s$@max100 | AR$_m$@max100 | AR$_l$@max100 |
| --- | --- | --- | --- | --- | --- |
| FasterRCNN [37] | 26.24 | 39.24 | 14.83 | 44.87 | 60.05 |
| RetinaNet [29] | 26.33 | 49.38 | 22.04 | 56.76 | 69.69 |
| CenterNet [49] | 23.55 | 44.72 | 17.84 | 52.03 | 64.58 |
| FCOS [45] | 25.82 | 46.28 | 22.09 | 52.85 | 64.11 |
| DINO [48] | 29.84 | 54.22 | 32.00 | 59.43 | 72.92 |
| SAM + DINO$_f$ | 31.25 | 63.05 | 31.65 | 70.01 | 90.63 |
| SAM + DINOv2$_f$ | 40.02 | 63.06 | 31.11 | 70.40 | 90.36 |
learning rate of 1e-3 and a batch size of 16. We fine-tune all the models for 5 epochs (which is enough for training to converge) and evaluate checkpoints after each epoch for model selection. The models are trained on a single Tesla V100 GPU with 32G memory.

Where applicable, we preprocess object instance profile images and proposals. Specifically, for a profile image, we remove the background pixels (e.g., pixels of the QR code) using foreground segmentation (i.e., GrabCut). For each proposal, we crop its minimum bounding square box. We also study whether removing background pixels by using SAM's mask output performs better. We use $\mathrm{DINO}_f$ and $\mathrm{DINOv2}_f$ to compute feature representations.

# 5.1 Benchmarking Results

Quantitative results. To evaluate the proposed InsDet protocol and dataset, we first train detectors from a COCO-pretrained backbone following the cut-paste-learn baseline. Table 3 lists detailed comparisons and Fig. 6 plots the precision-recall curves for the compared methods. First, detectors with stronger architectures perform better, e.g., DINO (27.99% AP) vs. FasterRCNN (19.54% AP). Second, non-learned methods outperform end-to-end trained models, e.g., SAM+DINOv2$_f$ (41.61% AP) vs. DINO (27.99% AP). Third, all the methods perform poorly on hard scenes and small instances, suggesting future work focusing on such cases.

Table 4 compares methods w.r.t the average recall (AR) metric. "AR@max10" means AR within the top-10 ranked detections. In computing AR, we rank detections by using the detection confidence scores of the learning-based methods (e.g., FasterRCNN) or similarity scores in the non-learned methods (e.g., $\mathrm{SAM + DINO}_f$). $\mathrm{AR}_s$, $\mathrm{AR}_m$, and $\mathrm{AR}_l$ are breakdowns of AR for small, medium, and large testing object instances.
Results show that (1) the non-learned methods that use SAM generally recall more instances than others, and (2) all methods suffer from small instances. In sum, results show that methods yielding higher recall achieve higher AP metrics (cf. Table 3). Table 5 further studies AR in hard and easy scenes. We can observe that: (1) the non-learned methods that use SAM + +Table 5: Benchmarking results w.r.t average recall (AR) for hard and easy scenes. We add a breakdown analysis of testing images on hard and easy scenes. Results show that (1) the non-learned methods that use SAM generally recall more instances than others, and (2) all methods suffer from hard scenes. + +
| method | AR@max10 (avg) | AR@max10 (hard) | AR@max10 (easy) | AR@max100 (avg) | AR@max100 (hard) | AR@max100 (easy) |
| --- | --- | --- | --- | --- | --- | --- |
| FasterRCNN [37] | 26.24 | 12.92 | 32.33 | 39.24 | 16.91 | 49.43 |
| RetinaNet [29] | 26.33 | 15.38 | 31.33 | 49.38 | 29.00 | 58.69 |
| CenterNet [49] | 23.55 | 11.87 | 28.87 | 44.72 | 24.88 | 53.76 |
| FCOS [45] | 25.82 | 12.81 | 31.74 | 46.28 | 26.55 | 55.27 |
| DINO [48] | 29.84 | 16.63 | 35.84 | 54.22 | 36.46 | 62.30 |
| SAM + DINO$_f$ | 31.25 | 16.96 | 37.73 | 63.05 | 42.46 | 72.41 |
| SAM + DINOv2$_f$ | 40.02 | 27.64 | 45.36 | 63.06 | 43.47 | 71.96 |
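Per image, the AR@maxK protocol used in Tables 4 and 5 amounts to a routine of the following shape (a hedged sketch at a single IoU threshold with greedy one-to-one matching; the names and details are our assumptions, not the evaluation code, which additionally averages over IoU thresholds and images):

```python
import numpy as np

def iou_xyxy(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def recall_at_max_k(det_boxes, det_scores, gt_boxes, k=10, iou_thr=0.5):
    """Fraction of ground-truth boxes recalled by the top-k scoring detections.

    Detections are ranked by confidence (or, for the non-learned methods,
    by similarity score); each ground truth is matched greedily at most once.
    """
    order = np.argsort(det_scores)[::-1][:k]
    matched = np.zeros(len(gt_boxes), dtype=bool)
    for d in det_boxes[order]:
        ious = np.array([iou_xyxy(d, g) for g in gt_boxes])
        ious[matched] = -1.0  # each ground truth can be recalled only once
        j = int(np.argmax(ious))
        if ious[j] >= iou_thr:
            matched[j] = True
    return float(matched.mean())
```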
![](images/a0584636f8a03c71c59aabb8c63ba42c48a34609260adf753eb24cc159761cce.jpg)
Figure 6: Precision-recall curves with IoU=0.5 (AP50 in the legend) on our InsDet dataset. Stronger detectors perform better, e.g., DINO, a transformer-based detector, significantly outperforms FasterRCNN. Furthermore, even a simple non-learned method leveraging pretrained models, e.g., SAM+DINOv2$_f$, outperforms end-to-end learned methods.

recall more proposals than other competitors in both hard and easy scenes; (2) all methods suffer from hard scenes.

Qualitative results. Fig. 7 visualizes qualitative results on two testing examples from the InsDet dataset. Stronger detectors, e.g., the non-learned method SAM+DINOv2$_f$, produce fewer false negatives. Even so, all detectors still struggle to detect instances under challenging conditions such as heavy occlusion and very small instance size. As shown in Fig. 6, the non-learned method SAM+DINOv2$_f$ outperforms end-to-end learned methods over a wide range of recall thresholds.

# 5.2 Ablation Study

Due to space limits, we ablate the instance crop and stable matching in the main paper and put more (including ablation studies for the cut-paste-learn methods) in the supplement.

Proposal feature extraction in the non-learned method. Given a box crop (encapsulating the proposal) generated by SAM in the non-learned method, we study how to process the crop to improve InsDet performance. Here, we can either crop and feed its minimum bounding box to compute DINOv2$_f$ features, or we can use the mask to remove the background in the box. Table 6 shows the comparison. Clearly, the latter performs remarkably better in both hard and easy scenarios.

Proposal-instance match in the non-learned method. After generating proposals by SAM, we need to compare them with instance profile images to get the final detection results. We study the InsDet performance of the two matching algorithms.
Rank & Select is a greedy algorithm that iteratively finds the best match between any proposal and instance until no instances/proposals are left unmatched; Stable Matching produces an optimal list of matched proposals and instances such that there does not exist a pair in which both prefer other proposals/instances to their current correspondence under the matching. Table 7 compares these two methods, clearly showing that stable matching works better.

Impact of different image/proposal resolutions. We study the InsDet performance when using images of different sizes for SAM, and using different resolutions of crops fed into DINOv2$_f$. For example, we can run SAM on images of size $3072 \times 4096$ to generate proposals, and resize proposal crops to $224 \times 224$ before feeding them into DINOv2$_f$ if they are larger than $224 \times 224$ (otherwise, we keep them unchanged). Table 8 lists detailed comparisons. We have two observations. (1) The InsDet performance generally increases with the image resolution but starts to drop when the input image is too large, i.e., $6144 \times 8192$. This is because SAM tends to produce object parts as individual instances,

![](images/07b64ccc6b2085192397d13f192cabe9b944664e81ab0ef5f6603b68257f7510.jpg)

![](images/ebc61b425c958040fd99ac7c32da4abe323ba3ecd09815440bafa2a953eaf85d.jpg)

![](images/fd7d767c2dcd7ab06e6fcfdddf33d719f0093711defdf76edf6045d0e233d467.jpg)

![](images/7823d2c54e67b0d72794c3e2cf5dba9a62a6b1651d1a85f543f58fd04b68ef6c.jpg)

![](images/27b5cd7cae78a8afa884f9e24a3b64cc0c8851e50814469f9b36d1a4657f3ab8.jpg)
Figure 7: Visual results of FasterRCNN, DINO, and SAM+DINOv2 on our InsDet dataset. The top row illustrates the sparse placement of instances (i.e., easy scenario), while the bottom contains more cluttered instances (i.e., hard scenario). We drop predicted instance names for brevity. SAM helps localize instances with more precise bounding boxes, e.g., as arrows labeled in the upper row.
DINOv2 provides more precise recognition of localized instances, e.g., five instances in the right of the bottom row. Compared with DINO, SAM+DINOv2 is better at locating occluded instances. + +![](images/700af41bef30d0125b05d4abf3b7a20f0016ef137d023b49e711fa8c6c950725.jpg) + +![](images/c81191d0645d01b7babd2723e4cfea5b317440824e243d673077635d1173b1f8.jpg) + +![](images/dfa63f90000d90999d1695358fa564ea6383bddd6b35ec7e6b80bf4486a2c255.jpg) + +Table 6: Ablation study: whether to remove background in crops for feature computation. Based on a proposal given by SAM, we can crop and feed its minimum bounding square to compute DINOv2 $f$ feature, or we can use the mask to remove the background in the square before computing the feature. Clearly, the latter performs remarkably better. + +
| strategy | AP (avg) | AP (hard) | AP (easy) | AP50 (avg) | AP50 (hard) | AP50 (easy) | AP75 (avg) | AP75 (hard) | AP75 (easy) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| w/o background removal | 36.04 | 23.04 | 42.37 | 43.84 | 29.12 | 51.00 | 39.59 | 25.74 | 46.13 |
| w/ background removal | 39.12 | 24.00 | 47.17 | 46.72 | 30.81 | 54.66 | 42.86 | 26.40 | 51.58 |
+ +Table 7: Ablation study: whether to generate unique proposal-instance match. In contrast to Rank&Select, Stable Matching produces a unique match to proposal/instance for each instance/proposal, yielding better performance than Rank&Select. + +
| strategy | AP (avg) | AP (hard) | AP (easy) | AP50 (avg) | AP50 (hard) | AP50 (easy) | AP75 (avg) | AP75 (hard) | AP75 (easy) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Rank & Select | 38.62 | 23.95 | 46.31 | 46.04 | 30.77 | 53.64 | 42.37 | 26.39 | 50.61 |
| Stable Matching | 39.12 | 24.00 | 47.17 | 46.72 | 30.81 | 54.66 | 42.86 | 26.40 | 51.58 |
+ +Table 8: Ablation study: which image/proposal size to use for SAM and DINOv2. We notice that InsDet performance generally increases with the input image resolution, but starts to drop when the image is too large. When using larger proposals for DINOv2, InsDet performance also gets better. + +
| image resolution for SAM | input size for DINOv2$_f$ | AP (avg) | AP (hard) | AP (easy) | AP50 (avg) | AP50 (hard) | AP50 (easy) | AP75 (avg) | AP75 (hard) | AP75 (easy) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 768×1024 | 224×224 | 36.46 | 21.67 | 43.88 | 46.22 | 29.52 | 54.11 | 41.53 | 24.95 | 49.99 |
| 1536×2048 | 224×224 | 39.12 | 24.00 | 47.17 | 46.72 | 30.81 | 54.66 | 42.86 | 26.40 | 51.58 |
| 3072×4096 | 224×224 | 39.17 | 24.08 | 46.60 | 45.71 | 30.34 | 53.07 | 41.90 | 26.12 | 49.71 |
| 6144×8192 | 224×224 | 38.74 | 23.39 | 46.29 | 45.24 | 29.24 | 52.78 | 40.81 | 25.14 | 48.65 |
| 1536×2048 | 112×112 | 26.46 | 16.52 | 31.32 | 30.83 | 20.94 | 36.26 | 28.89 | 18.81 | 33.73 |
| 1536×2048 | 224×224 | 39.12 | 24.00 | 47.17 | 46.72 | 30.81 | 54.66 | 42.86 | 26.40 | 51.58 |
| 1536×2048 | 448×448 | 41.61 | 28.03 | 47.57 | 49.10 | 36.64 | 54.84 | 45.95 | 31.41 | 52.03 |
+ +
| method | time (sec) | AP (%) |
| --- | --- | --- |
| FasterRCNN [37] | 0.00399 | 19.54 |
| RetinaNet [29] | 0.00412 | 22.22 |
| CenterNet [49] | 0.00376 | 21.12 |
| FCOS [45] | 0.00271 | 22.40 |
| DINO [48] | 1.90625 | 27.99 |
| SAM + DINO$_f$ | 15.10 | 36.97 |
| SAM + DINOv2$_f$ | 14.70 | 41.61 |
Table 9: We compare the inference runtime (seconds/image) of different methods, along with their InsDet performance in AP (%). Clearly, there is a trade-off between runtime and detection precision. For example, among the methods studied in our work, SAM+DINOv2$_f$ achieves the highest AP (41.61) but is four orders of magnitude slower than FasterRCNN (19.54 AP). Developing faster and better InsDet methods is apparently future work.

resulting in more false positives. (2) When using larger proposals for DINOv2$_f$, InsDet performance gets better, e.g., $41.61\%$ ($448 \times 448$) vs. $39.12\%$ ($224 \times 224$).

# 5.3 Runtime Comparison

Table 9 compares the runtime of different methods, along with their InsDet performance. There is a trade-off between runtime and detection precision. For example, among the methods studied in our work, SAM+DINOv2$_f$ achieves the highest AP (41.61) but is four orders of magnitude slower than FasterRCNN (19.54 AP). Developing faster and better InsDet methods is future work.

# 5.4 Discussions

Societal impact. InsDet is a crucial component in various robotic applications such as elderly-assistive agents. Hence, releasing a unified benchmarking protocol contributes to broader communities. While our dataset enables InsDet research to move forward, similar to other works, directly applying algorithms developed on our dataset is risky in real-world applications.

Limitations. We note several limitations in our current work. First, while our work uses normal cameras to collect datasets, we expect to use better and cheaper hardware (e.g., depth camera and IMU) for data collection. Second, while the cut-paste-learn method we adopt does not consider geometric cues when synthesizing training images, we hope to incorporate such information to generate better and more realistic training images, e.g., pasting instances only on upward-facing surfaces like tables, desks, and floors.
Third, while SAM+DINOv2 $f$ performs the best, this method is time-consuming (see a run-time study in the supplement); real-world applications should consider real-time requirements. + +Future work. In view of the above limitations, the future work includes: (1) Exploring high-resolution images for more precise detection on hard situations, e.g., one can combine proposals generated from multi-scale and multi-resolution images. (2) Developing faster algorithms, e.g., one can use multi-scale detectors to attend to regions of interest for progressive detection. (3) Bridging end-to-end fast models and powerful yet slow pretrained models, e.g., one can train lightweight adaptors atop pretrained models for better InsDet. + +# 6 Conclusion + +We explore the problem of Instance Detection (InsDet) by introducing a new dataset consisting of high-resolution images and formulating a realistic unified protocol. We revisit representative InsDet methods in the cut-paste-learn framework and design a non-learned method by leveraging publicly-available pretrained models. Extensive experiments show that the non-learned method significantly outperforms end-to-end InsDet models. Yet, the non-learned method is slow because running large pretrained models takes more time than end-to-end trained models. Moreover, all methods struggle in hard situations (e.g., in front of heavy occlusions and a high level of clutter in the scene). This shows that our dataset serves as a challenging venue for the community to study InsDet. + +# Acknowledgements + +This work is supported by NSFC (No.62206256), and University of Macau (SRG2023-00044-FST). Shu Kong acknowledges Dr. Bin Liu for the initial support via compute resource. + +# References + +[1] Phil Ammirato, Patrick Poirson, Eunbyung Park, Jana Kovsecka, and Alexander C. Berg. A dataset for developing and benchmarking active vision. In IEEE International Conference on Robotics and Automation (ICRA), 2017. 
# A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation

Thomas Fel$^{*,1,2}$, Victor Boutin$^{*,1,2}$, Mazda Moayeri$^{*}$, Remy Cadene$^{*}$, Louis Bethune$^{*}$, Léo Andeol$^{*}$, Mathieu Chalvidal$^{*,2}$, Thomas Serre$^{*,2}$

$^{1}$ Carney Institute for Brain Science, Brown University

$^{2}$ Artificial and Natural Intelligence Toulouse Institute

$^{3}$ Department of Computer Science, University of Maryland

{thomas_fel,victor_boutin}@brown.edu

# Abstract

In recent years, concept-based approaches have emerged as some of the most promising explainability methods to help us interpret the decisions of Artificial Neural Networks (ANNs). These methods seek to discover intelligible visual "concepts" buried within the complex patterns of ANN activations, in two key steps: (1) concept extraction, followed by (2) importance estimation. While these two steps are shared across methods, they all differ in their specific implementations. Here, we introduce a unifying theoretical framework that recasts the first step (concept extraction) as a special case of dictionary learning, and formalizes the second step (concept importance estimation) as a more general form of attribution method.
This framework offers several advantages as it allows us: $(i)$ to propose new evaluation metrics for comparing different concept extraction approaches; $(ii)$ to leverage modern attribution methods and evaluation metrics to extend and systematically evaluate state-of-the-art concept-based approaches and importance estimation techniques; and $(iii)$ to derive theoretical guarantees regarding the optimality of such methods.

We further leverage our framework to tackle a crucial question in explainability: how to efficiently identify clusters of data points that are classified based on a similar shared strategy. To illustrate these findings and to highlight the main strategies of a model, we introduce a visual representation called the strategic cluster graph. Finally, we present Lens, a dedicated website that offers a complete compilation of these visualizations for all classes of the ImageNet dataset.

# 1 Introduction

The black-box nature of Artificial Neural Networks (ANNs) poses a significant hurdle to their deployment in industries that must comply with stringent ethical and regulatory standards [1]. In response to this challenge, eXplainable Artificial Intelligence (XAI) focuses on developing new tools to help humans better understand how ANNs arrive at their decisions [2, 3]. Among the large array of methods available, attribution methods have become the go-to approach [4-14]. They yield heatmaps that highlight the importance of each input feature (or group of features [15]) in driving a model's decision. However, there is growing consensus that these attribution methods fall short of providing meaningful explanations [16-19], as revealed by multiple user studies [20-25].
It has been suggested that for explainability methods to become usable by human users, they need to highlight not just the location of important features within an image (i.e., the where information) but also their semantic content (i.e., the what information).

One promising family of explainability methods addressing this challenge is concept-based explainability methods, which aim to identify high-level concepts within the activation space of ANNs [26]. These methods have recently gained renewed interest due to their success in providing human-interpretable explanations [27-30] (see section 2 for a detailed description of the related work). However, concept-based explainability methods are still in their early stages, and progress relies largely on researchers' intuitions rather than well-established theoretical foundations. A key challenge lies in formalizing the notion of concept itself [31]. Researchers have proposed desiderata such as meaningfulness, coherence, and importance [27], but the lack of formalism in the definition of a concept has hindered the derivation of appropriate metrics for comparing different methods.

This article presents a theoretical framework to unify and characterize current concept-based explainability methods. Our approach builds on the fundamental observation that all concept-based explainability methods share two key steps: (1) concepts are extracted, and (2) importance scores are assigned to these concepts based on their contribution to the model's decision [27]. Here, we show how the first (extraction) step can be formulated as a dictionary learning problem, while the second (importance scoring) step can be formulated as an attribution problem in the concept space.
To summarize, our contributions are as follows:

- We describe a novel framework that unifies all modern concept-based explainability methods, and we borrow metrics from different fields (such as sparsity, reconstruction, stability, FID, or OOD scores) to evaluate the effectiveness of those methods.
- We leverage modern attribution methods to derive seven novel concept importance estimation methods and provide theoretical guarantees regarding their optimality. Additionally, we show how standard faithfulness metrics used to benchmark attribution methods (i.e., Insertion, Deletion [32], and $\mu$Fidelity [33]) can be adapted to evaluate concept importance scoring. In particular, we demonstrate that Integrated Gradients, Gradient Input, RISE, and Occlusion achieve the highest theoretical scores on these 3 faithfulness metrics when the concept decomposition is performed on the penultimate layer.
- We introduce the notion of local concept importance to address a significant challenge in explainability: the identification of image clusters that reflect a shared strategy of the model (see Figure 1). We show how the corresponding cluster plots can serve as visualization tools that help identify the main visual strategies used by a model, e.g., to explain false-positive classifications.

![](images/b23d2fdd6391a86f4bf219c82e3007901360cbb61b2d16b99173cbdafa9e123c.jpg)
Figure 1: Strategic cluster graphs for the espresso and zucchini classes. The framework presented in this study provides a comprehensive approach to uncover local importance using any attribution method. Consequently, it allows us to estimate the critical concepts influencing the model's decision for each image. As a result, we introduce the strategic cluster graph, which offers a visual representation of the main strategies employed by the model in recognizing an entire object class.
For espresso (left), the main strategies for classification appear to be: bubbles and foam on the coffee, latte art, transparent cups with foam and black liquid, the handle of the coffee cup, and finally the coffee in the cup, which appears to be the predominant strategy. As for zucchini, the strategies are: a zucchini in a vegetable garden, the corolla of the zucchini flower, sliced zucchini, the spotted pattern on the zucchini skin, and stacked zucchini.

![](images/23e1691e010aac6856cb8d7bb1d79aab46b5f931140e89d68f935a0566876b5f.jpg)

# 2 Related Work

Kim et al. [26] were the first to propose a concept-based approach to interpret neural network internal states. They defined the notion of concepts using Concept Activation Vectors (CAVs). CAVs are derived by training a linear classifier between a concept's examples and random counterexamples, and then taking the vector orthogonal to the decision boundary. In their work, the concepts are manually selected by humans. They further introduce the first concept importance scoring method, called Testing with CAVs (TCAV). TCAV uses directional derivatives to evaluate the contribution of each CAV to the model's prediction for each object category. Although this approach demonstrates meaningful explanations to human users, it requires a significant human effort to create a relevant image database of concepts. To address this limitation, Ghorbani et al. [27] developed an unsupervised method called Automatic Concept Extraction (ACE) that extracts CAVs without the need for human supervision. In their work, the CAVs are the centroids of the activations (in a given layer) when the network is fed with multi-scale image segments belonging to an image class of interest. However, the use of image segments could introduce biases in the explanations [34-37]. ACE also leverages TCAV to rank the concepts of a given object category based on their importance.

Zhang et al.
[28] proposed a novel method for concept-based explainability called Invertible Concept-based Explanation (ICE). ICE leverages matrix factorization techniques, such as non-negative matrix factorization (NMF), to extract Concept Activation Vectors (CAVs). Here, the concepts are localized, as the matrix factorization is applied on feature maps (before the global average pooling). In ICE, the concepts' importance is computed using the TCAV score [26]. Note that the Singular Value Decomposition (SVD) was also suggested as a concept discovery method [28, 30]. CRAFT (Concept Recursive Activation FacTorization for explainability) uses NMF to extract the concepts, but as it is applied after the global average pooling, the concepts are location invariant. Additionally, CRAFT employs Sobol indices to quantify the global importance of concepts associated with an object category.

# 3 A Unifying perspective

Notations. Throughout, $||\cdot ||_2$ and $||\cdot ||_F$ denote the $\ell_{2}$ and Frobenius norms, respectively. We consider a general supervised learning setting, where a classifier $f:\mathcal{X}\to \mathcal{Y}$ maps inputs from an input space $\mathcal{X}\subseteq \mathbb{R}^d$ to an output space $\mathcal{Y}\subseteq \mathbb{R}^c$. For any matrix $\mathbf{X}\in \mathbb{R}^{n\times d}$, $\pmb{x}_i$ denotes the $i^{th}$ row of $\mathbf{X}$, where $i\in \{1,\dots ,n\}$ and $\pmb{x}_i\in \mathbb{R}^d$. Without loss of generality, we assume that $\pmb{f}$ admits an intermediate space $\mathcal{H}\subseteq \mathbb{R}^p$. In this setup, $h:\mathcal{X}\rightarrow \mathcal{H}$ maps inputs to the intermediate space, and $g:\mathcal{H}\rightarrow \mathcal{Y}$ takes the intermediate space to the output. Consequently, $\pmb {f}(\pmb {x}) = (g\circ h)(\pmb {x})$. Additionally, let $\pmb {a} = \pmb {h}(\pmb {x})\in \mathcal{H}$ represent the activations of $\pmb{x}$ in this intermediate space.
We also abuse notation slightly: $\pmb {f}(\mathbf{X}) = (\pmb {g}\circ \pmb {h})(\mathbf{X})$ denotes the vectorized application of $\pmb{f}$ on each element $\pmb{x}$ of $\mathbf{X}$, resulting in $(f(x_{1}),\ldots ,f(x_{n}))$.

Prior methods for concept extraction, namely ACE [27], ICE [28] and CRAFT [29], can be distilled into two fundamental steps:

(i) Concept extraction: A set of images $\mathbf{X} \in \mathbb{R}^{n \times d}$ belonging to the same class is sent to the intermediate space, giving activations $\mathbf{A} = \mathbf{h}(\mathbf{X}) \in \mathbb{R}^{n \times p}$. These activations are used to extract a set of $k$ CAVs using K-Means [27], PCA (or SVD) [28, 30] or NMF [28, 29]. Each CAV is denoted $\pmb{v}_i$, and $\mathbf{V} = (\pmb{v}_1, \dots, \pmb{v}_k) \in \mathbb{R}^{p \times k}$ forms the dictionary of concepts.
(ii) Concept importance scoring: This involves calculating a set of $k$ global scores, which provides an importance measure of each concept $\pmb{v}_{i}$ for the class as a whole. Specifically, it quantifies the influence of each concept $\pmb{v}_{i}$ on the final classifier prediction for the given set of points $\mathbf{X}$. Prominent measures of concept importance include TCAV [26] and the Sobol indices [29].

The two-step process described above is repeated for all classes. In the following subsections, we theoretically demonstrate that the concept extraction step $(i)$ can be recast as a dictionary learning problem (see 3.1). This allows us to reformulate and generalize the concept importance step $(ii)$ using attribution methods (see 3.2).

# 3.1 Concept Extraction

A dictionary learning perspective. The purpose of this section is to redefine all current concept extraction methods as a problem within the framework of dictionary learning.
Given the necessity for clearer formalization and metrics in the field of concept extraction, integrating concept extraction with dictionary learning enables us to employ a comprehensive set of metrics and obtain valuable theoretical insights from a well-established and extensively researched domain.

The goal of concept extraction is to find a small set of interpretable CAVs (i.e., $\mathbf{V}$) that allows us to faithfully interpret the activations $\mathbf{A}$. By preserving a linear relationship between $\mathbf{V}$ and $\mathbf{A}$, we facilitate the understanding and interpretability of the learned concepts [26, 38]. Therefore, we look for a coefficient matrix $\mathbf{U} \in \mathbb{R}^{n \times k}$ (also called the loading matrix) and a set of CAVs $\mathbf{V}$, so that $\mathbf{A} \approx \mathbf{U}\mathbf{V}^{\top}$. In this approximation of $\mathbf{A}$ using the two low-rank matrices $(\mathbf{U}, \mathbf{V})$, $\mathbf{V}$ represents the concept basis used to reinterpret our samples, and $\mathbf{U}$ contains the coordinates of the activations in this new basis. Interestingly, such a formulation recasts the concept extraction problem as an instance of the dictionary learning problem [39], into which all known concept-based explainability methods fall:

$$
(\mathbf{U}^{\star},\mathbf{V}^{\star}) = \underset{\mathbf{U},\mathbf{V}}{\arg\min}\; \|\mathbf{A} - \mathbf{U}\mathbf{V}^{\top}\|_F^2 \quad \text{s.t.}\quad
\begin{cases}
\forall i,\; \boldsymbol{u}_i \in \{\mathbf{e}_1,\dots,\mathbf{e}_k\} & \text{(K-Means: ACE [27])}, \\
\mathbf{V}^{\top}\mathbf{V} = \mathbf{I} & \text{(PCA: [28, 30])}, \\
\mathbf{U}\geq 0,\; \mathbf{V}\geq 0 & \text{(NMF: CRAFT [29] \& ICE [28])}, \\
\mathbf{U} = \psi(\mathbf{A}),\; \|\mathbf{U}\|_0 \leq K & \text{(Sparse Autoencoder [40])},
\end{cases}
\tag{1}
$$

with $\mathbf{e}_i$ the $i$-th element of the canonical basis, $\mathbf{I}$ the identity matrix, and $\psi$ any neural network. In this context, $\mathbf{V}$ is the dictionary and $\mathbf{U}$ the representation of $\mathbf{A}$ with the atoms of $\mathbf{V}$; $\boldsymbol{u}_i$ denotes the $i$-th row of $\mathbf{U}$. These methods extract the concept banks $\mathbf{V}$ differently, thereby necessitating different interpretations*.

In ACE, the CAVs are defined as the centroids of the clusters found by the K-means algorithm. Specifically, a concept vector $\boldsymbol{v}_i$ in the matrix $\mathbf{V}$ indicates a dense concentration of points associated with the corresponding concept, implying a repeated activation pattern. The main benefit of ACE comes from its reconstruction process, which projects activations onto the nearest centroid and thus ensures that the representation lies within the observed distribution (no out-of-distribution instances). However, its limitation lies in its lack of expressivity, as each activation representation is restricted to a single concept ($\|\boldsymbol{u}\|_0 = 1$). As a result, it cannot capture compositions of concepts, leading to sub-optimal representations that fail to fully grasp the richness of the underlying data distribution.

On the other hand, PCA benefits from superior reconstruction performance due to its looser constraints, as stated by the Eckart-Young-Mirsky theorem [43]. The CAVs are the eigenvectors of the covariance matrix: they indicate the directions in which the data variance is maximal. An inherent limitation is that PCA cannot properly capture stable concepts that do not contribute to the sample variability (e.g., the dog-head concept might not be considered important by PCA to explain the dog class if it is present across all examples).
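The expressivity gap induced by the K-means constraint of equation (1) can be illustrated with a tiny NumPy sketch (ours, not the paper's code): an activation that mixes two concepts is snapped to a single centroid under the one-hot constraint, whereas a non-negative two-sparse code reconstructs it exactly.

```python
import numpy as np

# Toy dictionary of k = 3 concept vectors (CAVs) in a p = 4 dimensional space.
V = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])  # shape (k, p); rows are the v_i

# An activation that is an equal mixture of the first two concepts.
a = 0.5 * V[0] + 0.5 * V[1]

# ACE / K-means representation: u is a canonical basis vector (||u||_0 = 1),
# i.e., the activation is projected onto its single nearest centroid.
nearest = np.argmin(((V - a) ** 2).sum(axis=1))
u_kmeans = np.eye(3)[nearest]
err_kmeans = np.linalg.norm(a - u_kmeans @ V)

# NMF-style representation: non-negative coefficients, compositions allowed.
u_nmf = np.array([0.5, 0.5, 0.0])
err_nmf = np.linalg.norm(a - u_nmf @ V)

print(err_kmeans, err_nmf)  # the one-hot code cannot express the mixture
assert err_nmf < err_kmeans
```

The same point is made formally by the constraint sets in (1): the K-means feasible set is a strict subset of the NMF feasible set whenever the centroids are non-negative.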
Neural networks are known to cluster together points belonging to the same category in the last layer to achieve linear separability [44, 29]. Thus, the orthogonality constraint in PCA might not be suitable for correctly interpreting the manifold of the deep layer induced by points from the same class (interestingly, this limitation can become an asset when studying all classes at once). Also, unlike K-means, which produces strictly positive clusters if all points are positive (e.g., the output of a ReLU), PCA has no sign constraint and can undesirably reconstruct out-of-distribution (OOD) activations, including negative values after a ReLU.

In contrast to K-Means, which induces extremely sparse representations, and PCA, which generates dense representations, NMF (used in CRAFT and ICE) strikes a harmonious balance by providing moderately sparse representations. This is because NMF relaxes the constraints imposed by the K-means algorithm (adding an orthogonality constraint on $\mathbf{V}$ such that $\mathbf{V}\mathbf{V}^{\top} = \mathbf{I}$ would yield a solution equivalent to K-means clustering [45]). This sparsity facilitates the encoding of compositional representations that are particularly valuable when an image encompasses multiple concepts. Moreover, by allowing only additive linear combinations of components with non-negative coefficients, NMF inherently fosters a parts-based representation. This distinguishes NMF from PCA, which offers a holistic representation model. Interestingly, NMF is known to yield representations that are interpretable by humans [28, 29]. Finally, the non-orthogonality of these concepts presents an advantage, as it accommodates the phenomenon of superposition [38], wherein neurons within a layer may contribute to multiple distinct concepts simultaneously.

![](images/25132ae41f5250dac221e91f3771eb8bc4b185c8e65aaa0a95565f047dd8da81.jpg)

![](images/b863385cb82913433722d54ddee4ec7b27884e1ffe92f76cf758842bbc5cdc80.jpg)
Figure 2: Most important concepts extracted by the studied methods. This qualitative example shows the three most important concepts extracted for the 'rabbit' class using a ResNet50 trained on ImageNet. The crops correspond to those maximizing each concept $i$ (i.e., $\pmb{x}$ where $\mathbf{U}(\pmb{x})_i$ is maximal). As demonstrated in previous works [28, 29, 49], NMF (which requires positive activations) produces particularly interpretable concepts despite poorer reconstruction than PCA and being less sparse than K-Means. Details of the sparse autoencoder architecture are provided in the appendix.

To summarize, we have explored three approaches to concept extraction, each necessitating a unique interpretation of the resulting Concept Activation Vectors (CAVs). Among these methods, NMF (used in CRAFT and ICE) emerges as a promising middle ground between PCA and K-means. Leveraging its capacity to capture intricate patterns, along with its ability to facilitate compositional representations and intuitive parts-based interpretations (as demonstrated in Figure 2), NMF stands out as a compelling choice for extracting meaningful concepts from high-dimensional data. These advantages have been underscored by previous human studies, such as Zhang et al. [28] and Fel et al. [29].
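To make the three constrained factorizations of equation (1) concrete, the following self-contained NumPy sketch (ours; the synthetic matrix `A` merely stands in for real activations $\mathbf{A} = h(\mathbf{X})$) factorizes the same matrix under each constraint: truncated SVD for the PCA variant, Lloyd iterations for K-means, and Lee-Seung multiplicative updates for NMF. Since the truncated SVD gives the optimal rank-$k$ approximation (Eckart-Young-Mirsky), its relative error lower-bounds the other two.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 16, 4
A = rng.random((n, p))  # synthetic stand-in for non-negative activations A = h(X)

def pca_factorize(A, k):
    # V^T V = I: top-k right singular vectors (optimal rank-k approximation).
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    V = Vt[:k].T          # (p, k), orthonormal columns
    return A @ V, V       # U = coordinates of A in the concept basis

def kmeans_factorize(A, k, iters=50):
    # Each u_i in {e_1, ..., e_k}: rows are snapped to their nearest centroid.
    V = A[rng.choice(len(A), k, replace=False)].T  # (p, k) centroids
    for _ in range(iters):
        d = ((A[:, None, :] - V.T[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        U = np.eye(k)[assign]                      # one-hot codes
        for j in range(k):
            if (assign == j).any():
                V[:, j] = A[assign == j].mean(0)
    return U, V

def nmf_factorize(A, k, iters=300, eps=1e-9):
    # U, V >= 0: Lee-Seung multiplicative updates.
    n_, p_ = A.shape
    U, V = rng.random((n_, k)), rng.random((p_, k))
    for _ in range(iters):
        U *= (A @ V) / (U @ V.T @ V + eps)
        V *= (A.T @ U) / (V @ U.T @ U + eps)
    return U, V

def rel_err(A, U, V):
    # Relative reconstruction error ||A - U V^T||_F / ||A||_F.
    return np.linalg.norm(A - U @ V.T) / np.linalg.norm(A)

errs = {name: rel_err(A, *factorize(A, k))
        for name, factorize in [("PCA", pca_factorize),
                                ("KMeans", kmeans_factorize),
                                ("NMF", nmf_factorize)]}
print(errs)  # PCA attains the smallest error; K-means is typically the largest
```

In each case the returned pair $(\mathbf{U}, \mathbf{V})$ satisfies one branch of equation (1), and the columns of $\mathbf{V}$ play the role of the concept bank.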
| Method | Relative $\ell_2$ (↓) | Sparsity (↑) | Stability (↓) | FID (↓) | OOD (↓) |
| --- | --- | --- | --- | --- | --- |
| | Eff / R50 / Mob | Eff / R50 / Mob | Eff / R50 / Mob | Eff / R50 / Mob | Eff / R50 / Mob |
| PCA | 0.60 / 0.54 / 0.73 | 0.00 / 0.00 / 0.00 | 0.41 / 0.38 / 0.43 | 0.47 / 0.17 / 0.24 | 2.44 / 0.36 / 0.16 |
| KMeans | 0.72 / 0.66 / 0.84 | 0.95 / 0.95 / 0.95 | 0.07 / 0.08 / 0.04 | 0.46 / 0.21 / 0.33 | 1.76 / 0.29 / 0.15 |
| NMF | 0.63 / 0.57 / 0.75 | 0.68 / 0.44 / 0.64 | 0.17 / 0.14 / 0.16 | 0.38 / 0.21 / 0.24 | 1.98 / 0.29 / 0.15 |
+ +Table 1: Concept extraction comparison. Eff, R50 and Mob denote EfficientNetV2[46], ResNet50[47], MobileNetV2[48]. The concept extraction methods are applied on the last layer of the networks. Each results is averaged across 10 classes of ImageNet and obtained from a set of 16k images for each class. + +Evaluation of concept extraction Following the theoretical discussion of the various concept extraction methods, we conduct an empirical investigation of the previously discussed properties to gain deeper insights into their distinctions and advantages. In our experiment, we apply the PCA, K-Means, and NMF concept extraction methods on the penultimate layer of three state-of-the-art models. We subsequently evaluate the concepts using five different metrics (see Table 1). All five metrics are connected with the desired characteristics of a dictionary learning method. They include achieving a high-quality reconstruction (Relative 12), sparse encoding of concepts (Sparsity), ensuring the stability of the concept base in relation to A (Stability), performing reconstructions within the intended domain (avoiding OOD), and maintaining the overall distribution during the reconstruction process (FID). All the results come from 10 classes of ImageNet (the one used in Imagenette [50]), and are obtained using $n = 16k$ images for each class. + +We begin our empirical investigation by using a set of standard metrics derived from the dictionary learning literature, namely Relative $l_{2}$ and Sparsity. Concerning the Relative $l_{2}$ , PCA achieves the highest score among the three considered methods, confirming the theoretical expectations based on the Eckart-Young-Mirsky theorem [43], followed by NMF. Concerning the sparsity of the underlying representation $\mathbf{u}$ , we compute the proportion of non-zero elements $||\mathbf{u}||_0 / k$ . 
Since K-means inherently has a sparsity of $1/k$ (as induced by Equation 1), it naturally performs better in terms of sparsity, followed by NMF.

We deepen our investigation by proposing three additional metrics that offer complementary insights into the extracted concepts. These metrics are the Stability, the FID, and the OOD score. The Stability (which can be seen as a loose approximation of algorithmic stability [51]) measures how consistent concepts remain when they are extracted from different subsets of the data. To evaluate Stability, we perform the concept extraction methods $N$ times on $K$-fold subsets of the data. Then, we map the extracted concepts together using a Hungarian loss function and measure the cosine similarity of the CAVs. If a method is stable, it should yield the same concepts (up to permutation) across each $K$-fold, where each fold consists of 1000 images. K-Means and NMF demonstrate the highest stability, while PCA appears to be highly unstable, which can be problematic for interpreting the results and may undermine confidence in the extracted concepts.

The last two metrics, FID and OOD, are complementary in that they measure: (i) how faithful the extracted representations are w.r.t. the original distribution, and (ii) the ability of the method to generate points lying in the data distribution (non-OOD). Formally, the FID quantifies the 1-Wasserstein distance [52] $\mathcal{W}_1$ between the empirical distribution of the activations $\mathbf{A}$, denoted $\mu_{\mathbf{a}}$, and the empirical distribution of the reconstructed activations $\mathbf{U}\mathbf{V}^{\mathrm{T}}$, denoted $\mu_{\mathbf{u}}$. Thus, FID is calculated as $\mathrm{FID} = \mathcal{W}_1(\mu_{\mathbf{a}},\mu_{\mathbf{u}})$. On the other hand, the OOD score measures the plausibility of the reconstruction by leveraging Deep-KNN [53], a recent state-of-the-art OOD metric.
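The Stability computation described above can be sketched as follows: match the CAVs of two concept bases with the Hungarian algorithm (here via `scipy.optimize.linear_sum_assignment`) on cosine similarities, then average the matched similarities. The function name and toy data are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def stability(V1, V2):
    """Match the k CAVs (columns) of two concept bases with the Hungarian
    algorithm on cosine similarity; return the mean similarity of matched pairs."""
    V1n = V1 / np.linalg.norm(V1, axis=0, keepdims=True)
    V2n = V2 / np.linalg.norm(V2, axis=0, keepdims=True)
    cos = V1n.T @ V2n                         # (k, k) pairwise cosine similarities
    rows, cols = linear_sum_assignment(-cos)  # maximize the total matched similarity
    return float(cos[rows, cols].mean())

# A concept base compared with a column-permuted copy of itself is perfectly stable.
rng = np.random.default_rng(0)
V = rng.random((8, 5))
print(round(stability(V, V[:, rng.permutation(5)]), 6))  # → 1.0
```

A stable extraction method should score close to 1 across folds, up to the permutation that the matching absorbs.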
More specifically, we use the Deep-KNN score to evaluate the deviation of a reconstructed point from the closest original point. In summary, a good reconstruction method is capable of accurately representing the original distribution (as indicated by FID) while ensuring that the generated points remain within the model's domain (non-OOD). K-means leads to the best OOD scores because each instance is reconstructed as a centroid, resulting in proximity to in-distribution (ID) instances. However, this approach collapses the distribution to a limited set of points, resulting in a poor FID. On the other hand, PCA may suffer from mapping to negative values, which can adversely affect the OOD score. Nevertheless, PCA is specifically optimized to achieve the best average reconstructions. NMF, with fewer stringent constraints, strikes a balance by providing in-distribution reconstructions at both the sample and population levels.

In conclusion, the results clearly demonstrate that NMF strikes a balance between the two other approaches, as it shows promising performance across all tested metrics. Henceforth, we will use NMF to extract concepts without explicitly mentioning it.

**The Last Layer as a Promising Direction** The various methods examined, namely ACE, ICE, and CRAFT, generally rely on a deep layer to perform their decomposition without providing quantitative or theoretical justifications for this choice. To explore its validity, we apply the aforementioned metrics to each block's output in a ResNet50 model. Figure 3 illustrates the metric evolution across different blocks, revealing a trend that favors the last layer for the decomposition. This empirical finding aligns with the practical implementations discussed above.

![](images/34bd8eb4454965839f99deaac4ee23217dce40226d35c3279769537ae162c407.jpg)
Figure 3: Concept extraction metrics across layers.
The concept extraction methods are applied on activations probed at different blocks of a ResNet50 (B2 to B5). Each point is averaged over 10 classes of ImageNet using 16k images for each class. We evaluate 3 concept extraction methods: PCA (dashed), NMF (solid), and KMeans (dotted).

![](images/328138196032925d1370228b83760e8d37feb3542401ee09a9ed2a463c830d26.jpg)

![](images/ef07f1faa778f93aa9d528d8afe7f3224832460e2a361fe296d576dd55bc212e.jpg)

![](images/6844b326a02af185996a09a01a4c3f778e089327c7720fef33d280ea04dfb4a4.jpg)

![](images/45d815939c07712020d8a30c6db6febd054ce873caafd7d3f9dbff97004cbaf7.jpg)

# 3.2 Concept importance

In this section, we leverage our framework to unify concept importance scoring using the existing attribution methods. Furthermore, we demonstrate that, specifically in the case of a decomposition in the penultimate layer, there exist optimal methods for importance estimation, namely RISE [32],
We use the notion of Concept ATtribution methods (which we denote as CATs) to assess the concept importance score. CATs are a generalization of attribution methods: while attribution methods assess the sensitivity of the model output to a change in the pixel space, concept importance evaluates the sensitivity to a change in the concept space. To compute CATs, it is necessary to link the activation $\mathbf{a} \in \mathbb{R}^p$ to the concept base $\mathbf{V}$ and the model prediction $\mathbf{y}$. To do so, we feed the second part of the network $(\mathbf{g})$ with the activation reconstruction $(\mathbf{u}\mathbf{V}^{\top} \approx \mathbf{a})$, so that $\mathbf{y} = \mathbf{g}(\mathbf{u}\mathbf{V}^{\top})$. Intuitively, a CAT method quantifies how a variation of $\mathbf{u}$ will impact $\mathbf{y}$. We denote by $\varphi_{i}(\mathbf{u})$ the $i$-th coordinate of $\varphi(\mathbf{u})$, so that it represents the importance of the $i$-th concept in the representation $\mathbf{u}$.
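A minimal sketch of this pipeline, assuming a toy differentiable head `g` (a linear readout followed by softplus; both it and the concept base are stand-ins of ours, not the paper's model): the gradient-based CAT $\varphi_i(\mathbf{u}) = \nabla_{\mathbf{u}_i}\, \mathbf{g}(\mathbf{u}\mathbf{V}^{\top})$ is estimated by central finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 8, 5
V = rng.random((p, k))                 # concept base, CAVs as columns
W = rng.random(p)                      # readout weights of the toy head g
g = lambda a: np.log1p(np.exp(a @ W))  # g(a) = softplus(a W), a stand-in classifier

def gradient_cat(u, g, V, eps=1e-5):
    """phi_i(u) = d g(u V^T) / d u_i, by central finite differences."""
    phi = np.zeros_like(u)
    for i in range(len(u)):
        e = np.zeros_like(u)
        e[i] = eps
        phi[i] = (g((u + e) @ V.T) - g((u - e) @ V.T)) / (2 * eps)
    return phi

u = rng.random(k)                      # concept code of one sample
print(gradient_cat(u, g, V))           # importance of each of the k concepts
```

Replacing the finite differences with autodiff on the real network gives the TCAV-style directional derivative discussed next.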
Equipped with these notations, we can leverage the sensitivity metrics introduced in standard attribution methods to re-define the current measures of concept importance, as well as to introduce new CATs borrowed from the attribution methods literature:

$$
\varphi_{i}(\boldsymbol{u}) = \left\{ \begin{array}{ll}
\nabla_{\boldsymbol{u}_{i}}\, \boldsymbol{g}(\boldsymbol{u}\mathbf{V}^{\top}) & \text{(used in TCAV: ACE, ICE)}, \\
\frac{\mathbb{E}_{\mathbf{m}_{\sim i}}\left(\mathbb{V}_{\mathbf{m}_i}\left(\boldsymbol{g}((\boldsymbol{u}\odot\mathbf{m})\mathbf{V}^{\top}) \,|\, \mathbf{m}_{\sim i}\right)\right)}{\mathbb{V}\left(\boldsymbol{g}((\boldsymbol{u}\odot\mathbf{m})\mathbf{V}^{\top})\right)} & \text{(Sobol: CRAFT)}, \\
(\boldsymbol{u}_{i} - \boldsymbol{u}_{i}^{\prime}) \int_{0}^{1} \nabla_{\boldsymbol{u}_{i}}\, \boldsymbol{g}\left((\boldsymbol{u}^{\prime} + \alpha(\boldsymbol{u} - \boldsymbol{u}^{\prime}))\mathbf{V}^{\top}\right) d\alpha & \text{(Int. Gradients)}, \\
\underset{\delta\sim\mathcal{N}(0,\,\mathbf{I}\sigma)}{\mathbb{E}}\left(\nabla_{\boldsymbol{u}_{i}}\, \boldsymbol{g}((\boldsymbol{u}+\delta)\mathbf{V}^{\top})\right) & \text{(SmoothGrad)}, \\
\dots &
\end{array} \right.
$$

The complete derivation of the 7 new CATs is provided in the appendix. In the derivations, $\nabla_{\boldsymbol{u}_i}$ denotes the gradient with respect to the $i$-th coordinate of $\boldsymbol{u}$, while $\mathbb{E}$ and $\mathbb{V}$ represent the expectation and variance, respectively. In Eq. 6, $\mathbf{m}$ is a mask of real-valued random variables between 0 and 1 (i.e., $\mathbf{m} \sim \mathcal{U}([0,1]^k)$). We note that, when we use the gradient (w.r.t. $\boldsymbol{u}_i$) as an importance score, we end up with the directional derivative used in the TCAV metric [26], which is used by ACE and ICE to assess the importance of concepts.
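To make the Integrated Gradients entry above concrete, here is a sketch (with a toy softplus head standing in for $\mathbf{g}$, an assumption of ours) that approximates the path integral by a midpoint Riemann sum with finite-difference gradients, and checks the completeness axiom: the attributions sum to $g(\boldsymbol{u}\mathbf{V}^{\top}) - g(\boldsymbol{u}'\mathbf{V}^{\top})$.

```python
import numpy as np

def integrated_gradients_cat(u, g, V, baseline=None, steps=256, eps=1e-5):
    """phi_i = (u_i - u'_i) * integral over alpha in [0, 1] of
    d g((u' + alpha (u - u')) V^T) / d u_i, via a midpoint Riemann sum."""
    u0 = np.zeros_like(u) if baseline is None else baseline
    grad_acc = np.zeros_like(u)
    for alpha in (np.arange(steps) + 0.5) / steps:  # midpoints of [0, 1]
        x = u0 + alpha * (u - u0)
        for i in range(len(u)):
            e = np.zeros_like(u)
            e[i] = eps
            grad_acc[i] += (g((x + e) @ V.T) - g((x - e) @ V.T)) / (2 * eps)
    return (u - u0) * grad_acc / steps

rng = np.random.default_rng(0)
p, k = 8, 5
V, W = rng.random((p, k)), rng.random(p)
g = lambda a: np.log1p(np.exp(a @ W))  # toy head, not the paper's model
u = rng.random(k)
phi = integrated_gradients_cat(u, g, V)
# Completeness axiom: attributions sum to the output change from the baseline.
print(np.isclose(phi.sum(), g(u @ V.T) - g(np.zeros(k) @ V.T), atol=1e-4))  # → True
```

The zero baseline mirrors the "remove all concepts" starting point used by the insertion/deletion metrics later in this section.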
CRAFT leverages the Sobol-Hoeffding decomposition (used in sensitivity analysis) to estimate the concept importance. The Sobol indices measure the contribution of a concept, as well as its interactions of any order with any other concepts, to the output variance. Intuitively, the numerator of Eq. 6 is the expected variance that would be left if all variables but $\boldsymbol{u}_i$ were to be fixed.

**Evaluation of concept importance methods** Our generalization of the concept importance score, using the Concept ATtributions (CATs), allows us to observe that current concept-based explainability methods leverage only a small subset of concept importance methods. In Appendix A, we provide the complete derivation of 7 new CATs based on the following existing attribution methods: Gradient-Input [6], SmoothGrad [5], Integrated Gradients [7], VarGrad [55], Occlusion [13], HSIC [10] and RISE [32].

With concept importance scoring now formulated as a generalization of attribution methods, we can borrow the metrics from the attribution domain to evaluate the faithfulness [56, 32, 33] of concept importance methods. In particular, we adapt three distinct metrics to evaluate the significance of concept importance scores: the C-Deletion [32], C-Insertion [32], and C-$\mu$Fidelity [33] metrics. In C-Deletion, we gradually remove the concepts (as shown in Figure 4), in decreasing order of importance, and we report the network's output each time a concept is removed. When a concept is removed in C-Deletion, the corresponding coordinate in the representation is set to 0. The final C-Deletion metric is computed as the area under the curve in Figure 4. For C-Insertion, this is

![](images/2e13e51b9116240cb826d26bb63efdde60a094506038e73854905cbc1cd14634.jpg)
(a)

![](images/9cbca209f05653bb8a21cea5158e3caa486998db859be781144e901ab75c0de6.jpg)
(b)
Figure 4: (a) C-Deletion, C-Insertion curves.
Fidelity curves for C-Deletion depict the model's score as the most important concepts are removed. The results are averaged across 10 classes of ImageNet using a ResNet50 model. (b) C-Deletion, C-Insertion and C-$\mu$Fidelity across layers. We report the 3 metrics to evaluate CATs for each block (from B2 to B5) of a ResNet50. We evaluate 8 Concept Attribution methods, each represented with a different color (see legend in Figure 4(a)). The average trend of these eight methods is represented by the black dashed line (- - -). Lower C-Deletion is better; higher C-Insertion and C-$\mu$Fidelity are better. Overall, it appears that the estimation of importance becomes more faithful towards the end of the model.

the opposite: we start from a representation vector filled with zeros, and we progressively add more concepts, following an increasing order of importance.

For the C-$\mu$Fidelity, we calculate the correlation between the model's output when concepts are randomly removed and the importance assigned to those specific concepts. The results across layers for a ResNet50 model are depicted in Figure 4b. We observe that decomposition towards the end of the model is preferred across all the metrics. As a result, in the next section, we will specifically examine the case of the penultimate layer.

**A note on the last layer** Based on our empirical results, it appears that the last layer is preferable for both improved concept extraction and more accurate estimation of importance. Herein, we derive theoretical guarantees about the optimality of concept importance methods in the penultimate layer. Without loss of generality, we assume $y \in \mathbb{R}$ is the logit of the class of interest. In the penultimate layer, the score $y$ is a linear combination of the activations: $y = \boldsymbol{a}\mathbf{W} + \mathbf{b}$ for a weight matrix $\mathbf{W}$ and bias $\mathbf{b}$. In this particular case, all CATs have a closed form (see Appendix B), which allows us to derive two theorems.
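Before stating the theorems, the linear case can be made concrete. Under the linear-head assumption ($y = \boldsymbol{a}\mathbf{W}$, bias dropped for brevity), the C-Deletion metric and a Gradient-Input CAT can be sketched as follows; the helper names and toy data are ours.

```python
import numpy as np

def c_deletion(u, phi, g, V):
    """C-Deletion: zero concepts out in decreasing order of importance phi and
    return the mean of the model scores along the curve (a discrete AUC;
    lower is better, i.e. the score should drop as fast as possible)."""
    u_cur = u.astype(float).copy()
    scores = [g(u_cur @ V.T)]
    for i in np.argsort(-phi):   # most important concept first
        u_cur[i] = 0.0           # "remove" concept i
        scores.append(g(u_cur @ V.T))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
p, k = 8, 5
V, W = rng.random((p, k)), rng.random(p)
g = lambda a: a @ W              # linear head: the penultimate-layer case
u = rng.random(k)
phi = u * (V.T @ W)              # Gradient-Input CAT, exact for a linear head
print(c_deletion(u, phi, g, V))  # AUC achieved by the Gradient-Input ordering
```

In this toy linear setting, removing concepts in decreasing order of their contribution $u_i(\mathbf{V}^{\top}\mathbf{W})_i$ is the greedy strategy, which is what the theorems below prove optimal for the C-Deletion and C-Insertion metrics.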
The first theorem tackles CAT optimality for the C-Deletion and C-Insertion metrics (demonstration in Appendix D). We observe that the C-Deletion and C-Insertion problems can be represented as weighted matroids; therefore, greedy algorithms lead to optimal solutions for CATs, and a similar theorem can be derived for C-$\mu$Fidelity.

Theorem 3.1 (Optimal C-Deletion, C-Insertion in the penultimate layer). When decomposing in the penultimate layer, Gradient-Input, Integrated Gradients, Occlusion, and RISE yield the optimal solution for the C-Deletion and C-Insertion metrics. More generally, any method $\varphi(\mathbf{u})$ that satisfies the condition $\forall (i,j) \in \{1,\dots,k\}^2, (\mathbf{u} \odot \mathbf{e}_i)\mathbf{V}^\top \mathbf{W} \geq (\mathbf{u} \odot \mathbf{e}_j)\mathbf{V}^\top \mathbf{W} \Rightarrow \varphi(\mathbf{u})_i \geq \varphi(\mathbf{u})_j$ yields the optimal solution.

Theorem 3.2 (Optimal C-$\mu$Fidelity in the penultimate layer). When decomposing in the penultimate layer, Gradient-Input, Integrated Gradients, Occlusion, and RISE yield the optimal solution for the C-$\mu$Fidelity metric.

Therefore, for all 3 metrics, the concept importance methods based on Gradient-Input, Integrated Gradients, Occlusion, and RISE are optimal when used in the penultimate layer.

In summary, our investigation of concept extraction methods from the perspective of dictionary learning demonstrates that the NMF approach, specifically when extracting concepts from the penultimate layer, presents the most appealing trade-off compared to the PCA and K-Means methods. In addition, our formalization of concept importance using attribution methods provides a theoretical guarantee for 4 different CATs. Henceforth, we will consider the following setup: an NMF on the penultimate layer to extract the concepts, combined with a concept importance method based on Integrated Gradients.
# 3.3 Unveiling main strategies

![](images/85703bc9e949958f902efd93f4d23df3ee93f50bfed4e9c5f70bfc03e93b2b1c.jpg)
Figure 5: From global (class-based) to local (image-based) importance. Global importance can be decomposed into reliability and prevalence scores. Prevalence quantifies how frequently a concept is encountered, and reliability indicates how diagnostic a concept is for the class. The bar charts are computed for the class "Espresso" on a ResNet50 (see Figure 1, left panel).

So far, concept-based explainability methods have mainly focused on evaluating the global importance of concepts, i.e., the importance of concepts for an entire class [26, 29]. This can be limiting when studying misclassified data points, as the most important concepts for a given class might not hold for an individual sample (local importance). Fortunately, our formulation of concept importance using attribution methods gives us access to importance scores at the level of individual samples (i.e., $\varphi(\pmb{u})$). Here, we show how to use these local importance scores to efficiently cluster data points based on the strategy used for their classification.

The local (or image-based) importance of concepts can be integrated into global measures of importance for the entire class through the notions of prevalence and reliability (see Figure 5). A concept is said to be prevalent at the class level when it appears very frequently. The prevalence score is computed from the number of times a concept is identified as the most important one, i.e., $\arg\max \varphi(\boldsymbol{u})$. At the same time, a concept is said to be reliable if it is very likely to trigger a correct prediction. Reliability is quantified using the mean classification accuracy on samples sharing the same most important concept.

**Strategic cluster graph.**
In the strategic cluster graph (Figure 1 and Figure 6), we combine the notions of concept prevalence and reliability to reveal the main strategies of a model for a given category; more precisely, we reveal their distribution across the different samples of the class. We use a dimensionality reduction technique (UMAP [57]) to arrange the data points based on the concept importance vector $\varphi(\boldsymbol{u})$ of each sample. Data points are colored according to the associated concept with the highest importance, $\arg\max \varphi(\boldsymbol{u})$. Interestingly, one can see in Figure 1 and Figure 6 that spatially close points represent samples classified using similar strategies - as they exhibit similar concept importance - and not necessarily similar embeddings. For example, for the "lemon" object category (Figure 6), the texture of the lemon peel is the most prevalent concept, as it appears to be the dominant concept in $90\%$ of the samples (see the green cluster in Figure 6). We also observe that the concept "pile of round, yellow objects" is not reliable for the network to properly classify a lemon, as it results in a mean classification accuracy of only $40\%$ (see top-left graph in Figure 6).

In Figure 6 (right panel), we have exploited the strategic cluster graph to understand the classification strategies leading to bad classifications. For example, an orange ($1^{st}$ image, $1^{st}$ row) was classified as a lemon because of the peel texture they both share. Similarly, a cathedral roof was classified as a lemon because of its wedge-shaped structure ($4^{th}$ image, $1^{st}$ row).

# 4 Discussion

This article introduced a theoretical framework that unifies all modern concept-based explainability methods. Breaking down and formalizing the two essential steps in these methods, concept extraction and concept importance scoring, allowed us to better understand the underlying principles driving concept-based explainability.
We leveraged this unified framework to propose new evaluation metrics for assessing the quality of extracted concepts. Through experimental and theoretical analyses, we justified the standard use of the last layer of an ANN for concept-based explanation. Finally, we harnessed the parallel between concept importance and attribution methods to gain insights into global concept importance (at the class level) by examining local concept importance (for individual samples). We proposed the strategic cluster graph, which provides insights into the strategy used by an ANN to classify images. We have provided an example use of this approach to better understand

![](images/7bed030d372b20a3bb45ae49656c467e7a2ae3a6044c11fe5472ee0f21fb6711.jpg)
Understand False Positive

![](images/e1d3befe6c13d39b1ca3ddf295a1cdc44eba957af842440a72c1a6eb7ae137ad.jpg)
Hallucinating a lemon because of the peel texture

![](images/cf0c44bb7c98d611ce0e30c32f5b67d0d4f49e9b4b035083f23894d7a29883d4.jpg)
Hallucinating a lemon because of the lemon wedge shape

![](images/c8053ae799204fdd9239f589089385c9da589cb7fb1bbe862f399b07a5c962f8.jpg)
Hallucination of lemons from piles of round, yellow objects
Figure 6: Strategic cluster graph for the lemon category. Left: UMAP of lemon samples in the concept space. Each concept is represented with its own color and is exemplified with examples belonging to the cluster. The concepts are $\bullet$ the lemon wedge shape, $\bullet$ a pile of round, yellow objects, $\bullet$ green objects hanging on a tree, and finally $\bullet$ the peel texture, which is the predominant strategy. The reliability of each concept is shown in the top-left bar chart. Right: Examples of images predicted as lemon along with their corresponding explanations. These misclassified images are recognized as lemons through the implementation of strategies that are captured by our proposed strategic cluster graph.
![](images/713228d17ed44018dac1f8d478829e8a7d7d455710dd4a7b4054c4c11184f088.jpg)
Hallucinating a lemon because of a green object hanging on a tree

the failure cases of a system. Overall, our work demonstrates the potential benefits of the dictionary learning framework for automatic concept extraction, and we hope this work will pave the way for further advancements in the field of XAI.

# Acknowledgements

This work was conducted as part of the DEEL project. Funding was provided by ONR (N00014-19-1-2029), NSF (IIS-1912280 and EAR-1925481), DARPA (D19AC00015), NIH/NINDS (R21 NS 112743), and the ANR-3IA Artificial and Natural Intelligence Toulouse Institute (ANR-19-PI3A0004). Additional support was provided by the Carney Institute for Brain Science and the Center for Computation and Visualization (CCV). We acknowledge the Cloud TPU hardware resources that Google made available via the TensorFlow Research Cloud (TFRC) program, as well as computing hardware supported by NIH Office of the Director grant S10OD025181.

# References

[1] Paolo Tripicchio and Salvatore D'Avella. Is deep learning ready to satisfy industry needs? Procedia Manufacturing, 51:1192-1199, 2020.
[2] Finale Doshi-Velez and Been Kim. Towards a rigorous science of interpretable machine learning. ArXiv e-print, 2017.
[3] Alon Jacovi, Ana Marasovic, Tim Miller, and Yoav Goldberg. Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in ai. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 624-635, 2021.
[4] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Workshop, Proceedings of the International Conference on Learning Representations (ICLR), 2013.
[5] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise.
In Workshop on Visualization for Deep Learning, Proceedings of the International Conference on Machine Learning (ICML), 2017.
[6] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Proceedings of the International Conference on Machine Learning (ICML), 2017.
[7] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Proceedings of the International Conference on Machine Learning (ICML), 2017.
[8] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
[9] Thomas Fel, Remi Cadene, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, and Thomas Serre. Look at the variance! Efficient black-box explanations with sobol-based sensitivity analysis. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
[10] Paul Novello, Thomas Fel, and David Vigouroux. Making sense of dependence: Efficient black-box explanations using dependence measure. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[11] Thomas Fel, Melanie Ducoffe, David Vigouroux, Remi Cadene, Mikael Capelle, Claire Nicodeme, and Thomas Serre. Don't lie to me! Robust and efficient explainability with verified perturbation analysis. Workshop on Formal Verification of Machine Learning, Proceedings of the International Conference on Machine Learning (ICML), 2022.
[12] Mara Graziani, Iam Palatnik de Sousa, Marley MBR Vellasco, Eduardo Costa da Silva, Henning Müller, and Vincent Andrearczyk. Sharpening local interpretable model-agnostic explanations for histopathology: Improved understandability and reliability. In Medical Image Computing and Computer Assisted Intervention (MICCAI). Springer, 2021.
[13] Matthew D Zeiler and Rob Fergus.
Visualizing and understanding convolutional networks. In Proceedings of the IEEE European Conference on Computer Vision (ECCV), 2014.
[14] Ruth C. Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
[15] Marouane Il Idrissi, Nicolas Bousquet, Fabrice Gamboa, Bertrand Iooss, and Jean-Michel Loubes. On the coalitional decomposition of parameters of interest, 2023.
[16] Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
[17] Leon Sixt, Maximilian Granz, and Tim Landgraf. When explanations lie: Why many modified bp attributions fail. In Proceedings of the International Conference on Machine Learning (ICML), 2020.
[18] Dylan Slack, Anna Hilgard, Sameer Singh, and Himabindu Lakkaraju. Reliable post hoc explanations: Modeling uncertainty in explainability. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
[19] Sukrut Rao, Moritz Böhle, and Bernt Schiele. Towards better understanding attribution methods. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
[20] Peter Hase and Mohit Bansal. Evaluating explainable ai: Which algorithmic explanations help users predict model behavior? In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
[21] Hua Shen and Ting-Hao Huang. How useful are the machine-generated interpretations to general users? A human evaluation on guessing the incorrectly predicted labels. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 8, pages 168-172, 2020.
[22] Julien Colin, Thomas Fel, Rémi Cadène, and Thomas Serre. What I cannot predict, I do not understand: A human-centered evaluation framework for explainability methods.
In Advances in Neural Information Processing Systems (NeurIPS), 2021.
[23] Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, and Olga Russakovsky. HIVE: Evaluating the human interpretability of visual explanations. In Proceedings of the IEEE European Conference on Computer Vision (ECCV), 2022.
[24] Giang Nguyen, Daeyoung Kim, and Anh Nguyen. The effectiveness of feature attribution methods and its correlation with automatic evaluation scores. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
[25] Leon Sixt, Martin Schuessler, Oana-Iuliana Popescu, Philipp Weiß, and Tim Landgraf. Do users benefit from interpretable vision? A user study, baseline, and dataset. In Proceedings of the International Conference on Learning Representations (ICLR), 2022.
[26] Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In Proceedings of the International Conference on Machine Learning (ICML), 2018.
[27] Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. Towards automatic concept-based explanations. In Advances in Neural Information Processing Systems (NeurIPS), pages 9273-9282, 2019.
[28] Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A Ehinger, and Benjamin IP Rubinstein. Invertible concept-based explanations for cnn models with non-negative concept activation vectors. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11682-11690, 2021.
[29] Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, and Thomas Serre. Craft: Concept recursive activation factorization for explainability. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
[30] Mara Graziani, An-phi Nguyen, Laura O'Mahony, Henning Müller, and Vincent Andrearczyk.
Concept discovery and dataset exploration with singular value decomposition. In ICLR 2023 Workshop on Pitfalls of Limited Data and Computation for Trustworthy ML, 2023.
[31] James Genone and Tania Lombrozo. Concept possession, experimental semantics, and hybrid theories of reference. Philosophical Psychology, 25(5):717-742, 2012.
[32] Vitali Petsiuk, Abir Das, and Kate Saenko. Rise: Randomized input sampling for explanation of black-box models. In Proceedings of the British Machine Vision Conference (BMVC), 2018.
[33] Umang Bhatt, Adrian Weller, and José M. F. Moura. Evaluating and aggregating feature-based model explanations. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2020.
[34] Johannes Haug, Stefan Zurn, Peter El-Jiz, and Gjergji Kasneci. On baselines for local feature attributions. arXiv preprint arXiv:2101.00905, 2021.
[35] Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep Ravikumar, Seungyeon Kim, Sanjiv Kumar, and Cho-Jui Hsieh. Evaluations and methods for explanation through robustness analysis. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.
[36] Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. The (un)reliability of saliency methods. 2019.
[37] Pascal Sturmfels, Scott Lundberg, and Su-In Lee. Visualizing the impact of feature attribution baselines. Distill, 2020.
[38] Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, et al. Toy models of superposition. arXiv preprint arXiv:2209.10652, 2022.
[39] Julien Mairal, Francis Bach, Jean Ponce, et al. Sparse modeling for image and vision processing. Foundations and Trends® in Computer Graphics and Vision, 8(2-3):85-283, 2014.
[40] Alireza Makhzani and Brendan Frey. K-sparse autoencoders.
Proceedings of the International Conference on Learning Representations (ICLR), 2014.
[41] Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu, Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Zac Hatfield-Dodds, Alex Tamkin, Karina Nguyen, Brayden McLean, Josiah E Burke, Tristan Hume, Shan Carter, Tom Henighan, and Christopher Olah. Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread, 2023. https://transformer-circuits.pub/2023/monosemantic-features/index.html.
[42] Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, and Christopher Olah. Toy models of superposition. Transformer Circuits Thread, 2022.
[43] Carl Eckart and Gale Young. The approximation of one matrix by another of lower rank. Psychometrika, 1(3):211-218, 1936.
[44] Vardan Papyan, XY Han, and David L Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40):24652-24663, 2020.
[45] Chris Ding, Xiaofeng He, and Horst D Simon. On the equivalence of nonnegative matrix factorization and spectral clustering. In Proceedings of the 2005 SIAM International Conference on Data Mining, pages 606-610. SIAM, 2005.
[46] Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. Efficient neural network robustness certification with general activation functions. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
[47] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
+[48] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4510-4520, 2018. +[49] Jayneel Parekh, Sanjeel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc, and Gaël Richard. Listen to interpret: Post-hoc interpretability for audio networks with NMF. Advances in Neural Information Processing Systems (NeurIPS), 2022. +[50] Jeremy Howard. Imagenette dataset. URL https://github.com/fastai/imagenette/. +[51] Olivier Bousquet and André Elisseeff. Stability and generalization. The Journal of Machine Learning Research, 2002. +[52] Cédric Villani et al. Optimal transport: old and new, volume 338. Springer, 2009. +[53] Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. In International Conference on Machine Learning, pages 20827-20840. PMLR, 2022. +[54] Amirata Ghorbani, Abubakar Abid, and James Zou. Interpretation of neural networks is fragile. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2019. +[55] Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. A benchmark for interpretability methods in deep neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2019. +[56] Alon Jacovi and Yoav Goldberg. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2020. +[57] Leland McInnes, John Healy, and James Melville. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426, 2018.
+[58] Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poche, Justin Plakoo, Remi Cadene, Mathieu Chalvidal, Julien Colin, Thibaut Boissin, Louis Bethune, Agustin Picard, Claire Nicodeme, Laurent Gardes, Gregory Flandin, and Thomas Serre. Xplique: A deep learning explainability toolbox. Workshop on Explainable Artificial Intelligence for Computer Vision (CVPR), 2022. + +[59] Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. Towards better understanding of gradient-based attribution methods for deep neural networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2018. +[60] Matthew Sotoudeh and Aditya V. Thakur. Computing linear restrictions of neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2019. +[61] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Proceedings of the IEEE European Conference on Computer Vision (ECCV), 2014. +[62] Hassler Whitney. On the abstract properties of linear dependence. Hassler Whitney Collected Papers, pages 147-171, 1992. +[63] Sajad Fathi Hafshejani and Zahra Moaberfard. Initialization for non-negative matrix factorization: a comprehensive review. International Journal of Data Science and Analytics, 2023. +[64] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Proceedings of the International Conference on Learning Representations (ICLR), 2015. +[65] Bogdan Dumitrescu and Paul Irofti. Dictionary learning algorithms and applications. Springer, 2018. 
\ No newline at end of file diff --git a/aholisticapproachtounifyingautomaticconceptextractionandconceptimportanceestimation/images.zip b/aholisticapproachtounifyingautomaticconceptextractionandconceptimportanceestimation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..f790ac4bd031c08729894e7bf20eb44cca1e2b23 --- /dev/null +++ b/aholisticapproachtounifyingautomaticconceptextractionandconceptimportanceestimation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f46f139a341623a6eb095474bb83bf831d22ab38d7003b4643bf68db892f95b0 +size 383360 diff --git a/aholisticapproachtounifyingautomaticconceptextractionandconceptimportanceestimation/layout.json b/aholisticapproachtounifyingautomaticconceptextractionandconceptimportanceestimation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..bbb258043010694847aaa4e032de670fd7e5f9ea --- /dev/null +++ b/aholisticapproachtounifyingautomaticconceptextractionandconceptimportanceestimation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aebb4d8d304c661da0fae8b93b5a268680ddba8f70ec1df72dbcb59d48244dbe +size 434340 diff --git a/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/edce1e41-7a8b-4ccb-9664-8ce0dc33eff8_content_list.json b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/edce1e41-7a8b-4ccb-9664-8ce0dc33eff8_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..55bbbc7ad8a3fc8bab53a7a0ed6ecfa5971e29a1 --- /dev/null +++ b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/edce1e41-7a8b-4ccb-9664-8ce0dc33eff8_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d3a842a2ccfdc098409b82f942f34b2a3067210d3482e4b7cb3e837d189580e +size 130853 diff --git 
a/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/edce1e41-7a8b-4ccb-9664-8ce0dc33eff8_model.json b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/edce1e41-7a8b-4ccb-9664-8ce0dc33eff8_model.json new file mode 100644 index 0000000000000000000000000000000000000000..51eb723171a4e580e7e55a8c7e6bf80f2b838cc7 --- /dev/null +++ b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/edce1e41-7a8b-4ccb-9664-8ce0dc33eff8_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb531c4d7347c3265148b7a61e97c8bab3550e1a2d7125af2daf8f971449e07a +size 159990 diff --git a/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/edce1e41-7a8b-4ccb-9664-8ce0dc33eff8_origin.pdf b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/edce1e41-7a8b-4ccb-9664-8ce0dc33eff8_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1564a8d650a00822d31900e083f77335fd48f30d --- /dev/null +++ b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/edce1e41-7a8b-4ccb-9664-8ce0dc33eff8_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3315ae705bf97a233fa4ad1d50ea4aacac5be640f102af45dd5e7ec380e8ec21 +size 1171536 diff --git a/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/full.md b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3a1eaf7fbe763cc66783cebc4826773c61e67499 --- /dev/null +++ b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/full.md @@ -0,0 +1,464 @@ +# (Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy + +Elan Rosenfeld +Machine Learning Department +Carnegie Mellon University +elan@cmu.edu + +Saurabh Garg +Machine Learning Department +Carnegie Mellon University + +# Abstract + +We derive a 
new, (almost) guaranteed upper bound on the error of deep neural networks under distribution shift using unlabeled test data. Prior methods are either vacuous in practice or accurate on average but heavily underestimate error for a sizeable fraction of shifts. In particular, the latter only give guarantees based on complex continuous measures such as test calibration, which cannot be verified without labels and are therefore unreliable. Instead, our bound requires a simple, intuitive condition which is well justified by prior empirical works and holds in practice effectively $100\%$ of the time. The bound is inspired by $\mathcal{H}\Delta\mathcal{H}$-divergence but is easier to evaluate and substantially tighter, consistently providing non-vacuous test error upper bounds. Estimating the bound requires optimizing one multiclass classifier to disagree with another, for which some prior works have used sub-optimal proxy losses; we devise a "disagreement loss" which is theoretically justified and performs better in practice. We expect this loss can serve as a drop-in replacement for future methods which require maximizing multiclass disagreement. Across a wide range of natural and synthetic distribution shift benchmarks, our method gives valid error bounds while achieving average accuracy comparable to—though not better than—competitive estimation baselines. + +# 1 Introduction + +When deploying a model, it is important to be confident in how it will perform under inevitable distribution shift. Standard methods for achieving this include data-dependent uniform convergence bounds [41, 6] (typically vacuous in practice) or assuming a precise model of how the distribution can shift [57, 10]. Unfortunately, it is difficult or impossible to determine how severely these assumptions are violated by real data [62], so practitioners usually cannot trust such bounds with confidence.
+ +To better estimate test performance in the wild, some recent work instead tries to directly predict accuracy of neural networks using unlabeled data from the test distribution of interest [21, 3, 40]. While these methods predict the test performance surprisingly well, they lack pointwise trustworthiness and verifiability: their estimates are good on average over all distribution shifts, but they provide no signal of the quality of any individual prediction (here, each point is a single test distribution, for which a method predicts a classifier's average accuracy). Because of the opaque conditions under which these methods work, it is also difficult to anticipate their failure cases—indeed, it is reasonably common for them to substantially overestimate test accuracy for a particular shift, which is problematic when optimistic deployment would be very costly. Worse yet, we find that this gap grows with test error (Figure 1), making these predictions least reliable precisely when their reliability is most important. Although it is clearly impossible to guarantee upper bounds on test error for all shifts, there is still potential for error bounds that are intuitive and reasonably trustworthy. + +![](images/b58436d740cf0834bfea67060198d4e5f01359f3e4358014e88c05927c3ddde6.jpg) +Figure 1: Our bound vs. three prior methods for estimation across a wide variety of distribution shift benchmarks (e.g., WILDs, BREEDs, DomainNet) and training methods (e.g., ERM, FixMatch, BN-adapt). Prior methods are accurate on average, but it is difficult or impossible to know when a given prediction is reliable and why. Worse yet, they usually overestimate accuracy, with the gap growing as test accuracy decreases—this is precisely when a reliable, conservative estimate is most desirable. Instead, $\mathrm{DIS}^2$ maximizes the disagreement discrepancy to give a reliable error upper bound which holds effectively $100\%$ of the time. See Appendix F for stratification by training method. 
+ +In this work, we develop a method for (almost) provably bounding test error of classifiers under distribution shift using unlabeled test points. Our bound's only requirement is a simple, intuitive, condition which describes the ability of a hypothesis class to achieve small loss on a particular objective defined over the (unlabeled) train and test distributions. Inspired by $\mathcal{H}\Delta\mathcal{H}$ -divergence [41, 7], our method requires training a critic to maximize agreement with the classifier of interest on the source distribution while simultaneously maximizing disagreement on the target distribution; we refer to this joint objective as the Disagreement Discrepancy, and so we name the method $\mathrm{DIS}^2$ . We optimize this discrepancy over linear classifiers using deep features—or linear functions thereof—finetuned on only the training set. Recent evidence suggests that such representations are sufficient for highly expressive classifiers even under large distribution shift [61]. Experimentally, we find that our bound is valid effectively $100\%$ of the time, $^1$ consistently giving non-trivial lower bounds on test accuracy which are reasonably comparable to competitive baselines. + +Additionally, our proof of the bound leads to a natural (post-hoc) hypothesis test of the validity of its lone assumption. This provides an unusually strong positive signal: for more than half the datasets we evaluate we prove with high probability that the assumption holds; the corresponding indicator that it does not hold never occurs. We also show that it is possible to approximately test this bound's likelihood of being valid a priori using only unlabeled data: the optimization process itself provides useful information about the bound's validity, and we use this to construct a score which linearly correlates with the tightness of the bound. 
This score can then be used to relax the original bound into a sequence of successively tighter-yet-less-conservative estimates, interpolating between robustness and accuracy and allowing a user to make estimates according to their specific risk tolerance. + +While maximizing agreement is statistically well understood, our method also calls for maximizing disagreement on the target distribution. This is not so straightforward in the multiclass setting, and we observe that prior works use unsuitable losses which do not correspond to minimizing the 0-1 loss of interest and are non-convex (or even concave) in the model logits [12, 50, 23]. To rectify this, we derive a new "disagreement loss" which serves as an effective proxy loss for maximizing multiclass disagreement. Experimentally, we find that minimizing this loss results in lower risk (that is, higher disagreement) compared to prior methods, and we believe it can serve as a useful drop-in replacement for any future methods which require maximizing multiclass disagreement. + +Experiments across numerous vision datasets (BREEDs [65], FMoW-WILDs [35], Visda [51], Domainnet [53], CIFAR10, CIFAR100 [36] and OfficeHome [69]) demonstrate the effectiveness of our bound. Though $\mathrm{DIS}^2$ is competitive with prior methods for error estimation, we emphasize that our focus is not on improving raw predictive accuracy—rather, we hope to obtain reliable (i.e., + +correct), reasonably tight bounds on the test error of a given classifier under distribution shift. In particular, while existing methods tend to severely overestimate accuracy as the true accuracy drops, our bound maintains its validity while remaining non-vacuous, even for drops in accuracy as large as $70\%$ . In addition to source-only training, we experiment with unsupervised domain adaptation methods that use unlabeled target data and show that our observations continue to hold. + +# 2 Related Work + +Estimating test error with unlabeled data. 
The generalization capabilities of overparameterized models on in-distribution data have been extensively studied using conventional machine learning tools [46, 47, 45, 48, 17, 5, 73, 39, 42]. This research aims to bound the generalization gap by evaluating complexity measures of the trained model. However, these bounds tend to be numerically loose compared to actual generalization error [70, 43]. Another line of work instead explores the use of unlabeled data for predicting in-distribution generalization [56, 55, 20, 44, 32]. More relevant to our work, there are several methods that predict the error of a classifier under distribution shift with unlabeled test data: (i) methods that explicitly predict the correctness of the model on individual unlabeled points [14, 15, 8]; and (ii) methods that directly estimate the overall error without making a pointwise prediction [9, 24, 12, 21, 3]. + +To achieve a consistent estimate of the target accuracy, several works require calibration on the target domain [32, 24]. However, these methods often yield poor estimates because deep models trained and calibrated on a source domain are not typically calibrated on previously unseen domains [49]. Additionally, [14, 24] require a subset of labeled target domains to learn a regression function that predicts model performance—but thus requires significant a priori knowledge about the nature of shift that, in practice, might not be available before models are deployed in the wild. + +Closest to our work is [12], where the authors use domain-invariant predictors as a proxy for unknown target labels. However, like other works, their method only estimates the target accuracy—the actual error bounds they derive are not computable in practice. Second, even their estimate is computationally demanding and relies on multiple approximations, tuning of numerous hyperparameters, e.g. 
lagrangian multipliers; as a result, proper tuning is difficult and the method does not scale to modern deep networks. Finally, they suggest minimizing the (concave) negative cross-entropy loss, but we show that this can be a poor proxy for maximizing disagreement, instead proposing a more suitable replacement which we find performs much better. + +Uniform convergence bounds. Our bound is inspired by classic analyses using $\mathcal{H}$ - and $\mathcal{H}\Delta \mathcal{H}$ -divergence [41, 6, 7]. These provide error bounds via a complexity measure that is both data- and hypothesis-class-dependent. This motivated a long line of work on training classifiers with small corresponding complexity, such as restricting classifiers' discriminative power between source and target data [18, 67, 38, 72]. Unfortunately, such bounds are often intractable to evaluate and are usually vacuous in real world settings. We provide a more detailed comparison to our approach in Section 3.1. + +# 3 Deriving an (Almost) Provable Error Bound + +Notation. Let $\mathcal{S}, \mathcal{T}$ denote the source and target (train and test) distributions, respectively, over labeled inputs $(x,y) \in \mathcal{X} \times \mathcal{Y}$ , and let $\hat{\mathcal{S}}, \hat{\mathcal{T}}$ denote sets of samples from them with cardinalities $n_S$ and $n_T$ (they also denote the corresponding empirical distributions). Recall that we observe only the covariates $x$ without the label $y$ when a sample is drawn from $\mathcal{T}$ . We consider classifiers $h: \mathcal{X} \to \mathbb{R}^{|\mathcal{Y}|}$ which output a vector of logits, and we let $\hat{h}$ denote the particular classifier whose error we aim to bound. Generally, we use $\mathcal{H}$ to denote a hypothesis class of such classifiers. Where clear from context, we use $h(x)$ to refer to the argmax logit, i.e. the predicted class. 
We treat these classifiers as deterministic throughout, though our analysis can easily be extended to probabilistic classifiers and labels. For a distribution $\mathcal{D}$ on $\mathcal{X} \times \mathcal{Y}$ , let $\epsilon_{\mathcal{D}}(h,h') := \mathbb{E}_{\mathcal{D}}[\mathbf{1}\{\arg \max_y h(x)_y \neq \arg \max_y h'(x)_y\}]$ denote the one-hot disagreement between classifiers $h$ and $h'$ on $\mathcal{D}$ . Let $y^*$ represent the true labeling function such that $y^*(x) = y$ for all samples $(x,y)$ ; with some abuse of notation, we write $\epsilon_{\mathcal{D}}(h)$ to mean $\epsilon_{\mathcal{D}}(h,y^*)$ , i.e. the 0-1 error of classifier $h$ on distribution $\mathcal{D}$ . + +The bound we derive in this work is extremely simple and relies on one new concept: + +Definition 3.1. The disagreement discrepancy $\Delta(h, h')$ is the disagreement between $h$ and $h'$ on $\mathcal{T}$ minus their disagreement on $\mathcal{S}$ : + +$$ +\Delta (h, h ^ {\prime}) := \epsilon_ {\mathcal {T}} (h, h ^ {\prime}) - \epsilon_ {\mathcal {S}} (h, h ^ {\prime}). +$$ + +We leave the dependence on $S, T$ implicit. Note that this term is symmetric in its arguments and signed—it can be negative. With this definition, we now have the following lemma: + +Lemma 3.2. For any classifier $h$ , $\epsilon_{\mathcal{T}}(h) = \epsilon_{\mathcal{S}}(h) + \Delta (h,y^{*})$ + +Proof. By definition, $\epsilon_{\mathcal{T}}(h) = \epsilon_{\mathcal{S}}(h) + (\epsilon_{\mathcal{T}}(h) - \epsilon_{\mathcal{S}}(h)) = \epsilon_{\mathcal{S}}(h) + \Delta (h,y^{*})$ + +![](images/e93daf4848123b319096810315ec5072bacb14c3711976dc15a9f6cc2de25ad6.jpg) + +We cannot directly use Lemma 3.2 to estimate $\epsilon_{\mathcal{T}}(\hat{h})$ because the second term is unknown. However, observe that $y^{*}$ is fixed. 
That is, while a learned $\hat{h}$ will depend on $y^{*}$ and $\mathcal{S}$ and therefore $\Delta (\hat{h},y^{*})$ may be large under large distribution shift— $y^{*}$ is not chosen to maximize $\Delta (\hat{h},y^{*})$ in response to the $\hat{h}$ we have learned. This means that for a sufficiently expressive hypothesis class $\mathcal{H}$ , it should be possible to identify an alternative labeling function $h^{\prime}\in \mathcal{H}$ for which $\Delta (\hat{h},h^{\prime})\geq \Delta (\hat{h},y^{*})$ (we refer to such $h^\prime$ as the critic). In other words, we should be able to find an $h^\prime \in \mathcal{H}$ which, if it were the true labeling function, would imply at least as large of a drop in accuracy from train to test as occurs in reality. This key observation serves as the basis for our bound, and we discuss it in greater detail in Section 3.1. + +In this work we consider the class $\mathcal{H}$ of linear critics, with the features $\mathcal{X}$ defined as source-finetuned neural representations or the logits output by the classifier $\hat{h}$ . Prior work provides strong evidence that this class has surprising capacity under distribution shift, including the possibility that functions very similar to $y^{*}$ lie in $\mathcal{H}$ [61, 34, 33]. We formalize this intuition with the following assumption: + +Assumption 3.3. Define $h^* \coloneqq \arg \max_{h' \in \mathcal{H}} \Delta(\hat{h}, h')$ . We assume + +$$ +\Delta (\hat {h}, y ^ {*}) \leq \Delta (\hat {h}, h ^ {*}). +$$ + +Note that this statement is necessarily true whenever $y^{*} \in \mathcal{H}$ ; it only becomes meaningful when considering restricted $\mathcal{H}$ , as we do here. Note also that this assumption is made specifically for $\hat{h}$ , i.e., on a per-classifier basis. 
This is important because while the above may not hold for every classifier $\hat{h}$ , it need only hold for the classifiers whose error we would hope to bound, which is in practice a very small subset of classifiers (such as those which can be found by approximately minimizing the empirical training risk via SGD). From Lemma 3.2, we immediately have the following result: + +Proposition 3.4. Under Assumption 3.3, $\epsilon_{\mathcal{T}}(\hat{h}) \leq \epsilon_{\mathcal{S}}(\hat{h}) + \Delta(\hat{h}, h^*)$ . + +Unfortunately, identifying the optimal critic $h^*$ is intractable, meaning this bound is still not estimable—we present it as an intermediate result for clarity of presentation. To derive the practical bound we report in our experiments, we need one additional step. In Section 4, we derive a "disagreement loss" which we use to approximately maximize the empirical disagreement discrepancy $\hat{\Delta}(\hat{h}, \cdot) = \epsilon_{\hat{T}}(\hat{h}, \cdot) - \epsilon_{\hat{S}}(\hat{h}, \cdot)$ . Relying on this loss, we instead make the assumption: + +Assumption 3.5. Suppose we identify the critic $h' \in \mathcal{H}$ which maximizes a concave surrogate to the empirical disagreement discrepancy. We assume $\Delta(\hat{h}, y^*) \leq \Delta(\hat{h}, h')$ . + +This assumption is slightly stronger than Assumption 3.3—in particular, Assumption 3.3 implies with high probability a weaker version of Assumption 3.5 with additional terms that decrease with increasing sample size and a tighter proxy loss. $^2$ Thus, the difference in strength between these two assumptions shrinks as the number of available samples grows and as the quality of our surrogate objective improves. Ultimately, our bound holds without these terms, implying that the stronger assumption is reasonable in practice. We can now present our main result: + +Theorem 3.6 (Main Bound). 
Under Assumption 3.5, with probability $\geq 1 - \delta$ + +$$ +\epsilon_{\mathcal{T}}(\hat{h}) \leq \epsilon_{\hat{\mathcal{S}}}(\hat{h}) + \hat{\Delta}(\hat{h}, h') + \sqrt{\frac{(n_S + 4 n_T) \log(1/\delta)}{2 n_S n_T}}. +$$ + +Proof. Assumption 3.5 implies $\epsilon_{\mathcal{T}}(\hat{h}) \leq \epsilon_{\mathcal{S}}(\hat{h}) + \Delta(\hat{h}, h') = \epsilon_{\mathcal{S}}(\hat{h}, y^*) + \epsilon_{\mathcal{T}}(\hat{h}, h') - \epsilon_{\mathcal{S}}(\hat{h}, h')$, so the problem reduces to upper bounding these three terms. We define the random variables + +$$ +r_{\mathcal{S},i} = \begin{cases} 1/n_S, & h'(x_i) = \hat{h}(x_i) \neq y_i, \\ -1/n_S, & h'(x_i) \neq \hat{h}(x_i) = y_i, \\ 0, & \text{otherwise}, \end{cases} \qquad r_{\mathcal{T},i} = \frac{\mathbf{1}\{\hat{h}(x_i) \neq h'(x_i)\}}{n_T} +$$ + +for source and target samples, respectively. By construction, the sum of all of these variables is precisely $\epsilon_{\hat{\mathcal{S}}}(\hat{h}, y^*) + \epsilon_{\hat{\mathcal{T}}}(\hat{h}, h') - \epsilon_{\hat{\mathcal{S}}}(\hat{h}, h')$ (note these are the empirical terms).
Further, observe that + +$$ +\begin{array}{l} \mathbb{E}\left[\sum_{\hat{\mathcal{S}}} r_{\mathcal{S},i}\right] = \mathbb{E}_{\mathcal{S}}[\mathbf{1}\{\hat{h}(x_i) \neq y_i\} - \mathbf{1}\{\hat{h}(x_i) \neq h'(x_i)\}] = \epsilon_{\mathcal{S}}(\hat{h}, y^*) - \epsilon_{\mathcal{S}}(\hat{h}, h'), \\ \mathbb{E}\left[\sum_{\hat{\mathcal{T}}} r_{\mathcal{T},i}\right] = \mathbb{E}_{\mathcal{T}}[\mathbf{1}\{\hat{h}(x_i) \neq h'(x_i)\}] = \epsilon_{\mathcal{T}}(\hat{h}, h'), \end{array} +$$ + +and thus their expected sum is $\epsilon_{\mathcal{S}}(\hat{h}, y^*) + \epsilon_{\mathcal{T}}(\hat{h}, h') - \epsilon_{\mathcal{S}}(\hat{h}, h')$, which are the population terms we hope to bound. Now we apply Hoeffding's inequality: the probability that the expectation exceeds their sum by $t$ is no more than $\exp\left(-\frac{2t^2}{n_S(2/n_S)^2 + n_T(1/n_T)^2}\right)$. Solving for $t$ completes the proof. + +Remark 3.7. While we state Theorem 3.6 as an implication, Assumption 3.5 is equivalent to the stated bound up to finite-sample terms. Our empirical findings (and prior work) suggest that Assumption 3.5 is reasonable in general, but this equivalence allows us to actually prove that it holds in practice for many shifts. We elaborate on this in Appendix E. + +The core message behind Theorem 3.6 is that if there is a simple (i.e., linear) critic $h'$ with large disagreement discrepancy, the true $y^*$ could plausibly be this function, implying $\hat{h}$ could have high error—likewise, if no simple $y^*$ could hypothetically result in high error, we should expect low error. + +Remark 3.8. Bounding error under distribution shift is fundamentally impossible without assumptions.
Prior works which estimate accuracy using unlabeled data rely on experiments suggesting that whatever condition allows their method to work holds in a variety of settings [21, 3, 40, 32, 24]; using these methods is equivalent to implicitly assuming that it will hold for future shifts. Understanding these conditions is thus crucial for assessing in a given scenario whether they can be expected to be satisfied.$^3$ It is therefore of great practical value that Assumption 3.5 is a simple, intuitive requirement: below we demonstrate that this simplicity allows us to identify potential failure cases a priori. + +# 3.1 How Does $\mathsf{DIS}^2$ Improve over $\mathcal{H}$ and $\mathcal{H}\Delta\mathcal{H}$-Divergence? + +To verifiably bound a classifier's error under distribution shift, one must develop a meaningful notion of distance between distributions. One early attempt at this was $\mathcal{H}$-divergence [6, 41], which measures the ability of a binary hypothesis class to discriminate between $S$ and $\mathcal{T}$ in feature space. This was later refined to $\mathcal{H}\Delta\mathcal{H}$-divergence [7], which is equal to $\mathcal{H}$-divergence where the discriminator class comprises all exclusive-ors between pairs of functions from the original class $\mathcal{H}$. Though these measures can in principle provide non-vacuous bounds, they usually do not, and evaluating them is intractable because it requires maximizing an objective over all pairs of hypotheses. Furthermore, these bounds are overly conservative even for simple function classes and distribution shifts because they rely on uniform convergence. In practice, we do not care about bounding the error of all classifiers in $\mathcal{H}$; we only care to bound the error of $\hat{h}$. This is a clear advantage of $\mathrm{DIS}^2$ over $\mathcal{H}\Delta\mathcal{H}$-divergence.
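Concretely, every quantity in the bound of Theorem 3.6 is computable from held-out argmax predictions once a critic has been trained. A minimal NumPy sketch (the function names and toy numbers are ours, purely illustrative, not the authors' implementation):

```python
import numpy as np

def disagreement(a, b):
    """0-1 disagreement rate between two classifiers' argmax predictions."""
    return float(np.mean(np.asarray(a) != np.asarray(b)))

def dis2_bound(src_labels, h_src, critic_src, h_tgt, critic_tgt, delta=0.05):
    """Theorem 3.6: empirical source error of h-hat, plus the critic's
    empirical disagreement discrepancy, plus the Hoeffding-style term
    sqrt((n_S + 4 n_T) log(1/delta) / (2 n_S n_T))."""
    n_s, n_t = len(h_src), len(h_tgt)
    src_err = disagreement(h_src, src_labels)
    discrepancy = disagreement(h_tgt, critic_tgt) - disagreement(h_src, critic_src)
    conc = np.sqrt((n_s + 4 * n_t) * np.log(1 / delta) / (2 * n_s * n_t))
    return src_err + discrepancy + conc

# Toy numbers: h-hat makes one source mistake (eps_S = 0.25); the critic
# agrees with h-hat on every source point but flips half the target points,
# so the empirical disagreement discrepancy is 0.5.
y, h_s = [0, 1, 1, 0], [0, 1, 1, 1]
h_t, c_t = [0, 0, 1, 1], [1, 1, 1, 1]
bound = dis2_bound(y, h_s, h_s, h_t, c_t)
assert bound >= 0.75  # source error + discrepancy, plus the sample penalty
```

With only four samples per split the concentration term dominates and the bound is vacuous; at, say, $n_S = n_T = 10^4$ and $\delta = 0.05$ it is roughly $0.027$, so in realistic regimes the bound is driven by the source error and the discrepancy.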
+ +The true labeling function is never worst-case.$^4$ More importantly, we observe that one should not expect the distribution shift to be truly worst case, because the test distribution $\mathcal{T}$ and ground truth + +![](images/34e8552176ee867e2c2ab59ac323a0a305f68d40954bb50ab91881fd23e085b2.jpg) +(a) + +![](images/e946ca7eaa665234f539405039d4ee8d26afe5f235200068a407a8693a2bb76b.jpg) +(b) +Figure 2: The advantage of $\mathsf{DIS}^2$ over bounds based on $\mathcal{H}$- and $\mathcal{H}\Delta\mathcal{H}$-divergence. Consider the task of classifying circles and squares (triangles are unlabeled). (a): Because $h_1$ and $h_2 \oplus h_3$ perfectly discriminate between $S$ (blue) and $\mathcal{T}$ (red), $\mathcal{H}$- and $\mathcal{H}\Delta\mathcal{H}$-divergence bounds are always vacuous. In contrast, $\mathsf{DIS}^2$ is only vacuous when $0\%$ accuracy is induced by a reasonably likely ground truth (such as $y_3^*$ in (c), but not $y_1^*$ in (b)), and can often give non-vacuous bounds (such as $y_2^*$ in (b)). + +![](images/6bf68bc0070bcac4447ea8e61eb7684e2d579caef7ee6d3ef03723d067d9c8ee.jpg) +(c) + +$y^{*}$ are not chosen adversarially with respect to $\hat{h}$. Figure 2 gives a simple demonstration of this point. Consider the task of learning a linear classifier to discriminate between squares and circles on the source distribution $S$ (blue) and then bounding the error of this classifier on the target distribution $\mathcal{T}$ (red), whose true labels are unknown and are therefore depicted as triangles. Figure 2(a) demonstrates that both $\mathcal{H}$- and $\mathcal{H}\Delta\mathcal{H}$-divergence achieve their maximal value of 1, because both $h_1$ and $h_2 \oplus h_3$ perfectly discriminate between $S$ and $\mathcal{T}$. Thus both bounds would be vacuous. + +Now, suppose we were to learn the max-margin $\hat{h}$ on the source distribution (Figure 2(b)).
It is possible that the true labels are given by the worst-case boundary as depicted by $y_{1}^{*}$ (pink), thus "flipping" the labels and causing $\hat{h}$ to have 0 accuracy on $\mathcal{T}$. In this setting, a vacuous bound is correct. However, this seems rather unlikely to occur in practice—instead, recent experimental evidence [61, 34, 33] suggests that the true $y^{*}$ will be much simpler. The maximum disagreement discrepancy here would be approximately 0.5, giving a test accuracy lower bound of 0.5; this is consistent with plausible alternative labeling functions such as $y_{2}^{*}$ (orange). Even if $y^{*}$ is not linear, we still expect that some linear function will induce larger discrepancy; this is precisely Assumption 3.3. Now suppose instead we learn $\hat{h}$ as depicted in Figure 2(c). Then a simple ground truth such as $y_{3}^{*}$ (green) is plausible, which would mean $\hat{h}$ has 0 accuracy on $\mathcal{T}$. In this case, $y_{3}^{*}$ is also a critic with disagreement discrepancy equal to 1, and so $\mathrm{DIS}^2$ would correctly output an error upper bound of 1. + +A setting where $\mathsf{DIS}^2$ may be invalid. There is one setting where it should be clear that Assumption 3.5 is less likely to be satisfied: when the representation we are using is explicitly regularized to keep $\max_{h' \in \mathcal{H}} \Delta(\hat{h}, h')$ small. This occurs for domain-adversarial representation learning methods such as DANN [18] and CDAN [38], which penalize the ability to discriminate between $S$ and $\mathcal{T}$ in feature space. Given a critic $h'$ with large disagreement discrepancy, the discriminator $D(x) = \mathbf{1}\{\arg \max_y \hat{h}(x)_y = \arg \max_y h'(x)_y\}$ will achieve high accuracy on this task (precisely, $\frac{1 + \Delta(\hat{h}, h')}{2}$). By contrapositive, enforcing low discriminatory power means that the max discrepancy must also be small.
It follows that for these methods $\mathsf{DIS}^2$ should not be expected to hold universally, and in practice we see that this is the case (Figure 3). Nevertheless, when $\mathsf{DIS}^2$ does overestimate accuracy, it does so by significantly less than prior methods. + +# 4 Efficiently Maximizing the Disagreement Discrepancy + +For a classifier $\hat{h}$ , Theorem 3.6 clearly prescribes how to bound its test error: first, train a critic $h'$ on the chosen $\mathcal{X}$ to approximately maximize $\Delta(\hat{h}, h')$ , then evaluate $\epsilon_{\hat{S}}(\hat{h})$ and $\hat{\Delta}(\hat{h}, h')$ using a holdout set. The remaining difficulty is in identifying the maximizing $h' \in \mathcal{H}$ —that is, the one which minimizes $\epsilon_{\mathcal{S}}(\hat{h}, h')$ and maximizes $\epsilon_{\mathcal{T}}(\hat{h}, h')$ . We can approximately minimize $\epsilon_{\mathcal{S}}(\hat{h}, h')$ by minimizing the sample average of the convex surrogate $\ell_{\mathrm{logistic}} := -\frac{1}{\log|\mathcal{Y}|} \log \operatorname{softmax}(h(x))_y$ as justified by statistical learning theory. However, it is less clear how to maximize $\epsilon_{\mathcal{T}}(\hat{h}, h')$ . + +![](images/9dfeb6c0957196c2388a558039820ccdd4679be588f518b7279b34c135069406.jpg) +Figure 3: $\mathbf{DIS}^2$ may be invalid when the features are regularized to violate Assumption 3.5. Domain-adversarial representation learning algorithms such as DANN [18] and CDAN [38] indirectly minimize $\max_{h' \in \mathcal{H}} \Delta(\hat{h}, h')$ , meaning the necessary condition is less likely to be satisfied. Nevertheless, when $\mathbf{DIS}^2$ does overestimate accuracy, it almost always does so by less than prior methods. + +A few prior works suggest proxy losses for multiclass disagreement [12, 50, 23]. We observe that these losses are not theoretically justified, as they do not upper bound the 0-1 disagreement loss or otherwise do not meaningfully enforce that higher agreement causes higher loss. 
Furthermore, they are non-convex (or even concave) in the model logits, hindering optimization. Indeed, it is easy to identify simple settings in which minimizing these losses will result in a degenerate classifier with arbitrarily small loss but high agreement. Instead, we derive a new loss which satisfies the above desiderata and thus serves as a more principled approach to maximizing disagreement.

Definition 4.1. The disagreement logistic loss of a classifier $h$ on a labeled sample $(x, y)$ is defined as

$$
\ell_{\mathrm{dis}}(h, x, y) := \frac{1}{\log 2}\log\left(1 + \exp\left(h(x)_{y} - \frac{1}{|\mathcal{Y}| - 1}\sum_{\hat{y} \neq y} h(x)_{\hat{y}}\right)\right).
$$

Fact 4.2. The disagreement logistic loss is convex in $h(x)$ and upper bounds the 0-1 disagreement loss (i.e., $\mathbf{1}\{\arg \max_{\hat{y}} h(x)_{\hat{y}} = y\}$). For binary classification, the disagreement logistic loss is equivalent to the logistic loss with the label flipped.

We expect that $\ell_{\mathrm{dis}}$ can serve as a useful drop-in replacement for any future algorithm which requires maximizing disagreement in a principled manner. We combine $\ell_{\mathrm{logistic}}$ and $\ell_{\mathrm{dis}}$ to arrive at the empirical disagreement discrepancy objective:

$$
\hat{\mathcal{L}}_{\Delta}(h') := \frac{1}{|\hat{\mathcal{S}}|} \sum_{x \in \hat{\mathcal{S}}} \ell_{\mathrm{logistic}}\left(h', x, \hat{h}(x)\right) + \frac{1}{|\hat{\mathcal{T}}|} \sum_{x \in \hat{\mathcal{T}}} \ell_{\mathrm{dis}}\left(h', x, \hat{h}(x)\right).
$$

By construction, $1 - \hat{\mathcal{L}}_{\Delta}(h')$ is concave and bounds $\hat{\Delta}(\hat{h}, h')$ from below. However, note that the representations are already optimized for accuracy on $\mathcal{S}$, which suggests that predictions will have low entropy and that the $1 / \log |\mathcal{Y}|$ scaling is unnecessary for balancing the two terms.
We therefore drop the constant scaling factors; this often leads to higher discrepancy. In practice we optimize this objective with multiple initializations and hyperparameters and select the solution with the largest empirical discrepancy on a holdout set to ensure a conservative bound. Experimentally, we find that replacing $\ell_{\mathrm{dis}}$ with any of the surrogate losses from [12, 50, 23] results in smaller discrepancy; we present these results in Appendix B.

Tightening the bound by optimizing over the logits. Looking at Theorem 3.6, it is clear that the value of the bound will decrease as the capacity of the hypothesis class is restricted. Since the number of features is large, one may expect that Assumption 3.5 holds even for a reduced feature set. In particular, it is well documented that deep networks optimized with stochastic gradient descent learn representations with small effective rank, often not much more than the number of classes [1, 2, 54, 29]. This suggests that the logits themselves should contain most of the features' information about $S$ and $\mathcal{T}$ and that using the full feature space is unnecessarily conservative. To test this, we evaluate $\mathrm{DIS}^2$ on the full features, the logits output by $\hat{h}$, and various fractions of the top principal components (PCs) of the features. We observe that using logits indeed results in tighter error bounds while still remaining valid; in contrast, using fewer top PCs also results in smaller error bounds, but at some point they become invalid (Figure C.2). The bounds we report in this work are thus evaluated on the logits of $\hat{h}$, except where we provide explicit comparisons in Section 5.

Identifying the ideal number of PCs via a "validity score".
Even though reducing the feature dimensionality eventually results in an invalid bound, it is tempting to consider how we may identify approximately when this occurs, which could give a more accurate (though less conservative) prediction. We find that the optimization trajectory itself provides meaningful signal about this change. Specifically, Figure C.3 shows that for feature sets which are not overly restrictive, the critic very rapidly ascends to the maximum source agreement, then slowly begins overfitting. For much more restrictive feature sets (i.e., fewer PCs), the critic optimizes much more slowly, suggesting that we have reached the point where we are artificially restricting $\mathcal{H}$ and therefore underestimating the disagreement discrepancy. We design a "validity score" which captures this phenomenon, and we observe that it is roughly linearly correlated with the tightness of the eventual bound (Figure C.4). Though the score is by no means perfect, we can evaluate $\mathrm{DIS}^2$ with successively fewer PCs and only retain those above a certain score threshold, reducing the average prediction error while remaining reasonably conservative (Figure C.5). For further details, see Appendix C. + +
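To make the construction in Section 4 concrete, the following is a minimal NumPy sketch of the disagreement logistic loss (Definition 4.1) and the empirical objective $\hat{\mathcal{L}}_{\Delta}$ with the constant scaling factors dropped, as described above. The function names are ours and the critic training loop is omitted; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def disagreement_logistic_loss(logits, y):
    """l_dis from Definition 4.1: convex in the logits and an upper
    bound on the 0-1 agreement indicator 1{argmax_j logits[j] == y},
    so minimizing it pushes the critic to disagree with label y."""
    k = logits.shape[0]
    mean_other = (logits.sum() - logits[y]) / (k - 1)  # mean of non-y logits
    return float(np.log1p(np.exp(logits[y] - mean_other)) / np.log(2))

def logistic_loss(logits, y):
    """Standard multiclass logistic loss, -log softmax(logits)[y]."""
    z = logits - logits.max()  # stabilized log-sum-exp
    return float(np.log(np.exp(z).sum()) - z[y])

def empirical_discrepancy_objective(critic_logits_src, preds_src,
                                    critic_logits_tgt, preds_tgt):
    """Empirical objective: agree with h-hat's hard predictions on the
    source sample, disagree with them on the target sample."""
    src = np.mean([logistic_loss(l, y)
                   for l, y in zip(critic_logits_src, preds_src)])
    tgt = np.mean([disagreement_logistic_loss(l, y)
                   for l, y in zip(critic_logits_tgt, preds_tgt)])
    return src + tgt
```

Minimizing this objective over the critic's parameters (e.g., a linear head on the logits of $\hat{h}$) approximately maximizes the empirical discrepancy, since $1 - \hat{\mathcal{L}}_{\Delta}(h')$ lower-bounds $\hat{\Delta}(\hat{h}, h')$; the upper-bound property of $\ell_{\mathrm{dis}}$ is what makes it a valid surrogate.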
| Prediction Method | Coverage (↑), ✗ DA | Coverage (↑), ✓ DA | Overest. (↓), ✗ DA | Overest. (↓), ✓ DA | MAE (↓), ✗ DA | MAE (↓), ✓ DA |
| --- | --- | --- | --- | --- | --- | --- |
| AC [25] | 0.1000 ±.032 | 0.0333 ±.023 | 0.1194 ±.012 | 0.1123 ±.012 | 0.1091 ±.011 | 0.1091 ±.012 |
| DoC [24] | 0.1667 ±.040 | 0.0167 ±.0167 | 0.1237 ±.012 | 0.1096 ±.012 | 0.1055 ±.011 | 0.1083 ±.012 |
| ATC NE [21] | 0.2889 ±.048 | 0.1333 ±.044 | 0.0824 ±.009 | 0.0969 ±.012 | 0.0665 ±.007 | 0.0854 ±.011 |
| COT [40] | 0.2554 ±.0467 | 0.1667 ±.049 | 0.0860 ±.009 | 0.0948 ±.011 | 0.0700 ±.007 | 0.0808 ±.010 |
| $\mathsf{DIS}^2$ (Features) | 1.0000 | 1.0000 | 0.0000 | 0.0000 | 0.2807 ±.009 | 0.1918 ±.008 |
| $\mathsf{DIS}^2$ (Logits) | 0.9889 ±.011 | 0.7500 ±.058 | 0.0011 ±.000 | 0.0475 ±.007 | 0.1489 ±.011 | 0.0945 ±.010 |
| $\mathsf{DIS}^2$ (Logits w/o $\delta$) | 0.7556 ±.0475 | 0.4333 ±.065 | 0.0771 ±.013 | 0.0892 ±.011 | 0.0887 ±.009 | 0.0637 ±.008 |
Table 1: Comparing the $\mathsf{DIS}^2$ bound to prior methods for predicting accuracy. DA denotes if the representations were learned via a domain-adversarial algorithm. We report what fraction of predictions correctly bound the true error (Coverage) and the average prediction error among shifts whose accuracy is overestimated (Overest.), along with overall MAE. $\mathsf{DIS}^2$ has substantially higher coverage and lower overestimation error, though higher overall MAE. By dropping the concentration term in Theorem 3.6 we can get even better MAE (even beating the baselines on domain-adversarial representations) at some cost to coverage.

# 5 Experiments

Datasets. We conduct experiments across 11 vision benchmark datasets for distribution shift, spanning applications in object classification, satellite imagery, and medicine. We use four BREEDS datasets [65]: Entity13, Entity30, Nonliving26, and Living17; FMoW [11] and Camelyon [4] from WILDS [35]; Officehome [69]; Visda [52, 51]; CIFAR10 and CIFAR100 [36]; and DomainNet [53]. Each of these datasets consists of multiple domains with different types of natural and synthetic shifts. We consider subpopulation shift, natural shifts induced by differences in the data collection process of ImageNet (i.e., ImageNetv2 [60]), and a combination of both. For CIFAR10 and CIFAR100 we evaluate natural shifts due to variations in replication studies [59] and common corruptions [27]. For all datasets, we use the same source and target domains commonly used in previous studies [22, 64]. We provide precise details about the distribution shifts considered in Appendix A. Because distribution shifts vary widely in scope, prior evaluations which focus on only one specific type of shift (e.g., corruptions) often do not convey the full story. We therefore emphasize the need for more comprehensive evaluations across many different types of shifts and training methods, as we present here.
+ +Experimental setup and protocols. Along with source-only training with ERM, we experiment with Unsupervised Domain Adaptation (UDA) methods that aim to improve target performance with unlabeled target data (FixMatch [66], DANN [18], CDAN [38], and BN-adapt [37]). We experiment with Densenet121 [28] and Resnet18/Resnet50 [26] pretrained on ImageNet. For source-only ERM, as with other methods, we default to using strong augmentations: random horizontal flips, random crops, as well as Cutout [16] and RandAugment [13]. Unless otherwise specified, we default to full finetuning for source-only ERM and UDA methods. We use source hold-out performance to pick the best hyperparameters for the UDA methods, since we lack labeled validation data from the target distribution. For all of these methods, we fix the algorithm-specific hyperparameters to their original recommendations following the experimental protocol in [22]. For more details, see Appendix A. + +Methods evaluated. We compare $\mathrm{DIS}^2$ to four competitive baselines: Average Confidence (AC; [25]), Difference of Confidences (DoC; [24]), Average Thresholded Confidence (ATC; [21]), and Confidence Optimal Transport (COT; [40]). We give detailed descriptions of these methods in Appendix A. For all methods, we implement post-hoc calibration on validation source data with temperature scaling [25], which has been shown to improve performance. For $\mathrm{DIS}^2$ , we report bounds evaluated both on the full features and on the logits of $\hat{h}$ as described in Section 4. Unless specified otherwise, we set $\delta = .01$ everywhere. We also experiment with dropping the lower order concentration term in Theorem 3.6, using only the sample average. Though this is of course no longer a conservative bound, we find it is an excellent predictor of test error and is worth including. + +Metrics for evaluation. As our emphasis is on giving valid error bounds, we report the coverage, i.e. 
the fraction of predictions for which the true error does not exceed the predicted error. We also report the standard prediction metric, mean absolute error (MAE). Finally, we measure the conditional average overestimation: the MAE among predictions which overestimate the accuracy. This metric captures the idea that the most important thing is giving a valid bound, but if for some reason it is not, we would at least like it to be as accurate as possible.

Results. Reported metrics for all methods can be found in Table 1. We aggregate results over all datasets, shifts, and training methods; we stratify only by whether the training method is domain-adversarial, as this affects the validity of Assumption 3.5. We find that $\mathrm{DIS}^2$ achieves competitive MAE while maintaining substantially higher coverage, even for domain-adversarial features. When it does overestimate accuracy, it does so by much less, implying that it is ideal for conservative estimation even when any given error bound is not technically satisfied. Dropping the concentration term performs even better (sometimes beating the baselines), at the cost of some coverage. This suggests that efforts to better estimate the true maximum discrepancy may yield even better predictors. We also show scatter plots to visualize performance on individual distribution shifts, plotting each source-target pair as a single point. For these too we report separately the results for domain-adversarial (Figure 3) and non-domain-adversarial methods (Figure 1). To avoid clutter, these two plots do not include DoC, as it performed comparably to AC. Figure 4(a) displays additional scatter plots which allow for a direct comparison of the variants of $\mathrm{DIS}^2$. Finally, Figure 4(b) plots the observed violation rate (i.e., 1 - coverage) of $\mathrm{DIS}^2$ on non-domain-adversarial methods for varying $\delta$. We observe that it lies at or below the line $y = x$, meaning the probabilistic bound provided by Theorem 3.6 holds across a range of failure probabilities. Thus we see that our probabilistic bound is empirically valid all of the time: not in the sense that each individual shift's error is upper bounded, but rather that the desired violation rate is always satisfied.

Strengthening the baselines to improve coverage. Since the baselines we consider in this work prioritize predictive accuracy over conservative estimates, their coverage can possibly be improved without too much increase in error. We explore this option using LOOCV: for a desired coverage, we learn a parameter to either scale or shift a method's prediction to achieve that level of coverage on all but one of the datasets. We then evaluate the method on all shifts of the remaining dataset, and we repeat this for each dataset. Appendix D reports the results for varying coverage levels. We find that (i) the baselines do not achieve the desired coverage on the held-out data, though they get somewhat close; and (ii) the adjustment causes them to suffer higher MAE than $\mathsf{DIS}^2$. Thus $\mathsf{DIS}^2$ is on the Pareto frontier of MAE and coverage, and is preferable when conservative bounds are desirable. We believe identifying alternative methods of post-hoc prediction adjustment is a promising future direction.

![](images/1b656d12ba77fde777d5ee6702702347a875e7df4f28aadce15d9fc7d9cae84b.jpg)
Figure 4: (a): Scatter plots depicting the $\mathsf{DIS}^2$ estimated bound vs. true error for a variety of shifts. "w/o $\delta$" indicates that the lower-order term of Theorem 3.6 has been dropped. (b): Observed bound violation rate vs. desired probability $\delta$. Observe that the true rate lies at or below $y = x$ across a range of values.
+ +![](images/284eef406156d70a944f2d9b2a589e3b23b93765b0a871c70a69ae5e229e9ce4.jpg) +(a) + +![](images/a3698f07bb3af3697571647d2c81f346b3811786b9b48d08db0322f8724f9b43.jpg) + +![](images/febc9a17963866e5bbac219fd67fec8e93150f463632f0107f247f9e45cbae42.jpg) +(b) + +# 6 Conclusion + +The ability to evaluate trustworthy, non-vacuous error bounds for deep neural networks under distribution shift remains an extremely important open problem. Due to the wide variety of real-world shifts and the complexity of modern data, restrictive a priori assumptions on the distribution (i.e., before observing any data from the shift of interest) seem unlikely to be fruitful. On the other hand, prior methods which estimate accuracy using extra information—such as unlabeled test samples—often rely on opaque conditions whose likelihood of being satisfied is difficult to predict, and so they sometimes provide large underestimates of test error with no warning signs. + +This work bridges this gap with a simple, intuitive condition and a new disagreement loss which together result in competitive error prediction, while simultaneously providing an (almost) guaranteed probabilistic error bound. We also study how the process of evaluating the bound (e.g., the optimization landscape) can provide even more useful signal, enabling better predictive accuracy. We expect there is potential to push further in each of these directions, hopefully extending the current accuracy-reliability Pareto frontier for test error bounds under distribution shift. + +# Acknowledgments and Disclosure of Funding + +Thanks to Sam Sokota, Bingbin Liu, Yuchen Li, Yiding Jiang, Zack Lipton, Roni Rosenfeld, and Andrej Risteski for helpful comments. ER acknowledges the support of NSF via IIS-1909816, IIS-1955532, OAC-1934584. SG acknowledges Amazon Graduate Fellowship and JP Morgan AI Ph.D. Fellowship for their support. + +# References + +[1] Sanjeev Arora, Nadav Cohen, and Elad Hazan. 
On the optimization of deep networks: Implicit acceleration by overparameterization. In International Conference on Machine Learning, pages 244-253. PMLR, 2018. +[2] Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. Advances in Neural Information Processing Systems, 32, 2019. +[3] Christina Baek, Yiding Jiang, Aditi Raghunathan, and J Zico Kolter. Agreement-on-the-line: Predicting the performance of neural networks under distribution shift. In Advances in Neural Information Processing Systems, 2022. +[4] Peter Bandi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, et al. From detection of individual metastases to classification of lymph node status at the patient level: the camelyon17 challenge. IEEE Transactions on Medical Imaging, 2018. +[5] Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. In Advances in neural information processing systems, pages 6240-6249, 2017. + +[6] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In Advances in Neural Information Processing Systems, volume 19. MIT Press, 2006. +[7] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine learning, 79: 151-175, 2010. +[8] Jiefeng Chen, Frederick Liu, Besim Avci, Xi Wu, Yingyu Liang, and Somesh Jha. Detecting errors and estimating accuracy on unlabeled data with self-training ensembles. Advances in Neural Information Processing Systems, 34:14980-14992, 2021. +[9] Mayee Chen, Karan Goel, Nimit S Sohoni, Fait Poms, Kayvon Fatahalian, and Christopher Ré. Mandoline: Model evaluation under distribution shift. In International Conference on Machine Learning, pages 1617-1629. PMLR, 2021. 
+[10] Yining Chen, Elan Rosenfeld, Mark Sellke, Tengyu Ma, and Andrej Risteski. Iterative feature matching: Toward provable domain generalization with logarithmic environments. In Advances in Neural Information Processing Systems, volume 35, pages 1725-1736. Curran Associates, Inc., 2022. +[11] Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. +[12] Ching-Yao Chuang, Antonio Torralba, and Stefanie Jegelka. Estimating generalization under distribution shifts via domain-invariant representations. International conference on machine learning, 2020. +[13] Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 702-703, 2020. +[14] Weijian Deng and Liang Zheng. Are labels always necessary for classifier accuracy evaluation? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15069-15078, 2021. +[15] Weijian Deng, Stephen Gould, and Liang Zheng. What does rotation prediction tell us about classifier accuracy under varying testing environments? arXiv preprint arXiv:2106.05961, 2021. +[16] Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017. +[17] Gintare Karolina Dziugaite and Daniel M Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. arXiv preprint arXiv:1703.11008, 2017. +[18] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. J. Mach. Learn. Res., 17(1):2096-2030, jan 2016. ISSN 1532-4435. 
[19] Jacob Gardner, Geoff Pleiss, Kilian Q Weinberger, David Bindel, and Andrew G Wilson. GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration. In Advances in Neural Information Processing Systems, pages 7576-7586, 2018.
[20] Saurabh Garg, Sivaraman Balakrishnan, J Zico Kolter, and Zachary C Lipton. Ratt: Leveraging unlabeled data to guarantee generalization. arXiv preprint arXiv:2105.00303, 2021.
[21] Saurabh Garg, Sivaraman Balakrishnan, Zachary Chase Lipton, Behnam Neyshabur, and Hanie Sedghi. Leveraging unlabeled data to predict out-of-distribution performance. In International Conference on Learning Representations, 2022.
[22] Saurabh Garg, Nick Erickson, James Sharpnack, Alex Smola, Siva Balakrishnan, and Zachary Lipton. Rlsbench: Domain adaptation under relaxed label shift. In International Conference on Machine Learning (ICML), 2023.
[23] Tom Ginsberg, Zhongyuan Liang, and Rahul G Krishnan. A learning based hypothesis test for harmful covariate shift. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=rdfgqiwz71Z.
[24] Devin Guillory, Vaishaal Shankar, Sayna Ebrahimi, Trevor Darrell, and Ludwig Schmidt. Predicting with confidence on unseen distributions. arXiv preprint arXiv:2107.03315, 2021.
[25] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning (ICML), 2017.
[26] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Computer Vision and Pattern Recognition (CVPR), 2016.
[27] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261, 2019.
[28] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700-4708, 2017. +[29] Minyoung Huh, Hossein Mobahi, Richard Zhang, Brian Cheung, Pulkit Agrawal, and Phillip Isola. The low-rank simplicity bias in deep networks, 2022. URL https://openreview.net/forum?id=dn4B7Mes2z. +[30] David Hume. An enquiry concerning human understanding: A critical edition, volume 3. Oxford University Press on Demand, 2000. +[31] Junguang Jiang, Yang Shu, Jianmin Wang, and Mingsheng Long. Transferability in deep learning: A survey, 2022. +[32] Yiding Jiang, Vaishnavh Nagarajan, Christina Baek, and J Zico Kolter. Assessing generalization of SGD via disagreement. In International Conference on Learning Representations, 2022. +[33] Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. Decoupling representation and classifier for long-tailed recognition. In International Conference on Learning Representations, 2020. +[34] Polina Kirichenko, Pavel Izmailov, and Andrew Gordon Wilson. Last layer re-training is sufficient for robustness to spurious correlations. arXiv preprint arXiv:2204.02937, 2022. +[35] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton A. Earnshaw, Imran S. Haque, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. WILDS: A benchmark of in-the-wild distribution shifts. In International Conference on Machine Learning (ICML), 2021. +[36] Alex Krizhevsky and Geoffrey Hinton. Learning Multiple Layers of Features from Tiny Images. Technical report, Citeseer, 2009. +[37] Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, and Xiaodi Hou. Revisiting batch normalization for practical domain adaptation. arXiv preprint arXiv:1603.04779, 2016. 
+[38] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. Advances in neural information processing systems, 31, 2018. +[39] Philip M Long and Hanie Sedghi. Generalization bounds for deep convolutional neural networks. arXiv preprint arXiv:1905.12600, 2019. +[40] Yuzhe Lu, Zhenlin Wang, Runtian Zhai, Soheil Kolouri, Joseph Campbell, and Katia Sycara. Predicting out-of-distribution error with confidence optimal transport. In ICLR 2023 Workshop on Pitfalls of limited data and computation for Trustworthy ML, 2023. +[41] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. arXiv preprint arXiv:0902.3430, 2009. + +[42] Vaishnavh Nagarajan and J Zico Kolter. Deterministic pac-bayesian generalization bounds for deep networks via generalizing noise-resilience. arXiv preprint arXiv:1905.13344, 2019. +[43] Vaishnavh Nagarajan and J. Zico Kolter. Uniform convergence may be unable to explain generalization in deep learning. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. +[44] Preetum Nakkiran and Yamini Bansal. Distributional generalization: A new kind of generalization. ArXiv, abs/2009.08092, 2019. +[45] Behnam Neyshabur. Implicit regularization in deep learning. arXiv preprint arXiv:1709.01953, 2017. +[46] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In Conference on Learning Theory, pages 1376-1401, 2015. +[47] Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nathan Srebro. Exploring generalization in deep learning. arXiv preprint arXiv:1706.08947, 2017. +[48] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. The role of over-parametrization in generalization of neural networks. In International Conference on Learning Representations, 2018. 
+[49] Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua V Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. arXiv preprint arXiv:1906.02530, 2019. +[50] Matteo Pagliardini, Martin Jaggi, François Fleuret, and Sai Praneeth Karimireddy. Agree to disagree: Diversity through disagreement for better transferability. In *The Eleventh International Conference on Learning Representations*, 2023. +[51] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge, 2017. +[52] Xingchao Peng, Ben Usman, Kuniaki Saito, Neela Kaushik, Judy Hoffman, and Kate Saenko. Syn2real: A new benchmark for synthetic-to-real visual domain adaptation. arXiv preprint arXiv:1806.09755, 2018. +[53] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1406-1415, 2019. +[54] Mohammad Pezeshki, Oumar Kaba, Yoshua Bengio, Aaron C Courville, Doina Precup, and Guillaume Lajoie. Gradient starvation: A learning proclivity in neural networks. Advances in Neural Information Processing Systems, 34:1256-1272, 2021. +[55] Emmanouil A Platanios, Hoifung Poon, Tom M Mitchell, and Eric Horvitz. Estimating accuracy from unlabeled data: A probabilistic logic approach. arXiv preprint arXiv:1705.07086, 2017. +[56] Emmanouil Antonios Platanios, Avinava Dubey, and Tom Mitchell. Estimating accuracy from unlabeled data: A bayesian approach. In International Conference on Machine Learning, pages 1416-1425. PMLR, 2016. +[57] Hamed Rahimian and Sanjay Mehrotra. Distributionally robust optimization: A review. arXiv preprint arXiv:1908.05659, 2019. +[58] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 
Do CIFAR-10 classifiers generalize to CIFAR-10? arXiv preprint arXiv:1806.00451, 2018.
[59] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do CIFAR-10 classifiers generalize to CIFAR-10? 2018. https://arxiv.org/abs/1806.00451.
[60] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet classifiers generalize to ImageNet? In International Conference on Machine Learning, pages 5389-5400. PMLR, 2019.
[61] Elan Rosenfeld, Pradeep Ravikumar, and Andrej Risteski. Domain-adjusted regression or: ERM may already learn features sufficient for out-of-distribution generalization. arXiv preprint arXiv:2202.06856, 2022.
[62] Elan Rosenfeld, Pradeep Ravikumar, and Andrej Risteski. An online learning approach to interpolation and extrapolation in domain generalization. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, pages 2641-2657. PMLR, 28-30 Mar 2022.
[63] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.
[64] Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, and Percy Liang. Extending the WILDS benchmark for unsupervised adaptation. In NeurIPS Workshop on Distribution Shifts, 2021.
[65] Shibani Santurkar, Dimitris Tsipras, and Aleksander Madry. Breeds: Benchmarks for subpopulation shift. arXiv preprint arXiv:2008.04859, 2020.
[66] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li.
Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33, 2020. +[67] Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, 2016. +[68] Antonio Torralba, Rob Fergus, and William T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958-1970, 2008. +[69] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5018-5027, 2017. +[70] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016. +[71] Richard Zhang. Making convolutional networks shift-invariant again. In ICML, 2019. +[72] Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael Jordan. Bridging theory and algorithm for domain adaptation. In International Conference on Machine Learning. PMLR, 2019. +[73] Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P Adams, and Peter Orbanz. Non-vacuous generalization bounds at the imagenet scale: a pac-bayesian compression approach. arXiv preprint arXiv:1804.05862, 2018. + +# Appendix + +# A Experimental Details + +# A.1 Description of Baselines + +Average Thresholded Confidence (ATC). 
ATC first estimates a threshold $t$ on the confidence of the softmax prediction (or on negative entropy) such that the number of labeled source points with confidence greater than $t$ matches the number of correctly classified source examples, and then estimates the test error on the target domain $\mathcal{D}_{\mathrm{test}}$ as the expected number of target points that obtain a score less than $t$, i.e.,

$$
\mathrm{ATC}_{\mathcal{D}_{\mathrm{test}}}(s) = \sum_{i=1}^{n} \mathbb{I}\left[s(f(x_i')) < t\right],
$$

where $t$ is chosen on the labeled source data $\{(x_i, y_i)\}$ to satisfy

$$
\sum_{i=1}^{n} \mathbb{I}\left[s(f(x_i)) < t\right] = \sum_{i=1}^{n} \mathbb{I}\left[\arg\max_{j \in \mathcal{Y}} f_j(x_i) \neq y_i\right].
$$

| Dataset | Source | Target |
| --- | --- | --- |
| CIFAR10 | CIFAR10v1 | CIFAR10v1, CIFAR10v2, CIFAR10C-Frost (severity 4), CIFAR10C-Pixelate (severity 5), CIFAR10C-Saturate (severity 5) |
| CIFAR100 | CIFAR100 | CIFAR100, CIFAR100C-Fog (severity 4), CIFAR100C-Motion Blur (severity 2), CIFAR100C-Contrast (severity 4), CIFAR100C-Spatter (severity 2) |
| Camelyon | Camelyon (Hospital 1-3) | Camelyon (Hospital 1-3), Camelyon (Hospital 4), Camelyon (Hospital 5) |
| FMoW | FMoW (2002-'13) | FMoW (2002-'13), FMoW (2013-'16), FMoW (2016-'18) |
| Entity13 | Entity13 (ImageNetv1 sub-population 1) | Entity13 (ImageNetv1 sub-population 1), Entity13 (ImageNetv1 sub-population 2), Entity13 (ImageNetv2 sub-population 1), Entity13 (ImageNetv2 sub-population 2) |
| Entity30 | Entity30 (ImageNetv1 sub-population 1) | Entity30 (ImageNetv1 sub-population 1), Entity30 (ImageNetv1 sub-population 2), Entity30 (ImageNetv2 sub-population 1), Entity30 (ImageNetv2 sub-population 2) |
| Living17 | Living17 (ImageNetv1 sub-population 1) | Living17 (ImageNetv1 sub-population 1), Living17 (ImageNetv1 sub-population 2), Living17 (ImageNetv2 sub-population 1), Living17 (ImageNetv2 sub-population 2) |
| Nonliving26 | Nonliving26 (ImageNetv1 sub-population 1) | Nonliving26 (ImageNetv1 sub-population 1), Nonliving26 (ImageNetv1 sub-population 2), Nonliving26 (ImageNetv2 sub-population 1), Nonliving26 (ImageNetv2 sub-population 2) |
| Officehome | Product | Product, Art, ClipArt, Real |
| DomainNet | Real | Real, Painting, Sketch, ClipArt |
| Visda | Synthetic (originally referred to as train) | Synthetic, Real-1 (originally referred to as val), Real-2 (originally referred to as test) |

Table A.2: Details of the source and target datasets in our testbed.
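The ATC threshold selection described above can be sketched numerically. The following is a minimal illustration (our own sketch, not the authors' code), using max-softmax confidence as the score $s$ and a quantile to pick the threshold $t$:

```python
import numpy as np

def atc_estimate(source_probs, source_labels, target_probs):
    """Sketch of Average Thresholded Confidence (ATC) with max-confidence scores.

    source_probs: (n_s, k) softmax outputs on labeled source data
    source_labels: (n_s,) true source labels
    target_probs: (n_t, k) softmax outputs on unlabeled target data
    Returns an estimate of the target error rate.
    """
    src_scores = source_probs.max(axis=1)  # confidence score s(f(x))
    src_err = (source_probs.argmax(axis=1) != source_labels).mean()
    # Pick t so that the fraction of source points scoring below t
    # matches the source error rate.
    t = np.quantile(src_scores, src_err)
    # Estimated target error: fraction of target points scoring below t.
    return (target_probs.max(axis=1) < t).mean()
```

In practice the score can equally be the negative entropy of the softmax distribution, as noted above; only the `src_scores` and target-score lines would change.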
| Dataset | Epochs | Batch size | $\ell_2$ regularization | Learning rate |
| --- | --- | --- | --- | --- |
| CIFAR10 | 50 | 200 | 0.0001 (chosen from {0.0001, 0.001, 1e-5, 0.0}) | 0.01 (chosen from {0.001, 0.01, 0.0001}) |
| CIFAR100 | 50 | 200 | 0.0001 (chosen from {0.0001, 0.001, 1e-5, 0.0}) | 0.01 (chosen from {0.001, 0.01, 0.0001}) |
| Camelyon | 10 | 96 | 0.01 (chosen from {0.01, 0.001, 0.0001, 0.0}) | 0.03 (chosen from {0.003, 0.3, 0.0003, 0.03}) |
| FMoW | 30 | 64 | 0.0 (chosen from {0.0001, 0.001, 1e-5, 0.0}) | 0.0001 (chosen from {0.001, 0.01, 0.0001}) |
| Entity13 | 40 | 256 | 5e-5 (chosen from {5e-5, 5e-4, 1e-4, 1e-5}) | 0.2 (chosen from {0.1, 0.5, 0.2, 0.01, 0.0}) |
| Entity30 | 40 | 256 | 5e-5 (chosen from {5e-5, 5e-4, 1e-4, 1e-5}) | 0.2 (chosen from {0.1, 0.5, 0.2, 0.01, 0.0}) |
| Living17 | 40 | 256 | 5e-5 (chosen from {5e-5, 5e-4, 1e-4, 1e-5}) | 0.2 (chosen from {0.1, 0.5, 0.2, 0.01, 0.0}) |
| Nonliving26 | 40 | 256 | 5e-5 (chosen from {5e-5, 5e-4, 1e-4, 1e-5}) | 0.2 (chosen from {0.1, 0.5, 0.2, 0.01, 0.0}) |
| Officehome | 50 | 96 | 0.0001 (chosen from {0.0001, 0.001, 1e-5, 0.0}) | 0.01 (chosen from {0.001, 0.01, 0.0001}) |
| DomainNet | 15 | 96 | 0.0001 (chosen from {0.0001, 0.001, 1e-5, 0.0}) | 0.01 (chosen from {0.001, 0.01, 0.0001}) |
| Visda | 10 | 96 | 0.0001 (chosen from {0.0001, 0.001, 1e-5, 0.0}) | 0.01 (chosen from {0.001, 0.01, 0.0001}) |
Table A.3: Details of the learning rate and batch size considered in our testbed.

- DANN, CDANN: As per the Transfer Learning Library suggestion, we use a learning rate multiplier of 0.1 for the featurizer when initializing with a pre-trained network and 1.0 otherwise. We default to a penalty weight of 1.0 for all datasets with pre-trained initialization.

FixMatch. We set the loss weight $\lambda$ to 1.0 and the confidence threshold $\tau$ to 0.9.

Compute Infrastructure. Our experiments were performed across a combination of Nvidia T4, A6000, and V100 GPUs.

# B Comparing Disagreement Losses

We define the alternate losses for maximizing disagreement:

1. Chuang et al. [12] minimize the negative cross-entropy loss, which is concave in the model logits. That is, they add the term $\log \operatorname{softmax}(h(x)_y)$ to the objective they are minimizing. This loss results in substantially lower disagreement discrepancy than the other two.
2. Pagliardini et al. [50] use a loss which is not too different from ours. They define the disagreement objective for a point $(x,y)$ as

$$
\log \left(1 + \frac{\exp(h(x)_y)}{\sum_{\hat{y} \neq y} \exp(h(x)_{\hat{y}})}\right). \tag{1}
$$

For comparison, $\ell_{\mathrm{dis}}$ can be rewritten as

$$
\log \left(1 + \frac{\exp(h(x)_y)}{\exp\left(\frac{1}{|\mathcal{Y}| - 1} \sum_{\hat{y} \neq y} h(x)_{\hat{y}}\right)}\right), \tag{2}
$$

where the incorrect logits are averaged and the exponential is pushed outside the sum. This modification results in (2) being convex in the logits and an upper bound to the disagreement 0-1 loss, whereas (1) is neither.

![](images/ce7b49c8a5862748f3a263cb8b700cd3162fa5fd5241acd541b073eb34b0d3ca.jpg)
Figure B.1 & Table B.3: Histogram of disagreement discrepancies for each of the three losses, and the average values across all datasets. **Bold** (resp. underline) indicates the method has higher average discrepancy under a paired t-test at significance $p = .01$ (resp.
$p = .025$ ). + +Figure B.1 displays histograms of the achieved disagreement discrepancy across all distributions for each of the disagreement losses (all hyperparameters and random seeds are the same for all three losses). The table below it reports the mean disagreement discrepancy on the train and test sets. We find that the negative cross-entropy, being a concave function, results in very low discrepancy. The D-BAT loss (Equation (1)) is reasonably competitive with our loss (Equation (2)) on average, seemingly because it gets very high discrepancy on a subset of shifts. This suggests that it may be particularly suited for a specific type of distribution shift, though it is less good overall. Though the averages are reasonably close, the samples are not independent, so we run a paired t-test and we find that the increases to average train and test discrepancies achieved by $\ell_{\mathrm{dis}}$ are significant at levels $p = 0.024$ and $p = 0.009$ , respectively. With enough holdout data, a reasonable approach would be to split the data in two: one subset to validate critics trained on either of the two losses, and another to evaluate the discrepancy of whichever one is ultimately selected. + +# C Exploration of the Validity Score + +To experiment with reducing the complexity of the class $\mathcal{H}$ , we evaluate $\mathrm{DIS}^2$ on progressively fewer top principal components (PCs) of the features. Precisely, for features of dimension $d$ , we evaluate $\mathrm{DIS}^2$ on the same features projected onto their top $d / k$ components, for $k \in [1, 4, 16, 32, 64, 128]$ (Figure C.2). We see that while projecting to fewer and fewer PCs does reduce the error bound value, unlike the logits it is a rather crude way to reduce complexity of $\mathcal{H}$ , meaning at some point it goes too far and results in invalid error bounds. 
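The projection used in this ablation is standard PCA truncation; a minimal sketch (our own helper, not the paper's code), where `k` controls the fraction of components kept:

```python
import numpy as np

def project_top_pcs(features, k):
    """Project d-dimensional features onto their top d // k principal
    components (a crude way to reduce the complexity of the class H)."""
    X = features - features.mean(axis=0)  # center the data
    # Rows of Vt are the principal directions, ordered by singular value.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    d = X.shape[1]
    # Keep at least one component even for very large k.
    return X @ Vt[: max(d // k, 1)].T
```

For `k = 1` this is a rotation of the full feature space; as `k` grows the representation becomes progressively lower-dimensional, mirroring the sweep over $k \in [1, 4, 16, 32, 64, 128]$ described above.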
+ +![](images/e91b9c557f659a42f63b417e3880156823bbcebc41b99abe8de82be6d665ac0c.jpg) + +![](images/b76f9e2eb3edcc1faf346d7aab9ee00acc8c5087713ce346b6d21750b497aa82.jpg) + +![](images/d7c69f746e752f60de016319a735a4d034cb89e0a126e1b1d3371ea965617c2d.jpg) + +![](images/d8a3f80628db295a61e9751377749e3afb7c1abf677c02a9ea3917060b05aea4.jpg) +Figure C.2: $\mathbf{Dis}^2$ bound as fewer principal components are kept. Reducing the number of top principal components crudely reduces complexity of $\mathcal{H}$ —this leads to lower error estimates, but at some point the bounds become invalid for a large fraction of shifts. + +![](images/38e820cfdc5b75ea8bd6b8ecf90cd0430dbad228f281e8f116ca33484a27f8fb.jpg) + +![](images/8b38aa9417577da78d83dcffc00863ca8cecdb53ff39341bdf5c94b8992abc3f.jpg) + +However, during the optimization process we observe that around when this violation occurs, the task of training a critic to both agree on $S$ and disagree on $\mathcal{T}$ goes from "easy" to "hard". Figure C.3 shows that on the full features, the critic rapidly ascends to maximum agreement on $S$ , followed by slow decay (due to both overfitting and learning to simultaneously disagree on $\mathcal{T}$ ). As we drop more and more components, this optimization becomes slower. + +![](images/ddb65ee4e1f93937d08f4086e5f1a23ee1f12b98969ad00df16cb774d408ac26.jpg) +Figure C.3: Agreement on one shift between $\hat{h}$ and $h'$ on $\hat{S}$ during optimization. We observe that as the number of top PCs retained drops, the optimization occurs more slowly and less monotonically. For this particular shift, the bound becomes invalid when keeping only the top $1/128$ components, depicted by the brown line. + +We therefore design a "validity score" intended to capture this phenomenon which we refer to as the cumulative $\ell_1$ ratio. This is defined as the maximum agreement achieved, divided by the cumulative sum of absolute differences in agreement across all epochs up until the maximum was achieved. 
+ +Formally, let $\{a_i\}_{i=1}^T$ represent the agreement between $h'$ and $\hat{h}$ after epoch $i$ , i.e. $1 - \epsilon_{\hat{S}}(\hat{h}, h_i')$ , and define $m := \arg \max_{i \in [T]} a_i$ . The cumulative $\ell_1$ ratio is then $\frac{a_m}{a_1 + \sum_{i=2}^{m} |a_i - a_{i-1}|}$ . Thus, if the agreement rapidly ascends to its maximum without ever going down over the course of an epoch, this ratio will be equal to 1, and if it non-monotonically ascends then the ratio will be significantly less. This definition was simply the first metric we considered which approximately captures the behavior we observed; we expect it could be greatly improved. + +![](images/3e06834c4038b36906f8d4295f8828d5a825eda8c272e7cd6271884bb86d8805.jpg) +Figure C.4: Cumulative $\ell_1$ ratio versus error prediction gap. Despite its simplicity, the ratio captures the information encoded in the optimization trajectory, roughly linearly correlating with the tightness and validity of a given prediction. It is thus a useful metric for identifying the ideal number of top PCs to use. + +Figure C.4 displays a scatter plot of the cumulative $\ell_1$ ratio versus the difference in estimated and true error for $\mathrm{DIS}^2$ evaluated on the full range of top PCs. A negative value implies that we have underestimated the error (i.e., the bound is not valid). We see that even this very simply metric roughly linearly correlates with the tightness of the bound, which suggests that evaluating over a range of top PC counts and only keeping predictions whose $\ell_1$ ratio is above a certain threshold can improve raw predictive accuracy without reducing coverage by too much. Figure C.5 shows that this is indeed the case: compared to $\mathrm{DIS}^2$ evaluated on the logits, keeping all predictions above a score threshold can produce more accurate error estimates, without too severely underestimating error in the worst case. 
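The ratio defined above is straightforward to compute from the recorded per-epoch agreements; a direct transcription (our own sketch):

```python
def cumulative_l1_ratio(agreements):
    """Validity score: max agreement divided by the cumulative sum of
    absolute epoch-to-epoch changes up to (and including) the maximum.

    agreements: per-epoch agreement values a_1..a_T between the critic h'
    and the classifier h-hat on the source sample.
    """
    m = max(range(len(agreements)), key=lambda i: agreements[i])
    denom = agreements[0] + sum(
        abs(agreements[i] - agreements[i - 1]) for i in range(1, m + 1)
    )
    return agreements[m] / denom
```

A strictly monotone ascent to the maximum yields a ratio of exactly 1, while any dip along the way inflates the denominator and pushes the ratio below 1, matching the behavior described above.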
+ +![](images/53e8aa61db94b8860398356031d75bcdf22c6534f8af99b52cc2cb6538ba1883.jpg) +Figure C.5: $\mathbf{DIS}^2$ bounds and MAE / coverage as the cumulative $\ell_1$ ratio threshold is lowered. Values in parenthesis are (MAE / coverage). By only keeping predictions with ratio above a varying threshold, we can smoothly interpolate between bound validity and raw error prediction accuracy. + +![](images/6562c98d23e17324936d23959d33a14efee8047b386d1703dcaf975633d00013.jpg) + +![](images/6002fa53a589bec0cd93ec8828cac96738ab6add629646b73a88970a0faab713.jpg) + +![](images/451c441a4934aae7444905b8710ba2e6e0420d69b549c660f22e061cff745911.jpg) + +# D Making Baselines More Conservative with LOOCV + +To more thoroughly compare $\mathrm{DIS}^2$ to prior estimation techniques, we consider a strengthening of the baselines which may give them higher coverage without too much cost to prediction accuracy. Specifically, for each desired coverage level $\alpha \in [0.9, 0.95, 0.99]$ , we use all but one of the datasets to learn a parameter to either scale or shift a method's predictions enough to achieve coverage $\alpha$ . We then evaluate this scaled or shifted prediction on the distribution shifts of the remaining dataset, and we repeat this for each one. + +The results, found in Table D.4, demonstrate that prior methods can indeed be made to have much higher coverage, although as expected their MAE suffers. Furthermore, they still underestimate error on the tail distribution shifts by quite a bit, and they rarely achieve the desired coverage on the heldout dataset—though they usually come reasonably close. In particular, ATC [21] and COT [40] do well with a shift parameter, e.g. at the desired coverage $\alpha = 0.95$ ATC matches $\mathrm{DIS}^2$ in MAE and gets $94.4\%$ coverage (compared to $98.9\%$ by $\mathrm{DIS}^2$ ). However, its conditional average overestimation is quite high, almost $9\%$ . 
COT gets much lower overestimation (particularly for higher coverage levels), and it also appears to suffer less on the tail distribution shifts in the sense that $\alpha = 0.99$ does not induce nearly as high MAE as it does for ATC. However, at that level it only achieves $95.6\%$ coverage, and it averages almost $5\%$ accuracy overestimation on the shifts it does not correctly bound (compared to $0.1\%$ by $\mathrm{DIS}^2$ ). Also, its MAE is still substantially higher than $\mathrm{DIS}^2$ , despite getting lower coverage. Finally, we evaluate the scale/shift approach on our $\mathrm{DIS}^2$ bound without the lower order term, but based on the metrics we report there appears to be little reason to prefer it over the untransformed version, one of the baselines, or the original $\mathrm{DIS}^2$ bound. + +Taken together, these results imply that if one's goal is predictive accuracy and tail behavior is not important (worst $\sim 10\%$ ), ATC or COT will likely get reasonable coverage with a shift parameter—though they still significantly underestimate error on a non-negligible fraction of shifts. If one cares about the long tail of distribution shifts, or prioritizes being conservative at a slight cost to average accuracy, $\mathrm{DIS}^2$ is clearly preferable. Finally, we observe that the randomness which determines which shifts are not correctly bounded by $\mathrm{DIS}^2$ is "decoupled" from the distributions themselves under Theorem 3.6, in the sense that it is an artifact of the random samples, rather than a property of the distribution (recall Figure 4(b)). This is in contrast with the shift/scale approach which would produce almost identical results under larger sample sizes because it does not account for finite sample effects. This implies that some distribution shifts are simply "unsuitable" for prior methods because they do not satisfy whatever condition these methods rely on, and observing more samples will not remedy this problem. 
It is clear that working to understand these conditions is crucial for reliability and interpretability, since we are not currently able to identify which distributions are suitable a priori. + +
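The shift adjustment used in this comparison can be sketched as a one-parameter calibration fit on held-in shifts (our own simplification; the scale variant is analogous, multiplying rather than adding):

```python
import numpy as np

def fit_shift(pred_errs, true_errs, alpha):
    """Find a constant c so that shifted predictions pred + c upper-bound
    the true error on roughly an alpha fraction of the held-in shifts."""
    gaps = np.asarray(true_errs) - np.asarray(pred_errs)
    # The alpha-quantile of the prediction gaps is the smallest shift
    # achieving coverage alpha on the data used to fit it.
    return np.quantile(gaps, alpha)
```

The learned `c` is then applied to the held-out dataset's predictions, which is why the achieved coverage there can fall short of the target $\alpha$, as Table D.4 shows.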
| Method | Adjustment | MAE (↓), α = 0.9 / 0.95 / 0.99 | Coverage (↑), α = 0.9 / 0.95 / 0.99 | Overest. (↓), α = 0.9 / 0.95 / 0.99 |
| --- | --- | --- | --- | --- |
| AC | none | 0.106 | 0.122 | 0.118 |
| AC | shift | 0.153 / 0.201 / 0.465 | 0.878 / 0.922 / 0.956 | 0.119 / 0.138 / 0.149 |
| AC | scale | 0.195 / 0.221 / 0.416 | 0.911 / 0.922 / 0.967 | 0.135 / 0.097 / 0.145 |
| DoC | none | 0.105 | 0.167 | 0.122 |
| DoC | shift | 0.158 / 0.200 / 0.467 | 0.878 / 0.911 / 0.956 | 0.116 / 0.125 / 0.154 |
| DoC | scale | 0.195 / 0.223 / 0.417 | 0.900 / 0.944 / 0.967 | 0.123 / 0.139 / 0.139 |
| ATC NE | none | 0.067 | 0.289 | 0.083 |
| ATC NE | shift | 0.117 / 0.150 / 0.309 | 0.900 / 0.944 / 0.978 | 0.072 / 0.088 / 0.127 |
| ATC NE | scale | 0.128 / 0.153 / 0.357 | 0.889 / 0.933 / 0.978 | 0.062 / 0.074 / 0.144 |
| COT | none | 0.069 | 0.256 | 0.085 |
| COT | shift | 0.115 / 0.140 / 0.232 | 0.878 / 0.944 / 0.956 | 0.049 / 0.065 / 0.048 |
| COT | scale | 0.150 / 0.193 / 0.248 | 0.889 / 0.944 / 0.956 | 0.074 / 0.066 / 0.044 |
| $\mathrm{DIS}^2$ (w/o δ) | none | 0.083 | 0.756 | 0.072 |
| $\mathrm{DIS}^2$ (w/o δ) | shift | 0.159 / 0.169 / 0.197 | 0.889 / 0.933 / 0.989 | 0.021 / 0.010 / 0.017 |
| $\mathrm{DIS}^2$ (w/o δ) | scale | 0.149 / 0.168 / 0.197 | 0.889 / 0.933 / 0.989 | 0.023 / 0.021 / 0.004 |
| $\mathrm{DIS}^2$ (δ=$10^{-2}$) | none | 0.150 | 0.989 | 0.001 |
| $\mathrm{DIS}^2$ (δ=$10^{-3}$) | none | 0.174 | 1.000 | 0.000 |
+ +Table D.4: MAE, coverage, and conditional average overestimation for the strengthened baselines with a shift or scale parameter on non-domain-adversarial representations. Because a desired coverage $\alpha$ is only used when an adjustment is learned, "none"—representing no adjustment—does not vary with $\alpha$ . + +# E Proving that Assumption 3.5 Holds for Some Datasets + +Here we describe how the equivalence of Assumption 3.5 and the bound in Theorem 3.6 allow us to prove that the assumption holds with high probability. By repeating essentially the same proof as Theorem 3.6 in the other direction, we get the following corollary: + +Corollary E.1. If Assumption 3.5 does not hold, then with probability $\geq 1 - \delta$ + +$$ +\epsilon_ {\hat {\mathcal {T}}} (\hat {h}) > \epsilon_ {\hat {\mathcal {S}}} (\hat {h}) + \hat {\Delta} (\hat {h}, h ^ {\prime}) - \sqrt {\frac {2 (n _ {S} + n _ {T}) \log {1 / \delta}}{n _ {S} n _ {T}}}. +$$ + +Note that the concentration term here is different from Theorem 3.6 because we are bounding the empirical target error, rather than the true target error. The reason for this change is that now we can make direct use of its contrapositive: + +Corollary E.2. With probability $\geq 1 - \delta$ over the randomness of the samples $\hat{S}$ and $\hat{T}$ , if it is the case that + +$$ +\epsilon_ {\hat {\mathcal {T}}} (\hat {h}) \leq \epsilon_ {\hat {\mathcal {S}}} (\hat {h}) + \hat {\Delta} (\hat {h}, h ^ {\prime}) - \sqrt {\frac {2 (n _ {S} + n _ {T}) \log {1 / \delta}}{n _ {S} n _ {T}}}, +$$ + +then Assumption 3.5 must hold. + +We evaluate this bound on non-domain-adversarial shifts with $\delta = 10^{-6}$ . As some of the BREEDS shifts have as few as 68 test samples, we restrict ourselves to shifts with $n_T \geq 500$ to ignore those where the finite-sample term heavily dominates; this removes a little over $20\%$ of all shifts. 
Among the remainder, we find that the bound in Corollary E.2 holds $55.7\%$ of the time when using full features and $25.7\%$ of the time when using logits. This means that for these shifts, we can be essentially certain that Assumption 3.5—and therefore also Assumption 3.3—is true. + +Note that the fact that the bound is not violated for a given shift does not at all imply that the assumption is not true. In general, the only rigorous way to prove that Assumption 3.5 does not hold would be to show that for a fixed $\delta$ , the fraction of shifts for which the bound in Theorem 3.6 does not hold is larger than $\delta$ (in a manner that is statistically significant under the appropriate hypothesis test). Because this never occurs in our experiments, we cannot conclude that the assumption is ever false. At the same time, the fact that the bound does hold at least $1 - \delta$ of the time does not prove that the assumption is true—it merely suggests that it is reasonable and that the bound should continue to hold in the future. This is why it is important for Assumption 3.5 to be simple and intuitive, so that we can trust that it will persist and anticipate when it will not. + +However, Corollary E.2 allows us to make a substantially stronger statement. In fact, it says that for any distribution shift, with enough samples, we can prove a posteriori whether or not Assumption 3.5 holds, because the gap between these two bounds will shrink with increasing sample size. + +![](images/06dc5ee15402e070220c28922e4739735d4a7684b26dad93740059e26ce28ae8.jpg) +F Figure 1 Stratified by Training Method +Figure F.6: Error prediction stratified by training method. Stars denote $\mathrm{DIS}^2$ , circles are ATC NE. We see that $\mathrm{DIS}^2$ maintains its validity across different training methods. 
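Checking the condition of Corollary E.2 for a given shift requires only the empirical quantities and sample sizes; a sketch (hypothetical helper name):

```python
import math

def certifies_assumption(err_src, err_tgt, disc, n_s, n_t, delta=1e-6):
    """Return True if the empirical quantities satisfy the Corollary E.2
    condition, certifying (w.p. >= 1 - delta over the samples) that
    Assumption 3.5 holds.

    err_src, err_tgt: empirical source / target errors of h-hat
    disc: empirical disagreement discrepancy Delta-hat(h-hat, h')
    """
    slack = math.sqrt(2 * (n_s + n_t) * math.log(1 / delta) / (n_s * n_t))
    return err_tgt <= err_src + disc - slack
```

Because the concentration term shrinks as $n_S$ and $n_T$ grow, the check becomes less conservative with more samples, which is the basis for the a-posteriori argument below.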
\ No newline at end of file diff --git a/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/images.zip b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9d0670147688a31b5a7aa6707aefa8a884a40a8e --- /dev/null +++ b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df7933a4e692855f656a7685c7fe74a076ca4555413b3f97aac82d5f643096c8 +size 865411 diff --git a/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/layout.json b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..fb959ac5d6d3536b9e8148cff824694c484f470d --- /dev/null +++ b/almostprovableerrorboundsunderdistributionshiftviadisagreementdiscrepancy/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34547826d9b97efa9c0f28e2fef5fa637050cc1776d8e465235f3a2a25855d67 +size 740867 diff --git a/alogicforexpressinglogprecisiontransformers/5a9b1f37-2fc5-42e9-a2c5-4ee15f41f2ce_content_list.json b/alogicforexpressinglogprecisiontransformers/5a9b1f37-2fc5-42e9-a2c5-4ee15f41f2ce_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..83e9a59dfceb1ca600430ddc8061f0f2e3643dcc --- /dev/null +++ b/alogicforexpressinglogprecisiontransformers/5a9b1f37-2fc5-42e9-a2c5-4ee15f41f2ce_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e19620db807347c52351abb1b18c30719a45fb0cd23c868f08df5e4b948167ec +size 79384 diff --git a/alogicforexpressinglogprecisiontransformers/5a9b1f37-2fc5-42e9-a2c5-4ee15f41f2ce_model.json b/alogicforexpressinglogprecisiontransformers/5a9b1f37-2fc5-42e9-a2c5-4ee15f41f2ce_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..bbd87ceba721004b34c1b240bbdd040d9bef3136 --- /dev/null +++ b/alogicforexpressinglogprecisiontransformers/5a9b1f37-2fc5-42e9-a2c5-4ee15f41f2ce_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f541b38fc1bca03e4a15efe76edd2dc08946d2872414fc84939565697c0261e0 +size 101540 diff --git a/alogicforexpressinglogprecisiontransformers/5a9b1f37-2fc5-42e9-a2c5-4ee15f41f2ce_origin.pdf b/alogicforexpressinglogprecisiontransformers/5a9b1f37-2fc5-42e9-a2c5-4ee15f41f2ce_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a1616d386a7fb48cc05125bd35535ca9644b8346 --- /dev/null +++ b/alogicforexpressinglogprecisiontransformers/5a9b1f37-2fc5-42e9-a2c5-4ee15f41f2ce_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ada324f61083c5c3b80436720f6b7e46a47769dd434de3d815374de333b0a02e +size 714647 diff --git a/alogicforexpressinglogprecisiontransformers/full.md b/alogicforexpressinglogprecisiontransformers/full.md new file mode 100644 index 0000000000000000000000000000000000000000..535d924848521f3d8a1439b4d4696d93b2d4f559 --- /dev/null +++ b/alogicforexpressinglogprecisiontransformers/full.md @@ -0,0 +1,309 @@ +# A Logic for Expressing Log-Precision Transformers + +William Merrill +New York University +willlm@nyu.edu + +Ashish Sabharwal Allen Institute for AI ashishs@allenai.org + +# Abstract + +One way to interpret the reasoning power of transformer-based language models is to describe the types of logical rules they can resolve over some input text. Recently, Chiang et al. (2023) showed that finite-precision transformer classifiers can be equivalently expressed in a generalization of first-order logic. However, finite-precision transformers are a weak transformer variant because, as we show, a single head can only attend to a constant number of tokens and, in particular, cannot represent uniform attention. 
Since attending broadly is a core capability for transformers, we ask whether a minimally more expressive model that can attend universally can also be characterized in logic. To this end, we analyze transformers whose forward pass is computed in $\log n$ precision on contexts of length $n$ . We prove any log-precision transformer classifier can be equivalently expressed as a first-order logic sentence that, in addition to standard universal and existential quantifiers, may also contain majority-vote quantifiers. This is the tightest known upper bound and first logical characterization of log-precision transformers.

Any log-precision transformer can be re-expressed as a sentence in $\mathsf{FO}(\mathsf{M})$ logic, e.g.:

$$
\begin{array}{c}
M i. \, \mathsf{a}(i) \wedge M j. \, \mathsf{b}(j) \wedge \neg \exists k, \ell . \, (\mathsf{a}(k) \wedge \mathsf{b}(\ell) \wedge \ell < k) \\
(\text{$m$ $\mathsf{a}$'s followed by $m$ $\mathsf{b}$'s, i.e., } \mathsf{a}^m \mathsf{b}^m) \\
\texttt{aaaabbbb} \ \checkmark \qquad \texttt{aaabbbb} \ \times \qquad \texttt{baaaabbb} \ \times
\end{array}
$$

Figure 1: A first-order logic with majority $(\mathsf{FO}(\mathsf{M}))$ sentence for $\mathsf{a}^m\mathsf{b}^m$ . In addition to standard $\forall$ and $\exists$ quantifiers over string indices, $\mathsf{FO}(\mathsf{M})$ allows majority quantifiers (M) that take a majority-vote across indices. $\mathsf{a}(i)$ indicates whether token $i$ is a (and analogously for b). We prove $\mathsf{FO}(\mathsf{M})$ can express any function computed by a log-precision transformer.

# 1 Introduction

The incredible success of deep learning models, especially very large language and vision transformers with hundreds of billions of parameters (Brown et al., 2020; Thoppilan et al., 2022), has come at the cost of increasingly limited understanding of how these models actually work and when they might fail.
This raises many concerns, such as around their safe deployment, fairness, and accountability. Does the inner working of a transformer defy description in a simpler symbolic system that we can better understand? Or can transformer computation be described using a familiar symbolic formalism? Understanding how to view the reasoning process of a transformer in terms of logic could potentially expand our ability to formally reason about their behavior over large domains of inputs.

Chiang et al. (2023) provide a partial answer to this question, showing that any finite-precision transformer classifier can be expressed as a sentence in a variant of first-order logic with counting quantifiers and modular arithmetic over input position indices. Specifically, counting quantifiers take the form $\exists^{=x}i:\phi(i)$ where $x$ is a count variable and $i$ is a position index. They show that there exists a single sentence in this logic that computes the output of the transformer for any input string of any length. This is a powerful result because it shows that a simple logical formalism is fully sufficient to describe all the complexity of a massive finite-precision transformer. It also provides an upper bound on finite-precision transformers: any function that cannot be defined in first-order counting logic with modular indexing cannot be expressed by the transformer.

However, Chiang et al.'s result is not fully general because it relies on the transformer precision being fixed with respect to the transformer's context length. More generally, as we will demonstrate in Section 3, finite-precision transformers are a fundamentally weak variant of transformers: crucially, they cannot express uniform attention patterns, which are a core algorithmic primitive of transformers (Weiss et al., 2018). In fact, we show that they can only attend to a constant number of input positions, which may be seen as a rather limited generalization of hard attention. For example, Chiang et al.
show that their logic for finite-precision transformers cannot recognize $\mathsf{a}^m\mathsf{b}^m$ , whereas in practice, transformers can (Bhattamishra et al., 2020). This motivates studying a formal model of transformers where precision grows with context length (which we formalize as log-precision), making it possible to capture uniform attention as well as other broad attention patterns. This is useful both for recognizing $\mathsf{a}^m\mathsf{b}^m$ and more generally for reasoning globally over the input.

We demonstrate that log-precision transformer classifiers can also be expressed as sentences in a simple logic: first-order logic with majority, or FO(M), over input strings (Barrington et al., 1990). In addition to standard existential and universal quantifiers, FO(M) has majority quantifiers that return true iff more than half the propositions they quantify are true. It also allows comparing input positions (e.g., $\ell < k$ in Figure 1) and accessing their individual bits. Our main result is as follows:

Theorem 1 (Informal version of Theorem 2). For any log-precision transformer $\mathcal{T}$ , there exists an FO(M) sentence $\phi$ that computes the same function as $\mathcal{T}$ , i.e., $\phi(x) = \mathcal{T}(x)$ for any input string $x$ .

Upper bound. Theorem 2 shows transformers with more than finite precision can also be expressed in a simple extension of first-order logic, going beyond Chiang et al. (2023)'s result. On the other hand, $\mathsf{FO}(\mathsf{M})$ is a strict superset of Chiang et al.'s counting logic; it can simulate counting quantifiers (see Section 2.2) and allows non-modular position comparisons. Thus, handling a more general class of transformers powerful enough to express uniform attention slightly weakens the bound.
Still, our result constitutes (to our knowledge) the tightest upper bound on log-precision transformers and the first defined in terms of logic, building on a line of complexity-theoretic work analyzing the power of transformers (Hahn, 2020; Merrill et al., 2022; Liu et al., 2023; Merrill & Sabharwal, 2023). In particular, $\mathsf{FO}(\mathsf{M})$ strengthens the upper bound of log-space-uniform $\mathsf{T}\mathsf{C}^0$ by Merrill & Sabharwal (2023). The refined bound adds to the limitations of transformers identified by Merrill & Sabharwal (2023): for example, it establishes unconditionally that log-precision transformers cannot compute boolean matrix permanents, and shows that, in a certain formal sense, integer division and matching parentheses are among the formally hardest problems that transformers can solve (see Section 4).

Mechanistic interpretability. Beyond providing an upper bound on the reasoning problems solvable by transformers, we believe Theorem 1 could guide the design of "transformer-complete" programming languages similar in spirit to RASP (Weiss et al., 2018). RASP is a declarative programming language designed to capture transformer computation, and Lindner et al. (2023) implement a compiler from RASP into transformers. Unlike RASP, FO(M) can provably express any transformer (Theorem 1), which we believe justifies using it (or an equivalent but more user-friendly variant) as a target language for programs extracted from transformers.

Similar to a decision tree, an $\mathsf{FO}(\mathsf{M})$ sentence has the interpretable property that each sub-sentence corresponds to a constraint on input (see Figure 1). In contrast, the internal modules of a transformer or circuit do not satisfy this since they map between arbitrary latent spaces.
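To make the constraint-per-sub-sentence reading concrete, the Figure 1 sentence for $\mathsf{a}^m\mathsf{b}^m$ can be evaluated by brute force, with one check per conjunct (our own sketch; we read M as "holds for at least half of the positions", which is the convention the Figure 1 sentence requires):

```python
def majority(pred, n):
    # M i. phi(i): phi holds for at least half of positions 1..n
    return 2 * sum(pred(i) for i in range(1, n + 1)) >= n

def accepts(s):
    """Evaluate the Figure 1 FO(M) sentence for a^m b^m on string s."""
    n = len(s)
    a = lambda i: s[i - 1] == "a"
    b = lambda i: s[i - 1] == "b"
    return (
        majority(a, n)        # M i. a(i)
        and majority(b, n)    # M j. b(j)
        and not any(          # ¬∃k,ℓ. a(k) ∧ b(ℓ) ∧ ℓ < k
            a(k) and b(l) and l < k
            for k in range(1, n + 1)
            for l in range(1, n + 1)
        )
    )
```

Each conjunct is an independent, human-readable check on the input, which is exactly the decision-tree-like property discussed above; a transformer's hidden layers offer no such decomposition.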
We speculate this property could facilitate interpreting models by translating them to $\mathsf{FO}(\mathsf{M})$ , though a careful exploration of the algorithmic and HCI aspects of this idea lies outside the current paper's theoretical scope.

Contributions. Our results shed new light on how to view the computation inside transformers in terms of logic. Specifically, our main contributions are to prove the following:

1. Fixed-precision transformers can only attend to a fixed number of tokens, and those with precision less than $\log \log n$ cannot uniformly attend over length- $n$ contexts (Proposition 1).
2. Log-precision transformer classifiers can be expressed as sentences in $\mathsf{FO}(\mathsf{M})$ (Theorem 2).

# 2 Preliminaries: Transformers and FO(M)

Let $\Sigma$ be a finite alphabet. We denote by $^*$ the Kleene star operator, i.e., for a set $X$ , $X^{*} = \bigcup_{n = 0}^{\infty}X^{n}$ . We will view transformers and $\mathsf{FO}(\mathsf{M})$ sentences both as functions from $\Sigma^{*}\rightarrow \{0,1\}$ , and show that any function a transformer computes can also be computed by an $\mathsf{FO}(\mathsf{M})$ sentence.

# 2.1 Transformers

We view the transformer precision $p$ as a function of the context length $n$ , writing $p(n)$ where appropriate. Let $\mathbb{D}_p$ be the datatype of $p$ -precision floats, i.e., tuples $\langle m, e \rangle$ where $m, e$ are signed integers together taking $p$ bits. Using $|x|$ to mean the size of integer $x$ , a float represents the value $m \cdot 2^{e - |m| + 1}$ . Following Appendix A of Merrill & Sabharwal (2023), we define $p$ -truncated addition $(+, \sum)$ , multiplication $(\cdot)$ , and division $(/)$ over $\mathbb{D}_p$ . We now define a transformer encoder binary classifier over $\mathbb{D}_p$ , largely adopting Merrill & Sabharwal's notation.

Definition 1.
A $p$-precision transformer $\mathcal{T}$ with $h$ heads, $d$ layers, model dimension $m$ (divisible by $h$), and feedforward width $w$ is specified by:

1. An embedding function $\phi : \Sigma \times \mathbb{N} \to \mathbb{D}_p^m$ whose form is defined in Appendix C.1;
2. For each $1 \leq \ell \leq d$ and $1 \leq k \leq h$, a head similarity function $s_k^\ell: \mathbb{D}_p^m \times \mathbb{D}_p^m \to \mathbb{D}_p$ whose form is defined in Appendix C.2;
3. For each $1 \leq \ell \leq d$ and $1 \leq k \leq h$, a head value function $v_k^\ell : \mathbb{D}_p^m \to \mathbb{D}_p^{m/h}$ whose form is defined in Appendix C.2;
4. For each $1 \leq \ell \leq d$, an activation function $f^{\ell}:(\mathbb{D}_p^{m / h})^h\times \mathbb{D}_p^m\to \mathbb{D}_p^m$ whose form is defined in Appendix C.3 and implicitly uses the feedforward dimension $w$;
5. An output classifier head $\kappa : \mathbb{D}_p^m \to \{0, 1\}$ whose form is defined in Appendix C.4.

Definition 2. We define the transformer computation and output as a function of an input $x \in \Sigma^n$.

1. Embeddings: For $1 \leq i \leq n$, $\mathbf{h}_i^0 = \phi(x_i, i)$.
2. Self Attention: For $0 \leq \ell \leq d - 1$, (multihead) self-attention block $\ell + 1$ computes $h$ attention heads:

$$
\mathbf{a}_{i,k}^{\ell+1} = \sum_{j=1}^{n} \frac{s_{k}^{\ell+1}\left(\mathbf{h}_{i}^{\ell}, \mathbf{h}_{j}^{\ell}\right)}{Z_{i,k}} \cdot v_{k}^{\ell+1}\left(\mathbf{h}_{j}^{\ell}\right), \quad \text{where } Z_{i,k} = \sum_{j=1}^{n} s_{k}^{\ell+1}\left(\mathbf{h}_{i}^{\ell}, \mathbf{h}_{j}^{\ell}\right).
$$

3. Activation Block: For $0 \leq \ell \leq d - 1$, activation block $\ell + 1$ aggregates the head outputs to produce $\mathbf{h}^{\ell + 1}$:

$$
\mathbf{h}_{i}^{\ell+1} = f^{\ell+1}(\mathbf{a}_{i,1}^{\ell+1}, \dots, \mathbf{a}_{i,h}^{\ell+1}, \mathbf{h}_{i}^{\ell}).
$$

4.
Classifier Head: The network prediction on $x \in \Sigma^n$ is $\kappa(\mathbf{h}_n^d)$.

We say $\mathcal{T}(x) = \kappa (\mathbf{h}_{|x|}^{d})$ and $L_{\mathcal{T}}$ is the language of $x\in \Sigma^{*}$ such that $\mathcal{T}(x) = 1$. We refer to $\phi, s_k^\ell, v_k^\ell, f^\ell$, and $\kappa$ as the core functions in $\mathcal{T}$, and to embeddings, self attention, activation, and the classifier head as the components of $\mathcal{T}$. We write $\theta_{\mathcal{T}}$ for the concatenated vector of parameters for the functions $\phi, s_k^\ell, v_k^\ell, f^\ell$, and $\kappa$, for all $1\leq \ell \leq d$ and $1\leq k\leq h$.

We define a log-precision transformer as one where $p$ is at most $O(\log n)$ and is a "simple" function, i.e., computable in $O(\log n)$ time. In our model, the weights $\theta_{\mathcal{T}}$ defining $\mathcal{T}$ are fixed, but the precision $p$ used to compute the forward pass can depend on $n$ (see Footnote 13 for a generalization).

# 2.2 First-Order Logic with Majority

As we will show, transformers can be translated into sentences in $\mathsf{FO}(\mathsf{M})$. But what do such sentences look like? Informally, $\mathsf{FO}(\mathsf{M})$ is first-order logic extended to also have majority (M) quantifiers. Following Barrington et al. (1990), our sense of $\mathsf{FO}(\mathsf{M})$ takes strings in $\Sigma^{*}$ as input and returns 0 or 1 to define a formal language. In this setting, quantifiers range over indices (positions) into the string. Predicates can be applied to the variables introduced by these quantifiers.

Definition 3 (FO(M) index). Indices in FO(M) are integers denoting positions in the input string:

1. The constant 1, representing the first token's position.
2. The constant $n$, representing the last token's position.
3. Strings (e.g., $i, j, k$) representing variables ranging over positions 1 to $n$.
4.
Any index built by applying addition or subtraction to other indices.

Definition 4 (FO(M) formula). Formulas in FO(M) are constructed as follows:

1. Let $\Sigma$ be a finite alphabet. For each $\sigma \in \Sigma$ and any index $i$, $\sigma(i)$, e.g., $\mathsf{a}(i)$, is a formula that is true if the $i$-th input token is $\sigma$.
2. For any indices $i, j$, the formula $\operatorname{bit}(i, j)$ returns the $j$-th bit of the binary expansion of $i$.
3. For two indices $i, j$, $i = j$, $i \leq j$, and $i \geq j$ are formulas with their conventional semantics.
4. For two formulas $\phi, \psi$, $\phi \wedge \psi$ and $\phi \lor \psi$ are formulas with their conventional semantics.
5. For any formula $\phi$ (which may refer to $i$), the following are valid formulas:

(a) $\exists i.\phi$ means some value of $i$ in $[1,n]$ makes $\phi$ true.
(b) $\forall i.\phi$ means all values of $i$ in $[1,n]$ make $\phi$ true.
(c) $\mathsf{M}i.\phi$ means $\geq n/2$ values of $i$ in $[1, n]$ make $\phi$ true.

We use parentheses where necessary to disambiguate the order of operations. General formulas may contain free (i.e., unbound) variables: e.g., $\forall i. i = j$. A sentence is an $\mathsf{FO}(\mathsf{M})$ formula $\phi$ with no free variables. Sentences represent functions from $\Sigma^{*}$ to $\{0,1\}$ and thus define a formal language.

Extensions. Beyond Definition 4, $\mathsf{FO}(\mathsf{M})$ can express counting and threshold quantifiers in terms of majority quantifiers (Barrington et al., 1990). Given a formula $\phi$, a counting quantifier creates a new formula $\exists^k i : \phi$ that is true iff $\phi$ is true across exactly $k$ values of $i$. Threshold quantifiers $\exists^{\leq k}$ and $\exists^{\geq k}$ work similarly but check if $\phi$ is true for at most or at least $k$ values of $i$.
In addition, we show in Appendix A that $\mathsf{FO}(\mathsf{M})$ can express conditional majority quantifiers, which create a formula $\mathsf{M}i : \phi[\psi]$ that is true iff $\psi$ is true for at least half the values of $i$ that make $\phi$ true.

# 2.2.1 Examples

To illustrate the formalism, we provide example languages definable in $\mathsf{FO}(\mathsf{M})$ with $\Sigma = \{\mathsf{a},\mathsf{b}\}$. First, we show two languages that do not require majority quantifiers to express:

Example 1 (Bigram matching). Strings containing the bigram $\mathsf{ab}$: $\exists i[\mathsf{a}(i)\wedge \mathsf{b}(i + 1)]$.

Example 2 (Skip-bigram matching). Strings containing the long-distance pattern $\mathsf{a}\ldots \mathsf{b}$ (cf. "induction heads" of Elhage et al. 2021): $\exists i [\mathsf{b}(i) \wedge \exists j [j \leq i \wedge \mathsf{a}(j)]]$.

In contrast, Example 3 is a simple example that requires majority quantifiers (Furst et al., 1984):

Example 3 (Majority). Strings with more b's than a's: $\mathsf{M}i[\mathsf{b}(i)]$.

Figure 1 showed how $\mathsf{FO}(\mathsf{M})$ can be used to recognize patterns like $\mathsf{a}^m\mathsf{b}^m$. A similar idea can be used to model parentheses matching (Barrington et al., 1990):

Example 4 (1-Dyck). The well-balanced parentheses language (with a opening and b closing):

$$
\forall i. \left(\exists a, b. \left(\left(\exists^{a} j: \mathsf{a}(j) \wedge j \leq i\right) \wedge \left(\exists^{b} j: \mathsf{b}(j) \wedge j \leq i\right) \wedge b \leq a\right)\right) \wedge \mathsf{M}i. \mathsf{a}(i) \wedge \mathsf{M}j. \mathsf{b}(j).
$$

Example 5 (Integer Arithmetic). Iterated addition (i.e., summing $n$ $n$-bit numbers), iterated multiplication, and division (Hesse, 2001) can all be expressed in $\mathsf{FO}(\mathsf{M})$.
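The example formulas above can be evaluated directly. The following Python sketch (ours, not part of the formalism) mirrors the quantifier semantics of Definition 4; function names are our own.

```python
# Direct evaluations of Examples 1-3 over strings in {a, b}*.
# Quantifiers range over 1-based positions; M i. phi holds when at least
# n/2 values of i make phi true (Definition 4).

def bigram(x: str) -> bool:
    """Example 1: exists i [ a(i) and b(i+1) ]."""
    return any(x[i] == "a" and x[i + 1] == "b" for i in range(len(x) - 1))

def skip_bigram(x: str) -> bool:
    """Example 2: exists i [ b(i) and exists j [ j <= i and a(j) ] ]."""
    return any(x[i] == "b" and "a" in x[: i + 1] for i in range(len(x)))

def majority_b(x: str) -> bool:
    """Example 3: M i [ b(i) ] -- at least n/2 positions hold a b."""
    return 2 * x.count("b") >= len(x)
```

Note that, per Definition 4(c), `majority_b` uses the threshold $\geq n/2$, so over $\Sigma = \{\mathsf{a},\mathsf{b}\}$ it admits ties when $n$ is even.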
# 3 Finite Precision Transformers Cannot Attend Universally

Attention heads that spread attention weight uniformly across inputs have been observed in transformer LMs (Merrill et al., 2021) and make soft attention fundamentally more powerful than hard attention (Hao et al., 2022; Merrill et al., 2022). In particular, uniform attention is an important primitive that transformers can use to solve tasks involving counting (Bhattamishra et al., 2020; Chiang et al., 2023), taking majority votes (Merrill et al., 2022), and matching parentheses or sorting (Weiss et al., 2021). A transformer with sufficient precision can easily implement uniform attention by setting the keys and queries across all positions to be constant. However, attention heads with finite precision cannot represent uniform attention over long sequences as a consequence of the following:

Proposition 1. Let $\mathbf{a} \in \mathbb{R}^n$ s.t. $\sum_{i=1}^{n} a_i = 1$ and $\tilde{\mathbf{a}}$ its nearest $p$-precision float approximation.

1. Then the number of nonzero entries of $\tilde{\mathbf{a}}$ is upper bounded by its precision: specifically, $\tilde{\mathbf{a}}$ has at most $2^{2^p}$ nonzero entries.
2. Moreover, if $p < \log \log n$ and $\mathbf{a}$ is uniform (i.e., $a_{i} = 1 / n$), then $\tilde{\mathbf{a}} = \vec{0}$.

Proof. The smallest positive value representable by a $p$-precision float is $2^{-(p_m - 2 + 2^{p_e - 1})}$, which is bounded below by $2^{-2^p + 1}$. Letting $k = 2^{2^p}$, it holds that $2^{-2^p + 1} = 2 / k$. So if $\tilde{a}_i$ gets the minimum value, then $a_i \geq 1 / k$. Since $\sum_{i} a_i = 1$, there can be at most $k$ indices satisfying this property. This implies there can be at most $k$ nonzero entries in $\tilde{\mathbf{a}}$. If $n > k$ and $\mathbf{a}$ is uniform, $1 / n$ is less than half of the minimum representable value of $2 / k$. Thus, $\tilde{\mathbf{a}} = \vec{0}$.
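The rounding argument can be illustrated numerically. The sketch below (ours) uses only the lower bound $2^{-2^p+1}$ on the smallest positive float, not the full $\langle m, e \rangle$ datatype.

```python
# Sketch of Proposition 1's rounding step. With p total bits, the smallest
# positive p-precision float is at least 2**(-2**p + 1) = 2/k for
# k = 2**(2**p), so the uniform weight 1/n rounds to 0 under nearest-value
# rounding once 1/n is below half that bound. Simplified model, not the
# paper's exact <m, e> datatype.

def min_positive_bound(p: int) -> float:
    """Lower bound 2**(-2**p + 1) on the smallest positive float."""
    return 2.0 ** (-(2 ** p) + 1)

def rounded_uniform_weight(n: int, p: int) -> float:
    """Uniform weight 1/n after rounding toward the nearest representable value."""
    if 1.0 / n < min_positive_bound(p) / 2:  # closer to 0 than to 2/k
        return 0.0
    return 1.0 / n

# With p = 2 (so k = 16): uniform attention over n = 8 tokens survives
# rounding, but over n = 32 tokens every weight rounds to zero.
```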
Proposition 1 says that fixed-precision transformers are artificially limited because they can only attend over bounded-length windows, making them similar to hard-attention transformers (Hao et al., 2022). Moreover, they cannot compute uniform attention over contexts of length $n$ with less than $\log \log n$ precision. This explains why Chiang et al. (2023) prove that finite-precision transformers cannot recognize $\mathsf{a}^m\mathsf{b}^m$, while in practice transformers have been shown to learn even its harder variant $\mathsf{a}^m\mathsf{b}^m\mathsf{c}^m$ with long context lengths (Bhattamishra et al., 2020). In essence, their upper bound only applies in the asymptotic regime when $n > 2^{2^p}$.

In contrast, transformers in practice have enough precision both to compute uniform attention and recognize $\mathsf{a}^m\mathsf{b}^m$ on practical context lengths. More concretely, the bfloat16 representation allows uniform attention over $2^{6 + 2^7}\approx 10^{42}$ tokens and normal float16 allows $2^{10 + 2^4}\approx 10^8$ tokens, both well above the typical context window of transformers. This motivates a formal model of transformers with enough precision to compute uniform attention and recognize languages such as $\mathsf{a}^m\mathsf{b}^m$.

# 4 Main Result: Expressing Log-Precision Transformers in FO(M)

By Proposition 1, precision must grow with the context length $n$ ($p > \log \log n$) for a transformer to compute uniform attention and other attention patterns with unbounded range, like practical transformers. In this paper, we analyze any transformer with up to $O(\log n)$ precision. We show that any function computable by log-precision transformers can be expressed in $\mathsf{FO}(\mathsf{M})$:

Theorem 2. Let $\mathcal{T}$ be a log-precision transformer with a parameter vector $\theta_{\mathcal{T}}$ fixed for all context lengths $n$.
Then, there exists an $\mathsf{FO}(\mathsf{M})$ sentence $\phi$ that computes the same function as $\mathcal{T}$, i.e., $\phi(x) = \mathcal{T}(x)$ for any input string $x$.

Theorem 2 is the tightest known upper bound for log-precision transformers and shows that it is still possible to characterize transformers in a simple variant of first-order logic even with log-precision and uniform attention. As alluded to earlier, Theorem 2 immediately implies that any problem complete for $\mathsf{FO}(\mathsf{M})$ (or a larger class) is also transformer-hard. Since integer division and Dyck language membership are known to be $\mathsf{FO}(\mathsf{M})$-complete (Hesse, 2001; Aaronson et al., 2022), it follows, perhaps surprisingly, that the entire computation of any transformer on input $x$ can be reduced to a single integer division or a finite number of Dyck-language queries:

Corollary 2.1. Let $\mathcal{T}$ be a transformer satisfying Theorem 2. For any input $x$, there exist first-order definable integers $a, b$, and $i$ (dependent on $\mathcal{T}$ and $x$) such that $\mathcal{T}(x)$ equals the $i$-th bit of $\lfloor a / b \rfloor$. For any $x$, there also exist first-order definable strings $w_1, \ldots, w_m$ such that $\mathcal{T}(x)$ is first-order definable in terms of the membership of the $w_i$'s in $k$-Dyck.

# 5 Preliminaries for Proving Theorem 2

# 5.1 Computation Graphs

A computation graph $G$ over a datatype $\mathbb{D} \subseteq \{0,1\}^*$ and a countable set of primitive functions $\mathfrak{F} \subseteq \mathbb{D}^* \times \mathbb{D}$ is a directed acyclic graph where:

1. Each node is labelled by a node type: a function $f \in \mathfrak{F}$ computed by this node.
2. Each edge represents a value in $\mathbb{D}$ flowing as output from one node into another node. We consider the edges flowing into node $j$ to have an order, i.e., be numbered.
3.
$\mathfrak{F}$ contains the special symbol input, which designates $k$ nodes as input nodes. We refer to $k$ as the arity and assume w.l.o.g. that nodes $0,\ldots ,k - 1$ are inputs.
4. A single node is taken as the output node (w.l.o.g., the node with the largest index).

A computation graph $G$ of arity $k$ parameterizes a function $\mathbb{D}^k \to \mathbb{D}$ in the standard way: the input nodes are assigned the input values, and the value of each node is computed (traversing the graph in a bottom-up topological order) as a function of the values of its children until the output node receives a value. The value of the output node is considered the output of the function. It is worth noting that computation graphs can only process inputs of bounded length. To process arbitrary-length inputs, we will need to generalize them to computation graph families (Section 5.2).

For a computation graph $G$, $\mathrm{size}(G)$ is the number of nodes, $\mathrm{depth}(G)$ is the length of the longest path from an input node to the output, and $\mathrm{arity}(G, i)$ is the number of inputs to node $i$.

Threshold circuits. A threshold circuit is a special case of a computation graph where $\mathbb{D} = \{0,1\}$ and $\mathfrak{F}$ is the set of threshold functions of the form $\theta_{\leq \Delta}$ and $\theta_{\geq \Delta}$ over $\mathbb{D}^*$, defined as follows: $\theta_{\leq \Delta}(x) = 1$ if $\sum_{\sigma \in x}\sigma \leq \Delta$ and 0 otherwise; $\theta_{\geq \Delta}(x)$ is defined analogously. Typical AND, OR, and NOT gates are special cases of threshold gates, as is an IDENTITY gate.

We allow nodes with the $k' \geq 1$ largest indices to all be designated as (ordered) output nodes. A threshold circuit with arity $k$ and $k'$ output nodes will thus be a function from $\{0,1\}^k$ to $\{0,1\}^{k'}$. This will be convenient when simulating neural network components that output multiple bits.
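The threshold gates $\theta_{\leq \Delta}$ and $\theta_{\geq \Delta}$ and their familiar special cases can be written down directly; the following sketch is ours and the gate names are our own.

```python
# Threshold gates over bit tuples: theta_ge(d)(x) = 1 iff sum(x) >= d,
# and theta_le(d) analogously. AND, OR, NOT, and MAJORITY arise as the
# special cases noted above.

def theta_ge(delta: int):
    return lambda *bits: int(sum(bits) >= delta)

def theta_le(delta: int):
    return lambda *bits: int(sum(bits) <= delta)

def AND(*bits):  # all inputs 1: theta_{>= arity}
    return theta_ge(len(bits))(*bits)

def MAJ(*bits):  # at least n/2 of the n inputs are 1
    return theta_ge((len(bits) + 1) // 2)(*bits)

OR = theta_ge(1)   # at least one input is 1
NOT = theta_le(0)  # single input: 1 iff the input is 0
```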
+ +We will find it useful to consider threshold circuits as a kind of compilation target for computation graphs: in other words, we will be concerned with simulating computation graphs defined over more complex functions and data types into threshold circuits. + +# 5.2 Computation Graph Families + +A computation graph family over $\mathbb{D}$ and $\mathfrak{F}$ is a mapping from $n\in \mathbb{N}$ to a computation graph $G_{n}$ for processing inputs of size $n$ . Thus, $\mathcal{G}$ defines a function from $\mathbb{D}^*\to \mathbb{D}$ , where $\mathcal{G}(x) = G_{|x|}(x)$ . + +Intuitively, computation graph families are useful because they generalize computation graphs to define functions over unbounded-length strings as inputs. + +Size, depth, and arity. For computation graph families, the size, depth, and arity become functions of the input length $n$ : $\text{size}_{\mathcal{G}}(n) = \text{size}(G_n)$ , $\text{depth}_{\mathcal{G}}(n) = \text{depth}(G_n)$ , $\text{arity}_{\mathcal{G}}(n,i) = \text{arity}(G_n,i)$ . + +Uniformity. The infinite set $\mathcal{G}$ can be alternatively represented by two functions: + +1. $\mathrm{node}_{\mathcal{G}}(n,i)$ , which returns the type of node $i$ in $G_{n}$ if $i \leq \mathrm{size}(G_{n})$ , and $\emptyset$ otherwise. For example, if node $i$ computes the logical AND of its inputs, then $\mathrm{node}_{\mathcal{G}}(n,i) = \wedge$ . +2. edge $_{\mathcal{G}}(n, i, j)$ , which returns the argument index of $i$ into node $j$ if $G_{n}$ contains an edge $i \to j$ and $-1$ otherwise. edge $_{\mathcal{G}}(n, i, j)$ only needs to be defined over $i, j < \text{size}(G_{n})$ . For example, if $G_{n}$ contains a node $j$ with three incoming edges, the second of which comes from node $i$ , then edge $_{\mathcal{G}}(n, i, j) = 1$ . + +A pair of algorithms implementing these two functions uniquely specifies a computation graph family, as it enables building the computation graph $G_{n}$ for any $n$ . 
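As a concrete instance of these two functions (a toy family of ours, not from the paper), consider the family where $G_n$ has input nodes $0,\ldots,n-1$ and a single AND output node $n$:

```python
# node and edge functions for a toy computation graph family: G_n has
# n input nodes 0..n-1 and one AND output node n. These two functions
# uniquely specify the family in the sense described above.

def node(n: int, i: int):
    """Type of node i in G_n, or None (the 'empty' symbol) past size(G_n)."""
    if i < n:
        return "input"
    if i == n:
        return "AND"
    return None

def edge(n: int, i: int, j: int) -> int:
    """Argument index of i into j, or -1 if G_n has no edge i -> j."""
    if j == n and 0 <= i < n:
        return i  # input i is the i-th (ordered) argument of the output
    return -1
```

Both functions amount to a constant number of comparisons on binary-encoded arguments, comfortably within the efficiency constraints discussed next.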
Uniform computation graph families (generalizing uniform circuits; cf. Arora & Barak, 2009) are families where node $\mathcal{G}$ and edge $\mathcal{G}$ can be computed efficiently, i.e., under some constraints on space or time: + +Definition 5 (Uniformity). A computation graph family $\mathcal{G}$ is $T(n)$ -uniform iff $\mathrm{node}_{\mathcal{G}}(n,i)$ and $\mathrm{edge}_{\mathcal{G}}(n,i,j)$ can be computed by a deterministic Turing machine in time $T(n)$ . We focus on log-uniform computation graph families: i.e., where $T(n) = \mathrm{O}(\log n)$ .16 + +Threshold circuit families. These are simply families of threshold circuits. We will be simulating computation graph families with threshold circuit families. Log-uniform $\mathsf{TC}^0$ is the class of languages recognized by log-uniform constant-depth, poly-size threshold circuit families. See Merrill & Sabharwal (2023); Liu et al. (2023); Arora & Barak (2009) for more background on $\mathsf{TC}^0$ and circuits. + +# 6 Proof of Theorem 2 + +The idea is to simulate a transformer with a log-uniform $\mathsf{TC}^0$ circuit family. Since log-uniform $\mathsf{TC}^0 = \mathsf{FO}(\mathsf{M})$ , this would imply any transformer can be expressed in $\mathsf{FO}(\mathsf{M})$ . First, we note that transformers are log-uniform computation graphs: + +Lemma 1 (Proof in Appendix B.1). A transformer $\mathcal{T}$ is a log-uniform computation graph family where $\mathfrak{F}$ contains embedding, self-attention, feedforward, and output components. + +Further, each core module of the transformer can be simulated by a log-uniform $\mathsf{TC}^0$ circuit family: + +Lemma 2 (Proof in Appendix B.2). Let $\mathcal{T}$ be a log-precision transformer with fixed parameters $\theta_{\mathcal{T}}$ . Then each component in $\mathfrak{F}$ is computable in log-uniform $\mathsf{TC}^0$ . 
+ +Intuitively, we can now simulate a transformer in log-uniform $\mathsf{TC}^0$ by just simulating each of its components with a threshold circuit and routing their inputs and outputs appropriately. However, we will need two more technical conditions to verify that this construction is indeed log-uniform: + +Lemma 3 (Proof in Appendix B.3). Let $\mathcal{T}$ be a log-precision transformer with fixed parameters $\theta_{\mathcal{T}}$ . There exists a function $\mathrm{bsize}(n)$ that is a power of 2 and computable in $\mathrm{O}(\log n)$ time s.t. $\mathrm{size}_{\mathcal{F}}(n) \leq \mathrm{bsize}(n)$ for all $\mathcal{F} \in \mathfrak{F}$ . + +Lemma 4 (Proof in Appendix B.4). If $\mathcal{F}$ is a log-uniform $\mathsf{TC}^0$ family and $\mathrm{size}_{\mathcal{F}}(n)\leq \mathrm{bsize}(n)$ , there exists a log-uniform $\mathsf{TC}^0$ family $\mathcal{F}'$ s.t. $\mathcal{F}(x) = \mathcal{F}'(x)$ for all $x$ and $\mathrm{size}_{\mathcal{F}'}(n) = \mathrm{bsize}(n)$ . + +Combined, Lemmas 3 and 4 show that each $\mathcal{F} \in \mathfrak{F}$ is computable by a log-uniform $\mathsf{TC}^0$ family with size $\mathrm{bsize}(n)$ that is a power of 2 and computable in time $\mathrm{O}(\log n)$ . We will show these conditions imply a transformer $\mathcal{T}$ can be simulated by a $\mathsf{TC}^0$ family $\mathcal{C}$ (Theorem 3) and moreover that $\mathcal{C}$ is log-uniform (Corollary 3.2). By the equivalence of log-uniform $\mathsf{TC}^0$ and $\mathrm{FO}(\mathsf{M})$ (Barrington et al., 1990), we then conclude that any log-precision transformer can be expressed in $\mathrm{FO}(\mathsf{M})$ . + +
Algorithm 1 $\mathrm{node}_{\mathcal{C}}(n,i)$: return the type of gate $i$ in circuit $C_n$.

    1: i' ← bnode(n, i)
    2: F ← node_G(n, i')
    3: if F ≠ ∅ then
    4:     return node_F(n, i − bstart(n, i'))
    5: else return ∅

Algorithm 2 $\mathrm{edge}_{\mathcal{C}}(n,i,j)$: if $C_n$ contains an edge $i \to j$, return the argument number of that edge; otherwise, return −1.

    1: i' ← bnode(n, i)
    2: j' ← bnode(n, j)
    3: s_i ← bstart(n, i')
    4: s_j ← bstart(n, j')
    5: if i' = j' then
    6:     F ← node_G(n, i')
    7:     return edge_F(n, i − s_i, j − s_j)
    8: else if edge_G(n, i', j') ≥ 0 then
    9:     b_i ← i − (s_i + bsize(n, i') − p(n))
    10:    b_j ← j − (s_j + p(n) · edge_G(n, i', j'))
    11:    if b_i = b_j < p(n) then return j − s_j
    12:    else return −1
    13: else return −1
# 6.1 Simulating Computation Graph Families with Circuit Families

We give algorithms that take a computation graph family and define a circuit family simulating it. Intuitively, the algorithms create contiguous blocks of circuit gates simulating each node in the computation graph and route inputs and outputs between blocks appropriately.

Block mapping. This algorithm depends on a block mapping, which is an implementation of the following three functions:

1. The block node $\mathrm{bnode}(n, i)$: the index of the node that gate $i$'s block is simulating.
2. The block start $\mathrm{bstart}(n,i^{\prime})$: the smallest gate index in the block simulating node $i^{\prime}$.
3. The block size $\mathrm{bsize}(n,i')$: the number of gates in the block simulating node $i'$.

Further, we enforce that a valid block mapping must satisfy that, for all $i$, with $i' = \mathrm{bnode}(n,i)$,

$$
\operatorname{bstart}\left(n, i^{\prime}\right) \leq i < \operatorname{bstart}\left(n, i^{\prime}\right) + \operatorname{bsize}\left(n, i^{\prime}\right).
$$

Let $\mathcal{G}$ be a computation graph whose primitive functions are computable by log-uniform threshold circuits. We can identify each primitive function with a log-uniform threshold circuit family $\mathcal{F}$ that computes it, where the first $\mathrm{arity}_{\mathcal{F}}(n)$ gates are IDENTITY gates reserved for taking input. For such a graph, $\mathrm{node}_{\mathcal{G}}$ can be taken to return a symbol identifying a circuit family $\mathcal{F}$. In this case, our algorithm requires that, for all $i'$, the block size of $i'$ must match the size of the circuit for the type of block $i'$, i.e., $\mathrm{bsize}(n,i') = \mathrm{size}_{\mathrm{node}_{\mathcal{G}}(n,i')}(n)$. These properties let us meaningfully identify a graph node $i'$ with a block of nodes that will simulate it.
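As a concrete instance (ours), take every block to have the same size $\mathrm{bsize}(n) = 2^b$; the three block-mapping functions and the validity condition then become:

```python
# A block mapping with uniform block size bsize(n) = 2**b (a sketch; the
# argument n is elided since bsize here depends only on b). Because bsize
# is a power of 2, bnode and bstart reduce to right and left shifts.

def bnode(i: int, b: int) -> int:
    """bnode(n, i) = floor(i / bsize(n)) via a right shift."""
    return i >> b

def bstart(i_prime: int, b: int) -> int:
    """bstart(n, i') = i' * bsize(n) via a left shift."""
    return i_prime << b

def valid(i: int, b: int) -> bool:
    """Validity condition: bstart(i') <= i < bstart(i') + bsize."""
    i_prime = bnode(i, b)
    return bstart(i_prime, b) <= i < bstart(i_prime, b) + (1 << b)
```

For example, with $b = 3$ (blocks of 8 gates), gate 21 lies in block 2, which starts at gate 16.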
This intuition enables us to develop Algorithms 1 and 2 for constructing a uniform threshold circuit family from a uniform computation graph family.

Theorem 3. Let $\mathcal{G}$ be a computation graph over a finite set of node types $\mathfrak{F}$, where each $\mathcal{F} \in \mathfrak{F}$ is specified by a log-uniform circuit family. Let bnode, bstart, and bsize be a valid block mapping in the sense above. Then Algorithms 1 and 2 define a circuit family $\mathcal{C}$ such that

1. $\mathcal{C}$ and $\mathcal{G}$ compute the same $\mathbb{D}_p^* \to \mathbb{D}_p$ function (let the final $p$ gates of each $C_n$ be its output).
2. $\mathrm{depth}_{\mathcal{C}}(n)\leq \mathrm{depth}_{\mathcal{G}}(n)\cdot \max_{\mathcal{F}}\mathrm{depth}_{\mathcal{F}}(n).$
3. $\mathrm{size}_{\mathcal{C}}(n)\leq \mathrm{size}_{\mathcal{G}}(n)\cdot \max_{\mathcal{F}}\mathrm{size}_{\mathcal{F}}(n).$

Proof. Assume w.l.o.g. that the gates of $\mathcal{C}$ are topologically ordered. We show by induction over circuit gates $j$ (with $j' = \mathrm{bnode}(n, j)$) that:

1. For all $i' < j'$, the last $p$ nodes of block $i'$ store the value of node $i'$.
2. For all $i$ such that $\mathrm{bstart}(n,j')\leq i\leq j$, gate $i$ of $\mathcal{C}$ (as a function of the input nodes of $j'$) computes gate $i - \mathrm{bstart}(n,j')$ of $\mathrm{node}_{\mathcal{G}}(n,j')$.

Base case. We have two circuits with no gates, so the premises are trivially satisfied.

Inductive case. Assume the premises hold up to $j$. We will show they hold for $j + 1$. Let $\mathcal{F} = \mathrm{node}_{\mathcal{G}}(n, j')$. By Premise 1, we know that the last $p$ nodes of block $i'$ store the output of node $i'$, for $i' < j'$. By Algorithm 2, for each $i'$ such that $\mathrm{edge}_{\mathcal{G}}(n,i',j') = k$ with $0 \leq k < \mathrm{arity}_{\mathcal{F}}(n)$, gates $kp$ through $(k + 1)p - 1$ of block $j'$ will copy the final $p$ gates of block $i'$.
Thus, the first $p \cdot \mathrm{arity}_{\mathcal{F}}(n)$ gates of block $j'$ store the inputs to node $j'$.

At this point, we use Premise 2 to conclude that the first $j - \mathrm{bstart}(n,j')$ gates of block $j'$ compute the same function as the first $j - \mathrm{bstart}(n,j')$ gates of $\mathcal{F}$ with respect to this input. Thus, we just need to show that gate $j + 1$ is also correct. Within Algorithm 2, we fall in case $i' = j'$, meaning that gate $j + 1$ of block $j'$ gets the same inputs as gate $j + 1$ of $\mathcal{F}$. By Algorithm 1, the type of gate $j + 1$ in block $j'$ is the type of gate $j + 1$ of $\mathcal{F}$. Thus, gate $j + 1$ in block $j'$ computes the same function of the input gates as gate $j + 1$ in $\mathcal{F}$. If $j + 1 = \mathrm{bsize}(n,j')$, we conclude that the final $p$ gates of block $j'$ store the output of node $j'$.

Let $\mathsf{XC}^0$ denote any family of constant-depth, poly-size circuits, including $\mathsf{AC}^0$ and $\mathsf{TC}^0$.

Corollary 3.1. Let $\mathcal{G}$ be a constant-depth, poly-size computation graph family over a finite $\mathfrak{F}$. If every node type in $\mathfrak{F}$ can be computed by $\mathsf{XC}^0$ circuits, the function computed by $\mathcal{G}$ is in $\mathsf{XC}^0$.

Since a transformer has constant depth and polynomial size, Corollary 3.1 lets us easily recover prior results about hard-attention transformers (Hao et al., 2022; Hahn, 2020) and saturated attention transformers (Merrill et al., 2022) using a common framework. All one has to do is show that all individual node types in such transformers can be computed by $\mathsf{AC}^0$ and $\mathsf{TC}^0$ circuits, respectively.

Theorem 3 established that Algorithms 1 and 2 construct a circuit family that simulates $\mathcal{G}$. With the right block mapping, $\mathcal{C}$ will be log-uniform as long as $\mathcal{G}$ and its node types are log-uniform.

Corollary 3.2.
Let $\mathcal{G}$ be a log-uniform, constant-depth computation graph family over a finite $\mathfrak{F}$ , where each $\mathcal{F} \in \mathfrak{F}$ is specified by a log-uniform $\mathsf{TC}^0$ family with $\text{size}_{\mathcal{F}}(n) = \text{bsize}(n)$ that is a power of 2 computable in $\mathrm{O}(\log n)$ time. Then $\mathcal{G}$ can be simulated by a log-uniform $\mathsf{TC}^0$ family $\mathcal{C}$ that obeys the size and depth properties of Theorem 3. + +Proof. Let $\mathcal{C}$ be the circuit family defined by Algorithms 1 and 2 given $\mathcal{G}$ and the following block mapping: $\mathtt{bnode}(n,i) = \lfloor i / \mathtt{bsize}(n)\rfloor$ , $\mathtt{bstart}(n,i') = i'\cdot \mathtt{bsize}(n)$ , $\mathtt{bsize}(n,i') = \mathtt{bsize}(n)$ . Since $\mathtt{bsize}(n)$ is a power of 2, $\mathtt{bnode}$ and $\mathtt{bstart}$ are reducible to left and right shifting over $\mathrm{O}(\log n)$ -bit integers, which can be implemented in $\mathrm{O}(\log n)$ time. Thus, each block mapping function is computable in time $\mathrm{O}(\log n)$ . Since $\mathtt{node}_{\mathcal{G}}$ and $\mathtt{edge}_{\mathcal{G}}$ are just calling functions computable in time $\mathrm{O}(\log n)$ with constant overhead, we conclude that $\mathcal{C}$ , the circuit family they define, is log-uniform, and it is already known to simulate $\mathcal{G}$ with constant depth and polynomial size by Theorem 3. + +# 7 Conclusion + +We proved that any log-precision transformer classifier can be translated to an $\mathsf{FO}(\mathsf{M})$ sentence that computes the same function (on all inputs of any length). This result comes by first simulating a transformer with a highly uniform threshold circuit family, and then leveraging the established equivalence of log-uniform circuits and $\mathsf{FO}(\mathsf{M})$ . 
Transformers and other neural nets are often discussed in contrast with symbolic models based on logical formalisms (Garnelo & Shanahan, 2019)—an immediate implication of our result is that it is possible to express the inner workings of transformers also in a simple logic, challenging the premise of a rigid division between symbolic and neural models. Our results also provide the tightest known upper bound on log-precision transformers. + +While it is striking that a full transformer can be translated to a sentence in a logic as simple as $\mathsf{FO}(\mathsf{M})$ , we believe the bound is not tight. In particular, we conjecture that it is possible to simulate any transformer with an $\mathsf{FO}(\mathsf{M})$ sentence of quantifier depth of at most 2, which could be proven by establishing a hierarchy theorem describing the $\mathsf{FO}(\mathsf{M})$ quantifier depth needed to simulate a $\mathsf{TC}^0$ family of a certain size. It would also be an interesting extension to translate real transformers to $\mathsf{FO}(\mathsf{M})$ sentences. In this sense, we believe our results provide a theoretical foundation to guide mechanistic interpretability work (cf. Weiss et al., 2021; Lindner et al., 2023). + +Our findings provide a novel view into transformer classifiers and their limits. It would be exciting for future research to extend our results to account for other common practical uses of transformers, such as for long-form generation, chain-of-thought reasoning, and in-context learning. + +# Acknowledgments + +We thank Paul Beame, David Chiang, anonymous reviewers, and researchers at the Allen Institute for AI for feedback. WM was supported by an NSF graduate research fellowship and in part by NSF award 1922658. + +# References + +Aaronson, S., Kuperberg, G., and Habryka, O. TC0: Constant depth threshold circuits, 2022. URL https://complexityzoo.net/Complexity_Zoo:T#tc0. +Arora, S. and Barak, B. Computational Complexity: A Modern Approach. 
Cambridge University Press, 2009.
Barrington, D. A. M., Immerman, N., and Straubing, H. On uniformity within $\mathsf{NC}^1$. Journal of Computer and System Sciences, 41(3):274-306, 1990.
Bhattamishra, S., Ahuja, K., and Goyal, N. On the ability and limitations of transformers to recognize formal languages. In EMNLP, 2020.
Brent, R. P. and Zimmermann, P. Modern computer arithmetic, volume 18. Cambridge University Press, 2010.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In NeurIPS, 2020.
Chiang, D., Cholak, P., and Pillay, A. Tighter bounds on the expressivity of transformer encoders. ICML, 2023.
Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., DasSarma, N., Drain, D., Ganguli, D., Hatfield-Dodds, Z., Hernandez, D., Jones, A., Kernion, J., Lovitt, L., Ndousse, K., Amodei, D., Brown, T., Clark, J., Kaplan, J., McCandlish, S., and Olah, C. A mathematical framework for transformer circuits. Transformer Circuits Thread, 2021. https://transformer-circuits.pub/2021/framework/index.html.
Furst, M. L., Saxe, J. B., and Sipser, M. Parity, circuits, and the polynomial-time hierarchy. Mathematical Systems Theory, 17:13-27, 1984.
Garnelo, M. and Shanahan, M. Reconciling deep learning with symbolic artificial intelligence: representing objects and relations. Current Opinion in Behavioral Sciences, 29:17-23, 2019. ISSN 2352-1546.
Hahn, M. Theoretical limitations of self-attention in neural sequence models. TACL, 8:156-171, 2020.
Hao, Y., Angluin, D., and Frank, R.
Formal language recognition by hard attention transformers: Perspectives from circuit complexity. TACL, 10:800-810, 2022. +Hesse, W. Division is in uniform $\mathsf{TC}^0$ . In International Colloquium on Automata, Languages, and Programming, pp. 104-114. Springer, 2001. +Hunter, P., Bouyer, P., Markey, N., Ouaknine, J., and Worrell, J. Computing rational radical sums in uniform TC0. Foundations of Software Technology and Theoretical Computer Science, 2010. +Lindner, D., Kramár, J., Rahtz, M., McGrath, T., and Mikulik, V. Tracr: Compiled transformers as a laboratory for interpretability. arXiv, abs/2301.05062, 2023. +Liu, B., Ash, J. T., Goel, S., Krishnamurthy, A., and Zhang, C. Transformers learn shortcuts to automata. In ICLR, 2023. + +Merrill, W. and Sabharwal, A. The parallelism tradeoff: Limitations of log-precision transformers. TACL, 11:531-545, 2023. +Merrill, W., Ramanujan, V., Goldberg, Y., Schwartz, R., and Smith, N. A. Effects of parameter norm growth during transformer training: Inductive bias from gradient descent. In EMNLP, 2021. +Merrill, W., Sabharwal, A., and Smith, N. A. Saturated transformers are constant-depth threshold circuits. TACL, 10, 2022. +Pérez, J., Marinković, J., and Barceló, P. On the Turing completeness of modern neural network architectures. In ICLR, 2019. +Thoppilan, R., Freitas, D. D., Hall, J., Shazeer, N. M., Kulshreshtha, A., Cheng, H.-T., Jin, A., Bos, T., Baker, L., Du, Y., Li, Y., Lee, H., Zheng, H., Ghafouri, A., Menegali, M., Huang, Y., Krikun, M., Lepikhin, D., Qin, J., Chen, D., Xu, Y., Chen, Z., Roberts, A., Bosma, M., Zhou, Y., Chang, C.-C., Krivokon, I. A., Rusch, W. J., Pickett, M., Meier-Hellstern, K. S., Morris, M. R., Doshi, T., Santos, R. D., Duke, T., Søraker, J. H., Zevenbergen, B., Prabhakaran, V., Diaz, M., Hutchinson, B., Olson, K., Molina, A., Hoffman-John, E., Lee, J., Aroyo, L., Rajakumar, R., Butryna, A., Lamm, M., Kuzmina, V. 
O., Fenton, J., Cohen, A., Bernstein, R., Kurzweil, R., Aguera-Arcas, B., Cui, C., Croak, M., Chi, E., and Le, Q. LaMDA: Language models for dialog applications. arXiv, abs/2201.08239, 2022.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In NeurIPS, 2017.
Weiss, G., Goldberg, Y., and Yahav, E. On the practical computational power of finite precision RNNs for language recognition. In ACL, 2018.
Weiss, G., Goldberg, Y., and Yahav, E. Thinking like transformers. In ICML, 2021.
Xiong, R., Yang, Y., He, D., Zheng, K., Zheng, S., Xing, C., Zhang, H., Lan, Y., Wang, L., and Liu, T.-Y. On layer normalization in the transformer architecture. In ICML, 2020.

# A Long $N$ -step Surrogate Stage
Reward for Deep Reinforcement Learning

Junmin Zhong

Arizona State University

Ruofan Wu

Arizona State University

Jennie Si *

Arizona State University

# Abstract

We introduce a new stage reward estimator named the long $N$ -step surrogate stage (LNSS) reward for deep reinforcement learning (RL). It aims at mitigating the high variance problem, which has been shown to impede successful convergence of learning, hurt task performance, and hinder applications of deep RL in continuous control problems. In this paper we show that LNSS, which utilizes a long reward trajectory of future steps, provides consistent performance improvement measured by average reward, convergence speed, learning success rate, and variance reduction in $Q$ values and rewards. Our evaluations are based on a variety of environments in DeepMind Control Suite and OpenAI Gym by using LNSS piggybacked on baseline deep RL algorithms such as DDPG, D4PG, and TD3. We show that the LNSS reward has enabled good results that have previously been challenging to obtain by deep RL. Our analysis also shows that LNSS exponentially reduces the upper bound on the variances of $Q$ values relative to respective single-step methods.

# 1 Introduction

Reinforcement learning (RL) shows great promise as a theory of learning in complex dynamic tasks [1, 2]. For continuous control problems involving high-dimensional state and control spaces, several deep reinforcement learning (DRL) algorithms, such as deep deterministic policy gradient (DDPG) [3], proximal policy optimization (PPO) [4], soft actor-critic (SAC) [5], D4PG [6], and twin delayed DDPG (TD3) [7], have demonstrated their great potential.

However, the high variance problem [8, 9] in DRL still poses great challenges for it to be effective in solving complex continuous control problems.
On one hand, this problem is intrinsic to the trial-and-error nature of reinforcement learning, which causes low data efficiency, slow learning, or even instability of learning and poor task performance [10, 11]. On the other hand, high variances are often caused by several factors, such as variances in value estimation [12], variances resulting from different random seeds [10], and variances due to noisy stochastic rewards or corrupted sparse rewards [13].

Different approaches have been proposed to address the high variance problem, from the original $\mathrm{TD}(\lambda)$ algorithm to $Q(\sigma)$ [14] as a generalization of $\mathrm{TD}(\lambda)$ [15]. Rollout methods [16, 17, 18] have enabled the success of some of the most important milestone cases in machine learning history, such as Backgammon [17] and AlphaGo [19]. Although these algorithms have been demonstrated in discrete state and control problems, variants of rollout are believed to be effective also in continuous control tasks. Modern DRL algorithms such as Rainbow [20] integrate $n$ -step learning on top of DQN, resulting in faster learning and improved final performance on Atari 2600 games. D4PG [6] uses the same $n$ -step method to update target values and has been shown to be effective in continuous control tasks. PPO [4] uses a truncated version of generalized advantage estimation (GAE) to update the policy, which reduces policy variance [21]. However, these methods do not explicitly account for noisy rewards, whose uncertainties have been reported to degrade model/controller performance [22].

In this paper, we propose a new algorithm, namely the long $N$ -step surrogate stage (LNSS) reward estimator. LNSS aims at providing variance reduction for DRL methods in continuous control problems, even under noisy reward signals or corrupted sparse rewards. LNSS is simple, and can be easily piggybacked on DRL algorithms.
It is based on a surrogate reward $r'$ obtained from a trajectory of rewards over the future $N$ steps. In turn, $r'$ is used in the $Q$ value update, where the target function then has the same form as in single-step TD methods. Because a long $N$ -step reward sequence carries more environment information, it is expected to be more advantageous to learning than a short $n$ -step reward trajectory.

We show theoretically that the LNSS method exponentially reduces the upper bound on the variance of the $Q$ value relative to a respective single-step method. We then conduct extensive experiments in selected environments in DMC and GYM where most current DRL algorithms, such as DDPG, D4PG, and TD3, typically struggle to obtain good results [23]. We consider uniform random noise of different magnitudes to show how LNSS can benefit a variety of DRL algorithms while the performances of other methods degrade significantly. We further show how LNSS can still be effective in environments where only sparse rewards are available.

Contributions. 1) We introduce a new, simple yet effective surrogate reward $r'$ that interpolates between single-step TD methods and Monte-Carlo (MC) methods [15], to trade off the strengths of the two extremes. 2) We theoretically show that the proposed method reduces the upper bound on the variance of the $Q$ value by using a long trajectory of rewards over the future $N$ steps. 3) Extensive experiments on DMC and GYM benchmarks show that LNSS robustly benefits the learning performance of a variety of DRL algorithms, even under reward noise or sparse rewards. Mainly, LNSS consistently reduces variances in the learned value function and achieves robust results across different random seeds in several different types of continuous control problems. Please check our code

# 2 Related Work

The " $n$ -step" methods.
Here, as in the literature, $n$ -step methods refer to those involving a trajectory of rewards over multiple future steps in DRL algorithms for speeding up learning and reducing variance [24, 25, 6]. They usually proceed as follows. First, collect $n$ -step returns to form a discounted sum of rewards [26, 27]. Then, perform an $n$ -step return backup to update the value function [15]. It was empirically suggested that $n$ -step methods perform better than single-step methods since $n$ -step returns propagate rewards to relevant state-action pairs more efficiently [24, 25].

Using longer trajectories with a larger $n$ is expected to improve learning performance [28]. However, to date, $n$ -step DRL methods have only been demonstrated with relatively small $n$ . Rainbow [20] uses $n = 1,3,5$ to evaluate performance on 57 Atari 2600 games and reported $n = 3$ as the best-performing reward trajectory length. D4PG [6] uses $n = 1,5$ to evaluate a variety of continuous control tasks in DMC, and results show that $n = 5$ performs uniformly better than the others. However, 3 and 5 are relatively small for complex tasks, which usually have a maximum of 1000 steps in an episode.

There are potentially two reasons that prevent $n$ -step methods from using a larger length $n$ . One is that the longer the $n$ -step return, the larger the reward scale, which can cause saturation and inefficiency in learning for most DRL algorithms [10]. The second is that the longer the $n$ -step return, the smaller the discount factor $(\gamma^n)$ in the $n$ -step backup [15]. Based on previous studies [29], variations in the scale of the discount factor directly cause uncertainty in RL performance. New algorithmic ideas are needed for DRL algorithms to benefit from longer reward bootstrapping.

Reward estimation.
Reward estimation aims at reducing sample complexity and learning variance, as reward variances may be introduced by random seeds in training [9], by sensor noise in real-world applications [8], and by corrupted sparse rewards [13]. Several approaches have been proposed for reward estimation. Small backups [30], instead of directly using an actual reward, use multi-step imaginary rollouts as a predicted reward. Model-based policy search with a Bayesian neural network [31] finds the expected cost by averaging over multiple virtual rollouts. The $\lambda$ -prediction [32] uses an extra network to predict the reward, the training of which may introduce additional variance. Model-based value expansion [33] uses an imaginary reward from a model to estimate state value functions. To reduce the learning variance resulting from noisy rewards, the regression reward estimator [13] expects the estimated reward to reduce the variance propagated to the value function. The unbiased reward estimator [22] estimates a surrogate reward by learning a reward confusion matrix, which estimates the true reward from the noisy reward. However, all these estimation methods require extra models (either in the form of neural networks, environment transitions, or a reward noise model), which imposes overhead on learning and introduces estimation error. Additionally, how to effectively and efficiently integrate these reward estimates into DRL algorithms is not clear, or at least not straightforward.

LNSS. In contrast, our method does not introduce any new hyperparameters into learning, as we still use the same form of reward signals, only over a longer horizon of $N$ steps. The LNSS method can be piggybacked on any DRL algorithm directly. Compared to $n$ -step methods, by using our surrogate reward $r'$ to perform single-step updates, we can benefit from $n$ -step returns without changing the reward scale. This frees LNSS from a limited reward horizon to a much longer length $N$ .
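To make the reward-scale point concrete, here is a small numerical check (our own sketch, not code from the paper): with $\gamma = 0.99$ and a window of $n = 100$ unit rewards, the $n$ -step return of Equation (3) grows by the discounted-sum factor and its bootstrap term is discounted by $\gamma^{100}$, while the LNSS surrogate divides by the same factor and keeps the single-step discount $\gamma$.

```python
# Sketch (ours, not from the paper): compare the scale of an n-step return
# (Eq. 3) with the LNSS surrogate (Eq. 5-6) for unit rewards and gamma = 0.99.
gamma, n = 0.99, 100

def discounted_sum(gamma, n):
    """sum_{t=0}^{n-1} gamma^t: the scale of an n-step return of unit rewards."""
    return (1.0 - gamma**n) / (1.0 - gamma)

nstep_scale = discounted_sum(gamma, n)   # reward scale inflated ~63x
nstep_bootstrap = gamma**n               # bootstrap discount shrunk to ~0.37

# LNSS divides the same windowed sum by the discounted-sum factor (Eq. 5),
# so the surrogate of unit rewards stays exactly on the single-step scale,
# and the backup keeps the ordinary single-step discount gamma.
lnss_surrogate = discounted_sum(gamma, n) / discounted_sum(gamma, n)
```

This is exactly the sense in which LNSS "benefits from $n$ -step returns without changing the reward scale."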
Selecting DRL algorithms. First, note that few existing $n$ -step methods have been successfully applied to different DRL algorithms with consistent learning variance reduction. We selected DDPG, D4PG, and TD3 as base DRL algorithms, on which our LNSS is to be piggybacked, for the following considerations. Among all DMC-based benchmark results [23], the distributional method D4PG [6] outperforms other DRL algorithms such as DDPG [3] and TD3 [7]. On the other hand, TD3 and DDPG have relatively good performance in simple tasks such as walker but struggle to achieve good results in harder DMC tasks such as humanoid-walk and fish-swim. We thus use the three chosen DRL algorithms to systematically test the idea of LNSS piggybacked on promising DRL algorithms in complex tasks, even in noisy reward environments.

# 3 Background

Reinforcement Learning. We consider a reinforcement learning agent that interacts with its environment in discrete time. At each time step $k$ , the agent observes a state $s_k \in S$ , selects an action $a_k \in \mathcal{A}$ based on its policy $\pi : S \to \mathcal{A}$ , namely, $a_k = \pi(s_k)$ , and receives a scalar reward $r(s_k, a_k) \in \mathcal{R}$ (we use $r_k$ as shorthand notation).

Evaluation of a policy $\pi$ is performed using the expected return after taking an action $a_{k}$ in state $s_k$ and following the policy $\pi$ :

$$
Q^{\pi}\left(s_{k}, a_{k}\right) = \mathbb{E}\left[ R_{k} \mid s_{k}, a_{k} \right],
$$

$$
\text{where } R_{k} = \sum_{t = k}^{\infty} \gamma^{t - k} r_{t}, \tag{1}
$$

$$
s_{k} \sim p(\cdot \mid s_{k - 1}, a_{k - 1}),
$$

$$
a_{k} = \pi(s_{k}),
$$

with $0 < \gamma < 1$ . The two common approaches (single-step and $n$ -step) to update the $Q$ value for a policy $\pi$ are as follows.

Single-step RL Algorithms.
RL algorithms rely on the TD error, or residual of the Bellman equation, to update the state-action $Q$ value for a policy $\pi$ , as described below,

$$
Q^{\pi}\left(s_{k}, a_{k}\right) = r_{k} + \gamma Q^{\pi}\left(s_{k + 1}, a_{k + 1}\right). \tag{2}
$$

This equation provides the basis for single-step RL methods. It is also known as a backup operation since it transfers information from one step ahead back to the current step.

" $n$ -step" RL algorithms. Using $n$ -step rewards for faster reward propagation in RL has long been investigated [26, 14, 6, 20]. In $n$ -step methods, the value function $Q^{\pi}(s_k,a_k)$ update using an $n$ -step return is based on the following,

$$
Q^{\pi}\left(s_{k}, a_{k}\right) = \sum_{t = k}^{k + n - 1} \gamma^{t - k} r_{t} + \gamma^{n} Q^{\pi}\left(s_{k + n}, a_{k + n}\right). \tag{3}
$$

The $n$ -step return is expected to help agents learn more efficiently by affecting multiple state-action pairs within one update and gaining information over the $n$ -step horizon.

More details on specific modern DRL algorithms, such as the single-step methods (DDPG and TD3) and the $n$ -step methods (D4PG and PPO), can be found in Appendix D.

# 4 Long $N$ -step Surrogate Stage (LNSS) Reward

In this section, we introduce LNSS based on the infinite horizon discounted reward formulation of reinforcement learning. Given a reward trajectory of $N$ steps from time step $k$ , let $G(s_{k:k+N-1}, a_{k:k+N-1}) \in \mathbf{R}$ (we use $G_k$ as shorthand notation) denote the discounted $N$ -step reward, i.e.,

$$
G_{k} = \sum_{t = k}^{k + N - 1} \gamma^{t - k} r_{t}, \tag{4}
$$

where $r_t$ is the $t$ -th stage reward and $t$ runs from $k$ to $k + N - 1$ . In LNSS, we introduce $r_k'$ as a surrogate stage reward in place of $r_k$ in Equation (2).
To determine $r_k'$ , we treat it as a weighted average of the $N$ -step reward sequence, namely + +$$ +r _ {k} ^ {\prime} = \frac {\sum_ {t = k} ^ {k + N - 1} \gamma^ {t - k} r _ {t}}{\sum_ {n = 0} ^ {N - 1} \gamma^ {n}}. \tag {5} +$$ + +We then propose the surrogate stage reward $r_k'$ to be + +$$ +r _ {k} ^ {\prime} = G _ {k} \frac {\gamma - 1}{\gamma^ {N} - 1}. \tag {6} +$$ + +This surrogate stage reward $r_k'$ as formulated in Equation (6) relies on a discounted reward of an $N$ -step horizon, from time step $k$ to step $(k + N - 1)$ , from the stored experiences into a temporary replay buffer $\mathbb{D}'$ . For a training episode of $T$ steps $[0, 1, 2\ldots, T]$ , the $\mathbb{D}'$ is a moving window of size $N$ from the initial state $s_0$ until the terminal state $s_T$ . As a result, when there is a sufficient number (i.e., $N$ ) of reward samples, LNSS computes the surrogate reward $r_k'$ from below, + +$$ +r _ {k} ^ {\prime} = \frac {\gamma - 1}{\gamma^ {N} - 1} \sum_ {t = k} ^ {k + N - 1} \gamma^ {t - k} r _ {t}. \tag {7} +$$ + +Note that, when the reward is estimated at the beginning or toward the end of a long trial, less than $N$ reward samples are available for estimating $r_{k}^{\prime}$ . A simple adjustment is given in Appendix C. + +Once $r_k'$ is obtained, $r_k'$ and state action pairs $(s_k, a_k, s_{k+1})$ will append as a new transition $(s_k, a_k, r_k', s_{k+1})$ stored into the memory buffer $\mathbb{D}$ . + +Note that many DRL algorithms [4, 20, 6] use distributed learning procedure to accelerate experience sample collection. We use the same technique to speed up sampling experiences. Then a DRL algorithm is ready to update the $Q$ value and the respective policy based on mini-batch data from the memory buffer. 
In general form, we have

$$
Q_{i + 1}\left(s_{k}, a_{k}\right) = r_{k}^{\prime} + \gamma Q_{i}\left(s_{k + 1}, \pi_{i}\left(s_{k + 1}\right)\right),
$$

$$
\pi_{i}\left(s_{k}\right) = \arg \max_{a_{k}} Q_{i}\left(s_{k}, a_{k}\right), \tag{8}
$$

where $i$ is the iteration number. Putting the above two equations together, we have

$$
Q_{i + 1}\left(s_{k}, a_{k}\right) = r_{k}^{\prime} + \gamma \max_{a_{k + 1}} Q_{i}\left(s_{k + 1}, a_{k + 1}\right). \tag{9}
$$

Remark 1. 1) Different from $n$ -step methods, in our LNSS, $N$ is the number of steps for accumulating rewards in Equation (6). We still perform a single-step update as in Equation (8). This allows us to prevent the steep discount $\gamma^n$ in the $n$ -step backup when using longer $n$ -step returns. As a result, LNSS can effectively use large $N$ at a scale of 100, while D4PG [6] uses $n$ at a scale of 5 steps, and the same holds for Rainbow [20].

2) LNSS can also be effective in a sparse reward environment. With a longer reward trajectory of $N$ future steps, LNSS continuously provides a reward estimate as feedback to the agent by assigning credit progressively backward from the time of achieving the desired states. That is to say, LNSS
This prevents the agent from getting trapped from reaching its intended goal and thus being stuck in an undesirable set of states due to a lack of feedback. + +# 5 Variance Analysis + +We now analyze the behavior of an actor-critic RL and our LNSS actor-critic RL. We consider the infinite horizon discounted reward formulation of RL (with $0 < \gamma < 1$ ). Specifically, we show that the upper bound on the variance in $Q$ value due to LNSS differs by an exponential factor from that of a single step actor-critic RL (AC). As this upper bound reduces exponentially as $N$ increases, it suggests significant variance reduction by using LNSS from using single step reward. + +We first represent the $Q$ values using single step reward $r_k$ in Equation (2) and using surrogate reward $r_k'$ from LNSS in Equation (8), respectively as follows, + +$$ +\operatorname {V a r} \left[ Q _ {i + 1} \left(s _ {k}, a _ {k}\right) \right] = \operatorname {V a r} \left[ r _ {k} \right] + \operatorname {V a r} \left[ \gamma Q _ {i} \left(s _ {k + 1}, a _ {k + 1}\right) \right] + 2 \operatorname {C o v} \left[ r _ {k}, \gamma Q _ {i} \left(s _ {k + 1}, a _ {k + 1}\right) \right]. \tag {10} +$$ + +$$ +\operatorname {V a r} \left[ \mathbb {Q} _ {i + 1} \left(s _ {k}, a _ {k}\right) \right] = \operatorname {V a r} \left[ r _ {k} ^ {\prime} \right] + \operatorname {V a r} \left[ \gamma \mathbb {Q} _ {i} \left(s _ {k + 1}, a _ {k + 1}\right) \right] + 2 \operatorname {C o v} \left[ r _ {k} ^ {\prime}, \gamma \mathbb {Q} _ {i} \left(s _ {k + 1}, a _ {k + 1}\right) \right]. \tag {11} +$$ + +Lemma 1. Assume $\{r_k\}$ is IID and drawn from the memory buffer $\mathbb{D}$ . Let $Q_{i}(s_{k + 1},a_{k + 1})$ in Equation (8) be the $i$ -th approximated return to solve Equation (1). We then have the following, + +$$ +\operatorname {C o v} \left[ r _ {k}, r _ {j \neq k} \right] = 0, \tag {12} +$$ + +$$ +\operatorname {C o v} \left[ r _ {k}, Q _ {i} \left(s _ {k + 1}, a _ {k + 1}\right) \right] = 0. 
\tag {13} +$$ + +Theorem 1. Consider the variances of two $Q$ value sequences, denoted as $Q_{i}$ and $\mathbb{Q}_i$ , in Equation (10) and Equation (11), which are obtained respectively from a single step method and an LNSS method. + +Additionally, assume that $Q_0 = \operatorname{Var}[Q_0] = 0$ and $\mathbb{Q}_0 = \operatorname{Var}[\mathbb{Q}_0] = 0$ . Let the IID reward $\{r_k\}$ and $\{r_k'\}$ be drawn from the memory buffer $\mathbb{D}$ . Assume the variance of $\{r_k\}$ is upper bounded by a finite positive number $\mathbb{B}$ , i.e., $\operatorname{Var}[r_k] \leq \mathbb{B}$ . Further define a constant $\psi$ as, + +$$ +\psi = \left(\frac {\gamma - 1}{\gamma^ {N} - 1}\right) ^ {2} \left(\frac {\gamma^ {2 N} - 1}{\gamma^ {2} - 1}\right). \tag {14} +$$ + +Then the upper bounds of the variances of the two $Q$ value sequences, $\operatorname{Var}[Q_{i+1}]$ and $\operatorname{Var}[\mathbb{Q}_{i+1}]$ , are respectively described below, + +![](images/265ce120b07c3be1d4598f4a21a54fbf94f32bc24fd9c11176b07f865d4163f1.jpg) +Figure 1: Variance discount factor $\psi$ in Equation (14). + +$$ +\operatorname {V a r} \left[ Q _ {i + 1} \left(s _ {k}, a _ {k}\right) \right] \leq \sum_ {t = 1} ^ {i + 1} \left(\gamma^ {t - 1}\right) ^ {2} \mathbb {B}, \tag {15} +$$ + +$$ +\operatorname {V a r} \left[ \mathbb {Q} _ {i + 1} \left(s _ {k}, a _ {k}\right) \right] \leq \psi \sum_ {t = 1} ^ {i + 1} \left(\gamma^ {t - 1}\right) ^ {2} \mathbb {B}. \tag {16} +$$ + +Proof. Both proofs of Lemma 1 and Theorem 1 are given in Appendix A. + +Remark 2. We now provide some insights based on the variance analysis above. + +1) Given $\psi$ in Equation (14), i.e., $\psi = (\frac{\gamma - 1}{\gamma^N - 1})^2 (\frac{\gamma^{2N} - 1}{\gamma^2 - 1})$ , it follows that for large $N$ , $\psi = (\gamma - 1)^2 (\frac{-1}{\gamma^2 - 1}) = \frac{1 - \gamma}{1 + \gamma}$ . 
2) Furthermore, by the identities $\gamma^2 - 1 = (\gamma - 1)(\gamma + 1)$ and $\gamma^{2N} - 1 = (\gamma^N - 1)(\gamma^N + 1)$ , we have that $\psi = \left(\frac{\gamma - 1}{\gamma + 1}\right)\left(1 + \frac{2}{\gamma^N - 1}\right)$ . Therefore, $\psi$ decreases exponentially in $N$ (refer to Figure 1).
3) From inspecting Equations (15) and (16), we can see a clear advantage of using a long " $N$ -step" reward in LNSS over the typical reward $r_k$ .

# 6 Experiments and Results

In this section, we first show how LNSS can benefit learning by using a simple Maze environment. To provide insight into the effect of using LNSS, we compare the performance of original $Q$ -learning with a variant in which the stage reward is replaced by the LNSS reward.

We then provide a comprehensive evaluation of our proposed LNSS piggybacked on three promising DRL algorithms (DDPG, D4PG, TD3) by measuring their performance on several benchmarks in DMC and GYM (for results on GYM, please refer to Appendix E.5). Details of the implementation, training, and evaluation procedures are provided in Appendix B. In reporting evaluation results below, we use the following short-form descriptions.

1) "Base": the original DRL algorithms (TD3, D4PG, DDPG).
2) "LNSS": LNSS piggybacked on the respective DRL algorithms, where $N = 100$ unless otherwise specified.
3) "n5": the DRL algorithms with an " $n$ -step" implementation as in Equation (3) with $n = 5$ .

Our evaluations aim to quantitatively address the following questions:
Q1. How is LNSS different from the original reward, and how does it help learning?
Q2. How does LNSS improve the Base method?
Q3. How does LNSS compare with the previous $n$ -step method ( $n5$ )?
Q4. Is LNSS robust enough to compensate for stochastic rewards?
Q5. Does LNSS improve Base methods under a sparse reward setting?
Q6. How does LNSS reduce the variance of estimated $Q$ values?
Q7. How do different values of $N$ in LNSS affect performance?
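The variance discount factor $\psi$ of Equation (14), and its factored form from Remark 2, are easy to check numerically (a minimal sketch; the function names are ours):

```python
def psi(gamma, N):
    """Variance discount factor of Eq. (14)."""
    return ((gamma - 1) / (gamma**N - 1))**2 * ((gamma**(2 * N) - 1) / (gamma**2 - 1))

def psi_factored(gamma, N):
    """Equivalent form from Remark 2: ((gamma-1)/(gamma+1)) * (1 + 2/(gamma^N - 1))."""
    return ((gamma - 1) / (gamma + 1)) * (1 + 2 / (gamma**N - 1))
```

With $\gamma = 0.99$ , $\psi(\gamma, 1) = 1$ (so $N = 1$ recovers the single-step bound), and $\psi$ decreases with $N$ toward $(1 - \gamma)/(1 + \gamma) \approx 0.005$ , consistent with Figure 1.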
![](images/44a1c951625feecbcc5a9f29de44b6b302a44855cc7a4ecd1145638f184f8aa3.jpg)
Figure 2: Evaluation of LNSS piggybacked on $Q$ -learning in the simple Maze (right panel). The evaluations are averaged over 10 seeds. The x-axis is the number of episodes (top-left panel) and the state number (bottom-left panel) in the plots for episode reward and stage reward, respectively. Note that the stage reward for LNSS is calculated based on Equation (6).

# 6.1 The Simple Maze Problem

The right panel of Figure 2 shows the Maze environment, where $S_0$ is the initial state, $S_5$ is the goal state, and black blocks are walls. The unique optimal policy is {right, right, down, down, left}. We consider two problem formulations:

1) Without Penalty: the agent only receives a reward of 1 upon reaching goal state $S_{5}$ . For all other states, the agent receives 0.
2) With Penalty: the agent only receives a reward of 1 upon reaching goal state $S_{5}$ . For all other states, the agent receives -0.1 per stage as a penalty.

Q1 LNSS incorporates future rewards into the surrogate stage reward to guide an RL agent in reaching the goal state. As the lower-left panels of Figure 2 show, the stage reward from LNSS is more
informative, as LNSS provides sequences of gradual and smoother rewards to be used in $Q$ -learning. As a result, from the upper-left panels, this informative guidance from LNSS significantly enhances performance in terms of episode rewards. The effect is even more pronounced when dealing with penalties.

From Table 1, with the original reward, the $Q$ value is the sum of discounted expected rewards as in Equation (4). The farther a state is from the goal, the smaller its $Q$ values are, as a result of discounting over the horizon. When learning in the environment with penalty rewards, the $Q$ values in states $S_0, S_1, S_2$ are even farther from the goal-state value. These $Q$ values cannot be effectively updated due to a lack of informative feedback, causing the agent to get stuck and cease learning. In comparison, with the LNSS reward, goal-related information can be propagated back from the goal state to the initial state. This guidance plays a crucial role in assisting agents to overcome situations where they might otherwise get stuck.

Table 1: $Q$ tables of $Q$ -learning using the original reward and the LNSS reward. The $Q$ values reported are averages across 10 different random seeds.

No Penalty Case:

| State | Up (Original) | Up (LNSS) | Down (Original) | Down (LNSS) | Right (Original) | Right (LNSS) | Left (Original) | Left (LNSS) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 0 | 0.47 | 1.33 | 0 | 0 |
| 1 | 0 | 0 | 0 | 0 | 0.62 | 1.45 | 0 | 0 |
| 2 | 0 | 0 | 0.76 | 1.48 | 0 | 0 | 0 | 0 |
| 3 | 0 | 0 | 0.89 | 1.36 | 0 | 0 | 0 | 0 |
| 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0.99 | 0.99 |

With Penalty Case:

| State | Up (Original) | Up (LNSS) | Down (Original) | Down (LNSS) | Right (Original) | Right (LNSS) | Left (Original) | Left (LNSS) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | -0.09 | -0.05 | -0.09 | -0.05 | -0.09 | 1.1 | -0.09 | -0.05 |
| 1 | -0.07 | -0.02 | 0 | -0.01 | -0.1 | 1.26 | -0.13 | -0.03 |
| 2 | -0.055 | 0 | -0.055 | 1.36 | -0.054 | 0 | -0.055 | -0.01 |
| 3 | -0.03 | 0 | 0.04 | 1.3 | -0.03 | 0.03 | 0 | 0 |
| 4 | -0.01 | 0.03 | -0 | 0.06 | 0 | 0.07 | 0.38 | 0.997 |
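The behavior summarized in Table 1 can be reproduced in miniature. The sketch below (ours, not the authors' code) replaces the maze with a simplified 6-state corridor whose optimal episode takes 5 steps: with the sparse original reward, one pass of tabular TD updates leaves all but the last state untouched, while the LNSS surrogate ( $N = 5$ ) gives every visited state a positive update from the first episode.

```python
# Sketch (ours): a simplified 6-state corridor stands in for the maze.
# The optimal episode takes T = 5 moves; the original reward is sparse
# (1 at the goal, 0 elsewhere -- the "without penalty" case).
gamma, alpha, N, T = 0.95, 0.5, 5, 5
orig = [0.0] * (T - 1) + [1.0]

def lnss(window, gamma):
    """LNSS surrogate reward (Eq. 6) for one window of future rewards."""
    n = len(window)
    G = sum(gamma**t * r for t, r in enumerate(window))
    return G * (gamma - 1.0) / (gamma**n - 1.0)

surr = [lnss(orig[k:k + N], gamma) for k in range(T)]   # dense surrogate rewards

def one_episode_td(rewards):
    """One forward pass of tabular TD updates along the optimal trajectory."""
    q = [0.0] * (T + 1)          # Q of the optimal action at states 0..T
    for k in range(T):
        target = rewards[k] + gamma * q[k + 1]
        q[k] += alpha * (target - q[k])
    return q

q_orig = one_episode_td(orig)    # only the state next to the goal learns
q_surr = one_episode_td(surr)    # every visited state gets a positive update
```

The surrogate rewards increase monotonically toward the goal, which is exactly the "credit assigned progressively backward" effect discussed above.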
|  | Acrobot SwingUp |  |  | Humanoid Walk |  |  | Fish Swim |  |  | Finger TurnHard |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Algorithm | Success [%] | Avg. Rwd [μ±2σ] | Rank [%] | Success [%] | Avg. Rwd [μ±2σ] | Rank [%] | Success [%] | Avg. Rwd [μ±2σ] | Rank [%] | Success [%] | Avg. Rwd [μ±2σ] | Rank [%] |
| **Noise Level 0% (0.0)** |  |  |  |  |  |  |  |  |  |  |  |  |
| DDPG-LNSS | 100 | 304.6 ± 82.9 | -25.1 | 100 | 250.9 ± 16.5 | -34.6 | 100 | 645.1 ± 86.9 | -12.7 | 100 | 943.3 ± 32.9 | 0 |
| DDPG-Base | 100 | 187.9 ± 68.5 | -53.8 | 0 | 1.3 ± 0.4 | -99.6 | 100 | 272.9 ± 27.3 | -63.1 | 80 | 301.8 ± 160.3 | -68.0 |
| DDPG-n5 | 100 | 195.2 ± 67.6 | -51.9 | 0 | 2.2 ± 1.5 | -99.4 | 100 | 150.5 ± 33.7 | -79.6 | 100 | 459.8 ± 126.3 | -51.3 |
| TD3-LNSS | 100 | 114.8 ± 33.8 | -71.8 | 100 | 307.9 ± 16.9 | -19.8 | 100 | 676.8 ± 44.2 | -8.4 | 100 | 940.1 ± 29.6 | -0.3 |
| TD3-Base | 0 | 5.2 ± 4.1 | -98.7 | 60 | 184.2 ± 178.8 | -52.1 | 100 | 483.1 ± 127.6 | -34.6 | 90 | 251.8 ± 101.4 | -73.3 |
| TD3-n5 | 70 | 61.7 ± 45.7 | -84.8 | 0 | 1.01 ± 0.41 | -99.6 | 100 | 731.7 ± 132.3 | -1.1 | 90 | 394.2 ± 164.4 | -58.2 |
| D4PG-LNSS | 100 | 406.3 ± 13.1 | 0 | 100 | 383.9 ± 37.1 | 0 | 100 | 738.7 ± 59.8 | 0 | 100 | 811.5 ± 59.9 | -14.0 |
| D4PG-Base | 100 | 133.5 ± 16.2 | -67.1 | 100 | 277.7 ± 79.8 | -27.7 | 100 | 585.7 ± 36.1 | -20.7 | 100 | 336.9 ± 107.8 | -64.3 |
| D4PG-n5 | 100 | 310.1 ± 49.5 | -23.7 | 100 | 360.3 ± 50.4 | -6.2 | 100 | 683.5 ± 95.6 | -7.5 | 90 | 413.7 ± 243.8 | -56.1 |
| **Noise Level 1% (0.01)** |  |  |  |  |  |  |  |  |  |  |  |  |
| DDPG-LNSS | 100 | 243.5 ± 62.3 | -40.1 | 100 | 255.1 ± 22.6 | -33.6 | 100 | 700.4 ± 107.6 | -5.2 | 100 | 934.3 ± 30.58 | -0.1 |
| DDPG-Base | 100 | 188.2 ± 58.7 | -53.7 | 0 | 0.99 ± 0.25 | -99.8 | 100 | 340.1 ± 135.4 | -53.9 | 100 | 302.5 ± 130.4 | -67.9 |
| DDPG-n5 | 100 | 159.6 ± 25.8 | -60.7 | 0 | 1.15 ± 0.43 | -99.7 | 100 | 149.4 ± 59.1 | -79.8 | 100 | 416.9 ± 168.4 | -55.8 |
| TD3-LNSS | 100 | 109.3 ± 20.8 | -73.1 | 100 | 197.6 ± 45.1 | -48.5 | 100 | 700.1 ± 84.5 | -5.2 | 100 | 912.8 ± 92.2 | -3.2 |
| TD3-Base | 0 | 2.4 ± 1.53 | -99.4 | 0 | 2.05 ± 1.43 | -99.5 | 100 | 506.9 ± 111.5 | -31.4 | 60 | 193.4 ± 142.2 | -79.5 |
| TD3-n5 | 70 | 69.6 ± 23.5 | -82.9 | 0 | 1.55 ± 0.8 | -99.6 | 100 | 688.7 ± 132.3 | -6.8 | 90 | 384.1 ± 232.4 | -59.3 |
| D4PG-LNSS | 100 | 330.3 ± 60.6 | -18.7 | 100 | 247.9 ± 42.3 | -35.4 | 100 | 688.3 ± 55.3 | -6.8 | 100 | 801.9 ± 80.2 | -15.0 |
| D4PG-Base | 100 | 136.5 ± 30.2 | -66.4 | 100 | 210.5 ± 44 | -45.2 | 100 | 650.2 ± 89.3 | -11.9 | 100 | 300.5 ± 143.2 | -68.1 |
| D4PG-n5 | 100 | 262.9 ± 67.1 | -35.3 | 0 | 10.9 ± 2.3 | -97.2 | 100 | 578.7 ± 78.9 | -21.7 | 90 | 383.7 ± 293.8 | -59.3 |
| **Noise Level 10% (0.1)** |  |  |  |  |  |  |  |  |  |  |  |  |
| DDPG-LNSS | 100 | 232.6 ± 61.5 | -42.8 | 100 | 235.4 ± 25.7 | -38.7 | 100 | 699.5 ± 95.3 | -5.3 | 100 | 940.2 ± 54.1 | -0.3 |
| DDPG-Base | 100 | 170.7 ± 42.1 | -58.0 | 0 | 0.76 ± 0.42 | -99.8 | 100 | 330.42 ± 144.2 | -55.3 | 100 | 315.2 ± 205.4 | -66.6 |
| DDPG-n5 | 100 | 139.1 ± 28.7 | -65.8 | 0 | 1.3 ± 0.9 | -99.7 | 100 | 116.5 ± 16.8 | -84.2 | 100 | 456.5 ± 219.7 | -51.6 |
| TD3-LNSS | 100 | 80.7 ± 15.73 | -80.1 | 100 | 200.2 ± 117.6 | -47.3 | 100 | 696.5 ± 174.1 | -5.7 | 100 | 888.54 ± 92 | -5.8 |
| TD3-Base | 0 | 2.19 ± 1.35 | -99.5 | 0 | 1.5 ± 0.33 | -99.7 | 100 | 454.9 ± 272.9 | -38.4 | 60 | 190.1 ± 215.7 | -79.9 |
| TD3-n5 | 0 | 24.9 ± 22.7 | -93.9 | 0 | 1.1 ± 0.28 | -99.7 | 100 | 654.7 ± 130.2 | -11.4 | 80 | 338.1 ± 216.4 | -64.2 |
| D4PG-LNSS | 100 | 331.3 ± 49.5 | -18.5 | 100 | 240.8 ± 46.3 | -37.3 | 100 | 681.4 ± 212.6 | -7.8 | 100 | 803.9 ± 83.3 | -14.8 |
| D4PG-Base | 100 | 130.1 ± 28.9 | -67.9 | 0 | 1.42 ± 0.7 | -99.7 | 100 | 645.3 ± 245.2 | -12.6 | 100 | 302.5 ± 153.2 | -67.9 |
| D4PG-n5 | 100 | 251.4 ± 56.4 | -38.1 | 0 | 0.89 ± 0.6 | -99.8 | 100 | 563.4 ± 214.7 | -23.7 | 90 | 355.1 ± 285.5 | -62.4 |
Table 2: Systematic evaluation of the LNSS-augmented Base algorithms, compared with the Base and $n5$ -augmented Base algorithms. "Rank" (%) is the "percent of reward difference"; the closer it is to 0, the better.

# 6.2 Main Results

The evaluations in Table 2 use the default setups of Acrobot SwingUp, Humanoid Walk, Fish Swim, and Finger TurnHard in DMC. In Table 2, "Success" is shorthand for success rate, "Avg. Rwd" stands for average reward, and "Rank" (%) is the "percent of reward difference", computed as (the average reward of the evaluated algorithm divided by that of the top-performing algorithm, minus 1); the closer it is to 0, the better. Note that, in computing the success rate, only trials that achieve a reward of at least 50 are counted as successes.

The results are obtained at different noise levels, based on the last 50 evaluations of 10 different random seeds (the same seeds for all compared algorithms). The best average reward (Avg. Rwd) in each category is boldfaced.

Q2 LNSS improves Base methods. The learning curves for the four continuous dense-reward benchmark environments are shown in Figure 3. Quantitative comparisons to the baselines are summarized in the first section of Table 2 (noise = 0). Overall, the LNSS variants (solid lines) outperform their respective Base methods (dashed lines) in terms of average reward (Avg. Rwd), learning speed, and success rate for all three DRL algorithms. Among these measures, the success (learning) rate addresses the random-initialization challenge caused by random seeds [10]. LNSS helps each Base method achieve a $100\%$ learning success rate, whereas the DDPG and TD3 Base methods struggle with Humanoid Walk and Finger TurnHard. Moreover, according to the "Rank" measure in Table 2, LNSS lifts the performance of almost all Base methods close to that of the top-performing algorithm in each category across all tested environments.
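The "Rank" measure used throughout Table 2 is a simple ratio against the top performer; a minimal sketch (the example values are taken from the noise-free Acrobot SwingUp column):

```python
def rank_percent(avg_reward, best_avg_reward):
    """'Percent of reward difference': (evaluated / top performer - 1) * 100.
    0 means matching the best algorithm; more negative means worse."""
    return (avg_reward / best_avg_reward - 1.0) * 100.0

# DDPG-LNSS (304.6) vs. the top performer (D4PG-LNSS, 406.3):
r = rank_percent(304.6, 406.3)
assert abs(r - (-25.0)) < 0.1      # Table 2 lists -25.1 (rounded upstream)
assert rank_percent(406.3, 406.3) == 0.0
```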
Detailed plots of the pair-wise comparisons for the noise-free condition are provided in appendix E.1.

![](images/f0cfb1eaaadc2bedee92c2721705c9774d5b9b89b6e970d4520efa26d94cea91.jpg)
Figure 3: Systematic evaluation of LNSS piggybacked on three DRL algorithms (DDPG, D4PG, TD3) in DMC environments without reward noise. The shaded regions represent the $95\%$ confidence range of the evaluations over 10 seeds. The x-axis is the number of steps. Detailed pair-wise comparison plots are provided in appendix E.1.

Q3 LNSS outperforms previous "n-step" methods $(n5)$ . The main improvements of LNSS over $n5$ are reflected in success rate, average reward, and "Rank". These improvements can be viewed quantitatively in the first section of Table 2 (noise = 0). LNSS consistently enables the Base methods to achieve a $100\%$ learning success rate, whereas the $n5$ method can even cause learning to fail, as with TD3 in Humanoid Walk and D4PG in Finger TurnHard. Additionally, according to "Rank" in Table 2, LNSS brings all cases near the performance of the best algorithm. The only exception is TD3- $n5$ , which achieves a slightly better "Rank" in Fish Swim, and only in the noise-free condition; all other $n5$ methods fall below LNSS, or even below their Base method, as with TD3- $n5$ in Humanoid Walk and DDPG- $n5$ in Fish Swim. We also observe that LNSS improves the average reward and lowers the standard deviation for all Base algorithms relative to the $n5$ method.

Q4 LNSS is robust under noisy rewards. We repeated the evaluations of Table 2 in the DMC experiments under random uniform reward noise with magnitudes at $1\%$ and $10\%$ of the maximum DMC step reward (1).
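A quick way to see why additive reward noise hits $n$ -step targets harder than single-step targets is to simulate the noise contribution of a discounted $n$ -step sum. The sketch below is purely illustrative (it is not the paper's Equation (3)); the uniform noise scale mimics the percent-of-max-reward noise used in the experiments.

```python
import random

def nstep_noise_variance(n, gamma=0.99, noise_scale=0.1, trials=20000, seed=0):
    """Empirical variance of the noise term sum_{k<n} gamma**k * eps_k that
    i.i.d. uniform reward noise eps_k ~ U(-noise_scale, noise_scale)
    contributes to an n-step discounted return."""
    rng = random.Random(seed)
    samples = [
        sum((gamma ** k) * rng.uniform(-noise_scale, noise_scale) for k in range(n))
        for _ in range(trials)
    ]
    mean = sum(samples) / trials
    return sum((s - mean) ** 2 for s in samples) / trials

v1 = nstep_noise_variance(n=1)   # single-step target
v5 = nstep_noise_variance(n=5)   # 5-step return, as in the n5 baselines
assert v5 > 4 * v1  # noise variance accumulates with the number of summed steps
```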
As the results in appendix E.2 (noise level $1\%$ ), appendix E.3 (noise level $10\%$ ), and the last two sections of Table 2 (noise $= 1\%$ and $10\%$ ) show, at larger noise levels all algorithms degrade, with lower mean average reward, larger standard deviation, and more negative "Rank". However, LNSS consistently outperformed the Base and $n5$ methods on all measures for all three DRL algorithms. It is important to point out that the LNSS methods robustly maintain a $100\%$ success rate in all environments, whereas both the Base and $n5$ methods suffer greatly from the increased noise level, for example in the Humanoid Walk and Finger TurnHard environments. As Equation (3) shows, the performance degradation of the $n5$ methods is due to noise accumulated when collecting the discounted sum of returns. This accumulated noise induces a much larger approximation variance and results in degraded performance.

Q5 LNSS is effective in environments with sparse rewards. For the setup of sparse rewards in the Cheetah environments, refer to appendix B. As the Cheetah Sparse and Cartpole Sparse results in Figure 3 show, LNSS enables all DRL algorithms to successfully learn all three sparse-reward tasks, with converged episode rewards greater than 800. However, due to the limited feedback from sparse rewards, all Base methods struggle to learn, or even fail to learn (e.g., Cheetah Run). Moreover, it is important to point out that LNSS improves the learning speed over the Base methods. This is expected, as a long future reward trajectory of $N$ steps provides continuous reward feedback to the agent by assigning credit progressively backward from the time of achieving the desired states. In contrast, a single-step method does not provide any guidance or feedback to the learning agent until the goal is reached.

Q6 LNSS helps reduce the variance of $Q$ value estimation.
For this evaluation, we define the coefficient of variation of the $Q$ value (in percentage) as $cv(Q) = \frac{std(Q)}{Q}\%$ . In Figure 4, we report $cv(Q)$ every 2e5 steps; as an example, TD3-LNSS outperforms TD3-Base and TD3-n5 under $1\%$ reward noise. Similar results are obtained for the DDPG and D4PG classes of algorithms and are shown in appendix E.4, along with the respective results at other noise levels. The results (Figure 12 to Figure 20 in appendix E.4) also show that each LNSS-augmented DRL algorithm improves $cv(Q)$ to near the best performance among all DRL algorithms. Figure 4 provides empirical validation of our theoretical bound on the variance of the $Q$ value: with a large LNSS parameter $N$ , or at a late stage of learning, the $Q$ value variance becomes smaller, as the upper bound on the $Q$ value variance decreases significantly according to Equation (14) in Theorem 1.

![](images/19fdb3f64ed5168691718fb67336e7b9bf331540171747a499386c0ea5623e59.jpg)
Figure 4: The $cv(Q)$ (coefficient of variation of the $Q$ value; the lower, the better) of TD3 in DMC with $1\%$ noise. Results are averaged over 10 seeds. The x-axis is the number of steps. Additional and respective results for DDPG and D4PG can be found in appendix E.4.

Q7 Effects of different length $N$ in LNSS. In Figure 5, we show the effect of different choices of $N$ ( $N = 5, 10, 20, 50, 100$ ) in our proposed LNSS on top of TD3. (Similar results hold for D4PG and DDPG; see appendix E.7.) The comparisons use the same set of hyperparameters except for the varying $N$ . Figure 5 depicts performance curves for the DMC tasks. LNSS with $N = 50$ and $N = 100$ outperforms LNSS with $N = 5$ and $10$ in terms of final reward and learning speed. The results corroborate what we expect from our theoretical analysis (please refer to Figure 1 in the paper).
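Reading the denominator of the $cv(Q)$ definition above as the mean $Q$ estimate, the metric is straightforward to compute; a minimal sketch with hypothetical $Q$ samples:

```python
def cv_q(q_values):
    """cv(Q) in percent: std(Q) / mean(Q) * 100 (the lower, the better)."""
    n = len(q_values)
    mean = sum(q_values) / n
    var = sum((q - mean) ** 2 for q in q_values) / n
    return 100.0 * (var ** 0.5) / mean

# Two hypothetical sets of Q estimates with the same mean but different spread:
assert cv_q([100.0, 100.0, 100.0]) == 0.0
assert cv_q([90.0, 100.0, 110.0]) < cv_q([50.0, 100.0, 150.0])
```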
We can see a clear benefit of using a large $N$ ( $N = 50$ and 100) over a small one ( $N = 5$ and 10).

Additionally, in Figure 6, we show how $cv(Q)$ (the coefficient of variation of $Q$ ) varies with different $N$ . Among the three $N$ values, $N = 100$ outperforms the others. We also observe that in more complex environments, a longer $N$ is favorable. Notice that for the Cartpole Sparse environment there is little difference among the $N$ values, whereas in Finger TurnHard and Humanoid Walk the $cv(Q)$ values of the Base cases and the $N = 5$ cases increase dramatically while those for $N = 50$ and $N = 100$ change only slightly. Figure 6 provides empirical validation of our theoretical analysis of the variance discount factor $\psi$ in Figure 1: the longer the $N$ , the greater the reduction in the variance of the $Q$ value estimate.

![](images/69177db94b7cc538183eb01c727c55100fd05f75b46aa1b2a11a75867e72.jpg)
Figure 5: Episode rewards for the 3 tasks in DMC by TD3-LNSS with different $N$ ( $N = 5,10,20,50,100$ ) in Equation (6). The results are averaged over 10 seeds. The x-axis is the number of steps. Respective results for DDPG and D4PG are provided in appendix E.7.

![](images/51214dea68a144318e49bc66b71286dac5da58bf74ffacc0a4ee05b180cb89fe.jpg)
Figure 6: $cv(Q)$ (coefficient of variation of the $Q$ value; the lower, the better) for the 3 tasks in DMC by LNSS with different $N$ ( $N = 5, 50, 100$ ) in Equation (6).

![](images/db2aa845479d04dd25504914790f19521389c9726d438a48c7e70cf6478c1748.jpg)

![](images/55bf906762edb4d0c9a35aa57192a847c2ffc954560d76bbe8b550e1afdf694c.jpg)

# 6.3 Limitation of This Study

Here we introduce a new multi-step method, LNSS, which makes effective use of longer reward trajectories of future steps than those used in existing methods to estimate the $Q$ value, with the aim of reducing variance in learning.
By piggybacking LNSS on top of TD3, DDPG, and D4PG, we demonstrate improved performance in terms of final reward, learning speed, and variance reduction in the $Q$ value. However, there are two limitations. 1) LNSS requires non-negative reward values over a long trajectory; a large negative reward may wash out the significance of good positive rewards. This limitation can easily be fixed by elevating the rewards stored in the temporary buffer $\mathbb{D}'$ by a positive constant and lower-bounding them by 0. 2) Early termination is another limitation in implementation, which mostly affects LNSS in the GYM environments (Hopper, Humanoid, Walker2d). Unlike DMC tasks, GYM allows an environment to terminate before reaching the maximum number of time steps. As such, in the early training stage only 10 or 20 steps may be available to compute LNSS, which diminishes the power of a large $N$ (such as 50 or 100). Resolving this limitation would require re-programming the task settings in the original GYM environments.

# 7 Discussion and Conclusion

1) In this work, we introduce a novel " $N$ -step" method, LNSS, which utilizes longer reward trajectories of future steps than those used in existing methods to estimate the $Q$ value, with the aim of reducing variance in learning. It is easy to implement, as shown in the paper, and can easily be piggybacked on policy-gradient DRL algorithms. It consistently outperforms the respective Base and short $n$ -step algorithms on benchmark tasks in terms of performance score, learning success rate, and convergence speed. 2) We provide a theoretical analysis showing that LNSS exponentially reduces the upper bound of the variance in the $Q$ value relative to the respective single-step methods. 3) We empirically demonstrate the performance of LNSS piggybacked on TD3, DDPG, and D4PG in a variety of benchmark environments in which existing methods have struggled to obtain good results.
4) We show that LNSS provides consistent performance improvements under various reward-noise settings and for sparse rewards. Our results suggest that LNSS is a promising tool for improving learning speed and learning performance score while reducing learning variance. Further investigation of how to maximize the benefit by selecting an optimal reward length $N$ , and of how to take advantage of a different $n$ in the $Q$ value update, are exciting questions to be addressed in future work.

# 8 Acknowledgment

This research was supported in part under NSF grants #1808752 and #2211740.

# References

[1] Wentao Liu, Junmin Zhong, Ruofan Wu, Bretta L Fylstra, Jennie Si, and He Helen Huang. Inferring human-robot performance objectives during locomotion using inverse reinforcement learning and inverse optimal control. IEEE Robotics and Automation Letters, 7(2):2549–2556, 2022.
[2] Ruofan Wu, Junmin Zhong, Brent Wallace, Xiang Gao, He Huang, and Jennie Si. Human-robotic prosthesis as collaborating agents for symmetrical walking. Advances in Neural Information Processing Systems, 35:27306–27320, 2022.
[3] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[4] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[5] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, pages 1861–1870. PMLR, 2018.
[6] Gabriel Barth-Maron, Matthew W Hoffman, David Budden, Will Dabney, Dan Horgan, Dhruva Tb, Alistair Muldal, Nicolas Heess, and Timothy Lillicrap. Distributed distributional deterministic policy gradients. arXiv preprint arXiv:1804.08617, 2018.
+[7] Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International conference on machine learning, pages 1587-1596. PMLR, 2018. +[8] Gabriel Dulac-Arnold, Nir Levine, Daniel J Mankowitz, Jerry Li, Cosmin Paduraru, Sven Gowal, and Todd Hester. Challenges of real-world reinforcement learning: definitions, benchmarks and analysis. Machine Learning, 110(9):2419-2468, 2021. +[9] Johan Bjorck, Carla P Gomes, and Kilian Q Weinberger. Is High Variance Unavoidable in RL? A Case Study in Continuous Control. arXiv preprint arXiv:2110.11222, 2021. +[10] Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018. +[11] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In International conference on machine learning, pages 1329-1338. PMLR, 2016. +[12] Evan Greensmith, Peter L Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5(9), 2004. +[13] Joshua Romoff, Peter Henderson, Alexandre Piché, Vincent Francois-Lavet, and Joelle Pineau. Reward estimation for variance reduction in deep reinforcement learning. arXiv preprint arXiv:1805.03359, 2018. +[14] Christopher De Asis, J Hernandez-Garcia, G Holland, and Richard Sutton. Multi-step reinforcement learning: A unifying algorithm. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. +[15] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018. + +[16] Dimitri P Bertsekas. Rollout algorithms for discrete optimization: A survey. Handbook of Combinatorial Optimization, D. Zu and P. Pardalos, Eds. Springer, 2010. +[17] Gerald Tesauro. 
TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural computation, 6(2):215-219, 1994. +[18] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362(6419):1140-1144, 2018. +[19] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484-489, 2016. +[20] Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. In Thirty-second AAAI conference on artificial intelligence, 2018. +[21] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015. +[22] Jingkang Wang, Yang Liu, and Bo Li. Reinforcement learning with perturbed rewards. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 6202-6209, 2020. +[23] Fabio Pardo. Tonic: A deep reinforcement learning library for fast prototyping and benchmarking. arXiv preprint arXiv:2011.07537, 2020. +[24] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pages 1928-1937. PMLR, 2016. +[25] Yonathan Efroni, Gal Dalal, Bruno Scherrer, and Shie Mannor. Beyond the one-step greedy approach in reinforcement learning. In International Conference on Machine Learning, pages 1387-1396. 
PMLR, 2018. +[26] Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. 1989. +[27] Jing Peng and Ronald J Williams. Incremental multi-step q-learning. In Machine Learning Proceedings 1994, pages 226-232. Elsevier, 1994. +[28] J Fernando Hernandez-Garcia and Richard S Sutton. Understanding multi-step deep reinforcement learning: a systematic study of the dqn target. arXiv preprint arXiv:1901.07510, 2019. +[29] Harm Van Seijen, Mehdi Fatemi, and Arash Tavakoli. Using a logarithmic mapping to enable lower discount factors in reinforcement learning. Advances in Neural Information Processing Systems, 32, 2019. +[30] Harm Van Seijen and Richard S Sutton. Efficient planning in MDPs by small backups. In Proc. 30th Int. Conf. Mach. Learn, volume 28. Citeseer, 2013. +[31] Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, and Steffen Udluft. Learning and policy search in stochastic dynamical systems with bayesian neural networks. arXiv preprint arXiv:1605.07127, 2016. +[32] David Silver, Hado Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, et al. The predictron: End-to-end learning and planning. In International Conference on Machine Learning, pages 3191-3199. PMLR, 2017. + +[33] Vladimir Feinberg, Alvin Wan, Ion Stoica, Michael I Jordan, Joseph E Gonzalez, and Sergey Levine. Model-based value estimation for efficient model-free reinforcement learning. arXiv preprint arXiv:1803.00101, 2018. +[34] Alfredo V Clemente, Humberto N Castejón, and Arjun Chandra. Efficient parallel methods for deep reinforcement learning. arXiv preprint arXiv:1705.04862, 2017. +[35] Yinlong Yuan, Zhu Liang Yu, Zhenghui Gu, Yao Yeboah, Wu Wei, Xiaoyan Deng, Jingcong Li, and Yuanqing Li. A novel multi-step Q-learning method to improve data efficiency for deep reinforcement learning. Knowledge-Based Systems, 175:107-117, 2019. 
\ No newline at end of file diff --git a/alongnstepsurrogatestagerewardfordeepreinforcementlearning/images.zip b/alongnstepsurrogatestagerewardfordeepreinforcementlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..29069d073b11b245c7bc4d196eb039d80bc47bd2 --- /dev/null +++ b/alongnstepsurrogatestagerewardfordeepreinforcementlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0130e2da01179ff076fc0de512d47665e0c07e925ad93d843865ff843ebdbfe2 +size 545980 diff --git a/alongnstepsurrogatestagerewardfordeepreinforcementlearning/layout.json b/alongnstepsurrogatestagerewardfordeepreinforcementlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f8b9cbfce0d24eebdcff7b3883936ed997a5e541 --- /dev/null +++ b/alongnstepsurrogatestagerewardfordeepreinforcementlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ab354b8641deff1943e38ba64784e5adb6a5f7d76398aa0f4049346fc986cc2 +size 522509 diff --git a/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/acd3475a-b540-4ac1-bfbc-7e6d9d9a25cb_content_list.json b/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/acd3475a-b540-4ac1-bfbc-7e6d9d9a25cb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..83f3b128b12799b2e72b50415c8bd31b209bed04 --- /dev/null +++ b/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/acd3475a-b540-4ac1-bfbc-7e6d9d9a25cb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20f4aee52eb81e3bb2605e2b2e00fbc49b7a4e091614865f353280f22aa855f9 +size 201178 diff --git a/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/acd3475a-b540-4ac1-bfbc-7e6d9d9a25cb_model.json b/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/acd3475a-b540-4ac1-bfbc-7e6d9d9a25cb_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..fac959cbff592004c5c9e1c24a174553f53a9eeb --- /dev/null +++ b/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/acd3475a-b540-4ac1-bfbc-7e6d9d9a25cb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4ffa1cbff1e5f7528c5424a447bab6d2aeb073ec1ecb6d3ad5b0cadde376339 +size 234202 diff --git a/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/acd3475a-b540-4ac1-bfbc-7e6d9d9a25cb_origin.pdf b/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/acd3475a-b540-4ac1-bfbc-7e6d9d9a25cb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..713ace93eb4d92c72c7888de985b956467a1bf82 --- /dev/null +++ b/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/acd3475a-b540-4ac1-bfbc-7e6d9d9a25cb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e374c4d56de3e97a4168adb3a294b0c5ac433e7c64487a9b6424573e59a7c8d0 +size 1634500 diff --git a/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/full.md b/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/full.md new file mode 100644 index 0000000000000000000000000000000000000000..949e75ac6fa207dc71a4d8f1a4ff152add55ccac --- /dev/null +++ b/amplifiedbandedmatrixfactorizationaunifiedapproachtoprivatetraining/full.md @@ -0,0 +1,745 @@ +# (Amplified) Banded Matrix Factorization: A unified approach to private training + +Christopher A. Choquette-Choo + +Google DeepMind + +cchoquette@google.com + +Arun Ganesh + +Google Research + +arunganesh@google.com + +Ryan McKenna + +Google Research + +mckennar@google.com + +H. 
Brendan McMahan + +Google Research + +mcmahan@google.com + +Keith Rush + +Google Research + +krush@google.com + +Abhradeep Thakurta + +Google DeepMind + +athakurta@google.com + +Zheng Xu + +Google Research + +xuzheng@google.com + +# Abstract + +Matrix factorization (MF) mechanisms for differential privacy (DP) have substantially improved the state-of-the-art in privacy-utility-computation tradeoffs for ML applications in a variety of scenarios, but in both the centralized and federated settings there remain instances where either MF cannot be easily applied, or other algorithms provide better tradeoffs (typically, as $\epsilon$ becomes small). In this work, we show how MF can subsume prior state-of-the-art algorithms in both federated and centralized training settings, across all privacy budgets. The key technique throughout is the construction of MF mechanisms with banded matrices (lower-triangular matrices with at most $\hat{b}$ nonzero bands including the main diagonal). For cross-device federated learning (FL), this enables multiple-participations with a relaxed device participation schema compatible with practical FL infrastructure (as demonstrated by a production deployment). In the centralized setting, we prove that banded matrices enjoy the same privacy amplification results as the ubiquitous DP-SGD algorithm, but can provide strictly better performance in most scenarios—this lets us always at least match DP-SGD, and often outperform it. + +# 1 Introduction + +We consider machine learning (ML) with DP in the centralized (datacenter) setting and the crossdevice FL setting, extending and improving matrix factorization (MF) mechanisms to advance the state-of-the-art in both. 
Given bounded-sensitivity batch gradients $\mathbf{x}_i\in \mathbb{R}^d$ for $i\in [n]$ steps, the MF-DP-FTRL algorithm uses a noise generation matrix $\mathbf{C}^{-1}\in \mathbb{R}^{n\times n}$ to return DP gradient estimates $\hat{\mathbf{x}}_i = \mathbf{x}_i + [\mathbf{C}^{-1}\mathbf{z}]_{[i,:]}$ where $\mathbf{z}\in \mathbb{R}^{n\times d}$ has IID entries $\mathcal{N}(0,\sigma^2)$ for suitable $\sigma >0$ . The noise correlation induced by $\mathbf{C}^{-1}$ is key to the success of MF-DP-FTRL. Alg. 1 and Sec. 2 provide details and intuition, and Table 1 in App. A summarizes notation and symbols. + +![](images/e8ecc7c8daaad60ff6cd44771f1b9da9980150ac6c69b157e2560377867f0269.jpg) +(a) Centralized CIFAR-10 + +![](images/ba3a68db9f3d6653dd0ea9ecb9e8f4a36cd2194afb2da92e594ade23cdbf77e8.jpg) +(b) Centralized StackOverflow +Figure 1: In the centralized setting, our BANDMF mechanism consistently performs at least as well as the best prior methods. LEFT, (A): At $\epsilon \geq 0.5$ , our BANDMF mechanism offers consistent utility benefits of around 1 - 4 percentage points over either DP-SGD [1] or MULTI-EPOCH MF [15]. RIGHT, (B): BANDMF (bands $\hat{b} = 9, 18, 32$ , and 64 for $\epsilon = 1 - 8$ respectively) significantly outperform both (unamplified) MULTI-EPOCH MF and amplified DP-SGD. + +In datacenter applications, precise control of the sampling/shuffling of training data is possible, and so DP-SGD with privacy amplification [1] is one of the most popular ways to train machine learning models with formal privacy guarantees. However, Choquette-Choo et al. [15] recently demonstrated that a multi-epoch extension of the MF-DP-FTRL algorithm can outperform amplified DP-SGD in some settings, depending on the privacy and computational budget (typically larger budgets above $\epsilon \approx 2$ and a small number of training epochs). 
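The noise-correlation step $\hat{\mathbf{x}}_i = \mathbf{x}_i + [\mathbf{C}^{-1}\mathbf{z}]_{[i,:]}$ defined above can be sketched in a few lines of NumPy. The specific $\mathbf{C}^{-1}$ below (1s on the diagonal, $-0.5$ on the first sub-diagonal) is a toy assumption for illustration; the actual mechanisms optimize $\mathbf{C}^{-1}$ numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 4, 3, 1.0

# Toy C^{-1}: 1s on the main diagonal and a negative first sub-diagonal,
# so each step partially cancels the previous step's noise.
C_inv = np.eye(n) - 0.5 * np.eye(n, k=-1)

x = rng.normal(size=(n, d))                # bounded-sensitivity gradients x_i
z = rng.normal(scale=sigma, size=(n, d))   # i.i.d. N(0, sigma^2) entries
x_hat = x + C_inv @ z                      # x_hat_i = x_i + [C^{-1} z]_{i,:}

# Lower-triangular structure: row i mixes only noise from steps j <= i,
# so the correlated noise can be generated in a streaming fashion.
assert np.allclose(x_hat[0], x[0] + z[0])
assert np.allclose(x_hat[1], x[1] + z[1] - 0.5 * z[0])
```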
This leaves the state-of-the-art for centralized DP training in the unsatisfactory state where one must try both algorithms to be assured of the best performance. + +In cross-device federated learning, devices choose when they are available to participate in training, and so precise sampling and shuffling is generally not possible (see Sec. 3 for more details). Motivated by these limitations which make amplified DP-SGD infeasible, Kairouz et al. [35] developed the (tree-aggregation-based) DP-FTRL algorithm. Their DP-FTRL does not rely on (or benefit from) privacy amplification and instead adds carefully correlated noise to the gradients to boost utility. Denisov et al. [17] proposed MF-DP-FTRL, replacing the tree-aggregation scheme of Kairouz et al. [35] with a general matrix-factorization mechanism. By optimizing over this space to find mechanisms with optimal error, substantial performance improvements were possible. However, the work of Denisov et al. [17] applies only to the single participation (single epoch) setting. Hence, for cross-device FL the state-of-the-art also requires considering multiple algorithms: tree-aggregation-based DP-FTRL when devices may participate more than one time, or MF-DP-FTRL when devices participate only once. Importantly, the extension of MF-DP-FTRL to multiple epochs of Choquette-Choo et al. [15] only applies in the centralized setting, as it again requires precise control of the participation pattern. + +In this work, we address the limitations of MF-DP-FTRL (MF for short) noted above, and show that it can, in fact, achieve across-the-board state-of-the-art performance in both settings across all $\epsilon$ . To accomplish this, we define a family of banded MF mechanisms, shown in Fig. 4 (c.f. Fig. 8 of App. D for visualizations of other factorization structures). We summarize our main contributions below. + +Contributions for cross-device FL Here, the $(k,b)$ -participation schema of Choquette-Choo et al. 
[15] cannot be enforced. We propose a strict generalization, $b$ -min-sep-participation, which can be practically enforced by FL infrastructure. We show how to efficiently and exactly bound the sensitivity for banded matrices in Thm. 2, allowing formal DP guarantees and the numerical optimization of optimal mechanisms (Sec. 4). These innovations lead to significant privacy-utility benefits in a production deployment (Fig. 6 of Sec. 6). Our work also generalizes the sensitivity calculations of Choquette-Choo et al. [15] to provide a general upper-bound on $b$ -min-sep-participation sensitivity (Thm. 3), which allows the matrices of Choquette-Choo et al. [15] to be used in the FL setting, as well as removing the need to exactly bound $b$ before training (see Sec. 6 and App. K). + +Contributions for centralized training The existing privacy amplification analysis of DP-SGD does not allow for the correlated noise that is applied in MF-DP-FTRL. Our paper introduces a novel partitioning of the BANDMF iterates into independent queries. This allows us to prove in Thm. 4 of Sec. 5 that banded matrices enjoy the benefits of privacy amplification, and show that DP-SGD is a special case, giving us the best of both algorithms. This enables us to always pareto-dominate DP-SGD, unlike Choquette-Choo et al. [15] which only does so for large enough $\epsilon$ as observed in Fig. 1. Further, this allows us to improve on both baselines, between $1 - 4\%$ -points. Informally: + +![](images/62ba79fb98d2d058086ce16f9296bf2031f8f57a13743f17881f4d70d5f5f34a.jpg) +Figure 2: MF-DP-FTRL (Alg. 1) enables noise cancelling across steps, where DP-SGD does not. The entries $\mathbf{C}_{i,j}^{-1}$ are mostly negative (in $[0, -1)$ ) in matrices $\mathbf{C}^{-1}$ we consider (see Fig. 8). Thus, the red terms show that MF-DP-FTRL "cancels out" noise added on earlier iterations. 
For simplicity, we assume $\mathbf{C}^{-1}$ has 1s on the main diagonal and entries $\mathbf{C}_{i,j}^{-1}$ otherwise, with $\mathbf{z}_i \coloneqq \mathbf{z}_{[i,:]}$ the rows of $\mathbf{z}$.

Theorem 1 (Informal version of Theorems 4 and 5). Suppose we partition the dataset into $b$ equal-size subsets, and in step $i$ each example in the $(i \bmod b)$-th subset participates with probability $\frac{Bb}{m}$, where there are $m$ examples and the batch size is $B$. Then, a $\hat{b}$-banded $n$-iteration matrix mechanism with $\hat{b} \leq b$ satisfies the same privacy guarantees as answering $n/b$ queries on a dataset, where each element of the dataset is independently included in each query with probability $\frac{Bb}{m}$.

As an example of Thm. 1, consider doing $n = 2,000$ iterations of DP-SGD on CIFAR-10, which has $m = 50,000$ examples, using a minibatch of $B = 500$ examples in each round. This has the same DP guarantees as answering 2,000 queries using a subsampled Gaussian mechanism with sampling probability $p = 500 / 50,000 = 0.01$. If we instead use, e.g., BANDMF with $\hat{b} = b = 10$, our suggested sampling scheme is the following: Partition CIFAR-10 into 10 subsets of 5,000 examples each, $D_{1},D_{2},\ldots ,D_{10}$. In rounds 1, 11, 21, ... we sample 500 examples from $D_{1}$; in rounds 2, 12, 22, ... we sample 500 examples from $D_{2}$; and so on. We sample from each $D_{i}$ a total of $2000 / 10 = 200$ times, and each time our sampling probability is $500 / 5000 = 0.1$ within the subset. So Theorem 1 shows that $\hat{b} = 10$ BANDMF satisfies the same DP guarantees as answering 200 queries with $p = 0.1$. As a special case of Theorem 1, DP-SGD is simply MF with a suitable diagonal matrix and $\hat{b} = 1$, and thus Thm. 1 recovers the privacy guarantees of DP-SGD with amplification by sampling.
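The bookkeeping in this example is easy to mechanize. The helper below (its name and return convention are ours, not the paper's) maps BANDMF parameters to the equivalent subsampled queries promised by Thm. 1, taking $\hat{b} = b$:

```python
def amplified_query_params(n, m, B, b_hat):
    """Thm. 1 bookkeeping: n iterations of a b_hat-banded mechanism on m
    examples with batch size B match n / b_hat subsampled queries, each
    including every example with probability B * b_hat / m."""
    assert n % b_hat == 0 and m % b_hat == 0, "illustration assumes exact division"
    num_queries = n // b_hat   # each subset D_j is used n / b_hat times
    subset_size = m // b_hat   # |D_j| after partitioning into b_hat parts
    return num_queries, B / subset_size

# DP-SGD is the b_hat = 1 case: 2000 queries at sampling probability 0.01.
assert amplified_query_params(2000, 50_000, 500, 1) == (2000, 0.01)
# BANDMF with b_hat = 10: 200 queries at sampling probability 0.1.
assert amplified_query_params(2000, 50_000, 500, 10) == (200, 0.1)
```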
Empirically, we show that MF with amplification has privacy-utility tradeoffs that are no worse than DP-SGD for all $\epsilon$, and often significantly better, as can be seen in Fig. 1.

Finally, we explore the computational tradeoffs of our approach. We find that banded matrices with $b$-min-sep-participation are as efficient to optimize as those under $(k,b)$-participation, but significantly reduce the memory and time complexity of the per-iteration noise generation from $\mathcal{O}(n)$ to $\mathcal{O}(\hat{b})$ (where typically the total number of steps $n \gg \hat{b}$). We will release all code with the final manuscript.

Related work The matrix mechanism (MF mechanism or MF) [40] has a rich history in offline, statistical queries [22, 28, 42, 62], with many applications including online PCA [21], estimating marginals [22], and top-$k$ selection [17]. Recently, this has been studied in the adaptive streaming setting, where the privacy analysis must account for an adversary adaptively defining the inputs at each step [17, 24]. Denisov et al. [17] showed a connection with the DP-FTRL algorithm of Kairouz et al. [35] and with DP ML broadly; they showed that computing optimal MF significantly improves the privacy-utility-computation tradeoffs when making only a single pass (epoch) over the training data. Choquette-Choo et al. [15] showed that MF achieves state-of-the-art results in DP ML by showing how to optimize MF under arbitrary passes over the training data. Henzinger and Upadhyay [29] study the problem of DP-continual observation [13, 20] and an explicit factorization of the workload matrix $\mathbf{A}$ that minimizes the completely bounded norm, which is off from the optimal only by an additive constant. The connection between DP empirical risk minimization [3, 4, 5, 7, 8, 9, 14, 16, 23, 25, 32, 36, 39, 45, 52, 53, 54, 57] and DP online regret minimization [2, 4, 5, 26, 33, 35] has been studied for a long time. Asi et al.
[5] demonstrated that DP-FTRL style algorithms [15, 17, 35] achieve the best known regret in certain classes of online learning problems (a.k.a. the realizable regime). An important question that still remains open is whether DP-FTRL style algorithms can obtain optimal population risk guarantees under DP [9].

Algorithm 1 MF-DP-FTRL and DP-SGD
Inputs: initial model $\pmb{\theta}_{1} \in \mathbb{R}^{d}$, dataset $D$ of examples, matrix $\mathbf{C}^{-1} \in \mathbb{R}^{n \times n}$, noise $\mathbf{z} \in \mathbb{R}^{n \times d}$ with entries i.i.d. $\mathcal{N}(0, \sigma^2)$, clip norm $\zeta$
$\quad\triangleright$ DP-SGD is simply the case $\mathbf{C}^{-1} = \mathbf{I}$, the $n \times n$ identity matrix
for $i = 1, 2, \dots, n$ do
$\quad$ Select a set $S_{i} = \{\mathbf{d}_{1}, \dots, \mathbf{d}_{B}\} \subseteq D$ $\quad\triangleright$ respecting schema $\Pi$, possibly sampling, Alg. 2
$\quad$ $\mathbf{g}_{j} = \mathrm{clip}(\nabla_{\theta} \mathrm{loss}(\mathbf{d}_{j}, \pmb{\theta}_{i}), \zeta)$ for $j \in [B]$, where $\mathrm{clip}(\mathbf{d}, \zeta) = \min(1, \zeta / \| \mathbf{d} \|)\, \mathbf{d}$
$\quad$ $\mathbf{x}_{i} = \sum_{j=1}^{B} \mathbf{g}_{j}$
$\quad$ $\hat{\mathbf{x}}_{i} = \mathbf{x}_{i} + \zeta [\mathbf{C}^{-1} \mathbf{z}]_{[i,:]}$ $\quad\triangleright$ $\hat{\mathbf{x}}_{i}$ is now a DP estimate of $\mathbf{x}_{i}$
$\quad$ $\pmb{\theta}_{i+1} = \mathrm{SGDM}(\pmb{\theta}_{i}, \hat{\mathbf{x}}_{i})$ $\quad\triangleright$ any first-order optimizer can be used in place of SGDM

# 2 Matrix Factorization, Sensitivity, and Efficient Implementations

Let $\mathbf{x} \in \mathbb{R}^{n \times d}$ be a stream of model gradients, and let $\mathbf{A} \in \mathbb{R}^{n \times n}$ be an appropriate linear query workload (prefix sums, or a matrix encoding of stochastic gradient descent with momentum (SGDM) [17]).
Matrix mechanisms use a factorization $\mathbf{A} = \mathbf{BC}$ to privately estimate the quantity $\mathbf{A}\mathbf{x}$ as

$$
\widehat{\mathbf{Ax}} = \mathbf{B}(\mathbf{Cx} + \mathbf{z}), \tag{1}
$$

where $\mathbf{z}$ is noise suitably calibrated to the sensitivity of the so-called 'query matrix' $\mathbf{C}$.

Efficiently implementing MF-DP-FTRL Eq. (1) can be re-arranged as $\mathbf{A}(\mathbf{x} + \mathbf{C}^{-1}\mathbf{z})$. The multiplication by the linear operator $\mathbf{A}$ can now be viewed as post-processing of the noisy mechanism outputs $\mathbf{x} + \mathbf{C}^{-1}\mathbf{z}$; in many cases, this post-processing has an efficient streaming implementation, e.g., simply passing gradients into an SGDM implementation. Thus, implementing MF-DP-FTRL is essentially no harder than implementing DP-SGD. The only difference is that the per-iteration gradient $\mathbf{x}_i$ is protected with noise $[\mathbf{C}^{-1}\mathbf{z}]_{[i,:]}$ rather than $\mathbf{z}_{[i,:]}$. Indeed, we see DP-SGD is a special case simply by taking $\mathbf{C} = \mathbf{C}^{-1} = \mathbf{I}$. Further, as long as the noise $\mathbf{C}^{-1}\mathbf{z}$ is computed correctly, the privacy guarantee holds, independent of the choice of $\mathbf{A}$. Alg. 1 gives the complete algorithm; with an appropriate choice of the matrix $\mathbf{C}^{-1}$, this algorithm captures DP-SGD, tree-aggregation DP-FTRL² [35], as well as MF-DP-FTRL [15, 17]. The multiplication of $\mathbf{C}^{-1}$ by the Gaussian noise $\mathbf{z} \in \mathbb{R}^{n\times d}$ (which need never be fully materialized at once) is the critical step in the efficient implementation of MF. In App. J, we note this multiplication can be completed online for $\hat{b}$-banded matrices (defined formally in Sec. 3) in time and memory $\mathcal{O}(\hat{b}d)$ per training iteration, compared to $\mathcal{O}(nd)$ for a non-banded matrix.
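As a concrete illustration of this streaming computation, the sketch below materializes only the most recent $\hat{b}$ noise rows at a time. It assumes a lower-triangular $\hat{b}$-banded $\mathbf{C}^{-1}$ supplied as a dense array; the function name is ours:

```python
import numpy as np

def banded_noise_rows(C_inv, b_hat, d, rng):
    """Stream the rows of C^{-1} z without materializing z in full.

    Assuming C_inv is lower triangular and b_hat-banded, row i of C^{-1} z
    only touches the noise rows z_{i-b_hat+1}, ..., z_i, so a buffer of the
    b_hat most recent rows suffices: O(b_hat * d) time and memory per step,
    versus O(n * d) for a dense C^{-1}."""
    n = C_inv.shape[0]
    buf = []  # the <= b_hat most recent rows of z
    for i in range(n):
        buf.append(rng.standard_normal(d))
        if len(buf) > b_hat:
            buf.pop(0)
        start = i - len(buf) + 1  # index of the oldest buffered row of z
        yield sum(C_inv[i, start + k] * buf[k] for k in range(len(buf)))
```

Summing the streamed rows against a dense computation of $\mathbf{C}^{-1}\mathbf{z}$ with the same noise confirms they agree.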
Multiple participations We adopt the formalisms for multiple participations of Choquette-Choo et al. [15]. We assume there are $m$ examples (or users in FL) in the database, where $B$ examples are selected on each step $i \in [n]$. These $B$ chosen examples are said to participate on this step $i$. The examples at each step are processed via any adaptive function, e.g., computing a gradient of the current model (which depends on the model parameter values) as in Alg. 1. The per-example output vectors $\mathbf{g} \in \mathbb{R}^d$ are each bounded to $\ell_2$ norm at most $\zeta$ (since our notions of sensitivity scale linearly in $\zeta$, without loss of generality (WLOG) we take $\zeta = 1$ in the analysis below). These clipped vectors are then summed to yield $\mathbf{x}_i = \sum_{j=1}^{B} \mathbf{g}_j$. The MF-DP-FTRL mechanism releases the privatized estimates of $\mathbf{x}_i$ in a streaming fashion. The multi-epoch setting occurs when $m < n \cdot B$, so that every example necessarily participates more than once.

Intuition for (anti-)correlated noise in MF-DP-FTRL Fig. 2 compares DP-SGD and MF-DP-FTRL. To gain an intuition for why MF-DP-FTRL can perform better than DP-SGD, observe that vanilla SGD has iterates $\theta_{t} = \theta_{0} - \eta \sum_{i=1}^{t} \hat{\mathbf{x}}_{i}$, and hence when the noisy gradients $\hat{\mathbf{x}}_{i}$ are added, the $\mathbf{C}_{[i,j]}^{-1} \mathbf{z}_{[j,:]}$ terms in MF-DP-FTRL serve to cancel out some of the noise introduced on previous rounds. This reduces the total error in the final model (i.e., the prefix sums). However, this increases the sensitivity of the mechanism above 1 (its value for DP-SGD). This is because an adversary trying to learn $\mathbf{x}_{1}$ via $\hat{\mathbf{x}}_{1}$ can partially learn the value of $\mathbf{z}_{1}$ from $\hat{\mathbf{x}}_{2}$, whereas in DP-SGD $\hat{\mathbf{x}}_{1}$ and $\hat{\mathbf{x}}_{2}$ are uncorrelated. This tradeoff is what MF-DP-FTRL aims to minimize.
More details on this intuition are in App. A.1.

Adjacency and participation schemas DP requires a notion of adjacent datasets. Two data streams $\mathbf{x}$ and $\tilde{\mathbf{x}}$ are adjacent if the data associated with any single example is altered, but not the steps in which this example participated. Thus, any $\mathbf{x}_i$ where example $\mathbf{d}_j$ participated can be changed subject to the constraint $\| \mathbf{g}_j^{(i)}\| \leq \zeta$; the participation pattern itself does not change. A participation schema $\Pi$ gives the set of possible participation patterns $\pi \in \Pi$, with each $\pi \subseteq [n]$ indicating a set of steps in which a single example might participate. Let $\mathbf{N}$ be the set of all pairs of neighboring streams, and let $\mathfrak{D} := \{\mathbf{x} - \tilde{\mathbf{x}} \mid (\mathbf{x},\tilde{\mathbf{x}})\in \mathbf{N}\}$ represent the set of all possible deltas between neighboring $\mathbf{x}$, $\tilde{\mathbf{x}}$. We say $\mathfrak{D}$ satisfies the participation schema $\Pi$ if the indices of the nonzero rows of each $\mathbb{R}^{n\times d}$ matrix $\mathbf{u} \in \mathfrak{D}$ are a subset of some $\pi \in \Pi$. To illustrate this, single-participation is represented as $\Pi = \{\{1\},\{2\},\ldots ,\{n\}\}$ and full-batch gradient descent (every-step) as $\Pi = \{[n]\}$. Choquette-Choo et al. [15] studied fixed-epoch-order participation, denoted $(k,b)$-participation, where each example participates at most $k$ times, with any adjacent participations exactly $b$ steps apart: formally, $\Pi$ is the set of all $\pi$ such that $|\pi |\leq k$, and if $\pi = \{i_1,\dots,i_k\}$ is indexed in increasing order, we have $i_j - i_{j - 1} = b$ for all $j\in \{2,\dots,k\}$. For example, $(k = 2,b = 3)$-participation has $\Pi = \{\{1,4\},\{2,5\},\{3,6\}\}$. As discussed in Choquette-Choo et al. [15], this setting faithfully captures centralized multi-epoch ML training setups, with single- and every-step participation as special cases.
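The $(k=2, b=3)$ example above can be reproduced by enumerating the maximal patterns of the schema directly; a throwaway sketch (the helper name is ours):

```python
def fixed_epoch_participation(k, b):
    """Enumerate the maximal patterns of the (k, b)-participation schema
    over n = k * b steps: each pattern starts at some step s in [1, b]
    and repeats every b steps (the schema also admits subsets of these)."""
    return [frozenset(range(s, s + k * b, b)) for s in range(1, b + 1)]

# The worked example: (k = 2, b = 3) gives Pi = {{1, 4}, {2, 5}, {3, 6}}.
assert fixed_epoch_participation(2, 3) == [
    frozenset({1, 4}), frozenset({2, 5}), frozenset({3, 6})
]
```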
We can now define the sensitivity of the matrix factorization mechanism as

$$
\operatorname{sens}_{\Pi}(\mathbf{C}) = \sup_{(\mathbf{x}, \tilde{\mathbf{x}}) \in \mathbf{N}} \| \mathbf{Cx} - \mathbf{C}\tilde{\mathbf{x}} \|_{F} = \sup_{\mathbf{u} \in \mathfrak{D}} \| \mathbf{Cu} \|_{F}. \tag{2}
$$

Optimizing factorizations Different factorizations $\mathbf{A} = \mathbf{BC}$ can have very different performance in practice. Thus, in MF applications it is common to optimize over the space of factorizations, where the objective function is the expected total squared error on $\mathbf{A}$, given as $\mathcal{L}(\mathbf{B},\mathbf{C}) = \mathrm{sens}_{\Pi}^{2}(\mathbf{C})\|\mathbf{B}\|_{F}^{2}$. We define the expected root-mean-squared error (RMSE) as $\sigma \sqrt{\mathcal{L}(\mathbf{B},\mathbf{C}) / n}$, where $\sigma$ is the standard deviation of the Gaussian noise. We take $\sigma = 1$ when simply comparing mechanisms, or (in Sec. 6) calibrate $\sigma$ to achieve specific $(\epsilon, \delta)$-DP guarantees.

To facilitate optimization, utilizing the fact that the optimal-for-squared-error decoder $\mathbf{B}$ is $\mathbf{AC}^{\dagger}$ [17], we note $\mathcal{L}(\mathbf{B},\mathbf{C}) = \mathcal{L}(\mathbf{AC}^{\dagger},\mathbf{C})$. The expected total squared error is invariant to scaling $\mathbf{C}$ by a constant, and hence it is sufficient to optimize under a sensitivity-1 constraint. Further expressing the sensitivity and error in terms of $\mathbf{X} = \mathbf{C}^{\top}\mathbf{C}$ (note $\mathbf{X}$ is unrelated to the data $\mathbf{x}$), we have

$$
\mathcal{L}(\mathbf{B}, \mathbf{C}) = \mathcal{L}\left(\mathbf{AC}^{\dagger}, \mathbf{C}\right) = \left\| \mathbf{AC}^{\dagger} \right\|_{F}^{2} = \operatorname{tr}\left[ \mathbf{A}^{\top} \mathbf{A}\mathbf{X}^{-1} \right], \tag{3}
$$

assuming $\operatorname{sens}_{\Pi}(\mathbf{C}) = 1$ and $\mathbf{A}$ is in the rowspace of $\mathbf{C}$.
Thus, we arrive at:

Problem 1. The matrix factorization optimization problem is to solve the convex optimization

$$
\underset{\mathbf{X} \in \mathbf{S}_{+}^{n}}{\text{minimize}} \quad \operatorname{tr}\left[ \mathbf{A}^{\top} \mathbf{A} \mathbf{X}^{-1} \right] \quad \text{subject to} \quad \operatorname{sens}_{\Pi}^{2}(\mathbf{X}) \leq 1, \tag{4}
$$

and then find $\mathbf{C}$ so that $\mathbf{C}^{\top}\mathbf{C} = \mathbf{X}$, e.g., via Cholesky decomposition.

# 3 A Participation Schema for FL and the Sensitivity of Banded Matrices

In cross-device FL, devices locally evaluate eligibility criteria to determine when they might participate in training [10, 34], for example only checking in to the coordinating server when they are plugged in, on unmetered wifi, and idle. This makes it practically difficult to enforce the $(k,b)$-participation of Choquette-Choo et al. [15], where devices are assumed to participate at the same relative position in each epoch: devices are unlikely to meet the eligibility criteria during the narrow windows of both step $i$ and step $i + b$. Further, precise sampling cannot provide the same level of privacy amplification as in the centralized setting. Suppose 6500 devices are needed to complete a round [59]. An extreme (but realistic, depending on time of day) setting may have only 6500 devices meeting the eligibility criteria. Thus, the protocol must either proceed without any sampling/amplification or wait until more devices are available; neither is desirable. We avoid amplification in the cross-device setting and instead proceed by addressing the question: Can MF-DP-FTRL be extended to the cross-device federated learning setting with multiple client participations?
With $(k,b)$-participation difficult to enforce in practice, our first challenge is to define a new participation schema with several properties: (a) the sensitivity of any matrix mechanism under this schema can be bounded; (b) this bound is tight over an expressive class of matrices; (c) this bound can be efficiently represented as a constraint in a mathematical program so as to be able to find a near-optimal factorization $\mathbf{A} = \mathbf{BC}$. In Defn. 1, we propose $b$-min-sep-participation, a generalization of $(k,b)$-participation which can be practically enforced by cross-device FL systems, thus enabling us to leverage BANDMF in this setting (see Sec. 6). In $b$-min-sep-participation, the distance between any two participations is at least $b$, rather than exactly $b$ as in $(k,b)$-participation:

Definition 1. The $b$-min-sep-participation schema is given by

$$
\Pi_{b} = \left\{\pi \subseteq [n] \mid \{i, j\} \subseteq \pi, i \neq j \Rightarrow |i - j| \geq b \right\}.
$$

Observe that this participation schema is easy for devices to enforce: each device remembers the last step $i$ in which it participated, and when it again becomes eligible, it checks in to the server and participates in training as long as the current step is at least $i + b$; it does not need to check in during a narrow (and unknown to the device) time window for a specific step.

We now turn to computing sensitivity under $b$-min-sep-participation. For $(k, b)$-participation, $|\Pi_{(k, b)}| = b$, a fact Choquette-Choo et al. [15, Eq. 3] critically exploited when computing sensitivity via brute-force computation of a maximum over the elements of $\Pi$. With $b$-min-sep-participation, we have $|\Pi_b| = \mathcal{O}(\exp(n))$, and hence any brute-force approach which requires checking some value for all $\pi \in \Pi_b$ will be impractical.
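Checking Defn. 1 for a single pattern is trivial, while enumerating $\Pi_b$ is not; the brute-force count below (practical only for tiny $n$) illustrates the exponential blow-up. Both helpers are our own illustrations:

```python
from itertools import combinations

def is_b_min_sep(pi, b):
    """Defn. 1 membership test: all distinct steps in pi are >= b apart
    (checking consecutive sorted steps suffices)."""
    steps = sorted(pi)
    return all(j - i >= b for i, j in zip(steps, steps[1:]))

def count_b_min_sep(n, b):
    """Brute-force |Pi_b| over steps [1, n]; exponential in n, so this is
    for illustration only, in contrast to |Pi_(k,b)| = b."""
    return sum(
        1
        for r in range(n + 1)
        for pi in combinations(range(1, n + 1), r)
        if is_b_min_sep(pi, b)
    )

assert is_b_min_sep({1, 4, 9}, 3) and not is_b_min_sep({1, 3}, 3)
assert count_b_min_sep(10, 3) == 60  # already far exceeds |Pi_(k,b)| = 3
```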
Following the formalism of [15, Section 2], a participation schema $\Pi$ (plus a specification of model dimension $d$) yields an expression for the sensitivity of the function $\mathbf{x} \mapsto \mathbf{C}\mathbf{x}$ assuming that the contributions of any given user to the data structure $\mathbf{x}$ are restricted to the rows in $\mathbf{x}$ indexed by some $\pi \in \Pi$. By Prop. E.1 of App. E.2, independent of model dimension $d$, we show sensitivity for any schema $\Pi$ may be bounded by

$$
\operatorname{sens}_{\Pi}(\mathbf{C})^{2} \leq \max_{\pi \in \Pi} \sum_{i, j \in \pi} | \mathbf{X}_{[i, j]} |. \tag{5}
$$

Eq. (5) highlights several subtleties in computing sensitivity. First is the challenge presented by the exponentially large number of patterns in $\Pi_b$. Second is the question of tightness of the inequality in Eq. (5): how much are we losing by effectively ignoring any cancellation in the matrix $\mathbf{X}$?

Banded matrices Fortunately, banded matrices render Eq. (5) both exactly computable and tight (independent of dimension $d$), showing that $\Pi_b$ satisfies the requirements of (b) and (c) above. We say a (general) matrix $\mathbf{X}$ is $\hat{b}$-banded if for all $i,j\in [n]$, $|i - j|\geq \hat{b}$ implies $\mathbf{X}_{[i,j]} = 0$. While this is off-by-one from the bandwidth ($\mathbf{X}$ has bandwidth $\hat{b} - 1$), our definition will be useful as it will be natural to match $\hat{b}$-banded matrices with $b$-min-separation. Further, for $\hat{b}$-banded lower-triangular matrices (which will play a central role), $\hat{b}$ intuitively gives the number of bands in the matrix.

For non-banded matrices, the right-hand side of Eq. (5) remains efficiently computable (but not easily expressible in a mathematical program), enabling us to provide nontrivial privacy guarantees for matrices which are not $b$-banded under $b$-min-sep, showing that $\Pi_b$ satisfies (a) as well. The key subroutine is Alg. 3, which gives an efficient dynamic program for solving linear optimization over $\Pi_b$. Define $\mathbf{u}(\pi) \in \{0,1\}^n$ by $\mathbf{u}(\pi)_i = 1$ if $i \in \pi$ and 0 otherwise. Then, Alg. 3 solves

$$
\min_{\pi \in \Pi_{b}} \left\langle \mathbf{v}, \mathbf{u}(\pi) \right\rangle .
$$

This is the key subroutine in Alg. 4 and Alg. 5. Proofs for Thm. 2 and Thm. 3 are deferred to App. E.2.

Theorem 2. Let $\mathbf{C} \in \mathbb{R}^{n \times n}$ be a lower-triangular matrix, and $\Pi_b$ the $b$-min-sep-participation schema. Further, suppose $k'$ upper-bounds the actual maximum number of participations that occurred in a data stream $\mathbf{x}$ (at worst, we may take $k' \leq \lceil \frac{n}{b} \rceil$). Then: (1) If $\kappa$ is an upper bound on the column norms of $\mathbf{C}$, that is, $\forall j \in [n]$, $\| \mathbf{C}_{[:,j]} \| \leq \kappa$, and $\mathbf{C}$ is $b$-banded, then the sensitivity is bounded by $\kappa \sqrt{k'}$. (2) If $\mathbf{C}$ is $b$-banded, Alg. 5, invoked with the Gram matrix $\mathbf{X} = \mathbf{C}^\top \mathbf{C}$ and $b, k'$ as in the setup, exactly computes $\mathrm{sens}(\mathbf{C})$ under schema $\Pi_b$ in polynomial time for any dimension $d$.

Theorem 3. For an arbitrary (non-banded) $\mathbf{C}$, let $\mathbf{X} = \mathbf{C}^{\top}\mathbf{C}$ and $b, k'$ be as in Thm. 2. Then Alg. 4 of App. E upper-bounds $\mathrm{sens}(\mathbf{C})$ under schema $\Pi_b$ in polynomial time for any dimension $d$.

# 4 Optimizing Banded Matrices

To enjoy the benefits of banded matrices within the framework of MF-DP-FTRL, we need to design an algorithm that can efficiently optimize over the space of $\hat{b}$-banded $\mathbf{C}$ matrices. To solve this problem, we will work in the domain of $\mathbf{X} = \mathbf{C}^{\top}\mathbf{C}$, and utilize the following fact:

Proposition 4.1. Let $\mathbf{X} \in \mathbb{R}^{n \times n}$ be a $\hat{b}$-banded symmetric positive definite matrix.
Then there exists a lower-triangular $\hat{b}$-banded matrix $\mathbf{C} \in \mathbb{R}^{n \times n}$ such that $\mathbf{X} = \mathbf{C}^{\top} \mathbf{C}$.

Utilizing Prop. 4.1, we can modify Problem 1 by introducing the constraint $\mathbf{X}_{[i,j]} = 0$ if $|i - j|\geq \hat{b}$. This additional linear constraint preserves the convexity of the optimization problem, and makes the sensitivity calculation tractable as well. However, it is still not immediately obvious how to solve the optimization problem, since we need to run the dynamic program defined in Alg. 5 of App. E to compute sensitivity. For this reason, we impose the additional constraint that $\mathrm{diag}(\mathbf{X}) = \mathbf{1}$. This constraint, together with bandedness, ensures that the squared sensitivity is equal to $k$ for all $\mathbf{X}$ by Thm. 2. The final optimization problem we seek to solve is stated below:

Problem 2. The matrix factorization optimization problem for banded matrices is to solve

$$
\underset{\mathbf{X} \in \mathbf{S}_{+}^{n}}{\text{minimize}} \quad \operatorname{tr}\left[ \mathbf{A}^{\top} \mathbf{A} \mathbf{X}^{-1} \right] \quad \text{subject to} \quad \operatorname{diag}(\mathbf{X}) = \mathbf{1} \text{ and } \mathbf{X}_{[i,j]} = 0 \text{ if } |i - j| \geq \hat{b}, \tag{6}
$$

and then find $\mathbf{C}$ so that $\mathbf{C}^{\top}\mathbf{C} = \mathbf{X}$ via Prop. 4.1.

We would like to remark on the similarity between Problem 2 and the single-participation version of Problem 1. The two problems are identical modulo the bandedness constraint, which is an equality constraint on individual entries of $\mathbf{X}$. Therefore, existing primal-optimization based solvers [43] for the single-participation matrix mechanism can be extended to optimize over this new space of matrices with little modification.
Specifically, the only modification necessary is to initialize to an appropriately banded feasible $\mathbf{X}$ matrix, like $\mathbf{X} = \mathbf{I}$, and to post-process the gradient w.r.t. $\mathbf{X}$ by setting $\frac{\partial L}{\partial \mathbf{X}_{[i,j]}} = 0$ if $|i - j| \geq \hat{b}$ in each step. Since the equality constraints exactly specify individual entries of $\mathbf{X}$, Problem 2 can be solved as an unconstrained optimization problem (over the remaining entries of $\mathbf{X}$), using any number of off-the-shelf unconstrained optimization algorithms. As recommended by McKenna et al. [43], we use the LBFGS algorithm [12] to solve this problem.

Remarks on the $\mathrm{diag}(\mathbf{X}) = \mathbf{1}$ constraint The constraint on $\mathrm{diag}(\mathbf{X})$ serves multiple purposes. First, $\mathrm{diag}(\mathbf{X}) = \mathbf{1}$ implies that $\| \mathbf{C}_{[:,i]}\|_2 = 1$ for all $i$, i.e., that $\mathbf{C}$ has equal column norms. This ensures that BANDMF reduces to DP-SGD when $\hat{b} = 1$, which is desirable. Second, $\mathrm{diag}(\mathbf{X}) = \mathbf{1}$ simplifies the optimization problem greatly, as the sensitivity computations for both $(k,b)$-participation and $b$-min-sep are trivial and tight under this constraint (Thm. 2, Claim (1)). Third, imposing this constraint does not drastically change the search landscape, or cost much in terms of RMSE; see Table 2 for a comparison of matrices with and without this constraint, and Fig. 8 for a visualization. Fourth, this constraint allows us to solve a single optimization problem that is simultaneously tailored for $(k,b)$-participation and $b$-min-sep-participation. In Appendices B and C, we formulate an optimization problem without the $\mathrm{diag}(\mathbf{X}) = \mathbf{1}$ constraint, discuss how we solve it, and compare matrices generated with and without this constraint empirically.
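To make the recipe concrete, here is a minimal sketch of Problem 2 solved by masked gradient descent with backtracking (our simplification; the paper uses LBFGS as noted above, and the step size and iteration count here are our choices). It uses the fact that the gradient of $\mathrm{tr}[\mathbf{M}\mathbf{X}^{-1}]$ w.r.t. $\mathbf{X}$ is $-\mathbf{X}^{-1}\mathbf{M}\mathbf{X}^{-1}$, and zeroing its diagonal and out-of-band entries keeps the iterates feasible:

```python
import numpy as np

def optimize_banded_X(A, b_hat, iters=100, lr=0.01):
    """Minimize tr[A^T A X^{-1}] over b_hat-banded X with diag(X) = 1
    (Problem 2) by masked gradient descent with simple backtracking.
    The gradient of tr[M X^{-1}] w.r.t. X is -X^{-1} M X^{-1}; zeroing
    its diagonal and out-of-band entries keeps both constraints exact."""
    n = A.shape[0]
    M = A.T @ A
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    mask = (dist < b_hat) & (dist > 0)  # free (in-band, off-diagonal) entries
    X = np.eye(n)                       # feasible start: identity (DP-SGD)
    loss = np.trace(M @ np.linalg.inv(X))
    for _ in range(iters):
        G = -np.linalg.inv(X) @ M @ np.linalg.inv(X)
        step = lr
        while True:
            X_new = X - step * (G * mask)
            try:
                np.linalg.cholesky(X_new)  # ensure X stays positive definite
                new_loss = np.trace(M @ np.linalg.inv(X_new))
                if new_loss < loss:
                    X, loss = X_new, new_loss
                    break
            except np.linalg.LinAlgError:
                pass
            step /= 2
            if step < 1e-12:
                return X  # no further progress possible
    return X
```

Any positive definite $\mathbf{X}$ produced this way admits a lower-triangular $\hat{b}$-banded $\mathbf{C}$ with $\mathbf{C}^{\top}\mathbf{C} = \mathbf{X}$ by Prop. 4.1.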
# 5 Amplification for Banded Matrix Mechanisms

In the centralized setting, where we can control the participation patterns of individual examples, the privacy guarantees of BANDMF can be amplified. We focus on amplification by sampling with a fixed batch size in this section, but give a more general statement in App. F.

The existing privacy analysis of MF-DP-FTRL is based on reducing the batch release of the entire $\mathbf{Cx} + \mathbf{z}$ to a single Gaussian mechanism event [15, 17], so standard amplification techniques do not directly apply. Instead, for each participation by an example, we consider the set of rows in $\mathbf{Cx}$ affected by this participation as a Gaussian mechanism (see the groups of rows in Fig. 4). Then, as long as the sets of rows corresponding to different participations do not interact, which is ensured by the bandedness of $\mathbf{C}$, we can apply amplification to them separately.

# Algorithm 2 Sampling scheme

```latex
$D_1,\ldots,D_b \gets$ arbitrary partition of $D$
for $i = 1,2,\dots ,n$ do
    $j = i \bmod b$ (taking $j = b$ if $b$ divides $i$)
    $S_{i}\gets$ random size-$B$ subset of $D_{j}$
    Compute $\mathbf{x}_i$ on $S_{i}$
Release $\mathbf{Cx} + \mathbf{z}$
```

![](images/ffb0df0f8f150f07077ebf557d20014b2547856d8afc0da20b48643e30614208.jpg)
Figure 3: An example of our sampling scheme that gives privacy amplification for BANDMF.
Figure 4: Visualization of how we can decompose BANDMF into independent queries when using Alg. 2. Larger view in Fig. 10 of App. F.

Observe from Fig. 4 that the structure of BANDMF guarantees that the sets of rows of $\mathbf{Cx}$ which depend on each of $\mathbf{x}_j, \mathbf{x}_{\hat{b} + j}, \mathbf{x}_{2\hat{b} + j}, \ldots$ are disjoint. Thus, we use the following sampling scheme for determining which examples participate in each step, which is made formal as Alg. 2.
Let $D_1, D_2, \ldots, D_{\hat{b}}$ be an arbitrary partition of $D$ into $\hat{b}$ indexed subsets of size $\breve{m} := \lfloor m / \hat{b} \rfloor$ (for simplicity, we discard extra examples so all $D_j$ have size exactly $\breve{m}$). In steps $j, \hat{b} + j, 2\hat{b} + j, \ldots$, we will only use examples in $D_j$. Hence, participation follows $(k, b)$-participation with $b = \hat{b}$. Because it is optimal for $(k, b)$-participation to have the number of bands $\hat{b}$ equal to the min-separation $b$, in the remainder of this section and the associated appendices we simply write $b$ instead of $\hat{b}$. Within each of these steps, we sample a size-$B$ subset of $D_j$ uniformly at random to use in computing $\mathbf{x}_i$.

Roughly speaking, Thm. 4 below shows that if we use Alg. 2, BANDMF satisfies any standard privacy guarantee satisfied by DP-SGD run for $k$ rounds, where in each round we sample $B$ examples from a dataset of size $m$. In other words, it is equivalent to running DP-SGD for $1 / b$ times as many rounds, but with the sampling probability multiplied by $b$.

Theorem 4. Suppose $\mathbf{C}$ is $b$-banded and lower triangular, and the examples participating in each step are chosen according to Alg. 2. Then BANDMF satisfies any standard DP guarantee satisfied by performing $k$ sensitivity-$\kappa$ queries on a dataset of size $m$ using the Gaussian mechanism, where each query is run on a random subset of examples of size $B$. Here $\kappa$ is the maximum column norm of $\mathbf{C}$.

The key idea behind Thm. 4 is the following: assume we have two datasets $D, D'$ that differ in an example in $D_{1}$ (WLOG), i.e., the differing example can only participate in steps $1, b + 1, \ldots, (k - 1)b + 1$. Then by the banded structure of $\mathbf{C}$ and the standard technique of reducing adaptive queries to non-adaptive queries (see e.g.
Claim D.1 in [17]), the first $b$ rows of $\mathbf{Cx}$ (i.e., all the rows where examples in step 1 influence the output) can be viewed as a query on the examples in $D_{1}$ that were included in step 1, the next $b$ rows can be viewed as an adaptively chosen query on the examples included in step $b + 1$, and so on. See Fig. 4 for a visualization.

Generalization to other privacy amplification techniques Thm. 5 in App. F.3 provides a strong generalization of Thm. 4. It shows that shuffling, another common amplification technique often used for DP-SGD, can also be applied to BANDMF. We also provide an explicit algorithm for accounting for amplification via sampling in terms of the dp_accounting library [18], along with examples of concrete privacy parameters derived from these corollaries.

Optimizing the number of bands Thm. 4 shows that different numbers of bands (with a corresponding sampling scheme) give different privacy amplification guarantees. This implies that, given a particular privacy (or RMSE) target, one should optimize the number of bands to get the best RMSE (or privacy) possible. This can be done efficiently. Generally, for large values of $\epsilon$, larger numbers of bands perform better, and as $\epsilon \to 0$, eventually amplification dominates, so $\hat{b} = 1$ becomes optimal. Details are in App. F.4.

![](images/ac57359c04e0a55b161e2191ebb34097fecb390a3dbd45e149bd412782637ce2.jpg)
(a) DP-SGD vs. MULTI-EPOCH MF

![](images/2d4ca31443b6e2f245a7b9f39d245bb8aaef18c8b46032f1306d64cb0cee5866.jpg)
(b) DP-SGD vs. Ampl. BANDMF
Figure 5: Comparison between DP-SGD, MULTI-EPOCH MF, and BANDMF in terms of RMSE on the prefix-sum queries, for $n = 1024$ iterations, $\epsilon \in [\frac{1}{32}, 16]$, $\delta = 10^{-6}$, and epochs $\in [1, 1024]$. Color indicates the ratio in RMSE between the two mechanisms. Additional details in App. G.
![](images/7f8b3f54b3e9e37c8049b4ee4bef73135e8dc88b553eea55601e8a242d792e35.jpg)
(c) RMSE vs. Epochs at $\epsilon = 1$

# 6 Experiments

Our experiments on example-level DP for image classification (CIFAR-10) and user-level DP for next word prediction (StackOverflow NWP) focus on comparing our BANDMF with the existing state-of-the-art MULTI-EPOCH MF [15] and DP-SGD [1]. We finish by showing that BANDMF improves over the state-of-the-art [35] for a production mobile keyboard next word prediction model. In all cases, the noise $\sigma$ is calibrated using privacy loss distributions [38] to achieve the stated privacy guarantees for zero-out adjacency [35, 48], as implemented in [18]. Following common convention, amplified results are based on privacy accounting for Poisson sampling, though shuffling was used in non-production training. We find BANDMF can outperform both mechanisms across a wide range of privacy budgets, down to as low as $\epsilon \approx 0.5$; past this, it is no worse than either.

RMSE We begin by comparing DP-SGD with MULTI-EPOCH MF and BANDMF in terms of their RMSE (Sec. 2) on the prefix workload. This is one measure for how much noise these mechanisms add during training, and is a reasonable proxy for learning performance. Observe in Fig. 5(a) that there are regimes where DP-SGD outperforms MULTI-EPOCH MF and vice versa in terms of RMSE. When the number of epochs equals $n$, DP-SGD reduces to full gradient descent (GD) (and there is no amplification benefit), and the optimal MF mechanism is close to the identity matrix (that is, GD), and so the algorithms become almost identical (MF has a very small advantage, as the identity matrix is not quite optimal for RMSE). However, as shown in Fig. 5(b), we see that BANDMF is always at least as good as DP-SGD in terms of RMSE. The improvement is most pronounced in the regime of fewer epochs and larger $\epsilon$, which is standard for large model training. In Fig.
5(c), we see that for fixed $n$ as the number of epochs increases, all mechanisms enjoy improved RMSE, and BANDMF in fact reduces to DP-SGD in that regime (though BANDMF may be outperformed in RMSE by MULTI-EPOCH MF due to our imposed optimization constraint of constant diagonals in X). + +Centralized training with amplification Our full experimental setup is described in App. H, and closely follows prior work [15]. We train for 20 epochs on CIFAR-10, and tune all mechanisms to achieve their best performance for each $\epsilon$ , using 12 repeated runs. Fig. 1(a) shows that BANDMF with amplification and an optimized number of bands $\hat{b}$ can obtain utility benefits over both prior mechanisms. We find that for $\epsilon \in [2,5]$ , BANDMF achieves a consistent $\approx 1$ percentage point boost in performance over MULTI-EPOCH MF. Below $\epsilon \approx 2$ , where DP-SGD previously dominated, we find that BANDMF obtains a benefit around 3 percentage points. These two findings show that BANDMF is able to balance and leverage the benefits of both amplification and correlated noise effectively. As the budget $\epsilon$ gets smaller, we find that BANDMF is equivalent to DP-SGD. + +We next consider the now-standard StackOverflow next-word-prediction (NWP) task with user-level differential privacy, again following [15] (full details in App. I), in particular 2052 steps and 6 epochs, with $B = 1000$ . The previous state-of-the-art for centralized training at $\epsilon \geq 2$ corresponds to their MULTI-EPOCH MF. We again tune $\hat{b}$ under amplification for optimal RMSE, selecting $\hat{b} = 9, 18, 32, 64$ for $\epsilon = 1, 2, 4, 8$ respectively. At $\epsilon = 16$ we find MULTI-EPOCH MF is optimal. Fig. 1(b) shows substantial improvements for combining amplification with MF for $\epsilon \in [1, 8]$ ; Table 5 of App. I gives the hyperparameters and accuracy values for this figure. 
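As an aside, the RMSE measure used throughout this section can be reproduced in miniature. The sketch below is our own NumPy illustration (the function name `rmse` and the unit-noise normalization are our assumptions, inferred from Sec. 2): for a factorization $\mathbf{A} = \mathbf{B}\mathbf{C}$ of the prefix-sum workload, per-query noise is governed by $\lVert\mathbf{B}\rVert_F$, rescaled by the sensitivity of $\mathbf{C}$.

```python
import numpy as np

def rmse(C, sens, n):
    """Unit-noise RMSE proxy for a factorization A = B C of the prefix-sum
    workload A (the lower-triangular matrix of ones). Noise z is added as
    C x + z, so the estimate of A x is B (C x + z) and the per-query noise
    scales with the row norms of B; `sens` rescales for sensitivity."""
    A = np.tril(np.ones((n, n)))
    B = A @ np.linalg.inv(C)
    return np.linalg.norm(B, "fro") * sens / np.sqrt(n)

# DP-SGD corresponds to C = I (so B = A), with sensitivity 1:
n = 1024
print(f"{rmse(np.eye(n), 1.0, n):.2f}")
```

A better encoder $\mathbf{C}$ with the same sensitivity reduces this quantity, which is exactly the quantity the factorizations in this section are tuned to minimize.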
![](images/1ae272a8fec0a867ada11be61667b5ebedbfb9e29e524a60b31cab17913dd77d.jpg)
(a) Simulated cross-device FL, SO NWP

![](images/8aa06e11c0bf02e4994a557836dfc69fdf0fe95443128fcde1a9f982044d2412.jpg)
(b) Production mobile keyboard training

Figure 6: BOTH: Amplification is infeasible as outlined in Sec. 6 and App. K. LEFT, (A): Cross-device FL results under $b = 342$-min-sep-participation. BANDMF and MULTI-EPOCH MF (with the application to FL made possible by Thm. 3) outperform all prior work. RIGHT, (B): Evaluation accuracy of a language model trained in a real-world FL system. BANDMF achieves higher utility and a stronger $(4.35, 10^{-10})$-DP guarantee compared to the $(6.69, 10^{-10})$-DP achieved by ONLINE TREEAGG [35].

Cross-device federated learning We consider SO NWP again, but now assuming each user's data corresponds to the data on one device. We assume 2052 rounds and 6 epochs as above. Amplification is generally not possible in the cross-device setting, and so the prior state of the art was (1) SINGLE-EPOCH MF of Denisov et al. [17] for single-epoch training (using $B = 167$ rather than $B = 1000$), and (2) OPTIMAL TREEAGG, which essentially takes the binary-tree matrix $\mathbf{C}_{\mathcal{T}}$ and, instead of using the less-efficient "online" estimator of Honaker [30], uses the pseudo-inverse $\mathbf{C}_{\mathcal{T}}^{\dagger}$ for noise generation (see Sec. 2). The $b$-min-sep-participation sensitivity of $\mathbf{C}_{\mathcal{T}}$ can still be calculated using the dynamic program of Kairouz et al. [35], while the use of $\mathbf{C}_{\mathcal{T}}^{\dagger}$ requires the machinery of Denisov et al. [17]; see e.g. the OPTDECODERHONAKER results of Choquette-Choo et al. [15, Fig. 6]. Our Thm. 3 further enables an upper bound on the $b$-min-sep-participation sensitivity of the MULTI-EPOCH MF matrices of Choquette-Choo et al. [15]; this incurs a penalty of about $15\%$ (see Table 2 in App. C) compared to $(k,b)$-sensitivity. Fig.
6(a) shows that our BANDMF and MULTI-EPOCH MF again outperform prior baselines (though the previously untested multi-epoch OPTIMAL TREEAGG performs quite well); Table 4 gives the hyperparameters and accuracy values for this figure. + +Application in production cross-device FL We fine-tune a Spanish next word prediction model, pretrained on the multilingual C4 dataset [49, 60], with on-device user data using FL. Our setup follows [59], and is described in full in App. K. We compared to an existing implementation of the ONLINE TREEAGG algorithm of Kairouz et al. [35] (not the optimal version using $\mathbf{C}_{\mathcal{T}}^{\dagger}$ in simulation). Both algorithms ran for $n = 2000$ training rounds. The BANDMF matrix was optimized for $\hat{b} = 400$ bands; however, the production system only allows approximate control of the separation between participations, and post-hoc we could only bound $b$ by 390 rounds for ONLINE TREEAGG and 385 for BANDMF, necessitating the use of Thm. 3 for the analysis of BANDMF as $b < \hat{b}$ . + +We used the same clients/round goal of 6500 for both, and tuned noise multipliers to achieve comparable RMSE, hence tuning for a stronger privacy guarantee rather than improved accuracy. Fig. 6(b) shows our results, and we see BANDMF actually achieves a slight improvement in accuracy, possibly due to learning-rate cooldown (which was only implemented for BANDMF). Our primary result is then that we are able to improve the privacy guarantee from $\rho = 0.52$ -zCDP for ONLINE TREEAGG to $\rho = 0.24$ -zCDP for BANDMF, or $(\epsilon = 6.69, \delta = 10^{-10})$ -DP to $(\epsilon = 4.35, \delta = 10^{-10})$ -DP. Details of the privacy guarantee following the best practices of Ponomareva et al. [48] are in App. K.1. + +# 7 Discussion and Limitations + +In this paper, we proposed the BANDMF mechanism, which extends MF-DP-FTRL and enjoys the benefits of privacy amplification. 
This allows it to operate strictly above the previous Pareto frontier defined by both amplified DP-SGD and MF-DP-FTRL in centralized training scenarios. Moreover, BANDMF is well-suited to federated training scenarios, and improves the state of the art there as well. Additionally, the computational overhead of BANDMF is smaller than that of MF-DP-FTRL by a factor of $\frac{b}{n}$. It still has a $b \times$ time and space overhead compared to DP-SGD, which can be prohibitive for very large models with billions of parameters; reducing this overhead is an interesting and important future research direction.

# Acknowledgement

The authors thank Natalia Ponomareva for early feedback and discussion, and Yanxiang Zhang and Yuanbo Zhang for their support of FL production training.

# References

[1] Martin Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proc. of the 2016 ACM SIGSAC Conf. on Computer and Communications Security (CCS'16), pages 308-318, 2016.
[2] Naman Agarwal and Karan Singh. The price of differential privacy for online learning. In International Conference on Machine Learning, pages 32-40. PMLR, 2017.
[3] Hilal Asi, Daniel Asher Nathan Levy, and John Duchi. Adapting to function difficulty and growth conditions in private optimization. In Advances in Neural Information Processing Systems, 2021.
[4] Hilal Asi, Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private online prediction from experts: Separations and faster rates. arXiv preprint arXiv:2210.13537, 2022.
[5] Hilal Asi, Vitaly Feldman, Tomer Koren, and Kunal Talwar. Near-optimal algorithms for private online optimization in the realizable regime. arXiv preprint arXiv:2302.14154, 2023.
[6] Borja Balle, Peter Kairouz, H Brendan McMahan, Om Thakkar, and Abhradeep Thakurta. Privacy amplification via random check-ins. In NeurIPS, 2020.
[7] Raef Bassily, Adam Smith, and Abhradeep Thakurta.
Private empirical risk minimization: Efficient algorithms and tight error bounds. In Proc. of the 2014 IEEE 55th Annual Symp. on Foundations of Computer Science (FOCS), pages 464-473, 2014. +[8] Raef Bassily, Vitaly Feldman, Kunal Talwar, and Abhradeep Thakurta. Private stochastic convex optimization with optimal rates. In Advances in Neural Information Processing Systems, pages 11279-11288, 2019. +[9] Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, and Kunal Talwar. Stability of stochastic gradient descent on nonsmooth convex losses. arXiv preprint arXiv:2006.06914, 2020. +[10] Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Konecny, Stefano Mazzocchi, H Brendan McMahan, et al. Towards federated learning at scale: System design. arXiv preprint arXiv:1902.01046, 2019. +[11] Mark Bun and Thomas Steinke. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In Theory of Cryptography Conference, pages 635-658. Springer, 2016. +[12] Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on scientific computing, 16(5):1190-1208, 1995. +[13] T.-H. Hubert Chan, Elaine Shi, and Dawn Song. Private and continual release of statistics. ACM Trans. on Information Systems Security, 14(3):26:1-26:24, November 2011. +[14] Kamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(Mar):1069-1109, 2011. +[15] Christopher A Choquette-Choo, H Brendan McMahan, Keith Rush, and Abhradeep Thakurta. Multi-epoch matrix factorization mechanisms for private machine learning. arXiv preprint arXiv:2211.06530, 2023. +[16] Rishav Chourasia, Jiayuan Ye, and Reza Shokri. Differential privacy dynamics of Langevin diffusion and noisy gradient descent. In Advances in Neural Information Processing Systems, 2021. 
[17] Sergey Denisov, Brendan McMahan, Keith Rush, Adam Smith, and Abhradeep Guha Thakurta. Improved differential privacy for SGD via optimal private linear operators on adaptive streams, 2022. URL https://arxiv.org/abs/2202.08312.
[18] DP Team. Google's differential privacy libraries, 2022. https://github.com/google/differential-privacy.
[19] Jeremy Du Croz, Peter Mayes, and Giuseppe Radicati. Factorizations of band matrices using level 3 BLAS. In CONPAR 90—VAPP IV: Joint International Conference on Vector and Parallel Processing, Zurich, Switzerland, September 10–13, 1990, Proceedings, pages 222–231. Springer, 2005.
[20] Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N. Rothblum. Differential privacy under continual observation. In Proc. of the Forty-Second ACM Symp. on Theory of Computing (STOC'10), pages 715-724, 2010.
[21] Cynthia Dwork, Kunal Talwar, Abhradeep Thakurta, and Li Zhang. Analyze gauss: optimal bounds for privacy-preserving principal component analysis. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pages 11-20, 2014.
[22] Alexander Edmonds, Aleksandar Nikolov, and Jonathan Ullman. The Power of Factorization Mechanisms in Local and Central Differential Privacy, page 425-438. Association for Computing Machinery, New York, NY, USA, 2020. ISBN 9781450369794. URL https://doi.org/10.1145/3357713.3384297.
[23] Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: Optimal rates in linear time. In Proc. of the Fifty-Second ACM Symp. on Theory of Computing (STOC'20), 2020.
[24] Hendrik Fichtenberger, Monika Henzinger, and Jalaj Upadhyay. Constant matters: Fine-grained complexity of differentially private continual observation, 2022. URL https://arxiv.org/abs/2202.11205.
[25] Sivakanth Gopi, Yin Tat Lee, and Daogao Liu. Private convex optimization via exponential mechanism. arXiv preprint arXiv:2203.00263, 2022.
[26] Abhradeep Guha Thakurta and Adam Smith.
(nearly) optimal algorithms for private online learning in full-information and bandit settings. Advances in Neural Information Processing Systems, 26, 2013. +[27] Andrew Hard, Kanishka Rao, Rajiv Mathews, Françoise Beaufays, Sean Augenstein, Hubert Eichner, Chloe Kiddon, and Daniel Ramage. Federated learning for mobile keyboard prediction. CoRR, abs/1811.03604, 2018. URL http://arxiv.org/abs/1811.03604. +[28] Moritz Hardt and Kunal Talwar. On the geometry of differential privacy. In STOC, 2010. +[29] Monika Henzinger and Jalaj Upadhyay. Constant matters: Fine-grained complexity of differentially private continual observation using completely bounded norms. arXiv preprint arXiv:2202.11205, 2022. +[30] James Honaker. Efficient use of differentially private binary trees. Theory and Practice of Differential Privacy (TPDP 2015), London, UK, 2:26-27, 2015. +[31] Dzmitry Huba, John Nguyen, Kshitiz Malik, Ruiyu Zhu, Mike Rabbat, Ashkan Yousefpour, Carole-Jean Wu, Hongyuan Zhan, Pavel Ustinov, Harish Srinivas, et al. Papaya: Practical, private, and scalable federated learning. Proceedings of Machine Learning and Systems, 4: 814-832, 2022. +[32] Roger Iyengar, Joseph P Near, Dawn Song, Om Thakkar, Abhradeep Thakurta, and Lun Wang. Towards practical differentially private convex optimization. In 2019 IEEE Symposium on Security and Privacy (SP), 2019. +[33] Prateek Jain, Pravesh Kothari, and Abhradeep Thakurta. Differentially private online learning. In Proc. of the 25th Annual Conf. on Learning Theory (COLT), volume 23, pages 24.1-24.34, June 2012. +[34] Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascon, Badih Ghazi, Phillip B. 
Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrede Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramère, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, + +and Sen Zhao. Advances and open problems in federated learning. CoRR, abs/1912.04977, 2019. URL http://arxiv.org/abs/1912.04977. +[35] Peter Kairouz, Brendan McMahan, Shuang Song, Om Thakkar, Abhradeep Thakurta, and Zheng Xu. Practical and private (deep) learning without sampling or shuffling. In ICML, 2021. +[36] Daniel Kifer, Adam Smith, and Abhradeep Thakurta. Private convex empirical risk minimization and high-dimensional regression. In Conference on Learning Theory, pages 25-1, 2012. +[37] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30-37, aug 2009. ISSN 0018-9162. doi: 10.1109/MC.2009.263. URL https://doi.org/10.1109/MC.2009.263. +[38] Antti Koskela, Joonas Jälkö, Lukas Prediger, and Antti Honkela. Tight approximate differential privacy for discrete-valued mechanisms using fft, 2020. +[39] Janardhan Kulkarni, Yin Tat Lee, and Daogao Liu. Private non-smooth erm and sco in subquadratic steps. Advances in Neural Information Processing Systems, 34, 2021. +[40] Chao Li, Gerome Miklau, Michael Hay, Andrew Mcgregor, and Vibhor Rastogi. The matrix mechanism: optimizing linear counting queries under differential privacy. The VLDB Journal, 24:757-781, 2015. +[41] Zitao Li, Bolin Ding, Ce Zhang, Ninghui Li, and Jingren Zhou. Federated matrix factorization with privacy guarantee. Proc. VLDB Endow., 15(4):900-913, dec 2021. 
ISSN 2150-8097. doi: 10.14778/3503585.3503598. URL https://doi.org/10.14778/3503585.3503598. +[42] Ryan McKenna, Gerome Miklau, Michael Hay, and Ashwin Machanavajjhala. Optimizing error of high-dimensional statistical queries under differential privacy. Proc. VLDB Endow., 11(10):1206-1219, jun 2018. ISSN 2150-8097. doi: 10.14778/3231751.3231769. URL https://doi.org/10.14778/3231751.3231769. +[43] Ryan McKenna, Gerome Miklau, Michael Hay, and Ashwin Machanavajjhala. Hdmm: Optimizing error of high-dimensional statistical queries under differential privacy. arXiv preprint arXiv:2106.12118, 2021. +[44] H. Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private language models without losing accuracy. CoRR, abs/1710.06963, 2017. URL http://arxiv.org/abs/1710.06963. +[45] H Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private recurrent language models. arXiv preprint arXiv:1710.06963, 2017. +[46] Ilya Mironov, Kunal Talwar, and Li Zhang. Rényi differential privacy of the sampled gaussian mechanism. CoRR, abs/1908.10530, 2019. URL http://arxiv.org/abs/1908.10530. +[47] Matthias Paulik, Matt Seigel, Henry Mason, Dominic Telaar, Joris Kluivers, Rogier van Dalen, Chi Wai Lau, Luke Carlson, Filip Granqvist, Chris Vandeveldte, et al. Federated evaluation and tuning for on-device personalization: System design & applications. arXiv preprint arXiv:2102.08503, 2021. +[48] Natalia Ponomareva, Hussein Hazimeh, Alex Kurakin, Zheng Xu, Carson Denison, H. Brendan McMahan, Sergei Vassilvitskii, Steve Chien, and Abhradeep Thakurta. How to dp-fy ml: A practical guide to machine learning with differential privacy, 2023. +[49] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020. 
URL http://jmlr.org/papers/v21/20-074.html. +[50] Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečny, Sanjiv Kumar, and H. Brendan McMahan. Adaptive federated optimization. CoRR, abs/2003.00295, 2020. URL https://arxiv.org/abs/2003.00295. +[51] Hyejin Shin, Sungwook Kim, Junbum Shin, and Xiaokui Xiao. Privacy enhanced matrix factorization for recommendation with local differential privacy. IEEE Transactions on Knowledge and Data Engineering, 30(9):1770-1782, 2018. doi: 10.1109/TKDE.2018.2805356. + +[52] Adam Smith, Abhradeep Thakurta, and Jalaj Upadhyay. Is interaction necessary for distributed private learning? In 2017 IEEE Symposium on Security and Privacy (SP), pages 58-77. IEEE, 2017. +[53] Shuang Song, Kamalika Chaudhuri, and Anand D Sarwate. Stochastic gradient descent with differentially private updates. In 2013 IEEE Global Conference on Signal and Information Processing, pages 245-248. IEEE, 2013. +[54] Shuang Song, Om Thakkar, and Abhradeep Thakurta. Characterizing private clipped gradient descent on convex generalized linear problems. arXiv preprint arXiv:2006.06783, 2020. +[55] Thomas Steinke. Composition of differential privacy & privacy amplification by subsampling, 2022. +[56] Maxime Vono, Nicolas Dobigeon, and Pierre Chainais. High-dimensional gaussian sampling: a review and a unifying approach based on a stochastic proximal point algorithm, 2021. +[57] Xi Wu, Fengan Li, Arun Kumar, Kamalika Chaudhuri, Somesh Jha, and Jeffrey F. Naughton. Bolt-on differential privacy for scalable stochastic gradient descent-based analytics. In Semih Salihoglu, Wenchao Zhou, Rada Chirkova, Jun Yang, and Dan Suciu, editors, Proceedings of the 2017 ACM International Conference on Management of Data, SIGMOD, 2017. +[58] Zheng Xu, Yanxiang Zhang, Galen Andrew, Christopher Choquette, Peter Kairouz, Brendan McMahan, Jesse Rosenstock, and Yuanbo Zhang. Federated learning of gboard language models with differential privacy, 2023. 
+[59] Zheng Xu, Yanxiang Zhang, Galen Andrew, Christopher A Choquette-Choo, Peter Kairouz, H Brendan McMahan, Jesse Rosenstock, and Yuanbo Zhang. Federated learning of gboard language models with differential privacy. arXiv preprint arXiv:2305.18465, 2023. +[60] Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. mt5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934, 2020. +[61] Timothy Yang, Galen Andrew, Hubert Eichner, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ramage, and Françoise Beaufays. Applied federated learning: Improving google keyboard query suggestions. arXiv preprint arXiv:1812.02903, 2018. +[62] Ganzhao Yuan, Yin Yang, Zhenjie Zhang, and Zhifeng Hao. Convex optimization for linear query processing under approximate differential privacy. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 2005–2014, 2016. +[63] Chen Zhu, Zheng Xu, Mingqing Chen, Jakub Konečný, Andrew Hard, and Tom Goldstein. Diurnal or nocturnal? federated learning of multi-branch networks from periodically shifting distributions. In International Conference on Learning Representations, 2022. + +# A Notation summary + +
| Symbol | Meaning |
| --- | --- |
| $n$ | Number of steps of the streaming linear query (SGD steps or FL rounds) |
| $m$ | Total number of records (examples or users) in the database/dataset |
| $b$ | Minimum separation between participations; $b = 1$ allows participation in every step |
| $\hat{b}$-banded matrix | A (general) matrix $\mathbf{X}$ is $\hat{b}$-banded if for all $i, j \in [n]$, $\lvert i - j \rvert \geq \hat{b}$ implies $\mathbf{X}_{[i,j]} = 0$. We use $b$ to refer to min-separation, and for example write $b$-banded when we set the number of bands equal to the min-separation; we use $\hat{b}$-banded for a number of bands possibly different than $b$. |
| $k$ | The maximum number of times any user might participate in training |
| $d$ | Dimension of per-step user contributions (e.g., model size) |
| $\mathbf{x}_i \in \mathbb{R}$ or $\mathbb{R}^d$ | Sum of per-example gradients (or per-user model updates) on step $i$ |
| $\mathbf{x} \in \mathbb{R}^{n \times d}$ | Stream of inputs $\mathbf{x}_i$; equivalently, the matrix with rows $\mathbf{x}_i$ (so $\mathbf{x}_i = \mathbf{x}_{[i,:]}$) |
| $\zeta$ | Clipping norm that limits the size of per-example contributions to $\mathbf{x}_i$ |
| $\pi \subseteq [n]$ | Participation pattern, the set of steps that an example participates in |
| $\Pi$ | Participation schema, set of sets of steps (set of all $\pi$) an example could participate in |
| $D$ | $= \{\mathbf{x} - \hat{\mathbf{x}} \mid (\mathbf{x}, \hat{\mathbf{x}}) \in N\}$, the set of deltas between neighboring input streams $\mathbf{x}, \hat{\mathbf{x}}$ |
| $\mathfrak{D}$ | Corners of $D$ when $D$ is assumed to be a polytope, $D = \operatorname{conv}(\mathfrak{D})$ |
| $(k,b)$-participation | Participation schema $\Pi$ with at most $k$ participations, separated by exactly $b$ steps |
| $b$-min-sep-participation | Relaxation of $(k,b)$-participation where participations have separation at least $b$ |
| $\mathbf{A} \in \mathbb{R}^{n \times n}$ | Lower-triangular linear query matrix to be factorized as $\mathbf{A} = \mathbf{B}\mathbf{C}$ |
| $\mathbf{M}^{\dagger}$ | Moore-Penrose pseudoinverse of matrix $\mathbf{M}$ |
| $\mathbf{M}^{\top}$ | Transpose of $\mathbf{M}$ |
| $\mathbf{M}_{[i,j]}$ | The $(i,j)$th entry of matrix $\mathbf{M}$ |
| $\mathbf{M}_{[i,:]}$ and $\mathbf{M}_{[:,j]}$ | The $i$th row and $j$th column of $\mathbf{M}$ (numpy-style indexing) |
| $\operatorname{conv}(S)$ | Convex hull of the set $S$ |
| $[n]$ | $= \{1, \ldots, n\}$ |
| $\lVert \mathbf{X} \rVert_F$ | The Frobenius norm of a matrix $\mathbf{X}$ |
Table 1: Summary of notation

# A.1 Intuition behind MF-DP-FTRL

Fig. 7 compares DP-SGD and MF-DP-FTRL. To gain an intuition for why MF-DP-FTRL can perform better than DP-SGD, observe that vanilla SGD has iterates $\theta_{t} = \theta_{0} - \eta \sum_{i=1}^{t} \hat{\mathbf{x}}_{i}$, and hence when the noisy gradients $\hat{\mathbf{x}}_{i}$ are added, the $\mathbf{C}_{[i,j]}^{-1} \mathbf{z}_{j}$ terms in MF-DP-FTRL serve to cancel out some of the noise introduced on previous rounds. The noise cancellation reduces the total error in all the prefix sums of gradients $\sum_{i=1}^{t} \hat{\mathbf{x}}_{i}$ for $t \in [n]$, but also worsens the privacy guarantee of the mechanism, i.e., increases its sensitivity. The privacy worsens because, e.g., an adversary trying to learn $\mathbf{x}_1$ via $\hat{\mathbf{x}}_1$ can partially learn the value of $\mathbf{z}_1$ from $\hat{\mathbf{x}}_2$, whereas in DP-SGD $\hat{\mathbf{x}}_1$ and $\hat{\mathbf{x}}_2$ are uncorrelated. Hence there is a tradeoff between the total error and sensitivity (see next paragraph): DP-SGD sets $\mathbf{C}_{[i,j]}^{-1} = 0$ below the main diagonal, effectively minimizing sensitivity (assuming a fixed normalization of the main diagonal), but with a large total error due to no noise cancellation. On the other hand, MF-DP-FTRL can arrive at a better compromise between mechanism sensitivity and the total error.

![](images/86d7e7dee7abe2d63e7a28d561a93d57947f9ba258b44d50a9bf934e495a9f2f.jpg)
Figure 7: MF-DP-FTRL (Alg. 1) enables noise cancelling across steps, whereas DP-SGD does not. The entries $\mathbf{C}_{i,j}^{-1}$ are mostly negative (in $(-1, 0]$) in the matrices $\mathbf{C}^{-1}$ we consider (see Fig. 8). Thus, the red terms show that MF-DP-FTRL "cancels out" noise added on earlier iterations. For simplicity, we assume $\mathbf{C}^{-1}$ has 1s on the main diagonal and entries $\mathbf{C}_{i,j}^{-1}$ otherwise, with $\mathbf{z}_i \coloneqq \mathbf{z}_{[i,:]}$ the rows of $\mathbf{z}$.
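To make this tradeoff concrete, the following toy NumPy comparison contrasts independent noise ($\mathbf{C} = \mathbf{I}$, i.e., DP-SGD) with a simple correlated-noise encoder. The banded $\mathbf{C}$ below is an illustrative choice of ours, not an optimized BANDMF matrix, and the error/sensitivity normalization is a simplified single-participation convention.

```python
import numpy as np

n = 256
A = np.tril(np.ones((n, n)))  # prefix-sum workload

def total_error(C):
    # Unit-budget total noise error ||A C^{-1}||_F, rescaled so the max
    # column norm of C (single-participation sensitivity) costs budget 1.
    sens = np.linalg.norm(C, axis=0).max()
    return np.linalg.norm(A @ np.linalg.inv(C), "fro") * sens

I = np.eye(n)                   # DP-SGD: independent noise per step
C = I + 0.5 * np.eye(n, k=-1)   # toy 2-banded encoder (our illustration)
# C^{-1} has negative subdiagonal entries, so the noise B z = A C^{-1} z
# partially cancels noise added on earlier steps:
assert np.all(np.diag(np.linalg.inv(C), k=-1) < 0)
print(total_error(C) < total_error(I))  # prints True
```

Even this crude encoder beats independent noise on the prefix-sum workload after paying for its larger sensitivity, which is the compromise the optimization problem of Sec. 4 formalizes and optimizes.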
This is formalized in the optimization problem of Sec. 4.

Without a sampling assumption, the implied DP adversary knows which examples participated in batch $S_{i}$, and for DP-SGD with uncorrelated noise, knows they only need to "attack" $\hat{\mathbf{x}}_i$. However, with MF-DP-FTRL, the information from $S_{i}$ can potentially be masked with a larger amount of initial noise in $\hat{\mathbf{x}}_i$, which is then canceled out over subsequent rounds. "Spreading out" the release of information about batch $S_{i}$ over a larger number of iterations in this way can intuitively provide better privacy, while still allowing for accurate partial sums of gradients (and hence SGD iterates). This is, in a loose sense, similar to the way SGD with sampling "hides" information about a particular example at randomly chosen iterations.

# B Dropping the diag $(\mathbf{X}) = 1$ constraint

As discussed in Sec. 4, BANDMF by default imposes an equal column norm constraint on the generated factorization. In the optimization problem, this is accomplished by imposing the constraint $\operatorname{diag}(\mathbf{X}) = \mathbf{1}$. In this section we show how we can solve the optimization problem without this constraint for $(k,b)$-participation. This optimization problem is formulated to minimize total squared error with respect to $(k,b)$-participation, although in principle the optimized matrices could be used in the $b$-min-sep-participation setting with some degradation in solution quality. Prop. B.1 provides an expression for efficiently computing the sensitivity of a $b$-banded matrix.

Proposition B.1. Let $\mathbf{X} \in \mathbf{S}_{+}^{n}$ be a $b$-banded matrix, and let $\Pi$ denote the $(k,b)$-participation schema. Then

$$
\operatorname{sens}_{\Pi}(\mathbf{X}) = \max_{i = 1, \dots, b} \sum_{j = 0}^{k - 1} \operatorname{diag}\left(\mathbf{X}\right)_{i + jb}.
$$

To integrate this expression into the optimization problem, we can replace the $\operatorname{diag}(\mathbf{X}) = \mathbf{1}$ constraint with $b$ linear constraints on $\operatorname{diag}(\mathbf{X})$. This modification does not affect the convexity of the problem, although it does slightly complicate the algorithms needed to solve it. To handle this new problem, our approach is to replace the gradient with respect to $\mathbf{X}$ at each iteration with a projected gradient, which is obtained by setting $v_{i} = \sum_{j=0}^{k-1} \mathrm{diag}(\Delta \mathbf{X})_{i+jb}$ for all $i = 1, \ldots, b$, and setting $\mathrm{diag}(\Delta \mathbf{X})_{i+jb} = \mathrm{diag}(\Delta \mathbf{X})_{i+jb} - v_{i}/k$. This ensures that the sensitivity does not change between iterations of the numerical optimization procedure.

For the reasons mentioned in Sec. 4, by default we impose the simpler constraint $\mathrm{diag}(\mathbf{X}) = \mathbf{1}$. In App. C, we provide some numerical comparisons between these two approaches. Specifically, the rows of Table 2 with "Equal column norms?" set to F also correspond to the approach described here; observe that this results in slightly improved RMSE under $(k,b)$-participation compared to the corresponding rows with "Equal column norms?" set to T.
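As a sanity check, the maximum in Prop. B.1 is a few lines of NumPy (the function name is ours, and indices are zero-based):

```python
import numpy as np

def banded_sens_sq(X, k, b):
    """max_{i=1..b} sum_{j=0}^{k-1} diag(X)_{i+jb} from Prop. B.1 for a
    b-banded X; out-of-range diagonal entries are treated as zero."""
    d = np.diag(X)
    # d[i::b][:k] picks diag entries i, i+b, ..., i+(k-1)b (zero-indexed).
    return max(d[i::b][:k].sum() for i in range(b))

# Example: X = I (every participation contributes a unit diagonal entry),
# so k = 2 participations give 2.0 regardless of the offset i:
print(banded_sens_sq(np.eye(6), k=2, b=3))  # prints 2.0
```

This is the quantity that replaces the $\operatorname{diag}(\mathbf{X}) = \mathbf{1}$ constraint above; for the column-normalized default, every offset $i$ yields the same sum $k$.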
| Matrix mechanism | Bands $\hat{b}$ | Equal column norms? (Ours) | Sens. $k=1$ [17] | Sens. $(k{=}6, b{=}342)$ [15] | Sens. $b \geq 342$-min-sep (Ours) | (A) RMSE under $(k{=}6, b{=}342)$ [15] | (B) RMSE under $b \geq 342$-min-sep (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OPTIMAL TREEAGG [30, 35] | - | F | 0.32 | 1.00 | 1.00 | 1.53 | 1.53 |
| DP-SGD [1] | 1 | T | 0.41 | 1.00 | 1.00 | 9.63 | 9.63 |
| MF ($\hat{b}=128$) (Ours) | 128 | F | 0.52 | 1.00 | 1.04 | 1.23 | 1.29 |
| MF ($\hat{b}=128$) (Ours) | 128 | T | 0.41 | 1.00 | 1.00 | 1.27 | 1.27 |
| MF ($\hat{b}=342$) (Ours) | 342 | F | 0.52 | 1.00 | 1.04 | 1.04 | 1.08 |
| MF ($\hat{b}=342$) (Ours) | 342 | T | 0.41 | 1.00 | 1.00 | 1.05 | 1.05 |
| MF [15] | - | F | 0.50 | 1.00 | ≤1.15 | 1.00 | 1.15 |
| MF [15] | - | T | 0.41 | 1.00 | ≤1.13 | 1.01 | 1.14 |
Table 2: A comparison of matrix mechanisms for $n = 2052$ under different participation patterns. Banded matrices are near-optimal under $(k,b)$-participation and best under $b$-min-sep-participation. Each error is computed under the indicated measure of sensitivity. Thus, the error in column (B) can be obtained by multiplying the error in column (A) by the corresponding entry under $b \geq 342$ sensitivity.

# C Empirical evaluation of banded matrices

Table 2 compares the matrix mechanisms studied under different participation patterns but normalized to have sensitivity $\mathrm{sens}(\mathbf{C}) = 1$ under $(k = 6, b = 342)$-participation. The sensitivity under single participation $k = 1$ is lowest, as expected. With column normalization, sensitivity is also 1 under $b \geq 342$-min-sep-participation. We make the following observations:

- For the MF mechanisms, column normalization hurts RMSE for $(k,b)$-participation compared to the approach of App. B (as it is an additional constraint), but actually improves RMSE under $b$-min-sep-participation.
- We conjecture that the $(k, b)$-participation optimized matrices (MF without column normalization, App. B) are optimal for the prefix-sum workload; with this in mind, we see there is at most a small increase in RMSE for switching to the more challenging $b$-min-sep-participation schema $(1.00 \to 1.05)$. If (as we further conjecture) the optimal matrices for prefix-sum are in fact $k$-banded, the gap is even smaller (at most $1.04 \to 1.05$). Hence, at least for the prefix-sum workload $\mathbf{A}$, there is limited room for improvement in developing optimization procedures that directly optimize over the larger feasible set offered by $b$-min-sep-participation.
- Using fewer than $b$ bands does degrade performance on the RMSE metric, with DP-SGD being the extreme case, yielding prefix-sum estimates almost $10 \times$ worse than the MF mechanisms.
- The results of Denisov et al.
[17] imply that the binary-tree $\mathbf{C}$ matrix can in fact be used in the online setting, with the Moore-Penrose pseudo-inverse giving the optimal decoder for RMSE [15], corresponding to the 'full' estimator of Honaker [30]. We include this in the table as a baseline, and see that it is in general outperformed by our MF mechanisms by about $1.5 \times$ in RMSE. + +# D Example structures of MF + +Figs. 8 and 9 show the structure of some of the key matrix factorization approaches considered in this work. One can immediately see the impact of the $(k,b)$ -participation schema in the optimal matrices, in particular for the non-banded MULTI-EPOCH MF matrices (the two top-right matrices), where $\mathbf{C}$ contains diagonals of negative entries separated by $b$ steps. In the bottom two rows, we see that requiring equal column norms ("EN-" for equal norms) has a relatively minor impact on the structure of the matrices. + +![](images/9754355f563a56b8100db268e4a0ae0a0a764e1e75543989d465e56c71dec73c.jpg) + +![](images/426f0dfc8939bca62703f1924787a870fe007e96ec59c6baf8c652d8ed46dae9.jpg) + +![](images/dc5f6ed85fc8bd23e655833a9b9e18b40f80793f4fd26ce512c849065a3488d2.jpg) + +![](images/715fca16415f3b7a51b67ce96ec834e61eae0324a8b3c757a1a27538b3686a20.jpg) + +![](images/876f37ce21b5bb13a77e995eec267f3c346757bf2d9980affee40327fd2b896b.jpg) + +![](images/ad959affc2e09ea32996c0bbaa672591af74c0cf7bca94a93d524122f7388428.jpg) + +![](images/e7b1bbf496ef3f7895431aa98277009d69e9c7c7d3ee25695f1ec9ba2868954f.jpg) + +![](images/c9b0eaa4ed13c974d8795491a72117a78f65f50a5b9eaabc6823e6c3234dcaf8.jpg) + +![](images/392d907e025d6246948fa40617f4f1b4b4945049c78e09a040db5bdd8c65514d.jpg) + +![](images/11e40b56b34096b7a0da8b08d20e8019644e7f6ac6f9b185eb5fb128d343dc75.jpg) + +![](images/f24980775d94933ca23dee5c163331608e61cffa5ef24796cffe10ced362a0d7.jpg) + +![](images/d26bd3e0ed4a9e0357785c90ae0a27a5372b3bc1d37b86a317050895530dbb50.jpg) + 
+![](images/5990db0141a064ae482878573c21a417caf6117557b26adcc0cf55fbf8e055fe.jpg)
+Figure 8: Factorizations for $n = 64$ of the prefix-sum workload ($\mathbf{A}$ taken to be the lower-triangular matrix of 1s). For each factorization $\mathbf{A} = \mathbf{BC}$ , we show $\mathbf{C}$ and its inverse $\mathbf{C}^{-1}$ , as the inverse is the matrix used in noise generation. Single-epoch is the approach of Denisov et al. [17], SGD is simply the identity matrix $\mathbf{I}$ (shown for completeness), and $(k = 8)$ -epoch MF and $(k = 4)$ -epoch MF are the MULTI-EPOCH MF approach of Choquette-Choo et al. [15] for 8 and 4 epochs, respectively. For our banded matrices (3rd and 4th rows), we fix 4 epochs $(b = 16)$ , and show $\hat{b} = 8$ and $\hat{b} = 16$ bands, with column normalization ("EN-") and without.
+
+![](images/abf0d738e11cf1c499519c00e56b90a0e26d6fc67adb1f4b8a4fff028a8bbaad.jpg)
+
+![](images/86808e375580052e9865fe22d22487064d5a97cf8dca1ff528348378aa4a157b.jpg)
+
+![](images/eb300fe24d6916d2c51675ce8c19bc05d65d31620f7843152f9d6f7fcc684369.jpg)
+
+![](images/ce17ea33aa8e737736593bcd1dea6953805d2159c8f6077333f05fdaf05217cf.jpg)
+Figure 9: The transpose of the binary-tree encoder matrix $\mathbf{C}_{\mathcal{T}}$ , and its pseudoinverse $\mathbf{C}_{\mathcal{T}}^{\dagger}$ , which corresponds to the "full" or optimal decoder of Honaker [30]. This is the matrix used in OPTIMAL TREEAGG in Fig. 6(a).
+
+![](images/888393f6484c867d92fd508f89e28b6593b88ac7158854c800d9b81d69503312.jpg)
+
+# E Algorithms and Analysis for Sec. 3
+
+# E.1 Algorithms
+
+Algorithm 3 (VECSENS): Maximum of $\langle \mathbf{v},\mathbf{u}\rangle$ where $\mathbf{u}$ is a vector in the $\ell_{\infty}$ unit ball satisfying $\Pi_b$ .
+```txt
+Inputs: min-separation $b$ , vector $\mathbf{v}$ , max participations $k$
+Initialize $F\in \mathbb{R}^{n\times k}$
+for $m = 1,\dots ,k$ do
+  for $i = n,\ldots ,1$ do $\triangleright$ We use the convention that $F[s,t] = 0$ if $s,t$ are out-of-bounds.
+    
$F[i,m] = \max \Bigl (\mathbf {v}_i + F[i + b,m - 1],F[i + 1,m]\Bigr)$
+return $F[1,k]$
+```
+
+Algorithm 4 Efficient sensitivity upper bound for $b$ -min-sep-participation
+```txt
+Inputs: min-separation $b$ , matrix $\mathbf{X}$ , max participations $k$
+Initialize $\mathbf{v}\in \mathbb{R}^n$
+for $i = 1,\dots ,n$ do
+  $\mathbf{v}_i = \mathrm{VECSENS}(b,|\mathbf{X}_{[i,:]}|,k)$
+return $\sqrt{\mathrm{VECSENS}(b,\mathbf{v},k)}$
+```
+
+Algorithm 5 Efficient sensitivity calculation for $b$ -min-sep-participation, assuming $\mathbf{X}$ is $b$ -banded.
+```txt
+Inputs: min-separation $b$ , $b$ -banded matrix $\mathbf{X}$ , max participations $k$ .
+return $\sqrt{\mathrm{VECSENS}(b, \mathrm{diag}(\mathbf{X}), k)}$
+```
+
+# E.2 Analysis
+
+Proposition E.1. The sensitivity of $\mathbf{C}$ for a given participation schema $\Pi$ may be expressed as:
+
+$$
+\operatorname{sens}_{\Pi}(\mathbf{C})^{2} = \max_{\pi \in \Pi} \sup_{\mathbf{u} \in \mathfrak{D}} \operatorname{tr}\left(\left[\mathbf{P}_{\pi}\mathbf{C}^{\top}\mathbf{C}\mathbf{P}_{\pi}\right]\left[\mathbf{u}\mathbf{u}^{\top}\right]\right), \tag {7}
+$$
+
+where $\mathbf{P}_{\pi}$ represents the axis-aligned projection onto the set of rows indexed by $\pi$ ; that is, $\mathbf{P}_{\pi}[i,i] = 1$ for $i \in \pi$ , and 0 otherwise. Assuming that $\mathfrak{D}$ represents a set of matrices with rows bounded by $\ell_2$ norm 1, this can be upper bounded by:
+
+$$
+\max_{\pi \in \Pi} \sum_{i, j \in \pi} |\mathbf{X}_{[i,j]}|,
+$$
+
+where $\mathbf{X} = \mathbf{C}^{\top}\mathbf{C}$ . This upper bound is tight when $\mathbf{P}_{\pi}\mathbf{C}^{\top}\mathbf{C}\mathbf{P}_{\pi} \geq 0$ for all $\pi \in \Pi$ , and is independent of the dimension $d$ of the rows of $\mathbf{u}$ .
+
+Proof. Recall that $\Pi$ determines the rows of $\mathbf{u}$ which may be nonzero in the definition Eq. (2). 
Take some $\mathbf{u} \in \mathfrak{D}$ , an element of $\mathbb{R}^{n \times d}$ , which therefore has nonzero rows only at some set of indices $\pi \in \Pi$ . Note, clearly $\mathbf{u} = \mathbf{P}_{\pi} \mathbf{u}$ , $\mathbf{P}_{\pi}^{\top} = \mathbf{P}_{\pi}$ , and $\mathbf{P}_{\pi} = \mathbf{P}_{\pi} \mathbf{P}_{\pi}$ . + +Therefore + +$$ +\begin{array}{l} \left\| \mathbf {C} \mathbf {u} \right\| _ {F} ^ {2} = \operatorname {t r} \left(\left[ \mathbf {C P} _ {\pi} \mathbf {u} \right] ^ {\top} \mathbf {C P} _ {\pi} \mathbf {u}\right) = \operatorname {t r} \left(\mathbf {u} ^ {\top} \mathbf {P} _ {\pi} ^ {\top} \mathbf {C} ^ {\top} \mathbf {C P} _ {\pi} \mathbf {u}\right) \\ = \operatorname {t r} \left(\mathbf {P} _ {\pi} \mathbf {P} _ {\pi} \mathbf {C} ^ {\top} \mathbf {C P} _ {\pi} \mathbf {P} _ {\pi} \mathbf {u u} ^ {\top}\right) = \operatorname {t r} \left(\left[ \mathbf {P} _ {\pi} \mathbf {C} ^ {\top} \mathbf {C P} _ {\pi} \right] \left[ \mathbf {P} _ {\pi} \mathbf {u u} ^ {\top} \mathbf {P} _ {\pi} \right]\right) \tag {8} \\ = \operatorname {t r} \left(\left[ \mathbf {P} _ {\pi} \mathbf {C} ^ {\top} \mathbf {C P} _ {\pi} \right] \left[ \mathbf {u u} ^ {\top} \right]\right). \\ \end{array} +$$ + +This implies the statement Eq. (7) by the definition of sensitivity and neighboring in our setting. + +Now, let $\mathbf{X}_{\pi} \coloneqq \mathbf{P}_{\pi} \mathbf{C}^{\top} \mathbf{C} \mathbf{P}_{\pi}$ be the matrix formed by zeroing out the rows and columns not indexed by $\pi$ from $\mathbf{X}$ . Assume that every $\mathbf{u} \in \mathfrak{D}$ has row norms bounded by 1. Expanding the trace in Eq. 
(7), writing $x_{ij}$ for the elements of $\mathbf{X}_{\pi}$ and $\mathbf{u}_{[j,:]}$ for the $j^{th}$ row of $\mathbf{u}$ , we have
+
+$$
+\operatorname{tr}\left(\mathbf{X}_{\pi}\mathbf{u}\mathbf{u}^{\top}\right) = \sum_{i \in \pi} \sum_{j \in \pi} x_{ij} \langle \mathbf{u}_{[i,:]}, \mathbf{u}_{[j,:]} \rangle \leq \sum_{i \in \pi} \sum_{j \in \pi} |x_{ij}|,
+$$
+
+which yields the claimed bound. When $\mathbf{X}_{\pi}$ is elementwise nonnegative, taking every row $\mathbf{u}_{[i,:]}$ , $i \in \pi$ , equal to the same unit vector shows the claimed tightness in this case.
+
+![](images/8ed0519905757bc270194399af06adb512e0cb1cd70fd87b1ee4de89b108bf09.jpg)
+
+Remark. This statement can be viewed as a partial extension of [15, Theorem G.1]. It does not imply every case handled there, but also implies results which cannot be derived from that Theorem.
+
+Proof of Thm. 2. Conclusion (1) is implied by (2), noting that the conditions on $\mathbf{C}$ imply that Alg. 5 will return a value at most $\kappa \sqrt{k'}$ in this setting.
+
+For (2), let $c \in \mathbb{R}^n$ with entries $c_i = \| \mathbf{C}_{[:,i]} \|^2$ for $i \in [n]$ . We have
+
+$$
+\operatorname{sens}_{\Pi_b}(\mathbf{C}) = \max_{\pi \in \Pi_{b}} \| \mathbf{C u}(\pi) \| = \max_{\pi \in \Pi_{b}} \Bigl\| \sum_{i \in \pi} \mathbf{C}_{[:, i]} \Bigr\| = \max_{\pi \in \Pi_{b}} \sqrt{\sum_{i \in \pi} c_{i}} \tag {9}
+$$
+
+where $\mathbf{u}(\pi) \in \{0,1\}^n$ is given by $\mathbf{u}(\pi)_i = 1$ if $i \in \pi$ and 0 otherwise. The last equality follows from the orthogonality condition on sufficiently separated columns of $\mathbf{C}$ trivially implied by bandedness. It is straightforward to verify that the dynamic program of Alg. 3 constructs a feasible $\pi$ which attains the maximum.
+
+Proof of Thm. 3. Via Prop. E.1, the result follows from showing that Alg. 
4 outputs a value at least as large as $\sum_{(i,j)\in \pi}|\mathbf{X}_{ij}|$ for any $\pi \in \Pi_b$ . So let $\hat{\pi}$ be an element of $\Pi_b$ . Note that VECSENS is monotonically increasing in values of the vector $\mathbf{v}$ if $\mathbf{v}$ is nonnegative, and therefore Alg. 4 is monotonically increasing in absolute values of $\mathbf{X}$ . Therefore we will have our conclusion (3) if we can show that, for $\mathbf{X}_{\hat{\pi}}$ the matrix formed by zeroing out all rows and columns of $\mathbf{X}$ not indexed by $\hat{\pi}$ , Alg. 4 returns the value $\sum_{(i,j)\in \hat{\pi}}|\mathbf{X}_{ij}|$ . Yet this is straightforward by the characterization of VECSENS as an oracle for computing the maximum of $\langle \mathbf{v},\mathbf{u}\rangle$ , where $\mathbf{u}$ is a vector in the $\ell_{\infty}$ unit ball.
+
+Proof of Prop. 4.1. The proof will be constructive. Let $\mathbf{J}$ be the $n\times n$ exchange matrix defined as
+
+$$
+\mathbf {J} = \left[ \begin{array}{c c c c} & & & 1 \\ & & 1 & \\ & \ddots & & \\ 1 & & & \end{array} \right]
+$$
+
+Let $\mathbf{Y} = \mathbf{J}\mathbf{X}\mathbf{J}$ and note that $\mathbf{Y}$ is symmetric and positive definite. Let $\mathbf{H} = \mathrm{Cholesky}(\mathbf{Y})^{\top}$ and note that (1) $\mathbf{H}^{\top}\mathbf{H} = \mathbf{Y}$ by definition of the Cholesky decomposition, (2) $\mathbf{H}$ is upper triangular, and (3) $\mathbf{H}$ is $\hat{b}$ -banded by Du Croz et al. [19].
+
+We will show that for $\mathbf{C} = \mathbf{JHJ}$ , we have (1) $\mathbf{X} = \mathbf{C}^{\top}\mathbf{C}$ , (2) $\mathbf{C}$ is lower triangular, and (3) $\mathbf{C}$ is $\hat{b}$ -banded. 
+
+For Claim (1) observe that:
+
+$$
+\begin{array}{l} \mathbf{C}^{\top}\mathbf{C} = (\mathbf{J}\mathbf{H}\mathbf{J})^{\top}(\mathbf{J}\mathbf{H}\mathbf{J}) \\ = \mathbf{J}^{\top}\mathbf{H}^{\top}\mathbf{J}^{\top}\mathbf{J}\mathbf{H}\mathbf{J} \\ = \mathbf{J}\left(\mathbf{H}^{\top}\mathbf{H}\right)\mathbf{J} \\ = \mathbf{J}\mathbf{Y}\mathbf{J} \\ = \mathbf{J}\mathbf{J}\mathbf{X}\mathbf{J}\mathbf{J} \\ = \mathbf{X}, \\ \end{array}
+$$
+
+where we used $\mathbf{J}^{\top} = \mathbf{J}$ and $\mathbf{J}\mathbf{J} = \mathbf{I}$ .
+
+For Claims (2) and (3), note that left-multiplying by $\mathbf{J}$ reverses the rows and right-multiplying by $\mathbf{J}$ reverses the columns, and therefore $\mathbf{C}_{[i,j]} = \mathbf{H}_{[n - i + 1,n - j + 1]}$ .
+
+For Claim (2), we need to show $\mathbf{C}_{[i,j]} = 0$ if $i < j$ . If $i < j$ then $n - i + 1 > n - j + 1$ , and since $\mathbf{H}$ is upper triangular, we know $\mathbf{H}_{[n - i + 1,n - j + 1]} = 0$ , as desired.
+
+For Claim (3), we need to show that $\mathbf{C}_{[i,j]} = 0$ if $|i - j|\geq \hat{b}$ . Observe that $|(n - i + 1) - (n - j + 1)| = |i - j|$ , and therefore since $\mathbf{H}$ is $\hat{b}$ -banded, so is $\mathbf{C}$ .
+
+This completes the proof.
+
+![](images/7c8c8242ec85315fd527252efde2c23022ce6c41ebe99662acf64beb6977e1ac.jpg)
+
+# F Additional Analysis for Sec. 5
+
+In this section we prove our general amplification statement Theorem 5, of which Theorem 4 is a corollary. Recall that we use $b$ instead of $\hat{b}$ in this appendix since our sampling scheme enforces $(k,b)$ -participation. Throughout this section, we slightly abuse notation by letting $i \pmod{b} = b$ instead of 0 if $i / b$ is integer.
+
+# F.1 Algorithms for Sampling
+
+We first give the general sampling scheme (Alg. 6) as well as the sequence of queries (Alg. 7) that provides an upper bound on the privacy guarantees of DP-MF using this sampling scheme. 
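The partition-then-sample structure of Alg. 6 below, under the Poisson-subsampling choice of $\mathcal{S}$ discussed in App. F.3, can be sketched in plain Python. This is a minimal illustration of ours, not the paper's code: `banded_mf_batches` and its argument names are hypothetical, and the real mechanism additionally releases $\mathbf{Cx} + \mathbf{z}$.

```python
import random

def banded_mf_batches(D, n, b, q, seed=0):
    """Sketch of the sampling scheme: partition D into b subsets and,
    at step i, Poisson-subsample subset D_{i mod b} with probability q."""
    rng = random.Random(seed)
    # Arbitrary equal-size partition of D into b subsets D_1, ..., D_b.
    parts = [D[j::b] for j in range(b)]
    batches = []
    for i in range(n):
        part = parts[i % b]
        # Independent (Poisson) inclusion with probability q.
        batches.append([d for d in part if rng.random() < q])
    return batches
```

Because an example can only ever appear at steps $i$ sharing the same residue $i \bmod b$, the $b$-banded rows of $\mathbf{C}$ touching those steps never overlap, which is the structural fact the proof of Thm. 5 exploits.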
+
+Algorithm 6 Sampling scheme for banded DP-MF
+```txt
+Inputs: Dataset $D$ , sampling distribution $\mathcal{S}$ over $(2^{[\tilde{m}]})^k$ , noise standard deviation $\sigma$
+$D_{1},\ldots ,D_{b}\gets$ arbitrary partition of $D$ such that $\forall j:|D_j| = \tilde{m}$
+Let $D_{j} = \{d_{j,1},d_{j,2},\dots ,d_{j,\tilde{m}}\}$ for each $j$
+for $j = 1,2,\ldots ,b$ do
+  Sample $k$ sets to index $D_{j}$ as $(S_{j},S_{b + j},\dots ,S_{(k - 1)b + j})\sim \mathcal{S}$ , with $S_{j}\subseteq [\tilde{m}]$
+for $i = 1,2,\dots ,n$ do
+  Let $j = i \pmod{b}$ ; compute $\mathbf{x}_i$ by querying $\{d_{j,\ell}:\ell \in S_i\}$
+Let $\mathbf{x} = [\mathbf{x}_1,\dots ,\mathbf{x}_n]^{\top}\in \mathbb{R}^{n\times d}$ , release $\mathbf{Cx} + \mathbf{z}$ with each entry $\mathbf{z}_{[i,j]}\sim \mathcal{N}(0,\sigma^2)$
+$\triangleright$ If $\mathbf{C}$ is lower-triangular, results can also be released in streaming fashion
+```
+
+Algorithm 7 Sequence of queries that bounds privacy of Alg. 6
+```txt
+Inputs: Dataset $\tilde{D} = \{d_1,d_2,\dots ,d_{\tilde{m}}\}$ , sampling distribution $\mathcal{S}$ over $(2^{[\tilde{m}]})^k$
+Sample $(S_{1},S_{2},\ldots ,S_{k})\sim \mathcal{S}$
+for $i = 1,2,\ldots ,k$ do
+  $\tilde{D}_i\gets \{d_j:j\in S_i\}$
+  Perform an (adaptively chosen) sensitivity- $\Delta$ query on $\tilde{D}_i$ with noise $\mathcal{N}(0,\sigma^2)$
+```
+
+![](images/28df3ae8939807037e1a77efc900252f1e234b012ee7ecb732ce436d13835eb9.jpg)
+Figure 10: A visualization of how we can decompose a banded matrix mechanism into independent queries on $D_{j}$ (as in Alg. 7) under our sampling scheme.
+
+# F.2 General Amplification Statement and Proof
+
+Given the sampling scheme and query sequence, we can now state our general amplification statement:
+
+Theorem 5. Suppose $\mathbf{C}$ is $b$ -banded and lower triangular, and the examples participating in each step are chosen according to Alg. 6 with a given choice of $\mathcal{S}$ . Then BANDMF satisfies any standard DP guarantee satisfied by Alg. 7 in App. 
F.1 with $\kappa = \max_{i\in [n]}\| \mathbf{C}\mathbf{e}_i\| _2 = \max_{i\in [n]}\sqrt{\mathbf{X}_{i,i}}$ and the same choice of $\mathcal{S}$ . + +Proof. Consider two datasets $D, D'$ that differ by an example contained in the partition subset $D_j$ . We argue about the privacy of $\mathbf{Cx} + \mathbf{z}$ . For simplicity we assume $j$ is such that $(k - 1)b + j \leq n$ ; elements in $D_j$ such that $j$ does not satisfy this condition can potentially participate $k - 1$ times instead of $k$ , and in turn the privacy guarantee we can prove for these elements can only be stronger. + +Since $\mathbf{C}$ is $b$ -banded, we can partition the rows of $\mathbf{C}$ into $k + 1$ subsets + +$$ +R _ {j}, R _ {b + j}, R _ {2 b + j} \dots R _ {(k - 1) b + j}, R _ {\emptyset}, +$$ + +where $R_{j}$ (resp. $R_{b + j}, R_{2b + j} \ldots R_{(k - 1)b + j}$ ) denotes the set of rows in $\mathbf{C}$ for which the $j$ th entry is non-zero, and $R_{\emptyset} = [n] \setminus (R_{j} \cup R_{b + j} \cup \ldots)$ , i.e., $R_{\emptyset}$ are the rows not included in any of these sets, i.e., rows of $\mathbf{C}$ where entries $j, b + j, \ldots$ are all zero. The fact that $\mathbf{C}$ is lower-triangular and $b$ -banded ensures that these subsets do not overlap, i.e., this is a valid partition as can be observed in Fig. 4. + +Let $\mathbf{C}_R$ denote $\mathbf{C}$ restricted to the set of rows in $R$ . From the perspective of an adversary distinguishing $D$ from $D'$ , each row of $(\mathbf{Cx} + \mathbf{z})_{R_\emptyset} = \mathbf{C}_{R_\emptyset} \mathbf{x} + \mathbf{z}_{R_\emptyset}$ has a distribution independent of whether $D$ or $D'$ was used. So it suffices to give privacy guarantees for outputting only $(\mathbf{Cx} + \mathbf{z})_{R_j}, (\mathbf{Cx} + \mathbf{z})_{R_{b+j}}, \ldots, (\mathbf{Cx} + \mathbf{z})_{R_{(k-1)b+j}}$ . 
+
+We can decompose rows $R_{j}$ of $\mathbf{Cx} + \mathbf{z}$ as follows:
+
+$$
+\left(\mathbf{C}\mathbf{x} + \mathbf{z}\right)_{R_{j}} = \mathbf{C}_{R_{j}}\mathbf{x} + \mathbf{z}_{R_{j}} = \mathbf{C}_{R_{j}}\mathbf{x}_{j} + \mathbf{C}_{R_{j}}\mathbf{x}_{- j} + \mathbf{z}_{R_{j}}, \tag {10}
+$$
+
+where $\mathbf{x}_j$ denotes $\mathbf{x}$ with all rows except $j$ zeroed out, and $\mathbf{x}_{-j}$ denotes $\mathbf{x} - \mathbf{x}_j$ , i.e., $\mathbf{x}$ with row $j$ zeroed out. By the $b$ -banded property of $\mathbf{C}$ , $\mathbf{C}_{R_j}\mathbf{x}_{-j}$ has 0 sensitivity to the examples in $D_{j}$ . Then, by Eq. (10), for $i \in R_j$ , we observe that the $i$ th row of $(\mathbf{Cx} + \mathbf{z})_{R_j}$ corresponds to an (adaptive) query made with $\ell_2$ -sensitivity $\mathbf{e}_i^\top \mathbf{C}\mathbf{e}_j$ to the examples used in step $j$ , i.e., those given by $D_j$ and $S_j$ , and noise $N(0, \sigma^2)^d$ . So $(\mathbf{Cx} + \mathbf{z})_{R_j}$ corresponds to a sequence of adaptive queries on the examples used in step $j$ , and answering this sequence of queries satisfies any standard privacy guarantee satisfied by answering a single (scalar, adaptively chosen) query with sensitivity $\| \mathbf{C}\mathbf{e}_j\|_2$ to the example chosen in step $j$ and noise $N(0, \sigma^2)$ by Claim D.1 in [17].
+
+The same logic applies to each of $(\mathbf{Cx} + \mathbf{z})_{R_{b + j}},\ldots ,(\mathbf{Cx} + \mathbf{z})_{R_{(k - 1)b + j}}$ . Putting it all together and taking a max over the sensitivity of the individual queries, releasing $\mathbf{Cx} + \mathbf{z}$ satisfies any standard privacy guarantee satisfied by answering $k$ adaptively chosen queries, with sensitivity $\max_{i\in [n]}\| \mathbf{Ce}_i\| _2$ to the examples used in steps $j,b + j,\ldots ,(k - 1)b + j$ respectively. This is exactly Alg. 7 with the specified choice of $\Delta$ and $\mathcal{S}$ .
+
+# F.3 Corollaries of Thm. 
5
+
+We give here several corollaries of Thm. 5 that are of interest.
+
+Equivalence to DP-SGD: Note that when $b = 1$ , the partition contains a single subset, i.e., is the entire dataset. In particular, in this setting Thm. 5 recovers the privacy guarantees of amplified DP-SGD under any amplification scheme, e.g. including the ones discussed below.
+
+Amplification via sampling: To recover Thm. 4 from Thm. 5, we take the distribution over $2^{[\tilde{m}]}$ corresponding to the uniform distribution over subsets of size $B$ , and let $\mathcal{S}$ be the product of this distribution with itself $k$ times. This is equivalent to the following: in step $i$ , we include each element of $D_{i\pmod{b}}$ independently with probability $q$ . For this choice of $\mathcal{S}$ , Alg. 6 reduces to Alg. 2 and Thm. 4. We next make the amplified privacy guarantee explicit in terms of the dp_accounting Python library [18]. Given $n,m,b$ and a target per-step batch size $B$ , we could write a dp_accounting.DpEvent capturing the privacy guarantees of the matrix factorization mechanism as follows:
+
+# Example F.1.
+
+```python
+gaussian_event = dp_accounting.GaussianDpEvent(noise_multiplier)
+q = B / math.floor(m / b)
+sampled_event = dp_accounting.PoissonSampledDpEvent(
+    q, gaussian_event
+)
+composed_event = dp_accounting.SelfComposedDpEvent(
+    sampled_event, math.ceil(n / b)
+)
+```
+
+Example F.2. To give an example of the amplification guarantee, for simplicity assume $n / b, m / b$ are integer. If all column norms in $\mathbf{C}$ are 1, each row of $\mathbf{x}$ has sensitivity 1, and each entry of $\mathbf{z}$ has standard deviation $\sigma$ , then outputting $\mathbf{Cx} + \mathbf{z}$ satisfies $(\alpha, \frac{\alpha n}{2\sigma^2b})$ -RDP.
+
+Using Theorem 11 of [46] and Thm. 4, for appropriate choice of $\alpha$ and $q$ , this improves to $(\alpha, q^2 \cdot \frac{2\alpha n}{\sigma^2 b})$ -RDP with amplification by sampling. 
In particular, if we have a target per-step batch size $B$ , then we should choose $q = \frac{Bb}{m}$ , and if this choice of $q$ satisfies the conditions in [46], plugging this in gives $(\alpha, \frac{2\alpha B^2 b n}{\sigma^2 m^2})$ -RDP. Notice that $b = 1$ recovers the privacy guarantees of DP-SGD with sampling probability $B / m$ , and this privacy guarantee weakens as $b$ increases.
+
+Amplification via shuffling: Fix a per-step batch size $B$ . Then, suppose we shuffle the list of examples, and cyclically iterate over batches of size $B$ in this list as the sets of examples to use in each step of matrix factorization. That is, we shuffle $D$ into an ordered list $d_{1}, d_{2}, \ldots$ , and in step $i$ use examples $d_{(i-1)B+1 \pmod{m}}, d_{(i-1)B+2 \pmod{m}}, \ldots, d_{iB \pmod{m}}$ .
+
+For simplicity let's consider the case where $m / (Bb)$ is integer. In particular, this means in this shuffling scheme, each example appears once every $m / B$ steps, and for each of these steps $i$ , $i \pmod{b}$ is the same. Then this shuffling scheme is equivalent to the following: First, rather than choose an arbitrary partition to apply Thm. 5, we choose a uniformly random partition into $b$ subsets of size $m / b$ . Then, we choose $\mathcal{S}$ to be the distribution given by shuffling $[m / b]$ and then cyclically iterating over the shuffled list in batches of size $B$ . Given this equivalence, we get the following:
+
+Corollary F.1. Suppose the examples in matrix factorization are chosen by shuffling $D$ and then iterating over batches of size $B$ . If $m / (Bb)$ is integer, then the matrix factorization mechanism satisfies any standard privacy guarantee satisfied by $k$ adaptive scalar queries with sensitivity $\max_{i\in [n]}\| \mathbf{C}\mathbf{e}_i\| _2$ and noise $N(0,\sigma^2)$ , with the examples in each query given by shuffling a dataset of size $m / b$ and cyclically iterating over this list in batches of size $B$ .
+
+Example F.3. 
Consider the simplified case where $m = n$ , we choose a random permutation $\pi$ , and in step $i$ query example $d_{\pi(i)}$ . In this case, if all the column norms of $\mathbf{C}$ are 1, $\mathbf{x}$ 's rows have sensitivity 1, and $\mathbf{z}$ 's entries have standard deviation $\sigma = \mathcal{O}\left(\frac{\sqrt{\ln(1 / \delta)}}{\epsilon}\right)$ , we get that $\mathbf{Cx} + \mathbf{z}$ satisfies $(\epsilon, \delta)$ -DP. With e.g., the amplification for shuffled $(\epsilon, \delta)$ -DP mechanisms given by Theorem 5.1 of [6] and Cor. F.1, if $\epsilon$ is a constant, we instead get that $\mathbf{Cx} + \mathbf{z}$ satisfies $\left(\epsilon \cdot \mathcal{O}\left(\sqrt{\frac{b\log(1 / \delta)}{n}}\right), \delta \cdot \mathcal{O}\left(\frac{n\ln(1 / \delta)}{b}\right)\right)$ -DP. + +# F.4 Optimizing the number of bands + +Let $\sigma_{\epsilon, \delta}(b)$ be the required Gaussian noise magnitude for a $b$ -banded MF run for $n$ iterations using e.g. Alg. 2 to achieve $(\epsilon, \delta)$ -DP with per-step batch size $B$ . Then, the expected total squared error introduced while achieving $(\epsilon, \delta)$ -DP with amplification can be calculated as + +$$ +\sigma_ {\epsilon , \delta} (b) ^ {2} \mathcal {L} \left(\mathbf {A C} _ {b} ^ {- 1}, \mathbf {C} _ {b}\right) +$$ + +where $\mathbf{C}_b$ is a $b$ -banded lower triangular matrix optimized via Problem 2. Generally, smaller values of $b$ will allow for more amplification, and hence a smaller $\sigma$ ; however, this introduces a stronger set of constraints on the optimization problem, likely increasing the $\mathcal{L}$ term. Hence, the choice of $b$ should be optimized. Fortunately, $\sigma_{\epsilon,\delta}(\cdot)$ can be computed efficiently: Thm. 4 implies a procedure to compute $\epsilon$ given $\sigma, \delta, b$ , and then one can use binary search10 and this procedure to find the $\sigma$ giving a desired $\epsilon$ . 
In addition, one can pre-compute the optimal matrices $\mathbf{C}_b$ for different numbers of bands. The search can be restricted to $b \in \{1,2,\dots,\frac{m}{B},n\}$ since for $b = \frac{m}{B}$ we have $|D_j| = B$ , i.e. Alg. 2 and Thm. 4 provide no privacy amplification. + +Unlike the un-amplified version of MF, now the best factorization depends on the privacy parameters $(\epsilon, \delta)$ . The benefits of amplification are generally stronger for small $\epsilon$ : For example, amplification by sampling with probability $p$ roughly improves $\epsilon$ to $\log(1 + p(e^{\epsilon} - 1))$ (see e.g. Section 6 of [55]), which is approximately $p\epsilon$ for $\epsilon \leq 1$ , and approximately $\epsilon - \log(1/p)$ if $e^{\epsilon} \gg 1/p$ . Hence, with smaller values of $\epsilon$ , we expect the benefits of amplification to outweigh the benefits of correlated noise, in which case $b = 1$ will be optimal. With larger values of $\epsilon$ , we expect the benefits of correlated noise to outweigh the benefits of amplification, and in this regime $b = n$ will be optimal. For moderate values of $\epsilon$ , we expect the optimal $b$ to be somewhere in the middle. + +# F.5 Applying BANDMF with Privacy Amplification + +Consider the setup of Sec. 5 with our CIFAR10 setting described fully in App. H. As mentioned in that section, we use the convention that our privacy analysis will assume Poisson sampling, even though we are using passes over a shuffled dataset. We have $m = 50,000$ and train for $k = 20$ epochs with a batch size $B = 500$ . Choquette-Choo et al. [15] lets us bound the sensitivity and optimize matrices for this setting, however, without privacy amplification. Suppose we choose $\hat{b} = 100$ . Because $m / \hat{b} = 500 = B$ , we get that the sampling probability is $100\%$ (see Example F.1) and thus get no benefits from amplification. For all $\hat{b} \in [1,100)$ we get amplification benefits which can be seen intuitively as follows. 
+
+If $\hat{b} = 2$ , we get that there are two partitions $D_{1}, D_{2}$ . Then our first event will be the simultaneous release of $\mathbf{x}_1, \mathbf{x}_2$ , our second of $\mathbf{x}_3, \mathbf{x}_4$ , and so on. Because each partition is of size $|D_j| = m / \hat{b} = 25,000$ and $B = 500$ , we have a sampling probability $q = 2\%$ . Given our parameters, we also have $d = k \cdot m / B = 2,000$ , and so we must compose $d / \hat{b} = 1,000$ events (as seen in Example F.1). Because in this setting each event is the batch release of $\hat{b}$ steps of $\mathbf{x}$ , where each example participates at most once on each release, observe that we need only normalize the sensitivity of this mechanism under $(k = 1, b = \hat{b} = 2)$ -participation. Generally, as $\hat{b}$ increases we have a higher sampling probability, but fewer events to compose, and vice versa. As can be seen, when $\hat{b} = 1$ , this reduces to the standard accounting for DP-SGD. It can also be seen that we desire each $D_{j}$ to be a non-overlapping partition of $D$ , as otherwise, a single example may participate multiple times in the same event (and thus have a higher sampling probability).
+
+# G Additional RMSE Experiment Details
+
+# G.1 Optimal Number of Bands
+
+In this section, we provide supplementary data surrounding the RMSE experiments in Fig. 5. Table 3 shows the optimal number of bands for each $(\epsilon, k)$ pair considered in the RMSE experiments. It shows the general trend that as $\epsilon$ decreases, or $k$ increases, the optimal number of bands decreases.
+
+# G.2 Explaining many-epoch setting
+
+Observe in Fig. 5 that BANDMF and MULTI-EPOCH MF incur RMSE similar to that of DP-SGD as the number of epochs increases.
+
+This phenomenon occurs because as the number of epochs changes, so does the sensitivity of the $\mathbf{X}$ matrix, and subsequently the geometry of the optimization problem. By Eq. 
(5), we see that sensitivity can be calculated as a sum of absolute values of entries of $\mathbf{X}$ corresponding to iterations where a single user might participate. As the number of epochs increases, the number of entries of $\mathbf{X}$ that we have to sum up also increases. It turns out that under the constraint of constant sensitivity, it is better to put more weight on the diagonal of $\mathbf{X}$ than the off-diagonal. This is an observation we have made by solving this optimization problem numerically in a number of settings. Hence, as the number of epochs increases, $\mathbf{X}$ becomes more and more diagonally dominant, making the mechanism closer to DP-SGD ( $\mathbf{X} = \mathbf{I}$ ).
+
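As a sanity check on this intuition, the participation-pattern sensitivity bound can be brute-forced on small instances. The sketch below is our own illustration (hypothetical name `kb_sensitivity_sq`), assuming $n = kb$ with the $k$ participations of a user spaced exactly $b$ apart:

```python
import numpy as np

def kb_sensitivity_sq(X, k, b):
    """Squared-sensitivity upper bound max_pi sum_{i,j in pi} |X[i, j]|,
    with pi = {s, s+b, ..., s+(k-1)b} ranging over start offsets s."""
    n = X.shape[0]
    best = 0.0
    for s in range(b):
        pi = [s + t * b for t in range(k) if s + t * b < n]
        best = max(best, sum(abs(X[i, j]) for i in pi for j in pi))
    return best
```

For $\mathbf{X} = \mathbf{I}$ (DP-SGD) only the $k$ diagonal entries contribute, giving squared sensitivity $k$; a dense all-ones $\mathbf{X}$ gives $k^2$, which is why pushing mass onto the diagonal helps as the number of epochs grows.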
+| $\epsilon$ / $k$ | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 0.03125 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
+| 0.0625 | 4 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
+| 0.125 | 8 | 4 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
+| 0.25 | 8 | 4 | 4 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
+| 0.5 | 16 | 8 | 4 | 4 | 2 | 1 | 1 | 1 | 1 | 1 | 1 |
+| 1.0 | 32 | 16 | 8 | 4 | 2 | 2 | 1 | 1 | 1 | 1 | 1 |
+| 2.0 | 64 | 32 | 16 | 8 | 4 | 2 | 2 | 1 | 1 | 1 | 1 |
+| 4.0 | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 2 | 1 | 1 | 1 |
+| 8.0 | 1024 | 512 | 256 | 32 | 16 | 8 | 4 | 2 | 2 | 1 | 1 |
+| 16.0 | 1024 | 512 | 256 | 128 | 64 | 32 | 8 | 4 | 4 | 2 | 1 |
+
+Table 3: Optimal number of bands for each $(\epsilon, k)$ pair, when $n = 1024$ and $\delta = 10^{-6}$ .
+
+# H Additional CIFAR-10 Experiment Details
+
+# H.1 Setup and Tuning
+
+We tune all jobs on a learning rate grid of coefficients in $\{1,2,5\}$ on powers in [-2, 3]. We find that no momentum works best for DP-SGD and momentum=0.95 works best for MF-DP-FTRL mechanisms on average in tuning; though initial tuning found that tuning momentum as well could lead to slightly better results at some $\epsilon$ budgets, we found that a more refined grid of learning rates nearly always led to a fixed momentum being optimal, and so we fix this parameter. We also found that a learning rate cooldown to $0.05\times$ the initial learning rate over the last 500 steps of training improved all runs, and so we fix this parameter. All models trained for 20 epochs on CIFAR10 with a batch size of 500. We repeat each setting 12 times and show $95\%$ bootstrapped confidence intervals.
+
+# H.2 Additional Figures
+
+![](images/7dc043f2f022aee4eadd2a10c2d620a281103ad227d58c69c08803678b22d48c.jpg)
+Figure 11: On CIFAR-10, BANDMF is at least as good as DP-SGD across all $\epsilon$ , and often significantly better. BANDMF is better than the prior MF-DP-FTRL from Choquette-Choo et al. [15] up to $\epsilon \approx 5$ . We compare the ratio of the total error (see Sec. 4) of BANDMF with either mechanism. Lower values indicate that BANDMF is better. The yellow markers indicate the best BANDMF mechanism that was better for that $\epsilon$ budget if one existed. Unlike in Fig. 5(b), we only optimize BANDMF over $\hat{b} \in [0, n/k]$ , which leads to a regime around $\epsilon > 5$ where it performs worse than the MULTI-EPOCH MF of Choquette-Choo et al. [15]. 
+
+![](images/c4f2de0c678cf209776dbf51670a24e5b1b39bdd80845a05610c45e6711153ce.jpg)
+
+![](images/c32017ea361563131d5922e178816f4a87cf2a582138ed9e5b40e79c8aaa373a.jpg)
+Figure 12: Our banded matrices consistently perform at least as well as the best prior method in each range of $\epsilon$ . Around $\epsilon \approx 1$ , we observe significant utility benefits from the banded mechanism of around $2 - 3$ percentage points over DP-SGD. We only optimize BANDMF over $\hat{b} \in [0, n / k]$ , which leads to a regime around $\epsilon > 5$ where it performs worse than the MULTI-EPOCH MF of Choquette-Choo et al. [15]; $\hat{b} = n$ is equivalent to this approach modulo the sensitivity definition, which we exclude to emphasize the regime we improve on. Empirical setup is in App. H.
+
+![](images/096809fdc4876289500edeac3a85ca152dd1803c278e9c2fe71f737afd5f8edf.jpg)
+
+# I Additional StackOverflow Next-Word-Prediction Experiment Details
+
+We follow the experimental setup for StackOverflow NWP from Denisov et al. [17] and Choquette-Choo et al. [15]. Except for SINGLE-EPOCH MF (which uses $B = 167$ clients/round for 1 epoch), all privacy guarantees and accuracy results are for 6 epochs of training using $B = 1000$ clients/round for 2052 rounds (also 1 epoch). The matrices used in these experiments are included in Table 2.
+
+For computational efficiency in estimating model accuracy at a given privacy guarantee, we actually compute in simulation updates from only 100 clients/round, and scale the noise multiplier by a corresponding factor $(\frac{100}{1000}$ for 6 epoch experiments, $\frac{100}{167}$ for SINGLE-EPOCH MF). This approach has been used previously [35, 44], and we independently verified it has a negligible impact on the estimates of accuracy figures we report. 
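The scaling above works because the noise standard deviation of the privatized average, $\sigma / B$, is preserved when both the cohort and the multiplier shrink by the same factor. A two-line check (our own sketch, not the experiment code; the value of `sigma` is arbitrary):

```python
import math

# The privatized average is (sum of clipped updates + sigma * z) / B, so the
# per-coordinate noise std of the average is sigma / B.  Simulating with a
# smaller cohort B_sim and multiplier sigma * B_sim / B leaves it unchanged.
B, B_sim, sigma = 1000, 100, 7.0  # sigma is an arbitrary value for the check
sigma_sim = sigma * (B_sim / B)
assert math.isclose(sigma / B, sigma_sim / B_sim)
```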
+ +![](images/b6a79830ceb1f54a74c53ea12b2da36b2c60fcb5455799029c7426d76c1bc3de.jpg) +Figure 13: Correlation between optimal server learning rates $\eta_{s}$ and the effective RMSE during training, see Eq. (11). + +![](images/bf9a4d5a59241cdc4c44340bd7e542abdd5f361461a9388e264b2805ba84ed81.jpg) + +Optimizer and learning-rate tuning For all SO NWP experiments we use the FedSGDM optimizer [50]. This optimization approach first takes multiple local SGD steps (with learning rate 1.0 in our experiments) on the training data of each user in the batch (cohort) before clipping to $\zeta = 1$ , summing, and passing the result $\mathbf{x}_i$ into the DP mechanism which adds noise $[\mathbf{C}^{\dagger}\mathbf{z}]_{[i,:]} \in \mathbb{R}^{d}$ on each iteration $i$ . The resulting privatized sum is then divided by the batch size $B$ and passed to the "server" (post-aggregation) optimizer, in our case SGDM with momentum parameter $\beta = 0.95$ and learning rate $\eta_s$ . We find tuning $\eta_s$ depending on the noise level is critical. By using the computationally efficient approach mentioned above, we were able to conduct rigorous tuning over a learning rate grid of $1.7^i$ for powers $i$ in $\{-9,\dots,4\}$ , estimating good initial guesses based on prior work. Table 6 gives the full set of results, and Fig. 14 shows convergence as a function of the number of rounds (iters). + +Learning rate warmup and cooldown Denisov et al. [17] found learning rate cooldown was effective, and Choquette-Choo et al. [15] found that zeroing-out client updates with large $\ell_{\infty}$ norms was critical to stability in early training. We find that additionally introducing a learning-rate warmup schedule reduces the need for this zeroing-out (though we still enable it), and generally decreases the variance in training results. 
Hence, all of our experiments (for all algorithms) use a linear learning rate warmup from $0.05\eta_{s}$ to $1.0\eta_{s}$ over the first $15\%$ of rounds (309), and a linear decay from $1.0\eta_{s}$ to $0.05\eta_{s}$ over the last $25\%$ of rounds (513).

Using RMSE to tune optimal server learning rates Fig. 13 plots the server learning rates $\eta_{s}$ from Table 6 on the $y$-axis (with the optimal rates shown as larger symbols, and sub-optimal rates as small symbols), versus two different measures of the error for the DP mechanism on the $x$-axis. The left plot gives the effective prefix-sum RMSE (the objective we use for optimizing (banded) matrices $\mathbf{C}$),

$$
\text{Mechanism RMSE} \times \text{noise multiplier} / \text{clients per round} = \sqrt{\mathcal{L}\left(\mathbf{S}\mathbf{C}^{-1}, \mathbf{C}\right) / n} \times \sigma / B, \tag{11}
$$

where $\mathbf{S}$ is the prefix-sum workload (lower-triangular matrix of ones) and $\sigma$ and $B$ are as given in Table 4. The right plot uses the RMSE in the error of individual gradients, computed by replacing the $\mathcal{L}$ term in the above with $\mathcal{L}(\mathbf{I}\mathbf{C}^{-1},\mathbf{C})$, where we take the workload $\mathbf{A}$ to be the identity matrix $\mathbf{I}$ rather than the prefix-sum matrix $\mathbf{S}$.

We see a strong linear correlation between the prefix-sum RMSE and the optimal learning rate in the left plot; this does not hold for individual gradient errors (right plot). Based on this, we use the following linear regression to choose learning rates for the non-federated (amplified) SO NWP experiments (still rounding to the nearest $1.7^{i}$ for consistency):

$$
\log(\eta_{s}) = -0.95 \cdot \log(L_{e}) - 4.64,
$$

where $L_{e}$ is the effective prefix-sum RMSE of Eq. (11). This allowed us to estimate learning rates for the amplified experiments with a high degree of accuracy; Table 7 gives the final selected learning rates.
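The learning-rate selection rule above can be sketched as follows. This is a toy sketch under the assumption that, for sensitivity-1 matrices, the objective $\mathcal{L}(\mathbf{S}\mathbf{C}^{-1}, \mathbf{C})$ reduces to the squared Frobenius norm of $\mathbf{S}\mathbf{C}^{-1}$; the function names and this simplification are ours:

```python
import numpy as np

def effective_prefix_sum_rmse(C: np.ndarray, sigma: float, B: float) -> float:
    """Eq. (11): sqrt(L(S C^{-1}, C) / n) * sigma / B, taking L to be the
    squared Frobenius norm of S C^{-1} (assumes C is scaled to sensitivity 1)."""
    n = C.shape[0]
    S = np.tril(np.ones((n, n)))        # prefix-sum workload
    B_fac = S @ np.linalg.inv(C)
    return float(np.sqrt((B_fac ** 2).sum() / n) * sigma / B)

def predict_server_lr(rmse: float) -> float:
    """Fitted rule log(eta_s) = -0.95 * log(L_e) - 4.64, rounded to the nearest
    point of the tuning grid 1.7**i, i in {-9, ..., 4}."""
    log_eta = -0.95 * np.log(rmse) - 4.64
    i = int(round(log_eta / np.log(1.7)))  # nearest grid exponent in log-space
    return 1.7 ** max(-9, min(4, i))
```

For DP-SGD, $\mathbf{C} = \mathbf{I}$ and $\mathbf{S}\mathbf{C}^{-1}$ is simply the prefix-sum workload itself.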
![](images/735fe94877fe292fe4d9104a811fc905071c149f36356e6aa0f372689ebb4c39.jpg)
Figure 14: Convergence plots for all cross-device federated learning simulation experiments.
| Mechanism | Clients per round $B$ | $\epsilon$ | Noise mult. $\sigma$ | Server lr $\eta_s$ | Eval Accuracy (%, Smoothed) | Test Accuracy (%) |
| --- | --- | --- | --- | --- | --- | --- |
| DP-SGD | 1000 | 1 | 4.22468 | 0.0244 | 16.82 | 16.69 |
| Single-epoch MF | 167 | 1 | 4.22468 | 0.1197 | 20.29 | 20.44 |
| Optimal TreeAgg | 1000 | 1 | 4.22468 | 0.2035 | 21.15 | 21.25 |
| Multi-epoch MF | 1000 | 1 | 4.76079 | 0.2035 | 21.96 | 21.92 |
| Band MF (Ours) | 1000 | 1 | 4.22468 | 0.2035 | 22.12 | 22.05 |
| DP-SGD | 1000 | 2 | 2.23048 | 0.0414 | 18.70 | 18.42 |
| Single-epoch MF | 167 | 2 | 2.23048 | 0.2035 | 21.66 | 21.70 |
| Optimal TreeAgg | 1000 | 2 | 2.23048 | 0.3460 | 22.52 | 22.59 |
| Multi-epoch MF | 1000 | 2 | 2.51352 | 0.3460 | 23.15 | 23.04 |
| Band MF (Ours) | 1000 | 2 | 2.23048 | 0.5882 | 23.31 | 23.19 |
| DP-SGD | 1000 | 4 | 1.19352 | 0.0704 | 20.07 | 19.81 |
| Single-epoch MF | 167 | 4 | 1.19352 | 0.3460 | 22.94 | 22.90 |
| Optimal TreeAgg | 1000 | 4 | 1.19352 | 0.5882 | 23.66 | 23.62 |
| Multi-epoch MF | 1000 | 4 | 1.34498 | 0.5882 | 24.19 | 24.02 |
| Band MF (Ours) | 1000 | 4 | 1.19352 | 1.0000 | 24.35 | 24.16 |
| DP-SGD | 1000 | 8 | 0.65294 | 0.1197 | 21.26 | 21.08 |
| Single-epoch MF | 167 | 8 | 0.65293 | 0.5882 | 24.03 | 23.88 |
| Optimal TreeAgg | 1000 | 8 | 0.65294 | 1.0000 | 24.54 | 24.45 |
| Multi-epoch MF | 1000 | 8 | 0.73579 | 1.0000 | 24.95 | - |
| Band MF (Ours) | 1000 | 8 | 0.65294 | 1.0000 | 25.06 | 24.88 |
| DP-SGD | 1000 | 16 | 0.36861 | 0.3460 | 22.51 | 22.26 |
| Single-epoch MF | 167 | 16 | 0.36861 | 1.0000 | 24.80 | 24.62 |
| Optimal TreeAgg | 1000 | 16 | 0.36861 | 1.7000 | 25.15 | 25.14 |
| Multi-epoch MF | 1000 | 16 | 0.41539 | 1.7000 | 25.50 | 25.33 |
| Band MF (Ours) | 1000 | 16 | 0.36861 | 1.7000 | 25.59 | 25.41 |

Table 4: Parameters and metrics for simulated cross-device FL, Fig. 6(a). The noise multipliers $\sigma$ are calibrated to achieve the given $\epsilon$ guarantees at $\delta = 10^{-6}$ under $b = 342$-min-separation. The matrices are scaled to have sensitivity 1 under $(k = 6, b = 342)$, see Table 2, and so a larger noise multiplier $\sigma$ is necessary for the MULTI-EPOCH MF matrices. Test-set accuracy for MULTI-EPOCH MF at $\epsilon = 8$ was unavailable. All BANDMF matrices use $\hat{b} = 342$. Note that $\sigma$ is reported for reproducibility purposes (it is the parameter passed to Alg. 1), but it does not directly capture the noise added to either the gradients $\hat{\mathbf{x}}_i$ or their cumulative sums, as the noise $\mathbf{z}$ is linearly transformed by $\mathbf{C}^{-1}$, which varies across the mechanisms above.
| Mechanism | Clients per round $B$ | $\epsilon$ | Noise mult. $\sigma$ | Server lr $\eta_s$ | Eval Accuracy (%, Smoothed) | Test Accuracy (%) |
| --- | --- | --- | --- | --- | --- | --- |
| DP-SGD, w/ ampl. | 1000 | 1 | 0.37313 | 0.3460 | 22.50 | 22.22 |
| Multi-epoch MF, no ampl. | 1000 | 1 | 4.22468 | 0.2035 | 22.11 | 22.10 |
| (Band) MF, w/ ampl. (Ours) | 1000 | 1 | 0.79118 | 0.3460 | 23.11 | 22.83 |
| DP-SGD, w/ ampl. | 1000 | 2 | 0.30481 | 0.3460 | 22.89 | 22.62 |
| Multi-epoch MF, no ampl. | 1000 | 2 | 2.23048 | 0.3460 | 23.36 | 23.24 |
| (Band) MF, w/ ampl. (Ours) | 1000 | 2 | 0.64708 | 0.5882 | 24.01 | 23.71 |
| DP-SGD, w/ ampl. | 1000 | 4 | 0.25136 | 0.3460 | 23.27 | 22.94 |
| Multi-epoch MF, no ampl. | 1000 | 4 | 1.19352 | 0.5882 | 24.36 | 24.16 |
| (Band) MF, w/ ampl. (Ours) | 1000 | 4 | 0.52224 | 1.0000 | 24.67 | 24.42 |
| DP-SGD, w/ ampl. | 1000 | 8 | 0.20567 | 0.5882 | 23.59 | 23.30 |
| Multi-epoch MF, no ampl. | 1000 | 8 | 0.65294 | 1.0000 | 25.08 | 24.88 |
| (Band) MF, w/ ampl. (Ours) | 1000 | 8 | 0.43490 | 1.7000 | 25.26 | 24.99 |
| DP-SGD, w/ ampl. | 1000 | 16 | 0.16876 | 0.5882 | 23.96 | 23.61 |
| Multi-epoch MF, no ampl. | 1000 | 16 | 0.36861 | 1.7000 | 25.59 | 25.43 |
| (Band) MF, w/ ampl. (Ours) | 1000 | 16 | 0.36861 | 1.7000 | 25.59 | 25.43 |

Table 5: Parameters and metrics for centralized StackOverflow, Fig. 1(b). The noise multipliers are calibrated to achieve the given $\epsilon$ guarantees at $\delta = 10^{-6}$ under $(k = 6, b = 342)$-participation, assuming Poisson sampling for DP-SGD and BANDMF. For BANDMF, we tune $\hat{b}$ under amplification for optimal RMSE, selecting $\hat{b} = 9, 18, 32, 64, 2052$ for $\epsilon = 1, 2, 4, 8, 16$ respectively. For $\epsilon = 16$, we have $n = \hat{b}$, and hence BANDMF is identical to MULTI-EPOCH MF optimized with Eq. (6). Values of $\sigma$ are provided for reproducibility only; see comments on $\sigma$ from Table 4.
Columns are server learning rates $\eta_s$; entries are Eval Accuracy (%, Smoothed). A "-" indicates the learning rate was not run for that mechanism.

| $\epsilon$ | Mechanism | 0.0084 | 0.0143 | 0.0244 | 0.0414 | 0.0704 | 0.1197 | 0.2035 | 0.3460 | 0.5882 | 1.0000 | 1.7000 | 2.8900 | 4.9130 | 8.3521 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.0 | DP-SGD | 14.31 | 15.98 | 16.82 | 16.53 | 15.46 | 4.67 | - | - | - | - | - | - | - | - |
| 1.0 | Single-epoch MF | - | - | - | - | 20.16 | 20.29 | 19.68 | - | - | - | - | - | - | - |
| 1.0 | Optimal TreeAgg | - | - | - | - | - | 21.08 | 21.15 | 20.34 | - | - | - | - | - | - |
| 1.0 | Multi-epoch MF | - | - | - | - | - | 21.56 | 21.96 | 21.60 | - | - | - | - | - | - |
| 1.0 | Band MF, b=342 (Ours) | - | - | - | - | - | 21.70 | 22.12 | 21.96 | - | - | - | - | - | - |
| 2.0 | DP-SGD | - | 16.20 | 17.88 | 18.70 | 18.22 | 17.75 | 15.52 | - | - | - | - | - | - | - |
| 2.0 | Single-epoch MF | - | - | - | - | - | 21.46 | 21.66 | 21.26 | - | - | - | - | - | - |
| 2.0 | Optimal TreeAgg | - | - | - | - | - | - | 22.40 | 22.52 | 21.87 | - | - | - | - | - |
| 2.0 | Multi-epoch MF | - | - | - | - | - | - | 22.80 | 23.15 | 22.96 | - | - | - | - | - |
| 2.0 | Band MF, b=342 (Ours) | - | - | - | - | - | - | - | 23.27 | 23.31 | 22.57 | - | - | - | - |
| 4.0 | DP-SGD | - | - | 18.08 | 19.45 | 20.07 | 19.97 | 19.48 | 18.08 | - | - | - | - | - | - |
| 4.0 | Single-epoch MF | - | - | - | - | - | - | 22.66 | 22.94 | 22.60 | - | - | - | - | - |
| 4.0 | Optimal TreeAgg | - | - | - | - | - | - | - | 23.57 | 23.66 | 23.27 | - | - | - | - |
| 4.0 | Multi-epoch MF | - | - | - | - | - | - | - | 23.87 | 24.19 | 24.01 | - | - | - | - |
| 4.0 | Band MF, b=342 (Ours) | - | - | - | - | - | - | - | - | 24.26 | 24.35 | 23.74 | - | - | - |
| 8.0 | DP-SGD | - | - | - | - | 20.61 | 21.26 | 21.24 | 20.89 | 20.00 | - | - | - | - | - |
| 8.0 | Single-epoch MF | - | - | - | - | - | - | - | 23.73 | 24.03 | 23.71 | - | - | - | - |
| 8.0 | Optimal TreeAgg | - | - | - | - | - | - | - | - | 24.52 | 24.54 | 24.15 | - | - | - |
| 8.0 | Multi-epoch MF | - | - | - | - | - | - | - | - | 24.72 | 24.95 | 24.77 | 24.17 | - | - |
| 8.0 | Band MF, b=342 (Ours) | - | - | - | - | - | - | - | - | 24.76 | 25.06 | 24.92 | - | - | - |
| 16.0 | DP-SGD | - | - | - | - | - | - | 22.39 | 22.51 | 22.17 | - | - | - | - | - |
| 16.0 | Single-epoch MF | - | - | - | - | - | - | - | - | 24.66 | 24.80 | 24.50 | - | - | - |
| 16.0 | Optimal TreeAgg | - | - | - | - | - | - | - | - | 24.89 | 25.15 | 25.15 | - | - | - |
| 16.0 | Multi-epoch MF | - | - | - | - | - | - | - | - | - | 25.38 | 25.50 | 25.34 | - | - |
| 16.0 | Band MF, b=342 (Ours) | - | - | - | - | - | - | - | - | - | 25.38 | 25.59 | 25.47 | - | - |
| inf | DP-SGD | - | - | - | - | - | - | - | - | - | - | 25.96 | 26.23 | 26.24 | 8.03 |

Table 6: Federated learning rate tuning for StackOverflow NWP. Validation accuracy smoothed over the final 400 rounds of training, used to select the best server learning rates for the comparison of test-set accuracy presented in Fig. 6(a).
Columns are server learning rates $\eta_s$; entries are Eval Accuracy (%, Smoothed). A "-" indicates the learning rate was not run for that mechanism.

| $\epsilon$ | Mechanism | 0.1197 | 0.2035 | 0.3460 | 0.5882 | 1.0000 | 1.7000 | 2.8900 | 4.9130 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.0 | DP-SGD | - | 22.39 | 22.50 | 22.03 | - | - | - | - |
| 1.0 | Multi-epoch MF | 21.75 | 22.11 | 21.95 | - | - | - | - | - |
| 1.0 | Band MF, b=9 (Ours) | - | 22.83 | 23.11 | 23.03 | - | - | - | - |
| 2.0 | DP-SGD | - | 22.70 | 22.89 | 22.66 | - | - | - | - |
| 2.0 | Multi-epoch MF | - | 22.89 | 23.36 | 23.26 | - | - | - | - |
| 2.0 | Band MF, b=18 (Ours) | - | - | 23.80 | 24.01 | 23.77 | - | - | - |
| 4.0 | DP-SGD | - | 22.88 | 23.27 | 23.20 | - | - | - | - |
| 4.0 | Multi-epoch MF | - | - | 23.96 | 24.36 | 24.22 | 23.71 | - | - |
| 4.0 | Band MF, b=32 (Ours) | - | - | - | 24.52 | 24.67 | 24.43 | - | - |
| 8.0 | DP-SGD | - | - | 23.48 | 23.59 | 23.28 | - | - | - |
| 8.0 | Multi-epoch MF | - | - | - | 24.79 | 25.08 | 24.98 | 24.55 | - |
| 8.0 | Band MF, b=64 (Ours) | - | - | - | - | 25.15 | 25.26 | 24.79 | - |
| 16.0 | DP-SGD | - | - | 23.85 | 23.96 | 23.72 | - | - | - |
| 16.0 | Multi-epoch MF | - | - | - | - | 25.42 | 25.59 | 25.50 | 24.92 |
| 16.0 | Band MF, b=342 (Ours) | - | - | - | - | 25.37 | 25.55 | 25.45 | 24.90 |
| 16.0 | Band MF, b=64 (Ours) | - | - | - | - | 25.38 | 25.54 | 25.40 | - |
Table 7: Centralized learning rate tuning for StackOverflow NWP. Validation accuracy smoothed over the final 400 rounds of training, used to select the best server learning rates for the comparison of test-set accuracy presented in Fig. 1(b). DP-SGD and BANDMF use amplification.

Algorithm 8 Banded Matrix Multiplication
Input: $\hat{b}$-banded lower triangular matrix $\mathbf{C} \in \mathbb{R}^{n \times n}$, vector $\mathbf{x} \in \mathbb{R}^n$
Output: $\mathbf{Cx}$
for $i = 1, \dots, n$ do
  $\mathbf{y}_i = \sum_{j=i-\hat{b}+1}^i \mathbf{C}_{[i,j]}\mathbf{x}_j$
return $\mathbf{y}$

Algorithm 9 Banded Inverse Multiplication
Input: $\hat{b}$-banded lower triangular matrix $\mathbf{C} \in \mathbb{R}^{n \times n}$, vector $\mathbf{y} \in \mathbb{R}^n$
Output: $\mathbf{C}^{-1}\mathbf{y}$
for $i = 1, \dots, n$ do
  $\mathbf{x}_i = (\mathbf{y}_i - \sum_{j=i-\hat{b}+1}^{i-1} \mathbf{C}_{[i,j]} \mathbf{x}_j) / \mathbf{C}_{[i,i]}$
return $\mathbf{x}$

Figure 15: Algorithms for matrix-vector and inverse matrix-vector multiplication by a banded matrix. To simplify the presentation, we use the convention that out-of-bounds indexing into a matrix or vector returns 0.

# J Efficient Multiplication and Inverse of Banded Matrices

Algorithms 8 and 9 (Fig. 15) give algorithms for lower-triangular banded matrix-vector multiplication and inverse banded matrix-vector multiplication. Note that both algorithms are compatible with the streaming nature of gradients: as soon as the next input $\mathbf{x}_i$ is received, the algorithm can immediately output $\mathbf{y}_i$. Both algorithms require storing a state of size $\hat{b}$, and run in $O(n \cdot \hat{b})$ time. While the algorithms are described with respect to computing matrix-vector products, they can also be used to compute matrix-matrix products where the right-hand side is an $n \times d$ matrix, by multiplying by each column independently.
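The two algorithms above translate directly into code; a sketch using dense NumPy rows for clarity (a streaming implementation would keep only the $\hat{b}$-wide band of $\mathbf{C}$ and the last $\hat{b}$ entries of the state):

```python
import numpy as np

def banded_matvec(C: np.ndarray, x: np.ndarray, b_hat: int) -> np.ndarray:
    """Algorithm 8: y = C x for a b_hat-banded lower-triangular C.
    Row i only touches columns max(0, i - b_hat + 1), ..., i, so y[i] can be
    emitted as soon as x[i] arrives."""
    n = x.shape[0]
    y = np.zeros(n)
    for i in range(n):
        lo = max(0, i - b_hat + 1)
        y[i] = C[i, lo:i + 1] @ x[lo:i + 1]
    return y

def banded_inv_matvec(C: np.ndarray, y: np.ndarray, b_hat: int) -> np.ndarray:
    """Algorithm 9: x = C^{-1} y by forward substitution over the band."""
    n = y.shape[0]
    x = np.zeros(n)
    for i in range(n):
        lo = max(0, i - b_hat + 1)
        x[i] = (y[i] - C[i, lo:i] @ x[lo:i]) / C[i, i]
    return x
```

Applying either function to each column of an $n \times d$ right-hand side recovers the matrix-matrix variant discussed above.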
In this setting, these algorithms require $O(\hat{b} \cdot d)$ space and $O(n \cdot \hat{b} \cdot d)$ time. Both algorithms have appeared previously in the literature on Monte Carlo methods, which have a similar problem at their core to that of noise generation for MF; see e.g. [56, Section 2].

# K Application to a Real-World Cross-Device FL System

We train a one-layer LSTM language model of $\sim 2.4$ million parameters in a practical cross-device FL system following [58]. The model is used for predicting the next word of Spanish in a mobile virtual keyboard. We pretrain the model on the public multilingual C4 dataset [49, 60], and then fine-tune it with on-device user data in FL. In a common practical FL system, clients have to satisfy criteria such as being charged, idle, and connected to an unmetered network to participate in a round [10, 27, 31, 47]; hence only a subset of clients can be reached, and there is a strong diurnal pattern of client participation [61, 63]. It is very challenging to hold out a fixed set of clients for evaluation, or to implement random sampling for privacy amplification. Though client participation control is feasible through the client timer, tuning the separation $b$ can be challenging in practice. Therefore, the training of Fig. 6(b) can achieve a smaller separation $b$ than in [58].

# K.1 Reporting privacy guarantees

We follow the guidelines outlined in [48, Sec. 5.3] to report privacy guarantees.

1. DP setting. This is a central DP guarantee where the service provider is trusted to correctly implement the mechanism.

# 2. Instantiating the DP Definition

(a) Data accesses covered: The DP guarantee applies to all well-behaved clients $^{11}$ in a single training run. We do not account for hyperparameter tuning, or the selection of the final model checkpoint using evaluation metrics or A/B testing, in our guarantees. Public multilingual C4 data [49, 60] is used for pre-training.
(b) Final mechanism output: Only the final model checkpoint is released for use in production; however, the mechanism's output is technically the full sequence of privatized gradients, so the guarantee also applies at this level, and hence all intermediate models are protected (including those sent to devices participating in federated learning).

(c) Unit of privacy: Device-level DP is considered, i.e., the notion of adjacency is with respect to arbitrary training datasets on each client device, and the device might have an arbitrarily large local dataset containing arbitrary training examples. For users with a single device, this corresponds directly to user-level DP; for devices shared by multiple users, this provides a stronger notion of DP than user-level; for a user with multiple devices that happen to participate in training the model, the notion is weaker, but group privacy can be used to obtain a user-level guarantee.

(d) Adjacency definition for "neighbouring" datasets: We use the zero-out definition [35]. This is a special form of the add-or-remove definition, where neighboring datasets differ by the addition/removal of a single client. In the absence of a client at any training step, we assume that the client's model update gets replaced with the all-zeros vector. This assumption enforces a subtle modification to the traditional add/remove notion of DP, which allows neighboring datasets to have the same number of records.

# 3. Privacy accounting details

(a) Type of accounting used: Both $\rho$-zCDP [11] accounting and PLD accounting [18] for $(\epsilon, \delta)$-DP are used.

(b) Accounting assumptions: Each client participates only a limited number of times during training, and there is a min-separation of at least $b$ rounds between any two consecutive participations of a client. This is enforced by a timer on clients in the cross-device FL system.
(c) The formal DP statement: The privacy guarantees are $\rho = 0.52$-zCDP and $(\epsilon = 6.69, \delta = 10^{-10})$-DP for ONLINE TREEAGG, while BANDMF achieves $\rho = 0.24$-zCDP and $(\epsilon = 4.35, \delta = 10^{-10})$-DP.

(d) Transparency and verifiability: We are going to open-source our code based on TensorFlow Federated and TensorFlow Privacy. Key portions of the cross-device FL system will also be open-sourced.
# $\mathcal{H}$ -Consistency Bounds: Characterization and Extensions

Anqi Mao
Courant Institute
New York, NY 10012
aqmao@cims.nyu.edu

Mehryar Mohri
Google Research & CIMS
New York, NY 10011
mohri@google.com

Yutao Zhong
Courant Institute
New York, NY 10012
yutao@cims.nyu.edu

# Abstract

A series of recent publications by Awasthi, Mao, Mohri,
and Zhong [2022b] have introduced the key notion of $\mathcal{H}$ -consistency bounds for surrogate loss functions. These are upper bounds on the zero-one estimation error of any predictor in a hypothesis set, expressed in terms of its surrogate loss estimation error. They are both non-asymptotic and hypothesis set-specific and thus stronger and more informative than Bayes-consistency. However, determining if they hold and deriving these bounds have required a specific proof and analysis for each surrogate loss. Can we derive more general tools and characterizations? This paper provides both a general characterization and an extension of $\mathcal{H}$ -consistency bounds for multi-class classification. We present new and tight $\mathcal{H}$ -consistency bounds for both the family of constrained losses and that of comp-sum losses, which covers the familiar cross-entropy, or logistic loss applied to the outputs of a neural network. We further extend our analysis beyond the completeness assumptions adopted in previous studies and cover more realistic bounded hypothesis sets. Our characterizations are based on error transformations, which are explicitly defined for each formulation. We illustrate the application of our general results through several special examples. A by-product of our analysis is the observation that a recently derived multi-class $\mathcal{H}$ -consistency bound for cross-entropy reduces to an excess bound and is not significant. Instead, we prove a much stronger and more significant guarantee. + +# 1 Introduction + +Bayes-consistency is an important property of surrogate loss functions. It requires that minimizing the surrogate excess error over the family of all measurable functions leads to the minimization of the target error loss in the limit [Steinwart, 2007]. 
This property applies to a broad family of convex margin-based losses in binary classification [Zhang, 2004a, Bartlett et al., 2006], as well as some extensions in multi-class classification [Tewari and Bartlett, 2007]. However, Bayes-consistency does not apply to the hypothesis sets commonly used for learning, such as the family of linear models or that of neural networks, which of course do not include all measurable functions. Furthermore, it is also only an asymptotic property and does not supply any convergence guarantee.

To address these limitations, a series of recent publications by Awasthi, Mao, Mohri, and Zhong [2022b] introduced the key notion of $\mathcal{H}$-consistency bounds for surrogate loss functions. These are upper bounds on the zero-one estimation error of any predictor in a hypothesis set, expressed in terms of its surrogate loss estimation error. They are both non-asymptotic and hypothesis set-specific and thus stronger and more informative than Bayes-consistency. However, determining the validity of these bounds and deriving them have required a specific proof and analysis for each surrogate loss. Can we derive more general tools and characterizations for $\mathcal{H}$-consistency bounds?

This paper provides both a general characterization and an extension of $\mathcal{H}$-consistency bounds for multi-class classification. Previous approaches to deriving these bounds required the development of new proofs for each specific case. In contrast, we introduce the general concept of an error transformation function that serves as a general tool for deriving such bounds, together with tightness guarantees. We show that deriving an $\mathcal{H}$-consistency bound for comp-sum losses and constrained losses, for both complete and bounded hypothesis sets, can be reduced to the calculation of their corresponding error transformation function.
Our general tools and tight bounds have several remarkable advantages: first, they improve the existing bounds for complete hypothesis sets previously proven in [Awasthi et al., 2022b]; second, they encompass all comp-sum and constrained losses studied thus far, as well as many new ones [Awasthi et al., 2022a, Mao et al., 2023h]; third, they extend beyond the completeness assumption adopted in previous work; fourth, they provide novel guarantees for bounded hypothesis sets; and, finally, they help prove a much stronger and more significant guarantee for the logistic loss with linear hypothesis sets than that of [Zheng et al., 2023].

Previous work. Here, we briefly discuss recent studies of $\mathcal{H}$-consistency bounds by Awasthi et al. [2022a,b], Mao et al. [2023h] and Zheng et al. [2023]. Awasthi et al. [2022a] introduced and studied $\mathcal{H}$-consistency bounds in binary classification. They provided a series of tight $\mathcal{H}$-consistency bounds for the bounded hypothesis sets of linear models and one-hidden-layer neural networks. The subsequent study [Awasthi et al., 2022b] further generalized the framework to multi-class classification and presented an extensive study of $\mathcal{H}$-consistency bounds for diverse multi-class surrogate losses, including negative results for max losses [Crammer and Singer, 2001] and positive results for sum losses [Weston and Watkins, 1998] and constrained losses [Lee et al., 2004]. However, the hypothesis sets examined in their analysis were assumed to be complete, which rules out the bounded hypothesis sets typically used in practice. Moreover, the final bounds derived in [Awasthi et al., 2022b] are based on ad hoc methods and may not be tight.
[Mao et al., 2023h] complemented this previous work by studying a wide family of comp-sum losses in multi-class classification, which generalizes the sum losses and includes as special cases the logistic loss [Verhulst, 1838, 1845, Berkson, 1944, 1951], the generalized cross-entropy loss [Zhang and Sabuncu, 2018], and the mean absolute error loss [Ghosh et al., 2017]. Here too, the completeness assumption on the hypothesis sets was adopted, and their $\mathcal{H}$-consistency bounds do not apply to common bounded hypothesis sets in practice. Recently, Zheng et al. [2023] proved $\mathcal{H}$-consistency bounds for the multi-class logistic loss with bounded linear hypothesis sets. However, their bounds require a crucial distributional assumption, under which the minimizability gaps coincide with the approximation errors. Thus, their bounds can be recovered as excess error bounds, which are less significant.

Other related work on $\mathcal{H}$-consistency bounds includes $\mathcal{H}$-consistency bounds for pairwise ranking [Mao, Mohri, and Zhong, 2023d,e]; theoretically grounded surrogate losses and algorithms for learning with abstention supported by $\mathcal{H}$-consistency bounds, including the study of score-based abstention [Mao, Mohri, and Zhong, 2023c] and learning to abstain with a fixed predictor with application in decontextualization [Mohri, Andor, Choi, Collins, Mao, and Zhong, 2023]; principled approaches for learning to defer with multiple experts that benefit from strong $\mathcal{H}$-consistency bounds, including the single-stage scenario [Mao, Mohri, and Zhong, 2023b] and a two-stage scenario [Mao, Mohri, Mohri, and Zhong, 2023a]; $\mathcal{H}$-consistency theory and algorithms for adversarial robustness [Awasthi et al., 2021a,b, 2023a, Mao et al., 2023h, Awasthi et al., 2023b]; and efficient algorithms and loss functions for structured prediction with stronger $\mathcal{H}$-consistency guarantees [Mao et al., 2023g].
Structure of this paper. We present new and tight $\mathcal{H}$-consistency bounds for both the family of comp-sum losses (Section 4.1) and that of constrained losses (Section 5.1), which cover the familiar cross-entropy, or logistic loss applied to the outputs of a neural network. We further extend our analysis beyond the completeness assumptions adopted in previous studies and cover more realistic bounded hypothesis sets (Sections 4.2 and 5.2). Our characterizations are based on error transformations, which are explicitly defined for each formulation. We illustrate the application of our general results through several special examples. A by-product of our analysis is the observation that a recently derived multi-class $\mathcal{H}$-consistency bound for cross-entropy reduces to an excess bound independent of the hypothesis set. Instead, we prove a much stronger and more significant guarantee (Section 4.2).

We give a comprehensive discussion of related work in Appendix A. We start with some basic definitions and notation in Section 2.

# 2 Preliminaries

We denote by $\mathcal{X}$ the input space, by $\mathcal{Y}$ the output space, and by $\mathcal{D}$ a distribution over $\mathcal{X} \times \mathcal{Y}$. We consider the standard scenario of multi-class classification, where $\mathcal{Y} = \{1, \ldots, n\}$. Given a hypothesis set $\mathcal{H}$ of functions mapping $\mathcal{X} \times \mathcal{Y}$ to $\mathbb{R}$, the multi-class classification problem consists of finding a hypothesis $h \in \mathcal{H}$ with small generalization error $\mathcal{R}_{\ell_{0-1}}(h)$, defined by $\mathcal{R}_{\ell_{0-1}}(h) = \mathbb{E}_{(x,y) \sim \mathcal{D}}[\ell_{0-1}(h,x,y)]$, where $\ell_{0-1}(h,x,y) = \mathbb{1}_{\mathfrak{h}(x) \neq y}$ is the multi-class zero-one loss with $\mathfrak{h}(x) = \operatorname{argmax}_{y \in \mathcal{Y}} h(x,y)$ the prediction of $h$ for the input point $x$.
We also denote by $\mathsf{H}(x)$ the set of all predictions associated with input $x$ generated by functions in $\mathcal{H}$, that is, $\mathsf{H}(x) = \{\mathfrak{h}(x) : h \in \mathcal{H}\}$.

We will analyze the guarantees of surrogate multi-class losses in terms of the zero-one loss. We denote by $\ell$ a surrogate loss and by $\mathcal{R}_{\ell}(h)$ its generalization error, $\mathcal{R}_{\ell}(h) = \mathbb{E}_{(x,y)\sim \mathcal{D}}[\ell (h,x,y)]$. For a loss function $\ell$, we define the best-in-class generalization error within a hypothesis set $\mathcal{H}$ as $\mathcal{R}_{\ell}^{*}(\mathcal{H}) = \inf_{h\in \mathcal{H}}\mathcal{R}_{\ell}(h)$, and refer to $\mathcal{R}_{\ell}(h) - \mathcal{R}_{\ell}^{*}(\mathcal{H})$ as the estimation error. We will study the key notion of $\mathcal{H}$-consistency bounds [Awasthi et al., 2022a,b], which are upper bounds on the zero-one estimation error of any predictor in a hypothesis set, expressed in terms of its surrogate loss estimation error, for some non-decreasing real-valued function $f$:

$$
\forall h \in \mathcal{H}, \quad \mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) \leq f\big(\mathcal{R}_{\ell}(h) - \mathcal{R}_{\ell}^{*}(\mathcal{H})\big).
$$

These bounds imply that the zero-one estimation error is at most $f(\epsilon)$ whenever the surrogate loss estimation error is bounded by $\epsilon$. Thus, the learning guarantees provided by $\mathcal{H}$-consistency bounds are both non-asymptotic and hypothesis set-specific.
The function $f$ appearing in these bounds is expressed in terms of a minimizability gap, which is a quantity measuring the difference between the best-in-class error $\mathcal{R}_{\ell}^{*}(\mathcal{H})$ and the expected best-in-class conditional error $\mathbb{E}_x[\mathcal{C}_{\ell}^*(\mathcal{H}, x)]$: $\mathcal{M}_{\ell}(\mathcal{H}) = \mathcal{R}_{\ell}^{*}(\mathcal{H}) - \mathbb{E}_x[\mathcal{C}_{\ell}^*(\mathcal{H}, x)]$, where $\mathcal{C}_{\ell}(h, x) = \mathbb{E}_{y|x}[\ell(h, x, y)]$ and $\mathcal{C}_{\ell}^{*}(\mathcal{H}, x) = \inf_{h \in \mathcal{H}} \mathcal{C}_{\ell}(h, x)$ are the conditional error and best-in-class conditional error, respectively. We further write $\Delta \mathcal{C}_{\ell, \mathcal{H}}(h, x) = \mathcal{C}_{\ell}(h, x) - \mathcal{C}_{\ell}^{*}(\mathcal{H}, x)$ to denote the conditional regret. Note that the minimizability gap is an inherent quantity depending on the hypothesis set $\mathcal{H}$ and the loss function $\ell$.

By Lemma 1, the minimizability gap for the zero-one loss, $\mathcal{M}_{\ell_{0 - 1}}(\mathcal{H})$, coincides with its approximation error $\mathcal{A}_{\ell_{0 - 1}}(\mathcal{H}) = \mathcal{R}_{\ell_{0 - 1}}^{*}(\mathcal{H}) - \mathcal{R}_{\ell_{0 - 1}}^{*}(\mathcal{H}_{\mathrm{all}})$ when the set of all possible predictions generated by $\mathcal{H}$ covers the label space $\mathcal{Y}$. This holds for typical hypothesis sets used in practice.
However, for a surrogate loss $\ell$, the minimizability gap $\mathcal{M}_{\ell}(\mathcal{H})$ is always upper bounded by, and in general finer than, its approximation error $\mathcal{A}_{\ell}(\mathcal{H}) = \mathcal{R}_{\ell}^{*}(\mathcal{H}) - \mathcal{R}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}})$, since $\mathcal{M}_{\ell}(\mathcal{H}) = \mathcal{A}_{\ell}(\mathcal{H}) - I_{\ell}(\mathcal{H})$, where $\mathcal{H}_{\mathrm{all}}$ is the family of all measurable functions and $I_{\ell}(\mathcal{H}) = \mathbb{E}_x[\mathcal{C}_{\ell}^{*}(\mathcal{H},x) - \mathcal{C}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}},x)]$ (see Appendix B for a more detailed discussion). Thus, an $\mathcal{H}$-consistency bound, expressed as follows for some increasing function $\Gamma$:

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) \leq \Gamma\left(\mathcal{R}_{\ell}(h) - \mathcal{R}_{\ell}^{*}(\mathcal{H}) + \mathcal{M}_{\ell}(\mathcal{H})\right), \tag{1}
$$

is more favorable than an excess error bound expressed in terms of approximation errors, $\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^*(\mathcal{H}) + \mathcal{A}_{\ell_{0-1}}(\mathcal{H}) \leq \Gamma(\mathcal{R}_\ell(h) - \mathcal{R}_\ell^*(\mathcal{H}) + \mathcal{A}_\ell(\mathcal{H}))$. Here, $\Gamma$ is typically linear or the square-root function modulo constants. When $\mathcal{H} = \mathcal{H}_{\mathrm{all}}$, the family of all measurable functions, an $\mathcal{H}$-consistency bound coincides with the excess error bound and implies Bayes-consistency by taking the limit. It is therefore a stronger guarantee than an excess error bound and Bayes-consistency.

The minimizability gap is always non-negative, since the infimum of the expectation is greater than or equal to the expectation of the infimum.
Furthermore, as shown in Appendix B, when $\mathcal{H}$ is the family of all measurable functions or when the Bayes error coincides with the best-in-class error, that is, $\mathcal{R}_{\ell}^{*}(\mathcal{H}) = \mathcal{R}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}})$, the minimizability gap vanishes. In such cases, (1) implies the $\mathcal{H}$-consistency of a surrogate loss $\ell$ with respect to the zero-one loss $\ell_{0-1}$:

$$
\mathcal{R}_{\ell}\left(h_{n}\right) - \mathcal{R}_{\ell}^{*}(\mathcal{H}) \xrightarrow{n \to +\infty} 0 \Longrightarrow \mathcal{R}_{\ell_{0-1}}\left(h_{n}\right) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) \xrightarrow{n \to +\infty} 0.
$$

In the next sections, we will provide both a general characterization and an extension of $\mathcal{H}$-consistency bounds for multi-class classification. Before proceeding, we first introduce a useful lemma from [Awasthi et al., 2022b], which characterizes the conditional regret of the zero-one loss explicitly. We denote by $p(x) = (p(x,1),\ldots ,p(x,n))$ the conditional distribution of $y$ given $x$.

Lemma 1. For the zero-one loss $\ell_{0-1}$, the best-in-class conditional error and the conditional regret for $\ell_{0-1}$ can be expressed as follows: for any $x \in \mathcal{X}$, we have

$$
\mathcal{C}_{\ell_{0-1}}^{*}(\mathcal{H}, x) = 1 - \max_{y \in \mathsf{H}(x)} p(x, y) \quad \text{and} \quad \Delta \mathcal{C}_{\ell_{0-1}, \mathcal{H}}(h, x) = \max_{y \in \mathsf{H}(x)} p(x, y) - p(x, \mathfrak{h}(x)).
$$

# 3 Comparison with previous work

Here, we briefly discuss previous studies of $\mathcal{H}$-consistency bounds [Awasthi et al., 2022a,b, Zheng et al., 2023, Mao et al., 2023h] in standard binary or multi-class classification and compare their results with those we present.

Awasthi et al. [2022a] studied $\mathcal{H}$-consistency bounds in binary classification.
They provided a series of tight $\mathcal{H}$ -consistency bounds for the bounded hypothesis sets of linear models $\mathcal{H}_{\mathrm{lin}}^{\mathrm{bi}}$ and of one-hidden-layer neural networks $\mathcal{H}_{\mathrm{NN}}^{\mathrm{bi}}$ , defined as follows:

$$
\mathcal {H} _ {\mathrm {lin}} ^ {\mathrm {bi}} = \left\{x \mapsto w \cdot x + b \mid \| w \| \leq W, | b | \leq B \right\}
$$

$$
\mathcal {H} _ {\mathrm {NN}} ^ {\mathrm {bi}} = \left\{x \mapsto \sum_ {j = 1} ^ {n} u _ {j} \left(w _ {j} \cdot x + b\right) _ {+} \mid \| u \| _ {1} \leq \Lambda , \| w _ {j} \| \leq W, | b | \leq B \right\},
$$

where $B$ , $W$ , and $\Lambda$ are positive constants and where $(\cdot)_{+} = \max (\cdot ,0)$ . We will show that our bounds recover these binary classification $\mathcal{H}$ -consistency bounds.

The scenario of multi-class classification is more challenging and more crucial in applications. Recent work by Awasthi et al. [2022b] showed that max losses [Crammer and Singer, 2001], defined as $\ell^{\max}(h,x,y) = \max_{y' \neq y} \Phi(h(x,y) - h(x,y'))$ for some convex and non-increasing function $\Phi$ , cannot admit meaningful $\mathcal{H}$ -consistency bounds, unless the distribution is deterministic. They also presented a series of $\mathcal{H}$ -consistency bounds for sum losses [Weston and Watkins, 1998] and constrained losses [Lee et al., 2004] for symmetric and complete hypothesis sets, that is, hypothesis sets such that

$$
\mathcal {H} = \{h \colon \mathcal {X} \times \mathcal {Y} \rightarrow \mathbb {R}: h (\cdot , y) \in \mathcal {F}, \forall y \in \mathcal {Y} \} \tag {symmetry}
$$

$$
\forall x \in \mathcal {X}, \{f (x): f \in \mathcal {F} \} = \mathbb {R}, \tag {completeness}
$$
for some family $\mathcal{F}$ of functions mapping from $\mathcal{X}$ to $\mathbb{R}$ . The completeness assumption rules out the bounded hypothesis sets typically used in practice, such as $\mathcal{H}_{\mathrm{lin}}$ . Moreover, the final bounds derived from [Awasthi et al., 2022b] are based on ad hoc proofs and may not be tight. In contrast, we study both complete and bounded hypothesis sets, and provide a very general tool to derive $\mathcal{H}$ -consistency bounds. Our bounds are tighter than those of Awasthi et al. [2022b] given for complete hypothesis sets, and extend beyond the completeness assumption.

[Mao et al., 2023h] complemented the work of [Awasthi et al., 2022b] by studying a wide family of comp-sum losses in multi-class classification, which generalizes the sum losses and includes as special cases the logistic loss [Verhulst, 1838, 1845, Berkson, 1944, 1951], the generalized cross-entropy loss [Zhang and Sabuncu, 2018], and the mean absolute error loss [Ghosh et al., 2017]. Here too, the completeness assumption was adopted; thus, their $\mathcal{H}$ -consistency bounds do not apply to the common bounded hypothesis sets used in practice. We illustrate the application of our general results through a broader set of surrogate losses than [Mao et al., 2023h] and significantly generalize the bounds of [Mao et al., 2023h] to bounded hypothesis sets.

Recently, Zheng et al. [2023] proved $\mathcal{H}$ -consistency bounds for the logistic loss with linear hypothesis sets in multi-class classification: $\mathcal{H}_{\mathrm{lin}} = \{x \mapsto w_y \cdot x + b_y \mid \|w_y\| \leq W, |b_y| \leq B, y \in \mathcal{Y}\}$ . However, their bounds require a crucial distributional assumption under which the minimizability gaps $\mathcal{M}_{\ell_{0-1}}(\mathcal{H}_{\mathrm{lin}})$ and $\mathcal{M}_{\ell_{\log}}(\mathcal{H}_{\mathrm{lin}})$ coincide with the approximation errors $\mathcal{R}_{\ell_{0-1}}^*(\mathcal{H}_{\mathrm{lin}}) - \mathcal{R}_{\ell_{0-1}}^*(\mathcal{H}_{\mathrm{all}})$ and $\mathcal{R}_{\ell_{\log}}^*(\mathcal{H}_{\mathrm{lin}}) - \mathcal{R}_{\ell_{\log}}^*(\mathcal{H}_{\mathrm{all}})$ , respectively (see the note before [Zheng et al., 2023, Appendix F]).
Thus, their bounds can be recovered as excess error bounds $\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^*(\mathcal{H}_{\mathrm{all}}) \leq \sqrt{2} \left( \mathcal{R}_{\ell_{\log}}(h) - \mathcal{R}_{\ell_{\log}}^*(\mathcal{H}_{\mathrm{all}}) \right)^{\frac{1}{2}}$ , which are less significant. In contrast, our $\mathcal{H}_{\mathrm{lin}}$ -consistency bound is much finer and takes into account the role of the parameter $B$ and that of the number of labels $n$ . We thus provide stronger and more significant guarantees for the logistic loss with linear hypothesis sets than [Zheng et al., 2023].

In summary, our general tools offer several remarkable advantages: they yield tight bounds, which improve upon the existing bounds of Awasthi et al. [2022b] given for complete hypothesis sets; they cover the comp-sum and constrained losses considered in [Awasthi et al., 2022a, Mao et al., 2023h] as well as new ones; they extend beyond the completeness assumption, with novel guarantees valid for bounded hypothesis sets; and they provide much stronger and more significant guarantees for the logistic loss with linear hypothesis sets than those of Zheng et al. [2023].

# 4 Comp-sum losses

In this section, we present a general characterization of $\mathcal{H}$ -consistency bounds for comp-sum losses, a family of loss functions including the logistic loss [Verhulst, 1838, 1845, Berkson, 1944, 1951], the sum exponential loss [Weston and Watkins, 1998, Awasthi et al., 2022b], the generalized cross-entropy loss [Zhang and Sabuncu, 2018], the mean absolute error loss [Ghosh et al., 2017], and many other loss functions used in applications.
This is a family of loss functions defined via the composition of a non-negative and non-decreasing function $\Psi$ with the sum exponential losses (see [Mao et al., 2023h]):

$$
\forall h \in \mathcal {H}, \forall (x, y) \in \mathcal {X} \times \mathcal {Y}, \quad \ell^ {\mathrm {comp}} (h, x, y) = \Psi \left(\sum_ {y ^ {\prime} \neq y} e ^ {h (x, y ^ {\prime}) - h (x, y)}\right). \tag {2}
$$

This expression can be equivalently written as $\ell^{\mathrm{comp}}(h,x,y) = \Phi \left(\frac{e^{h(x,y)}}{\sum_{y'\in\mathcal{Y}}e^{h(x,y')}}\right)$ , where $\Phi: u \mapsto \Psi\left(\frac{1-u}{u}\right)$ is a non-increasing auxiliary function from $[0,1]$ to $\mathbb{R}_+ \cup \{+\infty\}$ . As an example, the logistic loss corresponds to the choice $\Phi: u \mapsto -\log(u)$ and the sum exponential loss to $\Phi: u \mapsto \frac{1-u}{u}$ .

# 4.1 $\mathcal{H}$ -consistency bounds

In previous work, deriving $\mathcal{H}$ -consistency bounds required giving a new proof for each instance. The following result provides a very general tool for deriving such bounds with tightness guarantees. We introduce an error transformation function and show that deriving an $\mathcal{H}$ -consistency bound for comp-sum losses can be reduced to the calculation of this function.

Theorem 2 (H-consistency bound for comp-sum losses). Assume that $\mathcal{H}$ is symmetric and complete and that $\mathfrak{T}^{\mathrm{comp}}$ is convex.
Then, the following inequality holds for any hypothesis $h\in \mathcal{H}$ and any distribution:

$$
\mathfrak {T} ^ {\mathrm {comp}} \left(\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell_ {0 - 1}} (\mathcal {H})\right) \leq \mathcal {R} _ {\ell^ {\mathrm {comp}}} (h) - \mathcal {R} _ {\ell^ {\mathrm {comp}}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell^ {\mathrm {comp}}} (\mathcal {H}), \tag {3}
$$

with $\mathfrak{T}^{\mathrm{comp}}$ the $\mathcal{H}$ -estimation error transformation for comp-sum losses, defined for all $t\in [0,1]$ by $\mathfrak{T}^{\mathrm{comp}}(t) =$

$$
\left\{ \begin{array}{l l} \inf _ {\tau \in [ 0, \frac {1}{2} ]} \sup _ {\mu \in [ - \tau , 1 - \tau ]} \left\{\frac {1 + t}{2} \big [ \Phi (\tau) - \Phi (1 - \tau - \mu) \big ] + \frac {1 - t}{2} \big [ \Phi (1 - \tau) - \Phi (\tau + \mu) \big ] \right\} & n = 2 \\ \inf _ {P \in [ \frac {1}{n - 1} \vee t, 1 ]} \inf _ {\substack {\tau_ {1} \geq \max (\tau_ {2}, 1 / n) \\ \tau_ {1} + \tau_ {2} \leq 1, \tau_ {2} \geq 0}} \sup _ {\mu \in [ - \tau_ {2}, \tau_ {1} ]} \left\{\frac {P + t}{2} \big [ \Phi (\tau_ {2}) - \Phi (\tau_ {1} - \mu) \big ] + \frac {P - t}{2} \big [ \Phi (\tau_ {1}) - \Phi (\tau_ {2} + \mu) \big ] \right\} & n > 2. \end{array} \right.
$$

Furthermore, for any $t \in [0,1]$ , there exist a distribution $\mathcal{D}$ and a hypothesis $h \in \mathcal{H}$ such that $\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^*(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) = t$ and $\mathcal{R}_{\ell^{\mathrm{comp}}}(h) - \mathcal{R}_{\ell^{\mathrm{comp}}}^*(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{comp}}}(\mathcal{H}) = \mathfrak{T}^{\mathrm{comp}}(t)$ .

Thus, Theorem 2 shows that, when $\mathfrak{T}^{\mathrm{comp}}$ is convex, all that is needed to make these guarantees explicit is to calculate $\mathfrak{T}^{\mathrm{comp}}$ .
Moreover, the last statement shows the tightness of the guarantees derived using this function. The constraints in $\mathfrak{T}^{\mathrm{comp}}$ are due to the forms that the conditional probability vector and the scoring functions can take. These forms become more flexible for $n > 2$ , leading to more intricate constraints. Note that our $\mathcal{H}$ -consistency bounds are distribution-independent; thus, tightness cannot be claimed simultaneously for all distributions.

The general expression of $\mathfrak{T}^{\mathrm{comp}}$ in Theorem 2 is complex, but it can be considerably simplified under some broad assumptions, as shown by the following result.

Theorem 3 (characterization of $\mathfrak{T}^{\mathrm{comp}}$ ). Assume that $\Phi$ is convex, differentiable at $\frac{1}{2}$ , and that $\Phi' \left( \frac{1}{2} \right) < 0$ . Then, $\mathfrak{T}^{\mathrm{comp}}$ can be expressed as follows:

$$
\mathfrak {T} ^ {\mathrm {comp}} (t) = \left\{ \begin{array}{l l} \Phi \Big (\frac {1}{2} \Big) - \inf _ {\mu \in [ - \frac {1}{2}, \frac {1}{2} ]} \Big \{\frac {1 - t}{2} \Phi \Big (\frac {1}{2} + \mu \Big) + \frac {1 + t}{2} \Phi \Big (\frac {1}{2} - \mu \Big) \Big \} & n = 2 \\ \inf _ {\tau \in [ \frac {1}{n}, \frac {1}{2} ]} \Big \{\Phi (\tau) - \inf _ {\mu \in [ - \tau , \tau ]} \Big \{\frac {1 + t}{2} \Phi (\tau - \mu) + \frac {1 - t}{2} \Phi (\tau + \mu) \Big \} \Big \} & n > 2. \end{array} \right.
$$

The proof of this result, as well as those of the other theorems in this section, are given in Appendix C.

Examples. We now illustrate the application of our theory through several examples. To do so, we compute the $\mathcal{H}$ -estimation error transformation $\mathfrak{T}^{\mathrm{comp}}$ for comp-sum losses and present the results in

Table 1: $\mathcal{H}$ -estimation error transformation for common comp-sum losses.
| Auxiliary function $\Phi$ | $-\log(t)$ | $\frac{1}{t}-1$ | $\frac{1}{q}(1-t^{q}),\ q \in (0,1)$ | $1-t$ | $(1-t)^{2}$ |
| --- | --- | --- | --- | --- | --- |
| Transformation $\mathfrak{T}^{\mathrm{comp}}$ | $\frac{1+t}{2}\log(1+t) + \frac{1-t}{2}\log(1-t)$ | $1-\sqrt{1-t^{2}}$ | $\frac{1}{qn^{q}}\Big[\Big(\frac{(1+t)^{\frac{1}{1-q}}+(1-t)^{\frac{1}{1-q}}}{2}\Big)^{1-q}-1\Big]$ | $\frac{t}{n}$ | $\frac{t^{2}}{4}$ |
Table 1. Remarkably, by applying Theorem 2, we are able to obtain the same $\mathcal{H}$ -consistency bounds for comp-sum losses with $\Phi(t) = -\log(t)$ , $\frac{1}{t} - 1$ , $\frac{1}{q}(1 - t^q)$ with $q \in (0,1)$ , and $1 - t$ as those derived using ad hoc methods in [Mao et al., 2023h], as well as a novel tight $\mathcal{H}$ -consistency bound for the new comp-sum loss $\ell_{\mathrm{sq}} = \Big[1 - \frac{e^{h(x,y)}}{\sum_{y' \in \mathcal{Y}} e^{h(x,y')}}\Big]^2$ with $\Phi(t) = (1 - t)^2$ , given in Theorem 4.

The calculation of $\mathfrak{T}^{\mathrm{comp}}$ for all entries of Table 1 is detailed in Appendix C.3. To illustrate the effectiveness of our general tools, we show here how the error transformation function can be straightforwardly calculated in the case of the new surrogate loss $\ell_{\mathrm{sq}}$ .

Theorem 4 (H-consistency bound for a new comp-sum loss). Assume that $\mathcal{H}$ is symmetric and complete. Then, for all $h\in \mathcal{H}$ and any distribution, the following tight bound holds:

$$
\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}) \leq 2 \big (\mathcal {R} _ {\ell_ {\mathrm {sq}}} (h) - \mathcal {R} _ {\ell_ {\mathrm {sq}}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell_ {\mathrm {sq}}} (\mathcal {H}) \big) ^ {\frac {1}{2}} - \mathcal {M} _ {\ell_ {0 - 1}} (\mathcal {H}).
$$

Proof. For $n = 2$ , plugging $\Phi(t) = (1 - t)^2$ into Theorem 3 gives

$$
\mathfrak {T} ^ {\mathrm {comp}} (t) = \frac {1}{4} - \inf _ {\mu \in [ - \frac {1}{2}, \frac {1}{2} ]} \left\{\frac {1 - t}{2} \Big (\frac {1}{2} - \mu \Big) ^ {2} + \frac {1 + t}{2} \Big (\frac {1}{2} + \mu \Big) ^ {2} \right\} = \frac {1}{4} - \frac {1 - t ^ {2}}{4} = \frac {t ^ {2}}{4}.
$$

Similarly, for $n > 2$ , plugging $\Phi(t) = (1 - t)^2$ into Theorem 3 yields

$$
\begin{array}{r l} \mathfrak {T} ^ {\mathrm {comp}} (t) & = \inf _ {\tau \in \left[ \frac {1}{n}, \frac {1}{2} \right]} \left\{\left(1 - \tau\right) ^ {2} - \inf _ {\mu \in [ - \tau , \tau ]} \left\{\frac {1 + t}{2} (1 - \tau + \mu) ^ {2} + \frac {1 - t}{2} (1 - \tau - \mu) ^ {2} \right\} \right\} \\ & = \inf _ {\tau \in \left[ \frac {1}{n}, \frac {1}{2} \right]} \left\{\left(1 - \tau\right) ^ {2} - \left(1 - \tau\right) ^ {2} \left(1 - t ^ {2}\right) \right\} \quad (\text {minimum achieved at } \mu = t (\tau - 1)) \\ & = \frac {t ^ {2}}{4}. \quad (\text {minimum achieved at } \tau = \tfrac {1}{2}) \end{array}
$$

By Theorem 2, the bound obtained is tight, which completes the proof.

# 4.2 Extension to non-complete/bounded hypothesis sets: comp-sum losses

As pointed out earlier, the hypothesis sets typically used in practice are bounded. Let $\mathcal{F}$ be a family of real-valued functions $f$ with $|f(x)| \leq \Lambda(x)$ for all $x \in \mathcal{X}$ and such that all values in $[- \Lambda(x), +\Lambda(x)]$ can be reached, where $\Lambda(x) > 0$ is a fixed function on $\mathcal{X}$ . We study hypothesis sets $\overline{\mathcal{H}}$ in which each scoring function is bounded:

$$
\overline {{\mathcal {H}}} = \{h: \mathcal {X} \times \mathcal {Y} \rightarrow \mathbb {R} \mid h (\cdot , y) \in \mathcal {F}, \forall y \in \mathcal {Y} \}. \tag {4}
$$

This covers most hypothesis sets used in practice. The symmetric and complete hypothesis sets studied in previous work correspond to the special case of $\overline{\mathcal{H}}$ where $\Lambda(x) = +\infty$ for all $x \in \mathcal{X}$ .
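For intuition on what the bound $\Lambda$ buys: when $|h(x,y)| \leq \Lambda$ for every label, each softmax probability $e^{h(x,y)}/\sum_{y'} e^{h(x,y')}$ is confined to $\big[\frac{1}{1+(n-1)e^{2\Lambda}}, \frac{1}{1+(n-1)e^{-2\Lambda}}\big]$, the quantities $s_{\min}$ and $s_{\max}$ appearing in Theorem 5 below. The following numerical check is our own sketch (the values of $n$ and $\Lambda$ are arbitrary choices), not code from the paper:

```python
import math
import random

def softmax(scores):
    """Softmax probabilities of a score vector."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

n, lam = 4, 1.5   # number of labels and score bound Lambda (arbitrary choices)
s_min = 1.0 / (1.0 + (n - 1) * math.exp(2 * lam))
s_max = 1.0 / (1.0 + (n - 1) * math.exp(-2 * lam))

# Every softmax probability of Lambda-bounded scores lies in [s_min, s_max].
random.seed(0)
for _ in range(10_000):
    scores = [random.uniform(-lam, lam) for _ in range(n)]
    assert all(s_min <= p <= s_max for p in softmax(scores))

# The extremes are attained at the corner score vectors:
p_low = softmax([-lam] + [lam] * (n - 1))[0]   # own score -Lambda, others +Lambda
p_high = softmax([lam] + [-lam] * (n - 1))[0]  # own score +Lambda, others -Lambda
assert abs(p_low - s_min) < 1e-12 and abs(p_high - s_max) < 1e-12
```

This is why the error transformation for bounded sets below replaces the full range of softmax values by $[s_{\min}, s_{\max}]$.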
The hypothesis set of linear models $\mathcal{H}_{\mathrm{lin}}$ , defined by

$$
\mathcal {H} _ {\mathrm {lin}} = \left\{(x, y) \mapsto w _ {y} \cdot x + b _ {y} \mid \| w _ {y} \| \leq W, | b _ {y} | \leq B, y \in \mathcal {Y} \right\},
$$

is also a special instance of $\overline{\mathcal{H}}$ , with $\Lambda(x) = W\|x\| + B$ . Let us emphasize that previous studies did not establish any $\mathcal{H}$ -consistency bound for these general hypothesis sets $\overline{\mathcal{H}}$ .

Theorem 5 ( $\overline{\mathcal{H}}$ -consistency bound for comp-sum losses). Assume that $\overline{\mathcal{T}}^{\mathrm{comp}}$ is convex. Then, the following inequality holds for any hypothesis $h \in \overline{\mathcal{H}}$ and any distribution:

$$
\overline {{\mathcal {T}}} ^ {\mathrm {comp}} \left(\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\overline {{\mathcal {H}}}) + \mathcal {M} _ {\ell_ {0 - 1}} (\overline {{\mathcal {H}}})\right) \leq \mathcal {R} _ {\ell^ {\mathrm {comp}}} (h) - \mathcal {R} _ {\ell^ {\mathrm {comp}}} ^ {*} (\overline {{\mathcal {H}}}) + \mathcal {M} _ {\ell^ {\mathrm {comp}}} (\overline {{\mathcal {H}}})
$$

with $\overline{\mathcal{T}}^{\mathrm{comp}}$ the $\overline{\mathcal{H}}$ -estimation error transformation for comp-sum losses, defined for all $t \in [0,1]$ by $\overline{\mathcal{T}}^{\mathrm{comp}}(t) =$

$$
\left\{ \begin{array}{l l} \inf _ {\tau \in \left[ 0, \frac {1}{2} \right]} \sup _ {\mu \in \left[ s _ {\min } - \tau , 1 - \tau - s _ {\min } \right]} \left\{\frac {1 + t}{2} \left[ \Phi (\tau) - \Phi (1 - \tau - \mu) \right] + \frac {1 - t}{2} \left[ \Phi (1 - \tau) - \Phi (\tau + \mu) \right] \right\} & n = 2 \\ \inf _ {P \in \left[ \frac {1}{n - 1} \vee t, 1 \right]} \inf _ {s _ {\min } \leq \tau_ {2} \leq \tau_ {1} \leq s _ {\max }} \sup _ {\mu \in C} \left\{\frac {P + t}{2} \left[ \Phi (\tau_ {2}) - \Phi (\tau_ {1} - \mu) \right] + \frac {P - t}{2} \left[ \Phi
(\tau_ {1}) - \Phi (\tau_ {2} + \mu) \right] \right\} & n > 2, \end{array} \right.
$$

where $C = \left[\max \{s_{\min} - \tau_2, \tau_1 - s_{\max}\}, \min \{s_{\max} - \tau_2, \tau_1 - s_{\min}\}\right]$ , $s_{\max} = \frac{1}{1 + (n-1)e^{-2\inf_x\Lambda(x)}}$ and $s_{\min} = \frac{1}{1 + (n-1)e^{2\inf_x\Lambda(x)}}$ . Furthermore, for any $t \in [0,1]$ , there exist a distribution $\mathcal{D}$ and $h \in \overline{\mathcal{H}}$ such that $\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^*(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) = t$ and $\mathcal{R}_{\ell^{\mathrm{comp}}}(h) - \mathcal{R}_{\ell^{\mathrm{comp}}}^*(\overline{\mathcal{H}}) + \mathcal{M}_{\ell^{\mathrm{comp}}}(\overline{\mathcal{H}}) = \overline{\mathcal{T}}^{\mathrm{comp}}(t)$ .

This theorem significantly broadens the applicability of our framework, as it encompasses bounded hypothesis sets. The last statement of the theorem further shows the tightness of the $\overline{\mathcal{H}}$ -consistency bounds derived using this error transformation function. We now illustrate the application of our theory through several examples.

A. Example: logistic loss. We first consider the multinomial logistic loss, that is, $\ell^{\mathrm{comp}}$ with $\Phi(u) = -\log(u)$ , for which we give the following guarantee.

Theorem 6 ( $\overline{\mathcal{H}}$ -consistency bounds for logistic loss).
For any $h \in \overline{\mathcal{H}}$ and any distribution, we have + +$$ +\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\overline {{\mathcal {H}}}) + \mathcal {M} _ {\ell_ {0 - 1}} (\overline {{\mathcal {H}}}) \leq \Psi^ {- 1} \Big (\mathcal {R} _ {\ell_ {\log}} (h) - \mathcal {R} _ {\ell_ {\log}} ^ {*} (\overline {{\mathcal {H}}}) + \mathcal {M} _ {\ell_ {\log}} (\overline {{\mathcal {H}}}) \Big), +$$ + +where $\ell_{\log} = -\log \left(\frac{e^{h(x,y)}}{\sum_{y'\in\mathcal{Y}}e^{h(x,y')}}\right)$ and $\Psi (t) = \begin{cases} \frac{1 + t}{2}\log (1 + t) + \frac{1 - t}{2}\log (1 - t) & t\leq \frac{s_{\max} - s_{\min}}{s_{\min} + s_{\max}}\\ \frac{t}{2}\log \left(\frac{s_{\max}}{s_{\min}}\right) + \log \left(\frac{2\sqrt{s_{\max}s_{\min}}}{s_{\max} + s_{\min}}\right) & \text{otherwise.} \end{cases}$ + +The proof of Theorem 6 is given in Appendix E.2. With the help of some simple calculations, we can derive a simpler upper bound: + +$$ +\Psi^ {- 1} (t) \leq \Gamma (t) = \left\{ \begin{array}{l l} \sqrt {2 t} & t \leq \frac {(s _ {\max} - s _ {\min}) ^ {2}}{2 (s _ {\min} + s _ {\max}) ^ {2}} \\ \frac {2 (s _ {\min} + s _ {\max})}{s _ {\max} - s _ {\min}} t & \text {o t h e r w i s e .} \end{array} \right. +$$ + +When the relative difference between $s_{\mathrm{min}}$ and $s_{\mathrm{max}}$ is small, the coefficient of the linear term in $\Gamma$ explodes. On the other hand, making that difference large essentially turns $\Gamma$ into a square-root function for all values. In general, $\Lambda$ is not infinite since a regularization is used, which controls both the complexity of the hypothesis set and the magnitude of the scores. + +Comparison with [Mao et al., 2023h]. For the symmetric and complete hypothesis sets $\mathcal{H}$ considered in [Mao et al., 2023h], $\Lambda(x) = +\infty$ , $s_{\max} = 1$ , $s_{\min} = 0$ , $\Psi(t) = \frac{1+t}{2} \log(1+t) + \frac{1-t}{2} \log(1-t)$ and $\Gamma(t) = \sqrt{2t}$ . 
By Theorem 6, this yields an $\mathcal{H}$ -consistency bound for the logistic loss. + +Corollary 7 (H-consistency bounds for logistic loss). Assume that $\mathcal{H}$ is symmetric and complete. Then, for any $h\in \mathcal{H}$ and any distribution, we have + +$$ +\left. \mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}) \leq \Psi^ {- 1} \left(\mathcal {R} _ {\ell_ {\log}} (h) - \mathcal {R} _ {\ell_ {\log}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell_ {\log}} (\mathcal {H})\right) - \mathcal {M} _ {\ell_ {0 - 1}} (\mathcal {H}) \right. +$$ + +where $\Psi(t) = \frac{1 + t}{2} \log(1 + t) + \frac{1 - t}{2} \log(1 - t)$ and $\Psi^{-1}(t) \leq \sqrt{2t}$ . + +Corollary 7 recovers the $\mathcal{H}$ -consistency bounds of Mao et al. [2023h]. + +Comparison with [Awasthi et al., 2022a] and [Zheng et al., 2023]. For the linear models $\mathcal{H}_{\mathrm{lin}} = \{(x,y)\mapsto w_y\cdot x + b_y\mid \| w_y\| \leq W,|b_y|\leq B\}$ , we have $\Lambda (x) = W\| x\| +B$ . By Theorem 6, we obtain $\mathcal{H}_{\mathrm{lin}}$ -consistency bounds for logistic loss. + +Corollary 8 ( $\mathcal{H}_{\mathrm{lin}}$ -consistency bounds for logistic loss). 
For any $h \in \mathcal{H}_{\mathrm{lin}}$ and any distribution, + +$$ +\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H} _ {\mathrm {l i n}}) \leq \Psi^ {- 1} \left(\mathcal {R} _ {\ell_ {\log}} (h) - \mathcal {R} _ {\ell_ {\log}} ^ {*} (\mathcal {H} _ {\mathrm {l i n}}) + \mathcal {M} _ {\ell_ {\log}} (\mathcal {H} _ {\mathrm {l i n}})\right) - \mathcal {M} _ {\ell_ {0 - 1}} (\mathcal {H} _ {\mathrm {l i n}}) +$$ + +where $\Psi(t) = \begin{cases} \frac{1 + t}{2}\log (1 + t) + \frac{1 - t}{2}\log (1 - t) & t\leq \frac{(n - 1)(e^{2B} - e^{-2B})}{2 + (n - 1)(e^{2B} + e^{-2B})} \\ \frac{t}{2}\log \left(\frac{1 + (n - 1)e^{2B}}{1 + (n - 1)e^{-2B}}\right) + \log \left(\frac{2\sqrt{(1 + (n - 1)e^{2B})(1 + (n - 1)e^{-2B})}}{2 + (n - 1)(e^{2B} + e^{-2B})}\right) & \text{otherwise.} \end{cases}$ + +For $n = 2$ , we have $\Psi(t) = \begin{cases} \frac{t + 1}{2}\log (t + 1) + \frac{1 - t}{2}\log (1 - t) & t \leq \frac{e^{2B} - 1}{e^{2B} + 1} \\ \frac{t}{2}\log \left(\frac{1 + e^{2B}}{1 + e^{-2B}}\right) + \log \left(\frac{2\sqrt{(1 + e^{2B})(1 + e^{-2B})}}{2 + e^{2B} + e^{-2B}}\right) & \text{otherwise,} \end{cases}$ which coincides with the $\mathcal{H}_{\mathrm{lin}}$ -estimation error transformation in [Awasthi et al., 2022a]. Thus, Corollary 8 includes as a special case the $\mathcal{H}_{\mathrm{lin}}$ -consistency bounds given by Awasthi et al. [2022a] for binary classification. 
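To make the piecewise behavior of Corollary 8 concrete, $\Psi$ can be evaluated and inverted numerically. The sketch below is our own illustration (the values of $B$ and $n$ are arbitrary choices), not code from the paper; it also checks that, on the first piece, $\Psi(t) \geq t^2/2$, which recovers the $\sqrt{2t}$ upper bound on $\Psi^{-1}$ noted after Theorem 6:

```python
import math

def psi_lin(t, B, n):
    """Piecewise transformation Psi of Corollary 8 (logistic loss, H_lin)."""
    s_min = 1.0 / (1.0 + (n - 1) * math.exp(2 * B))
    s_max = 1.0 / (1.0 + (n - 1) * math.exp(-2 * B))
    threshold = (s_max - s_min) / (s_max + s_min)
    if t <= threshold:
        # First piece: same functional form as the complete-hypothesis-set case.
        return (1 + t) / 2 * math.log(1 + t) + (1 - t) / 2 * math.log(1 - t)
    # Second (linear) piece; its slope is controlled by B and n.
    return (t / 2) * math.log(s_max / s_min) + math.log(
        2 * math.sqrt(s_max * s_min) / (s_max + s_min))

def psi_lin_inverse(y, B, n):
    """Invert the increasing map t -> psi_lin(t, B, n) on [0, 1) by bisection."""
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(100):
        mid = (lo + hi) / 2
        if psi_lin(mid, B, n) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

B, n = 1.0, 10
s_min = 1.0 / (1.0 + (n - 1) * math.exp(2 * B))
s_max = 1.0 / (1.0 + (n - 1) * math.exp(-2 * B))
threshold = (s_max - s_min) / (s_max + s_min)
for t in [0.1, 0.3, 0.5, 0.7, 0.9]:
    v = psi_lin(t, B, n)
    assert abs(psi_lin_inverse(v, B, n) - t) < 1e-6   # Psi is invertible here
    if t <= threshold:
        assert v >= t * t / 2                          # hence Psi^{-1}(v) <= sqrt(2 v)
```

Bisection suffices here because $\Psi$ is increasing; the threshold between the two pieces grows with both $B$ and $n$.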
Our bounds of Corollary 8 improve upon the multi-class $\mathcal{H}_{\mathrm{lin}}$ -consistency bounds of recent work [Zheng et al., 2023, Theorem 3.3] in the following ways: i) their bound holds only for restricted distributions, while our bound holds for any distribution; ii) their bound holds only for restricted values of the estimation error $\mathcal{R}_{\ell_{\log}}(h) - \mathcal{R}_{\ell_{\log}}^{*}(\mathcal{H}_{\mathrm{lin}})$ , while ours holds for any value and, more precisely, admits a piecewise functional form; iii) under their distributional assumption, the minimizability gaps $\mathcal{M}_{\ell_{0-1}}(\mathcal{H}_{\mathrm{lin}})$ and $\mathcal{M}_{\ell_{\log}}(\mathcal{H}_{\mathrm{lin}})$ coincide with the approximation errors $\mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}_{\mathrm{lin}}) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}_{\mathrm{all}})$ and $\mathcal{R}_{\ell_{\log}}^{*}(\mathcal{H}_{\mathrm{lin}}) - \mathcal{R}_{\ell_{\log}}^{*}(\mathcal{H}_{\mathrm{all}})$ , respectively (see the note before [Zheng et al., 2023, Appendix F]). Thus, their bounds can be recovered as an excess error bound $\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}_{\mathrm{all}}) \leq \sqrt{2}\big[\mathcal{R}_{\ell_{\log}}(h) - \mathcal{R}_{\ell_{\log}}^{*}(\mathcal{H}_{\mathrm{all}})\big]^{\frac{1}{2}}$ , which is not specific to the hypothesis set and thus not as significant. In contrast, our $\mathcal{H}_{\mathrm{lin}}$ -consistency bound is finer and takes into account the role of the parameter $B$ as well as that of the number of labels $n$ ; iv) [Zheng et al., 2023, Theorem 3.3] only offers approximate bounds that are not tight; in contrast, by Theorem 5, our bound is tight.

Note that our $\mathcal{H}$ -consistency bounds in Theorem 6 are not limited to specific hypothesis set forms. They are directly applicable to various types of hypothesis sets, including neural networks.
For example, the same derivation can be extended to the one-hidden-layer neural networks studied in [Awasthi et al., 2022a] and to their multi-class generalization, by calculating and substituting the corresponding $\Lambda(x)$ . As a result, we can obtain novel and tight $\mathcal{H}$ -consistency bounds for bounded neural network hypothesis sets in multi-class classification, which highlights the versatility of our general tools.

B. Example: sum exponential loss. We next consider the sum exponential loss, that is, $\ell^{\mathrm{comp}}$ with $\Phi(u) = \frac{1 - u}{u}$ . By computing the error transformation in Theorem 5, we obtain the following result.

Theorem 9 ( $\overline{\mathcal{H}}$ -consistency bounds for sum exponential loss). For any $h \in \overline{\mathcal{H}}$ and any distribution,

$$
\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\overline {{\mathcal {H}}}) + \mathcal {M} _ {\ell_ {0 - 1}} (\overline {{\mathcal {H}}}) \leq \Psi^ {- 1} \big (\mathcal {R} _ {\ell_ {\exp}} (h) - \mathcal {R} _ {\ell_ {\exp}} ^ {*} (\overline {{\mathcal {H}}}) + \mathcal {M} _ {\ell_ {\exp}} (\overline {{\mathcal {H}}}) \big)
$$

where $\ell_{\mathrm{exp}} = \sum_{y^{\prime}\neq y}e^{h(x,y^{\prime}) - h(x,y)}$ and $\Psi (t) = \left\{ \begin{array}{ll}1 - \sqrt{1 - t^2} & t\leq \frac{s_{\mathrm{max}}^2 - s_{\mathrm{min}}^2}{s_{\mathrm{min}}^2 + s_{\mathrm{max}}^2}\\ \frac{s_{\mathrm{max}} - s_{\mathrm{min}}}{2s_{\mathrm{max}}s_{\mathrm{min}}} t - \frac{(s_{\mathrm{max}} - s_{\mathrm{min}})^2}{2s_{\mathrm{max}}s_{\mathrm{min}}(s_{\mathrm{max}} + s_{\mathrm{min}})} & \mathrm{otherwise.} \end{array} \right.$

The proof of Theorem 9 is given in Appendix E.3. Observe that $1 - \sqrt{1 - t^2} \geq t^2 / 2$ . By Theorem 9, making $s_{\min}$ close to zero, that is, making $\Lambda$ arbitrarily large for every $x \in \mathcal{X}$ , essentially turns $\Psi$ into a square function for all values.
In general, $\Lambda$ is not infinite since a regularization is used in practice, which controls both the complexity of the hypothesis set and the magnitude of the scores.

C. Example: generalized cross-entropy loss and mean absolute error loss. Due to space limitations, we present the results for these loss functions in Appendix E.

# 5 Constrained losses

In this section, we present a general characterization of $\mathcal{H}$ -consistency bounds for constrained losses, that is, loss functions defined via a constraint, as in [Lee et al., 2004]:

$$
\ell^ {\mathrm {cstnd}} (h, x, y) = \sum_ {y ^ {\prime} \neq y} \Phi (- h (x, y ^ {\prime})) \tag {5}
$$

with the constraint that $\sum_{y\in \mathcal{Y}}h(x,y) = 0$ , for a non-negative and non-increasing auxiliary function $\Phi$ .

# 5.1 $\mathcal{H}$ -consistency bounds

As in the previous section, we prove a result that supplies a very general tool, an error transformation function, for deriving $\mathcal{H}$ -consistency bounds for constrained losses. When $\mathfrak{T}^{\mathrm{cstnd}}$ is convex, to make these guarantees explicit, we only need to calculate $\mathfrak{T}^{\mathrm{cstnd}}$ .

Table 2: $\mathcal{H}$ -estimation error transformation for common constrained losses.
| Auxiliary function $\Phi$ | $\Phi_{\exp}(t) = e^{-t}$ | $\Phi_{\mathrm{hinge}}(t) = \max\{0, 1-t\}$ | $\Phi_{\mathrm{sq\text{-}hinge}}(t) = (1-t)^{2} 1_{t\leq 1}$ | $\Phi_{\mathrm{sq}}(t) = (1-t)^{2}$ |
| --- | --- | --- | --- | --- |
| Transformation $\mathfrak{T}^{\mathrm{cstnd}}$ | $2-\sqrt{4-t^{2}}$ | $t$ | $\frac{t^{2}}{2}$ | $\frac{t^{2}}{2}$ |

Theorem 10 ( $\mathcal{H}$ -consistency bound for constrained losses). Assume that $\mathcal{H}$ is symmetric and complete and that $\mathfrak{T}^{\mathrm{cstnd}}$ is convex. Then, for any hypothesis $h\in \mathcal{H}$ and any distribution,

$$
\mathfrak {T} ^ {\mathrm {cstnd}} \Big (\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell_ {0 - 1}} (\mathcal {H}) \Big) \leq \mathcal {R} _ {\ell^ {\mathrm {cstnd}}} (h) - \mathcal {R} _ {\ell^ {\mathrm {cstnd}}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell^ {\mathrm {cstnd}}} (\mathcal {H})
$$

with $\mathfrak{T}^{\mathrm{cstnd}}$ the $\mathcal{H}$ -estimation error transformation for constrained losses, defined for all $t \in [0,1]$ by $\mathfrak{T}^{\mathrm{cstnd}}(t) =$

$$
\left\{ \begin{array}{l l} \inf _ {\tau \geq 0} \sup _ {\mu \in \mathbb {R}} \Big \{\frac {1 - t}{2} \big [ \Phi (\tau) - \Phi (- \tau + \mu) \big ] + \frac {1 + t}{2} \big [ \Phi (- \tau) - \Phi (\tau - \mu) \big ] \Big \} & n = 2 \\ \inf _ {P \in \big [ \frac {1}{n - 1}, 1 \big ]} \inf _ {\tau_ {1} \geq \max \{\tau_ {2}, 0 \}} \sup _ {\mu \in \mathbb {R}} \Big \{\frac {2 - P - t}{2} \big [ \Phi (- \tau_ {2}) - \Phi (- \tau_ {1} + \mu) \big ] + \frac {2 - P + t}{2} \big [ \Phi (- \tau_ {1}) - \Phi (- \tau_ {2} - \mu) \big ] \Big \} & n > 2. \end{array} \right.
$$

Furthermore, for any $t \in [0,1]$ , there exist a distribution $\mathcal{D}$ and a hypothesis $h \in \mathcal{H}$ such that $\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^*(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) = t$ and $\mathcal{R}_{\ell^{\mathrm{cstnd}}}(h) - \mathcal{R}_{\ell^{\mathrm{cstnd}}}^*(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{cstnd}}}(\mathcal{H}) = \mathfrak{T}^{\mathrm{cstnd}}(t)$ .

Here too, the theorem guarantees the tightness of the bound. This general expression of $\mathfrak{T}^{\mathrm{cstnd}}$ can be considerably simplified under some broad assumptions, as shown by the following result.
Theorem 11 (characterization of $\mathfrak{T}^{\mathrm{cstnd}}$ ). Assume that $\Phi$ is convex, differentiable at zero, and that $\Phi'(0) < 0$ . Then, $\mathfrak{T}^{\mathrm{cstnd}}$ can be expressed as follows:

$$
\mathfrak {T} ^ {\mathrm {cstnd}} (t) = \left\{ \begin{array}{l l} \Phi (0) - \inf _ {\mu \in \mathbb {R}} \Big \{\frac {1 - t}{2} \Phi (\mu) + \frac {1 + t}{2} \Phi (- \mu) \Big \} & n = 2 \\ \inf _ {\tau \geq 0} \Big \{\big (2 - \frac {1}{n - 1}\big) \Phi (- \tau) - \inf _ {\mu \in \mathbb {R}} \Big \{\frac {2 - t - \frac {1}{n - 1}}{2} \Phi (- \tau + \mu) + \frac {2 + t - \frac {1}{n - 1}}{2} \Phi (- \tau - \mu) \Big \} \Big \} & n > 2 \end{array} \right.
$$

$$
\geq \left\{ \begin{array}{l l} \Phi (0) - \inf _ {\mu \in \mathbb {R}} \Big \{\frac {1 - t}{2} \Phi (\mu) + \frac {1 + t}{2} \Phi (- \mu) \Big \} & n = 2 \\ \inf _ {\tau \geq 0} \Big \{2 \Phi (- \tau) - \inf _ {\mu \in \mathbb {R}} \Big \{\frac {2 - t}{2} \Phi (- \tau + \mu) + \frac {2 + t}{2} \Phi (- \tau - \mu) \Big \} \Big \} & n > 2. \end{array} \right.
$$

The proofs of all the results in this section are given in Appendix D.

Examples. We now compute the $\mathcal{H}$ -estimation error transformation for constrained losses and present the results in Table 2. Here, we present the simplified $\mathfrak{T}^{\mathrm{cstnd}}$ obtained using the lower bound in Theorem 11. Remarkably, by applying Theorem 10, we are able to obtain tighter $\mathcal{H}$ -consistency bounds for constrained losses with $\Phi = \Phi_{\mathrm{hinge}}, \Phi_{\mathrm{sq\text{-}hinge}}, \Phi_{\exp}$ than those derived using ad hoc methods in [Awasthi et al., 2022b], as well as a novel $\mathcal{H}$ -consistency bound for the new constrained loss $\ell^{\mathrm{cstnd}}(h,x,y) = \sum_{y' \neq y} (1 + h(x,y'))^2$ with $\Phi(t) = (1 - t)^2$ .
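The simplified entries of Table 2 can be cross-checked by brute-forcing the lower-bound expression of Theorem 11 for $n > 2$ on a grid. The sketch below is our own verification code, not part of the paper; the grid ranges and tolerances are arbitrary choices:

```python
import math

def t_cstnd_lower(t, phi, taus, mus):
    """Grid evaluation of the n > 2 lower bound in Theorem 11:
    inf_tau { 2 Phi(-tau) - inf_mu { (2-t)/2 Phi(-tau+mu) + (2+t)/2 Phi(-tau-mu) } }."""
    best = float("inf")
    for tau in taus:
        inner = min((2 - t) / 2 * phi(-tau + mu) + (2 + t) / 2 * phi(-tau - mu)
                    for mu in mus)
        best = min(best, 2 * phi(-tau) - inner)
    return best

taus = [0.05 * i for i in range(41)]          # tau in [0, 2]
mus = [-3.0 + 0.01 * i for i in range(601)]   # mu in [-3, 3]

phi_exp = lambda u: math.exp(-u)              # constrained exponential loss
phi_sq_hinge = lambda u: max(0.0, 1.0 - u) ** 2  # squared hinge

for t in [0.2, 0.5, 0.8]:
    # Table 2 gives 2 - sqrt(4 - t^2) for Phi_exp ...
    assert abs(t_cstnd_lower(t, phi_exp, taus, mus) - (2 - math.sqrt(4 - t * t))) < 1e-2
    # ... and t^2 / 2 for Phi_sq-hinge.
    assert abs(t_cstnd_lower(t, phi_sq_hinge, taus, mus) - t * t / 2) < 1e-2
```

In both cases the infimum over $\tau$ is attained at $\tau = 0$, which is why a coarse $\tau$ grid already matches the closed forms.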
# 5.2 Extension to non-complete or bounded hypothesis sets

As in the case of comp-sum losses, we extend our results beyond the completeness assumption adopted in previous work and establish $\overline{\mathcal{H}}$ -consistency bounds for bounded hypothesis sets. This significantly broadens the applicability of our framework.

Theorem 12 ( $\overline{\mathcal{H}}$ -consistency bound for constrained losses). Assume that $\overline{\mathfrak{T}}^{\mathrm{cstnd}}$ is convex. Then, the following inequality holds for any hypothesis $h \in \overline{\mathcal{H}}$ and any distribution:

$$
\overline {{\mathfrak {T}}} ^ {\mathrm {cstnd}} \left(\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\overline {{\mathcal {H}}}) + \mathcal {M} _ {\ell_ {0 - 1}} (\overline {{\mathcal {H}}})\right) \leq \mathcal {R} _ {\ell^ {\mathrm {cstnd}}} (h) - \mathcal {R} _ {\ell^ {\mathrm {cstnd}}} ^ {*} (\overline {{\mathcal {H}}}) + \mathcal {M} _ {\ell^ {\mathrm {cstnd}}} (\overline {{\mathcal {H}}}), \tag {6}
$$

with $\overline{\mathfrak{T}}^{\mathrm{cstnd}}$ the $\overline{\mathcal{H}}$ -estimation error transformation for constrained losses, defined for all $t \in [0,1]$ by $\overline{\mathfrak{T}}^{\mathrm{cstnd}}(t) =$

$$
\left\{ \begin{array}{l l} \inf _ {\tau \geq 0} \sup _ {\mu \in [ \tau - \Lambda_ {\min }, \tau + \Lambda_ {\min } ]} \left\{\frac {1 - t}{2} \big [ \Phi (\tau) - \Phi (- \tau + \mu) \big ] + \frac {1 + t}{2} \big [ \Phi (- \tau) - \Phi (\tau - \mu) \big ] \right\} & n = 2 \\ \inf _ {P \in \big [ \frac {1}{n - 1}, 1 \big ]} \inf _ {\tau_ {1} \geq \max \{\tau_ {2}, 0 \}} \sup _ {\mu \in C} \Big \{\frac {2 - P - t}{2} \big [ \Phi (- \tau_ {2}) - \Phi (- \tau_ {1} + \mu) \big ] + \frac {2 - P + t}{2} \big [ \Phi (- \tau_ {1}) - \Phi (- \tau_ {2} - \mu) \big ] \Big \} & n > 2, \end{array} \right.
$$

where $C = \left[\max\{\tau_1, -\tau_2\} - \Lambda_{\min}, \min\{\tau_1, -\tau_2\} + \Lambda_{\min}\right]$ and $\Lambda_{\min} = \inf_{x \in \mathcal{X}} \Lambda(x)$. Furthermore, for any $t \in [0,1]$, there exist a distribution $\mathcal{D}$ and a hypothesis $h \in \overline{\mathcal{H}}$ such that $\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^*(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) = t$ and $\mathcal{R}_{\ell^{\mathrm{cstnd}}}(h) - \mathcal{R}_{\ell^{\mathrm{cstnd}}}^*(\overline{\mathcal{H}}) + \mathcal{M}_{\ell^{\mathrm{cstnd}}}(\overline{\mathcal{H}}) = \overline{\mathfrak{T}}^{\mathrm{cstnd}}(t)$.

The proof is presented in Appendix F.1. Next, we illustrate the application of our theory through an example of constrained exponential losses, that is, $\ell^{\mathrm{cstnd}}$ with $\Phi(t) = e^{-t}$. By using the error transformation in Theorem 12, we obtain new $\overline{\mathcal{H}}$-consistency bounds in Theorem 13 (see Appendix F.2 for the proof) for bounded hypothesis sets $\overline{\mathcal{H}}$.

Theorem 13 ($\overline{\mathcal{H}}$-consistency bounds for constrained exponential loss). Let $\Phi(t) = e^{-t}$. For any $h \in \overline{\mathcal{H}}$ and any distribution,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) \leq \Psi^{-1}\bigl(\mathcal{R}_{\ell^{\mathrm{cstnd}}}(h) - \mathcal{R}_{\ell^{\mathrm{cstnd}}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell^{\mathrm{cstnd}}}(\overline{\mathcal{H}})\bigr),
$$

where

$$
\Psi(t) = \left\{ \begin{array}{ll} 1 - \sqrt{1 - t^{2}} & t \leq \frac{e^{2\Lambda_{\min}} - 1}{e^{2\Lambda_{\min}} + 1}
\\ \frac{t}{2}\bigl(e^{\Lambda_{\min}} - e^{-\Lambda_{\min}}\bigr) + \frac{2 - e^{\Lambda_{\min}} - e^{-\Lambda_{\min}}}{2} & \text{otherwise.} \end{array} \right.
$$

Awasthi et al. [2022b] prove $\mathcal{H}$-consistency bounds for constrained exponential losses when $\mathcal{H}$ is symmetric and complete. Theorem 13 significantly generalizes those results to non-complete hypothesis sets. Different from the complete case, the functional form of our new bounds has two pieces, which correspond to linear and square-root convergence respectively, modulo the constants. Furthermore, the coefficient of the linear piece depends on the magnitude of $\Lambda_{\min}$. When $\Lambda_{\min}$ is small, the coefficient of the linear term in $\Psi^{-1}$ explodes. On the other hand, making $\Lambda_{\min}$ large essentially turns $\Psi^{-1}$ into a square-root function.

# 6 Discussion

Here, we further elaborate on the practical value of our tools and $\mathcal{H}$-consistency bounds. Our contributions include a more general and convenient mathematical tool for proving $\mathcal{H}$-consistency bounds, along with tighter bounds that enable a better comparison of surrogate loss functions and extensions beyond previous completeness assumptions. As mentioned by Awasthi et al. [2022b], given a hypothesis set $\mathcal{H}$, $\mathcal{H}$-consistency bounds can be used to compare different surrogate loss functions and select the most favorable one, which depends on: the functional form of the $\mathcal{H}$-consistency bound; the smoothness of the surrogate loss and its optimization properties; the approximation properties of the surrogate loss function, controlled by minimizability gaps; and the dependency on the number of classes in the multiplicative constant.
Consequently, a tighter $\mathcal{H}$ -consistency bound provides a more accurate comparison, as a loose bound might not adequately capture the full advantage of using one surrogate loss. In contrast, Bayes-consistency does not take into account the hypothesis set and is an asymptotic property, thereby failing to guide the comparison of different surrogate losses. + +Another application of our $\mathcal{H}$ -consistency bounds involves deriving generalization bounds for surrogate loss minimizers [Mao et al., 2023h], expressed in terms of the same quantities previously discussed. Therefore, when dealing with finite samples, a tighter $\mathcal{H}$ -consistency bound could also result in a corresponding tighter generalization bound. Moreover, our novel results extend beyond previous completeness assumptions, offering guarantees applicable to bounded hypothesis sets commonly used with regularization. This enhancement provides meaningful learning guarantees. Technically, our error transformation function serves as a very general tool for deriving $\mathcal{H}$ -consistency bounds with tightness guarantees. These functions are defined within each class of loss functions including comp-sum losses and constrained losses, and their formulation depends on the structure of the individual loss function class, the range of the hypothesis set and the number of classes. To derive explicit bounds, all that is needed is to calculate these error transformation functions. Under some broad assumptions on the auxiliary function within a loss function, these error transformation functions can be further distilled into more simplified forms, making them straightforward to compute. + +# 7 Conclusion + +We presented a general characterization and extension of $\mathcal{H}$ -consistency bounds for multi-class classification. We introduced new tools for deriving such bounds with tightness guarantees and illustrated their benefits through several applications and examples. 
Our proposed method is a significant advance in the theory of $\mathcal{H}$ -consistency bounds for multi-class classification. It can provide a general and powerful tool for deriving tight bounds for a wide variety of other loss functions and hypothesis sets. We believe that our work will open up new avenues of research in the field of multi-class classification consistency. + +# References + +A. Agarwal and S. Agarwal. On consistent surrogate risk minimization and property elicitation. In Conference on Learning Theory, pages 4-22, 2015. +P. Awasthi, N. Frank, A. Mao, M. Mohri, and Y. Zhong. Calibration and consistency of adversarial surrogate losses. In Advances in Neural Information Processing Systems, pages 9804-9815, 2021a. +P. Awasthi, A. Mao, M. Mohri, and Y. Zhong. A finer calibration analysis for adversarial robustness. arXiv preprint arXiv:2105.01550, 2021b. +P. Awasthi, A. Mao, M. Mohri, and Y. Zhong. $\mathcal{H}$ -consistency bounds for surrogate loss minimizers. In International Conference on Machine Learning, pages 1117-1174, 2022a. +P. Awasthi, A. Mao, M. Mohri, and Y. Zhong. Multi-class $\mathcal{H}$ -consistency bounds. In Advances in neural information processing systems, 2022b. +P. Awasthi, A. Mao, M. Mohri, and Y. Zhong. Theoretically grounded loss functions and algorithms for adversarial robustness. In International Conference on Artificial Intelligence and Statistics, pages 10077-10094, 2023a. +P. Awasthi, A. Mao, M. Mohri, and Y. Zhong. DC-programming for neural network optimizations. Journal of Global Optimization, 2023b. +P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138-156, 2006. +J. Berkson. Application of the logistic function to bio-assay. Journal of the American Statistical Association, 39:357-365, 1944. +J. Berkson. Why I prefer logits to probits. Biometrics, 7(4):327-339, 1951. +M. Blondel. 
Structured prediction with projection oracles. In Advances in neural information processing systems, 2019. +D.-R. Chen and T. Sun. Consistency of multiclass empirical risk minimization methods based on convex loss. Journal of Machine Learning Research, 7:2435-2447, 2006. +D.-R. Chen and D.-H. Xiang. The consistency of multicategory support vector machines. Advances in Computational Mathematics, 24(1):155-169, 2006. +C. Ciliberto, L. Rosasco, and A. Rudi. A consistent regularization approach for structured prediction. In Advances in neural information processing systems, 2016. +C. Cortes, G. DeSalvo, and M. Mohri. Learning with rejection. In Algorithmic Learning Theory, pages 67-82, 2016a. +C. Cortes, G. DeSalvo, and M. Mohri. Boosting with abstention. In Advances in Neural Information Processing Systems, pages 1660-1668, 2016b. +C. Cortes, G. DeSalvo, and M. Mohri. Theory and algorithms for learning with rejection in binary classification. Annals of Mathematics and Artificial Intelligence, to appear, 2023. +K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of machine learning research, 2(Dec):265-292, 2001. +K. Dembczynski, W. Kotlowski, and E. Hüllermeier. Consistent multilabel ranking through univariate losses. arXiv preprint arXiv:1206.6401, 2012. +U. Dogan, T. Glasmachers, and C. Igel. A unified view on multi-class support vector classification. Journal of Machine Learning Research, 17:1-32, 2016. +J. Finocchiaro, R. M. Frongillo, and B. Waggoner. An embedding framework for the design and analysis of consistent polyhedral surrogates. arXiv preprint arXiv:2206.14707, 2022. + +R. Frongillo and B. Waggoner. Surrogate regret bounds for polyhedral losses. In Advances in Neural Information Processing Systems, pages 21569-21580, 2021. +W. Gao and Z.-H. Zhou. On the consistency of multi-label learning. In Conference on learning theory, pages 341-358, 2011. +W. Gao and Z.-H. Zhou. 
On the consistency of AUC pairwise optimization. In International Joint Conference on Artificial Intelligence, 2015. +A. Ghosh, H. Kumar, and P. S. Sastry. Robust loss functions under label noise for deep neural networks. In Proceedings of the AAAI conference on artificial intelligence, 2017. +V. Kuznetsov, M. Mohri, and U. Syed. Multi-class deep boosting. In Advances in Neural Information Processing Systems, pages 2501-2509, 2014. +Y. Lee, Y. Lin, and G. Wahba. Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 99(465):67-81, 2004. +Y. Liu. Fisher consistency of multicategory support vector machines. In Artificial intelligence and statistics, pages 291-298, 2007. +P. Long and R. Servedio. Consistency versus realizable H-consistency for multiclass classification. In International Conference on Machine Learning, pages 801-809, 2013. +A. Mao, C. Mohri, M. Mohri, and Y. Zhong. Two-stage learning to defer with multiple experts. In Advances in neural information processing systems, 2023a. +A. Mao, M. Mohri, and Y. Zhong. Principled approaches for learning to defer with multiple experts. arXiv preprint arXiv:2310.14774, 2023b. +A. Mao, M. Mohri, and Y. Zhong. Predictor-rejector multi-class abstention: Theoretical analysis and algorithms. arXiv preprint arXiv:2310.14772, 2023c. +A. Mao, M. Mohri, and Y. Zhong. H-consistency bounds for pairwise misranking loss surrogates. In International conference on Machine learning, 2023d. +A. Mao, M. Mohri, and Y. Zhong. Ranking with abstention. In ICML 2023 Workshop The Many Facets of Preference-Based Learning, 2023e. +A. Mao, M. Mohri, and Y. Zhong. Theoretically grounded loss functions and algorithms for score-based multi-class abstention. arXiv preprint arXiv:2310.14770, 2023f. +A. Mao, M. Mohri, and Y. Zhong. Structured prediction with stronger consistency guarantees. 
In Advances in Neural Information Processing Systems, 2023g. +A. Mao, M. Mohri, and Y. Zhong. Cross-entropy loss functions: Theoretical analysis and applications. In International Conference on Machine Learning, 2023h. +C. Mohri, D. Andor, E. Choi, M. Collins, A. Mao, and Y. Zhong. Learning to reject with a fixed predictor: Application to decontextualization. arXiv preprint arXiv:2301.09044, 2023. +M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, second edition, 2018. +H. Narasimhan, H. Ramaswamy, A. Saha, and S. Agarwal. Consistent multiclass algorithms for complex performance measures. In International Conference on Machine Learning, pages 2398-2407, 2015. +A. Osokin, F. Bach, and S. Lacoste-Julien. On structured prediction theory with calibrated convex surrogate losses. In Advances in Neural Information Processing Systems, 2017. +F. Pedregosa, F. Bach, and A. Gramfort. On the consistency of ordinal regression methods. Journal of Machine Learning Research, 18:1-35, 2017. + +B. Á. Pires and C. Szepesvári. Multiclass classification calibration functions. arXiv preprint arXiv:1609.06385, 2016. +B. A. Pires, C. Szepesvari, and M. Ghavamzadeh. Cost-sensitive multiclass classification risk bounds. In International Conference on Machine Learning, pages 1391-1399, 2013. +H. G. Ramaswamy and S. Agarwal. Classification calibration dimension for general multiclass losses. In Advances in Neural Information Processing Systems, 2012. +H. G. Ramaswamy and S. Agarwal. Convex calibration dimension for multiclass loss matrices. Journal of Machine Learning Research, 17(1):397-441, 2016. +H. G. Ramaswamy, S. Agarwal, and A. Tewari. Convex calibrated surrogates for low-rank loss matrices with applications to subset ranking losses. In Advances in Neural Information Processing Systems, 2013. +H. G. Ramaswamy, A. Tewari, and S. Agarwal. Consistent algorithms for multiclass classification with a reject option. arXiv preprint arXiv:1505.04137, 2015. 
+P. Ravikumar, A. Tewari, and E. Yang. On NDCG consistency of listwise ranking methods. In International Conference on Artificial Intelligence and Statistics, pages 618-626, 2011. +I. Steinwart. How to compare different loss functions and their risks. Constructive Approximation, 26(2):225-287, 2007. +A. Tewari and P. L. Bartlett. On the consistency of multiclass classification methods. Journal of Machine Learning Research, 8(36):1007-1025, 2007. +A. Thilagar, R. Frongillo, J. J. Finocchiaro, and E. Goodwill. Consistent polyhedral surrogates for top-k classification and variants. In International Conference on Machine Learning, pages 21329-21359, 2022. +K. Uematsu and Y. Lee. On theoretically optimal ranking functions in bipartite ranking. Journal of the American Statistical Association, 112(519):1311-1322, 2017. +P. F. Verhulst. Notice sur la loi que la population suit dans son accroissement. Correspondance mathématique et physique, 10:113-121, 1838. +P. F. Verhulst. Recherches mathématiques sur la loi d'accroissement de la population. Nouveaux Mémoires de l'Académie Royale des Sciences et Belles-Lettres de Bruxelles, 18:1-42, 1845. +Y. Wang and C. Scott. Weston-Watkins hinge loss and ordered partitions. In Advances in neural information processing systems, pages 19873-19883, 2020. +Y. Wang and C. D. Scott. On classification-calibration of gamma-phi losses. arXiv preprint arXiv:2302.07321, 2023. +J. Weston and C. Watkins. Multi-class support vector machines. Technical report, Citeseer, 1998. +R. C. Williamson, E. Vernet, and M. D. Reid. Composite multiclass losses. Journal of Machine Learning Research, 17:1-52, 2016. +M. Zhang and S. Agarwal. Bayes consistency vs. H-consistency: The interplay between surrogate loss functions and the scoring function class. In Advances in Neural Information Processing Systems, pages 16927-16936, 2020. +M. Zhang, H. G. Ramaswamy, and S. Agarwal. Convex calibrated surrogates for the multi-label f-measure. 
In International Conference on Machine Learning, pages 11246-11255, 2020.
T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32(1):56-85, 2004a.
T. Zhang. Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research, 5(Oct):1225-1251, 2004b.

Z. Zhang and M. Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In Advances in neural information processing systems, 2018.
C. Zheng, G. Wu, F. Bao, Y. Cao, C. Li, and J. Zhu. Revisiting discriminative vs. generative classifiers: Theory and implications. In International Conference on Machine Learning, 2023.

# Contents of Appendix

- A Related work
- B Minimizability gap
  - B.1 Zero minimizability
  - B.2 Relationship with approximation error
  - B.3 Significance of $\mathcal{H}$-consistency bounds
- C Proofs for comp-sum losses
  - C.1 Proof of $\mathcal{H}$-consistency bounds with $\mathfrak{T}^{\mathrm{comp}}$ (Theorem 2)
  - C.2 Characterization of $\mathfrak{T}^{\mathrm{comp}}$ (Theorem 3)
  - C.3 Computation of examples
- D Proofs for constrained losses
  - D.1 Proof of $\mathcal{H}$-consistency bounds with $\mathfrak{T}^{\mathrm{cstnd}}$ (Theorem 10)
  - D.2 Characterization of $\mathfrak{T}^{\mathrm{cstnd}}$ (Theorem 11)
  - D.3 Computation of examples
- E Extensions of comp-sum losses
  - E.1 Proof of $\overline{\mathcal{H}}$-consistency bounds with $\overline{\mathfrak{T}}^{\mathrm{comp}}$ (Theorem 5)
  - E.2 Logistic loss
  - E.3 Sum exponential loss
  - E.4 Generalized cross-entropy loss
  - E.5 Mean absolute error loss
- F Extensions of constrained losses
  - F.1 Proof of $\overline{\mathcal{H}}$-consistency bound with $\overline{\mathfrak{T}}^{\mathrm{cstnd}}$ (Theorem 12)
  - F.2 Constrained exponential loss

# A Related work

The notions of Bayes-consistency (also known
as consistency) and calibration have been well studied not only with respect to the binary zero-one loss [Zhang, 2004a, Bartlett et al., 2006, Steinwart, 2007, Mohri et al., 2018], but also with respect to the multi-class zero-one loss [Zhang, 2004b, Tewari and Bartlett, 2007], the general multi-class losses [Ramaswamy and Agarwal, 2012, Narasimhan et al., 2015, Ramaswamy and Agarwal, 2016], the multi-class SVMs [Chen and Sun, 2006, Chen and Xiang, 2006, Liu, 2007, Dogan et al., 2016, Wang and Scott, 2020], the gamma-phi losses [Wang and Scott, 2023], the multi-label losses [Gao and Zhou, 2011, Dembczynski et al., 2012, Zhang et al., 2020], the losses with a reject option [Ramaswamy et al., 2015, Cortes et al., 2016a,b, 2023], the ranking losses [Ravikumar et al., 2011, Ramaswamy et al., 2013, Gao and Zhou, 2015, Uematsu and Lee, 2017], the cost sensitive losses [Pires et al., 2013, Pires and Szepesvári, 2016], the structured losses [Ciliberto et al., 2016, Osokin et al., 2017, Blondel, 2019], the polyhedral losses [Frongillo and Waggoner, 2021, Finocchiaro et al., 2022], the Top- $k$ classification losses [Thilagar et al., 2022], the proper losses [Agarwal and Agarwal, 2015, Williamson et al., 2016] and the losses of ordinal regression [Pedregosa et al., 2017]. + +Bayes-consistency only holds for the full family of measurable functions, which of course is distinct from the more restricted hypothesis set used by a learning algorithm. Therefore, a hypothesis set-dependent notion of $\mathcal{H}$ -consistency has been proposed by Long and Servedio [2013] in the realizable setting, used by Zhang and Agarwal [2020] for linear models, and generalized by Kuznetsov et al. [2014] to the structured prediction case. Long and Servedio [2013] showed that there exists a case where a Bayes-consistent loss is not $\mathcal{H}$ -consistent while inconsistent losses can be $\mathcal{H}$ -consistent. 
Zhang and Agarwal [2020] further investigated the phenomenon in [Long and Servedio, 2013] and showed that the situation of losses that are not $\mathcal{H}$-consistent with linear models can be remedied by carefully choosing a larger piecewise linear hypothesis set. Kuznetsov et al. [2014] proved positive results for the $\mathcal{H}$-consistency of several multi-class ensemble algorithms, as an extension of the $\mathcal{H}$-consistency results in [Long and Servedio, 2013].

Recently, Awasthi et al. [2022a,b], Mao et al. [2023h], Zheng et al. [2023] presented a series of results providing $\mathcal{H}$-consistency bounds. These are upper bounds on the zero-one estimation error of any predictor in a hypothesis set, expressed in terms of its surrogate loss estimation error. They are more informative guarantees than similar excess error bounds derived in the literature, which correspond to the special case where $\mathcal{H}$ is the family of all measurable functions [Zhang, 2004a, Bartlett et al., 2006, Mohri et al., 2018]. Awasthi et al. [2022a] studied $\mathcal{H}$-consistency bounds in binary classification. They provided a series of tight $\mathcal{H}$-consistency bounds for bounded hypothesis sets of linear models and one-hidden-layer neural networks. The subsequent study [Awasthi et al., 2022b] further generalized the framework to multi-class classification, presenting an extensive study of $\mathcal{H}$-consistency bounds for diverse multi-class surrogate losses, including negative results for max losses [Crammer and Singer, 2001] and positive results for sum losses [Weston and Watkins, 1998] and constrained losses [Lee et al., 2004]. However, the hypothesis sets adopted there were assumed to be complete, which rules out the bounded hypothesis sets typically used in practice. Moreover, the final bounds derived in [Awasthi et al., 2022b] are based on ad hoc methods and may not be tight.
[Mao et al., 2023h] complemented the previous work by studying a wide family of comp-sum losses in multi-class classification, which generalizes the sum losses and includes as special cases the logistic loss [Verhulst, 1838, 1845, Berkson, 1944, 1951], the generalized cross-entropy loss [Zhang and Sabuncu, 2018], and the mean absolute error loss [Ghosh et al., 2017]. Here too, the completeness assumption on the hypothesis sets was adopted, and their $\mathcal{H}$-consistency bounds do not apply to the bounded hypothesis sets common in practice. Zheng et al. [2023] proved $\mathcal{H}$-consistency bounds for the multi-class logistic loss with bounded linear hypothesis sets. However, their bounds require a crucial distributional assumption under which the minimizability gaps coincide with the approximation errors. Thus, their bounds can be recovered as excess error bounds, which are less significant.

This paper provides both a general characterization and an extension of $\mathcal{H}$-consistency bounds for multi-class classification. Our general tools and tight bounds show several remarkable advantages: first, they improve the existing bounds for complete hypothesis sets previously proven in [Awasthi et al., 2022b]; second, they encompass all the comp-sum and constrained losses studied thus far as well as many new ones [Awasthi et al., 2022a, Mao et al., 2023h]; third, they extend beyond the completeness assumption adopted in previous work; fourth, they give novel guarantees for bounded hypothesis sets; and finally, they help prove a much stronger and more significant guarantee for the logistic loss with a linear hypothesis set than [Zheng et al., 2023].
Other related work on $\mathcal{H}$-consistency bounds includes: $\mathcal{H}$-consistency bounds for pairwise ranking [Mao et al., 2023d,e]; theoretically grounded surrogate losses and algorithms for learning with abstention supported by $\mathcal{H}$-consistency bounds, including the study of score-based abstention [Mao et al., 2023f], predictor-rejector abstention [Mao et al., 2023c], and learning to abstain with a fixed predictor with application in decontextualization [Mohri et al., 2023]; principled approaches for learning to defer with multiple experts that benefit from strong $\mathcal{H}$-consistency bounds, including the single-stage scenario [Mao et al., 2023b] and a two-stage scenario [Mao et al., 2023a]; $\mathcal{H}$-consistency theory and algorithms for adversarial robustness [Awasthi et al., 2021a,b, 2023a, Mao et al., 2023h, Awasthi et al., 2023b]; and efficient algorithms and loss functions for structured prediction with stronger $\mathcal{H}$-consistency guarantees [Mao et al., 2023g].

# B Minimizability gap

This is a brief discussion of minimizability gaps and their properties. By definition, for any loss function $\ell$, the minimizability gap is defined by

$$
\mathcal{M}_{\ell}(\mathcal{H}) = \inf_{h\in \mathcal{H}}\left\{\underset{(x,y)\sim \mathcal{D}}{\mathbb{E}}\bigl[\ell(h,x,y)\bigr]\right\} - \underset{x}{\mathbb{E}}\Biggl[\inf_{h\in \mathcal{H}}\underset{y|x}{\mathbb{E}}\bigl[\ell(h,x,y)\bigr]\Biggr] = \mathcal{R}_{\ell}^{*}(\mathcal{H}) - \underset{x}{\mathbb{E}}\bigl[\mathcal{C}_{\ell}^{*}(\mathcal{H},x)\bigr].
$$

# B.1 Zero minimizability

Lemma 14. Let $\ell$ be a surrogate loss such that for $(x,y) \in \mathcal{X} \times \mathcal{Y}$ and any measurable function $h \in \mathcal{H}_{\mathrm{all}}$, the loss $\ell(h,x,y)$ only depends on $h(x)$ and $y$ (thus we can write $\ell(h,x,y) = \overline{\ell}(h(x),y)$ for some function $\overline{\ell}$).
Then, the minimizability gap vanishes: $\mathcal{M}_{\ell}(\mathcal{H}_{\mathrm{all}}) = 0$.

Proof. Fix $\epsilon > 0$. Then, by definition of the infimum, for any $x \in \mathcal{X}$, there exists $h_x \in \mathcal{H}_{\mathrm{all}}$ such that

$$
\mathop{\mathbb{E}}\limits_{y|x}\left[\ell(h_{x},x,y)\right] \leq \inf_{h\in \mathcal{H}_{\mathrm{all}}}\mathop{\mathbb{E}}\limits_{y|x}\left[\ell(h,x,y)\right] + \epsilon.
$$

Now, define the function $h$ by $h(x) = h_x(x)$, for all $x \in \mathcal{X}$. $h$ can be shown to be measurable, for example, when $\mathcal{X}$ admits a countable dense subset. Then,

$$
\begin{array}{ll}
\underset{(x, y) \sim \mathcal{D}}{\mathbb{E}}[\ell(h, x, y)] & = \underset{(x, y) \sim \mathcal{D}}{\mathbb{E}}[\bar{\ell}(h(x), y)] = \underset{(x, y) \sim \mathcal{D}}{\mathbb{E}}[\bar{\ell}(h_{x}(x), y)]\\
& = \underset{(x, y) \sim \mathcal{D}}{\mathbb{E}}\left[\ell(h_{x}, x, y)\right]\\
& \leq \mathbb{E}_{x}\left[\inf_{h\in \mathcal{H}_{\mathrm{all}}}\mathbb{E}_{y|x}\left[\ell(h,x,y)\right] + \epsilon\right]\\
& = \mathbb{E}_{x}\left[\mathcal{C}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}}, x)\right] + \epsilon.
\end{array}
$$

Thus, we have

$$
\inf_{h \in \mathcal{H}_{\mathrm{all}}} \underset{(x, y) \sim \mathcal{D}}{\mathbb{E}}\left[\ell(h, x, y)\right] \leq \underset{x}{\mathbb{E}}\left[\mathcal{C}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}}, x)\right] + \epsilon.
$$

Since the inequality holds for any $\epsilon > 0$, we have $\mathcal{R}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}}) = \inf_{h \in \mathcal{H}_{\mathrm{all}}} \mathbb{E}_{(x,y) \sim \mathcal{D}}[\ell(h,x,y)] \leq \mathbb{E}_x[\mathcal{C}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}},x)]$.
This implies equality since the inequality $\mathcal{R}_{\ell}^{*}(\mathcal{H}) \geq \mathbb{E}_x[\mathcal{C}_{\ell}^{*}(\mathcal{H},x)]$ holds for any $\mathcal{H}$.

# B.2 Relationship with approximation error

Let $\mathcal{A}_{\ell}$ denote the approximation error of a loss function $\ell$ and a hypothesis set $\mathcal{H}$: $\mathcal{A}_{\ell}(\mathcal{H}) = \mathcal{R}_{\ell}^{*}(\mathcal{H}) - \mathcal{R}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}})$. We will denote by $I_{\ell}(\mathcal{H})$ the difference of pointwise infima $I_{\ell}(\mathcal{H}) = \mathbb{E}_x[\mathcal{C}_\ell^*(\mathcal{H}, x) - \mathcal{C}_\ell^*(\mathcal{H}_{\mathrm{all}}, x)]$, which is non-negative. The minimizability gap can be decomposed as follows in terms of the approximation error and the difference of pointwise infima:

$$
\begin{array}{ll}
\mathcal{M}_{\ell}(\mathcal{H}) & = \mathcal{R}_{\ell}^{*}(\mathcal{H}) - \mathbb{E}_{x}\left[\mathcal{C}_{\ell}^{*}(\mathcal{H}, x)\right]\\
& = \mathcal{R}_{\ell}^{*}(\mathcal{H}) - \mathcal{R}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}}) + \mathcal{R}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}}) - \mathbb{E}_{x}\left[\mathcal{C}_{\ell}^{*}(\mathcal{H}, x)\right]\\
& = \mathcal{A}_{\ell}(\mathcal{H}) + \mathcal{R}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}}) - \mathbb{E}_{x}\left[\mathcal{C}_{\ell}^{*}(\mathcal{H}, x)\right]\\
& = \mathcal{A}_{\ell}(\mathcal{H}) + \mathbb{E}_{x}\left[\mathcal{C}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}}, x) - \mathcal{C}_{\ell}^{*}(\mathcal{H}, x)\right] \qquad \text{(by Lemma 14)}\\
& = \mathcal{A}_{\ell}(\mathcal{H}) - I_{\ell}(\mathcal{H}).
\end{array}
$$

The decomposition immediately implies the following result.

Lemma 15.
Let $\ell$ be a surrogate loss such that for $(x,y)\in \mathcal{X}\times \mathcal{Y}$ and any measurable function $h\in \mathcal{H}_{\mathrm{all}}$, the loss $\ell(h,x,y)$ only depends on $h(x)$ and $y$ (thus we can write $\ell(h,x,y) = \overline{\ell}(h(x),y)$ for some function $\overline{\ell}$). Then, for any hypothesis set $\mathcal{H}$, we have $\mathcal{M}_{\ell}(\mathcal{H})\leq \mathcal{A}_{\ell}(\mathcal{H})$.

By Lemma 1, when $\ell$ is the zero-one loss, $I_{\ell}(\mathcal{H}) = 0$ whenever the hypothesis set generates labels that cover all possible outcomes for each input. For a surrogate loss function, however, $I_{\ell}(\mathcal{H})$ is non-negative and generally non-zero.

Take the example of binary classification and denote the conditional distribution by $\eta(x) = \mathcal{D}(Y = 1 \mid X = x)$. Let $\mathcal{H}$ be a family of functions $h$ with $|h(x)| \leq \Lambda$ for all $x \in \mathcal{X}$ and such that all values in $[-\Lambda, +\Lambda]$ can be reached. Consider, for example, the exponential-based margin loss $\ell(h, x, y) = e^{-yh(x)}$. Then, $\mathcal{C}_{\ell}(h, x) = \eta(x)e^{-h(x)} + (1 - \eta(x))e^{h(x)}$. The infimum over all measurable functions can then be expressed as follows, for all $x$:

$$
\mathcal{C}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}}, x) = 2\sqrt{\eta(x)\left(1 - \eta(x)\right)},
$$

while the infimum over $\mathcal{H}$, $\mathcal{C}_{\ell}^{*}(\mathcal{H},x)$, depends on $\Lambda$ and can be expressed as

$$
\mathcal{C}_{\ell}^{*}(\mathcal{H}, x) = \left\{ \begin{array}{ll}
\max\{\eta(x), 1 - \eta(x)\} e^{-\Lambda} + \min\{\eta(x), 1 - \eta(x)\} e^{\Lambda} & \Lambda < \frac{1}{2}\Bigl|\log\frac{\eta(x)}{1 - \eta(x)}\Bigr|\\
2\sqrt{\eta(x)(1 - \eta(x))} & \text{otherwise.}
\end{array} \right.
$$

Thus, in the deterministic scenario,

$$
I_{\ell}(\mathcal{H}) = \mathbb{E}_{x}\bigl[\mathcal{C}_{\ell}^{*}(\mathcal{H}, x) - \mathcal{C}_{\ell}^{*}(\mathcal{H}_{\mathrm{all}}, x)\bigr] = e^{-\Lambda}.
$$

# B.3 Significance of $\mathcal{H}$ -consistency bounds

As shown in the previous section, for the target loss $\ell_{0-1}$, the minimizability gap coincides with the approximation error, $\mathcal{M}_{\ell_{0-1}}(\mathcal{H}) = \mathcal{A}_{\ell_{0-1}}(\mathcal{H})$, when the hypothesis set generates labels that cover all possible outcomes for each input. However, for a surrogate loss $\ell$, the minimizability gap is generally strictly less than the approximation error: $\mathcal{M}_{\ell}(\mathcal{H}) < \mathcal{A}_{\ell}(\mathcal{H})$. Thus, an $\mathcal{H}$-consistency bound, expressed as follows for some increasing function $\Gamma$,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) \leq \Gamma\left(\mathcal{R}_{\ell}(h) - \mathcal{R}_{\ell}^{*}(\mathcal{H}) + \mathcal{M}_{\ell}(\mathcal{H})\right),
$$

is more favorable than an excess error bound expressed in terms of approximation errors:

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{A}_{\ell_{0-1}}(\mathcal{H}) \leq \Gamma\left(\mathcal{R}_{\ell}(h) - \mathcal{R}_{\ell}^{*}(\mathcal{H}) + \mathcal{A}_{\ell}(\mathcal{H})\right).
$$

Here, $\Gamma$ is typically linear or the square-root function modulo constants. When $\mathcal{H} = \mathcal{H}_{\mathrm{all}}$, the family of all measurable functions, by Lemma 14 the $\mathcal{H}$-consistency bound coincides with the excess error bound and implies Bayes-consistency by taking the limit. It is therefore a stronger guarantee than an excess error bound and Bayes-consistency.
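The closed-form expression for $\mathcal{C}_{\ell}^{*}(\mathcal{H}, x)$ in Appendix B.2 can be sanity-checked numerically. The sketch below (the helper names are ours, purely illustrative) compares a brute-force minimization of $\eta e^{-h} + (1-\eta)e^{h}$ over $|h| \leq \Lambda$ against the two-regime closed form; in the deterministic case $\eta = 1$, the bounded infimum is $e^{-\Lambda}$ while the unconstrained infimum is $0$, recovering $I_{\ell}(\mathcal{H}) = e^{-\Lambda}$:

```python
import math

def cond_risk(eta, h):
    # conditional exponential risk: eta * e^{-h} + (1 - eta) * e^{h}
    return eta * math.exp(-h) + (1.0 - eta) * math.exp(h)

def inf_risk_bruteforce(eta, lam, steps=200000):
    # infimum of the conditional risk over |h| <= lam, by grid search
    return min(cond_risk(eta, -lam + 2.0 * lam * i / steps) for i in range(steps + 1))

def inf_risk_closed_form(eta, lam):
    # closed form from Appendix B.2: the constraint binds iff lam < |log(eta/(1-eta))| / 2
    if lam < 0.5 * abs(math.log(eta / (1.0 - eta))):
        return max(eta, 1.0 - eta) * math.exp(-lam) + min(eta, 1.0 - eta) * math.exp(lam)
    return 2.0 * math.sqrt(eta * (1.0 - eta))

print(inf_risk_bruteforce(0.9, 0.5), inf_risk_closed_form(0.9, 0.5))  # constraint binds
print(inf_risk_bruteforce(0.9, 2.0), inf_risk_closed_form(0.9, 2.0))  # unconstrained optimum reachable
```

Note that the closed form is undefined at $\eta \in \{0, 1\}$ (division by zero inside the logarithm); the deterministic case is checked with the brute-force helper, where the minimizer sits at the boundary $h = \Lambda$.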
# C Proofs for comp-sum losses

Let $y_{\max} = \operatorname{argmax}_{y \in \mathcal{Y}} p(x, y)$ and $\mathsf{h}(x) = \operatorname{argmax}_{y \in \mathcal{Y}} h(x, y)$, where we choose the label with the highest index under the natural ordering of labels as the tie-breaking strategy.

# C.1 Proof of $\mathcal{H}$-consistency bounds with $\mathfrak{T}^{\mathrm{comp}}$ (Theorem 2)

Theorem 2 ($\mathcal{H}$-consistency bound for comp-sum losses). Assume that $\mathcal{H}$ is symmetric and complete and that $\mathfrak{T}^{\mathrm{comp}}$ is convex. Then, the following inequality holds for any hypothesis $h\in \mathcal{H}$ and any distribution:

$$
\mathfrak{T}^{\mathrm{comp}}\left(\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H})\right) \leq \mathcal{R}_{\ell^{\mathrm{comp}}}(h) - \mathcal{R}_{\ell^{\mathrm{comp}}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{comp}}}(\mathcal{H}), \tag{3}
$$

with $\mathfrak{T}^{\mathrm{comp}}$ an $\mathcal{H}$-estimation error transformation for comp-sum losses defined for all $t\in [0,1]$ by

$$
\mathfrak{T}^{\mathrm{comp}}(t) =
$$

$$
\left\{ \begin{array}{ll} \inf_{\tau \in [0, \frac{1}{2}]} \sup_{\mu \in [-\tau, 1-\tau]} \Bigl\{\frac{1+t}{2}\bigl[\Phi(\tau) - \Phi(1-\tau-\mu)\bigr] + \frac{1-t}{2}\bigl[\Phi(1-\tau) - \Phi(\tau+\mu)\bigr]\Bigr\} & n = 2 \\ \inf_{P \in [\frac{1}{n-1} \vee t, 1]} \inf_{\substack{\tau_{1} \geq \max(\tau_{2}, 1/n) \\ \tau_{1} + \tau_{2} \leq 1, \tau_{2} \geq 0}} \sup_{\mu \in [-\tau_{2}, \tau_{1}]} \Bigl\{\frac{P+t}{2}\bigl[\Phi(\tau_{2}) - \Phi(\tau_{1}-\mu)\bigr] + \frac{P-t}{2}\bigl[\Phi(\tau_{1}) - \Phi(\tau_{2}+\mu)\bigr]\Bigr\} & n > 2. \end{array} \right.
$$

Furthermore, for any $t \in [0,1]$, there exist a distribution $\mathcal{D}$ and a hypothesis $h \in \mathcal{H}$ such that $\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^*(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) = t$ and $\mathcal{R}_{\ell^{\mathrm{comp}}}(h) - \mathcal{R}_{\ell^{\mathrm{comp}}}^*(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{comp}}}(\mathcal{H}) = \mathfrak{T}^{\mathrm{comp}}(t)$.

Proof. For the comp-sum loss $\ell^{\mathrm{comp}}$, the conditional $\ell^{\mathrm{comp}}$-risk can be expressed as follows:

$$
\begin{array}{l} \mathcal{C}_{\ell^{\mathrm{comp}}}(h, x) = \sum_{y \in \mathcal{Y}} p(x, y)\,\ell^{\mathrm{comp}}(h, x, y) \\ = \sum_{y \in \mathcal{Y}} p(x, y)\,\Phi\left(\frac{e^{h(x, y)}}{\sum_{y^{\prime} \in \mathcal{Y}} e^{h(x, y^{\prime})}}\right) \\ = \sum_{y \in \mathcal{Y}} p(x, y)\,\Phi\left(S_{h}(x, y)\right) \\ = p(x, y_{\max})\,\Phi\left(S_{h}(x, y_{\max})\right) + p(x, \mathsf{h}(x))\,\Phi\left(S_{h}(x, \mathsf{h}(x))\right) \\ + \sum_{y \notin \{y_{\max}, \mathsf{h}(x)\}} p(x, y)\,\Phi\left(S_{h}(x, y)\right), \\ \end{array}
$$

where we let $S_{h}(x,y) = \frac{e^{h(x,y)}}{\sum_{y^{\prime}\in\mathcal{Y}}e^{h(x,y^{\prime})}}$ for any $y\in \mathcal{Y}$, with the constraint that $\sum_{y\in \mathcal{Y}}S_h(x,y) = 1$. For any $h\in \mathcal{H}$ such that $\mathsf{h}(x)\neq y_{\max}$ and $x\in \mathcal{X}$, we can always find a family of hypotheses $\{h_\mu\} \subset \mathcal{H}$ such that $S_{h,\mu}(x,\cdot) = \frac{e^{h_{\mu}(x,\cdot)}}{\sum_{y^{\prime}\in\mathcal{Y}}e^{h_{\mu}(x,y^{\prime})}}$ takes the following values:

$$
S_{h, \mu}(x, y) = \left\{ \begin{array}{ll} S_{h}(x, y) & \text{if } y \notin \{y_{\max}, \mathsf{h}(x)\} \\ S_{h}(x, y_{\max}) + \mu & \text{if } y = \mathsf{h}(x) \\ S_{h}(x, \mathsf{h}(x)) - \mu & \text{if } y = y_{\max}. \end{array} \right.
$$

Note that $S_{h,\mu}$ satisfies the constraint:

$$
\sum_{y \in \mathcal{Y}} S_{h, \mu}(x, y) = \sum_{y \in \mathcal{Y}} S_{h}(x, y) = 1.
$$

Let $p_1 = p(x, y_{\max})$, $p_2 = p(x, \mathsf{h}(x))$, $\tau_1 = S_h(x, \mathsf{h}(x))$ and $\tau_2 = S_h(x, y_{\max})$ to simplify the notation. Then, by the definition of $S_{h,\mu}$, we have for any $h \in \mathcal{H}$ and $x \in \mathcal{X}$,

$$
\begin{array}{l} \mathcal{C}_{\ell^{\mathrm{comp}}}(h, x) - \inf_{\mu \in [-\tau_{2}, \tau_{1}]} \mathcal{C}_{\ell^{\mathrm{comp}}}(h_{\mu}, x) \\ = \sup_{\mu \in [-\tau_{2}, \tau_{1}]} \left\{p_{1}\left[\Phi(\tau_{2}) - \Phi(\tau_{1} - \mu)\right] + p_{2}\left[\Phi(\tau_{1}) - \Phi(\tau_{2} + \mu)\right]\right\} \\ = \sup_{\mu \in [-\tau_{2}, \tau_{1}]} \left\{\frac{P + p_{1} - p_{2}}{2}\left[\Phi(\tau_{2}) - \Phi(\tau_{1} - \mu)\right] + \frac{P - p_{1} + p_{2}}{2}\left[\Phi(\tau_{1}) - \Phi(\tau_{2} + \mu)\right]\right\} \quad (P = p_{1} + p_{2} \in \left[\tfrac{1}{n-1} \vee p_{1} - p_{2}, 1\right]) \\ \geq \inf_{P \in \left[\frac{1}{n-1} \vee p_{1} - p_{2}, 1\right]} \inf_{\substack{\tau_{1} \geq \max(\tau_{2}, 1/n) \\ \tau_{1} + \tau_{2} \leq 1, \tau_{2} \geq 0}} \sup_{\mu \in \left[-\tau_{2}, \tau_{1}\right]} \left\{\frac{P + p_{1} - p_{2}}{2}\left[\Phi(\tau_{2}) - \Phi(\tau_{1} - \mu)\right]\right. \\ \left. + \frac{P - p_{1} + p_{2}}{2}\left[\Phi(\tau_{1}) - \Phi(\tau_{2} + \mu)\right]\right\} \quad (\tau_{1} \geq \max(\tau_{2}, 1/n),\ \tau_{1} + \tau_{2} \leq 1,\ \tau_{2} \geq 0) \\ = \mathfrak{T}^{\mathrm{comp}}\left(p_{1} - p_{2}\right) \\ = \mathfrak{T}^{\mathrm{comp}}\left(\Delta\mathcal{C}_{\ell_{0-1}, \mathcal{H}}(h, x)\right), \quad (\text{by Lemma 1}) \\ \end{array}
$$

where for $n = 2$, an additional constraint $\tau_{1} + \tau_{2} = 1$ is imposed and the expression of $\mathfrak{T}^{\mathrm{comp}}$ is simplified. Since $\mathfrak{T}^{\mathrm{comp}}$ is convex, by Jensen's inequality, we obtain for any hypothesis $h \in \mathcal{H}$ and any distribution,

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{comp}}\left(\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H})\right) \\ = \mathfrak{T}^{\mathrm{comp}}\left(\mathbb{E}_{X}\left[\Delta\mathcal{C}_{\ell_{0-1}, \mathcal{H}}(h, x)\right]\right) \\ \leq \mathbb{E}_{X}\left[\mathfrak{T}^{\mathrm{comp}}\left(\Delta\mathcal{C}_{\ell_{0-1}, \mathcal{H}}(h, x)\right)\right] \\ \leq \mathbb{E}_{X}\left[\Delta\mathcal{C}_{\ell^{\mathrm{comp}}, \mathcal{H}}(h, x)\right] \\ = \mathcal{R}_{\ell^{\mathrm{comp}}}(h) - \mathcal{R}_{\ell^{\mathrm{comp}}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{comp}}}(\mathcal{H}). \\ \end{array}
$$

For the second part, we first consider $n = 2$. For any $t \in [0,1]$, we consider the distribution that concentrates on a singleton $\{x\}$ and satisfies $p(x,1) = \frac{1 + t}{2}$, $p(x,2) = \frac{1 - t}{2}$.
For any $\epsilon > 0$, by the definition of the infimum, we can take $h \in \mathcal{H}$ with $S_{h}(x,1) = \tau_{\epsilon} \in \left[0,\frac{1}{2}\right]$ such that

$$
\sup_{\mu \in [-\tau_{\epsilon}, 1 - \tau_{\epsilon}]} \left\{\frac{1+t}{2}\bigl[\Phi(\tau_{\epsilon}) - \Phi(1 - \tau_{\epsilon} - \mu)\bigr] + \frac{1-t}{2}\bigl[\Phi(1 - \tau_{\epsilon}) - \Phi(\tau_{\epsilon} + \mu)\bigr]\right\} < \mathfrak{T}^{\mathrm{comp}}(t) + \epsilon.
$$

Then,

$$
\begin{array}{l} \mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) = \mathcal{R}_{\ell_{0-1}}(h) - \mathbb{E}_{X}\left[\mathcal{C}_{\ell_{0-1}}^{*}(\mathcal{H}, x)\right] \\ = \mathcal{C}_{\ell_{0-1}}(h, x) - \mathcal{C}_{\ell_{0-1}}^{*}(\mathcal{H}, x) \\ = t \\ \end{array}
$$

and

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{comp}}(t) \leq \mathcal{R}_{\ell^{\mathrm{comp}}}(h) - \mathcal{R}_{\ell^{\mathrm{comp}}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{comp}}}(\mathcal{H}) \\ = \mathcal{R}_{\ell^{\mathrm{comp}}}(h) - \mathbb{E}_{X}\left[\mathcal{C}_{\ell^{\mathrm{comp}}}^{*}(\mathcal{H}, x)\right] \\ = \mathcal{C}_{\ell^{\mathrm{comp}}}(h, x) - \mathcal{C}_{\ell^{\mathrm{comp}}}^{*}(\mathcal{H}, x) \\ = \sup_{\mu \in [-\tau_{\epsilon}, 1 - \tau_{\epsilon}]} \left\{\frac{1+t}{2}\left[\Phi(\tau_{\epsilon}) - \Phi(1 - \tau_{\epsilon} - \mu)\right] + \frac{1-t}{2}\left[\Phi(1 - \tau_{\epsilon}) - \Phi(\tau_{\epsilon} + \mu)\right]\right\} \\ < \mathfrak{T}^{\mathrm{comp}}(t) + \epsilon. \\ \end{array}
$$

By letting $\epsilon \to 0$, we prove the tightness for $n = 2$.
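As a quick numerical sanity check of the $n = 2$ inf–sup expression defining $\mathfrak{T}^{\mathrm{comp}}$, one can approximate it by grid search for a concrete $\Phi$ and compare with the closed form derived in Section C.3. The sketch below (our own helper names, crude grid approximation, assuming $\Phi(u) = -\log u$) compares the definition against $\frac{1+t}{2}\log(1+t) + \frac{1-t}{2}\log(1-t)$:

```python
import math

def T_comp_numeric(t, n_tau=50, n_mu=2000):
    # grid approximation of
    #   inf_{tau in (0, 1/2]} sup_{mu in (-tau, 1-tau)} {
    #     (1+t)/2 [Phi(tau) - Phi(1-tau-mu)] + (1-t)/2 [Phi(1-tau) - Phi(tau+mu)] }
    # for Phi(u) = -log(u) and n = 2
    phi = lambda u: -math.log(u)
    best = float("inf")
    for i in range(1, n_tau + 1):
        tau = 0.5 * i / n_tau        # tau grid includes the optimizer tau = 1/2
        sup = -float("inf")
        for j in range(1, n_mu):
            mu = -tau + j / n_mu     # keeps tau+mu and 1-tau-mu strictly positive
            val = 0.5 * (1 + t) * (phi(tau) - phi(1 - tau - mu)) \
                + 0.5 * (1 - t) * (phi(1 - tau) - phi(tau + mu))
            sup = max(sup, val)
        best = min(best, sup)
    return best

def T_comp_closed(t):
    # closed form for Phi(u) = -log(u), derived in Section C.3 (valid for t < 1)
    return 0.5 * (1 + t) * math.log(1 + t) + 0.5 * (1 - t) * math.log(1 - t)

for t in (0.1, 0.4, 0.8):
    assert abs(T_comp_numeric(t) - T_comp_closed(t)) < 1e-3
```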
The proof for $n > 2$ directly extends from the case $n = 2$. Indeed, for any $t \in [0,1]$, we consider the distribution that concentrates on a singleton $\{x\}$ and satisfies $p(x,1) = \frac{1 + t}{2}$, $p(x,2) = \frac{1 - t}{2}$, $p(x,y) = 0$, $3 \leq y \leq n$. For any $\epsilon > 0$, by the definition of the infimum, we can take $h \in \mathcal{H}$ with $S_{h}(x,1) = \tau_{1,\epsilon}$, $S_{h}(x,2) = \tau_{2,\epsilon}$, $S_{h}(x,y) = 0$, $3 \leq y \leq n$, $\tau_{1,\epsilon} + \tau_{2,\epsilon} = 1$, such that

$$
\begin{array}{l} \inf_{P \in \left[\frac{1}{n-1} \vee t, 1\right]} \sup_{\mu \in \left[-\tau_{2,\epsilon}, \tau_{1,\epsilon}\right]} \left\{\frac{P+t}{2}\left[\Phi\left(\tau_{2,\epsilon}\right) - \Phi\left(\tau_{1,\epsilon} - \mu\right)\right] + \frac{P-t}{2}\left[\Phi\left(\tau_{1,\epsilon}\right) - \Phi\left(\tau_{2,\epsilon} + \mu\right)\right]\right\} \\ = \sup_{\mu \in [-\tau_{2,\epsilon}, \tau_{1,\epsilon}]} \left\{\frac{1+t}{2}\left[\Phi(\tau_{2,\epsilon}) - \Phi(\tau_{1,\epsilon} - \mu)\right] + \frac{1-t}{2}\left[\Phi(\tau_{1,\epsilon}) - \Phi(\tau_{2,\epsilon} + \mu)\right]\right\} \\ < \mathfrak{T}^{\mathrm{comp}}(t) + \epsilon. \\ \end{array}
$$

Then,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) = t
$$

and

$$
\mathfrak{T}^{\mathrm{comp}}(t) \leq \mathcal{R}_{\ell^{\mathrm{comp}}}(h) - \mathcal{R}_{\ell^{\mathrm{comp}}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{comp}}}(\mathcal{H}) < \mathfrak{T}^{\mathrm{comp}}(t) + \epsilon.
$$

By letting $\epsilon \rightarrow 0$, we prove the tightness for $n > 2$.

# C.2 Characterization of $\mathfrak{T}^{\mathrm{comp}}$ (Theorem 3)

Theorem 3 (characterization of $\mathfrak{T}^{\mathrm{comp}}$). Assume that $\Phi$ is convex, differentiable at $\frac{1}{2}$ and $\Phi'\left(\frac{1}{2}\right) < 0$.
Then, $\mathfrak{T}^{\mathrm{comp}}$ can be expressed as follows:

$$
\mathfrak{T}^{\mathrm{comp}}(t) = \left\{ \begin{array}{ll}\Phi\Big(\frac{1}{2}\Big) - \inf_{\mu \in \bigl[-\frac{1}{2},\frac{1}{2}\bigr]}\Big\{\frac{1 - t}{2}\Phi\Big(\frac{1}{2} + \mu\Big) + \frac{1 + t}{2}\Phi\Big(\frac{1}{2} - \mu\Big)\Big\} & n = 2\\ \inf_{\tau \in \bigl[\frac{1}{n},\frac{1}{2}\bigr]}\Big\{\Phi(\tau) - \inf_{\mu \in [-\tau,\tau]}\Big\{\frac{1 + t}{2}\Phi(\tau - \mu) + \frac{1 - t}{2}\Phi(\tau + \mu)\Big\}\Big\} & n > 2. \end{array} \right.
$$

Proof. For $n = 2$, we have

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{comp}}(t) = \inf_{\tau \in \left[0,\frac{1}{2}\right]}\sup_{\mu \in [-\tau, 1-\tau]}\left\{\frac{1 + t}{2}\bigl[\Phi(\tau) - \Phi(1 - \tau - \mu)\bigr] + \frac{1 - t}{2}\bigl[\Phi(1 - \tau) - \Phi(\tau + \mu)\bigr]\right\} \\ = \inf_{\tau \in [0, \frac{1}{2}]} \left(\frac{1 + t}{2}\Phi(\tau) + \frac{1 - t}{2}\Phi(1 - \tau) - \inf_{\mu \in [-\tau, 1-\tau]}\left\{\frac{1 + t}{2}\Phi(1 - \tau - \mu) + \frac{1 - t}{2}\Phi(\tau + \mu)\right\}\right) \\ = \inf_{\tau \in [0, \frac{1}{2}]} \left(\frac{1 + t}{2}\Phi(\tau) + \frac{1 - t}{2}\Phi(1 - \tau)\right) - \inf_{\mu \in [-\frac{1}{2}, \frac{1}{2}]}\left\{\frac{1 - t}{2}\Phi\left(\frac{1}{2} + \mu\right) + \frac{1 + t}{2}\Phi\left(\frac{1}{2} - \mu\right)\right\} \\ \geq \inf_{\tau \in [0, \frac{1}{2}]} \left(\Phi\left(\frac{1}{2}\right) + \Phi^{\prime}\left(\frac{1}{2}\right) t\left(\tau - \frac{1}{2}\right)\right) - \inf_{\mu \in \left[-\frac{1}{2}, \frac{1}{2}\right]}\left\{\frac{1 - t}{2}\Phi\left(\frac{1}{2} + \mu\right) + \frac{1 + t}{2}\Phi\left(\frac{1}{2} - \mu\right)\right\} \quad (\Phi \text{ is convex}) \\ = \Phi\left(\frac{1}{2}\right) - \inf_{\mu \in \left[-\frac{1}{2}, \frac{1}{2}\right]}\left\{\frac{1 - t}{2}\Phi\left(\frac{1}{2} + \mu\right) + \frac{1 + t}{2}\Phi\left(\frac{1}{2} - \mu\right)\right\}, \quad (\Phi^{\prime}\left(\tfrac{1}{2}\right) < 0,\ t\left(\tau - \tfrac{1}{2}\right) \leq 0) \\ \end{array}
$$

where the equality can be achieved by $\tau = \frac{1}{2}$.

For $n > 2$, we have

$$
\mathfrak{T}^{\mathrm{comp}}(t) = \inf_{P\in \left[\frac{1}{n-1} \vee t, 1\right]}\inf_{\substack{\tau_{1}\geq \max(\tau_{2}, 1/n)\\ \tau_{1} + \tau_{2}\leq 1,\ \tau_{2}\geq 0}}\sup_{\mu \in \left[-\tau_{2},\tau_{1}\right]}F(P,\tau_{1},\tau_{2},\mu),
$$

where we let $F(P, \tau_1, \tau_2, \mu) = \frac{P + t}{2}\left[\Phi(\tau_2) - \Phi(\tau_1 - \mu)\right] + \frac{P - t}{2}\left[\Phi(\tau_1) - \Phi(\tau_2 + \mu)\right]$. For simplicity, we assume that $\Phi$ is differentiable. For general convex $\Phi$, we can proceed by using left and right derivatives, which are non-decreasing. Differentiating $F$ with respect to $\mu$, we have

$$
\frac{\partial F}{\partial \mu} = \frac{P + t}{2}\Phi^{\prime}(\tau_{1} - \mu) + \frac{t - P}{2}\Phi^{\prime}(\tau_{2} + \mu).
$$

Using the fact that $P \in \left[\frac{1}{n-1} \vee t, 1\right]$, $t \in [0,1]$ and $\Phi'$ is non-decreasing, we obtain that $\frac{\partial F}{\partial \mu}$ is non-increasing. Furthermore, since $\Phi'$ is non-decreasing and non-positive and $\Phi$ is non-negative, we obtain that $\Phi^{\prime}(+\infty) = 0$. This implies that $\frac{\partial F}{\partial\mu}(+\infty)\leq 0$ and $\frac{\partial F}{\partial\mu}(-\infty)\geq 0$.
Therefore, there exists $\mu_0\in \mathbb{R}$ such that

$$
\frac{\partial F}{\partial \mu}(\mu_{0}) = \frac{P + t}{2}\Phi^{\prime}(\tau_{1} - \mu_{0}) + \frac{t - P}{2}\Phi^{\prime}(\tau_{2} + \mu_{0}) = 0.
$$

By taking $\mu = \tau_{1} - \tau_{2}$ and using the fact that $\tau_{2} \leq \frac{1}{2}$ and $\Phi'\left(\frac{1}{2}\right) < 0$, we have

$$
\frac{\partial F}{\partial \mu}\left(\tau_{1} - \tau_{2}\right) = \frac{P + t}{2}\Phi^{\prime}\left(\tau_{2}\right) + \frac{t - P}{2}\Phi^{\prime}\left(\tau_{1}\right) < 0.
$$

Thus, since $\frac{\partial F}{\partial \mu}$ is non-increasing, we obtain $\mu_0 < \tau_1 - \tau_2$. Differentiating $F$ with respect to $\tau_2$ at $\mu_0$, we have

$$
\frac{\partial F}{\partial \tau_{2}} = \frac{P + t}{2}\Phi^{\prime}(\tau_{2}) + \frac{t - P}{2}\Phi^{\prime}(\tau_{2} + \mu_{0}).
$$

Since $\Phi^{\prime}$ is non-decreasing, we obtain

$$
\frac{\partial F}{\partial \tau_{2}} \leq \frac{P + t}{2}\Phi^{\prime}(\tau_{2}) + \frac{t - P}{2}\Phi^{\prime}(\tau_{2} + \tau_{1} - \tau_{2}) = \frac{\partial F}{\partial \mu}(\tau_{1} - \tau_{2}) < 0,
$$

which implies that the infimum over $\tau_2$ is achieved when $\tau_2 = \tau_1$. Differentiating $F$ with respect to $P$ at $\mu_0$ and $\tau_1 = \tau_2$, by the convexity of $\Phi$, we obtain

$$
\frac{\partial F}{\partial P} = \frac{1}{2}\left[\Phi(\tau_{1}) - \Phi(\tau_{1} - \mu_{0}) + \Phi(\tau_{1}) - \Phi(\tau_{1} + \mu_{0})\right] \leq 0,
$$

which implies that the infimum over $P$ is achieved when $P = 1$.
Combining the above, we obtain

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{comp}}(t) = \inf_{\tau \in \left[\frac{1}{n},\frac{1}{2}\right]}\sup_{\mu \in [-\tau,\tau]}F(1,\tau,\tau,\mu) \\ = \inf_{\tau \in \left[\frac{1}{n}, \frac{1}{2}\right]}\left\{\Phi(\tau) - \inf_{\mu \in [-\tau, \tau]}\left\{\frac{1 + t}{2}\Phi(\tau - \mu) + \frac{1 - t}{2}\Phi(\tau + \mu)\right\}\right\}. \\ \end{array}
$$

# C.3 Computation of examples

Example: $\Phi(t) = -\log(t)$. For $n = 2$, plugging $\Phi(t) = -\log(t)$ into Theorem 3 gives

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{comp}}(t) = \log 2 - \inf_{\mu \in \left[-\frac{1}{2},\frac{1}{2}\right]}\left\{-\frac{1 - t}{2}\log\left(\frac{1}{2} + \mu\right) - \frac{1 + t}{2}\log\left(\frac{1}{2} - \mu\right)\right\} \\ = \frac{1 + t}{2}\log(1 + t) + \frac{1 - t}{2}\log(1 - t). \quad (\text{minimum achieved at } \mu = -\frac{t}{2}) \\ \end{array}
$$

Similarly, for $n > 2$, plugging $\Phi(t) = -\log(t)$ into Theorem 3 yields

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{comp}}(t) = \inf_{\tau \in \left[\frac{1}{n},\frac{1}{2}\right]}\left\{-\log\tau - \inf_{\mu \in \left[-\tau,\tau\right]}\left\{-\frac{1 - t}{2}\log(\tau + \mu) - \frac{1 + t}{2}\log(\tau - \mu)\right\}\right\} \\ = \frac{1 + t}{2}\log(1 + t) + \frac{1 - t}{2}\log(1 - t). \quad (\text{minimum achieved at } \mu = -\tau t) \\ \end{array}
$$

Example: $\Phi(t) = \frac{1}{t} - 1$. For $n = 2$, plugging $\Phi(t) = \frac{1}{t} - 1$ into Theorem 3 gives

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{comp}}(t) = 2 - \inf_{\mu \in \left[-\frac{1}{2}, \frac{1}{2}\right]}\left\{\frac{1 - t}{2}\frac{1}{\frac{1}{2} + \mu} + \frac{1 + t}{2}\frac{1}{\frac{1}{2} - \mu}\right\} \\ = 1 - \sqrt{1 - t^{2}}. \quad \left(\text{minimum achieved at } \mu = \frac{(1 - t)^{\frac{1}{2}} - (1 + t)^{\frac{1}{2}}}{2\left((1 + t)^{\frac{1}{2}} + (1 - t)^{\frac{1}{2}}\right)}\right) \\ \end{array}
$$

Similarly, for $n > 2$, plugging $\Phi(t) = \frac{1}{t} - 1$ into Theorem 3 yields

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{comp}}(t) = \inf_{\tau \in \left[\frac{1}{n},\frac{1}{2}\right]}\left\{\frac{1}{\tau} - \inf_{\mu \in [-\tau,\tau]}\left\{\frac{1 + t}{2}\frac{1}{\tau - \mu} + \frac{1 - t}{2}\frac{1}{\tau + \mu}\right\}\right\} \\ = \inf_{\tau \in \left[\frac{1}{n}, \frac{1}{2}\right]} \frac{1}{2\tau}\left(1 - \sqrt{1 - t^{2}}\right) \quad \left(\text{minimum achieved at } \mu = \frac{(1 - t)^{\frac{1}{2}} - (1 + t)^{\frac{1}{2}}}{(1 + t)^{\frac{1}{2}} + (1 - t)^{\frac{1}{2}}}\tau\right) \\ = 1 - \sqrt{1 - t^{2}}. \quad (\text{minimum achieved at } \tau = \frac{1}{2}) \\ \end{array}
$$

Example: $\Phi(t) = \frac{1}{q}(1 - t^q)$, $q \in (0,1)$. For $n = 2$, plugging $\Phi(t) = \frac{1}{q}(1 - t^q)$ into Theorem 3 gives

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{comp}}(t) = -\frac{1}{q2^{q}} - \inf_{\mu \in \left[-\frac{1}{2},\frac{1}{2}\right]}\left\{-\frac{1 - t}{2q}\bigg(\frac{1}{2} + \mu\bigg)^{q} - \frac{1 + t}{2q}\bigg(\frac{1}{2} - \mu\bigg)^{q}\right\} \quad \left(\text{minimum achieved at } \mu = \frac{(1 - t)^{\frac{1}{1 - q}} - (1 + t)^{\frac{1}{1 - q}}}{2\left((1 + t)^{\frac{1}{1 - q}} + (1 - t)^{\frac{1}{1 - q}}\right)}\right) \\ = \frac{1}{q2^{q}}\left(\frac{(1 + t)^{\frac{1}{1 - q}} + (1 - t)^{\frac{1}{1 - q}}}{2}\right)^{1 - q} - \frac{1}{q2^{q}}.
\\ \end{array}
$$

Similarly, for $n > 2$, plugging $\Phi(t) = \frac{1}{q}(1 - t^q)$ into Theorem 3 yields

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{comp}}(t) = \inf_{\tau \in \left[\frac{1}{n},\frac{1}{2}\right]}\left\{-\frac{\tau^{q}}{q} - \inf_{\mu \in \left[-\tau,\tau\right]}\left\{-\frac{1 + t}{2q}(\tau - \mu)^{q} - \frac{1 - t}{2q}(\tau + \mu)^{q}\right\}\right\} \\ = \inf_{\tau \in \left[\frac{1}{n}, \frac{1}{2}\right]}\left\{\frac{\tau^{q}}{q}\left(\frac{(1 + t)^{\frac{1}{1 - q}} + (1 - t)^{\frac{1}{1 - q}}}{2}\right)^{1 - q} - \frac{\tau^{q}}{q}\right\} \quad \left(\text{minimum achieved at } \mu = \frac{(1 - t)^{\frac{1}{1 - q}} - (1 + t)^{\frac{1}{1 - q}}}{(1 + t)^{\frac{1}{1 - q}} + (1 - t)^{\frac{1}{1 - q}}}\tau\right) \\ = \frac{1}{qn^{q}}\left(\frac{(1 + t)^{\frac{1}{1 - q}} + (1 - t)^{\frac{1}{1 - q}}}{2}\right)^{1 - q} - \frac{1}{qn^{q}}. \quad (\text{minimum achieved at } \tau = \frac{1}{n}) \\ \end{array}
$$

Example: $\Phi(t) = 1 - t$. For $n = 2$, plugging $\Phi(t) = 1 - t$ into Theorem 3 gives

$$
\mathfrak{T}^{\mathrm{comp}}(t) = \frac{1}{2} - \inf_{\mu \in [-\frac{1}{2}, \frac{1}{2}]}\left\{\frac{1 - t}{2}\bigg(\frac{1}{2} - \mu\bigg) + \frac{1 + t}{2}\bigg(\frac{1}{2} + \mu\bigg)\right\} = \frac{1}{2} - \frac{1 - t}{2} = \frac{t}{2}.
$$

Similarly, for $n > 2$, plugging $\Phi(t) = 1 - t$ into Theorem 3 yields

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{comp}}(t) = \inf_{\tau \in \left[\frac{1}{n}, \frac{1}{2}\right]}\left\{(1 - \tau) - \inf_{\mu \in [-\tau, \tau]}\left\{\frac{1 + t}{2}(1 - \tau + \mu) + \frac{1 - t}{2}(1 - \tau - \mu)\right\}\right\} \\ = \inf_{\tau \in \left[\frac{1}{n}, \frac{1}{2}\right]} \tau t \quad (\text{minimum achieved at } \mu = -\tau) \\ = \frac{t}{n}. \quad (\text{minimum achieved at } \tau = \frac{1}{n}) \\ \end{array}
$$

Example: $\Phi(t) = (1 - t)^2$.
For $n = 2$ , plugging in $\Phi(t) = (1 - t)^2$ in Theorem 3, gives + +$$ +\mathfrak {T} ^ {\mathrm {c o m p}} = \frac {1}{4} - \inf _ {\mu \in [ - \frac {1}{2}, \frac {1}{2} ]} \left\{\frac {1 - t}{2} \left(\frac {1}{2} - \mu\right) ^ {2} + \frac {1 + t}{2} \left(\frac {1}{2} + \mu\right) ^ {2} \right\} = \frac {1}{4} - \frac {1 - t ^ {2}}{4} = \frac {t ^ {2}}{4}. +$$ + +Similarly, for $n > 2$ , plugging in $\Phi(t) = (1 - t)^2$ in Theorem 3 yields + +$$ +\begin{array}{l} \mathfrak{T}^{\text{comp}} = \inf_{\tau \in \left[ \frac{1}{n},\frac{1}{2}\right]}\left\{(1 - \tau)^{2} - \inf_{\mu \in [-\tau ,\tau ]}\left\{\frac{1 + t}{2} (1 - \tau +\mu)^{2} + \frac{1 - t}{2} (1 - \tau -\mu)^{2}\right\} \right\} \\ = \inf _ {\tau \in \left[ \frac {1}{n}, \frac {1}{2} \right]} \left\{\left(1 - \tau\right) ^ {2} t ^ {2} \right\} \quad (\text {m i n i m u m a c h i e v e d a t} \mu = t (\tau - 1)) \\ = \frac {t ^ {2}}{4}. \quad \left(\text {m i n i m u m a c h i e v e d a t} \tau = \frac {1}{2}\right) \\ \end{array} +$$ + +# D Proofs for constrained losses + +Let $y_{\max} = \operatorname{argmax}_{y \in \mathcal{Y}} p(x, y)$ and $h(x) = \operatorname{argmax}_{y \in \mathcal{Y}} h(x, y)$ , where we choose the label with the highest index under the natural ordering of labels as the tie-breaking strategy. + +# D.1 Proof of $\mathcal{H}$ -consistency bounds with $\mathcal{T}^{\mathrm{cstnd}}$ (Theorem 10) + +Theorem 10 ( $\mathcal{H}$ -consistency bound for constrained losses). Assume that $\mathcal{H}$ is symmetric and complete. Assume that $\mathfrak{T}^{\mathrm{cstnd}}$ is convex. 
Then, for any hypothesis $h\in \mathcal{H}$ and any distribution,

$$
\mathfrak{T}^{\mathrm{cstnd}}\Big(\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H})\Big) \leq \mathcal{R}_{\ell^{\mathrm{cstnd}}}(h) - \mathcal{R}_{\ell^{\mathrm{cstnd}}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{cstnd}}}(\mathcal{H}),
$$

with the $\mathcal{H}$-estimation error transformation for constrained losses defined for all $t \in [0,1]$ by $\mathfrak{T}^{\mathrm{cstnd}}(t) =$

$$
\left\{ \begin{array}{ll} \inf_{\tau \geq 0}\sup_{\mu \in \mathbb{R}}\big\{\frac{1 - t}{2}\big[\Phi(\tau) - \Phi(-\tau + \mu)\big] + \frac{1 + t}{2}\big[\Phi(-\tau) - \Phi(\tau - \mu)\big]\big\} & n = 2 \\ \inf_{P \in \big[\frac{1}{n - 1}, 1\big]}\inf_{\tau_{1} \geq \max\{\tau_{2}, 0\}}\sup_{\mu \in \mathbb{R}}\big\{\frac{2 - P - t}{2}\big[\Phi(-\tau_{2}) - \Phi(-\tau_{1} + \mu)\big] + \frac{2 - P + t}{2}\big[\Phi(-\tau_{1}) - \Phi(-\tau_{2} - \mu)\big]\big\} & n > 2. \end{array} \right.
$$

Furthermore, for any $t \in [0,1]$, there exist a distribution $\mathcal{D}$ and a hypothesis $h \in \mathcal{H}$ such that $\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^*(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) = t$ and $\mathcal{R}_{\ell^{\mathrm{cstnd}}}(h) - \mathcal{R}_{\ell^{\mathrm{cstnd}}}^*(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{cstnd}}}(\mathcal{H}) = \mathfrak{T}^{\mathrm{cstnd}}(t)$.

Proof.
For the constrained loss $\ell^{\mathrm{cstnd}}$, the conditional $\ell^{\mathrm{cstnd}}$-risk can be expressed as follows:

$$
\begin{array}{l} \mathcal{C}_{\ell^{\mathrm{cstnd}}}(h, x) = \sum_{y \in \mathcal{Y}} p(x, y)\,\ell^{\mathrm{cstnd}}(h, x, y) \\ = \sum_{y \in \mathcal{Y}} p(x, y)\sum_{y^{\prime} \neq y}\Phi(-h(x, y^{\prime})) \\ = \sum_{y \in \mathcal{Y}}\Phi(-h(x, y))\sum_{y^{\prime} \neq y} p(x, y^{\prime}) \\ = \sum_{y \in \mathcal{Y}}\Phi(-h(x, y))(1 - p(x, y)) \\ = \Phi(-h(x, y_{\max}))(1 - p(x, y_{\max})) + \Phi(-h(x, \mathsf{h}(x)))(1 - p(x, \mathsf{h}(x))) \\ + \sum_{y \notin \{y_{\max}, \mathsf{h}(x)\}}\Phi(-h(x, y))(1 - p(x, y)). \\ \end{array}
$$

For any $h \in \mathcal{H}$ and $x \in \mathcal{X}$, by the symmetry and completeness of $\mathcal{H}$, we can always find a family of hypotheses $\{h_{\mu} : \mu \in \mathbb{R}\} \subset \mathcal{H}$ such that $h_{\mu}(x, \cdot)$ takes the following values:

$$
h_{\mu}(x, y) = \left\{ \begin{array}{ll} h(x, y) & \text{if } y \notin \{y_{\max}, \mathsf{h}(x)\} \\ h(x, y_{\max}) + \mu & \text{if } y = \mathsf{h}(x) \\ h(x, \mathsf{h}(x)) - \mu & \text{if } y = y_{\max}. \end{array} \right.
$$

Note that the hypotheses $h_\mu$ satisfy the constraint:

$$
\sum_{y \in \mathcal{Y}} h_{\mu}(x, y) = \sum_{y \in \mathcal{Y}} h(x, y) = 0, \quad \forall \mu \in \mathbb{R}.
$$

Let $p_1 = p(x, y_{\max})$, $p_2 = p(x, \mathsf{h}(x))$, $\tau_1 = h(x, \mathsf{h}(x))$ and $\tau_2 = h(x, y_{\max})$ to simplify the notation.
Then, by the definition of $h_\mu$, we have for any $h \in \mathcal{H}$ and $x \in \mathcal{X}$,

$$
\begin{array}{l} \mathcal{C}_{\ell^{\mathrm{cstnd}}}(h, x) - \inf_{\mu \in \mathbb{R}}\mathcal{C}_{\ell^{\mathrm{cstnd}}}(h_{\mu}, x) \\ = \sup_{\mu \in \mathbb{R}}\left\{\left(1 - p_{1}\right)\left[\Phi(-\tau_{2}) - \Phi(-\tau_{1} + \mu)\right] + \left(1 - p_{2}\right)\left[\Phi(-\tau_{1}) - \Phi(-\tau_{2} - \mu)\right]\right\} \\ = \sup_{\mu \in \mathbb{R}}\left\{\frac{2 - P - p_{1} + p_{2}}{2}\left[\Phi(-\tau_{2}) - \Phi(-\tau_{1} + \mu)\right] + \frac{2 - P + p_{1} - p_{2}}{2}\left[\Phi(-\tau_{1}) - \Phi(-\tau_{2} - \mu)\right]\right\} \quad (P = p_{1} + p_{2} \in \left[\tfrac{1}{n - 1}, 1\right]) \\ \geq \inf_{P \in \left[\frac{1}{n - 1}, 1\right]}\inf_{\tau_{1} \geq \max\left\{\tau_{2}, 0\right\}}\sup_{\mu \in \mathbb{R}}\left\{\frac{2 - P - p_{1} + p_{2}}{2}\left[\Phi(-\tau_{2}) - \Phi(-\tau_{1} + \mu)\right]\right. \\ \left. + \frac{2 - P + p_{1} - p_{2}}{2}\left[\Phi(-\tau_{1}) - \Phi(-\tau_{2} - \mu)\right]\right\} \quad (\tau_{1} \geq 0,\ \tau_{2} \leq \tau_{1}) \\ = \mathfrak{T}^{\mathrm{cstnd}}\left(p_{1} - p_{2}\right) \\ = \mathfrak{T}^{\mathrm{cstnd}}\left(\Delta\mathcal{C}_{\ell_{0-1}, \mathcal{H}}(h, x)\right), \quad (\text{by Lemma 1}) \\ \end{array}
$$

where for $n = 2$, an additional constraint $\tau_{1} + \tau_{2} = 0$ is imposed and the expression of $\mathfrak{T}^{\mathrm{cstnd}}$ is simplified.
Since $\mathfrak{T}^{\mathrm{cstnd}}$ is convex, by Jensen's inequality, we obtain for any hypothesis $h \in \mathcal{H}$ and any distribution,

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}}\left(\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H})\right) \\ = \mathfrak{T}^{\mathrm{cstnd}}\left(\mathbb{E}_{X}\left[\Delta\mathcal{C}_{\ell_{0-1}, \mathcal{H}}(h, x)\right]\right) \\ \leq \mathbb{E}_{X}\left[\mathfrak{T}^{\mathrm{cstnd}}\left(\Delta\mathcal{C}_{\ell_{0-1}, \mathcal{H}}(h, x)\right)\right] \\ \leq \mathbb{E}_{X}\left[\Delta\mathcal{C}_{\ell^{\mathrm{cstnd}}, \mathcal{H}}(h, x)\right] \\ = \mathcal{R}_{\ell^{\mathrm{cstnd}}}(h) - \mathcal{R}_{\ell^{\mathrm{cstnd}}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{cstnd}}}(\mathcal{H}). \\ \end{array}
$$

For the second part, we first consider $n = 2$. For any $t \in [0,1]$, we consider the distribution that concentrates on a singleton $\{x\}$ and satisfies $p(x,1) = \frac{1 + t}{2}$, $p(x,2) = \frac{1 - t}{2}$. For any $\epsilon > 0$, by the definition of the infimum, we can take $h \in \mathcal{H}$ with $h(x,2) = \tau_{\epsilon} \geq 0$ such that

$$
\sup_{\mu \in \mathbb{R}}\Bigg\{\frac{1 - t}{2}\big[\Phi(\tau_{\epsilon}) - \Phi(-\tau_{\epsilon} + \mu)\big] + \frac{1 + t}{2}\big[\Phi(-\tau_{\epsilon}) - \Phi(\tau_{\epsilon} - \mu)\big]\Bigg\} < \mathfrak{T}^{\mathrm{cstnd}}(t) + \epsilon.
$$

Then,

$$
\begin{array}{l} \mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) = \mathcal{R}_{\ell_{0-1}}(h) - \mathbb{E}_{X}\left[\mathcal{C}_{\ell_{0-1}}^{*}(\mathcal{H}, x)\right] \\ = \mathcal{C}_{\ell_{0-1}}(h, x) - \mathcal{C}_{\ell_{0-1}}^{*}(\mathcal{H}, x) \\ = t \\ \end{array}
$$

and

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}}(t) \leq \mathcal{R}_{\ell^{\mathrm{cstnd}}}(h) - \mathcal{R}_{\ell^{\mathrm{cstnd}}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{cstnd}}}(\mathcal{H}) \\ = \mathcal{R}_{\ell^{\mathrm{cstnd}}}(h) - \mathbb{E}_{X}\left[\mathcal{C}_{\ell^{\mathrm{cstnd}}}^{*}(\mathcal{H}, x)\right] \\ = \mathcal{C}_{\ell^{\mathrm{cstnd}}}(h, x) - \mathcal{C}_{\ell^{\mathrm{cstnd}}}^{*}(\mathcal{H}, x) \\ = \sup_{\mu \in \mathbb{R}}\left\{\frac{1 - t}{2}\left[\Phi\left(\tau_{\epsilon}\right) - \Phi(-\tau_{\epsilon} + \mu)\right] + \frac{1 + t}{2}\left[\Phi(-\tau_{\epsilon}) - \Phi(\tau_{\epsilon} - \mu)\right]\right\} \\ < \mathfrak{T}^{\mathrm{cstnd}}(t) + \epsilon. \\ \end{array}
$$

By letting $\epsilon \to 0$, we conclude the proof for $n = 2$. The proof for $n > 2$ directly extends from the case $n = 2$. Indeed, for any $t \in [0,1]$, we consider the distribution that concentrates on a singleton $\{x\}$ and satisfies $p(x,1) = \frac{1 + t}{2}$, $p(x,2) = \frac{1 - t}{2}$, $p(x,y) = 0$, $3 \leq y \leq n$. For any $\epsilon > 0$, by the definition
For any $\epsilon > 0$ , by the definition of infimum, we can take $h \in \mathcal{H}$ such that $h(x,1) = \tau_{1,\epsilon}$ , $h(x,2) = \tau_{2,\epsilon}$ , $h(x,y) = 0$ , $3 \leq y \leq n$ and satisfies $\tau_{1,\epsilon} + \tau_{2,\epsilon} = 0$ , and + +$$ +\begin{array}{l} \inf_{P\epsilon \bigl[\frac{1}{n - 1},1\bigr ]}\sup_{\mu \in \mathbb{R}}\Biggl\{\frac{2 - P - t}{2}\bigl[\Phi \bigl( - \tau_{2,\epsilon}\bigr) - \Phi \bigl( - \tau_{1,\epsilon} + \mu \bigr)\bigr ] + \frac{2 - P + t}{2}\bigl[\Phi \bigl( - \tau_{1,\epsilon}\bigr) - \Phi \bigl( - \tau_{2,\epsilon} - \mu \bigr)\bigr ]\Biggr \} \\ = \sup _ {\mu \in \mathbb {R}} \left\{\frac {1 - t}{2} \left[ \Phi (- \tau_ {2, \epsilon}) - \Phi (- \tau_ {1, \epsilon} + \mu) \right] + \frac {1 + t}{2} \left[ \Phi (- \tau_ {1, \epsilon}) - \Phi (- \tau_ {2, \epsilon} - \mu) \right] \right\} \\ < \mathcal {T} ^ {\mathrm {c s t n d}} (t) + \epsilon . \\ \end{array} +$$ + +Then, + +$$ +\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell_ {0 - 1}} (\mathcal {H}) = t +$$ + +and + +$$ +\mathfrak {T} ^ {\mathrm {c s t n d}} (t) \leq \mathcal {R} _ {\ell \mathrm {c s t n d}} (h) - \mathcal {R} _ {\ell \mathrm {c s t n d}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell \mathrm {c s t n d}} (\mathcal {H}) < \mathfrak {T} ^ {\mathrm {c s t n d}} (t) + \epsilon . +$$ + +# D.2 Characterization of $\mathcal{T}^{\mathrm{cstnd}}$ (Theorem 11) + +Theorem 11 (characterization of $\mathfrak{T}^{\mathrm{cstnd}}$ ). Assume that $\Phi$ is convex, differentiable at zero and $\Phi'(0) < 0$ . 
Then, $\mathfrak{T}^{\mathrm{cstnd}}$ can be expressed as follows:

$$
\begin{array}{rl} \mathfrak{T}^{\mathrm{cstnd}}(t) & = \left\{\begin{array}{ll} \Phi(0) - \inf_{\mu \in \mathbb{R}}\left\{\frac{1-t}{2}\Phi(\mu) + \frac{1+t}{2}\Phi(-\mu)\right\} & n = 2 \\ \inf_{\tau \geq 0}\left\{\left(2 - \frac{1}{n-1}\right)\Phi(-\tau) - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t-\frac{1}{n-1}}{2}\Phi(-\tau+\mu) + \frac{2+t-\frac{1}{n-1}}{2}\Phi(-\tau-\mu)\right\}\right\} & n > 2 \end{array}\right. \\ & \geq \left\{\begin{array}{ll} \Phi(0) - \inf_{\mu \in \mathbb{R}}\left\{\frac{1-t}{2}\Phi(\mu) + \frac{1+t}{2}\Phi(-\mu)\right\} & n = 2 \\ \inf_{\tau \geq 0}\left\{2\Phi(-\tau) - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t}{2}\Phi(-\tau+\mu) + \frac{2+t}{2}\Phi(-\tau-\mu)\right\}\right\} & n > 2. \end{array}\right. \end{array}
$$

Proof. For $n = 2$, we have

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}}(t) = \inf_{\tau \geq 0} \sup_{\mu \in \mathbb{R}}\left\{\frac{1-t}{2}\left[\Phi(\tau) - \Phi(-\tau+\mu)\right] + \frac{1+t}{2}\left[\Phi(-\tau) - \Phi(\tau-\mu)\right]\right\} \\ = \inf_{\tau \geq 0}\left(\frac{1-t}{2}\Phi(\tau) + \frac{1+t}{2}\Phi(-\tau) - \inf_{\mu \in \mathbb{R}}\left\{\frac{1-t}{2}\Phi(-\tau+\mu) + \frac{1+t}{2}\Phi(\tau-\mu)\right\}\right) \\ = \inf_{\tau \geq 0}\left(\frac{1-t}{2}\Phi(\tau) + \frac{1+t}{2}\Phi(-\tau)\right) - \inf_{\mu \in \mathbb{R}}\left\{\frac{1-t}{2}\Phi(\mu) + \frac{1+t}{2}\Phi(-\mu)\right\} \\ \geq \inf_{\tau \geq 0}\left(\Phi(0) - \Phi'(0)t\tau\right) - \inf_{\mu \in \mathbb{R}}\left\{\frac{1-t}{2}\Phi(\mu) + \frac{1+t}{2}\Phi(-\mu)\right\} \quad (\Phi \text{ is convex}) \\ = \Phi(0) - \inf_{\mu \in \mathbb{R}}\left\{\frac{1-t}{2}\Phi(\mu) + \frac{1+t}{2}\Phi(-\mu)\right\}, \quad (\Phi'(0) < 0,\ t\tau \geq 0) \end{array}
$$

where equality is achieved at $\tau = 0$.

For $n > 2$, we have

$$
\mathfrak{T}^{\mathrm{cstnd}}(t) = \inf_{P \in \left[\frac{1}{n-1}, 1\right]} \inf_{\tau_1 \geq \max\{\tau_2, 0\}} \sup_{\mu \in \mathbb{R}} F(P, \tau_1, \tau_2, \mu),
$$

where we let $F(P, \tau_1, \tau_2, \mu) = \frac{2-P-t}{2}\left[\Phi(-\tau_2) - \Phi(-\tau_1+\mu)\right] + \frac{2-P+t}{2}\left[\Phi(-\tau_1) - \Phi(-\tau_2-\mu)\right]$. For simplicity, we assume that $\Phi$ is differentiable. For general convex $\Phi$, we can proceed by using left and right derivatives, which are non-decreasing. Differentiating $F$ with respect to $\mu$, we have

$$
\frac{\partial F}{\partial \mu} = \frac{P+t-2}{2}\Phi'(-\tau_1+\mu) + \frac{2-P+t}{2}\Phi'(-\tau_2-\mu).
$$

Using the fact that $P \in \left[\frac{1}{n-1}, 1\right]$, $t \in [0,1]$ and $\Phi'$ is non-decreasing, we obtain that $\frac{\partial F}{\partial \mu}$ is non-increasing. Furthermore, since $\Phi'$ is non-decreasing and non-positive and $\Phi$ is non-negative, we obtain $\Phi'(+\infty) = 0$. This implies that $\frac{\partial F}{\partial \mu}(+\infty) \leq 0$ and $\frac{\partial F}{\partial \mu}(-\infty) \geq 0$. Therefore, there exists $\mu_0 \in \mathbb{R}$ such that

$$
\frac{\partial F}{\partial \mu}(\mu_0) = \frac{P+t-2}{2}\Phi'(-\tau_1+\mu_0) + \frac{2-P+t}{2}\Phi'(-\tau_2-\mu_0) = 0.
$$

By taking $\mu = \tau_1 - \tau_2$ and using the fact that $\Phi'(0) < 0$, we have

$$
\frac{\partial F}{\partial \mu}(\tau_1 - \tau_2) = \frac{P+t-2}{2}\Phi'(-\tau_2) + \frac{2-P+t}{2}\Phi'(-\tau_1) < 0.
$$

Thus, since $\frac{\partial F}{\partial \mu}$ is non-increasing, we obtain $\mu_0 < \tau_1 - \tau_2$.
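This argument can be illustrated numerically. The sketch below (not part of the proof) takes $\Phi(u) = e^{-u}$, picks illustrative values of $P$, $t$, $\tau_1$, $\tau_2$ (assumed here, with $n = 3$ so that $P \in [1/2, 1]$), locates the stationary point $\mu_0$ of $F$ by bisection on $\frac{\partial F}{\partial \mu}$, and checks that $\mu_0 < \tau_1 - \tau_2$:

```python
import math

# Illustrative values (assumptions, not from the paper): n = 3, so P lies in [1/2, 1].
P, t, tau1, tau2 = 0.5, 0.3, 1.0, 0.4

def dF_dmu(mu):
    # dF/dmu for Phi(u) = exp(-u), i.e. Phi'(u) = -exp(-u).
    return (0.5 * (P + t - 2) * (-math.exp(tau1 - mu))
            + 0.5 * (2 - P + t) * (-math.exp(tau2 + mu)))

# dF/dmu is positive for very negative mu and negative for very positive mu,
# so bisection brackets the stationary point mu_0.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if dF_dmu(mid) > 0:
        lo = mid
    else:
        hi = mid
mu0 = 0.5 * (lo + hi)

assert mu0 < tau1 - tau2  # the inequality established above
```

For this choice of $\Phi$ the root is also available in closed form, $\mu_0 = \frac{\tau_1 - \tau_2}{2} + \frac{1}{2}\log\frac{2-P-t}{2-P+t}$, which agrees with the bisection result.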
Differentiating $F$ with respect to $\tau_2$ at $\mu_0$, we have

$$
\frac{\partial F}{\partial \tau_2} = \frac{P+t-2}{2}\Phi'(-\tau_2) + \frac{2-P+t}{2}\Phi'(-\tau_2-\mu_0).
$$

Since $\Phi'$ is non-decreasing, we obtain

$$
\frac{\partial F}{\partial \tau_2} \leq \frac{P+t-2}{2}\Phi'(-\tau_2) + \frac{2-P+t}{2}\Phi'(-\tau_2 - \tau_1 + \tau_2) = \frac{\partial F}{\partial \mu}(\tau_1 - \tau_2) < 0,
$$

which implies that the infimum $\inf_{\tau_1 \geq \max\{\tau_2, 0\}}$ is achieved when $\tau_2 = \tau_1$. Differentiating $F$ with respect to $P$ at $\mu_0$ and $\tau_1 = \tau_2$, by the convexity of $\Phi$, we obtain

$$
\frac{\partial F}{\partial P} = \frac{1}{2}\left[\Phi(-\tau_1+\mu_0) - 2\Phi(-\tau_1) + \Phi(-\tau_1-\mu_0)\right] \geq 0,
$$

which implies that the infimum $\inf_{P \in \left[\frac{1}{n-1}, 1\right]}$ is achieved when $P = \frac{1}{n-1}$. Combining the above, we obtain

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}}(t) = \inf_{\tau \geq 0} \sup_{\mu \in \mathbb{R}} F\left(\frac{1}{n-1}, \tau, \tau, \mu\right) \\ = \inf_{\tau \geq 0}\left\{\left(2 - \frac{1}{n-1}\right)\Phi(-\tau) - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t-\frac{1}{n-1}}{2}\Phi(-\tau+\mu) + \frac{2+t-\frac{1}{n-1}}{2}\Phi(-\tau-\mu)\right\}\right\} \\ \geq \inf_{\tau \geq 0} \sup_{\mu \in \mathbb{R}} F(0, \tau, \tau, \mu) \\ = \inf_{\tau \geq 0}\left\{2\Phi(-\tau) - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t}{2}\Phi(-\tau+\mu) + \frac{2+t}{2}\Phi(-\tau-\mu)\right\}\right\}. \end{array}
$$

# D.3 Computation of examples

Example: $\Phi(t) = \Phi_{\exp}(t) = e^{-t}$.
For $n = 2$, plugging $\Phi(t) = e^{-t}$ into Theorem 11 gives

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}} = 1 - \inf_{\mu \in \mathbb{R}}\left\{\frac{1-t}{2}e^{-\mu} + \frac{1+t}{2}e^{\mu}\right\} \\ = 1 - \sqrt{1-t^2}. \quad (\text{minimum achieved at } \mu = \frac{1}{2}\log\frac{1-t}{1+t}) \end{array}
$$

For $n > 2$, plugging $\Phi(t) = e^{-t}$ into Theorem 11 yields

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}} \geq \inf_{\tau \geq 0}\left\{2e^{\tau} - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t}{2}e^{\tau-\mu} + \frac{2+t}{2}e^{\tau+\mu}\right\}\right\} \\ \geq 2 - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t}{2}e^{-\mu} + \frac{2+t}{2}e^{\mu}\right\} \quad (\text{minimum achieved at } \tau = 0) \\ = 2 - \sqrt{4-t^2}. \quad (\text{minimum achieved at } \mu = \frac{1}{2}\log\frac{2-t}{2+t}) \end{array}
$$

Example: $\Phi(t) = \Phi_{\mathrm{hinge}}(t) = \max\{0, 1-t\}$. For $n = 2$, plugging $\Phi(t) = \max\{0, 1-t\}$ into Theorem 11 gives

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}} = 1 - \inf_{\mu \in \mathbb{R}}\left\{\frac{1-t}{2}\max\{0, 1-\mu\} + \frac{1+t}{2}\max\{0, 1+\mu\}\right\} \\ = t. \quad (\text{minimum achieved at } \mu = -1) \end{array}
$$

For $n > 2$, plugging $\Phi(t) = \max\{0, 1-t\}$ into Theorem 11 yields

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}} \geq \inf_{\tau \geq 0}\left\{2\max\{0, 1+\tau\} - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t}{2}\max\{0, 1+\tau-\mu\} + \frac{2+t}{2}\max\{0, 1+\tau+\mu\}\right\}\right\} \\ = 2 - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t}{2}\max\{0, 1-\mu\} + \frac{2+t}{2}\max\{0, 1+\mu\}\right\} \quad (\text{minimum achieved at } \tau = 0) \\ = t. \quad (\text{minimum achieved at } \mu = -1) \end{array}
$$

Example: $\Phi(t) = \Phi_{\mathrm{sq\text{-}hinge}}(t) = (1-t)^2\mathbb{1}_{t \leq 1}$. For $n = 2$, plugging $\Phi(t) = (1-t)^2\mathbb{1}_{t \leq 1}$ into Theorem 11 gives

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}} = 1 - \inf_{\mu \in \mathbb{R}}\left\{\frac{1-t}{2}(1-\mu)^2\mathbb{1}_{\mu \leq 1} + \frac{1+t}{2}(1+\mu)^2\mathbb{1}_{\mu \geq -1}\right\} \\ = t^2. \quad (\text{minimum achieved at } \mu = -t) \end{array}
$$

For $n > 2$, plugging $\Phi(t) = (1-t)^2\mathbb{1}_{t \leq 1}$ into Theorem 11 yields

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}} \geq \inf_{\tau \geq 0}\left\{2(1+\tau)^2\mathbb{1}_{\tau \geq -1} - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t}{2}(1+\tau-\mu)^2\mathbb{1}_{-\tau+\mu \leq 1} + \frac{2+t}{2}(1+\tau+\mu)^2\mathbb{1}_{\tau+\mu \geq -1}\right\}\right\} \\ \geq 2 - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t}{2}(1-\mu)^2\mathbb{1}_{\mu \leq 1} + \frac{2+t}{2}(1+\mu)^2\mathbb{1}_{\mu \geq -1}\right\} \quad (\text{minimum achieved at } \tau = 0) \\ = \frac{t^2}{2}. \quad (\text{minimum achieved at } \mu = -\frac{t}{2}) \end{array}
$$

Example: $\Phi(t) = \Phi_{\mathrm{sq}}(t) = (1-t)^2$. For $n = 2$, plugging $\Phi(t) = (1-t)^2$ into Theorem 11 gives

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}} = 1 - \inf_{\mu \in \mathbb{R}}\left\{\frac{1-t}{2}(1-\mu)^2 + \frac{1+t}{2}(1+\mu)^2\right\} \\ = t^2. \quad (\text{minimum achieved at } \mu = -t) \end{array}
$$

For $n > 2$, plugging $\Phi(t) = (1-t)^2$ into Theorem 11 yields

$$
\begin{array}{l} \mathfrak{T}^{\mathrm{cstnd}} \geq \inf_{\tau \geq 0}\left\{2(1+\tau)^2 - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t}{2}(1+\tau-\mu)^2 + \frac{2+t}{2}(1+\tau+\mu)^2\right\}\right\} \\ \geq 2 - \inf_{\mu \in \mathbb{R}}\left\{\frac{2-t}{2}(1-\mu)^2 + \frac{2+t}{2}(1+\mu)^2\right\} \quad (\text{minimum achieved at } \tau = 0) \\ = \frac{t^2}{2}. \quad (\text{minimum achieved at } \mu = -\frac{t}{2}) \end{array}
$$

# E Extensions of comp-sum losses

# E.1 Proof of $\overline{\mathcal{H}}$-consistency bounds with $\overline{\mathfrak{T}}^{\mathrm{comp}}$ (Theorem 5)

Theorem 5 ($\overline{\mathcal{H}}$-consistency bound for comp-sum losses). Assume that $\overline{\mathfrak{T}}^{\mathrm{comp}}$ is convex.
Then, the following inequality holds for any hypothesis $h \in \overline{\mathcal{H}}$ and any distribution: + +$$ +\overline {{\mathcal {T}}} ^ {\mathrm {c o m p}} \left(\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\overline {{\mathcal {H}}}) + \mathcal {M} _ {\ell_ {0 - 1}} (\overline {{\mathcal {H}}})\right) \leq \mathcal {R} _ {\ell^ {\mathrm {c o m p}}} (h) - \mathcal {R} _ {\ell^ {\mathrm {c o m p}}} ^ {*} (\overline {{\mathcal {H}}}) + \mathcal {M} _ {\ell^ {\mathrm {c o m p}}} (\overline {{\mathcal {H}}}) +$$ + +with $\overline{\mathfrak{T}}^{\mathrm{comp}}$ the $\overline{\mathcal{H}}$ -estimation error transformation for comp-sum losses defined for all $t \in [0,1]$ by $\overline{\mathfrak{T}}^{\mathrm{comp}}(t) =$ + +$$ +\left\{ \begin{array}{l l} \inf _ {\tau \in \left[ 0, \frac {1}{2} \right]} \sup _ {\mu \in \left[ s _ {\min } - \tau , 1 - \tau - s _ {\min } \right]} \left\{\frac {1 + t}{2} \big [ \Phi (\tau) - \Phi (1 - \tau - \mu) \big ] + \frac {1 - t}{2} \big [ \Phi (1 - \tau) - \Phi (\tau + \mu) \big ] \right\} & n = 2 \\ \inf _ {P \in \left[ \frac {1}{n - 1} \vee t, 1 \right]} \inf _ {S _ {\min } \leq \tau_ {2} \leq \tau_ {1} \leq S _ {\max }} \sup _ {\mu \in C} \Big \{\frac {P + t}{2} \big [ \Phi (\tau_ {2}) - \Phi (\tau_ {1} - \mu) \big ] + \frac {P - t}{2} \big [ \Phi (\tau_ {1}) - \Phi (\tau_ {2} + \mu) \big ] \Big \} & n > 2, \end{array} \right. +$$ + +where $C = \left[\max \{s_{\min} - \tau_2, \tau_1 - s_{\max}\}, \min \{s_{\max} - \tau_2, \tau_1 - s_{\min}\}\right]$ , $s_{\max} = \frac{1}{1 + (n-1)e^{-2\inf_x\Lambda(x)}}$ and $s_{\min} = \frac{1}{1 + (n-1)e^{2\inf_x\Lambda(x)}}$ . 
Furthermore, for any $t \in [0,1]$, there exist a distribution $\mathcal{D}$ and $h \in \overline{\mathcal{H}}$ such that $\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) = t$ and $\mathcal{R}_{\ell^{\mathrm{comp}}}(h) - \mathcal{R}_{\ell^{\mathrm{comp}}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell^{\mathrm{comp}}}(\overline{\mathcal{H}}) = \overline{\mathfrak{T}}^{\mathrm{comp}}(t)$.

Proof. For the comp-sum loss $\ell^{\mathrm{comp}}$, the conditional $\ell^{\mathrm{comp}}$-risk can be expressed as follows:

$$
\begin{array}{l} \mathcal{C}_{\ell^{\mathrm{comp}}}(h, x) = \sum_{y \in \mathcal{Y}} p(x, y)\,\ell^{\mathrm{comp}}(h, x, y) \\ = \sum_{y \in \mathcal{Y}} p(x, y)\,\Phi\left(\frac{e^{h(x,y)}}{\sum_{y' \in \mathcal{Y}} e^{h(x,y')}}\right) \\ = \sum_{y \in \mathcal{Y}} p(x, y)\,\Phi(S_h(x, y)) \\ = p(x, y_{\max})\Phi(S_h(x, y_{\max})) + p(x, \mathsf{h}(x))\Phi(S_h(x, \mathsf{h}(x))) + \sum_{y \notin \{y_{\max}, \mathsf{h}(x)\}} p(x, y)\Phi(S_h(x, y)), \end{array}
$$

where we let $S_h(x, y) = \frac{e^{h(x,y)}}{\sum_{y' \in \mathcal{Y}} e^{h(x,y')}}$ for any $y \in \mathcal{Y}$, with the constraint that $\sum_{y \in \mathcal{Y}} S_h(x, y) = 1$. Note that for any $h \in \mathcal{H}$,

$$
\frac{1}{1 + (n-1)e^{2\Lambda(x)}} = \frac{e^{-\Lambda(x)}}{e^{-\Lambda(x)} + (n-1)e^{\Lambda(x)}} \leq S_h(x, y) \leq \frac{e^{\Lambda(x)}}{e^{\Lambda(x)} + (n-1)e^{-\Lambda(x)}} = \frac{1}{1 + (n-1)e^{-2\Lambda(x)}}.
$$

Therefore, for any $(x, y) \in \mathcal{X} \times \mathcal{Y}$, $S_h(x, y) \in [S_{\min}, S_{\max}]$, where we let $S_{\max} = \frac{1}{1 + (n-1)e^{-2\Lambda(x)}}$ and $S_{\min} = \frac{1}{1 + (n-1)e^{2\Lambda(x)}}$.
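These softmax bounds can be checked numerically. The sketch below (an illustration only, assuming a constant score bound $\Lambda(x) \equiv \lambda$, which is an assumption of this snippet) samples bounded score vectors and verifies that every softmax value lies in $[S_{\min}, S_{\max}]$:

```python
import math
import random

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Assumption for this sketch: n labels, scores bounded by |h(x, y)| <= lam.
n, lam = 5, 1.3
s_lo = 1.0 / (1.0 + (n - 1) * math.exp(2 * lam))   # S_min
s_hi = 1.0 / (1.0 + (n - 1) * math.exp(-2 * lam))  # S_max

random.seed(0)
for _ in range(1000):
    h = [random.uniform(-lam, lam) for _ in range(n)]
    for s in softmax(h):
        # Every softmax score must respect S_min <= s <= S_max.
        assert s_lo - 1e-12 <= s <= s_hi + 1e-12
```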
Furthermore, all values in $[S_{\min}, S_{\max}]$ of $S_h$ can be reached for some $h \in \mathcal{H}$.

Observe that $0 \leq S_{\max} + S_{\min} \leq 1$. Let $y_{\max} = \operatorname{argmax}_{y \in \mathcal{Y}} p(x, y)$, where we choose the label with the highest index under the natural ordering of labels as the tie-breaking strategy. For any $h \in \mathcal{H}$ such that $\mathsf{h}(x) \neq y_{\max}$ and $x \in \mathcal{X}$, we can always find a family of hypotheses $\{h_\mu\} \subset \mathcal{H}$ such that $S_{h,\mu}(x, \cdot) = \frac{e^{h_\mu(x, \cdot)}}{\sum_{y' \in \mathcal{Y}} e^{h_\mu(x, y')}}$ takes the following values:

$$
S_{h,\mu}(x, y) = \left\{\begin{array}{ll} S_h(x, y) & \text{if } y \notin \{y_{\max}, \mathsf{h}(x)\} \\ S_h(x, y_{\max}) + \mu & \text{if } y = \mathsf{h}(x) \\ S_h(x, \mathsf{h}(x)) - \mu & \text{if } y = y_{\max}. \end{array}\right.
$$

Note that $S_{h,\mu}$ satisfies the constraint:

$$
\sum_{y \in \mathcal{Y}} S_{h,\mu}(x, y) = \sum_{y \in \mathcal{Y}} S_h(x, y) = 1.
$$

Since $S_{h,\mu}(x, y) \in [S_{\min}, S_{\max}]$, we have the following constraints on $\mu$:

$$
\begin{array}{l} S_{\min} - S_h(x, y_{\max}) \leq \mu \leq S_{\max} - S_h(x, y_{\max}) \\ S_h(x, \mathsf{h}(x)) - S_{\max} \leq \mu \leq S_h(x, \mathsf{h}(x)) - S_{\min}. \end{array} \tag{7}
$$

Let $p_1 = p(x, y_{\max})$, $p_2 = p(x, \mathsf{h}(x))$, $\tau_1 = S_h(x, \mathsf{h}(x))$ and $\tau_2 = S_h(x, y_{\max})$ to simplify the notation. Let $\overline{C} = \{\mu \in \mathbb{R} : \mu \text{ verifies constraint } (7)\}$. Since $S_h(x, \mathsf{h}(x)) - S_{\max} \leq S_{\max} - S_h(x, y_{\max})$ and $S_{\min} - S_h(x, y_{\max}) \leq S_h(x, \mathsf{h}(x)) - S_{\min}$, $\overline{C}$ is not an empty set and can be expressed as $\overline{C} = [\max\{S_{\min} - \tau_2, \tau_1 - S_{\max}\}, \min\{S_{\max} - \tau_2, \tau_1 - S_{\min}\}]$.
Then, by the definition of $S_{h,\mu}$, we have for any $h \in \mathcal{H}$ and $x \in \mathcal{X}$,

$$
\begin{array}{l} \mathcal{C}_{\ell^{\mathrm{comp}}}(h, x) - \inf_{\mu \in \overline{C}} \mathcal{C}_{\ell^{\mathrm{comp}}}(h_\mu, x) \\ = \sup_{\mu \in \overline{C}}\left\{p_1\left[\Phi(\tau_2) - \Phi(\tau_1 - \mu)\right] + p_2\left[\Phi(\tau_1) - \Phi(\tau_2 + \mu)\right]\right\} \\ = \sup_{\mu \in \overline{C}}\left\{\frac{P + p_1 - p_2}{2}\left[\Phi(\tau_2) - \Phi(\tau_1 - \mu)\right] + \frac{P - p_1 + p_2}{2}\left[\Phi(\tau_1) - \Phi(\tau_2 + \mu)\right]\right\} \quad (P = p_1 + p_2 \in \left[\frac{1}{n-1} \vee (p_1 - p_2), 1\right]) \\ \geq \inf_{P \in \left[\frac{1}{n-1} \vee (p_1 - p_2), 1\right]} \inf_{\substack{S_{\min} \leq \tau_2 \leq \tau_1 \leq S_{\max} \\ \tau_1 + \tau_2 \leq 1}} \sup_{\mu \in \overline{C}}\left\{\frac{P + p_1 - p_2}{2}\left[\Phi(\tau_2) - \Phi(\tau_1 - \mu)\right] + \frac{P - p_1 + p_2}{2}\left[\Phi(\tau_1) - \Phi(\tau_2 + \mu)\right]\right\} \quad (S_{\min} \leq \tau_2 \leq \tau_1 \leq S_{\max},\ \tau_1 + \tau_2 \leq 1) \\ \geq \inf_{P \in \left[\frac{1}{n-1} \vee (p_1 - p_2), 1\right]} \inf_{\substack{S_{\min} \leq \tau_2 \leq \tau_1 \leq S_{\max} \\ \tau_1 + \tau_2 \leq 1}} \sup_{\mu \in C}\left\{\frac{P + p_1 - p_2}{2}\left[\Phi(\tau_2) - \Phi(\tau_1 - \mu)\right] + \frac{P - p_1 + p_2}{2}\left[\Phi(\tau_1) - \Phi(\tau_2 + \mu)\right]\right\} \quad (S_{\min} \leq s_{\min} \leq s_{\max} \leq S_{\max}) \\ = \overline{\mathfrak{T}}^{\mathrm{comp}}(p_1 - p_2) \\ = \overline{\mathfrak{T}}^{\mathrm{comp}}\left(\Delta\mathcal{C}_{\ell_{0-1}, \mathcal{H}}(h, x)\right), \quad (\text{by Lemma 1}) \end{array}
$$

where $C = [\max\{s_{\min} - \tau_2, \tau_1 - s_{\max}\}, \min\{s_{\max} - \tau_2, \tau_1 - s_{\min}\}] \subset \overline{C}$, $s_{\max} = \frac{1}{1 + (n-1)e^{-2\inf_x \Lambda(x)}}$ and $s_{\min} = \frac{1}{1 + (n-1)e^{2\inf_x \Lambda(x)}}$. Note that for $n = 2$, an additional constraint $\tau_1 + \tau_2 = 1$ is imposed and the expression can be simplified as

$$
\begin{array}{l} \mathcal{C}_{\ell^{\mathrm{comp}}}(h, x) - \inf_{\mu \in \overline{C}} \mathcal{C}_{\ell^{\mathrm{comp}}}(h_\mu, x) \\ \geq \inf_{\tau \in \left[0, \frac{1}{2}\right]} \sup_{\mu \in \left[s_{\min} - \tau, 1 - \tau - s_{\min}\right]}\left\{\frac{1 + p_1 - p_2}{2}\left[\Phi(\tau) - \Phi(1 - \tau - \mu)\right] + \frac{1 - p_1 + p_2}{2}\left[\Phi(1 - \tau) - \Phi(\tau + \mu)\right]\right\} \\ = \overline{\mathfrak{T}}^{\mathrm{comp}}(p_1 - p_2) \\ = \overline{\mathfrak{T}}^{\mathrm{comp}}\left(\Delta\mathcal{C}_{\ell_{0-1}, \mathcal{H}}(h, x)\right), \quad (\text{by Lemma 1}) \end{array}
$$

where we use the fact that $s_{\max} + s_{\min} = 1$ and $P = 1$ when $n = 2$.
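The identity $s_{\max} + s_{\min} = 1$ used in the $n = 2$ case can be checked numerically. The sketch below (an illustration only, assuming a constant bound $\Lambda(x) \equiv \lambda$ for this snippet) verifies it, and also shows that the identity fails for $n > 2$:

```python
import math

def s_bounds(n, lam):
    # s_min and s_max from the proof, with a constant bound Lambda(x) == lam
    # (an assumption made only for this illustration).
    s_min = 1.0 / (1.0 + (n - 1) * math.exp(2 * lam))
    s_max = 1.0 / (1.0 + (n - 1) * math.exp(-2 * lam))
    return s_min, s_max

# For n = 2 the two bounds are symmetric around 1/2: s_min + s_max = 1.
for lam in (0.1, 0.5, 1.0, 3.0):
    s_min, s_max = s_bounds(2, lam)
    assert abs(s_min + s_max - 1.0) < 1e-12

# For n > 2 the identity no longer holds in general.
s_min, s_max = s_bounds(5, 1.0)
assert s_min + s_max < 1.0
```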
Since $\mathfrak{T}^{\mathrm{comp}}$ is convex, by Jensen's inequality, we obtain for any hypothesis $h \in \mathcal{H}$ and any distribution, + +$$ +\begin{array}{l} \mathfrak {T} ^ {\text {c o m p}} \left(\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell_ {0 - 1}} (\mathcal {H})\right) \\ = \mathcal {T} ^ {\operatorname {c o m p}} \left(\underset {X} {\mathbb {E}} \left[ \Delta \mathcal {C} _ {\ell_ {0 - 1}, \mathcal {H}} (h, x) \right]\right) \\ \leq \mathbb {E} _ {X} \left[ \mathcal {T} ^ {\operatorname {c o m p}} \left(\Delta \mathcal {C} _ {\ell_ {0 - 1}, \mathcal {H}} (h, x)\right) \right] \\ \leq \mathbb {E} _ {X} \left[ \Delta \mathcal {C} _ {\ell \text {c o m p}, \mathcal {H}} (h, x) \right] \\ = \mathcal {R} _ {\ell \operatorname {c o m p}} (h) - \mathcal {R} _ {\ell \operatorname {c o m p}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell \operatorname {c o m p}} (\mathcal {H}). \\ \end{array} +$$ + +For the second part, we first consider $n = 2$ . For any $t \in [0,1]$ , we consider the distribution that concentrates on a singleton $\{x\}$ and satisfies $p(x,1) = \frac{1 + t}{2}$ , $p(x,2) = \frac{1 - t}{2}$ . For any $\epsilon > 0$ , by the definition of infimum, we can take $h \in \mathcal{H}$ such that $S_{h}(x,1) = \tau_{\epsilon} \in \left[0,\frac{1}{2}\right]$ and satisfies + +$$ +\sup _ {\mu \in \left[ s _ {\min } - \tau_ {\epsilon}, 1 - \tau_ {\epsilon} - s _ {\min } \right]} \left\{\frac {1 + t}{2} \left[ \Phi \left(\tau_ {\epsilon}\right) - \Phi \left(1 - \tau_ {\epsilon} - \mu\right) \right] + \frac {1 - t}{2} \left[ \Phi \left(1 - \tau_ {\epsilon}\right) - \Phi \left(\tau_ {\epsilon} + \mu\right) \right] \right\} < \mathfrak {T} ^ {\operatorname {c o m p}} (t) + \epsilon . 
+$$ + +Then, + +$$ +\begin{array}{l} \mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell_ {0 - 1}} (\mathcal {H}) = \mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathbb {E} _ {X} \left[ \mathcal {C} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}, x) \right] \\ = \mathcal {C} _ {\ell_ {0 - 1}} (h, x) - \mathcal {C} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}, x) \\ = t \\ \end{array} +$$ + +and + +$$ +\begin{array}{l} \mathfrak {T} ^ {\operatorname {c o m p}} (t) \leq \mathcal {R} _ {\ell^ {\operatorname {c o m p}}} (h) - \mathcal {R} _ {\ell^ {\operatorname {c o m p}}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell^ {\operatorname {c o m p}}} (\mathcal {H}) \\ = \mathcal {R} _ {\ell \operatorname {c o m p}} (h) - \mathbb {E} _ {X} \left[ \mathcal {C} _ {\ell \operatorname {c o m p}} ^ {*} (\mathcal {H}, x) \right] \\ = \mathcal {C} _ {\ell \text {c o m p}} (h, x) - \mathcal {C} _ {\ell \text {c o m p}} ^ {*} (\mathcal {H}, x) \\ = \sup _ {\mu \varepsilon \left[ s _ {\min } - \tau_ {\epsilon}, 1 - \tau_ {\epsilon} - s _ {\min } \right]} \left\{\frac {1 + t}{2} \left[ \Phi \left(\tau_ {\epsilon}\right) - \Phi \left(1 - \tau_ {\epsilon} - \mu\right) \right] + \frac {1 - t}{2} \left[ \Phi \left(1 - \tau_ {\epsilon}\right) - \Phi \left(\tau_ {\epsilon} + \mu\right) \right] \right\} \\ < \mathfrak {T} ^ {\operatorname {c o m p}} (t) + \epsilon . \\ \end{array} +$$ + +By letting $\epsilon \to 0$ , we conclude the proof. The proof for $n > 2$ directly extends from the case when $n = 2$ . Indeed, For any $t \in [0,1]$ , we consider the distribution that concentrates on a singleton $\{x\}$ and satisfies $p(x,1) = \frac{1 + t}{2}$ , $p(x,2) = \frac{1 - t}{2}$ , $p(x,y) = 0$ , $3 \leq y \leq n$ . 
For any $\epsilon > 0$, by the definition of infimum, we can take $h \in \mathcal{H}$ such that $S_h(x,1) = \tau_{1,\epsilon}$, $S_h(x,2) = \tau_{2,\epsilon}$ and $S_h(x,y) = 0$, $3 \leq y \leq n$, which satisfies $\tau_{1,\epsilon} + \tau_{2,\epsilon} = 1$ and

$$
\begin{array}{l} \inf_{P \in \left[\frac{1}{n-1} \vee t, 1\right]} \sup_{\mu \in C}\left\{\frac{P+t}{2}\left[\Phi(\tau_{2,\epsilon}) - \Phi(\tau_{1,\epsilon} - \mu)\right] + \frac{P-t}{2}\left[\Phi(\tau_{1,\epsilon}) - \Phi(\tau_{2,\epsilon} + \mu)\right]\right\} \\ = \sup_{\mu \in C}\left\{\frac{1+t}{2}\left[\Phi(\tau_{2,\epsilon}) - \Phi(\tau_{1,\epsilon} - \mu)\right] + \frac{1-t}{2}\left[\Phi(\tau_{1,\epsilon}) - \Phi(\tau_{2,\epsilon} + \mu)\right]\right\} \\ < \overline{\mathfrak{T}}^{\mathrm{comp}}(t) + \epsilon. \end{array}
$$

Then,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) = t
$$

and

$$
\overline{\mathfrak{T}}^{\mathrm{comp}}(t) \leq \mathcal{R}_{\ell^{\mathrm{comp}}}(h) - \mathcal{R}_{\ell^{\mathrm{comp}}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{comp}}}(\mathcal{H}) < \overline{\mathfrak{T}}^{\mathrm{comp}}(t) + \epsilon.
$$

By letting $\epsilon \to 0$, we conclude the proof.

# E.2 Logistic loss

Theorem 6 ($\overline{\mathcal{H}}$-consistency bounds for logistic loss).
For any $h \in \overline{\mathcal{H}}$ and any distribution, we have

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) \leq \Psi^{-1}\left(\mathcal{R}_{\ell_{\log}}(h) - \mathcal{R}_{\ell_{\log}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{\log}}(\overline{\mathcal{H}})\right),
$$

where $\ell_{\log} = -\log\left(\frac{e^{h(x,y)}}{\sum_{y' \in \mathcal{Y}} e^{h(x,y')}}\right)$ and

$$
\Psi(t) = \left\{\begin{array}{ll} \frac{1+t}{2}\log(1+t) + \frac{1-t}{2}\log(1-t) & t \leq \frac{s_{\max} - s_{\min}}{s_{\min} + s_{\max}} \\ \frac{t}{2}\log\left(\frac{s_{\max}}{s_{\min}}\right) + \log\left(\frac{2\sqrt{s_{\max}s_{\min}}}{s_{\max} + s_{\min}}\right) & \text{otherwise.} \end{array}\right.
$$

Proof. For the multinomial logistic loss $\ell_{\log}$, plugging $\Phi(t) = -\log(t)$ into Theorem 5 gives

$$
\overline{\mathfrak{T}}^{\mathrm{comp}} \geq \inf_{P \in \left[\frac{1}{n-1} \vee t, 1\right]} \inf_{\substack{S_{\min} \leq \tau_2 \leq \tau_1 \leq S_{\max} \\ \tau_1 + \tau_2 \leq 1}} \sup_{\mu \in C}\left\{\frac{P+t}{2}\left[-\log(\tau_2) + \log(\tau_1 - \mu)\right] + \frac{P-t}{2}\left[-\log(\tau_1) + \log(\tau_2 + \mu)\right]\right\},
$$

where $C = [\max\{s_{\min} - \tau_2, \tau_1 - s_{\max}\}, \min\{s_{\max} - \tau_2, \tau_1 - s_{\min}\}]$. Here, we only compute the expression for $n > 2$; the expression for $n = 2$ leads to the same result, since it can be viewed as a special case of the expression for $n > 2$. By differentiating with respect to $\tau_2$ and $P$, one can check with some elementary analysis that the infimum is achieved when $\tau_1 = \tau_2 = \frac{s_{\min} + s_{\max}}{2}$ and $P = 1$.
Thus, $\overline{\mathfrak{T}}^{\mathrm{comp}}$ can be reformulated as

$$
\begin{array}{l} \overline{\mathfrak{T}}^{\mathrm{comp}} = \sup_{\mu \in C}\left\{\frac{1+t}{2}\left[-\log\left(\frac{s_{\min} + s_{\max}}{2}\right) + \log\left(\frac{s_{\min} + s_{\max}}{2} - \mu\right)\right] \right. \\ \left. + \frac{1-t}{2}\left[-\log\left(\frac{s_{\min} + s_{\max}}{2}\right) + \log\left(\frac{s_{\min} + s_{\max}}{2} + \mu\right)\right]\right\} \\ = -\log\left(\frac{s_{\min} + s_{\max}}{2}\right) + \sup_{\mu \in C} g(\mu), \end{array}
$$

where $C = \left[\frac{s_{\min} - s_{\max}}{2}, \frac{s_{\max} - s_{\min}}{2}\right]$ and $g(\mu) = \frac{1+t}{2}\log\left(\frac{s_{\min} + s_{\max}}{2} - \mu\right) + \frac{1-t}{2}\log\left(\frac{s_{\min} + s_{\max}}{2} + \mu\right)$. Since $g$ is continuous, it attains its supremum over a compact set. Note that $g$ is concave and differentiable, so its unconstrained maximum can be obtained by setting its derivative to zero. Doing so, we obtain

$$
g'(\mu^*) = 0, \quad \mu^* = -\frac{t(s_{\min} + s_{\max})}{2}.
$$

Moreover, by concavity, $g(\mu)$ is non-increasing for $\mu \geq \mu^*$. Since $s_{\max} - s_{\min} \geq 0$, we have

$$
\mu^* \leq 0 \leq \frac{s_{\max} - s_{\min}}{2}.
$$

In view of the constraint $C$, if $\mu^* \geq \frac{s_{\min} - s_{\max}}{2}$, the maximum is achieved at $\mu = \mu^*$. Otherwise, if $\mu^* < \frac{s_{\min} - s_{\max}}{2}$, since $g(\mu)$ is non-increasing for $\mu \geq \mu^*$, the maximum is achieved at $\mu = \frac{s_{\min} - s_{\max}}{2}$.
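The stationary point $\mu^* = -\frac{t(s_{\min}+s_{\max})}{2}$ can be verified numerically. The sketch below (illustrative values of $s_{\min}$, $s_{\max}$, $t$ are assumptions of this snippet, not from the paper) runs a grid search over the interior of $g$'s domain and compares the grid maximizer with $\mu^*$:

```python
import math

# Illustrative values (assumptions, not from the paper).
s_min, s_max, t = 0.2, 0.7, 0.3
m = (s_min + s_max) / 2.0  # midpoint (s_min + s_max)/2 used in the proof

def g(mu):
    # g from the logistic-loss proof; defined for |mu| < m.
    return 0.5 * (1 + t) * math.log(m - mu) + 0.5 * (1 - t) * math.log(m + mu)

# Grid search over the open interval (-m, m); g is concave, so the grid
# maximizer lies within one grid step of the true maximizer.
N = 100000
grid = [-m + 2.0 * m * (k + 1) / (N + 1) for k in range(N)]
best_mu = max(grid, key=g)

# Stationary point claimed in the proof.
mu_star = -t * (s_min + s_max) / 2.0
assert abs(best_mu - mu_star) < 1e-3
```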
Since $\mu^* \geq \frac{s_{\min} - s_{\max}}{2}$ is equivalent to $t \leq \frac{s_{\max} - s_{\min}}{s_{\min} + s_{\max}}$, the maximum can be expressed as

$$
\max_{\mu \in C} g(\mu) = \left\{\begin{array}{ll} g(\mu^*) & t \leq \frac{s_{\max} - s_{\min}}{s_{\min} + s_{\max}} \\ g\left(\frac{s_{\min} - s_{\max}}{2}\right) & \text{otherwise.} \end{array}\right.
$$

Computing the value of $g$ at these points yields:

$$
g(\mu^*) = \frac{1+t}{2}\log\frac{(1+t)(s_{\min} + s_{\max})}{2} + \frac{1-t}{2}\log\frac{(1-t)(s_{\min} + s_{\max})}{2},
$$

$$
g\left(\frac{s_{\min} - s_{\max}}{2}\right) = \frac{1+t}{2}\log(s_{\max}) + \frac{1-t}{2}\log(s_{\min}).
$$

Then, if $t \leq \frac{s_{\max} - s_{\min}}{s_{\min} + s_{\max}}$, we obtain

$$
\begin{array}{l} \overline{\mathfrak{T}}^{\mathrm{comp}} = -\log\left(\frac{s_{\min} + s_{\max}}{2}\right) + \frac{1+t}{2}\log\frac{(1+t)(s_{\min} + s_{\max})}{2} + \frac{1-t}{2}\log\frac{(1-t)(s_{\min} + s_{\max})}{2} \\ = \frac{1+t}{2}\log(1+t) + \frac{1-t}{2}\log(1-t). \end{array}
$$

Otherwise, we obtain

$$
\begin{array}{l} \overline{\mathfrak{T}}^{\mathrm{comp}} = -\log\left(\frac{s_{\min} + s_{\max}}{2}\right) + \frac{1+t}{2}\log(s_{\max}) + \frac{1-t}{2}\log(s_{\min}) \\ = \frac{t}{2}\log\left(\frac{s_{\max}}{s_{\min}}\right) + \log\left(\frac{2\sqrt{s_{\max}s_{\min}}}{s_{\max} + s_{\min}}\right). \end{array}
$$

Since $\overline{\mathfrak{T}}^{\mathrm{comp}}$ is convex, by Theorem 5, for any $h \in \overline{\mathcal{H}}$ and any distribution,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) \leq \Psi^{-1}\left(\mathcal{R}_{\ell_{\log}}(h) - \mathcal{R}_{\ell_{\log}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{\log}}(\overline{\mathcal{H}})\right),
$$

where

$$
\Psi(t) = \left\{\begin{array}{ll} \frac{1+t}{2}\log(1+t) + \frac{1-t}{2}\log(1-t) & t \leq \frac{s_{\max} - s_{\min}}{s_{\min} + s_{\max}} \\ \frac{t}{2}\log\left(\frac{s_{\max}}{s_{\min}}\right) + \log\left(\frac{2\sqrt{s_{\max}s_{\min}}}{s_{\max} + s_{\min}}\right) & \text{otherwise.} \end{array}\right.
$$

# E.3 Sum exponential loss

Theorem 9 ($\overline{\mathcal{H}}$-consistency bounds for sum exponential loss).
For any $h \in \overline{\mathcal{H}}$ and any distribution,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) \leq \Psi^{-1}\left(\mathcal{R}_{\ell_{\exp}}(h) - \mathcal{R}_{\ell_{\exp}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{\exp}}(\overline{\mathcal{H}})\right),
$$

where $\ell_{\exp} = \sum_{y' \neq y} e^{h(x,y') - h(x,y)}$ and

$$
\Psi(t) = \left\{\begin{array}{ll} 1 - \sqrt{1 - t^2} & t \leq \frac{s_{\max}^2 - s_{\min}^2}{s_{\min}^2 + s_{\max}^2} \\ \frac{s_{\max} - s_{\min}}{2s_{\max}s_{\min}}t - \frac{(s_{\max} - s_{\min})^2}{2s_{\max}s_{\min}(s_{\max} + s_{\min})} & \text{otherwise.} \end{array}\right.
$$

Proof. For the sum exponential loss $\ell_{\exp}$, plugging $\Phi(t) = \frac{1}{t} - 1$ into Theorem 5 gives

$$
\overline{\mathfrak{T}}^{\mathrm{comp}} \geq \inf_{P \in \left[\frac{1}{n-1} \vee t, 1\right]} \inf_{\substack{S_{\min} \leq \tau_2 \leq \tau_1 \leq S_{\max} \\ \tau_1 + \tau_2 \leq 1}} \sup_{\mu \in C}\left\{\frac{P+t}{2}\left[\frac{1}{\tau_2} - \frac{1}{\tau_1 - \mu}\right] + \frac{P-t}{2}\left[\frac{1}{\tau_1} - \frac{1}{\tau_2 + \mu}\right]\right\},
$$

where $C = [\max\{s_{\min} - \tau_2, \tau_1 - s_{\max}\}, \min\{s_{\max} - \tau_2, \tau_1 - s_{\min}\}]$. Here, we only compute the expression for $n > 2$; the expression for $n = 2$ leads to the same result, since it can be viewed as a special case of the expression for $n > 2$. By differentiating with respect to $\tau_2$ and $P$, one can check with some elementary analysis that the infimum is achieved when $\tau_1 = \tau_2 = \frac{s_{\min} + s_{\max}}{2}$ and $P = 1$.
Thus, $\overline{\mathcal{T}}^{\mathrm{comp}}$ can be reformulated as

$$
\begin{array}{l} \overline{\mathcal{T}}^{\mathrm{comp}} = \sup_{\mu \in C} \left\{\frac{1+t}{2}\left[\frac{2}{s_{\min} + s_{\max}} - \frac{2}{s_{\min} + s_{\max} - 2\mu}\right] \right. \\ \left. \quad + \frac{1-t}{2}\left[\frac{2}{s_{\min} + s_{\max}} - \frac{2}{s_{\min} + s_{\max} + 2\mu}\right]\right\} \\ = \frac{2}{s_{\min} + s_{\max}} + \sup_{\mu \in C} g(\mu), \end{array}
$$

where $C = \left[\frac{s_{\min} - s_{\max}}{2}, \frac{s_{\max} - s_{\min}}{2}\right]$ and $g(\mu) = -\frac{1+t}{s_{\min} + s_{\max} - 2\mu} - \frac{1-t}{s_{\min} + s_{\max} + 2\mu}$. Since $g$ is continuous, it attains its supremum over a compact set. Note that $g$ is concave and differentiable. In view of that, the maximum over the open set $(-\infty, +\infty)$ can be obtained by setting its derivative to zero. Differentiating $g(\mu)$ and solving for the stationary point, we obtain

$$
g^{\prime}(\mu^{*}) = 0, \quad \mu^{*} = \frac{s_{\min} + s_{\max}}{2}\,\frac{\sqrt{1-t} - \sqrt{1+t}}{\sqrt{1+t} + \sqrt{1-t}}.
$$

Moreover, by concavity, $g(\mu)$ is non-increasing when $\mu \geq \mu^{*}$. Since $s_{\max} - s_{\min} \geq 0$, we have

$$
\mu^{*} \leq 0 \leq \frac{s_{\max} - s_{\min}}{2}.
$$

In view of the constraint $C$, if $\mu^{*} \geq \frac{s_{\min} - s_{\max}}{2}$, the maximum is achieved by $\mu = \mu^{*}$. Otherwise, if $\mu^{*} < \frac{s_{\min} - s_{\max}}{2}$, since $g(\mu)$ is non-increasing when $\mu \geq \mu^{*}$, the maximum is achieved by $\mu = \frac{s_{\min} - s_{\max}}{2}$.
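The concavity argument above can be sanity-checked numerically. The sketch below is not part of the proof; the values of $s_{\min}$, $s_{\max}$, and $t$ are illustrative, chosen so that the stationary point $\mu^{*}$ lies inside $C$, in which case it should maximize $g$ over $C$:

```python
import math

def g(mu, t, s_min, s_max):
    # g from the proof: -(1+t)/(S - 2*mu) - (1-t)/(S + 2*mu), with S = s_min + s_max
    S = s_min + s_max
    return -(1 + t) / (S - 2 * mu) - (1 - t) / (S + 2 * mu)

def mu_star(t, s_min, s_max):
    # stationary point stated in the proof
    S = s_min + s_max
    a, b = math.sqrt(1 + t), math.sqrt(1 - t)
    return (S / 2) * (b - a) / (a + b)

s_min, s_max, t = 0.2, 0.3, 0.1          # illustrative values only
ms = mu_star(t, s_min, s_max)
lo, hi = (s_min - s_max) / 2, (s_max - s_min) / 2
assert lo <= ms <= hi                     # mu* lies in C for these values
grid = [lo + (hi - lo) * k / 1000 for k in range(1001)]
assert all(g(ms, t, s_min, s_max) >= g(mu, t, s_min, s_max) - 1e-12 for mu in grid)
```

Any choice with $0 < s_{\min} \leq s_{\max}$ and $t$ small enough that $\mu^{*} \in C$ works equally well.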
Since $\mu^{*} \geq \frac{s_{\min} - s_{\max}}{2}$ is equivalent to $t \leq \frac{s_{\max}^2 - s_{\min}^2}{s_{\min}^2 + s_{\max}^2}$, the maximum can be expressed as

$$
\max_{\mu \in C} g(\mu) = \left\{\begin{array}{ll} g(\mu^{*}) & t \leq \frac{s_{\max}^2 - s_{\min}^2}{s_{\min}^2 + s_{\max}^2} \\ g\big(\frac{s_{\min} - s_{\max}}{2}\big) & \text{otherwise.} \end{array}\right.
$$

Computing the value of $g$ at these points yields:

$$
g(\mu^{*}) = 1 - \sqrt{1 - t^2} - \frac{2}{s_{\min} + s_{\max}},
$$

$$
g\left(\frac{s_{\min} - s_{\max}}{2}\right) = -\frac{1+t}{2 s_{\max}} - \frac{1-t}{2 s_{\min}}.
$$

Then, if $t \leq \frac{s_{\max}^2 - s_{\min}^2}{s_{\min}^2 + s_{\max}^2}$, we obtain

$$
\begin{array}{l} \overline{\mathcal{T}}^{\mathrm{comp}} = \frac{2}{s_{\min} + s_{\max}} + 1 - \sqrt{1 - t^2} - \frac{2}{s_{\min} + s_{\max}} \\ = 1 - \sqrt{1 - t^2}. \end{array}
$$

Otherwise, we obtain

$$
\begin{array}{l} \overline{\mathcal{T}}^{\mathrm{comp}} = \frac{2}{s_{\min} + s_{\max}} - \frac{1+t}{2 s_{\max}} - \frac{1-t}{2 s_{\min}} \\ = \frac{s_{\max} - s_{\min}}{2 s_{\max} s_{\min}} t - \frac{(s_{\max} - s_{\min})^2}{2 s_{\max} s_{\min}(s_{\max} + s_{\min})}. \end{array}
$$

Since $\overline{\mathcal{T}}^{\mathrm{comp}}$ is convex, by Theorem 5, for any $h\in \overline{\mathcal{H}}$ and any distribution,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) \leq \Psi^{-1}\big(\mathcal{R}_{\ell_{\exp}}(h) - \mathcal{R}_{\ell_{\exp}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{\exp}}(\overline{\mathcal{H}})\big),
$$

where

$$
\Psi(t) = \left\{\begin{array}{ll} 1 - \sqrt{1 - t^2} & t \leq \frac{s_{\max}^2 - s_{\min}^2}{s_{\min}^2 + s_{\max}^2} \\ \frac{s_{\max} - s_{\min}}{2 s_{\max} s_{\min}} t - \frac{(s_{\max} - s_{\min})^2}{2 s_{\max} s_{\min}(s_{\max} + s_{\min})} & \text{otherwise.} \end{array}\right.
$$

# E.4 Generalized cross-entropy loss

Theorem 16 ($\overline{\mathcal{H}}$-consistency bounds for generalized cross-entropy loss).
For any $h \in \overline{\mathcal{H}}$ and any distribution, we have

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) \leq \Psi^{-1}\big(\mathcal{R}_{\ell_{\mathrm{gce}}}(h) - \mathcal{R}_{\ell_{\mathrm{gce}}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{\mathrm{gce}}}(\overline{\mathcal{H}})\big),
$$

where $\ell_{\mathrm{gce}} = \frac{1}{q}\Big[1 - \Big(\frac{e^{h(x,y)}}{\sum_{y^{\prime}\in\mathcal{Y}} e^{h(x,y^{\prime})}}\Big)^{q}\Big]$ and

$$
\Psi(t) = \left\{\begin{array}{ll} \frac{1}{q}\Big(\frac{s_{\min}+s_{\max}}{2}\Big)^{q}\Bigg[\Big(\frac{(1+t)^{\frac{1}{1-q}} + (1-t)^{\frac{1}{1-q}}}{2}\Big)^{1-q} - 1\Bigg] & t \leq \frac{s_{\max}^{1-q} - s_{\min}^{1-q}}{s_{\min}^{1-q} + s_{\max}^{1-q}} \\ \frac{t}{2q}\big(s_{\max}^{q} - s_{\min}^{q}\big) + \frac{1}{q}\Big(\frac{s_{\min}^{q}+s_{\max}^{q}}{2} - \Big(\frac{s_{\min}+s_{\max}}{2}\Big)^{q}\Big) & \text{otherwise.} \end{array}\right.
$$

Proof.
For generalized cross-entropy loss $\ell_{\mathrm{gce}}$ , plugging $\Phi(t) = \frac{1}{q} (1 - t^q)$ in Theorem 5, gives $\overline{\mathcal{T}}^{\mathrm{comp}}$ + +$$ +\geq \inf_{P\in \left[\frac{1}{n - 1}\lor t,1\right]}\inf_{\substack{S_{\min}\leq \tau_{2}\leq \tau_{1}\leq S_{\max}\\ \tau_{1} + \tau_{2}\leq 1}}\sup_{\mu \in C}\Bigg\{\frac{P + t}{2}\bigg[-\frac{1}{q} (\tau_{2})^{q} + \frac{1}{q} (\tau_{1} - \mu)^{q}\bigg] + \frac{P - t}{2}\bigg[-\frac{1}{q} (\tau_{1})^{q} + \frac{1}{q} (\tau_{2} + \mu)^{q}\bigg]\Bigg\} +$$ + +where $C = \left[\max \left\{s_{\min} - \tau_2, \tau_1 - s_{\max}\right\}, \min \left\{s_{\max} - \tau_2, \tau_1 - s_{\min}\right\}\right]$ . Here, we only compute the expression for $n > 2$ . The expression for $n = 2$ will lead to the same result since it can be viewed as a special case of the expression for $n > 2$ . By differentiating with respect to $\tau_2$ and $P$ , we can see that the infimum is achieved when $\tau_1 = \tau_2 = \frac{s_{\min} + s_{\max}}{2}$ and $P = 1$ modulo some elementary analysis. Thus, $\overline{\mathcal{T}}^{\mathrm{comp}}$ can be reformulated as + +$$ +\begin{array}{l} \overline {{\mathcal {T}}} ^ {\mathrm {c o m p}} = \sup _ {\mu \in C} \left\{\frac {1 + t}{2 q} \bigg [ - \left(\frac {s _ {\mathrm {m i n}} + s _ {\mathrm {m a x}}}{2}\right) ^ {q} + \left(\frac {s _ {\mathrm {m i n}} + s _ {\mathrm {m a x}}}{2} - \mu\right) ^ {q} \bigg ] \right. \\ + \frac {1 - t}{2 q} \left[ - \left(\frac {s _ {\min } + s _ {\max }}{2}\right) ^ {q} + \left(\frac {s _ {\min } + s _ {\max }}{2} + \mu\right) ^ {q} \right] \Bigg \} \\ = - \frac {1}{q} \left(\frac {s _ {\operatorname* {m i n}} + s _ {\operatorname* {m a x}}}{2}\right) ^ {q} + \sup _ {\mu \in C} g (\mu) \\ \end{array} +$$ + +where $C = \left[\frac{s_{\min} - s_{\max}}{2}, \frac{s_{\max} - s_{\min}}{2}\right]$ and $g(\mu) = \frac{1 + t}{2q}\left(\frac{s_{\min} + s_{\max}}{2} - \mu\right)^q + \frac{1 - t}{2q}\left(\frac{s_{\min} + s_{\max}}{2} + \mu\right)^q$ . 
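For the function $g$ just defined, the stationary point worked out in the remainder of the proof can be checked numerically. The sketch below uses illustrative values of $q$, $t$, $s_{\min}$, $s_{\max}$ (not taken from the paper), chosen so that the stationary point falls inside $C$:

```python
def g_gce(mu, t, q, s_min, s_max):
    # g from the proof: (1+t)/(2q) (S/2 - mu)^q + (1-t)/(2q) (S/2 + mu)^q, S = s_min + s_max
    S2 = (s_min + s_max) / 2
    return (1 + t) / (2 * q) * (S2 - mu) ** q + (1 - t) / (2 * q) * (S2 + mu) ** q

def mu_star_gce(t, q, s_min, s_max):
    # stationary point derived in the remainder of the proof
    a = (1 + t) ** (1 / (1 - q))
    b = (1 - t) ** (1 / (1 - q))
    return (b - a) / (a + b) * (s_min + s_max) / 2

q, t, s_min, s_max = 0.7, 0.05, 0.2, 0.4   # illustrative values only
ms = mu_star_gce(t, q, s_min, s_max)
lo, hi = (s_min - s_max) / 2, (s_max - s_min) / 2
assert lo <= ms <= hi                       # mu* falls inside C for these values
grid = [lo + (hi - lo) * k / 1000 for k in range(1001)]
assert all(g_gce(ms, t, q, s_min, s_max) >= g_gce(mu, t, q, s_min, s_max) - 1e-12 for mu in grid)

# closed-form value of g(mu*) stated in the proof
a = (1 + t) ** (1 / (1 - q)); b = (1 - t) ** (1 / (1 - q))
closed = (1 / q) * ((s_min + s_max) / 2) ** q * ((a + b) / 2) ** (1 - q)
assert abs(g_gce(ms, t, q, s_min, s_max) - closed) < 1e-9
```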
Since $g$ is continuous, it attains its supremum over a compact set. Note that $g$ is concave and differentiable. In view of that, the maximum over the open set $(-\infty, +\infty)$ can be obtained by setting its derivative to zero. Differentiating $g(\mu)$ and solving for the stationary point, we obtain

$$
g^{\prime}(\mu^{*}) = 0, \quad \mu^{*} = \frac{(1-t)^{\frac{1}{1-q}} - (1+t)^{\frac{1}{1-q}}}{(1+t)^{\frac{1}{1-q}} + (1-t)^{\frac{1}{1-q}}}\,\frac{s_{\min} + s_{\max}}{2}.
$$

Moreover, by concavity, $g(\mu)$ is non-increasing when $\mu \geq \mu^{*}$. Since $s_{\max} - s_{\min} \geq 0$, we have

$$
\mu^{*} \leq 0 \leq \frac{s_{\max} - s_{\min}}{2}.
$$

In view of the constraint $C$, if $\mu^{*} \geq \frac{s_{\min} - s_{\max}}{2}$, the maximum is achieved by $\mu = \mu^{*}$. Otherwise, if $\mu^{*} < \frac{s_{\min} - s_{\max}}{2}$, since $g(\mu)$ is non-increasing when $\mu \geq \mu^{*}$, the maximum is achieved by $\mu = \frac{s_{\min} - s_{\max}}{2}$. Since $\mu^{*} \geq \frac{s_{\min} - s_{\max}}{2}$ is equivalent to $t \leq \frac{s_{\max}^{1-q} - s_{\min}^{1-q}}{s_{\min}^{1-q} + s_{\max}^{1-q}}$, the maximum can be expressed as

$$
\max_{\mu \in C} g(\mu) = \left\{\begin{array}{ll} g(\mu^{*}) & t \leq \frac{s_{\max}^{1-q} - s_{\min}^{1-q}}{s_{\min}^{1-q} + s_{\max}^{1-q}} \\ g\Big(\frac{s_{\min} - s_{\max}}{2}\Big) & \text{otherwise.} \end{array}\right.
$$

Computing the value of $g$ at these points yields:

$$
g(\mu^{*}) = \frac{1}{q}\left(\frac{s_{\min} + s_{\max}}{2}\right)^{q}\left(\frac{(1+t)^{\frac{1}{1-q}} + (1-t)^{\frac{1}{1-q}}}{2}\right)^{1-q},
$$

$$
g\left(\frac{s_{\min} - s_{\max}}{2}\right) = \frac{1+t}{2q}(s_{\max})^{q} + \frac{1-t}{2q}(s_{\min})^{q}.
$$

Then, if $t \leq \frac{s_{\max}^{1-q} - s_{\min}^{1-q}}{s_{\min}^{1-q} + s_{\max}^{1-q}}$, we obtain

$$
\begin{array}{l} \overline{\mathcal{T}}^{\mathrm{comp}} = \frac{1}{q}\Big(\frac{s_{\min} + s_{\max}}{2}\Big)^{q}\Big(\frac{(1+t)^{\frac{1}{1-q}} + (1-t)^{\frac{1}{1-q}}}{2}\Big)^{1-q} - \frac{1}{q}\Big(\frac{s_{\min} + s_{\max}}{2}\Big)^{q} \\ = \frac{1}{q}\Big(\frac{s_{\min} + s_{\max}}{2}\Big)^{q}\Bigg[\Big(\frac{(1+t)^{\frac{1}{1-q}} + (1-t)^{\frac{1}{1-q}}}{2}\Big)^{1-q} - 1\Bigg]. \end{array}
$$

Otherwise, we obtain

$$
\begin{array}{l} \overline{\mathcal{T}}^{\mathrm{comp}} = -\frac{1}{q}\Big(\frac{s_{\min} + s_{\max}}{2}\Big)^{q} + \frac{1+t}{2q}(s_{\max})^{q} + \frac{1-t}{2q}(s_{\min})^{q} \\ = \frac{t}{2q}\Big(s_{\max}^{q} - s_{\min}^{q}\Big) + \frac{1}{q}\left(\frac{s_{\min}^{q} + s_{\max}^{q}}{2} - \Big(\frac{s_{\min} + s_{\max}}{2}\Big)^{q}\right). \end{array}
$$

Since $\overline{\mathcal{T}}^{\mathrm{comp}}$ is convex, by Theorem 5, for any $h\in \overline{\mathcal{H}}$ and any distribution,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) \leq \Psi^{-1}\big(\mathcal{R}_{\ell_{\mathrm{gce}}}(h) - \mathcal{R}_{\ell_{\mathrm{gce}}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{\mathrm{gce}}}(\overline{\mathcal{H}})\big),
$$

where

$$
\Psi(t) = \left\{\begin{array}{ll} \frac{1}{q}\Big(\frac{s_{\min} + s_{\max}}{2}\Big)^{q}\Bigg[\Big(\frac{(1+t)^{\frac{1}{1-q}} + (1-t)^{\frac{1}{1-q}}}{2}\Big)^{1-q} - 1\Bigg] & t \leq \frac{s_{\max}^{1-q} - s_{\min}^{1-q}}{s_{\min}^{1-q} + s_{\max}^{1-q}} \\ \frac{t}{2q}\Big(s_{\max}^{q} - s_{\min}^{q}\Big) + \frac{1}{q}\Big(\frac{s_{\min}^{q} + s_{\max}^{q}}{2} - \Big(\frac{s_{\min} + s_{\max}}{2}\Big)^{q}\Big) & \text{otherwise.} \end{array}\right.
$$

# E.5 Mean absolute error loss

Theorem 17 ($\overline{\mathcal{H}}$-consistency bounds for mean absolute error loss). For any $h \in \overline{\mathcal{H}}$ and any distribution, we have

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) \leq \frac{2\left(\mathcal{R}_{\ell_{\mathrm{mae}}}(h) - \mathcal{R}_{\ell_{\mathrm{mae}}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{\mathrm{mae}}}(\overline{\mathcal{H}})\right)}{s_{\max} - s_{\min}}.
$$

Proof.
For the mean absolute error loss $\ell_{\mathrm{mae}}$, plugging $\Phi(t) = 1 - t$ in Theorem 5 gives

$$
\overline{\mathcal{T}}^{\mathrm{comp}} \geq \inf_{P\in \left[\frac{1}{n-1}\vee t,\, 1\right]} \inf_{\substack{s_{\min}\leq \tau_{2}\leq \tau_{1}\leq s_{\max}\\ \tau_{1} + \tau_{2}\leq 1}} \sup_{\mu \in C}\left\{\frac{P + t}{2}\big[-(\tau_{2}) + (\tau_{1} - \mu)\big] + \frac{P - t}{2}\big[-(\tau_{1}) + (\tau_{2} + \mu)\big]\right\},
$$

where $C = \left[\max\{s_{\min} - \tau_2, \tau_1 - s_{\max}\}, \min\{s_{\max} - \tau_2, \tau_1 - s_{\min}\}\right]$. Here, we only compute the expression for $n > 2$. The expression for $n = 2$ leads to the same result since it can be viewed as a special case of the expression for $n > 2$. By differentiating with respect to $\tau_2$ and $P$, we can see that the infimum is achieved when $\tau_1 = \tau_2 = \frac{s_{\min} + s_{\max}}{2}$ and $P = 1$, modulo some elementary analysis. Thus, $\overline{\mathcal{T}}^{\mathrm{comp}}$ can be reformulated as

$$
\begin{array}{l} \overline{\mathcal{T}}^{\mathrm{comp}} = \sup_{\mu \in C} \left\{\frac{1+t}{2}\left[-\left(\frac{s_{\min} + s_{\max}}{2}\right) + \left(\frac{s_{\min} + s_{\max}}{2} - \mu\right)\right] \right. \\ \left. \quad + \frac{1-t}{2}\left[-\left(\frac{s_{\min} + s_{\max}}{2}\right) + \left(\frac{s_{\min} + s_{\max}}{2} + \mu\right)\right]\right\} \\ = \sup_{\mu \in C} -t\mu, \end{array}
$$

where $C = \left[\frac{s_{\min} - s_{\max}}{2}, \frac{s_{\max} - s_{\min}}{2}\right]$. Since $-t\mu$ is monotonically non-increasing in $\mu$, the maximum over $C$ is achieved by

$$
\mu^{*} = \frac{s_{\min} - s_{\max}}{2}, \quad \overline{\mathcal{T}}^{\mathrm{comp}} = \frac{s_{\max} - s_{\min}}{2} t.
$$

Since $\overline{\mathcal{T}}^{\mathrm{comp}}$ is convex, by Theorem 5, for any $h\in \overline{\mathcal{H}}$ and any distribution,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) \leq \frac{2\left(\mathcal{R}_{\ell_{\mathrm{mae}}}(h) - \mathcal{R}_{\ell_{\mathrm{mae}}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{\mathrm{mae}}}(\overline{\mathcal{H}})\right)}{s_{\max} - s_{\min}}.
$$

# F Extensions of constrained losses

# F.1 Proof of $\overline{\mathcal{H}}$-consistency bound with $\overline{\mathcal{T}}^{\mathrm{cstnd}}$ (Theorem 12)

Theorem 12 ($\overline{\mathcal{H}}$-consistency bound for constrained losses). Assume that $\overline{\mathcal{T}}^{\mathrm{cstnd}}$ is convex. Then, the following inequality holds for any hypothesis $h \in \overline{\mathcal{H}}$ and any distribution:

$$
\overline{\mathcal{T}}^{\mathrm{cstnd}}\left(\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}})\right) \leq \mathcal{R}_{\ell^{\mathrm{cstnd}}}(h) - \mathcal{R}_{\ell^{\mathrm{cstnd}}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell^{\mathrm{cstnd}}}(\overline{\mathcal{H}}).
\tag {6} +$$ + +with $\overline{\mathcal{T}}^{\mathrm{cstnd}}$ the $\overline{\mathcal{H}}$ -estimation error transformation for constrained losses defined for all $t \in [0,1]$ by $\overline{\mathcal{T}}^{\mathrm{cstnd}}(t) =$ + +$$ +\left\{ \begin{array}{l l} \inf _ {\tau \geq 0} \sup _ {\mu \in [ \tau - \Lambda_ {\min }, \tau + \Lambda_ {\min } ]} \big \{\frac {1 - t}{2} \big [ \Phi (\tau) - \Phi (- \tau + \mu) \big ] + \frac {1 + t}{2} \big [ \Phi (- \tau) - \Phi (\tau - \mu) \big ] \big \} & n = 2 \\ \inf _ {P \in \big [ \frac {1}{n - 1}, 1 \big ]} \inf _ {\tau_ {1} \geq \max \{\tau_ {2}, 0 \}} \sup _ {\mu \in C} \Big \{\frac {2 - P - t}{2} \big [ \Phi (- \tau_ {2}) - \Phi (- \tau_ {1} + \mu) \big ] + \frac {2 - P + t}{2} \big [ \Phi (- \tau_ {1}) - \Phi (- \tau_ {2} - \mu) \big ] \Big \} & n > 2, \end{array} \right. +$$ + +where $C = \left[\max \{\tau_1, -\tau_2\} - \Lambda_{\min}, \min \{\tau_1, -\tau_2\} + \Lambda_{\min}\right]$ and $\Lambda_{\min} = \inf_{x \in \mathcal{X}} \Lambda(x)$ . Furthermore, for any $t \in [0,1]$ , there exist a distribution $\mathcal{D}$ and a hypothesis $h \in \mathcal{H}$ such that $\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^*(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) = t$ and $\mathcal{R}_{\ell \text{cstnd}}(h) - \mathcal{R}_{\ell \text{cstnd}}^*(\mathcal{H}) + \mathcal{M}_{\ell \text{cstnd}}(\mathcal{H}) = \mathcal{T}^{\text{cstnd}}(t)$ . + +Proof. 
For the constrained loss $\ell^{\mathrm{cstnd}}$, the conditional $\ell^{\mathrm{cstnd}}$-risk can be expressed as follows:

$$
\begin{array}{l} \mathcal{C}_{\ell^{\mathrm{cstnd}}}(h, x) = \sum_{y \in \mathcal{Y}} p(x, y)\, \ell^{\mathrm{cstnd}}(h, x, y) \\ = \sum_{y \in \mathcal{Y}} p(x, y) \sum_{y^{\prime} \neq y} \Phi(-h(x, y^{\prime})) \\ = \sum_{y \in \mathcal{Y}} \Phi(-h(x, y)) \sum_{y^{\prime} \neq y} p(x, y^{\prime}) \\ = \sum_{y \in \mathcal{Y}} \Phi(-h(x, y))(1 - p(x, y)) \\ = \Phi(-h(x, y_{\max}))\left(1 - p(x, y_{\max})\right) + \Phi(-h(x, \mathsf{h}(x)))\left(1 - p(x, \mathsf{h}(x))\right) \\ \quad + \sum_{y \notin \{y_{\max}, \mathsf{h}(x)\}} \Phi(-h(x, y))(1 - p(x, y)). \end{array}
$$

For any $h \in \overline{\mathcal{H}}$ and $x \in \mathcal{X}$, by the definition of $\overline{\mathcal{H}}$, we can always find a family of hypotheses $\{h_\mu\} \subset \mathcal{H}$ such that $h_\mu(x, \cdot)$ takes the following values:

$$
h_{\mu}(x, y) = \left\{\begin{array}{ll} h(x, y) & \text{if } y \notin \{y_{\max}, \mathsf{h}(x)\} \\ h(x, y_{\max}) + \mu & \text{if } y = \mathsf{h}(x) \\ h(x, \mathsf{h}(x)) - \mu & \text{if } y = y_{\max}. \end{array}\right.
$$

Note that the hypotheses $h_\mu$ satisfy the constraint:

$$
\sum_{y \in \mathcal{Y}} h_{\mu}(x, y) = \sum_{y \in \mathcal{Y}} h(x, y) = 0, \quad \forall \mu \in \mathbb{R}.
$$

Since $h_\mu(x,y) \in [-\Lambda(x), \Lambda(x)]$, we have the following constraints on $\mu$:

$$
\begin{array}{l} -\Lambda(x) - h(x, y_{\max}) \leq \mu \leq \Lambda(x) - h(x, y_{\max}) \\ -\Lambda(x) + h(x, \mathsf{h}(x)) \leq \mu \leq \Lambda(x) + h(x, \mathsf{h}(x)). \end{array}
$$

Let $p_1 = p(x, y_{\max})$, $p_2 = p(x, \mathsf{h}(x))$, $\tau_1 = h(x, \mathsf{h}(x))$ and $\tau_2 = h(x, y_{\max})$ to simplify the notation.
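The $h_\mu$ construction can be sanity-checked numerically. In the sketch below, the values of $\Lambda$, the score vector, and the two distinguished label indices are hypothetical, not from the paper:

```python
# Toy check: shifting mu between the two distinguished labels preserves the
# zero-sum constraint, and the scores stay in [-Lambda, Lambda] exactly for mu
# in the intersection of the two derived intervals.
Lam = 1.0
h = [0.5, -0.2, 0.1, -0.4]           # h(x, .) with sum 0; index 0 plays y_max, index 1 plays h(x)
y_max, y_pred = 0, 1
tau1, tau2 = h[y_pred], h[y_max]     # tau1 = h(x, h(x)), tau2 = h(x, y_max)

def h_mu(mu):
    out = list(h)
    out[y_pred] = h[y_max] + mu      # h(x, y_max) + mu at y = h(x)
    out[y_max] = h[y_pred] - mu      # h(x, h(x)) - mu at y = y_max
    return out

lo = max(tau1, -tau2) - Lam          # left endpoint of the admissible interval for mu
hi = min(tau1, -tau2) + Lam          # right endpoint
for mu in (lo, 0.0, hi):
    v = h_mu(mu)
    assert abs(sum(v)) < 1e-12                                        # zero-sum preserved
    assert all(-Lam - 1e-12 <= z <= Lam + 1e-12 for z in v)           # bounds respected
```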
Then, the constraint on $\mu$ can be expressed as

$$
\mu \in \overline{C}, \quad \overline{C} = \left[\max\{\tau_1, -\tau_2\} - \Lambda(x), \min\{\tau_1, -\tau_2\} + \Lambda(x)\right].
$$

Since $\max\{\tau_1, -\tau_2\} - \min\{\tau_1, -\tau_2\} = |\tau_1 + \tau_2| \leq |\tau_1| + |\tau_2| \leq 2\Lambda(x)$, $\overline{C}$ is not an empty set. By the definition of $h_\mu$, we have for any $h \in \overline{\mathcal{H}}$ and $x \in \mathcal{X}$,

$$
\begin{array}{l} \mathcal{C}_{\ell^{\mathrm{cstnd}}}(h, x) - \inf_{\mu \in \overline{C}} \mathcal{C}_{\ell^{\mathrm{cstnd}}}(h_{\mu}, x) \\ = \sup_{\mu \in \overline{C}} \left\{\left(1 - p_{1}\right)\left[\Phi(-\tau_{2}) - \Phi(-\tau_{1} + \mu)\right] + \left(1 - p_{2}\right)\left[\Phi(-\tau_{1}) - \Phi(-\tau_{2} - \mu)\right]\right\} \\ = \sup_{\mu \in \overline{C}} \left\{\frac{2 - P - p_{1} + p_{2}}{2}\left[\Phi(-\tau_{2}) - \Phi(-\tau_{1} + \mu)\right] + \frac{2 - P + p_{1} - p_{2}}{2}\left[\Phi(-\tau_{1}) - \Phi(-\tau_{2} - \mu)\right]\right\} \\ \quad (P = p_{1} + p_{2} \in [\frac{1}{n-1}, 1]) \\ \geq \inf_{P \in \left[\frac{1}{n-1}, 1\right]} \inf_{\tau_{1} \geq \max\{\tau_{2}, 0\}} \sup_{\mu \in \overline{C}} \left\{\frac{2 - P - p_{1} + p_{2}}{2}\left[\Phi(-\tau_{2}) - \Phi(-\tau_{1} + \mu)\right] \right. \\ \left. \quad + \frac{2 - P + p_{1} - p_{2}}{2}\left[\Phi(-\tau_{1}) - \Phi(-\tau_{2} - \mu)\right]\right\} \quad (\tau_{1} \geq 0, \tau_{2} \leq \tau_{1}) \\ \geq \inf_{P \in \left[\frac{1}{n-1}, 1\right]} \inf_{\tau_{1} \geq \max\{\tau_{2}, 0\}} \sup_{\mu \in C} \left\{\frac{2 - P - p_{1} + p_{2}}{2}\left[\Phi(-\tau_{2}) - \Phi(-\tau_{1} + \mu)\right] \right. \\ \left.
+ \frac {2 - P + p _ {1} - p _ {2}}{2} \left[ \Phi (- \tau_ {1}) - \Phi (- \tau_ {2} - \mu) \right] \right\} \\ (C = \left[ \max \left\{\tau_ {1}, - \tau_ {2} \right\} - \Lambda_ {\min }, \min \left\{\tau_ {1}, - \tau_ {2} \right\} + \Lambda_ {\min } \right] \subset \overline {{C}} \text {s i n c e} \Lambda_ {\min } \leq \Lambda (x)) \\ = \inf _ {P \in \left[ \frac {1}{n - 1}, 1 \right]} \inf _ {\tau_ {1} \geq \max \{\tau_ {2}, 0 \}} \left\{\frac {2 - P - p _ {1} + p _ {2}}{2} \Phi (- \tau_ {2}) + \frac {2 - P + p _ {1} - p _ {2}}{2} \Phi (- \tau_ {1}) \right. \\ \left. - \inf _ {\mu \in C} \left\{\frac {2 - P - p _ {1} + p _ {2}}{2} \Phi (- \tau_ {1} + \mu) + \frac {2 - P + p _ {1} - p _ {2}}{2} \Phi (- \tau_ {2} - \mu) \right\} \right\} \\ = \mathbb {T} ^ {\text {c s t n d}} \left(p _ {1} - p _ {2}\right) \\ = \mathcal {T} ^ {\text {c s t n d}} \left(\Delta \mathcal {C} _ {\ell_ {0 - 1}, \mathcal {H}} (h, x)\right). \tag {by Lemma 1} \\ \end{array} +$$ + +Note that for $n = 2$ , an additional constraint $\tau_{1} + \tau_{2} = 1$ is imposed and the expression can be simplified as + +$$ +\begin{array}{l} \mathcal {C} _ {\ell \text {c s t n d}} (h, x) - \inf _ {\mu \in \overline {{C}}} \mathcal {C} _ {\ell \text {c s t n d}} (h _ {\mu}, x) \\ \geq \inf _ {\tau \geq 0} \sup _ {\mu \in [ \tau - \Lambda_ {\min }, \tau + \Lambda_ {\min } ]} \left\{\frac {1 - p _ {1} + p _ {2}}{2} \left[ \Phi (\tau) - \Phi (- \tau + \mu) \right] + \frac {1 + p _ {1} - p _ {2}}{2} \left[ \Phi (- \tau) - \Phi (\tau - \mu) \right] \right\} \\ = \mathfrak {T} ^ {\operatorname {c s t n d}} \left(p _ {1} - p _ {2}\right) \\ = \mathcal {T} ^ {\text {c s t n d}} \left(\Delta \mathcal {C} _ {\ell_ {0 - 1}, \mathcal {H}} (h, x)\right). 
\tag {by Lemma 1} \\ \end{array} +$$ + +Since $\mathfrak{T}^{\mathrm{cstnd}}$ is convex, by Jensen's inequality, we obtain for any hypothesis $h\in \mathcal{H}$ and any distribution, + +$$ +\begin{array}{l} \mathfrak {T} ^ {\text {c s t n d}} \left(\mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell_ {0 - 1}} (\mathcal {H})\right) \\ = \mathcal {T} ^ {\text {c s t n d}} \left(\underset {X} {\mathbb {E}} \left[ \Delta \mathcal {C} _ {\ell_ {0 - 1}, \mathcal {H}} (h, x) \right]\right) \\ \leq \mathbb {E} _ {X} \left[ \mathcal {T} ^ {\text {c s t n d}} \left(\Delta \mathcal {C} _ {\ell_ {0 - 1}, \mathcal {H}} (h, x)\right) \right] \\ \leq \mathbb {E} _ {X} \left[ \Delta \mathcal {C} _ {\ell^ {\text {c s t n d}}, \mathcal {H}} (h, x) \right] \\ = \mathcal {R} _ {\ell \mathrm {c s t n d}} (h) - \mathcal {R} _ {\ell \mathrm {c s t n d}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell \mathrm {c s t n d}} (\mathcal {H}). \\ \end{array} +$$ + +Let $n = 2$ . For any $t \in [0,1]$ , we consider the distribution that concentrates on a singleton $\{x\}$ and satisfies $p(x,1) = \frac{1 + t}{2}$ , $p(x,2) = \frac{1 - t}{2}$ . For any $\epsilon > 0$ , by the definition of infimum, we can take $h \in \mathcal{H}$ such that $h(x,2) = \tau_{\epsilon} \geq 0$ and satisfies + +$$ +\sup _ {\mu \in [ \tau_ {\epsilon} - \Lambda_ {\min }, \tau_ {\epsilon} + \Lambda_ {\min } ]} \left\{\frac {1 - t}{2} [ \Phi (\tau_ {\epsilon}) - \Phi (- \tau_ {\epsilon} + \mu) ] + \frac {1 + t}{2} [ \Phi (- \tau_ {\epsilon}) - \Phi (\tau_ {\epsilon} - \mu) ] \right\} < \mathcal {T} ^ {\mathrm {c s t n d}} (t) + \epsilon . 
+$$ + +Then, + +$$ +\begin{array}{l} \mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathcal {R} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell_ {0 - 1}} (\mathcal {H}) = \mathcal {R} _ {\ell_ {0 - 1}} (h) - \mathbb {E} _ {X} \left[ \mathcal {C} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}, x) \right] \\ = \mathcal {C} _ {\ell_ {0 - 1}} (h, x) - \mathcal {C} _ {\ell_ {0 - 1}} ^ {*} (\mathcal {H}, x) \\ = t \\ \end{array} +$$ + +and + +$$ +\begin{array}{l} \mathcal {T} ^ {\text {c s t n d}} (t) \leq \mathcal {R} _ {\ell^ {\text {c s t n d}}} (h) - \mathcal {R} _ {\ell^ {\text {c s t n d}}} ^ {*} (\mathcal {H}) + \mathcal {M} _ {\ell^ {\text {c s t n d}}} (\mathcal {H}) \\ = \mathcal {R} _ {\ell \text {c s t n d}} (h) - \mathbb {E} _ {X} \left[ \mathcal {C} _ {\ell \text {c s t n d}} ^ {*} (\mathcal {H}, x) \right] \\ = \mathcal {C} _ {\ell \text {c s t n d}} (h, x) - \mathcal {C} _ {\ell \text {c s t n d}} ^ {*} (\mathcal {H}, x) \\ = \sup _ {\mu \in \left[ \tau_ {\epsilon} - \Lambda_ {\min }, \tau_ {\epsilon} + \Lambda_ {\min } \right]} \left\{\frac {1 - t}{2} \left[ \Phi \left(\tau_ {\epsilon}\right) - \Phi (- \tau_ {\epsilon} + \mu) \right] + \frac {1 + t}{2} \left[ \Phi (- \tau_ {\epsilon}) - \Phi (\tau_ {\epsilon} - \mu) \right] \right\} \\ < \mathfrak {T} ^ {\mathrm {c s t n d}} (t) + \epsilon . \\ \end{array} +$$ + +By letting $\epsilon \to 0$ , we conclude the proof. The proof for $n > 2$ directly extends from the case when $n = 2$ . Indeed, for any $t \in [0,1]$ , we consider the distribution that concentrates on a singleton $\{x\}$ and satisfies $p(x,1) = \frac{1 + t}{2}$ , $p(x,2) = \frac{1 - t}{2}$ , $p(x,y) = 0, 3 \leq y \leq n$ . 
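A minimal numerical check of this tightness construction (the helper name and the values of $t$ and $n$ below are ours, for illustration only):

```python
def tightness_distribution(t, n):
    # distribution concentrated on a single point x, as in the proof for n > 2:
    # p(x,1) = (1+t)/2, p(x,2) = (1-t)/2, p(x,y) = 0 for 3 <= y <= n
    return [(1 + t) / 2, (1 - t) / 2] + [0.0] * (n - 2)

t, n = 0.3, 5
p = tightness_distribution(t, n)
assert abs(sum(p) - 1.0) < 1e-12 and all(q >= 0 for q in p)   # valid distribution
assert abs(p[0] - p[1] - t) < 1e-12                            # p1 - p2 = t
assert 1 / (n - 1) <= p[0] + p[1] <= 1                         # P = p1 + p2 lies in [1/(n-1), 1]
```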
For any $\epsilon > 0$, by the definition of infimum, we can take $h \in \mathcal{H}$ such that $h(x,1) = \tau_{1,\epsilon}$, $h(x,2) = \tau_{2,\epsilon}$, $h(x,y) = 0$ for $3 \leq y \leq n$, satisfying $\tau_{1,\epsilon} + \tau_{2,\epsilon} = 0$ and

$$
\begin{array}{l} \inf_{P\in \left[\frac{1}{n-1}, 1\right]}\sup_{\mu \in C}\left\{\frac{2 - P - t}{2}\big[\Phi(-\tau_{2,\epsilon}) - \Phi(-\tau_{1,\epsilon} + \mu)\big] + \frac{2 - P + t}{2}\big[\Phi(-\tau_{1,\epsilon}) - \Phi(-\tau_{2,\epsilon} - \mu)\big]\right\} \\ = \sup_{\mu \in C} \left\{\frac{1 - t}{2}\left[\Phi(-\tau_{2,\epsilon}) - \Phi(-\tau_{1,\epsilon} + \mu)\right] + \frac{1 + t}{2}\left[\Phi(-\tau_{1,\epsilon}) - \Phi(-\tau_{2,\epsilon} - \mu)\right]\right\} \\ < \mathfrak{T}^{\mathrm{cstnd}}(t) + \epsilon. \end{array}
$$

Then,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell_{0-1}}(\mathcal{H}) = t
$$

and

$$
\mathfrak{T}^{\mathrm{cstnd}}(t) \leq \mathcal{R}_{\ell^{\mathrm{cstnd}}}(h) - \mathcal{R}_{\ell^{\mathrm{cstnd}}}^{*}(\mathcal{H}) + \mathcal{M}_{\ell^{\mathrm{cstnd}}}(\mathcal{H}) < \mathfrak{T}^{\mathrm{cstnd}}(t) + \epsilon.
$$

By letting $\epsilon \to 0$, we conclude the proof.

# F.2 Constrained exponential loss

Theorem 13 ($\overline{\mathcal{H}}$-consistency bounds for constrained exponential loss). Let $\Phi(t) = e^{-t}$.
For any $h \in \overline{\mathcal{H}}$ and any distribution,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) \leq \Psi^{-1}\big(\mathcal{R}_{\ell^{\mathrm{cstnd}}}(h) - \mathcal{R}_{\ell^{\mathrm{cstnd}}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell^{\mathrm{cstnd}}}(\overline{\mathcal{H}})\big),
$$

where

$$
\Psi(t) = \left\{\begin{array}{ll} 1 - \sqrt{1 - t^2} & t \leq \frac{e^{2\Lambda_{\min}} - 1}{e^{2\Lambda_{\min}} + 1} \\ \frac{t}{2}\big(e^{\Lambda_{\min}} - e^{-\Lambda_{\min}}\big) + \frac{2 - e^{\Lambda_{\min}} - e^{-\Lambda_{\min}}}{2} & \text{otherwise.} \end{array}\right.
$$

Proof. For $n = 2$, plugging in $\Phi(t) = e^{-t}$ in Theorem 12 gives

$$
\overline{\mathcal{T}}^{\mathrm{cstnd}}(t) = \inf_{\tau \geq 0} \sup_{\mu \in [\tau - \Lambda_{\min}, \tau + \Lambda_{\min}]} \left\{\frac{1 - t}{2}[e^{-\tau} - e^{\tau - \mu}] + \frac{1 + t}{2}[e^{\tau} - e^{-\tau + \mu}]\right\}.
$$

By differentiating with respect to $\tau$, we can see that the infimum is achieved when $\tau = 0$, modulo some elementary analysis. Thus, $\overline{\mathcal{T}}^{\mathrm{cstnd}}$ can be reformulated as

$$
\begin{array}{l} \overline{\mathcal{T}}^{\mathrm{cstnd}} = \sup_{\mu \in [-\Lambda_{\min}, \Lambda_{\min}]} \left\{\frac{1 - t}{2}\left[1 - e^{-\mu}\right] + \frac{1 + t}{2}\left[1 - e^{\mu}\right]\right\} \\ = 1 + \sup_{\mu \in \left[-\Lambda_{\min}, \Lambda_{\min}\right]} g(\mu), \end{array}
$$

where $g(\mu) = -\frac{1 - t}{2} e^{-\mu} - \frac{1 + t}{2} e^{\mu}$. Since $g$ is continuous, it attains its supremum over a compact set.
Note that $g$ is concave and differentiable. In view of that, the maximum over the open set $(-\infty, +\infty)$ can be obtained by setting its gradient to zero. Differentiate $g(\mu)$ to optimize, we obtain + +$$ +g \left(\mu^ {*}\right) = 0, \quad \mu^ {*} = \frac {1}{2} \log \frac {1 - t}{1 + t} +$$ + +Moreover, by the concavity, $g(\mu)$ is non-increasing when $\mu \geq \mu^{*}$ . Since $\mu^{*} \leq 0$ and $\Lambda_{\min} \geq 0$ , we have + +$$ +\mu^ {*} \leq 0 \leq \Lambda_ {\min } +$$ + +In view of the constraint, if $\mu^{*} \geq -\Lambda_{\mathrm{min}}$ , the maximum is achieved by $\mu = \mu^{*}$ . Otherwise, if $\mu^{*} < -\Lambda_{\mathrm{min}}$ , since $g(\mu)$ is non-increasing when $\mu \geq \mu^{*}$ , the maximum is achieved by $\mu = -\Lambda_{\mathrm{min}}$ . Since $\mu^{*} \geq -\Lambda_{\mathrm{min}}$ is equivalent to $t \leq \frac{e^{2\Lambda_{\mathrm{min}} - 1}}{e^{2\Lambda_{\mathrm{min}}} + 1}$ , the maximum can be expressed as + +$$ +\max _ {\mu \in [ - \Lambda_ {\min }, \Lambda_ {\min } ]} g (\mu) = \left\{ \begin{array}{l l} g (\mu^ {*}) & t \leq \frac {e ^ {2 \Lambda_ {\min}} - 1}{e ^ {2 \Lambda_ {\min}} + 1} \\ g (- \Lambda_ {\min }) & \text {o t h e r w i s e} \end{array} \right. +$$ + +Computing the value of $g$ at these points yields: + +$$ +g \left(\mu^ {*}\right) = - \sqrt {1 - t ^ {2}} +$$ + +$$ +g \left(- \Lambda_ {\min }\right) = - \frac {1 - t}{2} e ^ {\Lambda_ {\min }} - \frac {1 + t}{2} e ^ {- \Lambda_ {\min }}. +$$ + +Then, if $t \leq \frac{e^{2\Lambda_{\min}} - 1}{e^{2\Lambda_{\min}} + 1}$ , we obtain + +$$ +\overline {{\mathcal {T}}} ^ {\mathrm {c s t n d}} = 1 - \sqrt {1 - t ^ {2}}. +$$ + +Otherwise, we obtain + +$$ +\begin{array}{l} \overline {{\mathcal {T}}} ^ {\mathrm {c s t n d}} = 1 - \frac {1 - t}{2} e ^ {\Lambda_ {\min }} - \frac {1 + t}{2} e ^ {- \Lambda_ {\min }} \\ = \frac {t}{2} \left(e ^ {\Lambda_ {\min}} - e ^ {- \Lambda_ {\min}}\right) + \frac {2 - e ^ {\Lambda_ {\min}} - e ^ {- \Lambda_ {\min}}}{2}. 
\\ \end{array}
$$

For $n > 2$, plugging $\Phi(t) = e^{-t}$ into Theorem 12 gives

$$
\overline{\mathcal{T}}^{\mathrm{cstnd}}(t) = \inf_{P \in \left[\frac{1}{n - 1}, 1\right]} \inf_{\tau_{1} \geq \max\{\tau_{2}, 0\}} \sup_{\mu \in C} \left\{\frac{2 - P - t}{2}\big[e^{\tau_{2}} - e^{\tau_{1} - \mu}\big] + \frac{2 - P + t}{2}\big[e^{\tau_{1}} - e^{\tau_{2} + \mu}\big]\right\},
$$

where $C = \left[\max\{\tau_{1}, -\tau_{2}\} - \Lambda_{\min}, \min\{\tau_{1}, -\tau_{2}\} + \Lambda_{\min}\right]$. By differentiating with respect to $\tau_{2}$ and $P$, we can see (after some elementary analysis) that the infimum is achieved at $\tau_{2} = \tau_{1} = 0$ and $P = 1$. Thus, $\overline{\mathcal{T}}^{\mathrm{cstnd}}$ can be reformulated as

$$
\begin{array}{l} \overline{\mathcal{T}}^{\mathrm{cstnd}} = \sup_{\mu \in C} \left\{\frac{1 - t}{2}\left[1 - e^{-\mu}\right] + \frac{1 + t}{2}\left[1 - e^{\mu}\right]\right\} \\ = 1 + \sup_{\mu \in C} g(\mu), \end{array}
$$

where $C = \left[-\Lambda_{\min}, \Lambda_{\min}\right]$ and $g(\mu) = -\frac{1 - t}{2} e^{-\mu} - \frac{1 + t}{2} e^{\mu}$. Since $g$ is continuous, it attains its supremum over a compact set. Note that $g$ is concave and differentiable, so its maximum over the open set $(-\infty, +\infty)$ can be obtained by setting its derivative to zero. Differentiating $g$, we obtain

$$
g'(\mu^{*}) = 0, \quad \mu^{*} = \frac{1}{2} \log \frac{1 - t}{1 + t}.
$$

Moreover, by concavity, $g(\mu)$ is non-increasing for $\mu \geq \mu^{*}$. Since $\mu^{*} \leq 0$ and $\Lambda_{\min} \geq 0$, we have

$$
\mu^{*} \leq 0 \leq \Lambda_{\min}.
$$

In view of the constraint, if $\mu^{*} \geq -\Lambda_{\min}$, the maximum is achieved at $\mu = \mu^{*}$.
Otherwise, if $\mu^{*} < -\Lambda_{\min}$, since $g(\mu)$ is non-increasing for $\mu \geq \mu^{*}$, the maximum is achieved at $\mu = -\Lambda_{\min}$. Since $\mu^{*} \geq -\Lambda_{\min}$ is equivalent to $t \leq \frac{e^{2\Lambda_{\min}} - 1}{e^{2\Lambda_{\min}} + 1}$, the maximum can be expressed as

$$
\max_{\mu \in [-\Lambda_{\min}, \Lambda_{\min}]} g(\mu) = \left\{\begin{array}{ll} g(\mu^{*}) & t \leq \frac{e^{2\Lambda_{\min}} - 1}{e^{2\Lambda_{\min}} + 1} \\ g(-\Lambda_{\min}) & \text{otherwise.} \end{array}\right.
$$

Computing the value of $g$ at these points yields

$$
g(\mu^{*}) = -\sqrt{1 - t^{2}},
$$

$$
g(-\Lambda_{\min}) = -\frac{1 - t}{2} e^{\Lambda_{\min}} - \frac{1 + t}{2} e^{-\Lambda_{\min}}.
$$

Then, if $t \leq \frac{e^{2\Lambda_{\min}} - 1}{e^{2\Lambda_{\min}} + 1}$, we obtain

$$
\overline{\mathcal{T}}^{\mathrm{cstnd}} = 1 - \sqrt{1 - t^{2}}.
$$

Otherwise, we obtain

$$
\begin{array}{l} \overline{\mathcal{T}}^{\mathrm{cstnd}} = 1 - \frac{1 - t}{2} e^{\Lambda_{\min}} - \frac{1 + t}{2} e^{-\Lambda_{\min}} \\ = \frac{t}{2}\left(e^{\Lambda_{\min}} - e^{-\Lambda_{\min}}\right) + \frac{2 - e^{\Lambda_{\min}} - e^{-\Lambda_{\min}}}{2}.
\\ \end{array}
$$

Since $\overline{\mathcal{T}}^{\mathrm{cstnd}}$ is convex, by Theorem 12, for any $h \in \overline{\mathcal{H}}$ and any distribution,

$$
\mathcal{R}_{\ell_{0-1}}(h) - \mathcal{R}_{\ell_{0-1}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{0-1}}(\overline{\mathcal{H}}) \leq \Psi^{-1}\big(\mathcal{R}_{\ell_{\mathrm{cstnd}}}(h) - \mathcal{R}_{\ell_{\mathrm{cstnd}}}^{*}(\overline{\mathcal{H}}) + \mathcal{M}_{\ell_{\mathrm{cstnd}}}(\overline{\mathcal{H}})\big),
$$

where

$$
\Psi(t) = \left\{\begin{array}{ll} 1 - \sqrt{1 - t^{2}} & t \leq \frac{e^{2\Lambda_{\min}} - 1}{e^{2\Lambda_{\min}} + 1} \\ \frac{t}{2}\big(e^{\Lambda_{\min}} - e^{-\Lambda_{\min}}\big) + \frac{2 - e^{\Lambda_{\min}} - e^{-\Lambda_{\min}}}{2} & \text{otherwise.} \end{array}\right.
$$

![](images/a26b984c808b177f29149d1075280038a3acff2b5b82065d6ddbd8455b698d65.jpg)
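For completeness, the value $g(\mu^{*}) = -\sqrt{1 - t^{2}}$ used in both branches of the proof above follows by direct substitution, since $e^{\mu^{*}} = \sqrt{(1 - t)/(1 + t)}$:

$$
g(\mu^{*}) = -\frac{1 - t}{2}\sqrt{\frac{1 + t}{1 - t}} - \frac{1 + t}{2}\sqrt{\frac{1 - t}{1 + t}} = -\frac{\sqrt{1 - t^{2}}}{2} - \frac{\sqrt{1 - t^{2}}}{2} = -\sqrt{1 - t^{2}}.
$$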
# $k$-Means Clustering with Distance-Based Privacy

Alessandro Epasto

Google Research

aepasto@google.com

Vahab Mirrokni

Google Research

mirrokni@google.com

Shyam Narayanan

MIT

shyamsn@mit.edu

Peilin Zhong

Google Research

peilinz@google.com

# Abstract

In this paper, we initiate the study of Euclidean clustering with distance-based privacy. Distance-based privacy is motivated by the fact that it is often only necessary to protect the privacy of exact, rather than approximate, locations. We provide constant-approximate algorithms for $k$-means and $k$-median clustering, with additive error depending only on the attacker's precision bound $\rho$, rather than the radius $\Lambda$ of the space.
In addition, we empirically demonstrate that our algorithm performs significantly better than previous differentially private clustering algorithms, as well as naive distance-based private clustering baselines. + +# 1 Introduction + +Two of the most fundamental and widely studied problems in unsupervised machine learning are the $k$ -means and $k$ -median clustering problems. Solving these clustering problems can allow us to group together data efficiently, and hence extract valuable and concise information from massive datasets. The goal of the $k$ -means (resp., $k$ -median) clustering problem is: given a dataset $X$ of points, construct a set $C$ of $k$ centers to minimize the clustering cost $\sum_{x \in X} d(x, C)^2$ (resp., $\sum_{x \in X} d(x, C)$ ), where $d(x, C)$ represents the minimum distance between the data point $x$ and the closest center in $C$ . + +In general, machine learning and data mining algorithms are prone to leaking sensitive information about individuals who contribute data points. In certain scenarios, this can lead to severe consequences, including losses of billions of dollars [60] or even the loss of human lives [10]. Thus, providing accurate algorithms that protect data privacy has become crucial in algorithm design. Over the past decade, the notion of differential privacy (DP) [31] has emerged as the gold standard for privacy-preserving algorithms, both in theory and in practice, and has been implemented by several major companies and the US Census [34, 68, 30, 1]. Informally, DP requires the output distribution of the algorithm to remain almost identical whenever a single data point is altered. (See Section 2 for a formal definition.) Hence, even the knowledge of all but one data point, along with the output of the algorithm, still cannot reveal significant information about the final data point. 
The importance of $k$-means and $k$-median clustering, as well as of preserving data privacy, has led to a large interest in designing differentially private clustering algorithms in Euclidean space [14, 62, 36, 47, 59, 73, 64, 65, 71, 38, 9, 63, 48, 70, 69, 44, 49, 17, 61, 19, 13, 24, 33, 25, 56]. Here, the goal is to design a differentially private set of $k$ centers, such that the clustering cost with respect to these centers is only a small factor larger than the optimal (non-private) clustering cost. Importantly, the work of [70, 44, 25] led to efficient polynomial-time and differentially private algorithms that achieve constant multiplicative approximation ratios.

While we can obtain DP algorithms with low multiplicative error, all such algorithms also require an additional additive error. If $\Lambda$ is the radius of a ball that is promised to contain all data points, even the best private clustering algorithms are known to have an additive error proportional to $\mathrm{poly}(k,d)\cdot \Lambda^p$, where $p = 2$ for $k$-means and $p = 1$ for $k$-median. This factor of $\Lambda^p$ is in fact unavoidable [47], as a single individual data point can be moved up to distance $\Lambda$, and the algorithm must preserve privacy with respect to this change. If we do not have a good bound on $\Lambda$, this factor may dominate the error and may make the clustering algorithm highly inaccurate. Even if the bound is known exactly, errors scaling with $\Lambda$ may nevertheless be unnecessary and unacceptable in certain situations.

The additive error depending on $\Lambda$ is necessary because standard differential privacy requires us to prevent anything from being learned about the location of any point. However, in practice this may not be necessary, as it might suffice to hide the location of a point only up to a certain error.
For instance, in address data, the risk is leaking the actual location, but uncertainty within a few miles in a city is sufficient to protect the privacy of the person [20]. Another motivation is in smart meters [20, Section 6.1], where accurately learning the fine-grained consumption can result in spectacular privacy leaks (e.g., learning which TV channel is being watched [46, 52]), but slight uncertainty on the measurements is sufficient to protect from such attacks. Moreover, when differential privacy is used to protect the algorithm from adversarial inputs, it is often sufficient to protect against small perturbations, as large perturbations can otherwise be detected or removed [53].

These cases can be modeled by variants of differential privacy, such as dX privacy (a.k.a. extended differential privacy) [20, 40] and pixelDP [53]. All such models are adaptations or generalizations of DP which take into account a metric over the datasets.

In this paper, we study a concrete formulation of distance-based privacy which we call $\rho$-dist-DP. Roughly speaking, an algorithm is $\rho$-dist-DP if it protects the privacy of a single data point that is moved by at most $\rho$ in a metric space. (See Section 2 for a formal definition, where we define $(\varepsilon, \delta, \rho)$-dist-DP.) This is a less restrictive version of DP, as usually the neighboring datasets are defined to be any two datasets in which a single point is allowed to move anywhere. We remark that although this notion is well-defined for any metric space, our results in this paper focus entirely on Euclidean space.

The main question we study in this paper is the following: can we obtain much better approximation results (and algorithms that are better in practice) if we only require the algorithm to resist small movements, as opposed to arbitrary movements, of a point, for instance in clustering?
In other words, can we design $\rho$-dist-DP algorithms that perform significantly better than state-of-the-art regular DP algorithms for $k$-means or $k$-median clustering?

# 1.1 Our Results

In this work, we answer the above question affirmatively, by providing an efficient and accurate theoretical algorithm, and showing empirically that our algorithm outperforms clustering algorithms with standard differential privacy.

# 1.1.1 Theoretical Results

From a theoretical perspective, we are able to obtain $O(1)$-approximate algorithms for $k$-means and $k$-median clustering with $\rho$-dist-DP, and with additive error essentially only depending on the smaller distance $\rho$ as opposed to the full radius $\Lambda$. More precisely, our main theorem is the following.

Theorem 1.1. Let $n, k, d$ be integers, $\rho \in (0, \Lambda]$, $\varepsilon, \delta \in (0, 1]$ be privacy parameters, and $p \in \{1, 2\}$. Then, given a dataset $X = \{x_1, \ldots, x_n\}$ of points in a given ball of radius $\Lambda$ in Euclidean space $\mathbb{R}^d$, there exists a polynomial-time $(\varepsilon, \delta, \rho)$-dist-DP algorithm $\mathcal{A}$ that outputs a set of centers $C = \{c_1, \ldots, c_k\}$, such that

$$
\sum_{i = 1}^{n} d(x_{i}, C)^{p} \leq O(1) \cdot \min_{\substack{C^{*} \subset \mathbb{R}^{d} \\ |C^{*}| = k}} \sum_{i = 1}^{n} d(x_{i}, C^{*})^{p} + \mathrm{poly}\left(k, d, \log n, \frac{1}{\varepsilon}, \log \frac{1}{\delta}, \log \frac{\Lambda}{\rho}\right) \cdot \rho^{p}.
$$

Here, $p = 1$ for $k$-median and $p = 2$ for $k$-means.

For more precise dependences on the parameters $k, d, 1/\varepsilon$, please see Theorem C.1.

Qualitatively, Theorem 1.1 has guarantees similar to those of [70], who also provided an $(\varepsilon, \delta)$-differentially private $O(1)$-approximation algorithm, with additive error that was
The main difference is that we drastically reduce the additive error by reducing the dependence on $\Lambda$ to a dependence on the distance privacy parameter $\rho$ + +Running time and parallel computation. The runtime of a straightforward implementation of our algorithm is $\tilde{O}(nkd) + \mathrm{poly}(k) \cdot d$ , if we also ignore polynomial factors in $\log \frac{\Lambda}{\rho}$ . By using approximate near neighbor algorithms, we can improve this further to $\tilde{O}(nd) + \mathrm{poly}(k) \cdot d$ , which for $k$ at most a small polynomial in $n$ , is nearly linear. In addition, the algorithm can be easily implemented in the massively parallel computation (MPC) model [51, 11] (an abstraction of MapReduce [28]) using $O(1)$ rounds and near linear total space where each machine has sublinear space. We discuss above in-memory algorithms and MPC algorithms further at the end of Appendix C. + +Finally we remark that the $\rho^p$ dependence in the additive error is required for ensuring $\rho$ -dist-DP. In fact, we prove in Appendix D that any $(\varepsilon, \delta, \rho)$ -dist-DP algorithm, with any finite multiplicative error, must incur $\Omega(k \cdot \rho^2)$ -additive error for $k$ -means and $\Omega(k \cdot \rho)$ -additive error for $k$ -median. + +# 1.1.2 Empirical Results + +We empirically studied the performance of our algorithm on public and real-world datasets. We compare the approximation guarantee of our algorithm with the standard DP clustering algorithm and the standard non-private $k$ -clustering algorithm. Experiments show that our algorithm outperforms the DP clustering algorithm and is only slightly worse than the non-private algorithm. In addition, we show that smaller $\rho$ provides a better approximation guarantee, which aligns with our theoretical study. We refer readers for more details of our empirical study to Section 6. 
# 1.2 Other Related Work

Distance-based Privacy: The literature on distance-based privacy has explored different data protection schemes, which we now describe in more detail. A general notion is known as dX privacy [20] (a.k.a. extended differential privacy), which includes differential privacy as a special case. This privacy notion bounds the distinguishability of two statistical datasets not just by the number of different users' inputs (i.e., their Hamming distance), but by an arbitrary $d_{\chi}$ distance between them, accounting for the magnitude of the changes to each user entry. Similar notions, such as pixelDP [53] and perceptual indistinguishability [21], are also formalizations of DP where adjacent datasets differ in a single feature of the input (e.g., a pixel) or some custom function of the data. Several algorithms have been defined for these notions, including LSH algorithms [40].

From an application point of view, much work has focused on geo-indistinguishability [3, 6, 15], i.e., preventing an adversary from distinguishing two close locations (by ensuring that close locations have similar probabilities of generating a certain output). Other areas of applicability have included protecting textual data [39, 41], private smart meter sensing [27], image obfuscation [35, 21], and mobile crowdsensing [74].

$k$-Clustering: $k$-Means and $k$-median clustering have seen a large body of work over the past few decades. While both problems are known to be NP-hard [58], a significant amount of work has given various $O(1)$-approximation algorithms for both problems [18, 8, 50, 7, 55, 16, 2, 26]. The state-of-the-art approximation is a 5.912-approximation for Euclidean $k$-means and a 2.406-approximation for Euclidean $k$-median [26]. As noted previously, there has also been significant work specifically studying differentially private $k$-means and $k$-median clustering, though to our knowledge we are the first to study distance-based private clustering.
+ +# 2 Preliminaries + +We present some basic definitions and setup that will be sufficient for explaining our algorithms for the main body of the paper. We defer some additional preliminaries to Appendix A. + +# 2.1 Differential Privacy + +First, we recall the definition of differential privacy. + +Definition 2.1. [31] A (randomized) algorithm $\mathcal{A}$ is said to be $(\varepsilon, \delta)$ -differentially private $((\varepsilon, \delta)$ -DP for short) if for any two datasets $X$ and $X'$ that differ in exactly one data point and any subset $S$ of the output space of $\mathcal{A}$ , we have + +$$ +\mathbb {P} (\mathcal {A} (X) \in S) \leq e ^ {\varepsilon} \cdot \mathbb {P} (\mathcal {A} (X ^ {\prime}) \in S) + \delta . +$$ + +In standard differential privacy, two datasets $X$ and $X'$ are adjacent if we can convert $X$ to $X'$ either by adding, removing, or changing a single data point. Notably, the change in the single data point may be arbitrary. + +In distance-based privacy, however, we only allow two datasets to be adjacent if they differ by changing (not adding or removing) a single data point, by moving it up to distance $\rho$ . Formally, we define the following. + +Definition 2.2. Let $X, X'$ be $\rho$ -adjacent if they have the same number of points and differ in exactly one data point, where the distance between the two differing data points is $\rho$ . Then, a (randomized) algorithm $\mathcal{A}$ is $(\varepsilon, \delta, \rho)$ -dist-DP if for any two $\rho$ -adjacent datasets $X$ and $X'$ and any subset $S$ of the output space of $\mathcal{A}$ , we have + +$$ +\mathbb {P} (\mathcal {A} (X) \in S) \leq e ^ {\varepsilon} \cdot \mathbb {P} (\mathcal {A} (X ^ {\prime}) \in S) + \delta . +$$ + +We remark that in all of our theoretical guarantees, we implicitly assume that $\varepsilon, \delta \leq \frac{1}{2}$ . + +The Laplace Mechanism is one of the most common primitives used to ensure privacy. 
Simply put, for a non-private statistic, the Laplace Mechanism adds noise $\mathrm{Lap}(t)$ to the statistic for some $t > 0$ where $\mathrm{Lap}(t)$ has the probability density function (PDF) equal to $\frac{1}{2t} \cdot e^{-|x| / t}$ . It is well-known that if $f(X)$ is a statistic such that $|f(X) - f(X')| \leq \Delta$ for any two adjacent datasets $X, X'$ , then $f(X) + \mathrm{Lap}(\Delta / \varepsilon)$ is $(\varepsilon, 0)$ -DP. Likewise, if $|f(X) - f(X')| \leq \Delta$ between two $\rho$ -adjacent datasets $X, X'$ , then $f(X) + \mathrm{Lap}(\Delta / \varepsilon)$ is $(\varepsilon, 0, \rho)$ -dist-DP. + +Similar to the Laplace Mechanism, we can also implement the Truncated Laplace mechanism [43] for approximating functions $f: X \to \mathbb{R}$ . The Truncated Laplace Mechanism outputs $f(X) + \mathrm{T}\mathrm{Lap}(\Delta, \varepsilon, \delta)$ , where $\mathrm{T}\mathrm{Lap}(\Delta, \varepsilon, \delta)$ is the distribution with PDF proportional to $e^{-|x| \cdot \varepsilon / \Delta}$ on the region $[-A, A]$ , where $A = \frac{\Delta}{\varepsilon} \cdot \log \left(1 + \frac{e^{\varepsilon} - 1}{2\delta}\right)$ , and PDF 0 outside the region $[-A, A]$ . Assuming $0 < \varepsilon$ and $0 < \delta \leq \frac{1}{2}$ , it is known that if $|f(X) - f(X')| \leq \Delta$ for all adjacent $X, X'$ , then this mechanism is $(\varepsilon, \delta)$ -DP, and if $\varepsilon \leq \frac{1}{2}$ this is accurate up to error $\frac{\Delta}{\varepsilon} \cdot \log \frac{1}{\delta}$ , with probability 1. + +Likewise, a nearly identical result holds for distance-based privacy. Namely, if $|f(X) - f(X')| \leq \Delta$ for any $\rho$ -adjacent datasets $X, X'$ , then $f(X) + \mathrm{TLap}(\Delta, \varepsilon, \delta)$ is $(\varepsilon, \delta, \rho)$ -dist-DP. + +We defer some additional preliminaries to Appendix A. 
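As a concrete illustration of the two mechanisms above, here is a minimal Python sketch. The function names and the rejection-sampling implementation of the truncated distribution are our own choices for illustration, not primitives from the paper.

```python
import math
import random

def sample_laplace(scale):
    # Lap(scale): exponential magnitude with mean `scale` and a random sign.
    magnitude = random.expovariate(1.0 / scale)
    return magnitude if random.random() < 0.5 else -magnitude

def laplace_mechanism(value, sensitivity, eps):
    # If |f(X) - f(X')| <= sensitivity for all rho-adjacent X, X',
    # then f(X) + Lap(sensitivity / eps) is (eps, 0, rho)-dist-DP.
    return value + sample_laplace(sensitivity / eps)

def truncated_laplace_mechanism(value, sensitivity, eps, delta):
    # TLap(sensitivity, eps, delta): Laplace noise conditioned on [-A, A] with
    # A = (sensitivity / eps) * log(1 + (e^eps - 1) / (2 * delta)), sampled
    # here by rejection; the resulting error is at most A with probability 1.
    scale = sensitivity / eps
    A = scale * math.log(1.0 + (math.exp(eps) - 1.0) / (2.0 * delta))
    while True:
        noise = sample_laplace(scale)
        if abs(noise) <= A:
            return value + noise
```

The rejection loop terminates quickly, since a Laplace sample lands in $[-A, A]$ with constant probability for these parameter ranges.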
# 2.2 $k$-Means and $k$-Median Clustering

We define $d(x,y)$ to be the Euclidean distance between two points $x$ and $y$, and for a finite subset $C \subset \mathbb{R}^d$, we define $d(x,C) = d(C,x)$ to be $\min_{c \in C} d(x,c)$. Given a dataset $X = \{x_1, \ldots, x_n\}$ of points in $\mathbb{R}^d$, and a set of centers $C = \{c_1, \ldots, c_k\}$, we define the $k$-means/$k$-median cost as

$$
\operatorname{cost}(X; C) := \sum_{x \in X} d(x, C)^{p}.
$$

Above, $p = 2$ for $k$-means and $p = 1$ for $k$-median. Finally, we define $\mathrm{OPT}_k(X)$ to be the minimum value of $\mathrm{cost}(X; C)$ over any set $C$ of $k$ points.

We further assume that the points in $X$ lie in $B(0,\Lambda)$, the ball of radius $\Lambda$ about the origin in $\mathbb{R}^d$. Our goal in $k$-means (resp., $k$-median) clustering is to find a subset $C$ of $k$ points that minimizes $\mathrm{cost}(X;C)$, i.e., such that $\mathrm{cost}(X;C)$ is as close to $\mathrm{OPT}_k(X)$ as possible. Occasionally, we may assign each point $x_i \in X$ a positive weight $w_i$, in which case we define $\mathrm{cost}(X;C) := \sum_{x_i \in X} w_i \cdot d(x_i, C)^p$.

Our goal in differentially private clustering is to produce a set of $k$ centers $C$ such that $C$ is $(\varepsilon, \delta)$-DP with respect to $X$, and such that $\mathrm{cost}(X; C) \leq \beta \cdot \mathrm{OPT}(X) + V \cdot \Lambda^p$ (where $p = 2$ for $k$-means and $p = 1$ for $k$-median), with $\beta$ and $V$ not too large. In distance-based privacy, we wish to replace the factor $\Lambda$ with some smaller $\rho$, i.e., we want $\mathrm{cost}(X; C) \leq \beta \cdot \mathrm{OPT}(X) + V \cdot \rho^p$. However, our algorithm only has to be private up to changing a single data point by up to $\rho$. If we obtain this guarantee, we say that we have a $(\beta, V)$-approximate and $(\varepsilon, \delta, \rho)$-dist-DP solution.
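The cost objective above is straightforward to compute directly; the following sketch (our own helper, not from the paper) evaluates it for both settings, including the weighted variant:

```python
import math

def clustering_cost(X, C, p=2, weights=None):
    # cost(X; C) = sum_i w_i * d(x_i, C)^p, with p = 2 for k-means
    # and p = 1 for k-median; weights default to 1.
    if weights is None:
        weights = [1.0] * len(X)
    total = 0.0
    for x, w in zip(X, weights):
        nearest = min(math.dist(x, c) for c in C)  # d(x, C)
        total += w * nearest ** p
    return total
```

For example, with `X = [(0, 0), (0, 2), (10, 0)]` and `C = [(0, 0), (10, 0)]`, the $k$-means cost is 4 and the $k$-median cost is 2.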
# 3 Technical Overview and Roadmap

We focus on proving Theorem 1.1 in Sections 4 and 5, and discuss our experimental results in Section 6. In Sections 4 and 5, we will only describe the algorithms, and we defer all formal proofs to the Supplementary sections. For simplicity, in this overview we focus on $k$-median and assume that the dimension $d = (\log n)^{O(1)}$, so that factors of $d$ can be hidden in the $\tilde{O}$ notation.

Our approach follows two high-level steps, inspired by the work of [22, 25]. The insight used in [25], which obtained highly efficient private clustering algorithms, is to start by generating a crude but private solution that may use a large number of centers and have a large approximation ratio, but has small additive error. Then, one can apply the crude solution to partition the Euclidean space $\mathbb{R}^d$ into smaller regions, and apply some regular differentially private clustering algorithm in the regions. We follow a similar high-level template to [25]. However, we still need to implement each of these steps, which requires several technical insights to ensure we maintain privacy while only losing additive error roughly proportional to $\mathrm{poly}(k,d)\cdot \rho$.

To obtain a crude approximation, we use a technique based on partitioning the space $\mathbb{R}^d$ into randomly shifted grids at various levels (also known as the Quadtree). In the Quadtree, the 0th level is a very coarse grid containing the large ball of radius $\Lambda$, and each subsequent level refines the previous level with smaller grid cells. For a single grid, given knowledge of which point lies in which grid cell, a natural approach for minimizing cost would be to output the centers of the "heaviest" cells, i.e., those containing the most points. Indeed, it is known that outputting the $O(k)$ heaviest cells at each grid level provides a good approximation, at the cost of having more than $k$ centers.
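To make the grid construction concrete, the following sketch (an illustration with our own naming, for a single grid level with one shared random shift) assigns points to cells and picks the heaviest ones:

```python
from collections import Counter

def cell_id(point, shift, side):
    # Index of the cell of side length `side` containing `point`, after
    # translating the grid by a shift shared across all points.
    return tuple(int((coord + s) // side) for coord, s in zip(point, shift))

def heaviest_cells(X, shift, side, k):
    # Non-private selection rule: keep the O(k) cells on this level that
    # contain the most points (4k here, matching the choice in Section 4).
    counts = Counter(cell_id(x, shift, side) for x in X)
    return [cell for cell, _ in counts.most_common(4 * k)]
```

With `X = [(0.1, 0.1), (0.2, 0.3), (5.5, 5.5)]`, a zero shift, and unit side length, the heaviest cell is `(0, 0)`.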
+ +While this is not DP, a natural way of ensuring privacy would be to add Laplace noise to each count and add the heaviest cells after this. Unfortunately, doing so will lead to error depending on the full radius $\Lambda$ , due to the coarser levels of the quadtree (i.e., levels with grid length close to $\Lambda$ rather than $\rho$ ). For example, if there was only a single data point, there will be at least $e^d$ cells even at coarse levels, and several of them may have large noisy counts. Hence, we are likely to choose completely random cells, which will cause additive error to behave like $\Lambda$ as opposed to $\rho$ . Another option is to add noise to the points first and then compute the heaviest cells. While this avoids additive dependence on $\Lambda$ , the additive dependence will behave like $n \cdot \rho$ where $n$ is the full size of the dataset. + +Surprisingly, we show that we can combine both of these observations in the right way. Namely, for coarse cells (i.e., with length larger than $\tilde{O}(\rho)$ ), we add noise (of distance proportional to $\tilde{O}(\rho)$ ) to the data points directly to generate private points $\tilde{x}_i$ , and then compute the heaviest cells without adding noise to the counts. For fine cells (length smaller than $\tilde{O}(\rho)$ ), we do not add noise to the data points, but we add Laplace noise to the cell counts. + +To explain the intuition behind this, suppose that the $n$ data points happen to be perfectly divided into $n / k$ clusters, where every point has distance $r$ to its nearest cluster center. If $r \gg \rho$ , then even if we add $\tilde{O}(\rho)$ noise to each data point, we will still find cluster centers that are within $\tilde{O}(r)$ of each correct center. So, the $k$ -means cost should only blow up by a small multiplicative factor, without additive error. 
Alternatively, if $r \ll \rho$, then the grid cells of side length $\tilde{O}(r)$ should contain the entire cluster, and hence have $n / k$ points in them. Assuming $n \gg d \cdot k$, even if we add Laplace noise to each of the $e^d$ cells, none of them will exceed $n / k$. Alternatively, if $n \ll d \cdot k$, then our approach of simply adding noise to the points and obtaining $n \cdot \rho$ error will give only $O(dk) \cdot \rho$ error, which is small.

In summary, we can generate a crude approximation $F$ with roughly $O(k)$ cells per grid level (and $\tilde{O}(k)$ centers total), with small additive error. But we want the number of centers to be exactly $k$, and the multiplicative ratio to be $O(1)$, whereas ours will end up being $d^{O(1)}$. To achieve such an accurate result, we use $F$ to partition the data into regions, and apply a private coreset algorithm on each. By combining these coresets together, we may obtain a private coreset of the full data, and then we can apply an $O(1)$-approximate non-private algorithm on the coreset.

A first attempt, inspired by [22, 25], is to send each $x_{i}$ to a region $S_{j}$ if $f_{j} \in F$ is the closest center to $x_{i}$, and then compute a standard (i.e., not dist-DP) private coreset on each region $S_{j}$. To avoid dealing with large additive errors depending on $\Lambda$, we further split each region into a close and a far region, depending on whether the distance from $x_{i}$ to $f_{j}$ is more than or less than $S \cdot \rho$ for some parameter $S$.

This attempt will still suffer from a large additive cost. For instance, if a point moves, even by distance $\rho$, it may move from a close region to a far region. Hence, the far region may have one more point, and since the far regions have diameter $\Lambda$, an algorithm that is private with respect to adding or deleting a point must incur error proportional to $\Lambda$.
+ +Our fix for this is to assign each $x_{i}$ to a region not based on its closest point and distance, but instead based on $\tilde{x}_i$ 's closest point and distance, where we recall that $\tilde{x}_i$ is the noisy version of $x_{i}$ . For the points $\{x_i\}$ that are mapped to a far region (meaning $\tilde{x}_i$ is far from its nearest $f_{j}$ ), we will simply use $\{\tilde{x}_i\}$ as the coreset, as $\tilde{x}_i$ is already dist-DP. However, for points that are mapped to a close region, while we use $\tilde{x}_i$ to determine which region the point $x_{i}$ is mapped to, we compute a private coreset using [70] on the points $x_{i}$ , rather than use the points $\tilde{x}_i$ . + +To explain why this algorithm is accurate, for the close regions, we obtain additive error proportional to $S \cdot \rho$ as we apply the private coreset on a ball of radius $S \cdot \rho$ . There is one region for each center in $F$ , which multiplies the additive error by $|F| = \tilde{O}(k)$ . For the far regions, we first note that $d(\tilde{x}_i, C) = d(x_i, C) \pm \tilde{O}(\rho)$ for any set of $k$ centers $C$ , as $d(x_i, \tilde{x}_i) \leq \tilde{O}(\rho)$ . Hence, we have additive error $\tilde{O}(\rho)$ per point. While this seems bad as this might induce additive error for $n$ points, we in fact show that this additive error can be "charged" to multiplicative error. To see why, if $x_i$ mapped to the far regions, this means $d(\tilde{x}_i, F) \geq \rho \cdot S$ , which also means $d(x_i, F) \geq \Omega(\rho \cdot S)$ . If there were $T$ such points, then the total cost of $X$ with respect to $F$ is at least $T \cdot \rho \cdot S$ , whereas the additive error is roughly $T \cdot \rho$ . Finally, in our crude approximation we show $\mathrm{cost}(X; F)$ is at most $d^{O(1)}$ times the optimum $k$ -means cost, which means for $S \gg d^{O(1)}$ the additive error is small even compared to the optimum cost. Hence, we can charge the additive error to multiplicative error. 
We still have additive error from the close regions, but for $S = d^{O(1)}$ , the additive error is only $\mathrm{poly}(k, d) \cdot \rho$ .
+
To summarize, while our techniques are inspired by [25], one important novel technical contribution of our work is that while [25] uses the true locations of the points to assign them to regions, we first add Gaussian noise to the points to determine their region, and then use the noised points only for the "far" regions and the true points only for the "close" regions. This change is crucial in ensuring the analysis is successful. In addition, we must set several parameters carefully to charge the additional incurred cost either to a small additive or small multiplicative factor.
+
# 4 Crude Approximation
+
In this section, we devise a crude bicriteria approximation that will serve as a starting point in developing our more refined algorithm. A bicriteria approximation is a set $F$ of $\alpha \cdot k$ points that is $(\varepsilon, \delta, \rho)$ -dist-DP in terms of $X$ , and in addition, it is a $(\beta, V)$ -approximation, i.e., $\mathrm{cost}(X; F) \leq \beta \cdot \mathrm{OPT}_k(X) + V \cdot \rho^p$ , where $p = 1$ for $k$ -median and $p = 2$ for $k$ -means. Even though $F$ has more than $k$ points, we still compare to the optimal solution with exactly $k$ points. We will show such an algorithm with $\alpha = \mathrm{poly}(\log n, \log \frac{\Lambda}{\rho})$ , $\beta = \mathrm{poly}(d)$ , and $V = \mathrm{poly}(k, d, \varepsilon^{-1}, \log \delta^{-1}, \log n)$ . We defer the formal theorem statement, along with the proof, to Appendix B.
+
Algorithm Description: The algorithm works as follows. For each $i \leq n$ , let $\tilde{x}_i$ be generated by adding $O\left(\frac{\rho}{\varepsilon} \cdot \sqrt{\log(1 / \delta)}\right) \cdot \mathcal{N}(0, I)$ noise to each data point $x_i$ . Let $\tilde{X} = \{\tilde{x}_1, \dots, \tilde{x}_n\}$ .
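The per-point noising step above can be sketched in a few lines of numpy. This is a minimal illustration: `noise_points` is a hypothetical helper, and the noise scale follows the Gaussian-mechanism calibration stated in Proposition A.1.

```python
import numpy as np

def noise_points(X, rho, eps, delta, rng=None):
    """Per-point Gaussian mechanism: x_i -> x_i + (rho/eps)*sqrt(2*log(1.25/delta)) * N(0, I).

    Releasing the noised points is (eps, delta, rho)-dist-DP (cf. Proposition A.1)."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    sigma = rho * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return X + sigma * rng.standard_normal(X.shape)

# Noise a toy dataset with the privacy parameters used in the experiments.
X = np.array([[0.0, 0.0], [1.0, 1.0]])
X_tilde = noise_points(X, rho=0.05, eps=1.0, delta=1e-6, rng=0)
```

Note that the noise scale is proportional to $\rho$ rather than $\Lambda$, which is the source of the improved additive error throughout the paper.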
+
We create $REP = O(\log n)$ random quadtrees starting from the top level with side length $\Lambda$ (the full diameter of the pointset) down to the bottom level of side length $\rho / B$ , for some parameter $B$ . Next, for some parameter $A$ , for each level with side length between $\rho \cdot A$ and $\rho / B$ , we count how many points are in each cell, add $\mathrm{TLap}(1 / \varepsilon', 1 / \delta')$ noise, where $\varepsilon' = \Theta(\varepsilon / \sqrt{\log n \log(A \cdot B) \log(1 / \delta)})$ and $\delta' = \Theta(\delta / (\log n \log(A \cdot B)))$ , and then pick the $4k$ cells in that level with the largest noisy counts. For the levels of side length more than $\rho \cdot A$ , we count how many of the $\tilde{x}_i$ points are in each cell and then pick the $4k$ cells in that level with the largest number of points in $\tilde{X}$ . Our final algorithm simply outputs the union of all cell centers that we have picked.
+
One issue is that the number of cells is exponential in $d$ , so adding noise to each cell count may be inefficient. To fix this, we will only add $\mathrm{TLap}(1 / \varepsilon', 1 / \delta')$ noise to cells that were nonempty, and will only pick a cell center if its noisy count is at least $\frac{K}{\varepsilon'} \log \frac{1}{\delta}$ , for some large constant $K$ . Since an empty cell, even after adding noise to its count, can never exceed $\frac{K}{\varepsilon'} \log \frac{1}{\delta}$ , we can pretend we did the same procedure to the empty cells, but simply never included them. It is straightforward to verify that every other step of the algorithm is implementable in polynomial time.
+
We provide pseudocode for the algorithm in Algorithm 2 in Appendix B, and we discuss the runtime at the end of Appendix C.
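The sparsity trick for a single grid level can be sketched as follows. This is a simplified illustration, not the paper's exact procedure: plain Laplace noise stands in for $\mathrm{TLap}$, the grid is axis-aligned without random shifts, and `noisy_heavy_cells` and its parameters are hypothetical.

```python
import numpy as np

def noisy_heavy_cells(points, side, eps_p, threshold, k, rng=None):
    """One grid level of the crude approximation (simplified sketch).

    Counts points per cell of the given side length, adds noise only to the
    nonempty cells, and keeps up to 4k cells whose noisy count clears the
    threshold. Empty cells can be skipped: their (truncated) noisy counts would
    stay below the threshold anyway, so the output is unaffected while the
    running time stays polynomial rather than exponential in d."""
    rng = np.random.default_rng(rng)
    counts = {}
    for p in points:
        cell = tuple(int(np.floor(c / side)) for c in p)
        counts[cell] = counts.get(cell, 0) + 1
    # Plain Laplace noise as a stand-in for TLap(1/eps', 1/delta').
    noisy = {cell: n + rng.laplace(scale=1.0 / eps_p) for cell, n in counts.items()}
    heavy = sorted((c for c, n in noisy.items() if n >= threshold),
                   key=lambda c: -noisy[c])
    return heavy[:4 * k]

# A dense cluster near the origin plus one straggler: only the dense cell survives.
pts = [(0.001 * i, 0.0) for i in range(100)] + [(5.5, 5.5)]
heavy = noisy_heavy_cells(pts, side=1.0, eps_p=1.0, threshold=10.0, k=2, rng=0)
```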
+
# 5 From Crude to Accurate
+
In this section, we devise an improved approximation that only uses $k$ centers and achieves a constant approximation ratio, using the crude approximation from Section 4 as a starting point. We will subsequently prove Theorem 1.1. Again, we defer all proof details to Appendix C.
+
Our approach utilizes both the crude approximation from Section 4 and previously known constant-approximation differentially private (but not dist-DP) algorithms from the literature, to create a dist-DP "semi-coreset" for clustering. More formally, given a set of $n$ points $X = \{x_{1},\ldots ,x_{n}\} \in \mathbb{R}^{d}$ , we will compute a (weighted) set of points $Y$ that is $(\varepsilon ,\delta ,\rho)$ -dist-DP with respect to $X$ , such that for any set of $k$ centers $C = \{c_1,\dots ,c_k\}$ , $\mathrm{cost}(Y;C) = \Theta (\mathrm{cost}(X;C))\pm O(\mathrm{OPT}_k(X))\pm W\cdot \rho^p$ , where $W$ will be polynomial in $d,k,\varepsilon^{-1},\log \delta^{-1},\log n$ , and $\log \frac{\Lambda}{\rho}$ .
+
If we can achieve this, then we just have to compute an $O(1)$ -approximate $k$ -means (or $k$ -median) solution to $Y$ , which does not have to be private since $Y$ already is. Indeed, one can prove that an $O(1)$ -approximate solution for $Y$ will be a dist-DP $(O(1), O(W))$ -approximate solution for $X$ .
+
Algorithm Description: Our algorithm works as follows. First, for each point $x_{i} \in X$ , add $O\left(\frac{\rho}{\varepsilon} \cdot \sqrt{\log(1 / \delta)}\right) \cdot \mathcal{N}(0, I)$ noise to get a point $\tilde{x}_{i}$ . (Recall: this was also done for the crude approximation.)
+
Next, we partition the set of points into regions, using our dist-DP bicriteria approximation $F$ from Section 4. If $d(\tilde{x}_i,F) > \rho \cdot S$ for some parameter $S$ , we send the noised point $\tilde{x}_i$ to the set $\tilde{X}_0$ , and send the index $i$ to the index set $I_0$ .
Else, if $\tilde{x}_i$ is closest to center $f_j$ (for $j \leq \alpha \cdot k$ ), we send the true point $x_i$ to the set $\hat{X}_j$ , and send $i$ to the index set $I_j$ . In fact, for all $j$ including $j = 0$ , we may define $\hat{X}_j$ to be the set $\{x_i : i \in I_j\}$ . Note that the sets $\hat{X}_j$ form a partition of the dataset $X$ . For each $j \geq 1$ , we will define the region $R_j$ as the ball of radius $O(\rho \cdot S)$ around $f_j$ .
+
For each $0 \leq j \leq \alpha \cdot k$ , we let $\hat{n}_j$ be the number of indices in $I_j$ . Note that this equals the number of points mapped to $\hat{X}_j$ . If $\hat{n}_j < T$ for some parameter $T$ , then we define $\tilde{X}_j$ to be the corresponding points $\{\tilde{x}_i : i \in I_j\}$ . Otherwise, we apply the private semi-coreset algorithm from [70] to find a private semi-coreset $\tilde{X}_j$ of the dataset $\hat{X}_j$ , with respect to the ball $B(f_j, \rho \cdot O(S / \gamma))$ for some parameter $\gamma < 1$ . We then merge all the semi-coresets $\tilde{X}_j$ together, which includes $\tilde{X}_0$ defined in the previous paragraph, to obtain $\tilde{X}$ . Finally, we may apply any $O(1)$ -approximate (non-private) clustering to $\tilde{X}$ .
+
We provide pseudocode for the algorithm in Algorithm 1.
+
# 6 Empirical Evaluation
+
In this section, we study the empirical performance of our $\rho$ -dist-DP $k$ -means clustering algorithm.
+
Datasets.
We evaluate our algorithm on six well-known public datasets: brightkite $(51406\times 2)$ , gowalla $(107092\times 2)$ , shuttle $(58000\times 10)$ , skin [12] $(245057\times 4)$ , rangequeries [67] $(200000\times 6)$ and $s$ -sets [42] $(5000\times 2)$ , where brightkite and gowalla are datasets of geographic locations (latitude and longitude) of users and can be found in the Stanford Large Network Dataset Collection (SNAP) [54], shuttle, skin and rangequeries are non-geographic datasets and can be found on the UCI Repository [29],
+
Algorithm 1 Main Algorithm: dist-DP $k$ -means (resp., $k$ -median) clustering
+1: Input: Parameters $n, d, k, \varepsilon, \delta, \rho$ , dataset $X = \{x_{1}, \ldots, x_{n}\} \subset \mathbb{R}^{d}$ , crude private bicriteria approximation $F = \{f_{1}, \ldots, f_{\alpha \cdot k}\} \subset \mathbb{R}^{d}$ .
+2: Output: Improved private approximation $C = \{c_{1}, \ldots, c_{k}\} \subset \mathbb{R}^{d}$ .
+3: Initialize $S = O\left(\frac{1}{\varepsilon} \cdot \sqrt{(d + \log n) \cdot \log(1 / \delta) \cdot d^{3}}\right)$ , $T = O\left(\frac{k \log^{2} n \log(1 / \delta) + k \sqrt{d \log(1 / \delta)}}{\varepsilon}\right)$ .
+4: Create array arr[1:n] and initialize $\tilde{X}_{0} = \emptyset$ .
+5: for $i = 1$ to $n$ do
+6: $\tilde{x}_{i} := x_{i} + \frac{\rho \cdot \sqrt{2 \log(1.25 / \delta)}}{\varepsilon} \cdot \mathcal{N}(0, I)$ .
+7: if $d(\tilde{x}_{i}, F) \leq \rho \cdot S$ then
+8: arr[i] = $\arg\min_{j} d(\tilde{x}_{i}, f_{j})$ .
+9: else
+10: $\tilde{X}_{0} = \tilde{X}_{0} \cup \{\tilde{x}_{i}\}$
+11: for $j = 1$ to $\alpha \cdot k$ do
+12: $\hat{X}_{j} = \{x_{i} : arr[i] = j\}$ , and $\hat{n}_j = |\hat{X}_j|$ .
+13: if $\hat{n}_j < T$ then
+14: $\tilde{X}_{j} = \{\tilde{x}_i : arr[i] = j\}$ .
+15: else
+16: Compute $\tilde{X}_{j}$ by applying a DP $k$ -means (resp., $k$ -median) semi-coreset algorithm (such as from Lemma A.10) to $\hat{X}_j$ with respect to $B(f_j, \rho \cdot S / \gamma)$ , for some fixed $\gamma \leq \frac{1}{2}$ .
+17: $\tilde{X} = \bigcup_{\ell=0}^{\alpha \cdot k} \tilde{X}_{\ell}$
+18: Return a non-private $k$ -means (resp., $k$ -median) approximate solution with respect to $\tilde{X}$ .
+
and s-sets is another non-geographic dataset and can be found in the clustering benchmark dataset2. For each dataset, we preprocess it to make it fit into $[-1, 1]^d$ . We refer readers to Appendix E for more details of the preprocessing steps.
+
Setup. We compare our algorithm described in Algorithm 1 in Section 5 with three other algorithms. We report the $k$ -means cost of all algorithms. In all plots, the label of our algorithm is "dist-DP $k$ -means". The three baseline algorithms we compare against are as follows.
+
1. Non-private baseline ( $k$ -means++): We compare our algorithm with the non-private $k$ -means solver using $k$ -means++ seeding implemented by the Python scikit-learn package [66]. The output $k$ -means cost of this baseline can be regarded as the ground-truth cost.
2. DP baseline (DP $k$ -means): This is a $k$ -means clustering algorithm in the standard DP setting implemented as part of a standard open-source DP library3.
3. $\rho$ -Dist-DP baseline (dist-DP random points): Finally, we also compare with a natural $\rho$ -dist-DP algorithm described as follows. We run the non-private $k$ -means solver on the noised dataset $\tilde{X}$ described in Section 4. Since $\tilde{X}$ is a $\rho$ -dist-DP version of $X$ , the output centers are $\rho$ -dist-DP. Note that since the final solution of this baseline only depends on $\tilde{X}$ , we assign the entire privacy budget $(\varepsilon, \delta)$ to computing $\tilde{X}$ .
+
In all experiments, we fix the privacy parameters $\varepsilon = 1$ , $\delta = 10^{-6}$ . These parameter settings are standard in many other DP papers as well. We evaluate our algorithms for different choices of the privacy parameter $\rho$ . Note that the parameter $\rho$ should not be determined by our algorithm.
We try different $\rho$ to show how the choice of $\rho$ affects the clustering quality. We refer readers to Section 7 for more discussion of the choice of $\rho$ .
+
We use the DP coreset implementation provided by the DP baseline for the computation of the semi-coresets $\tilde{X}_j$ described in Section 5.
+
Our Results. We run all algorithms for $k = 4,6,8,12,16$ . For each experiment, we repeat 10 times and report the mean and the standard error. In the experiments shown in Figure 1, we fix $\rho = 0.05$ . As shown, the $k$ -means cost of our dist-DP $k$ -means algorithm is always smaller than the cost of the DP $k$ -means baseline and is only slightly worse than that of the non-DP baseline, which is as expected. The dist-DP baseline introduces a large $k$ -means cost, which implies that our partitioning strategies described in Section 4 and Section 5 are indeed necessary and can improve the clustering quality significantly in practice. Finally, we fix $k = 8$ and investigate how changing $\rho$ affects the $k$ -means cost of our dist-DP $k$ -means algorithm. We run our algorithm on all datasets for $\rho = 1, 0.08, 0.008, 0.0001$ . As shown in Figure 2, the $k$ -means cost of our algorithm decreases as $\rho$ decreases, which is as expected. As for running time, though we did not optimize our implementation, each algorithm runs within at most a few minutes in single-threaded mode.
+
In summary, for a reasonable range of $\rho$ , we significantly outperform previous DP $k$ -means algorithms, whereas more naive distance-based DP algorithms perform far worse. In addition, we have comparable approximation guarantees even to the non-private $k$ -means algorithm.
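The dist-DP random-points baseline from the experimental setup (noise every point, then cluster the noised set non-privately) can be sketched as follows. This is a simplified stand-in: `lloyd` and `dist_dp_baseline` are hypothetical helpers, and a small deterministic Lloyd-style solver replaces the scikit-learn $k$-means++ solver used in the actual experiments.

```python
import numpy as np

def lloyd(points, k, iters=20):
    """Tiny stand-in for a non-private k-means solver: farthest-point
    initialization followed by a few Lloyd iterations."""
    centers = [points[0]]
    for _ in range(k - 1):
        d = np.min([((points - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(points[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def dist_dp_baseline(X, k, rho, eps, delta, rng=None):
    """dist-DP baseline: release noised points via the Gaussian mechanism, then
    cluster them. By post-processing, the centers are (eps, delta, rho)-dist-DP."""
    rng = np.random.default_rng(rng)
    sigma = rho * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return lloyd(X + sigma * rng.standard_normal(X.shape), k)
```

Because the solver only sees the noised set, the entire privacy budget is spent on the noising step, matching the baseline's description in the setup.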
+ +![](images/8fe49f9d03eeeee27cc6a1af30a077fd5176c30666b088af6901400b8a80d4c3.jpg) + +![](images/e664293cd9bcdd923dd62595873f34afe2978e71ed68b9552b1b70fda466d3c1.jpg) + +![](images/2da66c99fdf53a014e1dd25c91525bc12dac80d650dba3bd598eb962bde72e59.jpg) + +![](images/e0dc2a0813272571ea8b0ed1ae9ad4b9c5be24efc13d1cfd06e1a8a8e4f31e22.jpg) +Figure 1: $k$ -Means cost of non-private baseline (blue), DP baseline (green), our dist-DP $k$ -means (yellow), and dist-DP baseline (gray) for different $k$ with $\rho = 0.05$ . Shades indicate $3 \times$ standard error over 10 runs. + +![](images/525ba193e57bb5c74918961c7934fc743b11e422b3a7dfdff299f838a7c8f11f.jpg) + +![](images/c8388a525a20417f428b34144165b56905c21a423accbd86dccc3cb2ac17cfdd.jpg) + +![](images/b25d26dccd76f94afaabfe9135df953acec31f75214fe2129641f20a1ad1e323.jpg) + +![](images/1bd7795c132a9e115116f3d2165205f5861b73e7925d52a9386bc466f7a0e4c2.jpg) + +![](images/7b525c79f7aedc1a9fc8960c50e2ab7ac057e24cdcd16c885a1a985978ce21e8.jpg) + +![](images/549055e245e6b11b51fa935ad7809f87a2c8b4a5ea152357e9475747fe49d09b.jpg) +Figure 2: $k$ -Means cost of dist-DP $k$ -means algorithm for various $\rho$ with $k = 8$ . Shades indicate $3 \times$ standard error over 10 runs. The result supports the interpolating nature of the parameter $\rho$ . In particular, when $\rho$ decreases, the $k$ -means cost also decreases. When $\rho = 0$ , we exactly recover the result as non-private $k$ -means++. + +![](images/f2259f17c32c3e61dafadc2c555d16676932410c388b205ead95da633b55411f.jpg) + +![](images/0cf2fbebee34fbde0e10f771abcdb3ff8263159a1338802c90a351fee1f1f931.jpg) + +# 7 Limitations and Open Problems + +In this work, we propose efficient $(\varepsilon, \delta, \rho)$ -dist-DP algorithms for $k$ -means and $k$ -median problems for any given privacy parameters $\varepsilon, \delta, \rho$ . However, the choices of $\varepsilon, \delta$ and $\rho$ remain open. 
Notice that these privacy parameters should not be determined by our algorithm, but rather by legal teams, policy makers, or other experts for different specific scenarios. This is an expert determination that is outside the scope of this paper but has been studied extensively by practitioners.
+
In proving Theorem 1.1, we obtain an additive error proportional to $k^2 \cdot \rho^2$ (ignoring polynomial factors in $d$ and logarithmic factors in the other parameters; see Theorem C.1), whereas the work of [61] has dependence $k \cdot \Lambda^2$ . This is because to improve the dependence on $\Lambda$ to a dependence on $\rho$ , we end up partitioning the data into roughly $k$ regions and must apply a separate private $k$ -means algorithm on each region, which increases the additive dependence on $k$ . Hence, a natural open question is whether one can improve the additive error's dependence on $k$ .
+
Acknowledgements: SN is supported by an NSF Graduate Fellowship (Grant No. 1745302) and a Google PhD Fellowship.
+
# References
+
[1] John M Abowd. The US Census Bureau adopts differential privacy. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2867-2867, 2018.
[2] Sara Ahmadian, Ashkan Norouzi-Fard, Ola Svensson, and Justin Ward. Better guarantees for k-means and euclidean k-median by primal-dual algorithms. SIAM J. Comput., 49(4), 2020.
[3] Mário Alvim, Konstantinos Chatzikokolakis, Catuscia Palamidessi, and Anna Pazii. Local differential privacy on metric spaces: optimizing the trade-off with utility. In 2018 IEEE 31st Computer Security Foundations Symposium (CSF), pages 262-267. IEEE, 2018.
[4] Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 459-468, 2006.
[5] Alexandr Andoni, Zhao Song, Clifford Stein, Zhengyu Wang, and Peilin Zhong.
Parallel graph connectivity in log diameter rounds. In 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 674-685. IEEE, 2018. +[6] Miguel E Andres, Nicolás E Bordenabe, Konstantinos Chatzikokolakis, and Catuscia Palamidessi. Geo-indistinguishability: Differential privacy for location-based systems. In Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security, pages 901-914, 2013. +[7] David Arthur and Sergei Vassilvitskii. k-means++: the advantages of careful seeding. In Nikhil Bansal, Kirk Pruhs, and Clifford Stein, editors, Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1027-1035. SIAM, 2007. +[8] Vijay Arya, Naveen Garg, Rohit Khandekar, Adam Meyerson, Kamesh Munagala, and Vinayaka Pandit. Local search heuristics for k-median and facility location problems. SIAM J. Comput., 33(3):544-562, 2004. +[9] Maria-Florina Balcan, Travis Dick, Yingyu Liang, Wenlong Mou, and Hongyang Zhang. Differentially private clustering in high-dimensional euclidean spaces. In International Conference on Machine Learning, pages 322–331. PMLR, 2017. +[10] Chris Baraniuk. Ashley madison: 'suicides' over website hack. BBC News, 24, 2015. +[11] Paul Beame, Paraschos Koutris, and Dan Suciu. Communication steps for parallel query processing. Journal of the ACM, 64(6):1-58, 2017. +[12] Rajen Bhatt and Abhinav Dhall. Skin segmentation dataset. UCI Machine Learning Repository, 2010. +[13] Jeremiah Blocki, Elena Grigorescu, and Tamalika Mukherjee. Differentially-private sublinear-time clustering. In IEEE International Symposium on Information Theory (ISIT), pages 332-337. IEEE, 2021. +[14] Avrim Blum, Cynthia Dwork, Frank McSherry, and Kobbi Nissim. Practical privacy: the sulq framework. In Proceedings of the twenty-fourth ACM SIGMOD-SIGACT-SIGART symposium on Principles of Database Systems (PODS), pages 128-138, 2005. +[15] Nicolas E Bordenabe, Konstantinos Chatzikokolakis, and Catuscia Palamidessi. 
Optimal geodistinguishable mechanisms for location privacy. In Proceedings of the 2014 ACM SIGSAC conference on computer and communications security, pages 251-262, 2014. +[16] Jaroslaw Byrka, Thomas W. Pensyl, Bartosz Rybicki, Aravind Srinivasan, and Khoa Trinh. An improved approximation for $k$ -median and positive correlation in budgeted optimization. ACM Trans. Algorithms, 13(2):23:1-23:31, 2017. +[17] Alisa Chang, Badih Ghazi, Ravi Kumar, and Pasin Manurangsi. Locally private k-means in one round. In International Conference on Machine Learning, pages 1441-1451. PMLR, 2021. + +[18] Moses Charikar, Sudipto Guha, Éva Tardos, and David B. Shmoys. A constant-factor approximation algorithm for the k-median problem. J. Comput. Syst. Sci., 65(1):129-149, 2002. +[19] Anamay Chaturvedi, Matthew Jones, and Huy Le Nguyen. Locally private k-means clustering with constant multiplicative approximation and near-optimal additive error. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 6167-6174. AAAI Press, 2022. +[20] Konstantinos Chatzikokolakis, Miguel E. Andres, Nicolás Emilio Bordenabe, and Catuscia Palamidessi. Broadening the scope of differential privacy using metrics. In Privacy Enhancing Technologies - 13th International Symposium (PETS), volume 7981 of Lecture Notes in Computer Science, pages 82-102. Springer, 2013. +[21] Jia-Wei Chen, Li-Ju Chen, Chia-Mu Yu, and Chun-Shien Lu. Perceptual indistinguishability-net (pi-net): Facial image obfuscation with manipulable semantics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6478-6487, 2021. +[22] Ke Chen. On coresets for k-median and k-means clustering in metric and euclidean spaces and their applications. SIAM J. Comput., 39(3):923-947, 2009. +[23] Michael B. Cohen, Yin Tat Lee, Gary L. Miller, Jakub Pachocki, and Aaron Sidford. Geometric median in nearly linear time. 
In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 9-21. ACM, 2016. +[24] Vincent Cohen-Addad, Alessandro Epasto, Silvio Lattanzi, Vahab Mirrokni, Andres Munoz, David Saulpic, Chris Schwiegelshohn, and Sergei Vassilvitskii. Scalable differentially private clustering via hierarchically separated trees. In Knowledge Discovery and Data Mining (KDD), pages 221–230, 2022. +[25] Vincent Cohen-Addad, Alessandro Epasto, Vahab Mirrokni, Shyam Narayanan, and Peilin Zhong. Near-optimal private and scalable k-clustering. In Advances in Neural Information Processing Systems, 2022. +[26] Vincent Cohen-Addad, Hossein Esfandiari, Vahab S. Mirrokni, and Shyam Narayanan. Improved approximations for euclidean k-means and k-median, via nested quasi-independent sets. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing (STOC), 2022. +[27] George Danezis, Markulf Kohlweiss, and Alfredo Rial. Differentially private billing with rebates. In Information Hiding: 13th International Conference, IH 2011, Prague, Czech Republic, May 18-20, 2011, Revised Selected Papers 13, pages 148-162. Springer, 2011. +[28] Jeffrey Dean and Sanjay Ghemawat. Mapreduce: Simplified data processing on large clusters. 2004. +[29] Dua Dheeru and Efi Karra Taniskidou. UCI machine learning repository, 2017. +[30] Bolin Ding, Janardhan Kulkarni, and Sergey Yekhanin. Collecting telemetry data privately. Advances in Neural Information Processing Systems, 30, 2017. +[31] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, Third Theory of Cryptography Conference, (TCC), volume 3876 of Lecture Notes in Computer Science, pages 265-284. Springer, 2006. +[32] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9(3-4):211-407, 2014. 
+[33] Alessandro Epasto, Andres Munoz Medina, Chris Schwiegelshohn, David Saulpic, Sergei Vassilvitskii, Silvio Lattanzi, Vahab Mirrokni, and Vincent Pierre Cohen-addad. Scalable differentially private clustering via hierarchically separated trees. In Proceedings of the ACM SIGKDD international conference on Knowledge discovery and data mining, 2022. +[34] Ülfar Erlingsson, Vasyl Pihur, and Aleksandra Korlova. Rappor: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC conference on computer and communications security, pages 1054–1067, 2014. + +[35] Liyue Fan. Practical image obfuscation with provable privacy. In 2019 IEEE international conference on multimedia and expo (ICME), pages 784-789. IEEE, 2019. +[36] Dan Feldman, Amos Fiat, Haim Kaplan, and Kobbi Nissim. Private coresets. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC), pages 361-370. ACM, 2009. +[37] Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data: Constant-size coresets for k-means, pca, and projective clustering. SIAM Journal on Computing, 49(3):601-657, 2020. +[38] Dan Feldman, Chongyuan Xiang, Ruihao Zhu, and Daniela Rus. Coresets for differentially private k-means clustering and applications to privacy in mobile sensor networks. In 2017 16th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), pages 3-16. IEEE, 2017. +[39] Natasha Fernandes, Mark Dras, and Annabelle McIver. Generalised differential privacy for text document processing. In *Principles of Security and Trust: 8th International Conference*, POST 2019, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019, Prague, Czech Republic, April 6–11, 2019, Proceedings 8, pages 123–148. Springer International Publishing, 2019. +[40] Natasha Fernandes, Yusuke Kawamoto, and Takao Murakami. Locality sensitive hashing with extended differential privacy. 
In Elisa Bertino, Haya Shulman, and Michael Waidner, editors, Computer Security - ESORICS 2021 - 26th European Symposium on Research in Computer Security, Darmstadt, Germany, October 4-8, 2021, Proceedings, Part II, volume 12973 of Lecture Notes in Computer Science, pages 563-583. Springer, 2021. +[41] Oluwaseyi Feyisetan, Borja Balle, Thomas Drake, and Tom Diethe. Privacy-and utility-preserving textual analysis via calibrated multivariate perturbations. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 178-186, 2020. +[42] Pasi Franti and Sami Sieranoja. K-means properties on six clustering benchmark datasets. Applied intelligence, 48:4743-4759, 2018. +[43] Quan Geng, Wei Ding, Ruiqi Guo, and Sanjiv Kumar. Tight analysis of privacy and utility tradeoff in approximate differential privacy. In Silvia Chiappa and Roberto Calandra, editors, The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), volume 108 of Proceedings of Machine Learning Research, pages 89–99. PMLR, 2020. +[44] Badih Ghazi, Ravi Kumar, and Pasin Manurangsi. Differentially private clustering: Tight approximation ratios. In Advances in Neural Information Processing Systems, 2020. +[45] Michael T Goodrich, Nodari Sitchinava, and Qin Zhang. Sorting, searching, and simulation in the mapreduce framework. In International Symposium on Algorithms and Computation, pages 374-383. Springer, 2011. +[46] Ulrich Greveler, Peter Glösekötterz, Benjamin Justusy, and Dennis Loehr. Multimedia content identification through smart meter power usage profiles. In Proceedings of the International Conference on Information and Knowledge Engineering (IKE), page 1. The Steering Committee of The World Congress in Computer Science, Computer ..., 2012. +[47] Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, and Kunal Talwar. Differentially private combinatorial optimization. 
In Proceedings of the twenty-first annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1106-1125. SIAM, 2010.
+[48] Zhiyi Huang and Jinyan Liu. Optimal differentially private algorithms for k-means clustering. In Proceedings of the 37th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, pages 395-408, 2018.
+[49] Matthew Jones, Huy L Nguyen, and Thy D Nguyen. Differentially private clustering via maximum coverage. In Proceedings of the AAAI Conference on Artificial Intelligence, 2021.
+
[50] Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman, and Angela Y. Wu. A local search approximation algorithm for k-means clustering. Comput. Geom., 28(2-3):89-112, 2004.
[51] Howard Karloff, Siddharth Suri, and Sergei Vassilvitskii. A model of computation for mapreduce. In Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms (SODA), pages 938-948. SIAM, 2010.
[52] Hong Yin Lam, GSK Fung, and WK Lee. A novel method to construct a taxonomy of electrical appliances based on load signatures. IEEE Transactions on Consumer Electronics, 53(2):653-660, 2007.
[53] Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy, SP 2019, pages 656-672. IEEE, 2019.
[54] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data, June 2014.
[55] Shi Li and Ola Svensson. Approximating k-median via pseudo-approximation. SIAM J. Comput., 45(2):530-547, 2016.
[56] Bar Mahpud and Or Sheffet. A differentially private linear-time fptas for the minimum enclosing ball problem. In Advances in Neural Information Processing Systems, 2022.
[57] Konstantin Makarychev, Yury Makarychev, and Ilya P. Razenshteyn. Performance of johnson-lindenstrauss transform for $k$ -means and $k$ -medians clustering.
In 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 1027-1038. ACM, 2019. +[58] Nimrod Megiddo and Kenneth J. Supowit. On the complexity of some common geometric location problems. SIAM J. Comput., 13(1):182-196, 1984. +[59] Prashanth Mohan, Abhradeep Thakurta, Elaine Shi, Dawn Song, and David Culler. Gupt: privacy preserving data analysis made easy. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, pages 349-360, 2012. +[60] Rupert Neate. Over $119 bn wiped off facebook's market cap after growth shock. The Guardian, 26, 2018. +[61] Huy L. Nguyen, Anamay Chaturvedi, and Eric Z. Xu. Differentially private k-means via exponential mechanism and max cover. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 9101-9108. AAAI Press, 2021. +[62] Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. Smooth sensitivity and sampling in private data analysis. In Proceedings of the thirty-ninth annual ACM Symposium on Theory of computing (STOC), pages 75-84, 2007. +[63] Kobbi Nissim and Uri Stemmer. Clustering algorithms for the centralized and local models. In Algorithmic Learning Theory, pages 619-653. PMLR, 2018. +[64] Kobbi Nissim, Uri Stemmer, and Salil Vadhan. Locating a small cluster privately. In Proceedings of the 35th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, pages 413-427, 2016. +[65] Richard Nock, Raphaël Canyasse, Roksana Boreli, and Frank Nielsen. k-variates++; more pluses in the k-means++. In International Conference on Machine Learning, pages 145-154. PMLR, 2016. +[66] Fabian Pedregosa, Gáel Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825–2830, 2011. + +[67] Fotis Savva, Christos Anagnostopoulos, and Peter Triantafillou. Explaining aggregates for exploratory analytics. 
In 2018 IEEE International Conference on Big Data (Big Data), pages 478-487. IEEE, 2018. +[68] Stephen Shankland. How google tricks itself to protect chrome user privacy. CNET, October, 2014. +[69] Uri Stemmer. Locally private k-means clustering. In Symposium on Discrete Algorithms (SODA), pages 548-559, 2020. +[70] Uri Stemmer and Haim Kaplan. Differentially private k-means with constant multiplicative error. In Advances in Neural Information Processing Systems, pages 5436-5446, 2018. +[71] Dong Su, Jianneng Cao, Ninghui Li, Elisa Bertino, and Hongxia Jin. Differentially private k-means clustering. In Proceedings of the sixth ACM conference on data and application security and privacy, pages 26-37, 2016. +[72] Salil Vadhan. The Complexity of Differential Privacy, pages 347-450. Springer International Publishing, 2017. +[73] Yining Wang, Yu-Xiang Wang, and Aarti Singh. Differentially private subspace clustering. Advances in Neural Information Processing Systems, 28, 2015. +[74] Zhibo Wang, Jiahui Hu, Ruizhao Lv, Jian Wei, Qian Wang, Dejun Yang, and Hairong Qi. Personalized privacy-preserving task allocation for mobile crowdsensing. IEEE Transactions on Mobile Computing, 18(6):1330-1341, 2018. + +# A Additional Preliminaries + +In this section, we state some additional definitions and preliminary results that are of use. + +We note one notation that we may abuse: for positive reals $A, B$ , we say that $C = A \pm B$ if $C \in [A - B, A + B]$ . Likewise, we may say $C = (1 \pm \gamma)B$ if $C \in [(1 - \gamma)B, (1 + \gamma)B]$ . + +# A.1 Differential Privacy + +In Section 2, we described the Laplace mechanism for approximating functions $f: X \to \mathbb{R}$ . + +Similar to the Laplace mechanism, there also exists the Gaussian mechanism, which is useful for high-dimensional functions $f: X \to \mathbb{R}^d$ . We will state a simpler version that is sufficient for our purposes. 
Namely, when the dataset consists of a single data point $x$, outputting $\tilde{x} = x + \rho \cdot \frac{\sqrt{2\log(1.25 / \delta)}}{\varepsilon} \cdot \mathcal{N}(0, I)$ satisfies $(\varepsilon, \delta, \rho)$-dist-DP, where $\mathcal{N}(0, I)$ denotes a standard $d$-dimensional Gaussian. This implicitly follows from [32, Theorem 3.22]. As a result, we have the following basic proposition.

Proposition A.1. Let $X = \{x_{1},\ldots ,x_{n}\}$ be a dataset of size $n$. Then the dataset $\{\tilde{x}_1,\dots ,\tilde{x}_n\}$, where each $\tilde{x}_i$ is drawn independently as $x_{i} + \rho \cdot \frac{\sqrt{2\log(1.25 / \delta)}}{\varepsilon}\cdot \mathcal{N}(0,I)$, is $(\varepsilon ,\delta ,\rho)$-dist-DP.

Next, we note two classic theorems regarding the privacy of composing private mechanisms (see, for instance, [32] or [72]). Note that these theorems hold for adaptive composition, which allows us to run algorithms $\mathcal{A}_1, \ldots, \mathcal{A}_k$ in sequence, where each $\mathcal{A}_i$ may treat the outputs of $\mathcal{A}_1, \ldots, \mathcal{A}_{i-1}$ as fixed public inputs.

Theorem A.2 (Basic Adaptive Composition). Let $\mathcal{A}_1, \ldots, \mathcal{A}_k$ be adaptive mechanisms on a dataset $X$ such that each $\mathcal{A}_i$ is $(\varepsilon_i, \delta_i)$-differentially private as a function of $X$, assuming that the previous outputs of $\mathcal{A}_1, \ldots, \mathcal{A}_{i-1}$ are fixed. Then, the mechanism $\mathcal{A}$ which concatenates the outputs of $\mathcal{A}_1, \ldots, \mathcal{A}_k$ is $(\sum \varepsilon_i, \sum \delta_i)$-differentially private.

Likewise, if each $\mathcal{A}_i$ is $(\varepsilon_i,\delta_i,\rho)$-dist-DP, the concatenated mechanism $\mathcal{A}$ is $(\sum \varepsilon_{i},\sum \delta_{i},\rho)$-dist-DP.

Theorem A.3 (Advanced Adaptive Composition).
Let $\mathcal{A}_1, \ldots, \mathcal{A}_k$ be adaptive mechanisms on a dataset $X$ such that each $\mathcal{A}_i$ is $(\varepsilon, \delta)$-differentially private as a function of $X$, assuming that the previous outputs of $\mathcal{A}_1, \ldots, \mathcal{A}_{i-1}$ are fixed. Then, for any $\delta' > 0$, the mechanism $\mathcal{A}$ which concatenates the outputs of $\mathcal{A}_1, \ldots, \mathcal{A}_k$ is $(\sqrt{2k\log(1/\delta')} \cdot \varepsilon + k\varepsilon(e^{\varepsilon} - 1), k\delta + \delta')$-differentially private.

Likewise, if each $\mathcal{A}_i$ is $(\varepsilon, \delta, \rho)$-dist-DP, the concatenated mechanism $\mathcal{A}$ is $(\sqrt{2k\log(1/\delta')} \cdot \varepsilon + k\varepsilon(e^{\varepsilon} - 1), k\delta + \delta', \rho)$-dist-DP.

# A.2 Clustering

In $k$-means or $k$-median clustering, given a dataset $X \subset B(0, \Lambda)$ of size $n$, we recall that our goal is to efficiently find a set of points $C$ such that $\mathrm{cost}(X; C)$ is a good approximation to $\mathrm{OPT}(X)$. In general, we wish for purely multiplicative approximations, but due to the nature of private $k$-means (and $k$-median), we will additionally have a small additive approximation term. We now define approximate $k$-means/$k$-median solutions.

Definition A.4. Suppose we are given data $X = \{x_{1},\ldots ,x_{n}\} \subset \mathbb{R}^{d}$, and implicit parameters $\rho$ and $k$. Then, for any $\beta \geq 1$, we define a set $C$ of size $k$ to be a $(\beta ,V)$-approximate solution for $X$ if $\mathrm{cost}(X;C)\leq \beta \cdot \mathrm{OPT}(X) + V\cdot \rho^p$.

We also define bicriteria solutions for $k$-means and $k$-median: here, we are allowed to output a larger center set $C$ that may have more than $k$ points, but we still compare against the optimal $k$-clustering.

Definition A.5.
Suppose we are given data $X = \{x_{1},\ldots ,x_{n}\} \subset \mathbb{R}^{d}$, and implicit parameters $\rho$ and $k$. Then, for any $\alpha ,\beta \geq 1$, a set $C$ is an $(\alpha ,\beta ,V)$-bicriteria approximate solution for $X$ if $|C|\leq \alpha \cdot k$ and $\mathrm{cost}(X;C)\leq \beta \cdot \mathrm{OPT}_k(X) + V\cdot \rho^p$.

Finally, we define coresets and semi-coresets for $k$-means (or $k$-median) clustering. A coreset of a dataset $X$, roughly speaking, is a (usually smaller) dataset $Y$ such that one can estimate a $k$-means (or $k$-median) solution of $X$ by computing the solution on $Y$. More precisely, we have the following definition.

Definition A.6. Given a dataset $X = \{x_{1},\ldots ,x_{n}\}$ and some $\gamma ,W\geq 0$, a $(\gamma ,W,\rho)$-coreset for $k$-means (resp., $k$-median) is a dataset $Y$ such that for any subset $C\subset \mathbb{R}^d$ of size $k$,

$$
\frac{1}{1 + \gamma} \cdot \mathrm{cost}(X; C) - W \cdot \rho^{p} \leq \mathrm{cost}(Y; C) \leq (1 + \gamma) \cdot \mathrm{cost}(X; C) + W \cdot \rho^{p},
$$

where $p = 2$ (resp., $p = 1$). Likewise, a $(\kappa, W, \rho)$-semi-coreset (for $\kappa, W \geq 0$) for $k$-means (resp., $k$-median) is a dataset $Y$ such that for any subset $C \subset \mathbb{R}^d$ of size $k$,

$$
\frac{1}{1 + \kappa} \cdot \mathrm{cost}(X; C) - \kappa \cdot \mathrm{OPT}_{k}(X) - W \cdot \rho^{p} \leq \mathrm{cost}(Y; C) \leq (1 + \kappa) \cdot \mathrm{cost}(X; C) + \kappa \cdot \mathrm{OPT}_{k}(X) + W \cdot \rho^{p}.
$$

# A.3 Randomly Shifted Grids

In our algorithms, we will make use of the Quadtree data structure, which is composed of randomly shifted grids and which we now describe.
This data structure has proven useful in various geometric settings beyond clustering, such as approximate near neighbor search and computing other geometric quantities such as the Earth-Mover distance and the Minimum Spanning Tree cost.

Definition A.7. A randomly shifted Quadtree is constructed as follows. We start with a top level of some size $\Lambda$ and let level 0 be a single grid cell, which is the $d$-dimensional hypercube $[- \Lambda, \Lambda]^d$. Next, we choose a uniformly random point $\nu = (\nu_1, \ldots, \nu_d) \in [-\Lambda, \Lambda]^d$, which will represent our shift vector. Now, for each level $\ell \geq 1$, we partition the region $[- \Lambda, \Lambda]^d$ into grid cells of size $\Lambda / 2^\ell$, shifted by $\nu$. In other words, each cell is of the form $[\nu_1 + a_1 \cdot \Lambda / 2^\ell, \nu_1 + (a_1 + 1) \cdot \Lambda / 2^\ell] \times \dots \times [\nu_d + a_d \cdot \Lambda / 2^\ell, \nu_d + (a_d + 1) \cdot \Lambda / 2^\ell]$, where $a_1, \ldots, a_d \in \mathbb{Z}$. We say that $\Lambda / 2^\ell$ is the grid size at level $\ell$. (We remark that we may truncate some grid cells so that they do not escape $[- \Lambda, \Lambda]^d$.) We continue this for a finite number of levels, until we reach some bottom level.

We will utilize the following fact about Quadtrees, or more specifically about the randomly shifted grid at some fixed level $\ell$.

Proposition A.8. (see Proof of Theorem B.1 in [25]) Given a randomly shifted grid of side length $20r \cdot d$, a Euclidean ball of radius $r$ (in $\mathbb{R}^d$) is split into at most 2 pieces in expectation.

# A.4 Private $k$-means

Finally, we note the result of [61] on differentially private $k$-means (and $k$-median) clustering.

Theorem A.9.
[61] There exists a polynomial-time $(\varepsilon, \delta)$-DP algorithm that, given a set $X = \{x_{1}, \ldots, x_{n}\}$ in a fixed ball of radius $\Lambda$ in $\mathbb{R}^d$, outputs a set of $k$ centers $C = \{c_{1}, \ldots, c_{k}\}$ such that

$$
\mathrm{cost}(X; C) \leq O(1) \cdot \mathrm{OPT}_{k}(X) + U \cdot \Lambda^{p},
$$

where $U = O\left(\frac{k\log^2n\log(1 / \delta) + k\sqrt{d\log(1 / \delta)}}{\varepsilon}\right)$, and $p = 2$ for $k$-means and $p = 1$ for $k$-median.

Using a slightly weaker result, [70, 25] extended this to an algorithm for generating a private semi-coreset for $k$-means or $k$-median. Given Theorem A.9, the algorithm simply computes $C = \{c_1, \ldots, c_k\}$, and gives each $c_i$ a weight which is the number of points in $X$ closest to $c_i$, plus $\mathrm{Lap}(1 / \varepsilon)$ noise. By combining Theorem A.9 and the conversion of [70, 25] (e.g., see [25, Lemma C.1], which uses a slightly weaker bound), the following is immediate.

Lemma A.10. For some $\kappa = O(1)$, there exists a polynomial-time $(\varepsilon, \delta)$-DP algorithm that, given a set $X = \{x_{1}, \ldots, x_{n}\}$ in a fixed ball of radius $R$ in $\mathbb{R}^d$, computes a $(\kappa, U, R)$-semi-coreset for $k$-means, where $U = O\left(\frac{k \log^2 n \log(1 / \delta) + k \sqrt{d \log(1 / \delta)}}{\varepsilon}\right)$.

# B Crude Approximation

In this section, we devise a crude bicriteria approximation that will serve as a starting point in developing our more refined algorithm. To recall the setup of the bicriteria problem (see Definition

# Algorithm 2 Approximate dist-DP bicriteria algorithm

1: Input: Parameters $n, d, k, \varepsilon, \delta, \Lambda, \rho$, dataset $X = \{x_{1}, \ldots, x_{n}\} \subset \mathbb{R}^{d}$.
2: Output: Crude bicriteria approximation $F = \{f_{1},\ldots ,f_{\alpha \cdot k}\} \subset \mathbb{R}^{d}$: will be $(O(\varepsilon),O(\delta),\rho)$-dist-DP.
3: Initialize $A = O(\varepsilon^{-1}\sqrt{\log\delta^{-1}}\cdot d\sqrt{d + \log n})$, $B = n$, $REP = O(\log n)$.
4: Initialize $\varepsilon' = \Theta \left( \frac{\varepsilon}{\sqrt{\log n \log(A \cdot B) \log(1 / \delta)}} \right)$ and $\delta' = \Theta \left( \frac{\delta}{\log n \log(A \cdot B)} \right)$.
5: for $i = 1$ to $n$ do
6: $\tilde{x}_i\coloneqq x_i + \frac{\rho\cdot\sqrt{2\log(1.25 / \delta)}}{\varepsilon}\cdot \mathcal{N}(0,I)$.
7: for $rep = 1$ to $REP$ do
8: Create a randomly shifted Quadtree with largest level $\ell = 0$ of side length $\Lambda$ and smallest level of side length $\rho / B$.
9: for $\ell = 0$ to $L_{1} \coloneqq \log_{2}(\Lambda / (A\rho))$ do
10: for each cell $g$ at level $\ell$ containing some $\tilde{x}_i$ do
11: count$(g) = \# \{\tilde{x}_i$ in cell $g\}$
12: Let $g_1, \ldots, g_{4k}$ be the $4k$ cells at level $\ell$ with maximum count$(g)$.
13: Add the centers of $g_{1}, \ldots, g_{4k}$ to $F$.
14: for $\ell = \log_2\left(\frac{\Lambda}{A\rho}\right) + 1$ to $L_2\coloneqq \log_2\left(\frac{B\Lambda}{\rho}\right)$ do
15: for each cell $g$ at level $\ell$ that contains some $x_{i}\in X$ do
16: count$(g) = \# \{x_{i}$ in cell $g\} +\mathrm{TLap}(1 / \varepsilon^{\prime},1 / \delta^{\prime})$
17: Let $g_1, \ldots, g_{4k}$ be the $4k$ cells at level $\ell$ with maximum count$(g)$.
18: Add each center $g_i$ to $F$, if count$(g_i) \geq \frac{K}{\varepsilon'} \log \frac{1}{\delta'}$ for some constant $K$.
19: Return $F$.

A.5), we are given a dataset $X = \{x_{1},\ldots ,x_{n}\}$, contained in a given ball of radius $\Lambda$ in $\mathbb{R}^d$. We wish to compute an $(\alpha ,\beta ,V)$-approximation that satisfies $(\varepsilon ,\delta ,\rho)$-dist-DP.
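As a minimal illustration of the second loop of Algorithm 2 (the levels at which counts are noised), the following sketch counts points per cell, perturbs each count, and keeps the heavy cells. Plain Laplace noise stands in for the truncated Laplace $\mathrm{TLap}$, and the constant `K` and the function name are ours:

```python
import numpy as np
from collections import Counter

def noisy_heavy_cells(cell_ids, k, eps_p, delta_p, K=4.0, rng=None):
    """One noisy grid level: count points per cell, add Laplace(1/eps') noise
    (standing in for TLap), and return the up-to-4k noisiest cells whose
    noisy count clears the (K/eps') * log(1/delta') threshold."""
    rng = np.random.default_rng() if rng is None else rng
    thresh = (K / eps_p) * np.log(1.0 / delta_p)
    counts = Counter(cell_ids)
    noisy = {g: c + rng.laplace(scale=1.0 / eps_p) for g, c in counts.items()}
    top = sorted(noisy, key=noisy.get, reverse=True)[:4 * k]
    return [g for g in top if noisy[g] >= thresh]
```

In the algorithm itself, `cell_ids` would be the Quadtree cell indices of the data points at the current level; any hashable cell identifier works here.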
That is, we must output a set of $\alpha \cdot k$ centers $F = \{f_1,\dots ,f_{\alpha \cdot k}\}$ such that $\mathrm{cost}(X;F)\leq \beta \cdot \mathrm{OPT}_k(X) + V\cdot \rho^p$ for some parameters $\alpha ,\beta ,V$, where $p = 1$ for $k$-median and $p = 2$ for $k$-means. In addition, $F$ should be $(\varepsilon ,\delta ,\rho)$-dist-DP with respect to $X$.

We want $\alpha, \beta, V$ to be polynomial in $d, k, \log n, \varepsilon^{-1}, \log \delta^{-1}$, and $\log \frac{\Lambda}{\rho}$. We do not wish for any polynomial dependencies on $n$, either in the approximation ratio or in the additive error.

We recall the algorithm description from Section 4. We also include the pseudocode here, as Algorithm 2.

We now focus on analyzing the privacy and accuracy of the algorithm. Formally, in this section we prove the following.

Theorem B.1. For any $0 < \varepsilon, \delta < \frac{1}{2}$ and $\rho \leq \frac{\Lambda}{2}$, there exists an $(\varepsilon, \delta, \rho)$-dist-DP $(\alpha, \beta, V)$-bicriteria approximation for $k$-median, with $\alpha = O\left(\log n \cdot (\log n + \log \frac{\Lambda}{\rho})\right)$, $\beta = O(d^{3/2})$, and $V = kd^2/\varepsilon^2 \cdot \mathrm{poly}\log (n, d, \varepsilon^{-1}, \delta^{-1})$.

Likewise, there exists an $(\varepsilon, \delta, \rho)$-dist-DP $(\alpha, \beta, V)$-bicriteria approximation for $k$-means, with $\alpha = O\left(\log n \cdot (\log n + \log \frac{\Lambda}{\rho})\right)$, $\beta = O(d^3)$, and $V = kd^4 / \varepsilon^3 \cdot \mathrm{poly}\log (n, d, \varepsilon^{-1}, \delta^{-1})$.

Analysis of Privacy: The $\tilde{x}_i$ points are $(\varepsilon, \delta, \rho)$-dist-DP, by the Gaussian Mechanism (Proposition A.1). The levels above $\rho \cdot A$ are strictly determined by the $\tilde{x}_i$.
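The Gaussian perturbation that produces the $\tilde{x}_i$ (step 6 of Algorithm 2, i.e., Proposition A.1) is a one-liner; here is a sketch under our own naming:

```python
import numpy as np

def gaussian_mechanism(X, eps, delta, rho, rng=None):
    """Release X = {x_1, ..., x_n} (rows of X) with isotropic Gaussian noise
    of scale rho * sqrt(2 log(1.25/delta)) / eps per point, as in
    Proposition A.1."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = rho * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return X + sigma * rng.standard_normal(X.shape)
```

Note that the noise scale is proportional to $\rho$, matching the dist-DP guarantee: as $\rho \to 0$, no noise is added.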
For each level of grid length between $\rho \cdot A$ and $\rho / B$, changing one data point changes at most 2 grid cells, each by 1, which implies $(2\varepsilon', 2\delta')$-DP. In addition, this happens over $O(\log(A \cdot B))$ levels and $O(\log n)$ repetitions for each. By applying the advanced composition theorem (Theorem A.3), we have that as long as $\varepsilon, \delta < 1$, for our choices of $\varepsilon', \delta'$, the composition is $(O(\varepsilon), O(\delta))$-DP.

In total, the algorithm is $(O(\varepsilon), O(\delta), \rho)$-dist-DP.

Analysis of Accuracy: Let $X = \{x_{1},\ldots ,x_{n}\}$ be our original set of points, and let $C = \{c_1,\dots ,c_k\}$ be the optimal set of $k$ centers. For any radius $r$, let $n_r$ be the number of points $x\in X$ such that $d(x,C)\geq r$. Then, it is well known that the $k$-means cost and $k$-median cost, up to an $O(1)$-multiplicative factor, equal

$$
\sum_{t \in \mathbb{Z}} 2^{2t} \cdot n_{2^{t}} \quad \text{and} \quad \sum_{t \in \mathbb{Z}} 2^{t} \cdot n_{2^{t}},
$$

respectively. For the set of centers $F$ generated, we similarly define $\hat{n}_r$ to be the number of points $x\in X$ such that $d(x,F)\geq r$.

It is well known that the magnitude of a $d$-dimensional Gaussian $\mathcal{N}(0,I)$ is bounded by $O(\sqrt{d + \log 1 / \beta})$ with failure probability $\beta$. Hence, we set $A = O(\varepsilon^{-1}\sqrt{\log\delta^{-1}}\cdot d\sqrt{d + \log n})$, so that with high probability, $\| \tilde{x}_i - x_i\|_2 \leq \rho \cdot A / (40d)$ for all $i$. Now, for any $r \geq \rho \cdot A / (40d)$, if there exist $k$ balls of radius $r$ that contain all but $n_r$ of the points in $X$, then there exist $k$ balls of radius $2r$ that contain all but $n_r$ of the points in $\tilde{X}$. In addition, given a randomly shifted grid of side length $40r \cdot d$, a ball of radius $2r$, in expectation, is split into at most 2 pieces, by Proposition A.8.
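For concreteness, one level of the randomly shifted grid from Definition A.7 can be sketched as follows (the helper name is ours); a point's cell is the tuple of its floored, shifted coordinates:

```python
import numpy as np

def grid_cell(x, side, shift):
    """Index (a_1, ..., a_d) of the cell of side length `side`, shifted by
    `shift`, that contains the point x (one level of Definition A.7)."""
    return tuple(np.floor((np.asarray(x) - shift) / side).astype(int))

# One random shift vector nu is drawn per Quadtree and reused at every level:
#   shift = rng.uniform(-Lam, Lam, size=d)
```

Counting points per cell, as in Algorithm 2, then reduces to aggregating these tuples in a dictionary.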
Therefore, by Markov's inequality, the $k$ balls of radius $2r$ are split into at most $4k$ cells with at least $50\%$ probability, which means that the top $4k$ cells at the level with side length $40r \cdot d$ contain all but at most $n_r$ points. Hence, because each cell of side length $40r \cdot d$ is contained in a ball of radius $20rd^{3/2}$ around its center, this means $\hat{n}_{20rd^{3/2}} \leq n_r$ for all $r \geq \rho \cdot A / (40d)$ with at least $50\%$ probability: repeating this $O(\log n)$ times, this holds with at least $1 - n^{-5}$ probability, even across all levels.

Next, suppose that $r \leq \rho \cdot A / (40d)$. In this case, if we did not add noise we would have $\hat{n}_{20rd^{3/2}} \leq n_r$ as in the previous case. This time, however, we add noise to the count of each cell rather than to the points themselves. In addition, we may not include a cell if its noisy count is at most $\frac{K}{\varepsilon'} \cdot \log \frac{1}{\delta'}$, but note that this means its true count is at most $\frac{2K}{\varepsilon'} \cdot \log \frac{1}{\delta'}$. Therefore, since the count of each cell is altered by $O\left(\frac{1}{\varepsilon'} \log \frac{1}{\delta'}\right)$, we have that $\hat{n}_{20rd^{3/2}} \leq n_r + O\left(\frac{k}{\varepsilon'} \cdot \log \frac{1}{\delta'}\right)$ for all $r \leq \rho \cdot \frac{A}{40d}$.

In summary, we have that $\hat{n}_{20rd^{3/2}} \leq n_r$ for $r \geq \rho \cdot O(\varepsilon^{-1}\sqrt{\log\delta^{-1}} \cdot \sqrt{d + \log n})$. In addition, for $r \leq \rho \cdot O(\varepsilon^{-1}\sqrt{\log\delta^{-1}} \cdot \sqrt{d + \log n})$, we have that $\hat{n}_{20rd^{3/2}} \leq n_r + O(k / \varepsilon' \cdot \log 1 / \delta')$. If we set $B = n$, then below $r = \rho \cdot \sqrt{d} / n$ we have $\hat{n}_r \leq n$ by default, and above $r = \rho \cdot \sqrt{d} / n$ the above bounds hold.
This implies that

$$
\begin{array}{l} \displaystyle \sum_{t \in \mathbb{Z}} 2^{t} \hat{n}_{2^{t}} \leq O(d^{3/2}) \cdot \left(\sum_{t \in \mathbb{Z}} 2^{t} n_{2^{t}}\right) + O(\rho) \cdot d^{3/2} \cdot \sum_{t \in \mathbb{Z}:\, 2^{t} \leq \varepsilon^{-1} \sqrt{\log \delta^{-1}} \cdot \sqrt{d + \log n}} 2^{t} \cdot O\left(\frac{k}{\varepsilon'} \cdot \log \frac{1}{\delta'}\right) + O\left(\rho \cdot \frac{\sqrt{d}}{n}\right) \cdot n \\ \displaystyle = O(d^{3/2}) \cdot \mathrm{OPT}_{k}(X) + O\left(\frac{kd^{2}}{\varepsilon^{2}}\right) \cdot \mathrm{poly}\log(n, d, \varepsilon^{-1}, \delta^{-1}) \cdot \rho. \end{array}
$$

Hence, we obtain a bicriteria approximation with multiplicative factor $\beta = O(d^{3/2})$ and additive error $O\left(\frac{kd^2}{\varepsilon^2}\right) \cdot \mathrm{poly}\log(n, d, \varepsilon^{-1}, \delta^{-1}) \cdot \rho$ for $k$-median. The same calculation for $k$-means gives a multiplicative factor $\beta = O(d^3)$ and additive error $O\left(\frac{kd^4}{\varepsilon^3}\right) \cdot \mathrm{poly}\log(n, d, \varepsilon^{-1}, \delta^{-1}) \cdot \rho^2$.

Finally, the number of centers we output is simple to compute. We have $O(\log n)$ repetitions, and each repetition has $O\left(\log \frac{\Lambda}{\rho / n}\right)$ levels, from each of which we select at most $4k$ cell centers. Hence, we select $O\left(k \cdot \log n \cdot (\log n + \log \frac{\Lambda}{\rho})\right)$ points, meaning that $\alpha = O\left(\log n \cdot (\log n + \log \frac{\Lambda}{\rho})\right)$.

# C From Crude to Accurate

In this section, we devise an improved approximation that only uses $k$ centers and achieves a constant approximation ratio. We will subsequently prove Theorem 1.1.

Our approach utilizes both the crude approximation from Section 4/Section B and previously known constant-approximation differentially private (but not dist-DP) algorithms from the literature.
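Throughout this section we manipulate the cost function $\mathrm{cost}(X;C) = \sum_{x\in X} d(x,C)^p$; the following small helpers (names ours) make it, and the semi-coreset inequality of Definition A.6, concrete for a single candidate center set:

```python
import numpy as np

def cost(X, C, p=2):
    """cost(X; C): sum over x in X of d(x, C)^p, where d(x, C) is the
    distance from x to its nearest center (p=2: k-means, p=1: k-median)."""
    D = np.linalg.norm(np.asarray(X)[:, None, :] - np.asarray(C)[None, :, :], axis=2)
    return float((D.min(axis=1) ** p).sum())

def semicoreset_holds(X, Y, C, kappa, opt_k, W, rho, p=2):
    """Check the (kappa, W, rho)-semi-coreset inequality of Definition A.6
    for one center set C (opt_k is OPT_k(X), supplied by the caller)."""
    lo = cost(X, C, p) / (1 + kappa) - kappa * opt_k - W * rho ** p
    hi = (1 + kappa) * cost(X, C, p) + kappa * opt_k + W * rho ** p
    return lo <= cost(Y, C, p) <= hi
```

Of course, the actual guarantee quantifies over all size-$k$ center sets $C$; this helper only tests one candidate at a time.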
We show how to combine these two ingredients to create a dist-DP semi-coreset. This idea is partially inspired by the work of [22] on (non-private) coreset constructions and, more recently, of [25] on fast private (semi-)coreset constructions. More precisely, given a set of $n$ points $X = \{x_{1},\ldots ,x_{n}\} \subset \mathbb{R}^{d}$, we will compute a (weighted) set of points $Y$ that is $(\varepsilon ,\delta ,\rho)$-dist-DP with respect to $X$, such that for any set of $k$ centers $C = \{c_1,\dots ,c_k\}$, $\mathrm{cost}(Y;C) = \Theta (\mathrm{cost}(X;C))\pm O(\mathrm{OPT}_k(X))\pm W\cdot \rho^p$, where $W$ will be polynomial in $d,k,\varepsilon^{-1},\log \delta^{-1},\log n$, and $\log \frac{\Lambda}{\rho}$.

If we can achieve this, then we just have to compute an $O(1)$-approximate $k$-means (or $k$-median) solution to $Y$, which does not have to be private since $Y$ already is. Indeed, if we do so, then the centers $C$ that we find for $Y$ satisfy

$$
\mathrm{cost}(Y; C) \leq O(1) \cdot \mathrm{cost}(Y; C^{*}) \leq O(1) \cdot \mathrm{cost}(X; C^{*}) + O(\mathrm{OPT}_{k}(X)) + O(W) \cdot \rho^{p}
$$

for any set of $k$ centers $C^*$. Hence, this implies that

$$
\mathrm{cost}(Y; C) \leq O(1) \cdot \mathrm{OPT}_{k}(X) + O(W) \cdot \rho^{p}.
$$

Finally, we also have that $\mathrm{cost}(Y;C)\geq \Omega (1)\cdot \mathrm{cost}(X;C) - O(\mathrm{OPT}_k(X)) - W\cdot \rho^p$, which means $\mathrm{cost}(X;C)\leq O(1)\cdot \mathrm{cost}(Y;C) + O(\mathrm{OPT}_k(X)) + O(W)\cdot \rho^p$, so as desired, we have

$$
\mathrm{cost}(X; C) \leq O(1) \cdot \mathrm{OPT}_{k}(X) + O(W) \cdot \rho^{p}.
$$

Hence, it suffices to prove the following theorem.

Theorem C.1.
For any $0 < \varepsilon, \delta < \frac{1}{2}$ and $\rho \leq \frac{\Lambda}{2}$, there exists an $(\varepsilon, \delta, \rho)$-dist-DP $(O(1), W, \rho)$-semi-coreset for $k$-means (resp., $k$-median), with $W = O\left(\frac{k^2d^{4.5}}{\varepsilon^3}\right) \cdot \mathrm{poly}\log \left(n, d, \frac{1}{\varepsilon}, \frac{1}{\delta}, \frac{\Lambda}{\rho}\right)$ for $k$-means and $W = O\left(\frac{k^2d^{2.5}}{\varepsilon^2}\right) \cdot \mathrm{poly}\log \left(n, d, \frac{1}{\varepsilon}, \frac{1}{\delta}, \frac{\Lambda}{\rho}\right)$ for $k$-median.

We assume we have a private $(\alpha, \beta, V)$-bicriteria approximation $F$. Recall this means we have an $(\varepsilon, \delta, \rho)$-dist-DP set of (at most) $\alpha \cdot k$ centers $F$ such that $\mathrm{cost}(X; F) \leq \beta \cdot \mathrm{OPT}_k(X) + V \cdot \rho^p$.

We recall the algorithm description and pseudocode (Algorithm 1) from Section 5. Hence, for the remainder of this section we focus on proving Theorem C.1. In addition, after proving Theorem C.1, we briefly discuss the runtime, and how to make it close to linear and parallelizable.

Analysis of Privacy: We will think of the algorithm as having 3 adaptive components. First, we must create $F$, which is $(\varepsilon, \delta, \rho)$-dist-DP, by Theorem B.1. In addition, we create $\{\tilde{x}_i\}$, which as a set is $(\varepsilon, \delta, \rho)$-dist-DP, by Proposition A.1. Note that $\tilde{X}_0$, the sets $I_j$, and the sizes $\hat{n}_j$ are also only dependent on $F$ and $\{\tilde{x}_i\}$. So, for each $j$ with $\hat{n}_j < T$, the algorithm's creation of $\tilde{X}_j$ depends only on $F$ and $\{\tilde{x}_i\}$. Finally, we must compute $\tilde{X}_j$ for each $j$ such that $\hat{n}_j \geq T$. However, each coreset is $(\varepsilon, \delta)$-DP, which also implies $(\varepsilon, \delta, \rho)$-dist-DP, and we are computing the coresets on disjoint subsets of indices, which are fixed.
So overall, computing all of the $\tilde{X}_j$ is $(\varepsilon, \delta, \rho)$-dist-DP if we fix $F$ and each $\tilde{x}_i$.

By basic adaptive composition (Theorem A.2), the overall procedure is $(3\varepsilon, 3\delta, \rho)$-dist-DP.

Analysis of Accuracy: We focus on accuracy for $k$-means; the proof for $k$-median is extremely similar.

First, note that $d(x_{i},\tilde{x}_{i}) \leq O\left(\frac{\rho\sqrt{\log(1 / \delta)}}{\varepsilon} \cdot \sqrt{d + \log n}\right)$ for all $i$. Also, note that for general positive reals $A,B$, $(A\pm B)^2 = A^2\pm 2AB + B^2$, and $2AB \leq \gamma A^{2} + \frac{1}{\gamma} B^{2}$ for any positive $\gamma$. This means that for any $0 < \gamma < 1$, $(A + B)^{2} = (1\pm \gamma)A^{2}\pm O\left(\frac{1}{\gamma}\right)B^{2}$, and $(A - B)^{2} = (1\pm \gamma)A^{2}\pm O\left(\frac{1}{\gamma}\right)B^{2}$. Hence,

$$
\begin{array}{l} \displaystyle d(\tilde{x}_{i}, C)^{2} = \left(d(x_{i}, C) \pm O\left(\frac{\sqrt{\log(1 / \delta)}}{\varepsilon} \cdot \sqrt{d + \log n}\right) \cdot \rho\right)^{2} \\ \displaystyle = (1 \pm \gamma) \cdot d(x_{i}, C)^{2} \pm O\left((d + \log n) \cdot \frac{\log(1 / \delta)}{\varepsilon^{2}} \cdot \frac{1}{\gamma}\right) \cdot \rho^{2}. \end{array} \tag{1}
$$

Therefore, for any set $C$ of size at most $k$,

$$
\sum_{i \in I_{0}} d(\tilde{x}_{i}, C)^{2} = (1 \pm \gamma) \cdot \sum_{i \in I_{0}} d(x_{i}, C)^{2} \pm O\left((d + \log n) \cdot \frac{\log(1 / \delta)}{\varepsilon^{2}} \cdot \frac{1}{\gamma}\right) \cdot \rho^{2} \cdot |I_{0}|,
$$

where we recall that $i \in I_0$ if and only if $d(\tilde{x}_i, F) > S \cdot \rho$. For $S \geq \Omega\left(\frac{\sqrt{\log(1 / \delta)}}{\varepsilon} \cdot \sqrt{d + \log n}\right)$, this implies that $d(x_i, F) \geq \frac{S}{2} \cdot \rho$.
The number of such $i$, with $d(x_{i},F)\geq \frac{S}{2}\cdot \rho$, is at most $\frac{\mathrm{cost}(X;F)}{(S / 2)^2\rho^2}\leq \frac{4\beta\cdot\mathrm{OPT}_k(X) + 4V\cdot\rho^2}{S^2\cdot\rho^2}$. Hence, apart from the $1\pm \gamma$ multiplicative error, we also incur an additional additive error of

$$
O\left((d + \log n) \cdot \frac{\log(1 / \delta)}{\varepsilon^{2}} \cdot \frac{1}{\gamma}\right) \cdot \frac{\beta}{S^{2}} \cdot \mathrm{OPT}_{k}(X) + O\left((d + \log n) \cdot \frac{\log(1 / \delta)}{\varepsilon^{2}} \cdot \frac{1}{\gamma}\right) \cdot \frac{V}{S^{2}} \cdot \rho^{2}.
$$

So, by setting $S = O\left(\frac{1}{\varepsilon \cdot \gamma} \cdot \sqrt{(d + \log n) \cdot \log(1 / \delta) \cdot \beta}\right)$, we obtain an additional $1 \pm O(\gamma)$ multiplicative error and an additive error of $O\left(\frac{\gamma}{\beta} \cdot V \cdot \rho^2\right)$. This deals with the error from points sent to $\tilde{X}_0$. To summarize, we have

$$
\sum_{i \in I_{0}} d\left(\tilde{x}_{i}, C\right)^{2} = (1 \pm O(\gamma)) \cdot \sum_{i \in I_{0}} d\left(x_{i}, C\right)^{2} \pm O\left(\frac{\gamma}{\beta} \cdot V \cdot \rho^{2}\right) \tag{2}
$$

for all sets $C$ of size at most $k$.

Next, we deal with points in $\hat{X}_j$ with $\hat{n}_j < T$. In this case, we still have that (1) holds, which means for any such $j$ and any subset $C$ of $k$ points,

$$
\sum_{i \in I_{j}} d(\tilde{x}_{i}, C)^{2} = (1 \pm \gamma) \cdot \sum_{i \in I_{j}} d(x_{i}, C)^{2} \pm O\left((d + \log n) \cdot \frac{\log(1 / \delta)}{\varepsilon^{2}} \cdot \frac{1}{\gamma} \cdot T\right) \cdot \rho^{2}, \tag{3}
$$

since $|I_j| = \hat{n}_j < T$.

Finally, we deal with the rest of the points, for which we use a regular differentially private semi-coreset algorithm on each $\hat{X}_j$.
Recall that we choose $S$ so that $\| \tilde{x}_i - x_i\|_2 \leq S \cdot \rho$ for all $i$, which means every point $x_i$ for $i \in I_j$ is in $B(f_j, 2S\rho)$, i.e., the ball of radius $2S\rho$ around $f_j$. We apply the private semi-coreset algorithm from Lemma A.10 with respect to the larger ball $B(f_j, \rho \cdot S / \gamma)$. This means that for any subset $C$ of size at most $k$ in $\mathbb{R}^d$,

$$
\mathrm{cost}\left(\tilde{X}_{j}; C\right) = \Theta(1) \cdot \mathrm{cost}\left(\hat{X}_{j}; C\right) \pm O(1) \cdot \mathrm{OPT}_{k}\left(\hat{X}_{j}\right) \pm U \cdot \left(\frac{2S\rho}{\gamma}\right)^{2}, \tag{4}
$$

where $U = O\left(\frac{k\log^2n\log(1 / \delta) + k\sqrt{d\log(1 / \delta)}}{\varepsilon}\right)$. We emphasize that Lemma A.10 holds even with respect to a center set $C$ that is not contained in the ball $B(f_j,\rho \cdot S / \gamma)$.

We now combine Equations (2), (3), and (4), setting $\gamma$ to be a fixed small constant. Since $\tilde{X}$ is the aggregation of all $\tilde{X}_j$'s for $j = 0,1,\ldots ,\alpha \cdot k$, and recalling that $\hat{X}_j = \{x_i:i\in I_j\}$, we have that

$$
\begin{array}{l} \mathrm{cost}(\tilde{X}; C) \\ = \displaystyle \sum_{j = 0}^{\alpha \cdot k} \mathrm{cost}\left(\tilde{X}_{j}; C\right) \\ = \displaystyle \Theta(1) \cdot \sum_{j = 0}^{\alpha \cdot k} \mathrm{cost}(\hat{X}_{j}; C) \pm O(1) \cdot \sum_{j = 1}^{\alpha \cdot k} \mathrm{OPT}_{k}(\hat{X}_{j}) \pm O\left(\frac{V}{\beta} + \alpha k \cdot (d + \log n) \cdot \frac{\log(1 / \delta)}{\varepsilon^{2}} \cdot T + \alpha k \cdot U \cdot S^{2}\right) \cdot \rho^{2} \\ = \displaystyle \Theta(1) \cdot \mathrm{cost}(X; C) \pm O(1) \cdot \mathrm{OPT}_{k}(X) \pm O\left(\frac{k^{2} d^{4.5}}{\varepsilon^{3}}\right) \cdot \mathrm{poly}\log\left(n, d, \frac{1}{\varepsilon}, \frac{1}{\delta}, \frac{\Lambda}{\rho}\right) \cdot \rho^{2}.
\\ \end{array}
$$

The second line is true by combining the equations and setting $\gamma$ to be a small constant. The third line is true since the $\hat{X}_j$ form a partition of $X$, and by our parameter settings of $\alpha, \beta, S, T, U, V$.

This completes the proof of Theorem C.1 in the $k$-means case. The $k$-median case follows the same analysis (though we will set $S = O\left(\frac{1}{\varepsilon \cdot \gamma} \cdot \sqrt{(d + \log n) \cdot \log(1 / \delta)} \cdot \beta\right)$ in this case), and will result in the additive term $O\left(\frac{V}{\beta} + \alpha k \cdot \sqrt{(d + \log n) \cdot \frac{\log(1 / \delta)}{\varepsilon^2}} \cdot T + \alpha k \cdot U \cdot S\right) \cdot \rho = O\left(\frac{k^2 d^{2.5}}{\varepsilon^2}\right) \cdot \mathrm{poly}\log \left(n, d, \frac{1}{\varepsilon}, \frac{1}{\delta}, \frac{\Lambda}{\rho}\right) \cdot \rho$. By combining this with our discussion at the beginning of this section, we have also proven Theorem 1.1.

Runtime: Finally, we note that this algorithm can be implemented efficiently. Indeed, in Algorithm 2, creating the points $\tilde{x}_i$, creating each Quadtree data structure, computing the counts (only for the $\tilde{x}_i$ and $x_i$ points), and adding Laplace noise all take $\tilde{O}(nd)$ time, where we also hide logarithmic factors in $\frac{\Lambda}{\rho}$. Finally, picking the heaviest cells in each grid also takes $\tilde{O}(n)$ time.

In Algorithm 1, there are two potential bottlenecks. The first is mapping each point $\tilde{x}_i$ to its closest center $f_j$, which takes $O(nd \cdot |F|) = \tilde{O}(ndk)$ time, hiding logarithmic factors in $\frac{\Lambda}{\rho}$. We additionally have to compute a private semi-coreset for the points in each of the $\alpha \cdot k$ sets of points $\hat{X}_j$. However, using the private algorithm of [25, Theorem C.2], computing an $O(1)$-approximate private semi-coreset can be done in time $\tilde{O}(\hat{n}_jd) + \mathrm{poly}(k) \cdot d$.
Note that $\hat{n}_j = |\hat{X}_j|$ and $\sum_j \hat{n}_j = n$. Hence, because $\alpha = O\left(\log \frac{\Lambda}{\rho}\right)$ (hiding logarithmic factors in $n$), the overall algorithm, apart from the assignment of each $x_i$ to $\hat{X}_j$, takes $\tilde{O}(nd) + \mathrm{poly}(k) \cdot d$ time, hiding logarithmic factors in $\frac{\Lambda}{\rho}$.

To improve the $\tilde{O}(nkd)$ term to $\tilde{O}(nd)$, we may use a $K = O(\log n)$-approximate nearest neighbor data structure to map each $\tilde{x}_i$ to a $K$-approximate nearest neighbor $f_j \in F$. Using the locality-sensitive hashing algorithm of [4], we can compute a $K$-approximate nearest neighbor of each $\tilde{x}_i$ in $\tilde{O}(nd)$ total time. We remark that the privacy analysis is unchanged, and the accuracy analysis is similar, up to a slightly worse additive approximation. Namely, if the points of $\hat{X}_j$ were previously within distance $O(S)$ of the center $f_j$, they may now have distance $O(S \cdot K)$. Hence, the semi-coreset computation has to be done with respect to a ball of radius $O(\rho \cdot S \cdot K / \gamma)$, but for $K = O(\log n)$ and constant $\gamma$, this affects the additive error by at most an $O(\log^2 n)$ factor.

Hence, we can compute a private semi-coreset in $\tilde{O}(nd) + \mathrm{poly}(k) \cdot d$ time. Finally, we need to compute an offline (non-private) $k$-means (or $k$-median) approximation. As this is not related to private clustering, we simply sketch how this can be done.

First, in near-linear ($\tilde{O}(nd)$) time, the method of [22] computes an $O(1)$-approximate coreset $C$ of size $\mathrm{poly}(k,\log n)\cdot d$. We can then project the data onto $O(\log k)$ dimensions with a linear map $\Pi$. In low dimensions, we can compute a smaller coreset $C'$ of $\Pi C$ of size $\mathrm{poly}(k,\log n)$ in linear time, and then solve $k$-means on $C'$ in time $\mathrm{poly}(k,\log n)$. This also implies an $O(1)$-approximation for $\Pi C$.
Next, we can map every point in $\Pi C$ to its closest center in $d' = O(\log k)$ dimensions, to form an explicit clustering. This takes $O(|C|\cdot k\cdot d') = \mathrm{poly}(k,\log n)\cdot d$ time. By [57], every $k$-clustering has its $k$-means objective preserved up to a $\Theta(1)$ factor when projected by $\Pi$, which means the same clustering should still be an $O(1)$-approximation in the original space. We can compute the mean of each cluster in linear time, so in the original $d$-dimensional space, we can find an $O(1)$-approximate $k$-means clustering in time $\tilde{O}(nd) + \mathrm{poly}(k)\cdot d$, as desired. Finally, in the $k$-median case, [57] is still applicable, and we can compute an approximate 1-median of each cluster in near-linear time as well [23]. + 

Parallel computation. In the following, we briefly discuss how to implement our algorithm in the massively parallel computation (MPC) model [51, 11] when each machine has $(kd\log(n)\cdot 1 / \varepsilon \cdot \log 1 / \delta \cdot \log(\Lambda / \rho))^C$ memory for a sufficiently large constant $C > 0$. Before we state our implementation, let us briefly describe the MPC model. In the MPC model, there are $M$ machines, each with $H$ local memory, where $H$ is sublinear in the input size and $H = M^\gamma$ for an arbitrary constant $\gamma > 0$. At the beginning of the computation, the input is arbitrarily distributed among the machines. The computation proceeds in rounds. In each round, each machine performs some local computation. Then, at the end of the round, each machine sends/receives messages to/from other machines. However, the messages sent/received by a machine in a round cannot exceed its local memory $H$. At the end of the algorithm, the output should be stored across the machines in a distributed fashion. The goal is to design an algorithm with a small number of rounds. In the following, we show how to implement our algorithm in the MPC model using $O(1)$ rounds.
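As a concrete illustration of the noisy-count primitive that Algorithm 2 applies at each quadtree level, the following is a minimal, hypothetical Python sketch: it buckets points into grid cells of a given side length and perturbs each count. Here Laplace noise stands in for simplicity; the implementation described in Appendix E uses a Gaussian thresholding mechanism, and the function and parameter names below are ours.

```python
import numpy as np

def noisy_cell_counts(points, side, eps, rng):
    """Bucket each point into the axis-aligned grid cell of the given side
    length, count points per cell, and add Laplace(1/eps) noise to each
    count. A full DP accounting would calibrate the noise scale to the
    count sensitivity and the per-level budget split."""
    counts = {}
    for x in np.asarray(points, dtype=float):
        key = tuple(np.floor(x / side).astype(int))
        counts[key] = counts.get(key, 0) + 1
    return {key: c + rng.laplace(scale=1.0 / eps) for key, c in counts.items()}
```

Running the same routine with side lengths $8/2^{\ell}$ for levels $\ell = 0, \dots, L_2$ recovers the grid hierarchy induced by the shifted quadtree; the $k$ heaviest cells per level are then read off the noisy counts.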
+ 

Consider Algorithm 2. Computation of $\{\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_n\}$ requires only local computations. Then, we can run the $REP$ repetitions and each level $\ell \in [0, L_2]$ of the loop in Algorithm 2 in parallel. Each instance requires only counting and taking maxima, which can easily be done in the MPC model in $O(1)$ rounds [45]. Since a single machine has large enough local memory, we are able to send the entire $F$ to a single machine. The total space required here is $O(REP \cdot L_2 \cdot n \cdot d)$. + 

Next, let us consider the implementation of Algorithm 1. Since each machine has enough local memory, we can have each machine hold a copy of $F$. This broadcasting process can be done in $O(1)$ rounds (see, e.g., [5]). Once each machine holds $F$, only local computation is required to determine whether a point $\tilde{x}_i$ should be in $\tilde{X}_0$. For points that do not belong to $\tilde{X}_0$, we can determine which $\hat{X}_j$ they belong to by local computations only. For each $\hat{X}_j$, we run the MPC DP coreset algorithm of [25] to get a semi-coreset. This step also takes $O(1)$ rounds. Finally, we can run any non-private MPC $k$-means algorithm (e.g., [37]) on $\tilde{X}_0 \cup \tilde{X}_1 \cup \cdots \cup \tilde{X}_{|F|}$, which takes $O(1)$ rounds. + 

# D A simple lower bound for dist-DP clustering + 

In this section, we prove the following simple proposition, showing that an additive dependence on $k \cdot \rho$ (resp., $k \cdot \rho^2$) is necessary for $k$-median (resp., $k$-means), as long as the dimension is $d = \Omega(\log k)$. + 

Proposition D.1. Let $X_0 = \{x_1, \ldots, x_{2k}\}$ be points in the ball of radius $\rho$ around the origin, separated by at least $\frac{\rho}{10}$ (for $d = \Omega(\log k)$, this is doable).
Suppose $X \subset X_0$ is a random subset of size $k$, and there exists an $(\varepsilon, \delta, \rho)$-dist-DP algorithm that outputs $k$ centers $C$ on input $X$, where $\varepsilon, \delta \leq 0.1$. Then, the expected cost $\mathbb{E}[\mathrm{cost}(X; C)]$ is $\Omega(k \cdot \rho^p)$, where $p = 1$ for $k$-median and $p = 2$ for $k$-means. Yet, the optimum cost $\mathrm{OPT}_k(X)$ is 0. + 

Hence, any $(\varepsilon, \delta, \rho)$-dist-DP algorithm with finite multiplicative ratio must incur additive error $\Omega(k \cdot \rho^p)$. + 

Proof. First, note that $\mathrm{OPT}_k(X) = 0$ since $|X| = k$, so for $C^* = X$, $\mathrm{cost}(X,C^*) = 0$. We now show that $\mathbb{E}[\mathrm{cost}(X;C)] = \Omega (k\cdot \rho^p)$. + 

For each $i \leq 2k$, let $B_i$ be the ball of radius $\frac{\rho}{100}$ around $x_i$. Let $p_i$ be the probability over $X$ and the randomness of the private algorithm that some point in $C$ is in $B_i$. Let $p_i^+$ be the probability of the same event conditioned on $x_i \in X$, and $p_i^-$ be the same probability conditioned on $x_i \notin X$. + 

First, note that $x_{i} \in X$ with probability $1/2$, since $|X| = \frac{1}{2} \cdot |X_{0}|$. So, $p_{i} = \frac{1}{2} (p_{i}^{-} + p_{i}^{+})$. Next, there exists a simple coupling between the events $x_{i} \in X$ and $x_{i} \notin X$ that changes at most one point. Namely, if $X$ contains $x_{i}$, add in a random point in $X_{0} \backslash X$, and then remove $x_{i}$, to get a new set $X'$. If the distribution of $X$ is uniform conditioned on $x_{i} \in X$, it is simple to see that the distribution of $X'$ is uniform conditioned on $x_{i} \notin X$. Therefore, $\mathbb{P}(C(X) \in B_{i}) = p_{i}^{+}$ and $\mathbb{P}(C(X') \in B_{i}) = p_{i}^{-}$.
+ 

Because we only changed one element $x_{i}$ and moved it a distance at most $\rho$, this means that $\mathbb{P}(C(X)\in B_i) = e^{\pm \varepsilon}\cdot \mathbb{P}(C(X')\in B_i)\pm \delta$, or equivalently, $p_i^+ = e^{\pm \varepsilon}\cdot p_i^- \pm \delta$. Since $\delta \leq \varepsilon \leq 0.1$, this means $|p_i^+ - p_i^-| \leq 0.3$. Also, since $p_i = \frac{1}{2} (p_i^- + p_i^+)$, this means $p_i^+ - p_i \leq 0.15$. + 

Now, since the points in $X_0$ are separated by $\frac{\rho}{10}$, the balls $B_i$ are disjoint. So, for any fixed $C$, at most $k$ of the events "some point in $C$ lies in $B_i$" can hold. Therefore, $\sum_{i=1}^{2k} p_i \leq k$, which means $\sum_{i=1}^{2k} p_i^+ \leq k + 0.15 \cdot 2k = 1.3k$. + 

Now, $\mathbb{E}[\mathrm{cost}(X,C)]$ is at least $\sum_{i = 1}^{2k}\left(\frac{\rho}{100}\right)^p\cdot \mathbb{P}(x_i\in X)\cdot (1 - p_i^+)$. This is because $\mathbb{P}(x_i\in X)\cdot (1 - p_i^+)$ represents the probability that $x_{i}\in X$ but no point in $C$ is within $\frac{\rho}{100}$ of $x_i$, so the point $x_{i}$ itself contributes at least $\left(\frac{\rho}{100}\right)^p$ to the cost. But this simply equals $\left(\frac{\rho}{100}\right)^p\cdot \frac{1}{2}\left(2k - \sum_{i = 1}^{2k}p_i^+\right) \geq \Omega (\rho^p\cdot k)$, as desired. + 

# E Additional Details of Experiments + 

# E.1 Details of Implementations + 

More Implementation Details. Note that the privacy budget is consumed in three parts: (1) computing $\bar{X}$, (2) computing $\text{count}(g)$ for cells $g$ at level $l$ for $l \in [L_1 + 1, L_2]$, and (3) computing the DP semi-coreset $\tilde{X}_j$ in Algorithm 1. We split the privacy budget uniformly, i.e., each part uses $\varepsilon/3$ and $\delta/3$. + 

Detailed implementation of Algorithm 2. Since we know each $x$ is in $[-1,1]^d$, when we compute $\tilde{x}_i$, if any coordinate is outside $[-2,2]$, we project it to $[-2,2]$. We choose $REP = 5$.
The random shifted vector is chosen uniformly at random from $[0,4]^d$, and thus the cell in the highest level of the quadtree has side length 8. We choose $L_{1} = 5$ and $L_{2} = 10$. When we compute count(g) of a cell $g$ at level $l \in [L_1 + 1, L_2]$, we apply the Gaussian thresholding mechanism. + 

Detailed implementation of Algorithm 1. We set $S = \sqrt{2\log(1.25 / (\delta / 6))} \cdot \sqrt{d} / (\varepsilon / 6)$. We run the first loop of Algorithm 1 to obtain $\hat{X}_0$. We slightly modify the second loop as follows: $\hat{X}_j = \{x_i \in X \setminus \tilde{X}_0 \mid d(x_i, f_j) = d(x_i, F)\}$. We use the Gaussian thresholding mechanism to compute $\hat{n}_j$ to estimate $|\hat{X}_j|$. If $\hat{n}_j \leq 0$, we drop $\hat{X}_j$. Otherwise, we compute a semi-coreset for $\hat{X}_j$. It is easy to show that the above modified procedure is still DP. When we use the DP open-source library to compute the (semi-)coreset of $\hat{X}_j$, we specify that the bounding ball is centered at $f_j$ with radius $\min(S, \sqrt{d}) \cdot \rho$, i.e., the points in $\hat{X}_j$ that are outside the ball are projected to the ball. + 

Finally, we use the non-DP baseline $k$-means algorithm to run $k$-means over the union of the (semi-)coresets $\tilde{X}_1 \cup \dots \cup \tilde{X}_{\alpha \cdot k}$ and $\tilde{X}_0$. + 

# E.2 Preprocessing Steps of the Datasets + 

Dataset gowalla contains 6,442,890 user check-ins of 107,092 different users, and dataset brightkite contains 4,491,143 user check-ins of 51,406 different users. Each check-in record contains location information (latitude and longitude). For each user, we use its latest check-in record, and thus we obtain a dataset of size $107,092 \times 2$ for gowalla and a dataset of size $51,406 \times 2$ for brightkite. We divide each latitude by 90 and each longitude by 180. Thus, each coordinate of a user is in [-1,1].
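The rescaling just described can be sketched in a few lines (a minimal illustration of the stated preprocessing; the function name is ours):

```python
import numpy as np

def rescale_checkins(latlon):
    """Map (latitude, longitude) pairs into [-1, 1]^2 by dividing the
    latitude by 90 and the longitude by 180, as described above."""
    scaled = np.asarray(latlon, dtype=float) / np.array([90.0, 180.0])
    assert np.all(np.abs(scaled) <= 1.0), "valid coordinates land in [-1, 1]"
    return scaled
```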
+ +For other non-geographic datasets (shuttle, skin, rangequeries, s-sets), we follow the same preprocessing steps of experiments in [24]. In particular, we linearly rescale each dimension of each point to make the coordinate have value in [-1,1]. + +# F Broader Impacts + +Our work developed distance based private algorithms for clustering problems. Distance based privacy provides provable standards of privacy but its use, like that of any privacy protection, is subject to limitations (we refer to standard textbooks on differential privacy such as [32] for the subject). We also stress that privacy is only one of the requirements of a responsible machine learning system. For this reason, we encourage anyone using the techniques developed in this paper in a real system, to review carefully the overall safety of their design. \ No newline at end of file diff --git a/kmeansclusteringwithdistancebasedprivacy/images.zip b/kmeansclusteringwithdistancebasedprivacy/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..69dcb1f2d9015f556249f16868a1d4fe4f8108db --- /dev/null +++ b/kmeansclusteringwithdistancebasedprivacy/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e741f78f134cba8eb75258c7fd1eef630fecf7fd8e6a748c644fdbb6613df124 +size 344721 diff --git a/kmeansclusteringwithdistancebasedprivacy/layout.json b/kmeansclusteringwithdistancebasedprivacy/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6ff8d072174bfe38e2a712c0babeef2d677af1e0 --- /dev/null +++ b/kmeansclusteringwithdistancebasedprivacy/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccbcf31799e45becf31f7f50fcb26b6a97987bad538de0d05a2002445d5a9ae9 +size 1624393 diff --git a/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/8307e485-ad0b-4962-922d-f696d9a6b9c3_content_list.json 
b/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/8307e485-ad0b-4962-922d-f696d9a6b9c3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3b679ad8c7f441dbf5607a18946222663d48d042 --- /dev/null +++ b/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/8307e485-ad0b-4962-922d-f696d9a6b9c3_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06ee73d4c59e77f722edd151d1566d4601b3fd84c65b423933a78323e96ac159 +size 90320 diff --git a/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/8307e485-ad0b-4962-922d-f696d9a6b9c3_model.json b/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/8307e485-ad0b-4962-922d-f696d9a6b9c3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..db33ed6df50a1cdcb26971ea698a7edf6425b290 --- /dev/null +++ b/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/8307e485-ad0b-4962-922d-f696d9a6b9c3_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e4812d28b27ae926610e78c2cc167368be3ba93bd4831637a0a9f656df5240a +size 108585 diff --git a/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/8307e485-ad0b-4962-922d-f696d9a6b9c3_origin.pdf b/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/8307e485-ad0b-4962-922d-f696d9a6b9c3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..22bff8040a7f66e31ed897ab4883ee48fc622c16 --- /dev/null +++ b/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/8307e485-ad0b-4962-922d-f696d9a6b9c3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:2e1dfe220df67cb1ad6295f8f5529256b15cf4dff1c33674eabb4e24ed63c44e +size 414971 diff --git a/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/full.md b/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1e93f3bea979bdaa3ace391e43c9f7161619da47 --- /dev/null +++ b/l2uniformstabilityofrandomizedlearningalgorithmssharpergeneralizationboundsandconfidenceboosting/full.md @@ -0,0 +1,439 @@ +# $L_{2}$ -Uniform Stability of Randomized Learning Algorithms: Sharper Generalization Bounds and Confidence Boosting + 

Xiao-Tong Yuan + 

School of Intelligence Science and Technology + 

Nanjing University + 

Suzhou, 215163, China + 

xtyuan1980@gmail.com + 

Ping Li + 

VecML Inc., www.vecml.com + 

Bellevue, WA 98004, USA + 

pingli98@gmail.com + 

# Abstract + 

Exponential generalization bounds with near-optimal rates have recently been established for uniformly stable algorithms (Feldman and Vondrák, 2019; Bousquet et al., 2020). We seek to extend these best-known high-probability bounds from deterministic learning algorithms to the regime of randomized learning. One simple approach for achieving this goal is to define the stability for the expectation over the algorithm's randomness, which may result in a sharper stability parameter but only leads to guarantees regarding the on-average generalization error. Another natural option is to consider the stability conditioned on the algorithm's randomness, which is far more stringent but may lead to generalization with high probability jointly over the randomness of sample and algorithm. The present paper addresses such a tension between these two alternatives and makes progress towards relaxing it within a classic confidence-boosting framework.
To this end, we first introduce a novel concept of $L_{2}$-uniform stability that holds uniformly over data but only in second moment over the algorithm's randomness. Then, as a core contribution of this work, we prove a strong exponential bound on the first moment of the generalization error under the notion of $L_{2}$-uniform stability. As an interesting consequence of the bound, we show that a bagging-based meta algorithm leads to near-optimal generalization with high probability jointly over the randomness of data and algorithm. We further specialize these generic results to stochastic gradient descent (SGD) to derive sharper exponential bounds for convex or non-convex optimization with natural time-decaying learning rates, which have not been possible to prove with the existing stability-based generalization guarantees. + 

# 1 Introduction + 

In many statistical learning problems, we are interested in designing a randomized algorithm $A: \mathcal{Z}^N \times \mathcal{R} \mapsto \mathcal{W}$ that maps a training data sample $S = \{Z_i\}_{i \in [N]} \in \mathcal{Z}^N$ with an algorithm's random parameter $\xi \in \mathcal{R}$ to a model $A(S, \xi) \in \mathcal{W}$. Here $\mathcal{Z}$ and $\mathcal{R}$ are some measurable sets, and $\mathcal{W}$ is a closed subset of a Euclidean space. The ultimate goal is to find a suitable algorithm such that the following population risk evaluated at the model should be as small as possible: + 

$$ +R (A (S, \xi)) := \mathbb {E} _ {Z} [ \ell (A (S, \xi); Z) ], +$$ + 

where $Z \in \mathcal{Z}$ and $\ell : \mathcal{W} \times \mathcal{Z} \mapsto \mathbb{R}^+$ is a non-negative bounded loss function whose value $\ell(w; z)$ measures the loss evaluated at $z$ with parameter $w$. It is generally the case that the underlying data distribution is unknown, and in this case the data points $Z_i$ are usually assumed to be independent.
+ +Then, a natural alternative measurement that mimics the computationally intractable population risk is the empirical risk given by + +$$ +R _ {S} (A (S, \xi)) := \mathbb {E} _ {Z \sim \mathrm {U n i f} (S)} [ \ell (A (S, \xi); Z) ] = \frac {1}{N} \sum_ {i = 1} ^ {N} \ell (A (S, \xi); Z _ {i}). +$$ + +The bound on the difference between the population and empirical risks is of central interest in understanding the generalization performance of a learning algorithm. In particular, we hope to derive a suitable law of large numbers, i.e., a sample size vanishing rate $b_{N}$ such that the generalization bound $|R_{S}(A(S,\xi)) - R(A(S,\xi))| \lesssim b_{N}$ holds with high probability over the randomness of $S$ and hopefully the randomness of $\xi$ as well. Let $R^{*} \coloneqq \min_{w \in \mathcal{W}} R(w)$ be the optimal value of the population risk. Conditioned on $S$ , suppose that $A(S,\xi)$ is an almost minimizer of the empirical risk $R_{S}$ such that $R_{S}(A(S,\xi)) - \min_{w \in \mathcal{W}} R_{S}(w) \leq \varepsilon$ , then the generalization bound immediately implies an excess risk bound $R(A(S,\xi)) - R^{*} \lesssim b_{N} + \frac{1}{\sqrt{N}} + \varepsilon$ based on the standard risk decomposition and Hoeffding's inequality. Therefore, generalization guarantees also play a crucial role in understanding the stochastic optimization performance of a learning algorithm. + +A powerful proxy for analyzing the generalization bounds is the stability of learning algorithms to changes in the training dataset. Since the seminal work of Bousquet and Elisseeff (2002), stability has been extensively demonstrated to beget dimension-independent generalization bounds for deterministic learning algorithms (Mukherjee et al., 2006; Shalev-Shwartz et al., 2010), as well as for randomized learning algorithms such as bagging and SGD (Elisseeff et al., 2005; Hardt et al., 2016). 
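To make these quantities concrete, here is a small, self-contained numerical sketch (an illustrative setup of our own, with squared loss and a known Gaussian data distribution) comparing the empirical risk $R_S$ with a Monte Carlo estimate of the population risk $R$:

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(w, z):
    # squared loss ell(w; z) = (w - z)^2 (bounded in practice; unclipped here)
    return (w - z) ** 2

N = 200
S = rng.normal(0.0, 1.0, size=N)   # training sample Z_1, ..., Z_N
w_hat = S.mean()                   # empirical risk minimizer for squared loss

R_S = loss(w_hat, S).mean()        # empirical risk R_S(w_hat)
# Monte Carlo estimate of the population risk R(w_hat) on fresh samples
R = loss(w_hat, rng.normal(0.0, 1.0, size=200_000)).mean()
gap = abs(R - R_S)                 # generalization gap; O(1/sqrt(N)) here
```

In this toy setting $R_S$ is the sample variance and $R \approx 1 + \hat{w}^2$, so the gap indeed shrinks at the $1/\sqrt{N}$ rate the law-of-large-numbers heuristic suggests.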
So far, the best known results about generalization bounds are offered by approaches based on the notion of uniform stability (Feldman and Vondrák, 2018, 2019; Bousquet et al., 2020; Klochkov and Zhivotovskiy, 2021), which is independent of the underlying data distribution. For randomized algorithms, the definition of uniform stability can be extended in two natural ways by respectively considering 1) the stability averaged over the algorithm's randomness (Hardt et al., 2016) and 2) the stability conditioned on the algorithm's randomness (Feldman and Vondrák, 2019). The former is simpler to show but typically leads to on-average generalization bounds, while the latter is relatively more stringent but may yield deviation bounds given that the conditional stability holds with high probability over the algorithm's randomness. Between these two extreme cases, however, the generalization behavior of randomized learning algorithms still remains largely underexplored. + 

To address the above-mentioned theoretical gap between the current lines of results, we explore the opportunities of deriving exponential generalization bounds for randomized learning algorithms beyond the notions of on-average stability and conditional stability. A concrete working example of our study is the widely used stochastic gradient descent (SGD) algorithm that carries out the following recursion for all $t \geq 1$ with learning rate $\eta_t > 0$: + 

$$ +w _ {t} := \Pi_ {\mathcal {W}} \left(w _ {t - 1} - \eta_ {t} \nabla_ {w} \ell \left(w _ {t - 1}; Z _ {i _ {t}}\right)\right), \tag {1} +$$ + 

where $i_t \in [N]$ is a random index of the data, sampled with or without replacement, and $\Pi_{\mathcal{W}}$ is the Euclidean projection operator associated with $\mathcal{W}$.
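The recursion (1) is easy to state in code. Below is a minimal Python sketch (our own illustrative implementation, not from the paper), with $\mathcal{W}$ taken to be a Euclidean ball so that $\Pi_{\mathcal{W}}$ has a closed form, uniform with-replacement sampling of $i_t$, and the learning-rate schedule passed in as a function:

```python
import numpy as np

def project_ball(w, radius):
    """Closed-form Euclidean projection onto {w : ||w|| <= radius}."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def projected_sgd(S, grad, w0, lr, T, radius, rng):
    """Run recursion (1): w_t = Pi_W(w_{t-1} - eta_t * grad(w_{t-1}, Z_{i_t})),
    with i_t sampled uniformly with replacement and eta_t = lr(t)."""
    w = np.asarray(w0, dtype=float)
    N = len(S)
    for t in range(1, T + 1):
        i_t = rng.integers(N)  # random data index i_t
        w = project_ball(w - lr(t) * grad(w, S[i_t]), radius)
    return w
```

For example, with the squared loss $\ell(w; z) = (w - z)^2$, passing `grad = lambda w, z: 2 * (w - z)` and `lr = lambda t: 0.5 / np.sqrt(t)` gives the time-decaying $\eta_t = \mathcal{O}(1/\sqrt{t})$ schedule discussed later.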
The in-expectation generalization of SGD has been studied under on-average stability (Hardt et al., 2016; Zhou et al., 2022; Lei and Ying, 2020), while exponential bounds have recently been established given that the stability holds with high probability over the sampling path of SGD (Feldman and Vondrák, 2019; Bassily et al., 2020). + 

# 1.1 Prior results + 

Let us start by briefly reviewing some state-of-the-art exponential generalization bounds under the notion of uniform stability and its randomized variants. We write $S \doteq \tilde{S}$ when a pair of data sets $S$ and $\tilde{S}$ differ in a single element. A randomized learning algorithm $A$ is said to have on-average $\gamma_{N}$-uniform stability (Elisseeff et al., 2005) if it satisfies the following uniform bound: + 

$$ +\sup _ {S \doteq \tilde {S}, Z \in \mathcal {Z}} \left| \mathbb {E} _ {\xi} \left[ \ell (A (S, \xi); Z) - \ell (A (\tilde {S}, \xi); Z) \right] \right| \leq \gamma_ {N}. \tag {2} +$$ + 

This definition is equivalent to the concept of uniform stability defined for the expected loss $\mathbb{E}_{\xi}[\ell (A(S,\xi);Z)]$. Suppose that the loss function is bounded in the interval $[0,M]$. Then, essentially, it has been shown in Feldman and Vondrák (2019) that for any $\delta \in (0,1)$, with probability at least $1 - \delta$ over $S$, the on-average generalization error is upper bounded by + 

$$ +\left| \mathbb {E} _ {\xi} \left[ R (A (S, \xi)) - R _ {S} (A (S, \xi)) \right] \right| \lesssim \gamma_ {N} \log (N) \log \left(\frac {N}{\delta}\right) + M \sqrt {\frac {\log (1 / \delta)}{N}}. \tag {3} +$$ + 

Bousquet et al. (2020) later derived a slightly improved exponential bound that implies + 

$$ +\left| \mathbb {E} _ {\xi} \left[ R (A (S, \xi)) - R _ {S} (A (S, \xi)) \right] \right| \lesssim \gamma_ {N} \log (N) \log \left(\frac {1}{\delta}\right) + M \sqrt {\frac {\log (1 / \delta)}{N}}.
\tag {4} +$$ + 

These bounds are near-tight (up to logarithmic factors) in the sense of an $\mathcal{O}\left(\gamma_N\log \left(\frac{1}{\delta}\right) + \sqrt{\frac{\log(1 / \delta)}{N}}\right)$ lower deviation bound on sums of random functions with $\gamma_{N}$-uniform stability (Bousquet et al., 2020, Proposition 9). Concerning the excess risk bound, Klochkov and Zhivotovskiy (2021) essentially derived the following result using the sample-splitting techniques of Bousquet et al. (2020): + 

$$ +\mathbb {E} _ {\xi} \left[ R (A (S, \xi)) \right] - R ^ {*} \lesssim \Delta_ {\mathrm {o p t}} + \mathbb {E} \left[ \Delta_ {\mathrm {o p t}} \right] + \gamma_ {N} \log (N) \log \left(\frac {1}{\delta}\right) + \frac {(M + B) \log (1 / \delta)}{N}, \tag {5} +$$ + 

where $\Delta_{\mathrm{opt}}\coloneqq \mathbb{E}_{\xi}\left[R_S(A(S,\xi))\right] - \min_{w\in \mathcal{W}}R_S(w)$ represents the in-expectation empirical risk suboptimality, and $B$ is the constant of the generalized Bernstein condition (Koltchinskii, 2006). While sharp in the dependence on sample size, one common limitation of the above uniform-stability-implied generalization and risk bounds is that these high-probability results only hold in expectation with respect to $\xi$, the internal randomness of the algorithm. + 

Alternatively, consider that $A$ has $\gamma_N$-uniform stability with probability at least $1 - \delta'$ for some $\delta' \in (0, 1)$ over the random draw of $\xi$, i.e., + 

$$ +\mathbb {P} \left\{\sup _ {S \doteq \tilde {S}, Z \in \mathcal {Z}} | \ell (A (S, \xi); Z) - \ell (A (\tilde {S}, \xi); Z) | \leq \gamma_ {N} \right\} \geq 1 - \delta^ {\prime}. \tag {6} +$$ + 

Suppose that the randomness of $A$ is independent of the training set $S$. Then the bound of Bousquet et al.
(2020) naturally implies that with probability at least $1 - \delta - \delta'$ over $S$ and $\xi$, + 

$$ +\left| R (A (S, \xi)) - R _ {S} (A (S, \xi)) \right| \lesssim \gamma_ {N} \log (N) \log \left(\frac {1}{\delta}\right) + M \sqrt {\frac {\log (1 / \delta)}{N}}. \tag {7} +$$ + 

This is by far the best known generalization bound of randomized stable algorithms that holds with high probability jointly over the randomness of data and algorithm. The result, however, relies heavily on the high-probability uniform stability as expressed in (6). For the SGD recursion (1) with fixed learning rate $\eta_t \equiv \eta$, it is possible to show that $\gamma_N \lesssim \eta \sqrt{T} + \frac{\eta T}{N}$ and $\delta' = N \exp(-\frac{N}{2})$ in (6) (Bassily et al., 2020). For SGD with time-decaying learning rates, which have been widely studied in theory (Harvey et al., 2019; Rakhlin et al., 2012) and applied in practice for training popular deep nets such as ResNet and DenseNet (Bengio et al., 2017), it is not clear whether the condition in (6) is still valid for $\gamma_N$ and $\delta'$ of interest. Madden et al. (2020) have indeed established a high-probability uniform stability bound for minibatch SGD with learning rates $\eta_t \lesssim \frac{1}{Nt}$. However, such a fairly conservative choice of learning rates tends to impair the empirical minimization performance of SGD and thus is of limited interest from the perspective of risk minimization. + 

More specifically, for randomized learning methods such as bagging (Breiman, 1996) and SGD, the randomness of the algorithm can be precisely characterized by a vector of i.i.d. parameters $\xi = \{i_1,\dots,i_t\}$ which are independent of the data $S$. In such cases, assume additionally that $A(S,\xi)$ has uniform stability with respect to $\xi$ conditioned on $S$, i.e., $\sup_{\xi \doteq \tilde{\xi}}|\ell (A(S,\xi)) - \ell (A(S,\tilde{\xi}))|\leq \rho_T$. Then the following exponential bound has been derived by Elisseeff et al.
(2005): + 

$$ +\left| R (A (S)) - R _ {S} (A (S)) \right| \lesssim \gamma_ {N} + \left(\frac {1 + N \gamma_ {N}}{\sqrt {N}} + \sqrt {T} \rho_ {T}\right) \sqrt {\log \left(\frac {1}{\delta}\right)}. \tag {8} +$$ + 

Provided that $\gamma_N \lesssim \frac{1}{N}$ and $\rho_T \lesssim \frac{1}{T}$, the above result shows that the generalization bound scales as $\mathcal{O}\left(\frac{1}{\sqrt{N}} + \frac{1}{\sqrt{T}}\right)$ with high probability. However, the rate of the above bound is sub-optimal and gives no convergence guarantee if $\gamma_N \gtrsim \frac{1}{\sqrt{N}}$ and/or $\rho_T \gtrsim \frac{1}{\sqrt{T}}$. As an example, for non-convex SGD with learning rate $\eta_t = O\left(\frac{1}{t}\right)$, it can be shown that $\gamma_N \lesssim \frac{\sqrt{T}}{N}$ and $\rho_T$ scales as large as $\mathcal{O}(1)$. + 

Open problem. So far, it still remains open whether the exponential generalization bounds for deterministic uniformly stable algorithms can be extended to randomized learning algorithms under variants of uniform stability tighter than the on-average version (2) but less restrictive than the high-probability version (6). Particularly, we are interested in the following notion of $L_{2}$-uniform stability (as formally introduced in Definition 1) with parameter $\gamma_{\mathrm{L}_2,N}$: + 

$$ +\sup _ {S \doteq \tilde {S}, Z \in \mathcal {Z}} \mathbb {E} _ {\xi} \left[ \left(\ell (A (S, \xi); Z) - \ell (A (\tilde {S}, \xi); Z)\right) ^ {2} \right] \leq \gamma_ {\mathrm {L} _ {2}, N} ^ {2}, \tag {9} +$$ + 

which represents a second-moment variant of uniform stability for randomized learning algorithms. For example, as we will show in Section 4, SGD with practical time-decaying learning rates has $L_{2}$-uniform stability with favorable parameters. The main goal of the present work is to derive sharper exponential generalization bounds for randomized learning algorithms under the notion of $L_{2}$-uniform stability.
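Quantity (9) also suggests a simple empirical diagnostic. The sketch below (our own illustrative code, not part of the paper) Monte Carlo estimates the inner expectation over $\xi$ for one fixed configuration of $S$, the replaced index $i$, and the test point $z$; the actual parameter $\gamma_{\mathrm{L}_2,N}$ is a supremum over all such configurations, so this only probes a lower estimate.

```python
import numpy as np

def l2_stability_probe(A, loss, S, i, z_new, z, n_xi, rng):
    """Estimate E_xi[(loss(A(S, xi); z) - loss(A(S^(i), xi); z))^2] by
    sampling n_xi realizations of the algorithm's randomness xi, shared
    between the two runs on the neighboring data sets S and S^(i)."""
    S_i = list(S)
    S_i[i] = z_new                       # neighboring data set S^(i)
    diffs = []
    for _ in range(n_xi):
        seed = int(rng.integers(2**31))  # one draw of xi, reused for both runs
        w = A(S, np.random.default_rng(seed))
        w_i = A(S_i, np.random.default_rng(seed))
        diffs.append((loss(w, z) - loss(w_i, z)) ** 2)
    return float(np.mean(diffs))
```

Sharing the seed between the two runs implements the convention, stated after Definition 1, that $A(S, \xi)$ and $A(S^{(i)}, \xi)$ use the same internal random bit $\xi$.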
+ 

# 1.2 Overview of our contribution + 

The fundamental contribution of this work is a near-optimal first-moment generalization error bound for $L_{2}$-uniformly stable algorithms, which is summarized in Theorem 1 and highlighted below: + 

$$ +\mathbb {E} _ {\xi} \left[ | R (A (S, \xi)) - R _ {S} (A (S, \xi)) | \right] \lesssim \gamma_ {\mathrm {L} _ {2}, N} \log (N) \log \left(\frac {1}{\delta}\right) + M \sqrt {\frac {\log (1 / \delta)}{N}}. +$$ + 

While our first-moment bound above has an identical convergence rate to that of the on-average bound in (4), the former is stronger in the sense that the expectation is taken outside the generalization gap and thus implies the latter, where the expectation is taken inside. The key ingredients of our analysis are a set of fine-grained concentration inequalities for randomized functions (Proposition 1) and sums of randomized functions (Proposition 2), which respectively generalize the classic bounded-difference inequalities and a prior result of Bousquet et al. (2020) under the considered $L_{2}$-uniform bounded-difference conditions. These generalized concentration inequalities and their proof arguments are novel to our knowledge and should be of independent interest in analyzing randomized functions. + 

As an important consequence of our main result, we reveal that a bagging-based meta procedure (see Algorithm 1) can be used to boost the confidence of generalization for $L_{2}$-uniformly stable algorithms. More specifically, in the presented bagging procedure we independently run a randomized algorithm $A$ a total of $K$ times over a fraction of the training set to obtain $K$ solutions. Then we evaluate the validation error of these candidate solutions over a holdout training subset, and output the solution that has the smallest training-validation gap.
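The meta-procedure just described can be sketched as follows (a minimal, hypothetical rendering of the idea behind Algorithm 1; the exact subsample sizes, splits, and selection rule in the paper may differ):

```python
import numpy as np

def boost_confidence(A, loss, S_train, S_val, K, rng):
    """Run the randomized learner A() K times, each on a random half of
    S_train, and return the candidate whose training loss is closest to
    its loss on the holdout set S_val (smallest training-validation gap)."""
    best, best_gap = None, float("inf")
    n = len(S_train)
    for _ in range(K):
        sub = [S_train[j] for j in rng.choice(n, size=n // 2, replace=False)]
        w = A(sub, rng)
        gap = abs(loss(w, sub) - loss(w, S_val))
        if gap < best_gap:
            best, best_gap = w, gap
    return best
```

With $K$ independent runs, the chance that every candidate exhibits an atypically large gap decays exponentially in $K$, which is the confidence-boosting effect behind choosing $K \asymp \log(1/\delta)$.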
Our result in Theorem 2 shows that for any confidence level $\delta \in (0,1)$, setting $K \asymp \log \left(\frac{1}{\delta}\right)$ yields a near-optimal generalization bound for the selected solution that holds with high probability jointly over the randomness of data and algorithm. + 

We have specialized our results to SGD with smooth (Corollary 1) or non-smooth (Corollary 2) convex losses, as well as smooth non-convex losses (Corollary 3). For instance, our result in Corollary 1 shows that when applied to SGD with a smooth convex loss and learning rates $\eta_t = \mathcal{O}\left(\frac{1}{\sqrt{t}}\right)$, the generalization bound of the output of Algorithm 1 may scale as $\mathcal{O}\left(\log(N)\log\left(\frac{1}{\delta}\right)\sqrt{\frac{\log(T)}{N} + \frac{\sqrt{T}}{N}}\right)$. + 

Compared with the $\mathcal{O}\left(\frac{\sqrt{T}}{N}\right)$ in-expectation bound of smooth convex SGD (Hardt et al., 2016), our bound above for the boosted SGD is comparable in convergence rate while holding with high probability jointly over the randomness of data and sampling path. + 

# 2 $L_{2}$ -Uniform Stability and Generalization + 

# 2.1 Notation and definitions + 

Let us introduce some notation to be used in our analysis. We abbreviate $[N] := \{1, \dots, N\}$. Recall that $S = \{Z_i\}_{i \in [N]}$ is a set of i.i.d. training data. Denote by $S' = \{Z_i'\}_{i \in [N]}$ an independent copy of $S$, and write $S^{(i)} = \{Z_1, \dots, Z_{i-1}, Z_i', Z_{i+1}, \dots, Z_N\}$. For a real-valued random variable $Y$, its $L_q$-norm for $q \geq 1$ is given by $\|Y\|_q = (\mathbb{E}[|Y|^q])^{1/q}$. By definition it can be verified that $\forall q \geq 2$, + 

$$ +\left\| Y \right\| _ {q} ^ {2} = \left(\mathbb {E} [ | Y | ^ {q} ]\right) ^ {2 / q} = \left(\mathbb {E} [ | Y ^ {2} | ^ {q / 2} ]\right) ^ {2 / q} = \left\| Y ^ {2} \right\| _ {q / 2}.
\tag {10}
$$

Let $h: \mathcal{Z}^N \mapsto \mathbb{R}$ be some measurable function and consider the random variable $h(S) = h(Z_1, \ldots, Z_N)$ . For $h(S)$ and any index set $I \subseteq [N]$ , we define the following abbreviations:

$$
h\left(S_I\right) := \mathbb{E}\left[h(S) \mid S_I\right], \quad \|h\|_q\left(S_I\right) := \left(\mathbb{E}\left[|h(S)|^q \mid S_I\right]\right)^{1/q}.
$$

We say a function $f$ is $G$ -Lipschitz continuous over $\mathcal{W}$ if $|f(w) - f(\tilde{w})| \leq G\|w - \tilde{w}\|$ for all $w, \tilde{w} \in \mathcal{W}$ , and $L$ -smooth if $\|\nabla f(w) - \nabla f(\tilde{w})\| \leq L\|w - \tilde{w}\|$ . For a pair of functions $f, f' \geq 0$ , we write $f \lesssim f'$ (or $f' \gtrsim f$ ) to denote $f \leq c f'$ for some universal constant $c > 0$ .

In the following definition, we formally introduce the concept of $L_{2}$ -uniform stability for randomized learning algorithms, which is the notion investigated in this work.

Definition 1 ( $L_{2}$ -Uniform stability of randomized learning algorithms). A randomized learning algorithm $A: \mathcal{Z}^{N} \times \mathcal{R} \mapsto \mathcal{W}$ is said to have $L_{2}$ -uniform stability with parameter $\gamma_{L_{2}, N} \geq 0$ if

$$
\sup_{S, Z_i', Z} \mathbb{E}_{\xi}\left[\left(\ell(A(S,\xi); Z) - \ell(A(S^{(i)},\xi); Z)\right)^2\right] \leq \gamma_{L_2, N}^2.
$$

Remark 1. By definition, the $L_{2}$ -uniform stability has a second-moment dependence on the internal randomness of the algorithm conditioned on data, while it is invariant to the data distribution. This justifies viewing it as a mixed notion of algorithmic stability.

Remark 2. On the one hand, by Jensen's inequality the $L_{2}$ -uniform stability implies the on-average uniform stability defined in (2).
On the other hand, the second-order form of $L_{2}$ -uniform stability is by definition weaker than the high-probability uniform stability in (6). If the algorithm's randomness $\xi$ can be expressed as a set of i.i.d. random bits, then the $L_{2}$ -uniform stability is also weaker than the conditional uniform stability conditioned on the data $S$ (Elisseeff et al., 2005).

Throughout this paper, we assume for simplicity that the output models $A(S^{(i)},\xi)$ and $A(S,\xi)$ share the same internal random bit $\xi$ , which is independent of the data. With similar analysis techniques, it is indeed possible to extend Definition 1 and our main results to the general setting where the randomness of the algorithm is allowed to depend on the data, such as in posterior sampling for Bayesian learning.

# 2.2 Concentration inequalities for randomized functions

We begin by establishing in the following result a group of first- and second-order concentration inequalities (in moments) for randomized functions of independent random variables.

Proposition 1. Let $S = \{Z_1, Z_2, \dots, Z_N\}$ be a set of independent random variables valued in $\mathcal{Z}$ and $\xi$ be a random variable valued in $\mathcal{R}$ . Let $g: \mathcal{Z}^N \times \mathcal{R} \mapsto \mathbb{R}$ be a measurable function that satisfies the following $L_2$ -bounded-difference condition:

$$
\sup_{S, Z_i'} \mathbb{E}_{\xi}\left[\left(g(S,\xi) - g(S^{(i)},\xi)\right)^2\right] \leq \beta^2.
$$

Then for any $q \geq 2$ ,

$$
\left\|\mathbb{E}_{\xi}\left[\left|g(S,\xi) - \mathbb{E}_S[g(S,\xi)]\right|\right]\right\|_q \leq 3\beta\sqrt{Nq}, \tag{11}
$$

and

$$
\left\|\mathbb{E}_{\xi}\left[(g(S,\xi) - \mathbb{E}_S[g(S,\xi)])^2\right]\right\|_q \leq 68 N \beta^2 q. \tag{12}
$$

Proof in sketch. Let us consider $h(S) \coloneqq \mathbb{E}_{\xi}\left[|g(S,\xi) - \mathbb{E}_S[g(S,\xi)]|\right]$ .
The given $L_2$ -bounded-difference condition implies that $h(S)$ has the bounded-difference property. Then the desired first-order bound in (11) can be obtained by respectively invoking a moment Efron-Stein inequality (Boucheron et al., 2005, Theorem 2) to upper bound $\|h(S) - \mathbb{E}[h(S)]\|_q$ and a slightly modified Efron-Stein inequality to bound the mean $\mathbb{E}[h(S)]$ . To prove the second-order concentration bound, we consider the function $h'(S) \coloneqq \mathbb{E}_{\xi}\left[(g(S,\xi) - \mathbb{E}_S[g(S,\xi)])^2\right]$ , which can be shown to be weakly self-bounding (see Definition 2) under the $L_2$ -bounded-difference condition. Then the desired bound (12) can be derived by applying the upper tail bound of Boucheron et al. (2005, Theorem 6.19) and the lower tail bound of Klochkov and Zhivotovsky (2021, Proposition 3.1) for weakly self-bounding functions. See Appendix A.2 for a detailed proof of this result.

The moment bound in (11) extends McDiarmid's bounded-difference inequality (McDiarmid, 1989) to randomized functions with the $L_{2}$ -bounded-difference property. The second-order concentration bound in (12) is crucial for proving the moment bound of sums in Proposition 2, as it can be used to sharply control some second-order components involved in the arguments. These generic inequalities are expected to be of independent interest for understanding the first-/second-order concentration behavior of randomized functions.

# 2.3 A moment inequality for sums of randomized functions

As a key intermediate result, we further establish in the following proposition a moment concentration inequality for sums of randomized functions that satisfy the $L_{2}$ -bounded-difference condition. This result extends the moment bound for sums of functions (Bousquet et al., 2020, Theorem 4) to sums of randomized functions.

Proposition 2.
Let $S = \{Z_1, Z_2, \dots, Z_N\}$ be a set of independent random variables valued in $\mathcal{Z}$ and $\xi$ be a random variable valued in $\mathcal{R}$ . Let $g_1, \dots, g_N$ be a set of measurable functions $g_i : \mathcal{Z}^N \times \mathcal{R} \mapsto \mathbb{R}$ that satisfy the following conditions for any $i \in [N]$ :

- $\mathbb{E}[g_i(S,\xi) \mid S \setminus Z_i, \xi] = 0$ and $|\mathbb{E}[g_i(S,\xi) \mid Z_i, \xi]| \leq M$ , almost surely;
- $g_{i}(S,\xi)$ has the following $L_{2}$ -bounded-difference property with respect to all variables in $S$ except $Z_{i}$ , i.e., $\forall j \neq i$ ,

$$
\sup_{S, Z_j'} \mathbb{E}_{\xi}\left[\left(g_i(S,\xi) - g_i(S^{(j)},\xi)\right)^2\right] \leq \beta^2.
$$

Then for all $q \geq 2$ ,

$$
\left\|\mathbb{E}_{\xi}\left[\left|\sum_{i=1}^{N} g_i(S,\xi)\right|\right]\right\|_q \leq 3M\sqrt{3Nq} + 38 N \lceil \log_2 N \rceil \beta q.
$$

Proof in sketch. The main idea is inspired by the sample-splitting arguments of Feldman and Vondrak (2019); Bousquet et al. (2020), with some new ingredients developed to handle the first-moment operator taken over the internal randomness of the functions. Here we just highlight a fundamental difference, which arises from using a newly developed moment inequality (Lemma 9) for bounding the sums of conditionally independent randomized functions within each individual data split. Different from the version of Marcinkiewicz-Zygmund's inequality used in the original analysis of Bousquet et al. (2020), our new bound in Lemma 9 relies on some second-order (over the function's randomness) components which can be tightly bounded by the second-order concentration inequality in Proposition 1. A full proof is provided in Appendix A.3.

Remark 3. For sums of deterministic functions, our result in Proposition 2 reduces to the existing moment bound of Bousquet et al.
(2020, Theorem 4), which is known to be near-tight up to logarithmic factors. We comment in passing that the tightness analysis of Bousquet et al. (2020, Proposition 9) for deterministic functions can be extended to randomized functions in a more or less straightforward manner.

Remark 4. The bound of Proposition 2 would still be valid if the bounded-loss condition $|\mathbb{E}[g_i(S,\xi) \mid Z_i,\xi]| \leq M$ were relaxed to certain sub-Gaussian or sub-exponential stochastic versions.

# 2.4 Main result on generalization bound

As a consequence of Proposition 2, we can now establish our main result on the generalization bound of $L_{2}$ -uniformly stable randomized learning algorithms.

Theorem 1. Let $A: \mathcal{Z}^N \times \mathcal{R} \mapsto \mathcal{W}$ be a randomized learning algorithm that has $L_2$ -uniform stability with parameter $\gamma_{L_2, N}$ . Assume that the loss function $\ell$ is valued in $[0, M]$ . Then for any $\delta \in (0, 1)$ , the following bound holds with probability at least $1 - \delta$ over the draw of $S$ :

$$
\mathbb{E}_{\xi}\left[|R(A(S,\xi)) - R_S(A(S,\xi))|\right] \lesssim \gamma_{L_2, N}\log(N)\log\left(\frac{1}{\delta}\right) + M\sqrt{\frac{\log(1/\delta)}{N}}.
$$

Proof. See Appendix A.4 for a proof of this result.

Remark 5. The first-moment bound in Theorem 1 naturally implies the on-average bound in (4) with an identical rate of convergence, though the former is obtained under the relatively stronger notion of $L_{2}$ -uniform stability. As we will see shortly, the $L_{2}$ -uniform stability can indeed be fulfilled by the widely used SGD algorithm, and thus Theorem 1 is of practical importance for showcasing sharper generalization performance of SGD.
When $A$ is deterministic, our bound reduces to the near-optimal (up to logarithmic factors in the sample size and failure probability) generalization bound for uniformly stable algorithms (Bousquet et al., 2020).

# Algorithm 1: Confidence-Boosting for Randomized Learning Algorithms

Input: Randomized learning algorithm $A$ , data set $S = \{Z_i\}_{i \in [N]}$ , $\mu \in (0,1)$ and $K \in \mathbb{Z}^+$ .

Output: $A(S, \xi_{k^*})$ .

Uniformly divide $S$ into two disjoint subsets $S_{1}$ and $S_{2}$ with $|S_1| = (1 - \mu)N$ , $|S_{2}| = \mu N$ .

for $k = 1,2,\dots,K$ do

Compute $A(S_{1},\xi_{k})$ by running $A$ over the subset $S_{1}$ with random bit $\xi_{k}$ .

end

Select the index $k^{*}$ according to $k^{*} = \arg \min_{k\in [K]}|R_{S_2}(A(S_1,\xi_k)) - R_{S_1}(A(S_1,\xi_k))|$ .

In view of the standard risk decomposition, the following excess risk tail bound can be readily obtained by applying Theorem 1 and Hoeffding's inequality:

$$
\mathbb{E}_{\xi}\left[R(A(S,\xi)) - R^*\right] \lesssim \Delta_{\mathrm{opt}} + \gamma_{L_2, N}\log(N)\log\left(\frac{1}{\delta}\right) + M\sqrt{\frac{\log(1/\delta)}{N}}. \tag{13}
$$

Here recall that $\Delta_{\mathrm{opt}} \coloneqq \mathbb{E}_{\xi}[R_S(A(S,\xi))] - \min_{w \in \mathcal{W}} R_S(w)$ is the sub-optimality of empirical risk minimization. Since the excess risk is by definition non-negative, the above bound can also be obtained under the weaker notion of on-average uniform stability (2) by applying (4). In this sense, the first-moment generalization error bound in Theorem 1 is substantially more challenging to derive than the excess risk bound.
Additionally, under the generalized Bernstein condition (Koltchinskii, 2006), the risk bound (13) can be readily improved to (5) by directly applying the corresponding deviation-optimal risk bound of Klochkov and Zhivotovsky (2021) to the on-average loss function $\mathbb{E}_{\xi}[\ell(A(S, \xi); Z)]$ under the on-average uniform stability condition.

# 3 Boosting the Confidence of Generalization

The confidence-boosting technique of Schapire (1990) is a classic meta approach that allows one to boost the dependence of a learning algorithm on the failure probability $\delta$ from $1 / \delta$ to $\log(1 / \delta)$ , at the cost of additional computation. In this section, we show an implication of our first-moment bound in Theorem 1 for achieving high-probability generalization jointly over the randomness of data and algorithm, within a natural confidence-boosting framework.

# 3.1 Confidence boosting via bagging

Given a randomized learning algorithm $A$ , we propose to study a bagging-based confidence-boosting procedure as outlined in Algorithm 1. In this meta procedure, we independently run the algorithm $A$ for $K$ times over $S_{1}$ , a fraction of the training set, to obtain $K$ different candidate solutions $\{A(S_1,\xi_k)\}_{k\in [K]}$ . Then we evaluate the validation error of these candidate solutions over the holdout training subset $S_{2}$ , and pick the solution $A(S_{1},\xi_{k^{*}})$ that has the smallest gap between the training error and validation error, i.e., $k^{*} = \arg \min_{k\in [K]}\left|R_{S_2}(A(S_1,\xi_k)) - R_{S_1}(A(S_1,\xi_k))\right|$ . In particular, suppose that the internal randomness of $A$ arises from random sampling of data points with replacement, such as in SGD under with-replacement sampling.
Then in this setting, the procedure can be regarded as a version of bagging (Breiman, 1996) with a greedy model ensemble scheme, applied to the deterministic counterpart of $A$ with fixed random bits (e.g., SGD with identity permutation) over the training subset $S_{1}$ .

# 3.2 Jointly exponential bounds

The following theorem is our main result about the generalization error bound of the output $A(S_{1}, \xi_{k^{*}})$ that holds with high probability over the entire training set $S$ and the random seeds $\{\xi_k\}_{k \in [K]}$ .

Theorem 2. Suppose that a randomized learning algorithm $A: \mathcal{Z}^N \times \mathcal{R} \mapsto \mathcal{W}$ has $L_2$ -uniform stability with parameter $\gamma_{L_2, N}$ . Assume that the loss function $\ell$ is valued in $[0, M]$ . Then for any $\delta \in (0, 1)$ and $K \geq 2\log\left(\frac{2}{\delta}\right)$ , with probability at least $1 - \delta$ over the randomness of $S$ and $\{\xi_k\}_{k \in [K]}$ , the output of Algorithm 1 satisfies

$$
\left|R\left(A\left(S_1, \xi_{k^*}\right)\right) - R_S\left(A\left(S_1, \xi_{k^*}\right)\right)\right| \lesssim \gamma_{L_2, (1-\mu)N}\log(N)\log\left(\frac{1}{\delta}\right) + \frac{M}{\sqrt{\mu(1-\mu)}}\sqrt{\frac{\log(K/\delta)}{N}}.
$$

# Algorithm 2: $A_{\mathrm{SGD-w}}$ : SGD under With-Replacement Sampling

Input: Data set $S = \{Z_{i}\}_{i\in [N]}$ , step-sizes $\{\eta_t\}_{t\geq 1}$ , number of iterations $T$ , initialization $w_0$ .

Output: $\bar{w}_T = \frac{1}{T}\sum_{t\in [T]}w_t$ .

for $t = 1,2,\dots,T$ do

Uniformly randomly sample an index $i_t\in [N]$ with replacement;

Compute $w_{t} = \Pi_{\mathcal{W}}(w_{t-1} - \eta_t\nabla_w\ell(w_{t-1};Z_{i_t}))$ .

end

Proof in sketch.
Based on Theorem 1, we first prove an intermediate result showing that the minimal generalization error of the $K$ outputs satisfies $\min_{k\in [K]}|R(A(S_1,\xi_k)) - R_{S_1}(A(S_1,\xi_k))|\lesssim \gamma_{L_2,(1 - \mu)N}\log (N)\log \left(\frac{1}{\delta}\right) + \frac{M}{\sqrt{\mu(1 - \mu)}}\sqrt{\frac{\log(1 / \delta)}{N}}$ provided that $K\gtrsim \log (\frac{1}{\delta})$ . Next we show that the greedy model selection strategy in use guarantees that the selected $A(S_1,\xi_{k^*})$ mimics the generalization behavior of the best performer among the $K$ candidates, with a slightly expanded $\log (K / \delta)$ factor representing the overhead of simultaneously bounding the generalization performance of the $K$ different candidate solutions over the holdout validation set. Finally, the desired bound follows from a union bound argument. See Appendix B.1 for its full proof.

Remark 6. The bound in Theorem 2 holds with high probability jointly over the randomness of sample and algorithm. Different from the bound in (7) that requires high-probability uniform stability, Theorem 2 is valid under the substantially milder notion of $L_{2}$ -uniform stability, though at the cost of running the algorithm multiple times for confidence boosting. Compared to the bound in (8) that requires certain conditional uniform stability over the random bits of the algorithm, our bound has sharper dependence on the uniform stability parameter yet holds under a weaker notion of stability.

Remark 7. Regarding the scale of the factor $1 / \sqrt{\mu(1 - \mu)}$ in the bound of Theorem 2, if we set $\mu = 0.01$ (i.e., $99\%$ of $S$ is used as $S_{1}$ for training), then the factor is around 10.05.

Concerning the excess risk of Algorithm 1, we consider a slightly modified output $A(S_{1},\xi_{k^{*}})$ such that $k^{*} = \arg \min_{k\in [K]}R_{S_2}(A(S_1,\xi_k))$ .
Then based on the in-expectation risk bound (13), we can derive the following excess risk bound under the conditions of Theorem 2 using similar arguments:

$$
R\left(A\left(S_1, \xi_{k^*}\right)\right) - R^* \lesssim \Delta_{\mathrm{opt}} + \gamma_{L_2, (1-\mu)N}\log(N)\log\left(\frac{1}{\delta}\right) + \frac{M}{\sqrt{\mu(1-\mu)}}\sqrt{\frac{\log(K/\delta)}{N}}. \tag{14}
$$

Again, the above risk bound is still valid under the weaker notion of on-average uniform stability (2).

# 4 Implications for SGD

This section is devoted to demonstrating the implications of Theorem 1 and Theorem 2 for the widely used SGD algorithm and its confidence-boosted versions as well. We focus on a variant of SGD under with-replacement sampling as outlined in Algorithm 2, which we call $A_{\mathrm{SGD-w}}$ . In what follows, we denote by $\xi = \{i_t\}_{t\in [T]}$ the sample path of $A_{\mathrm{SGD-w}}$ over a given data set, and by $\{\xi_k\}_{k\in [K]}$ the $K$ independent copies of $\xi$ when implemented with bagging as shown in Algorithm 1. Our results can also be extended to the without-replacement variant of SGD; the corresponding results are provided in Appendix D for the sake of completeness.

# 4.1 Convex optimization with smooth loss

We first present the following lemma that establishes the $L_{2}$ -uniform stability of $A_{\mathrm{SGD-w}}$ with convex and smooth loss functions, such as the logistic loss. See Appendix C.2 for its proof.

Lemma 1. Suppose that the loss function $\ell(\cdot; \cdot)$ is convex, $G$ -Lipschitz and $L$ -smooth with respect to its first argument. Assume that $\eta_t \leq 2 / L$ for all $t \geq 1$ . Then $A_{SGD-w}$ has $L_2$ -uniform stability with parameter

$$
\gamma_{L_2, N} = 2G^2\sqrt{10\left(\frac{1}{N}\sum_{t=1}^{T}\eta_t^2 + \frac{1}{N^2}\left(\sum_{t=1}^{T}\eta_t\right)^2\right)}.
$$

Given Lemma 1, we can apply Theorem 1 and Theorem 2 to immediately obtain the following generalization result for $A_{\mathrm{SGD-w}}$ and its confidence-boosted version with smooth and convex losses.

Corollary 1. Suppose that the loss function $\ell (\cdot ;\cdot)\in [0,M]$ is convex, $G$ -Lipschitz and $L$ -smooth with respect to its first argument. Then for any $\delta \in (0,1)$ , it holds with probability at least $1 - \delta$ over the randomness of $S$ that $\mathbb{E}_{\xi}\left[|R(A_{SGD-w}(S,\xi)) - R_S(A_{SGD-w}(S,\xi))|\right]\lesssim$

$$
G^2\log(N)\log\left(\frac{1}{\delta}\right)\sqrt{\frac{1}{N}\sum_{t=1}^{T}\eta_t^2 + \frac{1}{N^2}\left(\sum_{t=1}^{T}\eta_t\right)^2} + M\sqrt{\frac{\log(1/\delta)}{N}}.
$$

Moreover, consider Algorithm 1 specified to $A_{SGD-w}$ with learning rate $\eta_t \leq 2 / L$ and $K \asymp \log \left(\frac{1}{\delta}\right)$ . Then with probability at least $1 - \delta$ over the randomness of $S$ and $\{\xi_k\}_{k \in [K]}$ , it holds that $|R(A_{SGD-w}(S_1, \xi_{k^*})) - R_S(A_{SGD-w}(S_1, \xi_{k^*}))| \lesssim$

$$
G^2\log(N)\log\left(\frac{1}{\delta}\right)\sqrt{\frac{1}{(1-\mu)N}\sum_{t=1}^{T}\eta_t^2 + \frac{1}{(1-\mu)^2 N^2}\left(\sum_{t=1}^{T}\eta_t\right)^2} + \frac{M}{\sqrt{\mu(1-\mu)}}\sqrt{\frac{\log(1/\delta)}{N}}.
$$

Remark 8. For the conventional choice of $\eta_t = \frac{2}{L\sqrt{t}}$ , the high-probability (w.r.t. data) generalization bounds in Corollary 1 for SGD and its confidence-boosted version are roughly of scale $\mathcal{O}\bigl(\log (N)\log \bigl(\frac{1}{\delta}\bigr)\sqrt{\frac{\log(T)}{N} + \frac{T}{N^2}}\bigr)$ , which matches the corresponding $\mathcal{O}\bigl(\frac{\sqrt{T}}{N}\bigr)$ in-expectation bound of SGD with smooth and convex losses (Hardt et al., 2016).
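For concreteness, the update rule of Algorithm 2 with the step sizes $\eta_t = 2/(L\sqrt{t})$ from Remark 8 can be sketched as follows; the per-example gradient `grad` and a Euclidean-ball projection standing in for $\Pi_{\mathcal{W}}$ are illustrative assumptions of this sketch, not choices made in the paper:

```python
import numpy as np

def sgd_with_replacement(grad, S, T, w0, L, radius=10.0, seed=0):
    """Sketch of A_SGD-w: projected SGD under with-replacement sampling.

    grad(w, z): gradient of the loss ell(w; z) in w (supplied by the user).
    The feasible set W is taken to be a Euclidean ball of the given radius
    for illustration. Returns the averaged iterate bar{w}_T.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    avg = np.zeros_like(w)
    for t in range(1, T + 1):
        i_t = rng.integers(len(S))                 # uniform with-replacement sampling
        w = w - (2.0 / (L * np.sqrt(t))) * grad(w, S[i_t])
        norm = np.linalg.norm(w)
        if norm > radius:                          # projection Pi_W onto the ball
            w *= radius / norm
        avg += w
    return avg / T                                 # bar{w}_T = (1/T) sum_t w_t
```

For instance, with the squared loss $\ell(w; z) = \frac{1}{2}(w - z)^2$ (so $L = 1$ and $\nabla_w \ell(w; z) = w - z$), the averaged iterate approaches the sample mean of the data.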
Combining with the standard in-expectation optimization error bound of convex SGD (see, e.g., Shamir and Zhang, 2013), we can show the following excess risk bound for (modified) Algorithm 1 as a direct consequence of applying the generic bound (14) to $A_{\mathrm{SGD-w}}$ with convex and smooth losses:

$$
\begin{array}{l} R(A_{\mathrm{SGD-w}}(S_1, \xi_{k^*})) - R^* \lesssim G^2\log(N)\log\left(\frac{1}{\delta}\right)\sqrt{\frac{1}{(1-\mu)N}\sum_{t=1}^{T}\eta_t^2 + \frac{1}{(1-\mu)^2 N^2}\left(\sum_{t=1}^{T}\eta_t\right)^2} \\ + \frac{M}{\sqrt{\mu(1-\mu)}}\sqrt{\frac{\log(1/\delta)}{N}} + \frac{D^2(w_0, W^*) + G^2\sum_{t=1}^{T}\eta_t^2}{\sum_{t=1}^{T}\eta_t}, \\ \end{array}
$$

where $W^{*} \coloneqq \operatorname{Argmin}_{w \in \mathcal{W}} R(w)$ and $D(w, W^{*}) = \min_{w^{*} \in W^{*}} \| w - w^{*} \|$ . With learning rate $\eta_t = \frac{2}{L\sqrt{t}}$ , the right-hand side above roughly scales as $\mathcal{O}\left(\log(N)\log\left(\frac{1}{\delta}\right)\sqrt{\frac{\log(T)}{N} + \frac{T}{N^2}} + \frac{\log(T)}{\sqrt{T}}\right)$ , which matches the prior high-probability excess risk bounds of SGD with convex losses (Harvey et al., 2019, Remark 3.7).

# 4.2 Convex optimization with non-smooth loss

Now we turn to the case where the loss is convex but not necessarily smooth, such as the hinge loss and the absolute loss. We first establish the following lemma about the $L_{2}$ -uniform stability parameter of $A_{\mathrm{SGD-w}}$ in this setting. See Appendix C.3 for its proof.

Lemma 2. Suppose that the loss function $\ell (\cdot ;\cdot)$ is convex and $G$ -Lipschitz with respect to its first argument.
Then $A_{SGD-w}$ has $L_{2}$ -uniform stability with parameter

$$
\gamma_{L_2, N} = G^2\sqrt{40\sum_{t=1}^{T}\eta_t^2 + \frac{32}{N^2}\left(\sum_{t=1}^{T}\eta_t\right)^2}.
$$

With Lemma 2 in place, we can readily apply Theorem 1 and Theorem 2 to establish the following corollary about the generalization bounds of $A_{\mathrm{SGD-w}}$ and its confidence-boosted version with convex and non-smooth loss functions.

Corollary 2. Suppose that the loss function $\ell (\cdot ;\cdot)\in [0,M]$ is convex and $G$ -Lipschitz with respect to its first argument. Then for any $\delta \in (0,1)$ , it holds with probability at least $1 - \delta$ over the randomness of $S$ that $\mathbb{E}_{\xi}\left[|R(A_{SGD-w}(S,\xi)) - R_S(A_{SGD-w}(S,\xi))|\right]\lesssim$

$$
G^2\log(N)\log\left(\frac{1}{\delta}\right)\sqrt{\sum_{t=1}^{T}\eta_t^2 + \frac{1}{N^2}\left(\sum_{t=1}^{T}\eta_t\right)^2} + M\sqrt{\frac{\log(1/\delta)}{N}}.
$$

Moreover, consider Algorithm 1 specified to $A_{SGD-w}$ with $K \asymp \log \left(\frac{1}{\delta}\right)$ . Then with probability at least $1 - \delta$ over $S$ and $\{\xi_k\}_{k \in [K]}$ , it holds that $|R(A_{SGD-w}(S_1, \xi_{k^*})) - R_S(A_{SGD-w}(S_1, \xi_{k^*}))| \lesssim$

$$
G^2\log(N)\log\left(\frac{1}{\delta}\right)\sqrt{\sum_{t=1}^{T}\eta_t^2 + \frac{1}{(1-\mu)^2 N^2}\left(\sum_{t=1}^{T}\eta_t\right)^2} + \frac{M}{\sqrt{\mu(1-\mu)}}\sqrt{\frac{\log(1/\delta)}{N}}.
$$

Remark 9. For SGD with decaying learning rates $\eta_t = \frac{1}{\sqrt{Nt}}$ , Corollary 2 admits high-probability generalization bounds of scale $\mathcal{O}\big(\log (N)\log \big(\frac{1}{\delta}\big)\sqrt{\frac{\log(T)}{N} + \frac{T}{N^3}} +\sqrt{\frac{\log(1 / \delta)}{N}}\big)$ .
With fixed rates $\eta_t\equiv \eta$ , Corollary 2 yields deviation bounds of scale $\mathcal{O}\big(\eta \log (N)\log \big(\frac{1}{\delta}\big)(\sqrt{T} +\frac{T}{N}) + \sqrt{\frac{\log(1 / \delta)}{N}}\big)$ which matches the near-optimal rate by Bassily et al. (2020, Theorem 3.3). + +# 4.3 Non-convex optimization with smooth loss + +We further study the performance of Algorithm 1 for $A_{\mathrm{SGD - w}}$ with smooth but not necessarily convex loss functions, such as normalized sigmoid loss (Mason et al., 1999). The following lemma estimates the $L_{2}$ -uniform stability of $A_{\mathrm{SGD - w}}$ in the considered setting. See Appendix C.4 for its proof. + +Lemma 3. Suppose that the loss function $\ell (\cdot ;\cdot)$ is $G$ -Lipschitz and $L$ -smooth with respect to its first argument. Consider $\eta_t\leq 1 / L$ . Then $A_{SGD - w}$ has $L_{2}$ -uniform stability with parameter + +$$ +\gamma_ {L _ {2}, N} = 2 G ^ {2} \sqrt {\frac {1}{N} \sum_ {t = 1} ^ {T} \exp \left(3 L \sum_ {\tau = t + 1} ^ {T} \eta_ {\tau}\right) u _ {t}}, +$$ + +where + +$$ +u _ {t} := \eta_ {t} ^ {2} + 2 \eta_ {t} \sum_ {\tau = 1} ^ {t - 1} \exp \left(L \sum_ {i = \tau + 1} ^ {t - 1} \eta_ {i}\right) \eta_ {\tau}. +$$ + +Based on Lemma 3, we can invoke Theorem 1 and Theorem 2 to show the following generalization result for $A_{\mathrm{SGD - w}}$ and its confidence-boosted version with non-convex and smooth loss functions. + +Corollary 3. Suppose that the loss function $\ell (\cdot ;\cdot)\in [0,M]$ is $G$ -Lipschitz and $L$ -smooth with respect to its first argument. 
Then for any $\delta \in (0,1)$ , it holds with probability at least $1 - \delta$ over the randomness of $S$ that $\mathbb{E}_{\xi}\left[\left|R(A_{SGD-w}(S,\xi)) - R_S(A_{SGD-w}(S,\xi))\right|\right]\lesssim$

$$
G^2\log(N)\log\left(\frac{1}{\delta}\right)\sqrt{\frac{1}{N}\sum_{t=1}^{T}\exp\left(L\sum_{\tau=t+1}^{T}\eta_{\tau}\right)u_t} + M\sqrt{\frac{\log(1/\delta)}{N}},
$$

where $u_{t} \coloneqq \eta_{t}^{2} + 2\eta_{t}\sum_{\tau = 1}^{t - 1}\exp (L\sum_{i = \tau +1}^{t - 1}\eta_{i})\eta_{\tau}$ for all $t \geq 1$ . Moreover, consider Algorithm 1 specified to $A_{SGD-w}$ with $\eta_t \leq \frac{1}{L}$ and $K \asymp \log (\frac{1}{\delta})$ . Then with probability at least $1 - \delta$ over $S$ and $\{\xi_k\}_{k\in [K]}$ , it holds that $|R(A_{SGD-w}(S_1,\xi_{k^*})) - R_S(A_{SGD-w}(S_1,\xi_{k^*}))| \lesssim$

$$
G^2\log(N)\log\left(\frac{1}{\delta}\right)\sqrt{\frac{1}{(1-\mu)N}\sum_{t=1}^{T}\exp\left(L\sum_{\tau=t+1}^{T}\eta_{\tau}\right)u_t} + \frac{M}{\sqrt{\mu(1-\mu)}}\sqrt{\frac{\log(1/\delta)}{N}}.
$$

Remark 10. For the decaying learning rates $\eta_t = \frac{1}{L\nu t}$ with arbitrary $\nu \geq 1$ , the generalization bounds in Corollary 3 are of scale $\mathcal{O}\big(\log (N)\log \big(\frac{1}{\delta}\big)\sqrt{\frac{T^{1 / \nu}\log(T)}{\nu N}} +\sqrt{\frac{\log(1 / \delta)}{N}}\big)$ . For the constant learning rates $\eta_t\equiv \frac{1}{LT}$ , the bounds are of scale $\mathcal{O}\big(\log (N)\log \big(\frac{1}{\delta}\big)\sqrt{\frac{\log(1 / \delta)}{N}}\big)$ .

# 5 Conclusion

In this paper, we have introduced a novel concept of $L_{2}$ -uniform stability for randomized learning algorithms and proved a strong first-moment generalization bound that holds with high probability over the training sample.
Equipped with this result, we have further developed a bagging based confidence-boosting procedure and shown that it yields near-optimal generalization bounds with high confidence jointly over the randomness of sample and algorithm. The power of our theory has been demonstrated through an application to SGD with time-decaying learning rates, where sharper generalization bounds have been obtained for both convex and non-convex loss functions. + +# Acknowledgments and Disclosure of Funding + +The authors sincerely thank the anonymous reviewers and area chairs for their insightful comments on this paper. The research was conducted while XTY worked for Nanjing University of Information Science and Technology and both authors worked for Baidu Cognitive Computing Lab. The work of XTY is also funded in part by the National Key Research and Development Program of China under Grant No. 2018AAA0100400, and in part by the Natural Science Foundation of China (NSFC) under Grant No. U21B2049, 61936005. + +# References + +Raef Bassily, Vitaly Feldman, Kunal Talwar, and Abhradeep Guha Thakurta. Private stochastic convex optimization with optimal rates. In Advances in Neural Information Processing Systems (NeurIPS), pages 11279-11288, Vancouver, Canada, 2019. +Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, and Kunal Talwar. Stability of stochastic gradient descent on nonsmooth convex losses. In Advances in Neural Information Processing Systems (NeurIPS), virtual, 2020. +Yoshua Bengio, Ian Goodfellow, and Aaron Courville. Deep learning, volume 1. MIT press Cambridge, MA, USA, 2017. +Stéphane Boucheron, Olivier Bousquet, Gábor Lugosi, and Pascal Massart. Moment inequalities for functions of independent random variables. The Annals of Probability, 33(2):514-560, 2005. +Olivier Bousquet and André Elisseeff. Stability and generalization. J. Mach. Learn. Res., 2:499-526, 2002. +Olivier Bousquet, Yegor Klochkov, and Nikita Zhivotovskiy. Sharper bounds for uniformly stable algorithms. 
In Proceedings of the Conference on Learning Theory (COLT), pages 610-626, Virtual Event [Graz, Austria], 2020.
Leo Breiman. Bagging predictors. Mach. Learn., 24(2):123-140, 1996.
Peter Bühlmann. Bagging, boosting and ensemble methods. In Handbook of computational statistics, pages 985-1022. Springer, 2012.
Zachary Charles and Dimitris S. Papailiopoulos. Stability and generalization of learning algorithms that converge to global optima. In Proceedings of the 35th International Conference on Machine Learning (ICML), pages 744-753, Stockholm, Sweden, 2018.
Yuan Shih Chow and Henry Teicher. Probability theory: independence, interchangeability, martingales. Springer Science & Business Media, 2003.
Luc Devroye and Terry J. Wagner. Distribution-free inequalities for the deleted and holdout error estimates. IEEE Trans. Inf. Theory, 25(2):202-207, 1979.
André Elisseeff, Theodoros Evgeniou, and Massimiliano Pontil. Stability of randomized learning algorithms. J. Mach. Learn. Res., 6:55-79, 2005.
Vitaly Feldman and Jan Vondrák. Generalization bounds for uniformly stable algorithms. In Advances in Neural Information Processing Systems (NeurIPS), pages 9770-9780, Montréal, Canada, 2018.
Vitaly Feldman and Jan Vondrák. High probability generalization bounds for uniformly stable algorithms with nearly optimal rate. In Proceedings of the Conference on Learning Theory (COLT), pages 1270-1279, Phoenix, AZ, 2019.
Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: optimal rates in linear time. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 439-449, Chicago, IL, 2020.
Dylan J. Foster, Spencer Greenberg, Satyen Kale, Haipeng Luo, Mehryar Mohri, and Karthik Sridharan. Hypothesis set stability and generalization. In Advances in Neural Information Processing Systems (NeurIPS), pages 6726-6736, Vancouver, Canada, 2019.

Moritz Hardt, Ben Recht, and Yoram Singer.
Train faster, generalize better: Stability of stochastic gradient descent. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pages 1225-1234, New York City, NY, 2016.
Nicholas J. A. Harvey, Christopher Liaw, Yaniv Plan, and Sikander Randhawa. Tight analyses for non-smooth stochastic gradient descent. In Proceedings of the Conference on Learning Theory (COLT), pages 1579-1613, Phoenix, AZ, 2019.
Matthew Holland. Robustness and scalability under heavy tails, without strong convexity. In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 865-873, Virtual Event, 2021.
Yegor Klochkov and Nikita Zhivotovsky. Stability and deviation optimal risk bounds with convergence rate $o(1/n)$ . In Advances in Neural Information Processing Systems (NeurIPS), pages 5065-5076, virtual, 2021.
Vladimir Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34(6):2593-2656, 2006.
Ilja Kuzborskij and Christoph H. Lampert. Data-dependent stability of stochastic gradient descent. In Proceedings of the 35th International Conference on Machine Learning (ICML), pages 2820-2829, Stockholm, Sweden, 2018.
Yunwen Lei and Yiming Ying. Fine-grained analysis of stability and generalization for stochastic gradient descent. In Proceedings of the 37th International Conference on Machine Learning (ICML), pages 5809-5819, Virtual Event, 2020.
Yunwen Lei and Yiming Ying. Sharper generalization bounds for learning with gradient-dominated objective functions. In Proceedings of the 9th International Conference on Learning Representations (ICLR), Virtual Event, Austria, 2021.
Liam Madden, Emiliano Dall'Anese, and Stephen Becker. High probability convergence and uniform stability bounds for nonconvex stochastic gradient descent. arXiv preprint arXiv:2006.05610, 2020.
Llew Mason, Jonathan Baxter, Peter L. Bartlett, and Marcus R. Frean.
Boosting algorithms as gradient descent. In Advances in Neural Information Processing Systems (NIPS), pages 512-518, Denver, CO, 1999. +Colin McDiarmid et al. On the method of bounded differences. Surveys in Combinatorics, 141(1): 148-188, 1989. +Nishant A. Mehta. Fast rates with high probability in exp-concave statistical learning. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1085-1093, Fort Lauderdale, FL, 2017. +Sayan Mukherjee, Partha Niyogi, Tomaso A. Poggio, and Ryan M. Rifkin. Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization. Adv. Comput. Math., 25(1-3):161-193, 2006. +Dheeraj Nagaraj, Prateek Jain, and Praneeth Netrapalli. SGD without replacement: Sharper rates for general smooth convex functions. In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 4703-4711, Long Beach, CA, 2019. +David W. Opitz and Richard Maclin. Popular ensemble methods: An empirical study. J. Artif. Intell. Res., 11:169-198, 1999. +Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In Proceedings of the 29th International Conference on Machine Learning (ICML), Edinburgh, Scotland, UK, 2012. +William H Rogers and Terry J Wagner. A finite sample distribution-free performance bound for local discrimination rules. The Annals of Statistics, pages 506-514, 1978. + +Robert E. Schapire. The strength of weak learnability. Mach. Learn., 5:197-227, 1990. +Mark Schmidt, Nicolas Le Roux, and Francis R. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. In Advances in Neural Information Processing Systems (NIPS), pages 1458-1466, Granada, Spain, 2011. +Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Learnability, stability and uniform convergence. J. Mach. Learn. 
Res., 11:2635-2670, 2010. +Ohad Shamir. Without-replacement sampling for stochastic gradient methods. In Advances in Neural Information Processing Systems (NIPS), pages 46-54, Barcelona, Spain, 2016. +Ohad Shamir and Tong Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 71-79, Atlanta, GA, 2013. +Giorgio Valentini and Thomas G. Dietterich. Low bias bagged support vector machines. In Proceedings of the Twentieth International Conference (ICML), pages 752-759, Washington, DC, 2003. +V. N. Vapnik and A. Ya. Chervonenkis. Theory of Pattern Recognition [in Russian]. Nauka, 1974. +Xiao-Tong Yuan and Ping Li. Stability and risk bounds of iterative hard thresholding. IEEE Trans. Inf. Theory, 68(10):6663-6681, 2022. +Xiao-Tong Yuan and Ping Li. Exponential generalization bounds with near-optimal rates for $l_{q}$ -stable algorithms. In Proceedings of the Eleventh International Conference on Learning Representations (ICLR), Kigali, Rwanda, 2023. +Tong Zhang. Leave-one-out bounds for kernel methods. Neural Comput., 15(6):1397-1437, 2003. +Yi Zhou, Yingbin Liang, and Huishuai Zhang. Understanding generalization error of SGD in nonconvex optimization. Mach. Learn., 111(1):345-375, 2022. 
# $\mathbb{E}^{\mathrm{FWI}}$ : Multiparameter Benchmark Datasets for Elastic Full Waveform Inversion of Geophysical Properties

Shihang Feng $^{1,*}$

Hanchen Wang $^{1,*}$

Chengyuan Deng $^{1,2}$

Yinan Feng

Yanhua Liu $^{1,3}$

Min Zhu $^{1,4}$

Peng Jin $^{1,5}$

Yinpeng Chen

Youzuo Lin $^{1,7}$

$^{1}$ Los Alamos National Laboratory $^{2}$ Rutgers University $^{3}$ Colorado School of Mines

$^{4}$ University of Pennsylvania $^{5}$ The Pennsylvania State University $^{6}$ Microsoft

$^{7}$ University of North Carolina at Chapel Hill

shihang.feng@live.com

{hanchen.wang, charles.deng, ynf, yanhualiiu, minzhu, pjin, ylin}@lanl.gov

yiche@microsoft.com, yzlin@unc.edu

# Abstract

Elastic geophysical properties (such as P- and S-wave velocities) are of great importance to various subsurface applications like $\mathrm{CO}_{2}$ sequestration and energy exploration (e.g., hydrogen and geothermal). Elastic full waveform inversion (FWI) is widely applied for characterizing reservoir properties. In this paper, we introduce $\mathbb{E}^{\mathrm{FWI}}$ , a comprehensive benchmark dataset that is specifically designed for elastic FWI. $\mathbb{E}^{\mathrm{FWI}}$ encompasses 8 distinct datasets that cover diverse subsurface geologic structures (flat, curved, faulted, etc.). Benchmark results produced by three different deep learning methods are provided. In contrast to our previously presented dataset (pressure recordings) for acoustic FWI (referred to as OPENFWI), the seismic data in $\mathbb{E}^{\mathrm{FWI}}$ have both vertical and horizontal components. Moreover, the velocity maps in $\mathbb{E}^{\mathrm{FWI}}$ incorporate both P- and S-wave velocities. While the multicomponent data and the added S-wave velocity make the data more realistic, they also introduce more challenges regarding the convergence and computational cost of the inversion.
We conduct comprehensive numerical experiments to explore the relationship between P-wave and S-wave velocities in seismic data. The relation between P- and S-wave velocities provides crucial insights into subsurface properties such as lithology, porosity, fluid content, etc. We anticipate that $\mathbb{E}^{\mathrm{FWI}}$ will facilitate future research on multiparameter inversions and stimulate endeavors in several critical research topics of carbon-zero and new energy exploration. All datasets, codes$^{1}$ and relevant information can be accessed through our website at https://efwi-lanl.github.io/.

# 1 Introduction

Seismic imaging is similar to how submarines use sonar to map out underwater landscapes and locate objects. Just as sonar sends out sound waves and interprets the echoes to figure out the distances and shapes of underwater objects, geoscientists send seismic waves deep into the Earth. By analyzing how these waves are reflected back, they can generate detailed images of the subsurface and deduce properties of rock formations.

![](images/f5fd7fc98e2ad39960ee1ece7f5775e21223e17e7b93aa346f6b01199474a9d9.jpg)
Figure 1: Gallery of $\mathbb{E}^{\mathbf{FWI}}$ : one example of reservoir structure (Pr) and velocity maps $(V_P, V_S)$ from the $\mathbb{E}^{\mathbf{FVB}}, \mathbb{E}^{\mathbf{FFB}}, \mathbb{E}^{\mathbf{CVB}}, \mathbb{E}^{\mathbf{CFB}}$ datasets. Pr refers to the designed reservoir, which is the Poisson's ratio anomaly calculated explicitly from $V_P$ and $V_S$ , the two wave traveling speeds at each spatial point.

Seismic waves, propagating through the subsurface medium, can unveil the physical properties of the rock formations. Full waveform inversion (FWI) has emerged as an effective technique for obtaining high-resolution models of the subsurface physical properties [1, 2, 3]. In essence, it is like refining our underwater sonar map to capture more details and nuances.
The determination of such properties from seismic data is posed as an inverse problem. This means we use the reflected waves (akin to sonar echoes) to infer the properties of the rocks they passed through. FWI refines this process, striving to find the most accurate representation by minimizing the difference between observed and synthetic seismic data [4]. This technique has made substantial contributions across a range of domains, including geothermal energy exploration, earthquake monitoring, subsurface imaging for engineering applications, and many others [5].

The acoustic approximation has been widely employed in wavefield simulation for FWI, resulting in a substantial reduction in computational cost [6, 7]. It assumes that the subsurface medium behaves as a fluid and focuses on simulating the kinematic aspects of compressional (P) wave propagation within the medium. However, acoustic wave propagation is an oversimplified representation of real-world scenarios, as it solely considers P-wave propagation and does not adequately model the dynamics of the wavefield [8, 9]. Consequently, this oversimplification leads to suboptimal accuracy of the reconstructed medium parameters [10, 11, 12, 13, 14].

![](images/960d814e482870ee860735b7a5a9c3d2af4d0ec698aeeba6faa6b218a86b801a.jpg)
Figure 2: Comparison of elastic data in $\mathbb{E}^{\mathrm{FWI}}$ and acoustic data in OPENFWI. Acoustic data only contain P-wave propagation while elastic data contain both P- and S-waves.

Why elastic FWI: Elastic inversion, which considers both P- and shear (S-) waves, provides a more comprehensive and precise representation of the subsurface. The correlation between the P-wave $(\mathrm{V_P})$ and S-wave velocities $(\mathrm{V_S})$ holds significant implications for the determination of Poisson's ratio (i.e., the $\mathrm{V_P/V_S}$ ratio) and Young's modulus.
These parameters play a vital role in reservoir characterization and serve as essential indicators in the identification and assessment of hydrogen and geothermal reservoirs [15, 16, 17, 18]. The following aspects highlight their significance:

- Lithology discrimination: The combination of $\mathrm{V_P}$ and $\mathrm{V_S}$ is useful for lithology estimation, while $\mathrm{V_P}$ alone introduces significant ambiguity because the $\mathrm{V_P}$ ranges of different rock types overlap [19].
- Fracture characterization: The Poisson's ratio (the $\mathrm{V_P/V_S}$ ratio) and S-wave splitting can be used to estimate fracture orientation and facilitate hydraulic fracturing stimulation [20].
- Estimation of fluid content and saturation: The Poisson's ratio (the $\mathrm{V_P/V_S}$ ratio) allows us to estimate the compressibility and qualitatively estimate the fluid property together with other relevant reservoir parameters such as pressure and temperature [21].

Elastic FWI, as a prominent multiparameter-inversion technique, allows us to simultaneously estimate P- and S-wave velocities [22]. However, the simultaneous consideration of multiple parameters and the expanded dimensions of seismic data significantly increase the complexity of the objective function. This escalation results from the enhanced nonlinearity and the induced trade-offs between the velocities. The coupled impact of P- and S-wave velocities on the seismic response further complicates the iterative update process for each parameter. Additionally, the nonlinearity becomes even more pronounced when multiple parameter classes are incorporated into the inversion, as this substantially expands the model space by introducing an increased degree of freedom [23]. Thus, the multidimensionality of elastic FWI renders the problem considerably more complex and challenging compared to the acoustic single-parameter counterpart.
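Since Poisson's ratio depends only on the $\mathrm{V_P/V_S}$ ratio, the indicator discussed above is simple to compute. A minimal sketch (the function name and example velocities are ours, chosen for illustration, not taken from the paper):

```python
def poissons_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities.

    Dividing numerator and denominator of (vp^2 - 2 vs^2) / (2 vp^2 - 2 vs^2)
    by vs^2 shows it depends only on the ratio r = vp / vs.
    """
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * r2 - 2.0)

# vp/vs = 1.9, typical of a consolidated rock, gives Pr ~ 0.31;
# gas saturation tends to lower vp/vs and hence Pr.
print(round(poissons_ratio(3800.0, 2000.0), 3))  # prints 0.308
```

Because only the ratio matters, scaling both velocities by the same factor leaves the result unchanged, which is why the ratio itself is the lithology/fluid indicator.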
With the recent development of machine learning, researchers have been actively exploring data-driven solutions for multiparameter FWI, including multilayer perceptrons (MLPs) [24], encoder-decoder-based convolutional neural networks (CNNs) [25, 26], recurrent networks [27, 28], generative adversarial networks (GANs) [29], etc. Nonetheless, the absence of a publicly available elastic dataset poses challenges in facilitating a fair comparison of these methods.

To illustrate the essence of these parameters and the efficacy of elastic FWI, we spotlight the Gallery of $\mathbb{E}^{\mathbf{FWI}}$ in Figure 1. This visualization showcases four sets of samples, each hailing from one of the datasets $\mathbb{E}^{\mathbf{FVB}}$ , $\mathbb{E}^{\mathbf{FFB}}$ , $\mathbb{E}^{\mathbf{CVB}}$ , $\mathbb{E}^{\mathbf{CFB}}$ . Each set encompasses three distinct subplots:

- $P$-wave Velocity Map $(V_{P})$ : Demonstrating the speed at which Primary (P-) waves traverse the subsurface, these velocities provide insights into the composition and layering of the subsurface, such as the presence of fluids or gas.
- S-wave Velocity Map $(\mathrm{V}_{\mathrm{S}})$ : Reflecting the pace of Secondary (S-) waves, these velocities are sensitive to the rigidity and shear strength of the geological formations, offering a more detailed perspective on rock and sediment characteristics.
- Poisson's Ratio Map (Pr): Derived from the $V_{P}$ and $V_{S}$ maps, this visualization quantifies the subsurface's ability to deform under compressive stress. A higher Poisson's ratio typically indicates a more ductile material, while a lower value suggests a more brittle nature, making this measure crucial for understanding the geomechanical behavior of subsurface formations.

$\mathbb{E}^{\mathrm{FWI}}$ is constructed upon our previously published open-access acoustic seismic dataset, known as OPENFWI [30].
Our approach incorporates the advantageous characteristics of multi-scale, multi-domain, and multi-subsurface-complexity, inherited from the OPENFWI framework. Furthermore, $\mathbb{E}^{\mathrm{FWI}}$ entails the creation of S-wave velocity maps and employs the elastic wave equation in the forward modeling phase (Figure 2). The computational demands associated with conducting elastic forward modeling are substantial. Consequently, the availability of this dataset would significantly alleviate the burden on researchers. + +$\mathbb{E}^{\mathrm{FWI}}$ facilitates equitable comparisons across various methodologies using multiple datasets. In this study, we evaluate the effectiveness of three prominent methodologies derived from pre-existing networks, namely InversionNet [31], VelocityGAN [32], and SimFWI [33]. The objective of this evaluation is to establish a benchmark for future investigations. For comprehensive replication attempts, including the GitHub repository, pre-trained models, and associated licenses, we direct readers to the resources referenced in Section 1 of supplementary materials. + +The rest of this paper is organized as follows: Section 2 offers a comprehensive overview of the fundamental principles governing elastic FWI. Section 3 presents a detailed description of the methodology employed in the construction of the dataset. Section 4 offers a succinct introduction to three deep learning methods employed for benchmarking purposes, alongside the presentation of inversion performance on each dataset. The investigation of the interdependence between P- and S-waves is conducted through ablation experiments, as outlined in Section 5. Section 6 outlines the challenges faced and discusses the future implications of the dataset. Lastly, Section 7 offers conclusive remarks summarizing the key findings and contributions. 
+ +![](images/dcb9d5a77ad00c1a4644ef9758e51e1a4322c0d53329c7e010fc151c1eaa4413.jpg) +Figure 3: Schematic depiction of the data-driven approach for elastic forward modeling and FWI. The forward modeling process involves utilizing elastic forward modeling to compute seismic data by employing the governing elastic wave equations, while elastic FWI employs neural networks to infer the P- and S-wave velocity maps from seismic data containing vertical and horizontal components. + +# 2 Elastic Forward Modeling and Data-driven FWI + +Figure 3 provides a concise illustration of 2D data-driven elastic FWI and the relationship between P-, S-wave velocity maps and the input horizontal and vertical components of particle displacement therein. In general, the objective of data-driven elastic FWI is to employ neural networks to determine the subsurface velocity maps of the P- $(\mathrm{V_P})$ and S-waves $(\mathrm{V_S})$ . The velocities depict the propagation speed of P- and S-waves through the subsurface medium and depend on the spatial coordinates $(x,z)$ . We also consider the density of the subsurface as $\rho$ . The source term, represented by s, depends on spatial coordinates and time $(x,z,t)$ . This source term excites both the P- and S-wave components. The particle displacement in horizontal and vertical directions is denoted by the vector $\mathbf{u} = (u_x,u_z)$ . 
The governing equation for elastic wave forward modeling in an isotropic medium is given as [34]:

$$
\rho \frac{\partial^{2} \mathbf{u}}{\partial t^{2}} - \nabla \left[ \rho \left(V_{P}^{2} - 2 V_{S}^{2}\right) (\nabla \cdot \mathbf{u}) \right] - \nabla \cdot \left[ \rho V_{S}^{2} \left(\nabla \mathbf{u} + (\nabla \mathbf{u})^{T}\right) \right] = \mathbf{s}. \tag{1}
$$

In the above equation:

- $\nabla$ represents the gradient of a scalar field in space, specifically:

$$
\nabla f = \left[ \frac{\partial f}{\partial x}, \frac{\partial f}{\partial z} \right]^{T}
$$

- $\nabla \cdot$ denotes the divergence of a vector field in space, which is expressed as:

$$
\nabla \cdot \mathbf{v} = \frac{\partial v_{x}}{\partial x} + \frac{\partial v_{z}}{\partial z}
$$

For simplicity, we assume a constant density $\rho$ with the value of $1\,\mathrm{g/cm^3}$ . The forward modeling problem can be expressed as $(u_{x},u_{z}) = f_{e}(V_{P},V_{S})$ , where $f_{e}(\cdot)$ signifies the highly nonlinear elastic forward modeling. It details how the P- and S-waves, initiated by the source $\mathbf{s}$ , navigate through the subsurface characterized by $\mathrm{V_P}$ and $\mathrm{V_S}$ over time $t$ . Receivers then record these waves as components $u_{x}$ and $u_{z}$ . The ultimate goal of data-driven elastic FWI is to harness neural networks

Table 1: Dataset summary of $\mathbb{E}^{\mathrm{FWI}}$ . Velocity maps are represented in dimensions of depth $\times$ width $\times$ length, while seismic data are presented as #sources $\times$ time $\times$ #receivers in width $\times$ #receivers in length.
| Group | Dataset | Size | #Train / #Test | Seismic Data Size | Velocity Map Size |
| --- | --- | --- | --- | --- | --- |
| E Vel Family | EFVA/B | 123GB | 24K / 6K | 5 × 1000 × 1 × 70 | 70 × 1 × 70 |
| E Vel Family | ECVA/B | 123GB | 24K / 6K | 5 × 1000 × 1 × 70 | 70 × 1 × 70 |
| E Fault Family | EFFA/B | 222GB | 48K / 6K | 5 × 1000 × 1 × 70 | 70 × 1 × 70 |
| E Fault Family | ECFA/B | 222GB | 48K / 6K | 5 × 1000 × 1 × 70 | 70 × 1 × 70 |
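Per Table 1, each sample pairs a seismic cube with two velocity maps. A quick shape check (the zero-filled arrays are stand-ins, and stacking $V_P$/$V_S$ into one array is our choice for illustration, not the dataset's on-disk layout):

```python
import numpy as np

# Per-sample shapes from Table 1 (stand-in arrays, not actual EFWI files):
# seismic data: #sources x time x #receivers-in-width x #receivers-in-length
seismic = np.zeros((5, 1000, 1, 70), dtype=np.float32)

# velocity maps: each Vp/Vs map is depth x width x length = 70 x 1 x 70;
# here we stack the two maps along a leading channel axis
velocity = np.zeros((2, 70, 1, 70), dtype=np.float32)

print(seismic.shape, velocity.shape)
```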
to learn the inverse mapping $(V_{P}, V_{S}) = f_{e}^{-1}(u_{x}, u_{z})$ . This inverse process enables us to deduce the subsurface velocity maps ( $\mathrm{V}_{\mathrm{P}}$ and $\mathrm{V}_{\mathrm{S}}$ ) from the recorded particle displacements ( $u_{x}$ and $u_{z}$ ) acquired from the receivers. By training neural networks with datasets of recorded waveforms and matching velocity maps, we can fine-tune the network parameters for a precise estimation of the subsurface velocities.

# 3 $\mathbb{E}^{\mathrm{FWI}}$ Dataset

This section describes the methodology used to extend the velocity maps from the OPENFWI dataset to elastic FWI and generate our new dataset $\mathbb{E}^{\mathrm{FWI}}$ . Our intention is to provide an accessible, open-source benchmark dataset that can comprehensively facilitate the development and evaluation of machine learning algorithms in elastic FWI.

The basic information and physical meaning of all the datasets in $\mathbb{E}^{\mathrm{FWI}}$ are summarized in Table 1 and Table 2. The velocity maps encompass the P-wave $(\mathrm{V_P})$ and S-wave $(\mathrm{V_S})$ velocities, whereas the seismic data comprise the horizontal and vertical components of particle displacement, $u_{x}$ and $u_{z}$ . The geophysical attributes in the "E Vel Family" and the "E Fault Family" have been constructed utilizing the $\mathrm{V_P}$ maps derived from two distinct groups, namely the "Vel Family" and the "Fault Family" within the OPENFWI dataset. Similar to OPENFWI, each dataset has been categorized into two distinct versions, namely easy (A) and hard (B), based on the relative complexity of the subsurface structures. A thorough examination of the methodologies employed in the construction of the $\mathrm{V_P}$ maps and a detailed analysis of the complexity inherent in the velocity maps can be found in [30].

The P-wave velocity $(\mathrm{V_P})$ maps in $\mathbb{E}^{\mathbf{FWI}}$ are identical to those in the previously published OPENFWI dataset.
For example, $\mathrm{V_P}$ in $\mathbb{E}^{\mathbf{FVA}}$ corresponds to "FlatVel-A" in OPENFWI, $\mathrm{V_P}$ in $\mathbb{E}^{\mathbf{CFB}}$ corresponds to "CurveFault-B" in OPENFWI, and the same naming rule applies to the remaining datasets. These velocity maps incorporate a wide range of geological scenarios reflecting diverse subsurface complexities, thereby providing an extensive testbed for machine learning methodologies.

In order to construct the S-wave velocity $(\mathrm{V_S})$ maps, we incorporate Poisson's ratio $(\mathrm{Pr})$ maps [35], which provide a representation of the relationship between the P-wave $(\mathrm{V_P})$ and the S-wave velocities $(\mathrm{V_S})$ :

$$
P_{r} = \frac{V_{P}^{2} - 2 V_{S}^{2}}{2 V_{P}^{2} - 2 V_{S}^{2}}. \tag{2}
$$

The initial step involves the generation of Poisson's ratio $(\mathrm{Pr})$ maps by randomly selecting two values within the reasonable range of 0.1 to 0.4 [36]. One of these values is allocated to the background, whereas the other is assigned to a thin-layer reservoir. Thin-layer reservoirs are selected due to their significance in representing areas where pores are saturated with fluids, making them crucial targets for subsurface exploration and reservoir detection. In the $\mathbb{E}^{\mathrm{FWI}}$ framework, the S-wave velocity $(\mathrm{V_S})$ maps are synthesized by multiplying the P-wave velocity $(\mathrm{V_P})$ maps with the respective Poisson's ratio $(\mathrm{Pr})$ maps, adhering to the following relationship:

$$
V_{S} = \sqrt{\frac{0.5 - P_{r}}{1 - P_{r}}} \cdot V_{P}. \tag{3}
$$

This approach ensures a wide range of velocity contrasts, resulting in diverse wavefield behaviors, thus expanding the scope of scenarios for machine learning tests in elastic FWI. The details of the elastic forward modeling are given in Section 2 of the supplementary materials.
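Equations (2) and (3) are inverses of each other, which is easy to check numerically. The sketch below mimics the described two-valued map construction (the layer indices, map values, and random seed are illustrative, not the actual generation code):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_vs(vp, pr):
    """Eq. (3): synthesize an S-wave velocity map from Vp and Poisson's ratio."""
    return np.sqrt((0.5 - pr) / (1.0 - pr)) * vp

# Two-valued Pr map in the stated 0.1-0.4 range: background + thin-layer reservoir
pr = np.full((70, 70), rng.uniform(0.1, 0.4))
pr[30:33, :] = rng.uniform(0.1, 0.4)   # hypothetical thin-layer reservoir position
vp = np.full((70, 70), 3000.0)         # a flat Vp map, for illustration only
vs = make_vs(vp, pr)

# Round trip: recovering Pr via Eq. (2) reproduces the input map
pr_back = (vp**2 - 2 * vs**2) / (2 * vp**2 - 2 * vs**2)
print(np.allclose(pr_back, pr))  # True
```

Since $P_r < 0.5$ on the sampled range, the square root in Eq. (3) is well defined and always below one, so $V_S < V_P$ everywhere, as physically required.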
+ +Table 2: Physical meaning of ${\mathbb{E}}^{\mathbf{{FWI}}}$ dataset + +
| Dataset | Grid Spacing | Velocity Map Spatial Size | Source Spacing | Source Line Length | Receiver Line Spacing | Receiver Line Length | Time Spacing | Recorded Time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EVel, EFault Family | 5 m | 0.35 × 0.35 km² | 87.5 m | 0.35 km | 5 m | 0.35 km | 0.001 s | 1 s |
# 4 $\mathbb{E}^{\mathrm{FWI}}$ Benchmarks

# 4.1 Deep Learning Methods for Elastic FWI

Our benchmark presents inversion results by three deep learning-based approaches, namely ElasticNet, ElasticGAN, and ElasticTransformer. These methods are derived from pre-existing networks, namely InversionNet [31], VelocityGAN [32], and SimFWI [33], with modifications tailored to address the challenges posed by elastic FWI. We provide a summary of each method separately as follows:

**ElasticNet** is extended from the vanilla InversionNet [31] to the elastic setting with two pairs of input and output. It is a fully-convolutional neural network taking seismic data $u_{x}$ and $u_{z}$ as the input of two encoders to learn the latent embeddings independently. The mutual representations of the two inputs are concatenated and then forwarded to two independent decoders to obtain the estimated velocity maps $\mathrm{V_P}$ and $\mathrm{V_S}$ as output.

**ElasticGAN** follows the design of VelocityGAN [32] but substitutes the original generator with an encoder-decoder network such as ElasticNet. The estimated velocity maps $\mathrm{V_P}$ and $\mathrm{V_S}$ produced by the generator are fed to two independent discriminators to distinguish real from fake predictions. A CNN architecture is employed for both discriminators.

**ElasticTransformer** follows a similar seismic-encoder and velocity-decoder architecture design as the SimFWI described in [33]. It consists of two two-layer transformer encoders that take $u_{x}$ and $u_{z}$ as inputs and two two-layer transformer decoders that output $V_{P}$ and $V_{S}$ separately. The two latent embeddings of $u_{x}$ and $u_{z}$ are concatenated and passed through two Maxout converters, and the transformed embeddings are then fed into the decoders. Unlike the linear upsampler utilized at the end of the velocity decoder in [33], we stack upsampling and convolution blocks to construct the upsampler.
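At a purely shape level, the dual-encoder/dual-decoder data flow described for ElasticNet can be sketched as follows; the random linear projections are stand-ins for the actual convolutional encoders and decoders, and the embedding width of 16 is our arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(u, w):
    # stand-in for a convolutional encoder: flatten the cube, project to an embedding
    return u.reshape(-1) @ w

# two seismic components with the per-sample shape from Table 1
ux = rng.standard_normal((5, 1000, 1, 70)).astype(np.float32)
uz = rng.standard_normal((5, 1000, 1, 70)).astype(np.float32)

d = ux.size  # 350000 input features per component
w_x = (rng.standard_normal((d, 16)) * 0.01).astype(np.float32)
w_z = (rng.standard_normal((d, 16)) * 0.01).astype(np.float32)

# two independent encoders -> concatenated "mutual representation"
z = np.concatenate([encoder(ux, w_x), encoder(uz, w_z)])  # shape (32,)

# two independent decoders map the shared embedding to Vp and Vs maps (70 x 1 x 70)
w_p = (rng.standard_normal((32, 70 * 70)) * 0.01).astype(np.float32)
w_s = (rng.standard_normal((32, 70 * 70)) * 0.01).astype(np.float32)
vp_hat = (z @ w_p).reshape(70, 1, 70)
vs_hat = (z @ w_s).reshape(70, 1, 70)
print(z.shape, vp_hat.shape, vs_hat.shape)
```

The point of the sketch is the topology: both output maps are decoded from the same concatenated embedding, which is how the architecture couples the $V_P$ and $V_S$ predictions.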
# 4.2 Inversion Benchmarks

![](images/c694a692869e66ec8ddd28d55b0c81da826a21b58df532a6205b0c6fef524b7b.jpg)
Figure 4: Examples of both successful and inadequate predictions in $\mathbb{E}^{FWI}$ benchmarks performed by ElasticNet.

The experiments were conducted using Nvidia Tesla V100 GPUs, and the training parameters were kept consistent across all datasets. The training is conducted with the $\ell_1$-norm and $\ell_2$-norm loss functions separately. In our study, we assess not only the accuracy of the predicted velocities $V_P$ and $V_S$ but also the degree of decoupling between them by evaluating the accuracy of the predicted Poisson's ratio $(Pr)$ . To quantify the performance of our predictions, we utilize three evaluation metrics: mean absolute error (MAE), root mean square error (RMSE), and structural similarity index (SSIM). These metrics provide a comprehensive assessment of the quality of our predictions and their similarity to the ground truth values. The performance of ElasticNet on various datasets is presented in Table 3, while Table 4 provides the estimated training time per epoch for each method on the $\mathbb{E}^{FWI}$ datasets. In Figure 4, examples of inverted velocity maps obtained using ElasticNet are presented alongside the corresponding ground truth velocity maps. These visual representations highlight instances where the inversion process successfully predicts accurate velocities, as well as instances where further improvement is required. The benchmarks with ElasticGAN and ElasticTransformer are given in Section 6 of the supplementary materials.

The performance of all three models declines as the complexity of the dataset increases. Notably, version B of each dataset consistently exhibits lower performance than version A.
The network provides direct predictions for $\mathrm{V_P}$ and $\mathrm{V_S}$ , whereas $\mathrm{Pr}$ is obtained indirectly through calculations based on $\mathrm{V_P}$ and $\mathrm{V_S}$ . As a result, $\mathrm{Pr}$ consistently exhibits lower SSIM compared to $\mathrm{V_P}$ and $\mathrm{V_S}$ . However, it should be noted that $\mathrm{Pr}$ represents a sparser map compared to $\mathrm{V_P}$ and $\mathrm{V_S}$ , leading to lower MAE and RMSE values for $\mathrm{Pr}$ compared to $\mathrm{V_P}$ and $\mathrm{V_S}$ . + +Table 3: Quantitative results of ElasticNet on $\mathbb{E}^{\mathrm{FWI}}$ datasets. + +
| Dataset | Loss | $V_P$ MAE↓ | $V_P$ RMSE↓ | $V_P$ SSIM↑ | $V_S$ MAE↓ | $V_S$ RMSE↓ | $V_S$ SSIM↑ | Pr MAE↓ | Pr RMSE↓ | Pr SSIM↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EFVA | $\ell_1$ | 0.0308 | 0.0559 | 0.9615 | 0.0259 | 0.0500 | 0.9596 | 0.0329 | 0.0664 | 0.8455 |
| EFVA | $\ell_2$ | 0.0235 | 0.0455 | 0.9702 | 0.0196 | 0.0385 | 0.9683 | 0.0307 | 0.0583 | 0.8644 |
| EFVB | $\ell_1$ | 0.0668 | 0.1468 | 0.8891 | 0.0483 | 0.1053 | 0.8951 | 0.0542 | 0.1057 | 0.7138 |
| EFVB | $\ell_2$ | 0.1016 | 0.1901 | 0.8354 | 0.0691 | 0.1322 | 0.8599 | 0.0756 | 0.1302 | 0.6227 |
| ECVA | $\ell_1$ | 0.0745 | 0.1345 | 0.8055 | 0.0600 | 0.1080 | 0.8051 | 0.0574 | 0.1156 | 0.5766 |
| ECVA | $\ell_2$ | 0.0745 | 0.1343 | 0.8033 | 0.0616 | 0.1087 | 0.8020 | 0.0604 | 0.1131 | 0.6131 |
| ECVB | $\ell_1$ | 0.1722 | 0.2982 | 0.6529 | 0.1258 | 0.2165 | 0.6827 | 0.0915 | 0.1580 | 0.4612 |
| ECVB | $\ell_2$ | 0.1682 | 0.3048 | 0.6566 | 0.1234 | 0.2220 | 0.6875 | 0.0956 | 0.1660 | 0.4337 |
| EFFA | $\ell_1$ | 0.0543 | 0.1026 | 0.9042 | 0.0647 | 0.1349 | 0.8225 | 0.0710 | 0.1501 | 0.6447 |
| EFFA | $\ell_2$ | 0.0937 | 0.1537 | 0.8607 | 0.0769 | 0.1309 | 0.8305 | 0.0830 | 0.1369 | 0.6251 |
| EFFB | $\ell_1$ | 0.1198 | 0.1859 | 0.7014 | 0.0947 | 0.1462 | 0.7346 | 0.0802 | 0.1312 | 0.4902 |
| EFFB | $\ell_2$ | 0.1084 | 0.1704 | 0.7131 | 0.0811 | 0.1290 | 0.7523 | 0.0719 | 0.1225 | 0.5270 |
| ECFA | $\ell_1$ | 0.0551 | 0.1128 | 0.8814 | 0.0518 | 0.1042 | 0.8445 | 0.0528 | 0.1150 | 0.6562 |
| ECFA | $\ell_2$ | 0.0972 | 0.1636 | 0.8223 | 0.0886 | 0.1390 | 0.7891 | 0.0927 | 0.1443 | 0.5536 |
| ECFB | $\ell_1$ | 0.1535 | 0.2307 | 0.5981 | 0.1123 | 0.1698 | 0.6408 | 0.1012 | 0.1602 | 0.3576 |
| ECFB | $\ell_2$ | 0.1562 | 0.2305 | 0.6160 | 0.1138 | 0.1697 | 0.6608 | 0.0854 | 0.1393 | 0.4490 |
# 5 Ablation Study

# 5.1 Independent vs. Joint Inversion: Impact on $\mathrm{Pr}$ Maps

The first experiment examined the impact of separate versus joint inversion of $\mathrm{V_P}$ and $\mathrm{V_S}$ on the accuracy of predicted Poisson's ratio (Pr) maps. This process involved individually training two InversionNets on the $\mathbb{E}^{\mathbf{FWI}}$ dataset to predict $\mathrm{V_P}$ and $\mathrm{V_S}$ maps, which were then used to calculate the Pr maps. The results revealed a substantial deterioration in map quality, with the independent inversion maps exhibiting significantly higher MAE and RMSE, and lower SSIM values, as outlined in Table 5, compared to those reconstructed from joint inversion, shown in Table 3, especially for the complex B datasets, such as "ECFB". These findings reinforce the significance of considering the $\mathrm{V_P}-\mathrm{V_S}$ relationship and P-S wave coupling, with the single-parameter inversion approach being deemed unviable. Detailed information on this experiment can be found in Section 6 of the supplementary materials.

Table 4: Training time per epoch for each benchmarking method on $\mathbb{E}^{\mathbf{FWI}}$ datasets. All the models are trained on a single GPU.
| Dataset family | ElasticNet | ElasticGAN | ElasticTransformer |
| --- | --- | --- | --- |
| EVel Family | 4m15s | 2m20s | 1m15s |
| EFault Family | 8m35s | 3m50s | 2m30s |
Table 5: Quantitative results of OPENFWI's InversionNet trained with $\mathbb{E}^{FWI}$ data. The network takes z-component data as input and predicts the $V_{P}$ and $V_{S}$ maps independently; the Pr maps are computed from these predictions. The performance serves as a baseline for comparison with ElasticNet.
| Dataset | Loss | VP MAE↓ | VP RMSE↓ | VP SSIM↑ | VS MAE↓ | VS RMSE↓ | VS SSIM↑ | Pr MAE↓ | Pr RMSE↓ | Pr SSIM↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EFVA | l1 | 0.0392 | 0.0712 | 0.9455 | 0.0239 | 0.0447 | 0.9590 | 0.0461 | 0.0885 | 0.8282 |
| EFVA | l2 | 0.0451 | 0.0745 | 0.9414 | 0.0251 | 0.0469 | 0.9585 | 0.0541 | 0.1039 | 0.8071 |
| EFVB | l1 | 0.1030 | 0.1986 | 0.8260 | 0.0643 | 0.1318 | 0.8615 | 0.1290 | 0.2773 | 0.6063 |
| EFVB | l2 | 0.0883 | 0.1832 | 0.8453 | 0.0620 | 0.1269 | 0.8665 | 0.0905 | 0.1980 | 0.6603 |
| ECVA | l1 | 0.1016 | 0.1699 | 0.7636 | 0.0736 | 0.1245 | 0.7837 | 0.1050 | 0.2141 | 0.5607 |
| ECVA | l2 | 0.1052 | 0.1730 | 0.7460 | 0.0720 | 0.1236 | 0.7798 | 0.1194 | 0.2381 | 0.5454 |
| ECVB | l1 | 0.1854 | 0.3266 | 0.6270 | 0.1388 | 0.2405 | 0.6578 | 0.1523 | 0.3064 | 0.4695 |
| ECVB | l2 | 0.1820 | 0.3260 | 0.6323 | 0.1331 | 0.2344 | 0.6674 | 0.1553 | 0.3230 | 0.4648 |
| EFFA | l1 | 0.0818 | 0.1413 | 0.8625 | 0.0584 | 0.1034 | 0.8681 | 0.0784 | 0.1585 | 0.6937 |
| EFFA | l2 | 0.0788 | 0.1343 | 0.8930 | 0.0946 | 0.1525 | 0.7918 | 0.1351 | 0.2485 | 0.6174 |
| EFFB | l1 | 0.1323 | 0.2001 | 0.6790 | 0.0943 | 0.1453 | 0.7312 | 0.1180 | 0.2283 | 0.5280 |
| EFFB | l2 | 0.1301 | 0.1979 | 0.6808 | 0.0898 | 0.1382 | 0.7399 | 0.1124 | 0.2095 | 0.5494 |
| ECFA | l1 | 0.1012 | 0.1638 | 0.8624 | 0.0710 | 0.1182 | 0.8485 | 0.0778 | 0.1586 | 0.6857 |
| ECFA | l2 | 0.0962 | 0.1663 | 0.8443 | 0.0833 | 0.1394 | 0.8106 | 0.1032 | 0.1911 | 0.6393 |
| ECFB | l1 | 0.1702 | 0.2485 | 0.6020 | 0.1243 | 0.1807 | 0.6531 | 0.1317 | 0.2378 | 0.5091 |
| ECFB | l2 | 0.1745 | 0.2563 | 0.5849 | 0.1219 | 0.1775 | 0.6588 | 0.1379 | 0.2529 | 0.4841 |
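The Pr maps evaluated in the tables above are derived from the predicted $\mathrm{V_P}$ and $\mathrm{V_S}$ maps. As a minimal sketch (the function name and the 70×70 map size are illustrative assumptions, not the paper's exact pipeline), the standard isotropic-elasticity Poisson's-ratio relation can be applied elementwise:

```python
import numpy as np

def poisson_ratio(vp: np.ndarray, vs: np.ndarray) -> np.ndarray:
    """Poisson's ratio from P- and S-wave velocity maps.

    Standard isotropic relation: Pr = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2)).
    """
    vp2, vs2 = vp ** 2, vs ** 2
    return (vp2 - 2.0 * vs2) / (2.0 * (vp2 - vs2))

# Illustrative constant maps with Vp/Vs = 2, which gives Pr = 1/3.
vp = np.full((70, 70), 3000.0)  # m/s
vs = np.full((70, 70), 1500.0)  # m/s
pr = poisson_ratio(vp, vs)
```

Because Pr depends nonlinearly on both velocities, errors in either predicted map compound in the Pr map, which is consistent with the lower Pr SSIM values reported above.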
# 5.2 Investigating P- and S-wave Coupling via Machine Learning

The second experiment focused on the interaction between P- and S-waves in the context of seismic data inversion. Two InversionNets were trained, one predicting P-wave velocity $(\mathrm{V_P})$ and the other S-wave velocity $(\mathrm{V_S})$, while the structural characteristics of the disregarded wave were adjusted. This experiment, conducted with OPENFWI's InversionNet using data from $\mathbb{E}^{\mathbf{FWI}}$, revealed that even a minor change in the disregarded wave-velocity structure significantly degraded the network's performance, as evidenced in Table 6. This outcome was most apparent on the more complex datasets, such as the "ECFB" test set, where changes in structure led to a substantial increase in MAE and RMSE, along with a decrease in SSIM. For a more detailed analysis, refer to the supplementary materials.

# 6 Discussion

# 6.1 Future Challenge

Decouple P- and S-waves: The interaction between P- and S-waves during seismic wave propagation poses a significant challenge when attempting to simultaneously determine P and S velocities. The networks described in this paper exhibit limited success in separating P- and S-waves within the seismic data. Consequently, we anticipate the development of robust methodologies that can precisely estimate both P and S velocities while effectively mitigating the interdependence between these wave components.

Generalization of data-driven methods: The elastic approximation provides a more accurate representation of field data than the acoustic one. As a result, we expect neural networks trained on the $\mathbb{E}^{\mathrm{FWI}}$ dataset to be more resilient when handling real-world field data. However, it should be noted that additional physical phenomena, such as anisotropy and viscosity, are not accounted for in the $\mathbb{E}^{\mathrm{FWI}}$ dataset.
The question of how to incorporate these phenomena into the analysis of field data remains an open and unanswered challenge.

Forward modeling: The computational expense of elastic forward modeling surpasses that of the acoustic case due to various factors, including increased memory requirements and the smaller grid sizes needed to counteract dispersion phenomena, among others. A detailed comparison highlighting these aspects can be found in the last section of the supplementary materials. Despite the possibility of bypassing extensive forward modeling by providing the $\mathbb{E}^{\mathrm{FWI}}$ dataset, there remains a need to explore efficient forward modeling algorithms to accommodate the growing volume of data in the field.

Table 6: Quantitative results of InversionNet trained with $\mathbb{E}^{\mathrm{FWI}}$ data. Performance is compared between testing on datasets with the same and with a different disregarded-velocity structure: the $\mathrm{V_P}$ columns report $\mathrm{V_P}$ inversion under a different $\mathrm{V_S}$ structure, and the $\mathrm{V_S}$ columns report $\mathrm{V_S}$ inversion under a different $\mathrm{V_P}$ structure.

| Dataset | Loss | VP MAE↓ | VP RMSE↓ | VP SSIM↑ | VS MAE↓ | VS RMSE↓ | VS SSIM↑ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| EFVA | l1 | 0.1162 | 0.1919 | 0.8974 | 0.2040 | 0.2716 | 0.7977 |
| EFVA | l2 | 0.1245 | 0.2101 | 0.8849 | 0.2189 | 0.2928 | 0.7793 |
| EFVB | l1 | 0.2098 | 0.3527 | 0.7086 | 0.2479 | 0.3285 | 0.7182 |
| EFVB | l2 | 0.1940 | 0.3265 | 0.7279 | 0.2540 | 0.3382 | 0.7022 |
| ECVA | l1 | 0.1624 | 0.2590 | 0.7233 | 0.2202 | 0.2924 | 0.6794 |
| ECVA | l2 | 0.1678 | 0.2647 | 0.7043 | 0.2249 | 0.2987 | 0.6824 |
| ECVB | l1 | 0.3014 | 0.4828 | 0.5206 | 0.3136 | 0.4246 | 0.5398 |
| ECVB | l2 | 0.3067 | 0.4900 | 0.5181 | 0.3207 | 0.4349 | 0.5341 |
| EFFA | l1 | 0.1741 | 0.2732 | 0.7464 | 0.2200 | 0.2865 | 0.7362 |
| EFFA | l2 | 0.1303 | 0.2052 | 0.8490 | 0.2228 | 0.2913 | 0.6952 |
| EFFB | l1 | 0.1575 | 0.2291 | 0.6501 | 0.2350 | 0.3030 | 0.6480 |
| EFFB | l2 | 0.1627 | 0.2336 | 0.6492 | 0.2319 | 0.2982 | 0.6517 |
| ECFA | l1 | 0.1404 | 0.2244 | 0.7951 | 0.2327 | 0.3059 | 0.7360 |
| ECFA | l2 | 0.1319 | 0.2134 | 0.8159 | 0.2399 | 0.3194 | 0.7139 |
| ECFB | l1 | 0.1937 | 0.2798 | 0.5747 | 0.2465 | 0.3215 | 0.5785 |
| ECFB | l2 | 0.1928 | 0.2788 | 0.5613 | 0.2445 | 0.3183 | 0.5915 |

# 6.2 Broader Impact

Multiparameter inversion: Multiparameter inversion techniques have found wide-ranging applications across diverse scientific and engineering domains, including but not limited to geophysics, medical imaging, and material science. The introduction of $\mathbb{E}^{\mathrm{FWI}}$ serves as a catalyst for further investigation and the pursuit of innovative methodologies in these fields. By addressing the inherent limitations and complexities associated with multiparameter inversion, this advancement encourages ongoing research and the exploration of novel solutions.

Carbon-zero emission: The attainment of carbon-zero emissions holds paramount significance in addressing climate change, safeguarding human well-being, and fostering sustainable development. While researchers continue to explore effective strategies towards achieving this goal, elastic FWI emerges as a promising approach that can contribute significantly. In particular, elastic FWI plays a crucial role in assessing and developing geothermal energy resources, as well as in facilitating carbon capture and storage projects, among other applications. The introduction of $\mathbb{E}^{\mathrm{FWI}}$ as a fundamental dataset for elastic FWI is expected to stimulate further research and innovation in this direction, thereby enhancing our understanding and capabilities in the pursuit of carbon-zero emissions.

New energy exploration: Elastic FWI can be utilized to evaluate the geological viability of potential sites for hydrogen storage, including underground formations or depleted oil and gas reservoirs.
The suitability, capacity, and feasibility of a storage site heavily rely on the effectiveness of geophysical survey and characterization approaches. The availability of the $\mathbb{E}^{\mathrm{FWI}}$ dataset holds great potential to improve the accuracy of subsurface reservoir characterization and therefore to better identify hydrogen storage locations.

Potential social impacts: Our research, while advancing inverse problems in natural science, may inadvertently support increased fossil fuel consumption if applied to optimize oil and gas drilling, raising environmental concerns. However, the same techniques can equally bolster positive initiatives, such as geothermal exploration and hydrogen storage, as highlighted in Section 6.2. The dual potential of our methodologies underscores the importance of their judicious application, aligning with sustainable development goals.

# 7 Conclusion

This paper presents $\mathbb{E}^{\mathrm{FWI}}$ , an open-source elastic FWI dataset. $\mathbb{E}^{\mathrm{FWI}}$ comprises eight datasets and includes benchmarks for three deep learning methods. The datasets released with $\mathbb{E}^{\mathrm{FWI}}$ provide diverse P-wave and S-wave velocities, specifically addressing the coupling problem encountered in multiparameter inversion. The initial benchmarks demonstrate promising results on certain datasets, while others may require further investigation. Additionally, coupling tests are conducted to provide insights into network design for multiparameter inversion problems. Furthermore, this paper discusses the future challenges that can be explored using these datasets and outlines the envisioned future advancements as $\mathbb{E}^{\mathrm{FWI}}$ continues to evolve.

# Acknowledgement

This work was funded by the Los Alamos National Laboratory (LANL) - Technology Evaluation and Demonstration (TED) program and by the U.S.
Department of Energy (DOE) Office of Fossil Energy's Carbon Storage Research Program via the Science-Informed Machine Learning to Accelerate Real-Time Decision Making for Carbon Storage (SMART-CS) Initiative. + +# References + +[1] J. Virieux and S. Operto. An overview of full-waveform inversion in exploration geophysics. Geophysics, 74(6):WCC1-WCC26, 2009. +[2] Jean Virieux, Amir Asnaashari, Romain Brossier, Ludovic Métivier, Alessandra Ribodetti, and Wei Zhou. An introduction to full waveform inversion. In Encyclopedia of exploration geophysics, pages R1-1. Society of Exploration Geophysicists, 2017. +[3] Andreas Fichtner and Jeannot Trampert. Resolution analysis in full waveform inversion. Geophysical Journal International, 187(3):1604-1624, 2011. +[4] Andreas Fichtner, Jeannot Trampert, Paul Cupillard, Erdinc Saygin, Tuncay Taymaz, Yann Capdeville, and Antonio Villasenor. Multiscale full waveform inversion. Geophysical Journal International, 194(1):534-556, 2013. +[5] Denes Vigh, Jerry Kapoor, and Hongyan Li. Full-waveform inversion application in different geological settings. In 2011 SEG Annual Meeting. OnePetro, 2011. +[6] R Gerhard Pratt. Seismic waveform inversion in the frequency domain, part 1: Theory and verification in a physical scale model. Geophysics, 64(3):888-901, 1999. +[7] Christophe Barnes and Marwan Charara. The domain of applicability of acoustic full-waveform inversion for marine seismic data. Geophysics, 74(6):WCC91-WCC103, 2009. +[8] James WD Hobro, Chris H Chapman, and Johan OA Robertsson. A method for correcting acoustic finite-difference amplitudes for elastic effects. Geophysics, 79(4):T243-T255, 2014. +[9] Espen Birger Raknes and Børge Arntsen. Challenges and solutions for performing 3d time-domain elastic full-waveform inversion. The leading edge, 36(1):88-93, 2017. +[10] Óscar Calderón Agudo, Nuno Vieira da Silva, Michael Warner, and Joanna Morgan. Acoustic full-waveform inversion in an elastic world. 
Geophysics, 83(3):R257-R271, 2018.
[11] Lei Fu, Bowen Guo, and Gerard T Schuster. Multiscale phase inversion of seismic data. Geophysics, 83(2):R159-R171, 2018.
[12] Jinwei Fang, Hui Zhou, Qingchen Zhang, Hanming Chen, Pengyuan Sun, Jianlei Zhang, and Liang Zhang. The effects of elastic data on acoustic and elastic full waveform inversion. Journal of Applied Geophysics, 172:103876, 2020.
[13] Shihang Feng, Lei Fu, Zongcai Feng, and Gerard T Schuster. Multiscale phase inversion for vertical transverse isotropic media. Geophysical Prospecting, 69(8-9):1634-1649, 2021.
[14] Timothy J Sears, SC Singh, and PJ Barton. Elastic full waveform inversion of multi-component OBC seismic data. Geophysical Prospecting, 56(6):843-862, 2008.
[15] Xiaoqiang Liu, Zhanqing Qu, Tiankui Guo, Qizhong Tian, Wei Lv, Zhishuang Xie, and Chunbo Chu. An innovative technology of directional propagation of hydraulic fracture guided by radial holes in fossil hydrogen energy development. International Journal of Hydrogen Energy, 44(11):5286-5302, 2019.
[16] Jonathan M Lees and Huatao Wu. Poisson's ratio and porosity at Coso geothermal area, California. Journal of Volcanology and Geothermal Research, 95(1-4):157-173, 2000.
[17] Runhai Feng, Niels Balling, and Dario Grana. Lithofacies classification of a geothermal reservoir in Denmark and its facies-dependent porosity estimation from seismic inversion. Geothermics, 87:101854, 2020.
[18] Hui Li, Jing Lin, Baohai Wu, Jinghuai Gao, and Naihao Liu. Elastic properties estimation from prestack seismic data using GGCNNs and application on tight sandstone reservoir characterization. IEEE Transactions on Geoscience and Remote Sensing, 60:1-21, 2021.
[19] Eugenia Rojas, Thomas L. Davis, Michael Batzle, Manika Prasad, and Reinaldo J. Michelena. $\mathrm{V_p - V_s}$ ratio sensitivity to pressure, fluid, and lithology changes in tight gas sandstones. In SEG Technical Program Expanded Abstracts 2005, pages 1401-1404, 2005.
[20] Pinbo Ding, Ding Wang, Guidong Di, and Xiangyang Li. Investigation of the effects of fracture orientation and saturation on the Vp/Vs ratio and their implications. Rock Mechanics and Rock Engineering, 52:3293-3304, 2019.
[21] J. Sun, X. Wei, and X. Chen. Fluid identification in tight sandstone reservoirs based on a new rock physics model. Journal of Geophysics and Engineering, 13(4):526-535, 2016.
[22] WA Mulder and R-E Plessix. Exploring some issues in acoustic full waveform inversion. Geophysical Prospecting, 56(6):827-841, 2008.
[23] Stéphane Operto, Yaser Gholami, Vincent Prieux, Alessandra Ribodetti, R Brossier, L Metivier, and Jean Virieux. A guided tour of multiparameter full-waveform inversion with multicomponent data: From theory to practice. The Leading Edge, 32(9):1040-1054, 2013.
[24] Zhen-dong Zhang and Tariq Alkhalifah. High-resolution reservoir characterization using deep learning-aided elastic full-waveform inversion: The North Sea field data example. Geophysics, 85(4):WA137-WA146, 2020.
[25] Arnab Dhara and Mrinal Sen. Elastic-AdjointNet: A physics-guided deep autoencoder to overcome crosstalk effects in multiparameter full-waveform inversion. In SEG/AAPG International Meeting for Applied Geoscience & Energy. OnePetro, 2022.
[26] Yulang Wu, George A McMechan, and Yanfei Wang. CNN-based gradient-free multiparameter reflection full-waveform inversion. In First International Meeting for Applied Geoscience & Energy, pages 1369-1373. Society of Exploration Geophysicists, 2021.
[27] Linan Xu, Zhaoqi Gao, Sichao Hu, Jinghuai Gao, and Zongben Xu. Simultaneous inversion for reflectivity and Q using nonstationary seismic data with deep-learning-based decoupling. IEEE Transactions on Geoscience and Remote Sensing, 60:1-15, 2022.
[28] Tianze Zhang, Kristopher A Innanen, Jian Sun, and Daniel O Trad. Numerical analysis of a deep learning formulation of multi-parameter elastic full waveform inversion.
In SEG International Exposition and Annual Meeting. OnePetro, 2020.
[29] Jiashun Yao, Michael Warner, and Yanghua Wang. Regularization of anisotropic full-waveform inversion with multiple parameters by adversarial neural networks. Geophysics, 88(1):R95-R103, 2023.
[30] Chengyuan Deng, Shihang Feng, Hanchen Wang, Xitong Zhang, Peng Jin, Yinan Feng, Qili Zeng, Yinpeng Chen, and Youzuo Lin. OpenFWI: Large-scale multi-structural benchmark datasets for full waveform inversion. Advances in Neural Information Processing Systems, 35:6007-6020, 2022.
[31] Yue Wu and Youzuo Lin. InversionNet: An efficient and accurate data-driven full waveform inversion. IEEE Transactions on Computational Imaging, 6:419-433, 2019.
[32] Zhongping Zhang and Youzuo Lin. Data-driven seismic waveform inversion: A study on the robustness and generalization. IEEE Transactions on Geoscience and Remote Sensing, 58(10):6900-6913, 2020.
[33] Yinan Feng, Yinpeng Chen, Peng Jin, Shihang Feng, Zicheng Liu, and Youzuo Lin. Simplifying full waveform inversion via domain-independent self-supervised learning. arXiv preprint arXiv:2305.13314, 2023.
[34] Alan R Levander. Fourth-order finite-difference P-SV seismograms. Geophysics, 53(11):1425-1436, 1988.
[35] Nikolas I Christensen. Poisson's ratio and crustal seismology. Journal of Geophysical Research: Solid Earth, 101(B2):3139-3156, 1996.
[36] H Gercek. Poisson's ratio values for rocks. International Journal of Rock Mechanics and Mining Sciences, 44(1):1-13, 2007.
\ No newline at end of file diff --git a/mathcalm4aunifiedxaibenchmarkforfaithfulnessevaluationoffeatureattributionmethodsacrossmetricsmodalitiesandmodels/full.md b/mathcalm4aunifiedxaibenchmarkforfaithfulnessevaluationoffeatureattributionmethodsacrossmetricsmodalitiesandmodels/full.md new file mode 100644 index
0000000000000000000000000000000000000000..bbc8366b6b339f6bcce1848d2e48dd3729f03c62 --- /dev/null +++ b/mathcalm4aunifiedxaibenchmarkforfaithfulnessevaluationoffeatureattributionmethodsacrossmetricsmodalitiesandmodels/full.md @@ -0,0 +1,324 @@ +# $\mathcal{M}^4$ : A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities and Models + +Xuhong Li + +Baidu Inc. + +lixuhong@baidu.com + +Mengnan Du + +New Jersey Institute of Technology + +mengnan.du@njit.edu + +Jiamin Chen + +Baidu Inc. + +chenjiamin01@baidu.com + +Yekun Chai + +Baidu Inc. + +chaiyekun@baidu.com + +Himabindu Lakkaraju + +Harvard University + +hlakkaraju@seas.harvard.edu + +Haoyi Xiong* + +Baidu Inc. + +xionghaoyi@baidu.com + +# Abstract + +While Explainable Artificial Intelligence (XAI) techniques have been widely studied to explain predictions made by deep neural networks, the way to evaluate the faithfulness of explanation results remains challenging, due to the heterogeneity of explanations for various models and the lack of ground-truth explanations. This paper introduces an XAI benchmark named $\mathcal{M}^4$ , which allows evaluating various input feature attribution methods using the same set of faithfulness metrics across multiple data modalities (images and texts) and network structures (ResNets, MobileNets, Transformers), together with a taxonomy for these metrics. We first categorize commonly used XAI evaluation metrics into three groups based on the ground truth they require. We then implement classic and state-of-the-art feature attribution methods using InterpretDL and conduct extensive experiments to provide holistic evaluations as benchmark baselines, from which several interesting observations are made for designing attribution algorithms.
The implementation of state-of-the-art explanation methods and evaluation metrics of $\mathcal{M}^4$ is publicly available at https://github.com/PaddlePaddle/InterpretDL. + +# 1 Introduction + +Although deep neural networks (DNNs) have achieved state-of-the-art performance on numerous AI tasks, they are often regarded as black boxes due to their lack of transparency. This opacity hinders the adoption of deep models in high-stake applications that require explainability, such as healthcare, criminal justice, and law. Explainable AI (XAI) aims to address this limitation by developing techniques to provide explanations for predictions made by DNN models [15, 17]. In recent years, researchers have proposed various XAI algorithms to enable deeper understanding of DNNs [40, 46, 48, 6, 7, 9]. Among these techniques, post-hoc feature attribution is one of the most widely used paradigms, which can provide insight into the behaviors of trained DNNs. + +![](images/c31736ea98727119dccc5b25ccb1d8579925c2689167bb2655b652e01b3da90c.jpg) +Figure 1: The benchmark pipeline $\mathcal{M}^4$ , which supports evaluations on two modalities with more than ten deep models to holistically validate the faithfulness of existing feature attribution methods. + +Although these feature attribution methods can be helpful in understanding deep models, their faithfulness (i.e., how well explanations match model reasoning) is not always guaranteed. Unfaithful explanations fail to provide a complete or accurate description of the algorithm that the model implements, and thus might yield futile or deceptive insights [41]. To address this, researchers have proposed methods to measure the faithfulness of explanations and filter out unfaithful algorithms. For example, Adebayo et al. [2] have proposed randomization tests on the model's parameters, on which explanation methods should depend.
Such tests can easily filter out unfaithful explanation methods that rarely vary even when randomizing the model parameters. Furthermore, beyond simply binarizing the faithfulness of the explanation methods, recent work has built XAI benchmarks with evaluation metrics and datasets [14, 39, 3] to quantitatively compare the faithfulness among explanation methods. These evaluations are emerging as a guiding principle for selecting the most effective and appropriate explanation methods to elucidate models in specific tasks. + +However, there are still several challenges towards this direction of research for faithfulness evaluations and benchmarks of XAI algorithms. First, assessing the faithfulness of feature attributions is difficult due to the lack of ground truths (of explanations). Although some attainable metrics have been proposed to evaluate feature attributions [42, 53], existing benchmarks rarely share or standardize these metrics. For instance, perturbation-based metrics are commonly used but differ in their perturbation methods, scales, and granularity across benchmarks. Alternative metrics also quantify faithfulness in different ways, but few studies have consolidated, classified, or analyzed the relationships between these metrics. Second, most benchmarks focus on a single data type and model, limiting their applicability and ability to validate new methods. Feature attribution techniques designed for specific models may not be applicable to others. For example, gradient-based explanations can be noisy for Vision Transformers [4]. To robustly evaluate explanation methods, XAI benchmarks should encompass assessments across multiple models and modalities. + +To address these challenges, we propose developing benchmarks that cover various types of data and models, employ a taxonomy of standardized metrics, and analyze the relationships between different metrics. 
First, we categorize the commonly used faithfulness metrics into three groups, according to the status of the ground-truth explanations. Then, based on these metrics, we provide a unified XAI benchmark named $\mathcal{M}^4$ , which supports the evaluations of two modalities (images and texts) with more than ten deep models to holistically validate the faithfulness of existing feature attribution methods (Figure 1). Through the proposed benchmark $\mathcal{M}^4$ , we conducted comprehensive experiments and also obtained several interesting observations on the feature attribution methods that motivate future work in designing XAI methods and other applications related to XAI. + +The contributions of this work are threefold: + +- To the best of our knowledge, our work provides the first taxonomy of faithfulness evaluation metrics. We categorize the commonly used metrics into three sets that require no ground-truth of explanations, generate pseudo ground truth, and design synthetic ground truth, respectively. +- The proposed $\mathcal{M}^4$ benchmark allows the evaluation of diverse feature attribution methods with the same metrics across multiple data modalities (images and texts) and various architectures. We take advantage of the modular-designed implementations of XAI methods in InterpretDL [29] to build the $\mathcal{M}^4$ benchmark, making the evaluations easily extensible. +- We conduct extensive experiments to provide holistic comparisons with off-the-shelf baselines, yielding valuable observations that can inform future designs and applications of XAI methods. + +# 2 Benchmark $\mathcal{M}^4$ + +In this section, we introduce $\mathcal{M}^4$ , a unified benchmark for evaluating feature attribution methods. + +# 2.1 Tasks, Datasets and Models + +We consider two classic tasks: image classification from the computer vision domain and sentiment analysis from the NLP domain. These tasks have been used as testbeds for most explanation methods. 
To maximize the reuse of publicly available resources, we utilize two commonly used datasets for this benchmark, namely ImageNet [12] and MovieReview [55]. In fact, we do not need to train models on ImageNet because there are numerous pre-trained models publicly available. Furthermore, to enhance computational efficiency, we use a subset of 5,000 images from the ImageNet validation set, with 5 images per class for class balance. One training phase for images is required to quantify the Synthetic-based score (introduced in Section 2.3) because models need to be trained on a new synthetic dataset. For this training scenario, we take 10 random images per class from the ImageNet training set and randomly add synthetic patches to train the model. We then conduct the evaluations on the same 5,000 images (also with random synthetic patches) as the previous evaluations. For MovieReview, we fine-tune the pretrained language models on its training set and conduct the faithfulness evaluation on its validation set. Therefore, both the training and validation sets of MovieReview are required. + +The reasons for choosing the ImageNet and MovieReview datasets, as well as the related tasks also include the availability of semantic segmentation labels [19, 31] and language reasoning labels [14] from public resources. Although these labels are not used in $\mathcal{M}^4$ as our purpose is for faithfulness evaluations, these labels can be used directly to measure human-labeled interpretability which is defined as the alignment between model explanations with human understanding. + +Recent benchmarks rarely consider the choices of network structures when evaluating the faithfulness of explanation methods. 
In contrast, our proposed $\mathcal{M}^4$ considers a wide range of models to holistically evaluate faithfulness: VGG [45], three ResNets [21], Mobilenet-V3 [22], three ViT versions (small, base and large) [16] and MAE-ViT-base [20] for image classification, and two BERTs (base and large) [13], DistilBERT [43], ERNIE-2.0-base [47] and RoBERTa [34] for sentiment analysis. + +# 2.2 Feature Attribution Methods + +Our proposed benchmark $\mathcal{M}^4$ considers classic feature attribution methods that have been used in previous benchmarks, including model-agnostic explanations (LIME [40]), gradient-based (Integrated Gradient (IG) [48], SmoothGrad (SG) [46]), and model-specific (GradCAM [44]). In addition, our proposed benchmark $\mathcal{M}^4$ also evaluates the state-of-the-art ones especially for Transformer structures, e.g., Generic Attribution (GA) [6], Bidirectional Explanations (BT) [9], etc., to comprehensively evaluate the feature attributions, as well as to revisit and confirm the progress of explanation methods. + +Feature attribution methods are explainers that assign importance scores to input features of a machine learning model, elucidating the reasoning behind a model's specific prediction. Such explanations are readily comprehensible for human understanding. We note that there are more advanced forms of explanation, including prototype exemplars [8, 18, 37], concept vectors in the activation space [24, 54], and proxy models to simulate the rational process of deep models [27, 56], etc. Although these methods produce unique and probably more profound explanations, they may be either incomprehensible or laboriously evaluated. As an initial step toward establishing a benchmark for evaluating XAI methods, concentrating on feature attribution methods would be an achievable endeavor, albeit one that still presents its own set of challenges. 
# 2.3 Metrics and Taxonomy

Due to the lack of explanation ground truths, there are no natural metrics to quantify the faithfulness of the explanations produced by feature attribution methods. Recent studies have proposed various metrics to evaluate explanation methods [42, 28, 53]. We review commonly used metrics and categorize them into three types, based on whether they require explanation ground truths and how the ground truths are generated. Specifically, the three types of metrics are as follows.

No Ground Truth. Perturbation-based metrics offer a feasible approach that circumvents the need for ground-truth explanations. The underlying premise is that perturbing important input features will noticeably degrade the predicted probability of a model, while perturbing irrelevant features will have minimal impact on the probability mass. Several widely utilized metrics are built upon this idea. For example, Samek et al. [42] proposed ordered perturbation, which gradually perturbs the input features following the order of values in the explanation results, and then calculates the area under the perturbation curve. The MoRF (most relevant first) metric uses the descending order, the LeRF (least relevant first) metric uses the ascending order, and the ABPC (area between the perturbation curves) is their difference. Similarly, the deletion and insertion metrics [38] are an equivalent pair to MoRF and LeRF. Another example is random perturbation, such as Infidelity [53], which, unlike ordered perturbation, perturbs the input randomly and computes the empirical average of Eq. (4).
Formally, their formulations are given as follows:

$$
\operatorname{MoRF}(\boldsymbol{x}) = \frac{1}{L+1} \sum_{k=0}^{L} \left(\boldsymbol{f}\left(\boldsymbol{x}_{\text{MoRF}}^{(0)}\right) - \boldsymbol{f}\left(\boldsymbol{x}_{\text{MoRF}}^{(k)}\right)\right), \tag{1}
$$

$$
\operatorname{LeRF}(\boldsymbol{x}) = \frac{1}{L+1} \sum_{k=0}^{L} \left(\boldsymbol{f}\left(\boldsymbol{x}_{\text{LeRF}}^{(0)}\right) - \boldsymbol{f}\left(\boldsymbol{x}_{\text{LeRF}}^{(k)}\right)\right), \tag{2}
$$

$$
\operatorname{ABPC}(\boldsymbol{x}) = \frac{1}{L+1} \sum_{k=0}^{L} \left(\boldsymbol{f}\left(\boldsymbol{x}_{\text{LeRF}}^{(k)}\right) - \boldsymbol{f}\left(\boldsymbol{x}_{\text{MoRF}}^{(k)}\right)\right), \tag{3}
$$

$$
\operatorname{INFD}(\boldsymbol{x}) = \mathbb{E}_{\boldsymbol{I} \sim \mu_{\boldsymbol{I}}} \left[\left(\boldsymbol{I}^{T} \mathcal{A}(\boldsymbol{x}, \boldsymbol{f}) - \left(\boldsymbol{f}(\boldsymbol{x}) - \boldsymbol{f}(\boldsymbol{x} - \boldsymbol{I})\right)\right)^{2}\right], \tag{4}
$$

where $\pmb{f}$ is the DNN model, including the architecture and the trained parameters, $\pmb{x}^{(0)}$ is the original input, $\pmb{x}_{\mathrm{MoRF}}^{(k)}$ is the perturbed input whose top-$k$ features are masked, $\pmb{x}_{\mathrm{LeRF}}^{(k)}$ is the perturbed input whose bottom-$k$ features are masked, and $\mathcal{A}$ is a feature attribution method taking a data sample $\pmb{x}$ and a trained model $\pmb{f}$ as input. Note that the more faithful an explanation is to the model, the higher its MoRF and ABPC scores and the lower its LeRF score. Since ABPC already contains the information of both MoRF and LeRF, we do not report LeRF scores, without loss of completeness. INFD [53] follows a similar idea, but its perturbation manner is quite different.
Random perturbation of the input space is adopted (or an effective sampling strategy can be designed), which may lead to high computational complexity when the input space is large.

Pseudo Ground Truth. In certain cases, pseudo ground truths can serve as reasonable approximations of the actual ground truths for explanations. For example, pseudo ground truths of explanations can be generated through a consensus-based metric [28]. Here, the consensus refers to the aggregation of explanations from multiple deep models. We can consider this consensus as a pseudo ground truth for the explanation. To evaluate an explanation, we only need to measure its similarity score to this pseudo ground truth. We call this score the PScore. Formally, PScore can be formulated as follows:

$$
\operatorname{PScore}(\boldsymbol{x}) = \cos \left(\frac{1}{|\mathcal{M}|} \sum_{\boldsymbol{g} \in \mathcal{M}} \mathcal{A}(\boldsymbol{x}, \boldsymbol{g}), \mathcal{A}(\boldsymbol{x}, \boldsymbol{f})\right), \tag{5}
$$

where $\cos$ denotes the cosine similarity, $\mathcal{M}$ is a set of well-trained models and $\mathcal{A}$ is a feature attribution method that takes a data sample $x$ and a trained model $f$ as input.

Some clarifications are required to elaborate on the pseudo-ground-truth metric.

(1) Take the example of evaluating the faithfulness of a new attribution algorithm $\mathcal{A}$. On the image classification task, we use $\mathcal{A}$ to explain 15 models and obtain 15 attribution results. We then aggregate the 15 attribution results through normalization and averaging to obtain the pseudo ground truth. After that, we measure the similarity between the pseudo ground truth and each of the 15 attributions, which serves as the PScore of $\mathcal{A}$ on each model. This completes the faithfulness evaluation of $\mathcal{A}$ under the PScore metric.
(2) This metric relies on some assumptions and preconditions:

- The models in $\mathcal{M}$ should be well-trained, otherwise both the predictions and the attributions can be random and poor. According to our experiments, publicly released ImageNet-pretrained models are safe to use for image classification tasks.
- The number of models should be large. The original paper [28] suggests using 15 models, while we use 9 models for image tasks and 6 models for text tasks to reduce the computational cost.

Synthetic Ground Truth. Synthetic datasets with sophisticated designs similar to "data poisoning" or "adversarial attacks" can provide synthetic ground truths of explanations [32]. Note that the "attacks" we intentionally apply here are noticeable and describable, simply consisting of painted patches. The idea is that the synthetic patches on the images serve as supervision signals for training models. Labels of the images with synthetic patches will be reversed. These patches constitute explanation ground truths because, by design, no other patterns can lead to correct predictions. Therefore, models trained on such datasets must attribute their predictions to the synthetic ground truths. To explain a well-trained model, effective feature attribution methods should produce explanations that closely match the synthetic ground truths. We propose using Average Precision (AP) and the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) to quantify the matching score, referred to as SynScore. Formally, SynScore can be defined as follows:

$$
\operatorname{SynScore}_{\text{metric}}(\boldsymbol{x}) = \operatorname{metric}\left(\operatorname{Syn}(\boldsymbol{x}), \mathcal{A}(\boldsymbol{x}, \boldsymbol{f})\right), \tag{6}
$$

where metric can be AP or AUC-ROC and $\operatorname{Syn}(\boldsymbol{x})$ is the synthetic ground-truth explanation for the sample $\boldsymbol{x}$.
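To make the formulas above concrete, here is a minimal numpy sketch of Eqs. (1)-(3), (5) and (6) on 1-D toy inputs with a scalar-output model. The masking value, the rank-based (Mann-Whitney, tie-unaware) AUC estimate, and all function names are illustrative assumptions, not the benchmark's implementation:

```python
import numpy as np

def perturbation_curve(f, x, attr, descending, mask_val=0.0):
    """Average drop f(x^(0)) - f(x^(k)) over k = 0..L, masking one feature
    at a time in the order given by the attribution scores."""
    order = np.argsort(attr)
    if descending:
        order = order[::-1]
    base, xk, drops = f(x), x.astype(float).copy(), [0.0]  # k = 0 term
    for i in order:
        xk[i] = mask_val
        drops.append(base - f(xk))
    return float(np.mean(drops))

def morf(f, x, attr):   # Eq. (1): most relevant first
    return perturbation_curve(f, x, attr, descending=True)

def lerf(f, x, attr):   # Eq. (2): least relevant first
    return perturbation_curve(f, x, attr, descending=False)

def abpc(f, x, attr):   # Eq. (3) equals MoRF - LeRF for this discretization
    return morf(f, x, attr) - lerf(f, x, attr)

def pscore(attr, consensus_attrs):
    """Eq. (5): cosine similarity to the normalized average attribution
    over a set of reference models."""
    unit = lambda v: v / (np.linalg.norm(v) + 1e-12)
    consensus = np.mean([unit(a) for a in consensus_attrs], axis=0)
    return float(unit(consensus) @ unit(attr))

def synscore_auc(gt_mask, attr):
    """Eq. (6) with AUC-ROC: rank-based probability that a ground-truth
    feature outranks a background feature (ties ignored)."""
    ranks = attr.argsort().argsort().astype(float)       # 0 .. L-1
    pos, neg = ranks[gt_mask == 1], ranks[gt_mask == 0]
    u = pos.sum() - len(pos) * (len(pos) - 1) / 2.0      # Mann-Whitney U
    return float(u / (len(pos) * len(neg)))
```

On a linear model with weights `[3, 1, 0]` and the (perfectly faithful) attribution `[3, 1, 0]`, MoRF exceeds LeRF and ABPC is positive, as the metrics intend; an attribution that ranks the ground-truth features on top reaches a SynScore AUC of 1.0.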
These three categories of metrics are commonly used in recent work, but each has its own limitations. We do not expect any single category to accurately measure the faithfulness of feature attribution methods. Evaluating them together through cross-comparative analysis can help to understand their strengths and weaknesses, and could provide useful insights until more principled metrics emerge.

# 2.4 Benchmark Pipeline and Modular Implementations

We currently provide evaluations on two data modalities, i.e., images and texts. For images, we choose a small subset (5,000 images) of ImageNet as the benchmark dataset, multiple pre-trained image classification networks as the benchmark models, and multiple feature attribution methods as the benchmark explainers. Each explainer produces an explanation for each image in the dataset given a model, and each explanation is given a score by applying the evaluation metrics. Finally, the faithfulness of the explainer is quantified by the average of the scores across all images in the dataset. For texts, the setup is very similar, except that the base dataset is MovieReview and the base models are for the task of sentiment analysis. The pipeline is illustrated in Figure 1.

The benchmark pipeline is implemented following InterpretDL [29] in a modular style and is publicly available $^{2}$ . This means that the deep models, feature attribution methods, and evaluation metrics are implemented as independent modules. A code sample to obtain the explanation and the evaluation results is shown in Listing 1. New methods and metrics can therefore be easily added, and the benchmark is compatible across deep learning frameworks $^{3}$ . For example, one can use PyTorch to obtain the explanations, e.g., with Captum [26], and use our benchmark to do the evaluations.
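The per-modality loop described above — each explainer attributes each sample for each model, each explanation is scored by each metric, and the scores are averaged over the dataset — can be sketched framework-agnostically as follows; the callables, the one-step deletion metric, and all names are toy stand-ins, not InterpretDL's API:

```python
import numpy as np

def run_benchmark(models, explainers, metrics, dataset):
    """Score every (model, explainer, metric) triple, averaged over the
    dataset, mirroring the pipeline of Figure 1."""
    results = {}
    for m_name, model in models.items():
        for e_name, explain in explainers.items():
            for s_name, score in metrics.items():
                vals = [score(model, x, explain(model, x)) for x in dataset]
                results[(m_name, e_name, s_name)] = float(np.mean(vals))
    return results

# Toy instantiation: a linear "model", its exact gradient as the explainer,
# and a one-step MoRF-style deletion metric.
w = np.array([2.0, -1.0, 0.5])
models = {"linear": lambda x: float(x @ w)}
explainers = {"grad": lambda model, x: w * np.ones_like(x)}

def one_step_morf(model, x, attr):
    xk = x.copy()
    xk[np.argmax(np.abs(attr))] = 0.0  # delete the top-attributed feature
    return model(x) - model(xk)

metrics = {"morf1": one_step_morf}
dataset = [np.array([1.0, 1.0, 1.0]), np.array([2.0, 0.0, 1.0])]
results = run_benchmark(models, explainers, metrics, dataset)
```

Because the modules only interact through these call signatures, swapping in a new explainer or metric is a one-line change, which is the point of the modular design.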
To show the benchmark's utility, we have provided a use-case scenario of using a HuggingFace model $^{4}$ for feature attributions and faithfulness evaluations on the X-ray pneumonia classification task. See the source code in the supplementary material for details.

# 3 Experiments and Observations

Following the benchmark pipeline described in the previous section, we have conducted evaluation experiments. In this section, we present the experimental results, address several interesting research questions, and introduce observations that could guide future work.

To acquaint readers with our experimental results, we first present an illustrative example, comparing the feature attribution methods with constant and random baselines to confirm their effectiveness. The results show the scores obtained by applying all feature attribution methods and measuring them with all metrics on ResNet-50. Table 1 presents the results, where the constant and random baselines perform significantly worse than the other three attribution methods (i.e., GradCAM, IG and SG). Moreover, no single attribution method achieves the best performance across all faithfulness metrics, highlighting the need for our benchmark. For rigorous analysis, the statistics in the following subsections exclude the constant and random baselines.

```python
import interpretdl as it

# Load a pretrained model from the PaddlePaddle model zoo.
from paddle.vision.models import resnet50
model = resnet50(pretrained=True)

# Available feature attribution methods include but are not limited to
# SG, IG, LIME, BT, GA, etc. 'interpret' is the universal API.
algo = it.SmoothGradInterpreter(model, device="gpu:0")
expl_result = algo.interpret("test.jpg")

# Available faithfulness evaluation metrics include but are not limited to
# MoRF, ABPC, INFD, etc. 'evaluate' is the universal API. Note that some
# evaluators do not require the model.
evaluator = it.Infidelity(model)
eval_result = evaluator.evaluate("test.jpg", expl_result)
```

Listing 1: Code for computing the SmoothGrad explanation and the INFD metric score, given a pretrained ResNet-50 model and a test image.

Table 1: Evaluation results on the ImageNet-pretrained ResNet-50, with constant and random baselines, GradCAM, IG and SG. "Random-16" indicates that the random values are given at patch level (16×16 pixels) while "Random" is at pixel level.
| Attribution Methods | MoRF ↑ | ABPC ↑ | PScore ↑ | INFD ↓ | SynScore ↑ |
| --- | --- | --- | --- | --- | --- |
| Constant | 0.000 | 0.000 | N/A | 3.015 | 0.072 |
| Random-16 | 0.596 | 0.007 | N/A | 3.039 | 0.077 |
| Random | 0.599 | 0.008 | N/A | 3.015 | 0.078 |
| GradCAM | 0.628 | 0.424 | 0.835 | 2.496 | 1.000 |
| IG | 0.709 | 0.377 | 0.812 | 2.373 | 0.999 |
| SG | 0.701 | 0.369 | 0.820 | 2.323 | 0.998 |
Evaluating the faithfulness of attribution algorithms presents several challenges from three perspectives: metric-wise, attribution algorithm-wise, and classification model-wise. All of these perspectives can lead to variance in faithfulness benchmarking. We analyze faithfulness from these three perspectives in the following subsections.

# 3.1 Are Any Two Metrics Correlated?

Selecting appropriate metrics is critical but also challenging given the multitude of options. We introduced three families of metrics to evaluate the faithfulness of explainability methods. An intriguing research question is whether any two metrics are correlated. The short answer is yes. To investigate the inter-correlations between metrics, whether they are correlated or orthogonal, we compute the Pearson correlation coefficient between metrics over all possible pairs of models and explanation techniques. We can see from Figure 2 that, first, the strongest positive correlation is between ABPC and PScore (0.58, p-value $2.4 \times 10^{-4}$); second, the strongest negative correlation among all pairs is between INFD and PScore (-0.59, p-value $1.9 \times 10^{-4}$); third, there is a near-zero correlation for the MoRF-PScore,

![](images/b7be1de90ef0bbe1b16b839f15049dcef7867e042fc866f809456b7e05c5bfe1.jpg)
Figure 2: Correlation between metrics.

MoRF-INFD and PScore-SynScore pairs, indicating that these pairs are essentially uncorrelated.

As noted, ABPC, INFD, and PScore are potential alternatives to one another, each requiring a substantial amount of computation of a different kind. Beyond a single run of the explanation algorithm, ABPC needs tens of forward passes of the original model. INFD requires generating random masks, which

![](images/36ed6c72b26487b7838f5693fd90731adedd7023389415fb216855129ab99c0e.jpg)
(a) Modality of images.

![](images/a8fbd78f9f70b3444f8e657a301156ead33dc7fcd27e81c6a0fe7a1ef9de4567.jpg)
(b) Modality of texts.
Figure 3: Averaged metric scores. A higher value indicates better faithfulness. The blanks indicate that the algorithm, in its vanilla form, is not suitable for the model.

are sampled from a very large space, especially for images. Meanwhile, PScore requires the availability of trained models, each of which is passed through the explanation algorithm once. Practitioners should choose the most appropriate metric based on the availability of models and computational resources.

There are two other key observations worth discussing. First, MoRF is weakly correlated with ABPC, with a correlation coefficient of 0.29, indicating that they are not measuring exactly the same characteristic. As discussed by Samek et al. [42], MoRF focuses only on the most important features, while ABPC also considers the ranking of the least important features. Therefore, if the goal is to filter out irrelevant features, ABPC scores should be weighted more heavily. Second, we found that ABPC, PScore, and INFD are strongly correlated with each other. In contrast, MoRF and SynScore evaluate faithfulness from different perspectives than the former three metrics.

# 3.2 Which Explanation Algorithm Demonstrates the Best Faithfulness?

Another interesting question is which explanation algorithm is the most faithful. Depending on the models and faithfulness metrics used, the optimal algorithm may differ. However, we can still draw several useful and instructive conclusions.

To simplify the comparison of faithfulness across explanation algorithms, we aggregate the multidimensional metrics into a single one. First, we assume that the metrics in our benchmark measure faithfulness from different aspects, since no pair is perfectly correlated. We can then propose a single score by averaging all of the metrics while negating the scores of INFD, the only metric for which lower values indicate better faithfulness.
Moreover, we standardize the scores of all metrics within each model to balance the contributions of the metrics. Specifically, the averaged metric is defined as:

$$
\operatorname{AvgScore} = z(\text{MoRF}) + z(\text{ABPC}) + z(\text{PScore}) - z(\text{INFD}) + z(\text{SynScore}), \tag{7}
$$

where $z(s) = (s - \bar{s}) / \sigma(s)$ is the standardization within each model. Using this formula, we compute the averaged faithfulness score for each model-algorithm pair and show the results in Figure 3. We provide the results of each metric in the supplementary materials.

We summarize the observations as follows:

- Overall, IG generally outperforms SG except for Vision Transformers. For Transformers in NLP, SG is not an optimal choice. One possible reason is that SG adds noise in the embedding layer, and the noise scale is difficult to tune. Though further investigation is needed, we believe that the theoretical guarantee (IG satisfies the Completeness axiom, i.e., the attributions add up to the difference between the output of the model at the input and at a chosen baseline) may be one of the reasons for IG's wide adoption.

![](images/8a7a9a1838ced3a1861b31021e36ca7b8b33efe0a2dae8c4b6a4a454fb9af369.jpg)
Figure 4: Sensitivity measured by standard deviations across attribution methods. Note that the std for VGG16-INFD is very high (86.5), so we cap the axis at 0.2 for better visualization.

- LIME steadily gets a high averaged score for NLP Transformers $^{5}$ . One reason is that the algorithmic computation of LIME overlaps to some degree with the evaluation of MoRF and ABPC. This is more obvious in the results of MoRF and ABPC, which are in the supplementary materials. Nevertheless, LIME is among the best algorithms as measured by the other metrics as well.
- For attention-based networks, including both ViTs and NLP Transformers, BT generally demonstrates higher faithfulness than others.
This may stem from its accurate approximation of Transformer computations. This observation coincides with the implication of the first observation, motivating future work to design explanation algorithms through mathematical analysis of the network structure, e.g., attention modules in Transformers.
- Even within the same network structure (here we have three model families: ResNet-{50, 101, 152}, ViT-{Small, Base, Large} and Bert-{Distil, Base, Large}), no algorithm consistently achieved high faithfulness. For example, IG performed well on R101 and R152 but not on R50. Although BT demonstrated the highest faithfulness in most cases, it did not do so for Bert/L. Developing a faithful algorithmic technique that works across different metrics, modalities and models remains an open challenge.

# 3.3 Which Model is the Most (In)sensitive to Explanation Algorithms?

The model's complexity and interpretability are also key factors influencing benchmarking performance. From the extensive evaluation results, we further investigate the sensitivity of the models to attribution methods. Given a model and a metric, we calculate the standard deviation of all metric scores across the possible attribution methods. Take ResNet-50 as an example: for each evaluation metric, a standard deviation is computed among GradCAM, IG, and SG. The results are shown in Figure 4 for image classification models and Figure 5 for NLP models.

A model that is insensitive to attribution methods may indicate that the model can be easily explained, or that all attribution methods fail to explain the model. Fortunately, the latter case does not occur in our experiments, because the attribution

![](images/e7fe6d878fdbfed4fe9a9d193a7ffb0a8a1bb3d75bf1fbcc72e12effe103c152.jpg)
Figure 5: Sensitivity measured by standard deviations across attribution methods. Since the vocabulary of RoBERTa/B differs from that of the other models, PScore is excluded.
methods we selected for the models are relatively suitable. The sensitivity therefore lets us roughly estimate the difficulty of explaining a model, which may be helpful when designing novel network structures.

Although the most insensitive model is not easy to identify, the most sensitive one is VGG16, which shows the highest sensitivity in almost all metrics. Among network families, we find that ResNets show higher sensitivity than ViTs in MoRF and INFD but lower sensitivity in PScore and SynScore. Within either network family, the sensitivity does not vary much. The NLP models have similar sensitivity as well, except Bert/B in the metrics of MoRF and ABPC. A similar observation is found for ViT/B with the ABPC metric. Existing attribution methods often use ViT/B or BERT as the primary model and achieve good evaluation results. However, some methods may work especially well for ViT/B or BERT/B but not as well for other models. Therefore, we encourage the research community to evaluate attribution methods on a variety of models with different network architectures.

# 4 Related Work

Existing work has developed benchmarks for evaluating and comparing explainability approaches. For example, Rathee et al. proposed BAGEL, a benchmark for evaluating explanation methods on graph neural networks [39]. OpenXAI provides an open-source framework for evaluating post hoc explanation methods on tabular data [3]. Similarly, other benchmarks focus on NLP models, such as [14, 52]. The XAI-Bench library benchmarks feature attribution methods on synthetic datasets [33]. Chou et al. proposed a benchmark for counterfactual explanation methods on tabular data [11]. However, these benchmarks are limited to specific data modalities and explanation methods. A benchmark that considers multiple data modalities and explanation paradigms is still lacking.
Our work addresses this gap by proposing a unified benchmark for explainable AI across different modalities, with the goal of facilitating holistic progress in the field of XAI.

# 5 Discussions, Limitations and Future Work

In this section, we discuss the proposed benchmark $\mathcal{M}^4$ and its limitations. We also present our plans for future work to address these limitations.

In addition to including a wide range of feature attribution methods, faithfulness evaluation metrics, data modalities, and deep models, our benchmark $\mathcal{M}^4$ has two other properties. The first is efficiency and ease of use. The benchmark $\mathcal{M}^4$ utilizes subsets of public datasets and evaluates the same dataset for each modality, i.e., ImageNet and MovieReview. Moreover, the models used during evaluation rely on publicly available pre-trained weights, avoiding training new models from scratch, except for the fine-tuning required for sentiment analysis. The second is objectivity. The $\mathcal{M}^4$ pipeline is objective and performed by evaluation algorithms, because the faithfulness evaluation depends only on the attribution methods and the deep models.

We would also like to distinguish between faithfulness and interpretability. Interpretability refers to the alignment between the explanations of a model and human understanding [25]. Faithfulness is a prerequisite for interpretability and refers to how well an explanation reflects the model's functioning. This paper focuses on evaluating the faithfulness, specifically of feature attribution methods, across metrics, models, and modalities. We do not focus much on the interpretability of deep models in this work, but our benchmark can easily be extended to such evaluations, e.g., with the help of ground-truth labels from image segmentation [19] and language reasoning [14].

We present the limitations and our plans for future work below.
(1) The evaluation metrics in XAI cover several aspects beyond faithfulness, e.g., interpretability, sparsity, stability, etc., while the current version of $\mathcal{M}^4$ focuses only on faithfulness. For the comprehensive applicability of the XAI benchmark, we will progressively integrate other evaluation metrics for feature attributions. Some are easy to plug in. For example, sparsity can be directly computed via the entropy of the normalized attributions; however, since sparsity is not a faithfulness metric, it is not reported in the current version of the $\mathcal{M}^4$ benchmark. Another reason that we do not involve other aspects in the benchmark and focus on the faithfulness evaluations of feature attributions is that we believe that, based on faithful explanation results, we can more easily and accurately analyze other aspects of XAI. In the future, various metrics will be included in $\mathcal{M}^4$, making $\mathcal{M}^4$ more comprehensively applicable in XAI.

(2) Our benchmark $\mathcal{M}^4$ does not include many good attribution methods, such as Shapley-value-based methods [35, 10], CAM variants [51, 36, 23], LIME variants [58, 30], LRPs [5, 49], attention-based methods [1] and many others. However, explanations are of great variety. Although many advanced explanations beyond feature attributions have been proposed to facilitate a deeper understanding of deep neural networks, their faithfulness is difficult to evaluate a posteriori and would have to be evaluated ad hoc. As an initial stride toward establishing a benchmark for evaluating XAI methods, concentrating on feature attribution methods is an attainable endeavor, albeit one that still presents its own set of challenges.

(3) Language models and their explanations are evaluated only on the task of sentiment analysis.
Though we are interested in explanation faithfulness rather than language models' capacities, and sentiment analysis is one of the accessible tasks for faithfulness evaluations, it would be more comprehensive to evaluate on other NLP tasks, e.g., those from the GLUE benchmark [50, 14, 57].
(4) Our benchmark $\mathcal{M}^4$ currently contains the image and text modalities. One future direction is to enhance the benchmark by integrating more data modalities, such as graphs, audio clips, tabular data, and multi-modality; several pioneering studies [6, 9] have already explored multi-modal explanations using some of the evaluation metrics in our benchmark pipeline.
(5) Social impacts and ethics. Our framework can assess bias and fairness issues of DNNs in high-stakes applications involving sensitive attributes like gender, race, and age. This is a challenging topic and will be investigated in our future research.

# 6 Conclusions

Although existing benchmarks have advanced XAI in specific domains, a universal benchmark for comparing explanation methods across models and modalities is lacking. Our work aims to address this gap by evaluating feature attribution methods on computer vision and NLP tasks using a variety of metrics. In our benchmark, we evaluated nine models using six of the most common explanation methods (LIME, SG, IG, GradCAM, GA and BT) on two modalities (images and texts) based on five evaluation metrics. We gained several observations that can inform the future design and application of XAI methods. For future work, we plan to expand the benchmark to other modalities, including but not limited to graphs and audio. We also plan to incorporate additional evaluation perspectives, such as interpretability, stability, sparsity, etc.

# Acknowledgments

Xuhong Li and Haoyi Xiong were supported in part by the National Key R&D Program of China under the grant No. 2021ZD0110303.

# References

[1] Samira Abnar and Willem H. Zuidema.
Quantifying attention flow in transformers. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault, editors, Proceedings of the Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020. +[2] Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. Proceedings of the Advances in Neural Information Processing Systems, 2018. +[3] Chirag Agarwal, Eshika Saxena, Satyapriya Krishna, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, and Himabindu Lakkaraju. Openxai: Towards a transparent evaluation of model explanations. Advances in Neural Information Processing Systems, 2022. +[4] Ameen Ali, Thomas Schnake, Oliver Eberle, Grégoire Montavon, Klaus-Robert Müller, and Lior Wolf. Xai for transformers: Better explanations through conservative propagation. In International Conference on Machine Learning, pages 435-451. PMLR, 2022. +[5] Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller, and Wojciech Samek. Layer-wise relevance propagation for neural networks with local renormalization layers. In International Conference on Artificial Neural Networks. Springer, 2016. + +[6] Hila Chefer, Shir Gur, and Lior Wolf. Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021. +[7] Hila Chefer, Shir Gur, and Lior Wolf. Transformer interpretability beyond attention visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021. +[8] Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. This looks like that: deep learning for interpretable image recognition. Advances in neural information processing systems, 32, 2019. +[9] Jiamin Chen, Xuhong Li, Lei Yu, Dejing Dou, and Haoyi Xiong. 
Beyond intuition: Rethinking token attributions inside transformers. Transactions on Machine Learning Research, 2022. +[10] Jianbo Chen, Le Song, Martin J Wainwright, and Michael I Jordan. L-shapley and c-shapley: Efficient model interpretation for structured data. arXiv preprint arXiv:1808.02610, 2018. +[11] Yu-Liang Chou, Chihcheng Hsieh, Catarina Moreira, Chun Ouyang, Joaquim Jorge, and João Madeiras Pereira. Benchmark evaluation of counterfactual algorithms for xai: From a white box to a black box. arXiv preprint arXiv:2203.02399, 2022. +[12] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009. +[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. +[14] Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. Eraser: A benchmark to evaluate rationalized nlp models. arXiv preprint arXiv:1911.03429, 2019. +[15] Finale Doshi-Velez and Been Kim. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608, 2017. +[16] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, ICLR, 2021. +[17] Mengnan Du, Ninghao Liu, and Xia Hu. Techniques for interpretable machine learning. Communications of the ACM, 63(1):68-77, 2019. +[18] Riccardo Guidotti, Anna Monreale, Stan Matwin, and Dino Pedreschi. Black box explanation by learning image exemplars in the latent feature space. 
In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2019. +[19] Matthieu Guillaumin, Daniel Kuttel, and Vittorio Ferrari. Imagenet auto-annotation with segmentation propagation. International Journal of Computer Vision, 110:328–348, 2014. +[20] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dólar, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000-16009, 2022. +[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016. +[22] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1314-1324, 2019. + +[23] Peng-Tao Jiang, Chang-Bin Zhang, Qibin Hou, Ming-Ming Cheng, and Yunchao Wei. Layercam: Exploring hierarchical class activation maps for localization. IEEE Transactions on Image Processing, 2021. +[24] Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In Proceedings of the International Conference on Machine Learning, 2018. +[25] Jinkyu Kim and John Canny. Interpretable learning for self-driving cars by visualizing causal attention. In Proceedings of the IEEE International Conference on Computer Vision, 2017. +[26] Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, et al. Captum: A unified and generic model interpretability library for pytorch. arXiv preprint arXiv:2009.07896, 2020. 
+[27] Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. Interpretable & explorable approximations of black box models. arXiv preprint arXiv:1707.01154, 2017. +[28] Xuhong Li, Haoyi Xiong, Siyu Huang, Shilei Ji, and Dejing Dou. Cross-model consensus of explanations and beyond for image classification models: An empirical study. arXiv preprint arXiv:2109.00707, 2021. +[29] Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Zeyu Chen, and Dejing Dou. Interpretdl: Explaining deep models in paddlepaddle. Journal of Machine Learning Research, 23(197):1-6, 2022. +[30] Xuhong Li, Haoyi Xiong, Xingjian Li, Xiao Zhang, Ji Liu, Haiyan Jiang, Zeyu Chen, and Dejing Dou. G-lime: Statistical learning for local interpretations of deep neural networks using global priors. Artificial Intelligence, 314:103823, 2023. +[31] Xuhong Li, Haoyi Xiong, Yi Liu, Dingfu Zhou, Zeyu Chen, Yaqing Wang, and Dejing Dou. Distilling ensemble of explanations for weakly-supervised pre-training of image segmentation models. Machine Learning, pages 1-17, 2022. +[32] Yi-Shan Lin, Wen-Chuan Lee, and Z Berkay Celik. What do you see? evaluation of explainable artificial intelligence (xai) interpretability through neural backdoors. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1027-1035, 2021. +[33] Yang Liu, Sujay Khandagale, Colin White, and Willie Neiswanger. Synthetic benchmarks for scientific research in explainable machine learning. In Advances in Neural Information Processing Systems Datasets Track, 2021. +[34] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. +[35] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Proceedings of the Advances in Neural Information Processing Systems, 2017. 
+[36] Mohammed Bany Muhammad and Mohammed Yeasin. Eigen-cam: Class activation map using principal components. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1-7. IEEE, 2020. +[37] Meike Nauta, Ron Van Bree, and Christin Seifert. Neural prototype trees for interpretable fine-grained image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14933-14943, 2021. +[38] Vitali Petsiuk, Abir Das, and Kate Saenko. Rise: Randomized input sampling for explanation of black-box models. In Proceedings of the British Machine Vision Conference, 2018. +[39] Mandeep Rathee, Thorben Funke, Avishek Anand, and Megha Khosla. Bagel: A benchmark for assessing graph neural network explanations. arXiv preprint arXiv:2206.13983, 2022. + +[40] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should i trust you?" explaining the predictions of any classifier. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016. +[41] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215, 2019. +[42] Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. Evaluating the visualization of what a deep neural network has learned. IEEE Transactions on Neural Networks and Learning Systems, 2016. +[43] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019. +[44] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, 2017. +[45] Karen Simonyan and Andrew Zisserman.
Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. +[46] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017. +[47] Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. Ernie 2.0: A continual pre-training framework for language understanding. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 8968-8975, 2020. +[48] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Proceedings of the International Conference on Machine Learning, 2017. +[49] Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multihead self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Anna Korhonen, David R. Traum, and Lluis Márquez, editors, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 5797-5808. Association for Computational Linguistics, 2019. +[50] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations. +[51] Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, and Xia Hu. Score-cam: Score-weighted visual explanations for convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. +[52] Lijie Wang, Yaozong Shen, Shuyuan Peng, Shuai Zhang, Xinyan Xiao, Hao Liu, Hongxuan Tang, Ying Chen, Hua Wu, and Haifeng Wang. A fine-grained interpretability evaluation benchmark for neural nlp. arXiv preprint arXiv:2205.11097, 2022. 
+[53] Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I Inouye, and Pradeep Ravikumar. On the (in) fidelity and sensitivity for explanations. arXiv preprint arXiv:1901.09392, 2019. +[54] Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. On completeness-aware concept-based explanations in deep neural networks. Advances in Neural Information Processing Systems, 33:20554-20565, 2020. +[55] Omar Zaidan and Jason Eisner. Modeling annotators: A generative approach to learning from annotator rationales. In Proceedings of the 2008 conference on Empirical methods in natural language processing, pages 31-40, 2008. +[56] Quanshi Zhang, Yu Yang, Haotian Ma, and Ying Nian Wu. Interpreting cnns via decision trees. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019. + +[57] Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, and Mengnan Du. Explainability for large language models: A survey. arXiv preprint arXiv:2309.01029, 2023. +[58] Zhengze Zhou, Giles Hooker, and Fei Wang. S-lime: Stabilized-lime for model explanation. In Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining, pages 2429-2438, 2021. 
\ No newline at end of file diff --git a/mathcalm4aunifiedxaibenchmarkforfaithfulnessevaluationoffeatureattributionmethodsacrossmetricsmodalitiesandmodels/images.zip b/mathcalm4aunifiedxaibenchmarkforfaithfulnessevaluationoffeatureattributionmethodsacrossmetricsmodalitiesandmodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b73960012a31c8f928243dd0adcda39574fbbff8 --- /dev/null +++ b/mathcalm4aunifiedxaibenchmarkforfaithfulnessevaluationoffeatureattributionmethodsacrossmetricsmodalitiesandmodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0101eecdf4fa056492e1c909a3c456b2e81daae53ac48160ad04c32ddef47e1c +size 266350 diff --git a/mathcalm4aunifiedxaibenchmarkforfaithfulnessevaluationoffeatureattributionmethodsacrossmetricsmodalitiesandmodels/layout.json b/mathcalm4aunifiedxaibenchmarkforfaithfulnessevaluationoffeatureattributionmethodsacrossmetricsmodalitiesandmodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..938438626a4baca214828dbe35274472dc2cef4f --- /dev/null +++ b/mathcalm4aunifiedxaibenchmarkforfaithfulnessevaluationoffeatureattributionmethodsacrossmetricsmodalitiesandmodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c27644db9b05a5f3ae390cb656cc8a3c691f568d41b1e2069dec3be1de9db396 +size 364250 diff --git a/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/1a1c2248-1e08-49b3-81e2-6fd05f954bbc_content_list.json b/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/1a1c2248-1e08-49b3-81e2-6fd05f954bbc_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..369e8907367cfdfeb06dbe8238878f58807aa6a9 --- /dev/null +++ b/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/1a1c2248-1e08-49b3-81e2-6fd05f954bbc_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fce5de24f75de4eb51f6df101ef5ccbb72e664e07b64b6575b6ce0cbed93c2a3 
+size 117707 diff --git a/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/1a1c2248-1e08-49b3-81e2-6fd05f954bbc_model.json b/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/1a1c2248-1e08-49b3-81e2-6fd05f954bbc_model.json new file mode 100644 index 0000000000000000000000000000000000000000..11cf45b931a388b0616d01e146f2c520e1e4665d --- /dev/null +++ b/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/1a1c2248-1e08-49b3-81e2-6fd05f954bbc_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:86479788b186d599a6457f4ff08de0c93d134810d741df78ac0aff4c3482a079 +size 143774 diff --git a/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/1a1c2248-1e08-49b3-81e2-6fd05f954bbc_origin.pdf b/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/1a1c2248-1e08-49b3-81e2-6fd05f954bbc_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..601c7b1b7cb9c918890d0e5e4f7829cd1df7c7d0 --- /dev/null +++ b/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/1a1c2248-1e08-49b3-81e2-6fd05f954bbc_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93c82ba00952e4574e79d3f81042dda253a4d9f1cb39feb8464c9058c8b30a07 +size 8302998 diff --git a/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/full.md b/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0aa17017bc75a9c24a48e84286c357c53cd1ce42 --- /dev/null +++ b/ppoissonsurfacereconstructionincurlfreeflowfrompointclouds/full.md @@ -0,0 +1,518 @@ +# $p$ -Poisson surface reconstruction in curl-free flow from point clouds + +Yesom Park $^{1*}$ , Taekyung Lee $^{2*}$ , Jooyoung Hahn $^{3}$ , Myungjoo Kang $^{1}$ + +$^{1}$ Department of Mathematical Sciences, Seoul National University + +$^{2}$ Interdisciplinary Program in Artificial Intelligence, Seoul National University + +$^{3}$ Department of Mathematics and Descriptive 
Geometry, + +Slovak University of Technology in Bratislava + +{yeisom,dlxorud1231,mkang}@snu.ac.kr + +jooyoung.hahn@stuba.sk + +# Abstract + +The aim of this paper is the reconstruction of a smooth surface from an unorganized point cloud sampled from a closed surface, with the preservation of geometric shapes, without any further information other than the point cloud. Implicit neural representations (INRs) have recently emerged as a promising approach to surface reconstruction. However, the reconstruction quality of existing methods relies on ground truth implicit function values or surface normal vectors. In this paper, we show that proper supervision of partial differential equations and fundamental properties of differential vector fields are sufficient to robustly reconstruct high-quality surfaces. We cast the $p$ -Poisson equation to learn a signed distance function (SDF), and the reconstructed surface is implicitly represented by the zero-level set of the SDF. For efficient training, we develop a variable splitting structure by introducing the gradient of the SDF as an auxiliary variable and impose the $p$ -Poisson equation directly on the auxiliary variable as a hard constraint. Based on the curl-free property of the gradient field, we impose a curl-free constraint on the auxiliary variable, which leads to a more faithful reconstruction. Experiments on standard benchmark datasets show that the proposed INR provides a superior and robust reconstruction. The code is available at https://github.com/Yebbi/PINC. + +# 1 Introduction + +Surface reconstruction from an unorganized point cloud has been extensively studied for more than two decades [10, 29, 40, 9, 62] due to its many downstream applications in computer vision and computer graphics [8, 16, 62, 58]. Classical point cloud or mesh-based representations are efficient, but they do not guarantee a watertight surface and are usually limited to fixed geometries.
Implicit function-based representations of the surface [28, 64, 43, 14] as a level set $S = \{\mathbf{x} \in \mathbb{R}^3 \mid u(\mathbf{x}) = c\}$ of a continuous implicit function $u: \mathbb{R}^3 \to \mathbb{R}$ , such as signed distance functions (SDFs) or occupancy functions, have received considerable attention for providing watertight results and great flexibility + +![](images/5b405a8050d80bb54f3dcc201a03fabe6c22e0d1613b3d2d0c3b8e12f44fb466.jpg) + +![](images/77dfe369f71f590e16b0f684bb770182afc24b65b5945b9a41ecd05ac3af0fd8.jpg) +Eikonal + +![](images/2f3bb21701fa5d8b169ede28cd8ce38a75c746d1d8b2a5379d430700bee58d5e.jpg) + +![](images/5d7876a65f9440b39d70098bb624749b61f7d80ccaa746945da41b35ff2c0019.jpg) +$p$ -Poisson + +![](images/c995e12c23618ba34c3b75345cdf63a787d49a179bbe6cca78092896108f083e.jpg) + +![](images/2083062b177f0efbf130133dc2f36dcdd3eb56c5fa34cd48a7705d8c121b4d35.jpg) +$p$ -Poisson + curl-free +Figure 1: Comparison of reconstruction using an eikonal equation (9), the $p$ -Poisson equation (8), and the proposed $p$ -Poisson equation with the curl-free condition (11). + +in representing different topologies. In recent years, with the rise of deep learning, a stream of work called implicit neural representations (INRs) [2, 44, 16, 61, 19, 55, 53, 50] has revisited them by parameterizing the implicit function $u$ with neural networks. INRs have shown promising results by offering efficient training and expressive surface reconstruction. + +Early INRs [44, 42, 16] treat the points-to-surface problem as a supervised regression problem with ground-truth distance values, which are difficult to use in many situations. To overcome this limitation, some research efforts have used partial differential equations (PDEs), typically the eikonal equation, as a means to relax the 3D supervision [23, 37, 48]. 
While these efforts have been successful in reconstructing various geometries, they encounter the issue of non-unique solutions of the eikonal equation and rely heavily on the oriented normal vector at each point. They often fail to capture fine details or reconstruct plausible surfaces without normal vectors. A raw point cloud usually lacks normal vectors, and numerically estimated normal vectors [1, 18] contain approximation errors. Moreover, the prior works are vulnerable to noisy observations and outliers. + +The goal of this work is to propose an implicit representation of surfaces that not only provides smooth reconstruction but also recovers high-frequency features only from a raw point cloud. To this end, we provide a novel approach that expresses an approximated SDF as the unique solution to the $p$ -Poisson equation. In contrast to previous studies that only describe the SDF as a network, we define the gradient of the SDF as an auxiliary variable, motivated by variable splitting methods [47, 60, 22, 12] in the optimization literature. We then parameterize the auxiliary output to automatically satisfy the $p$ -Poisson equation by reformulating the equation in a divergence-free form. The divergence-free splitting representation contributes to efficient training by avoiding deeply nested gradient chains and allows the use of sufficiently large $p$ , which permits an accurate approximation of the SDF. In addition, we impose a curl-free constraint [25] because the auxiliary variable should be learned as a conservative vector field, which has vanishing curl. The curl-free constraint serves to achieve a faithful reconstruction. We carefully evaluate the proposed model on widely used benchmarks and assess its robustness to noise. The results demonstrate the superiority of our model without a priori knowledge of the surface normal at the data points.
+ +# 2 Background and related works + +Implicit neural representations In recent years, implicit neural representations (INRs) [41, 16, 3, 55, 54], which define a surface as the zero level-set of a neural network, have been extensively studied. Early work requires the ground-truth signed implicit function [44, 16, 41], which is difficult to obtain in real-world scenarios. Considerable research [3, 4] is devoted to removing 3D supervision and relaxing it with a ground truth normal vector at each point. In particular, several efforts use PDEs to remove supervision and learn implicit functions only from raw point clouds. Recently, IGR [23] used modern computational tools of deep learning to revisit a conventional numerical approach [14] that accesses the SDF by incorporating the eikonal equation into a variational problem. Without the normal vector, however, IGR misses fine details. To alleviate this problem, FFN [56] and SIREN [55] put the high frequencies directly into the network. Other approaches exploit additional loss terms to regulate the divergence [6] or the Hessian [63]. The vanishing viscosity method, which perturbs the eikonal equation with a small diffusion term, is also considered [37, 49] to mitigate the drawback that the eikonal loss has unreliable minima. The classical Poisson reconstruction [31], which recovers the implicit function by integration over the normal vector field, has also been revisited to accelerate model inference time [48], but supervision of the normal vector field is required. Neural-Pull [39] constructs a new loss function by borrowing the geometric property that the SDF and its gradient define the shortest path to the surface. + +$p$ -Poisson equation The SDF can be characterized as a solution of various PDEs. Existing work [23, 55, 6] uses the eikonal equation, whose viscosity solution describes the SDF.
However, the use of the residual of the eikonal equation as a loss function raises concerns about convergence to the SDF due to the non-unique solutions of the eikonal equation. Recent works [55, 6] utilize the notion of vanishing viscosity to circumvent the issue of non-unique solutions. In this paper, we use the $p$ -Poisson equation to approximate the SDF, which is a nonlinear generalization of the Poisson equation $(p = 2)$ : + +$$ +\left\{ \begin{array}{ll} -\triangle_{p} u = -\nabla \cdot \left(\|\nabla u\|^{p-2} \nabla u\right) = 1 & \text{in } \Omega, \\ u = 0 & \text{on } \Gamma, \end{array} \right. \tag{1} +$$ + +where $p \geq 2$ , the computation domain $\Omega \subset \mathbb{R}^3$ is bounded, and $\Gamma$ is embedded in $\Omega$ . + +The main advantage of using the $p$ -Poisson equation is that the solution to (1) is unique in the Sobolev space $W^{1,p}(\Omega)$ [36]. In the limit $p \to \infty$ , this unique solution converges to the viscosity solution of the eikonal equation, i.e., the SDF, which eventually rules out the non-viscosity solutions of the eikonal equation; see a further discussion with an example in Appendix C.1. Moreover, in contrast to the eikonal equation, it is possible to describe a solution of (1) as a variational problem and compute an accurate approximation [5, 20]: + +$$ +\min_{u} \int_{\Omega} \frac{\|\nabla u\|^{p}}{p} \, d\mathbf{x} - \int_{\Omega} u \, d\mathbf{x}. \tag{2} +$$ + +As $p \to \infty$ , it has been shown [11, 30] that the solution $u$ of (1) converges to the SDF whose zero level set is $\Gamma$ . As a result, increasing $p$ gives a better approximation of the SDF, which is clearly helpful for surface reconstruction. However, it is still difficult to use a fairly large $p$ in numerical computations, and in this paper we explain one possible solution to this problem.
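Both points above, the non-uniqueness of the eikonal equation and the convergence of the $p$ -Poisson solution to the SDF as $p \to \infty$ , can be illustrated in one dimension, where $-\triangle_p u = 1$ on $(-1, 1)$ with $u(\pm 1) = 0$ has the closed-form solution $u_p(x) = \frac{p-1}{p}\left(1 - |x|^{p/(p-1)}\right)$ . The following minimal numerical sketch (illustrative, not from the paper) checks both claims:

```python
import numpy as np

def u_p(x, p):
    """Closed-form solution of -Delta_p u = 1 on (-1, 1) with u(-1) = u(1) = 0."""
    return (p - 1.0) / p * (1.0 - np.abs(x) ** (p / (p - 1.0)))

h = 2.0 / 1000
x = np.arange(-1.0, 1.0 + h / 2, h)
sdf = 1.0 - np.abs(x)                        # distance to the boundary {-1, +1}

# (i) As p grows, u_p approaches the SDF (the max error is exactly 1/p).
errors = {p: float(np.max(np.abs(u_p(x, p) - sdf))) for p in (2, 10, 100, 1000)}
print(errors)

# (ii) An eikonal residual cannot tell the SDF from a sawtooth: both are
# piecewise linear with slope +-1, so |u'| = 1 almost everywhere for both.
saw = 0.2 * np.abs((x / 0.2) % 2 - 1)        # sawtooth that also vanishes at +-1
residual = lambda u: np.mean(np.abs(np.abs(np.diff(u)) / h - 1.0))
print(residual(sdf), residual(saw))          # both ~0, yet max|sdf - saw| = 0.8
```

The $p$ -Poisson formulation singles out the first function as the unique limit, while the plain eikonal residual cannot distinguish the two.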
+ +# 3 Method + +In this section, we propose a $p$ -Poisson equation based Implicit Neural representation with Curl-free constraint (PINC). From an unorganized point cloud $\mathcal{X} = \{\mathbf{x}_i : i = 1, 2, \dots, N\}$ sampled from a closed surface $\Gamma$ , an SDF $u: \mathbb{R}^3 \to \mathbb{R}$ whose zero level set is the surface $\Gamma = \{\mathbf{x} \in \mathbb{R}^3 \mid u(\mathbf{x}) = 0\}$ is reconstructed by the proposed INR. There are two key elements in the proposed method: First, using a variable-splitting representation [45] of the network, an auxiliary output is used to learn the gradient of the SDF that satisfies the $p$ -Poisson equation (1). Second, a curl-free constraint is enforced on an auxiliary variable to ensure that the differential vector identity is satisfied. + +# 3.1 $p$ -Poisson equation + +A loss function in the physics-informed framework [51] of the existing INRs for the $p$ -Poisson equation (1) can be written directly: + +$$ +\min_{u} \int_{\Gamma} |u| \, d\mathbf{x} + \lambda_{0} \int_{\Omega} \left| \nabla \cdot \left(\|\nabla u\|^{p-2} \nabla u\right) + 1 \right| d\mathbf{x}, \tag{3} +$$ + +where $\lambda_0 > 0$ is a regularization constant. To reduce the learning complexity of the second integrand, we propose an augmented network structure that separately parameterizes the gradient of the SDF as an auxiliary variable that satisfies the $p$ -Poisson equation (1). + +Variable-splitting strategy Unlike existing studies [23, 37, 6] that use neural networks with only one output $u$ for the SDF, we introduce a separate auxiliary network output $G$ for the gradient of the SDF; the same principle is used in [45]. In the optimization literature, this is called the variable splitting method [47, 60, 22, 12], and it has the advantage of decomposing a complex minimization into a sequence of relatively simple sub-problems.
With the auxiliary variable $G = \nabla u$ and the penalty method [13], the variational problem (3) is converted into an unconstrained problem: + +$$ +\min_{u, G} \int_{\Gamma} |u| \, d\mathbf{x} + \lambda_{0} \int_{\Omega} \left| \nabla \cdot \left(\|G\|^{p-2} G\right) + 1 \right| d\mathbf{x} + \lambda_{1} \int_{\Omega} \|\nabla u - G\|^{2} \, d\mathbf{x}, \tag{4} +$$ + +where $\lambda_{1} > 0$ is a penalty parameter representing the relative importance of the loss terms. + +$p$ -Poisson as a hard constraint Looking more closely at the minimization (4), if $G$ already satisfies the $p$ -Poisson equation (1), then the second term in (4) vanishes, which removes one penalty parameter. Now, for a function $F: \Omega \to \mathbb{R}^3$ such that $\nabla \cdot F = 1$ , for example $F(\mathbf{x}) = \frac{1}{3}\mathbf{x}$ , the $p$ -Poisson equation (1) is reformulated in the divergence-free form: + +$$ +\nabla \cdot \left(\|\nabla u\|^{p-2} \nabla u + F\right) = 0. \tag{5} +$$ + +Then, there exists a vector potential $\Psi : \mathbb{R}^3 \to \mathbb{R}^3$ satisfying + +$$ +\|G\|^{p-2} G + F = \nabla \times \Psi, \tag{6} +$$ + +where $G = \nabla u$ . Note that a similar idea is used in the neural conservation law [52] to construct a divergence-free vector field built on the Helmholtz decomposition [33, 57]. From the condition (6), we have $\|G\|^{p-1} = \|\nabla \times \Psi - F\|$ and $G$ is parallel to $\nabla \times \Psi - F$ , so the auxiliary output $G$ can be written explicitly: + +$$ +G = \frac{\nabla \times \Psi - F}{\|\nabla \times \Psi - F\|^{\frac{p-2}{p-1}}}. \tag{7} +$$ + +This confirms that the minimization problem (4) does not require finding $G$ directly; rather, $G$ can be obtained from the vector potential $\Psi$ .
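As a numerical sanity check of the construction (6) and (7), the following sketch (illustrative, not the paper's implementation: `Psi` is an arbitrary smooth field standing in for the learned vector potential, and $F(\mathbf{x}) = \frac{1}{3}\mathbf{x}$ so that $\nabla \cdot F = 1$) verifies with finite differences that $\|G\|^{p-2} G$ recovers $\nabla \times \Psi - F$ and that its divergence equals $-1$, i.e., the $p$ -Poisson equation holds by construction:

```python
import numpy as np

p = 8.0                                   # any p >= 2; large p is the regime of interest

def F(x):                                 # as in the text: F(x) = x / 3, so div F = 1
    return x / 3.0

def Psi(x):                               # arbitrary smooth stand-in for the vector potential
    X, Y, Z = x
    return np.array([np.sin(Y * Z), X * Z ** 2, np.cos(X * Y)])

def jac(v, x, h=1e-5):
    """Central-difference Jacobian of a vector field v at the point x."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (v(x + e) - v(x - e)) / (2 * h)
    return J

def curl(v, x):
    J = jac(v, x)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

def G(x):                                 # Eq. (7); the (p-2)/(p-1) power stays bounded
    w = curl(Psi, x) - F(x)
    return w / np.linalg.norm(w) ** ((p - 2.0) / (p - 1.0))

def flux(x):                              # ||G||^{p-2} G, which must equal curl(Psi) - F
    g = G(x)
    return np.linalg.norm(g) ** (p - 2.0) * g

x0 = np.array([0.3, -0.2, 0.5])
assert np.allclose(flux(x0), curl(Psi, x0) - F(x0))   # Eq. (6) holds by construction

div_flux = np.trace(jac(flux, x0, h=1e-4))
print(div_flux)   # ~ -1, since div(curl Psi) - div F = -1, i.e. -Delta_p u = 1
```

Because divergence of a curl vanishes identically, any choice of `Psi` yields a field satisfying (1), which is exactly what makes (7) a hard constraint.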
Therefore, the second loss term in (4) can be eliminated by approximating the potential function $\Psi$ by a neural network and defining the auxiliary output $G$ through the hard constraint (7). To sum up, we use a loss function of the form + +$$ +\mathcal{L}_{p\text{-Poisson}} = \int_{\Gamma} |u| \, d\mathbf{x} + \lambda_{1} \int_{\Omega} \|\nabla u - G\|^{2} \, d\mathbf{x}, \tag{8} +$$ + +where $G$ is obtained by (7), the first term is responsible for imposing the boundary condition of (1), and the second term enforces the constraint $G = \nabla u$ between the primary and auxiliary outputs. It is worth mentioning that $G$ in (7) is designed to exactly satisfy the $p$ -Poisson equation (1). + +An advantage of the proposed loss function (8) and the hard constraint (7) is that (1) can be solved for sufficiently large $p$ , which is critical for a better approximation of the SDF. This is not straightforward in (3) or (4), because the $(p-2)$ -th power with a large $p$ easily exceeds the range of floating-point arithmetic. On the other hand, in (7) we use the $(p-2)/(p-1)$ -th power, which allows stable computation even when $p$ becomes arbitrarily large. The surface reconstruction with varying $p$ in Figure 7 shows that using a large enough $p$ is crucial to obtain a good reconstruction: as $p$ increases, the reconstruction gets closer and closer to the point cloud. Furthermore, it is worth noting that the proposed representation expresses the second-order PDE (1) with first-order derivatives only. By reducing the order of the derivatives, the computational graph becomes simpler than that of (3) or (4). + +Note that one could instead directly solve the eikonal equation $\|\nabla u\| = 1$ with an auxiliary variable $H = \nabla u$ as an output of the neural network: + +$$ +\min_{u, \|H\| = 1} \int_{\Gamma} |u| \, d\mathbf{x} + \eta \int_{\Omega} \|\nabla u - H\|^{2} \, d\mathbf{x}, \tag{9} +$$ + +where $\eta > 0$ .
The above loss function may produce a non-unique solution of the eikonal equation, which causes numerical instability and an undesirable estimate of the reconstructed surface; see Figure 1. To alleviate this issue, the vanishing viscosity method is used in [37, 49] to approximate the SDF by $u_{\sigma}$ as $\sigma \to 0$ , a solution of $-\sigma \triangle u_{\sigma} + \mathrm{sign}(u_{\sigma})(|\nabla u_{\sigma}| - 1) = 0$ . However, the results depend on the hyper-parameter $\sigma > 0$ , which is related to the resolution of the discretized computational domain and the order of the numerical scheme [17, 24]. + +# 3.2 Curl-free constraint + +In the penalty method, enforcing $G = \nabla u$ more strictly requires progressively larger values of $\lambda_{1}$ in (8), but in practice we cannot make $\lambda_{1}$ infinitely large. Another condition for enforcing the constraint $G = \nabla u$ follows from the differential vector identity which says that a conservative vector field is curl-free: + +$$ +\nabla \times G = 0 \Longleftrightarrow G = \nabla u \tag{10} +$$ + +for some scalar potential function $u$ . While it may seem straightforward, adding a penalty term $\int_{\Omega} \|\nabla \times G\|^{2} \, d\mathbf{x}$ on top of (8) is fraught with problems. Since $G$ is calculated by using a curl operation (7), this penalty term creates a long and complex computational graph. In addition, it has been reported that such loss functions, which include high-order derivatives computed by automatic differentiation, induce a loss landscape that is difficult to optimize [34, 59]. To relax this issue, we introduce another auxiliary variable $\tilde{G}$ , with the constraints $G = \tilde{G}$ and $\nabla \times \tilde{G} = 0$ .
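The identity (10) is easy to probe numerically. In the following illustrative sketch (not the paper's code), a field obtained as the gradient of a scalar potential has numerically vanishing curl, while a generic field does not:

```python
import numpy as np

def grad(f, x, h=1e-5):
    """Central-difference gradient of a scalar field f at the point x."""
    g = np.zeros(3)
    for j in range(3):
        e = np.zeros(3); e[j] = h
        g[j] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def curl(v, x, h=1e-4):
    """Central-difference curl of a vector field v at the point x."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (v(x + e) - v(x - e)) / (2 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

u = lambda x: np.sin(x[0]) * x[1] + x[2] ** 2        # any smooth scalar potential
conservative = lambda x: grad(u, x)                  # G = grad u  =>  curl G = 0
rotational = lambda x: np.array([-x[1], x[0], 0.0])  # not a gradient field

x0 = np.array([0.4, 0.1, -0.3])
print(np.linalg.norm(curl(conservative, x0)))        # ~0 (finite-difference noise)
print(np.linalg.norm(curl(rotational, x0)))          # 2.0, since curl(-y, x, 0) = (0, 0, 2)
```

A vanishing curl penalty therefore pushes $\tilde{G}$ toward the set of conservative fields, i.e., toward gradients of some scalar potential.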
+ +By incorporating the new auxiliary variable $\tilde{G}$ and its curl-free constraint, we have the following loss function: + +$$ +\mathcal{L}_{\mathrm{PINC}} = \mathcal{L}_{p\text{-Poisson}} + \lambda_{2} \int_{\Omega} \left\| G - \tilde{G} \right\|^{2} d\mathbf{x} + \lambda_{3} \int_{\Omega} \left\| \nabla \times \tilde{G} \right\|^{2} d\mathbf{x}. \tag{11} +$$ + +![](images/97d69716e9e6e0a3b2cb5dfb6715423dd6dd056ae0c41b1040e3e86340b00046.jpg) +Figure 2: Visualization of the augmented network structure with two auxiliary variables. + +Note that the optimal $\tilde{G}$ should have a unit norm according to the eikonal equation. To facilitate training, we relax this nonconvex equality condition into the convex constraint $\| \tilde{G} \| \leq 1$ . To this end, we parameterize a second auxiliary network output $\tilde{\Psi}$ and define $\tilde{G}$ by + +$$ +\tilde{G} = \mathcal{P}(\tilde{\Psi}) := \frac{\tilde{\Psi}}{\max\left\{1, \|\tilde{\Psi}\|\right\}}, \tag{12} +$$ + +where $\mathcal{P}$ is the projection operator onto the three-dimensional unit ball. Appendix A provides further discussion on the importance of the curl-free term for learning a conservative vector field. + +Figure 2 illustrates the proposed network architecture. The primary and auxiliary variables are trained in a single network, instead of being trained separately in individual networks. The number of network parameters remains almost the same, since only the output dimension of the last layer is increased by six, while all hidden layers are shared. + +# 3.3 Proposed loss function + +When a real point cloud of a closed surface is acquired by range scanners, some parts of the surface are inevitably occluded: concave regions may be invisible from all feasible measurement angles [35]. This results in relatively large holes in the measured point cloud.
Since there are no points in the middle of a hole, a criterion is needed for how to fill it in. To keep the focus on assessing the quality of $\mathcal{L}_{\mathrm{PINC}}$ (11) in this paper, we choose a simple rule: minimize the area of the zero level set of $u$ : + +$$ +\mathcal{L}_{\text{total}} = \mathcal{L}_{\mathrm{PINC}} + \lambda_{4} \int_{\Omega} \delta_{\epsilon}(u) \|\nabla u\| \, d\mathbf{x}, \tag{13} +$$ + +where $\lambda_4 > 0$ and $\delta_{\epsilon}(x) = 1 - \tanh^2\left(\frac{x}{\epsilon}\right)$ is a smeared Dirac delta function with $\epsilon > 0$ . The minimization of the area is used in [21, 49] and in advanced models [15, 27, 63] to handle missing parts of the point cloud and improve the reconstruction. + +# 4 Experimental results + +In this section, we evaluate the performance of the proposed model in reconstructing 3D surfaces from point clouds. We study the following questions: (i) How does the proposed model perform compared to existing INRs? (ii) Is it robust to noise? (iii) What is the role of each component of the model and the loss? Each question is addressed in order in the following sections. + +Implementation As in previous studies [44, 23, 37], we use an 8-layer network with 512 neurons and a skip connection to the middle layer; only the output dimension of the last layer is increased by six due to the auxiliary variables. For (13), we empirically set the loss coefficients to $\lambda_1 = 0.1$ , $\lambda_2 = 0.0001$ , $\lambda_3 = 0.0005$ , and $\lambda_4 = 0.1$ and use $p = \infty$ in (7) for numerical simplicity. We implement all numerical experiments on a single NVIDIA RTX 3090 GPU. In all experiments, we use the Adam optimizer [32] with learning rate $10^{-3}$ decayed by 0.99 every 2000 iterations. + +![](images/69579c66777a7e502067dc8f268c18e8fe97ec3c111e1eb008151151104ccdc9.jpg) +Figure 3: 3D reconstruction results for the SRB and Thingi10K datasets.
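A side note on the area term in (13): since $\int_{\mathbb{R}} \delta_{\epsilon}(t)\, dt = 2\epsilon$ , the penalty is, up to the factor $2\epsilon$ , an estimate of the area of the zero level set of $u$ . A small two-dimensional sketch (illustrative, not from the paper) recovers the perimeter of a circle from its SDF:

```python
import numpy as np

eps, r, n = 0.02, 0.5, 1000
h = 2.0 / n                                # grid spacing on the box [-1, 1]^2
xs = np.arange(-1.0 + h / 2, 1.0, h)       # cell-centered grid
X, Y = np.meshgrid(xs, xs)

u = np.hypot(X, Y) - r                     # SDF of a circle of radius r; ||grad u|| = 1
delta = 1.0 - np.tanh(u / eps) ** 2        # the smeared Dirac delta from (13)

area_term = delta.sum() * h * h            # approximates the integral in (13)
perimeter = area_term / (2.0 * eps)        # divide out the factor int delta_eps = 2 * eps
print(perimeter)                           # close to 2 * pi * r
```

Minimizing this term thus shrinks the measure of the zero level set, which is the hole-filling criterion used above.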
+ +Datasets We leverage two widely used benchmark datasets to evaluate the proposed model for surface reconstruction: the Surface Reconstruction Benchmark (SRB) [7] and Thingi10K [65]. The geometries in these datasets are challenging because of their complex topologies and incomplete observations. Following the prior works, we adopt five objects per dataset. We normalize the input data to be centered at zero with a maximum norm of one. + +Baselines We compare the proposed model with the following baselines: IGR [23], SIREN [55], SAL [3], PHASE [37], and DiGS [6]. All models are evaluated on raw point cloud data only, without surface normal vectors. A comparison with models that leverage surface normals as supervision is included in Appendix C. + +Metrics To estimate the quantitative accuracy of the reconstructed surface, we measure the Chamfer $(d_{\mathcal{C}})$ and Hausdorff $(d_H)$ distances between the ground-truth point clouds and the reconstructed surfaces. Moreover, we report the one-sided distances $d_{\vec{\mathcal{C}}}$ and $d_{\vec{H}}$ between the noisy data and the reconstructed surfaces. Please see Appendix B.2 for precise definitions. + +# 4.1 Surface reconstruction + +We validate the performance of the proposed PINC (13) in surface reconstruction in comparison to other INR baselines. For a fair comparison, we consider the baseline models that were trained without a normal prior. Table 1 summarizes the numerical comparison on SRB. We report the results of the baselines from [37, 49, 6]. The results show that the reconstruction quality is on par with the leading INRs, and we achieve state-of-the-art performance in terms of Chamfer distance. + +Table 1: Results on surface reconstruction of SRB.
| Model | Anchor (GT) | Anchor (Scans) | Daratech (GT) | Daratech (Scans) | DC (GT) | DC (Scans) | Gargoyle (GT) | Gargoyle (Scans) | Lord Quas (GT) | Lord Quas (Scans) |
|---|---|---|---|---|---|---|---|---|---|---|
| IGR | 0.45 / 7.45 | 0.17 / 4.55 | 4.9 / 42.15 | 0.7 / 3.68 | 0.63 / 10.35 | 0.14 / 3.44 | 0.77 / 17.46 | 0.18 / 2.04 | 0.16 / 4.22 | 0.08 / 1.14 |
| SIREN | 0.72 / 10.98 | 0.11 / 1.27 | 0.21 / 4.37 | 0.09 / 1.78 | 0.34 / 6.27 | 0.06 / 2.71 | 0.46 / 7.76 | 0.08 / 0.68 | 0.35 / 8.96 | 0.06 / 0.65 |
| SAL | 0.42 / 7.21 | 0.17 / 4.67 | 0.62 / 13.21 | 0.11 / 2.15 | 0.18 / 3.06 | 0.08 / 2.82 | 0.45 / 9.74 | 0.21 / 3.84 | 0.13 / 4.14 | 0.07 / 4.04 |
| PHASE | 0.29 / 7.43 | 0.09 / 1.49 | 0.35 / 7.24 | 0.08 / 1.21 | 0.19 / 4.65 | 0.05 / 2.78 | 0.17 / 4.79 | 0.07 / 1.58 | 0.11 / 0.71 | 0.05 / 0.74 |
| DiGS | 0.29 / 7.19 | 0.11 / 1.17 | 0.20 / 3.72 | 0.09 / 1.80 | 0.15 / 1.70 | 0.07 / 2.75 | 0.17 / 4.10 | 0.09 / 0.92 | 0.12 / 0.91 | 0.06 / 0.70 |
| PINC | 0.29 / 7.54 | 0.09 / 1.20 | 0.37 / 7.24 | 0.11 / 1.88 | 0.14 / 2.56 | 0.04 / 2.73 | 0.16 / 4.78 | 0.05 / 0.80 | 0.10 / 0.92 | 0.04 / 0.67 |

Each cell reports $d_{\mathcal{C}}$ / $d_H$.
+ +Table 2: Results on surface reconstruction of Thingi10K. + +
Each cell reports $d_C$ / $d_H$.

| Model | Squirrel | Buser head | Screwstar | Frogrock | Pumpkin |
| --- | --- | --- | --- | --- | --- |
| IGR | 0.36 / 11.97 | 0.38 / 5.95 | 0.18 / 3.02 | 0.48 / 12.05 | 0.11 / 1.13 |
| SIREN | 0.47 / 5.66 | 0.43 / 4.81 | 0.27 / 4.98 | 0.78 / 14.75 | 0.46 / 5.03 |
| DiGS | 0.50 / 12.45 | 0.39 / 10.64 | 0.26 / 6.33 | 0.45 / 10.50 | 0.32 / 8.03 |
| PINC | 0.35 / 11.55 | 0.37 / 6.19 | 0.17 / 3.00 | 0.43 / 11.06 | 0.10 / 1.90 |
We further verify the accuracy of the reconstructed surfaces on the Thingi10K dataset by measuring the same metrics. For Thingi10K, we reproduce the results of IGR, SIREN, and DiGS without normal vectors using the official codes. The results on Thingi10K presented in Table 2 show that the proposed method achieves superior performance compared to existing approaches. PINC achieves similar or better metric values on all objects.

The qualitative results are presented in Figure 3. SIREN, which imposes high-frequency features on the model by using a periodic sine activation function, recovers a somewhat torn surface. Similarly, DiGS recovers rough and ragged surfaces; for example, the human face and the squirrel body are not smooth and are rendered unevenly. On the other hand, IGR provides smooth surfaces but tends to over-smooth details such as the gargoyle's wings and the detail on the star-shaped bolt head of screwstar. The results confirm that the proposed PINC (13) combines both advantages: PINC represents a smooth and detailed surface. More results can be found in Appendix C.

# 4.2 Reconstruction from noisy data

In this section, we analyze whether the proposed PINC (13) is robust to the presence of noise in the input point data. In many situations, the samples obtained by the scanning process contain a lot of noise, and inaccurate surface normals are estimated from these noisy samples. Therefore, accurate reconstruction using only noisy data without normal vectors is an important task. To investigate the robustness to noise, we perturb the data with additive Gaussian noise with mean zero and two standard deviations, 0.005 and 0.01.

We quantify the ability of the proposed model to handle noise in the input points. The qualitative results are shown in Figure 4. Compared to existing methods, the results demonstrate the superior resilience of the proposed model with respect to noise corruption in the input samples.
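The perturbation protocol just described is a one-liner; a sketch for completeness (the function name is ours):

```python
import numpy as np

def add_scan_noise(points, sigma, seed=0):
    # Perturb every 3D point with zero-mean isotropic Gaussian noise.
    # In the experiments above sigma is 0.005 (low) or 0.01 (high),
    # relative to inputs normalized to a maximum norm of one.
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, size=points.shape)
```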
We can observe that SIREN and DiGS recover broken surfaces that degrade into small grains as the noise level increases. On the other hand, the proposed model produces a relatively smooth reconstruction. The results show that PINC is less sensitive to noise than the others.

# 4.3 Ablation studies

This section is devoted to ablation analyses which show that each part of the proposed loss function $\mathcal{L}_{\mathrm{total}}$, in conjunction with the divergence-free splitting architecture, plays an important role in high-quality reconstruction.

![](images/7a1f6fe2d00d454d6c2a9f4c5681428ff62cf44666e4251dcac5d1d7d936204a.jpg)
Figure 4: Reconstruction results from noisy observations. Two levels of additive Gaussian noise with standard deviations $\sigma = 0.005$ (low) and 0.01 (high) are considered.

Effect of curl-free constraint We first study the effect of the curl-free constraint on reconstructing high-fidelity surfaces. To investigate its effectiveness, we compare against the performance of PINC without the curl-free loss term, i.e., the model trained with the loss function $\mathcal{L}_{p\text{-Poisson}}$ (8). The results on the SRB dataset are reported in Table 3 and Figure 5. Figure 5 shows that the variable splitting method, which satisfies the $p$-Poisson equation as a hard constraint (without the curl-free condition), recovers a fairly decent surface, but it generates oversmoothed surfaces and details are lost. However, as we can see from the qualitative result reconstructed with the curl-free constraint, this constraint allows us to capture the details that PINC without the curl-free condition cannot recover. The metric values presented in Table 3 also provide clear evidence of the need for the curl-free term. To further examine the necessity of the additional auxiliary variable $\tilde{G}$, we conduct an additional experiment by applying the curl-free loss term directly on $G$ without the use of $\tilde{G}$.
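As a quick numerical illustration of the property being ablated here, the curl of a sampled vector field can be checked with finite differences: a gradient field has (numerically) zero curl, while a rotational field does not. This sketch is independent of the paper's autograd-based implementation; the names are ours.

```python
import numpy as np

def curl(F, h):
    # Finite-difference curl of a vector field F sampled on a regular 3D grid
    # with spacing h; F has shape (nx, ny, nz, 3).
    dFz_dy = np.gradient(F[..., 2], h, axis=1)
    dFy_dz = np.gradient(F[..., 1], h, axis=2)
    dFx_dz = np.gradient(F[..., 0], h, axis=2)
    dFz_dx = np.gradient(F[..., 2], h, axis=0)
    dFy_dx = np.gradient(F[..., 1], h, axis=0)
    dFx_dy = np.gradient(F[..., 0], h, axis=1)
    return np.stack([dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy], axis=-1)
```

For example, $\nabla(x^2 + y^2 + z^2) = (2x, 2y, 2z)$ yields zero curl, whereas the rotational field $(-y, x, 0)$ yields a constant curl $(0, 0, 2)$.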
The results are presented in the second row of Table 3. They indicate that taking the curl of $G$, which is itself constructed by taking the curl of $\Psi$ in (7), leads to a suboptimal reconstruction. This is likely due to a challenging optimization landscape caused by the consecutive automatic differentiation [59]. The results provide numerical evidence of the necessity of introducing $\tilde{G}$.

Table 3: Quantitative results on the ablation study of the curl-free term.
| Model | GT $d_C$ | GT $d_H$ | Scans $d_{\vec{C}}$ | Scans $d_{\vec{H}}$ |
| --- | --- | --- | --- | --- |
| wo/ curl-free | 0.20 | 4.96 | 0.12 | 2.98 |
| w/ curl-free on $G$ | 4.17 | 52.26 | 0.48 | 6.03 |
| w/ curl-free on $\tilde{G}$ | 0.16 | 4.78 | 0.05 | 0.80 |
![](images/d14183170416721c93eb3d758001e23f9f8aecb5600e94030e78352852d9a8d5.jpg)
Figure 5: Comparison of surface reconstruction without (left) and with (right) the curl-free constraint.

Effect of minimal area criterion We study the effect of the minimal area criterion suggested in Section 3.3. In real scenarios, there are unobserved regions where the surface has not been measured. To fill such holes, the minimal surface area criterion is considered. Figure 6 clearly shows this effect. The daratech model of SRB has a hole in the back. Probably because of this hole, spurious non-manifold parts spread out, as shown in the left figure, when the minimal area is not considered. However, we can see that adding the minimal area loss term alleviates this problem. We note that, except for daratech, we did not encounter this problem, because the other data are point clouds sampled from closed surfaces and do not involve hole filling. Indeed, we empirically observe that the results are quite similar with and without the minimal area term for all data other than daratech.

![](images/c613e5766dfc7a7d9c59eda63a4cc45571007ec6830be26b311e49a8d826d99c.jpg)
(a) wo/ area loss

![](images/18883227f9f57dcba3fffc9e0740739f19e61c789b5eef9df429368e1ec02deb.jpg)
(b) w/ area loss
Figure 6: Comparison of surface recovery without (a) and with (b) the minimum area criterion.

Effect of large $p$ The solution of the $p$-Poisson equation (1) approaches the SDF as $p$ becomes infinitely large. Therefore, it is natural to expect that using a large $p$ is beneficial. Here, we conduct experiments on the effect of $p$. We define $G$ with various $p = 2, 10,$ and 100 and learn the SDF with it. Figure 7 shows surfaces recovered from the Gargoyle data in SRB with different $p$ values. When $p$ is as small as 2, it is clearly difficult to reconstruct a compact surface from the points.
When $p$ is 10, a much better surface is constructed than with $p = 2$, but by-products still remain around the small holes. Furthermore, a large value of $p = 100$ provides a quite accurate reconstruction. This experimental result demonstrates that a more accurate approximation can be obtained by using a large $p$, which is consistent with the theory, and once again highlights the advantage of the variable splitting method (7) proposed in Section 3.1, which allows an arbitrarily large $p$ to be used. Note that previous approaches have not been able to use a large $p$ because the numerical value of the $p$-th power easily exceeds the limit of floating-point precision. On the other hand, the proposed method is amenable to large $p$, and hence the reconstruction becomes closer to the point cloud.

# 5 Conclusion and limitations

We presented a $p$-Poisson equation-based shape representation learning method, termed PINC, that reconstructs high-fidelity surfaces using only the locations of given points. We introduced the gradient of the SDF as an auxiliary network output and incorporated the $p$-Poisson equation into the auxiliary variable as a hard constraint. The curl-free constraint was also used to provide a more accurate representation. Furthermore, minimal surface area regularization was considered to provide a compact surface and to overcome the ill-posedness of the surface reconstruction problem caused by unobserved points. The proposed PINC successfully achieved faithful surfaces with intricate details and was robust to noisy observations.

The minimization of the surface area is used to reconstruct missing parts of the points under the assumption that the point cloud is measured from a closed surface.
Regarding the hole-filling strategy, further discussion and investigation of various constraints, such as mean curvature or the total variation of the gradient, is still needed. At present, the proposed PDE-based framework is limited to closed surfaces and is inadequate for reconstructing open surfaces. We leave the development of open surface reconstruction as future work. Establishing a neural network initialization that favors the auxiliary gradient of the SDF would also be an interesting avenue. Furthermore, the computational cost of convergence may differ when using and not using auxiliary variables. Analyzing the convergence speed or computational cost of utilizing auxiliary variables versus not utilizing them is a worthwhile direction for future research.

# 6 Societal Impacts

The proposed PINC allows high-quality representation of 3D shapes from only a raw unoriented 3D point cloud. It has many potential downstream applications, including product design, security, medical imaging, robotics, and the film industry. We are aware that accurate 3D surface reconstruction can be used in malicious settings, such as the unauthorized reproduction of machines without consent and digital impersonation. However, this work does not aim to develop techniques for abuse, and we hope and encourage users of the proposed model to concentrate on the positive impact of this work.

![](images/3348b7ed53c2bd94253dfb25818f5a439a2f53c60cb1922d3980111a479e8b6c.jpg)
(a) $p = 2$

![](images/a94a5e1f1b63f997440d4f9761cfc9188192dc697f2adda81ac6b8af0c7308cd.jpg)
(b) $p = 10$

![](images/dbae8551186916a64eeb7a1db24c9bb8107aaeee59dbf67412332ae86ebd85c9.jpg)
(c) $p = 100$
Figure 7: Surface reconstruction of anchor data with various $p$. The results show the importance of using a sufficiently large $p$ for an accurate approximation.

# 7 Acknowledgements

This work was supported by the NRF grant [2012R1A2C3010887] and the MSIT/IITP ([1711117093], [2021-0-00077], [No.
2021-0-01343, Artificial Intelligence Graduate School Program(SNU)]). Also, this project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 945478. + +# References + +[1] Nina Amenta and Marshall Bern. Surface reconstruction by Voronoi filtering. In Proceedings of the Fourteenth Annual Symposium on Computational Geometry, pages 39-48, 1998. +[2] Matan Atzmon, Niv Haim, Lior Yariv, Ofer Israelov, Haggai Maron, and Yaron Lipman. Controlling neural level sets. Advances in Neural Information Processing Systems, 32, 2019. +[3] Matan Atzmon and Yaron Lipman. SAL: Sign agnostic learning of shapes from raw data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2565-2574, 2020. +[4] Matan Atzmon and Yaron Lipman. SALD: Sign agnostic learning with derivatives. arXiv preprint arXiv:2006.05400, 2020. +[5] Alexander G. Belyaev and Pierre-Alain Fayolle. On variational and PDE-based distance function approximations. In Computer Graphics Forum, volume 34, pages 104-118. Wiley Online Library, 2015. +[6] Yizhak Ben-Shabat, Chamin Hewa Koneputugodage, and Stephen Gould. DiGS: Divergence guided shape implicit neural representation for unoriented point clouds. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19301-19310, 2022. +[7] Matthew Berger, Joshua A Levine, Luis Gustavo Nonato, Gabriel Taubin, and Claudio T Silva. A benchmark for surface reconstruction. ACM Transactions on Graphics (TOG), 32(2):1-17, 2013. +[8] Matthew Berger, Andrea Tagliasacchi, Lee M. Seversky, Pierre Alliez, Gael Guennebaud, Joshua A Levine, Andrei Sharf, and Claudio T Silva. A survey of surface reconstruction from point clouds. In Computer graphics forum, volume 36, pages 301-329. Wiley Online Library, 2017. + +[9] Matthew Berger, Andrea Tagliasacchi, Lee M. Seversky, Pierre Alliez, Joshua A. Levine, Andrei Sharf, and Claudio T. Silva. 
State of the art in surface reconstruction from point clouds. In 35th Annual Conference of the European Association for Computer Graphics, Eurographics 2014 - State of the Art Reports. The Eurographics Association, 2014.
[10] Fausto Bernardini, Joshua Mittleman, Holly Rushmeier, Cláudio Silva, and Gabriel Taubin. The ball-pivoting algorithm for surface reconstruction. IEEE Transactions on Visualization and Computer Graphics, 5(4):349-359, 1999.
[11] Tilak Bhattacharya, Emmanuele DiBenedetto, and Juan Manfredi. Limits as $p \to \infty$ of $\triangle_p u_p = f$ and related extremal problems. Rendiconti del Seminario Matematico Università e Politecnico di Torino, 47:15-68, 1989.
[12] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1):1-122, 2011.
[13] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
[14] Fatih Calakli and Gabriel Taubin. SSD: Smooth signed distance surface reconstruction. In Computer Graphics Forum, volume 30, pages 1993-2002. Wiley Online Library, 2011.
[15] Vicent Caselles, Gloria Haro, Guillermo Sapiro, and Joan Verdera. On geometric variational models for inpainting surface holes. Computer Vision and Image Understanding, 111(3):351-373, 2008.
[16] Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5939-5948, 2019.
[17] Alexander G. Churbanov and Petr N. Vabishchevich. Numerical solution of boundary value problems for the eikonal equation in an anisotropic medium. Journal of Computational and Applied Mathematics, 362:55-67, 2019.
[18] Tamal K. Dey, Gang Li, and Jian Sun. Normal estimation for point clouds: A comparison study for a Voronoi-based method.
In Proceedings Eurographics/IEEE VGTC Symposium Point-Based Graphics, pages 39-46. IEEE, 2005. +[19] Philipp Erler, Paul Guerrero, Stefan Ohrhallinger, Niloy J Mitra, and Michael Wimmer. Points2surf learning implicit surfaces from point clouds. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V, pages 108–124. Springer, 2020. +[20] Pierre-Alain Fayolle. Signed distance function computation from an implicit surface. arXiv preprint arXiv:2104.08057, 2021. +[21] Henry Fuchs, Zvi M Kedem, and Samuel P Uselton. Optimal surface reconstruction from planar contours. Communications of the ACM, 20(10):693-702, 1977. +[22] Tom Goldstein and Stanley Osher. The split Bregman method for L1-regularized problems. SIAM journal on imaging sciences, 2(2):323-343, 2009. +[23] Amos Gropp, Lior Yariv, Niv Haim, Matan Atzmon, and Yaron Lipman. Implicit geometric regularization for learning shapes. ICML'20, page 11. JMLR.org, 2020. +[24] Jooyoung Hahn, Karol Mikula, and Peter Frolkovič. Laplacian regularized eikonal equation with Soner boundary condition on polyhedral meshes. arXiv preprint arXiv:2301.11656, 2023. +[25] Jooyoung Hahn, Jie Qiu, Eiji Sugisaki, Lei Jia, Xue-Cheng Tai, and Hock Soon Seah. Stroke-based surface reconstruction. Numerical Mathematics: Theory, Methods and Applications, 6:297-324, 2013. + +[26] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026-1034, 2015. +[27] Yuchen He, Sung Ha Kang, and Hao Liu. Curvature regularized surface reconstruction from point clouds. SIAM Journal on Imaging Sciences, 13(4):1834-1859, 2020. +[28] Hugues Hoppe, Tony DeRose, Tom Duchamp, John McDonald, and Werner Stuetzle. Surface reconstruction from unorganized points. 
In Proceedings of the 19th annual conference on computer graphics and interactive techniques, pages 71-78, 1992.
[29] Hui Huang, Dan Li, Hao Zhang, Uri Ascher, and Daniel Cohen-Or. Consolidation of unorganized point clouds for surface reconstruction. ACM Transactions on Graphics (TOG), 28(5):1-7, 2009.
[30] Bernhard Kawohl. On a family of torsional creep problems. Journal für die reine und angewandte Mathematik, 410:1-22, 1990.
[31] Michael Kazhdan, Matthew Bolitho, and Hugues Hoppe. Poisson surface reconstruction. In Proceedings of the fourth Eurographics symposium on Geometry processing, volume 7, 2006.
[32] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[33] Leo Koenigsberger. Hermann von Helmholtz. Clarendon Press, 1906.
[34] Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems, 34:26548-26560, 2021.
[35] Marc Levoy, Kari Pulli, Brian Curless, Szymon Rusinkiewicz, David Koller, Lucas Pereira, Matt Ginzton, Sean Anderson, James Davis, Jeremy Ginsberg, Jonathan Shade, and Duane Fulk. The digital Michelangelo project: 3D scanning of large statues. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '00, pages 131-144, USA, 2000. ACM Press/Addison-Wesley Publishing Co.
[36] Peter Lindqvist. Notes on the $p$-Laplace equation. Number 161. University of Jyväskylä, 2017.
[37] Yaron Lipman. Phase transitions, distance functions, and implicit neural representations. In International Conference on Machine Learning, 2021.
[38] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. ACM SIGGRAPH computer graphics, 21(4):163-169, 1987.
[39] Baorui Ma, Zhizhong Han, Yu-Shen Liu, and Matthias Zwicker.
Neural-Pull: Learning signed distance functions from point clouds by learning to pull space onto surfaces. In International Conference on Machine Learning, 2020. +[40] Zoltan Csaba Marton, Radu Bogdan Rusu, and Michael Beetz. On fast surface reconstruction methods for large and noisy point clouds. In 2009 IEEE international conference on robotics and automation, pages 3218-3223. IEEE, 2009. +[41] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3D reconstruction in function space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4460-4470, 2019. +[42] Mateusz Michalkiewicz, Jhony K Pontes, Dominic Jack, Mahsa Baktashmotlagh, and Anders Eriksson. Deep Level Sets: Implicit surface representations for 3D shape inference. CoRR, 2019. +[43] Carsten Moenning and Neil A. Dodgson. Fast marching farthest point sampling for implicit surfaces and point clouds. Computer Laboratory Technical Report, 565:1-12, 2003. + +[44] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. DeepSDF: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 165-174, 2019. +[45] Yesom Park, Chang Hoon Song, Jooyoung Hahn, and Myungjoo Kang. ReSDF: Redistancing implicit surfaces using neural networks. arXiv preprint arXiv:2305.08174, 2023. +[46] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. +[47] Donald W. Peaceman and Henry H. Rachford, Jr. The numerical solution of parabolic and elliptic differential equations. Journal of the Society for Industrial and Applied Mathematics, 3(1):28-41, 1955. +[48] Songyou Peng, Chiyu Jiang, Yiyi Liao, Michael Niemeyer, Marc Pollefeys, and Andreas Geiger. 
Shape As Points: A differentiable Poisson solver. Advances in Neural Information Processing Systems, 34:13032-13044, 2021.
[49] Albert Pumarola, Artsiom Sanakoyeu, Lior Yariv, Ali K. Thabet, and Yaron Lipman. VisCo grids: Surface reconstruction with viscosity and coarea grids. In NeurIPS, 2022.
[50] Qiancheng Fu, Qingshan Xu, Yew-Soon Ong, and Wenbing Tao. Geo-Neus: Geometry-consistent neural implicit surfaces learning for multi-view reconstruction. Advances in Neural Information Processing Systems, 35:3403–3416, 2022.
[51] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686-707, 2019.
[52] Jack Richter-Powell, Yaron Lipman, and Ricky T. Q. Chen. Neural conservation laws: A divergence-free perspective. In Advances in Neural Information Processing Systems, 2022.
[53] Shaohui Liu, Yinda Zhang, Songyou Peng, Boxin Shi, Marc Pollefeys, and Zhaopeng Cui. DIST: Rendering deep implicit signed distance function with differentiable sphere tracing. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2019-2028, 2020.
[54] Vincent Sitzmann, Eric R. Chan, Richard Tucker, Noah Snavely, and Gordon Wetzstein. MetaSDF: Meta-learning signed distance functions. Advances in Neural Information Processing Systems, 2020.
[55] Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. In Proceedings of the 34th International Conference on Neural Information Processing Systems, number 626 in NIPS'20, Red Hook, NY, USA, 2020. Curran Associates Inc.
[56] Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng.
Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems, 33:7537–7547, 2020. +[57] J. Van Bladel. On Helmholtz's theorem in finite regions. Technical report, CM-P00066539, 1958. +[58] He Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas J Guibas. Normalized object coordinate space for category-level 6D object pose and size estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2642-2651, 2019. +[59] Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing, 43(5):A3055-A3081, 2021. + +[60] Yilun Wang, Junfeng Yang, Wotao Yin, and Yin Zhang. A new alternating minimization algorithm for total variation image reconstruction. SIAM Journal on Imaging Sciences, 1(3):248-272, 2008. +[61] Qiangeng Xu, Weiyue Wang, Duygu Ceylan, Radomir Mech, and Ulrich Neumann. DISN: Deep implicit surface network for high-quality single-view 3D reconstruction. Advances in neural information processing systems, 32, 2019. +[62] Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Basri Ronen, and Yaron Lipman. Multiview neural surface reconstruction by disentangling geometry and appearance. Advances in Neural Information Processing Systems, 33:2492-2502, 2020. +[63] Jingyang Zhang, Yao Yao, Shiwei Li, Tian Fang, David McKinnon, Yanghai Tsin, and Long Quan. Critical regularizations for neural surface reconstruction in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6270-6279, 2022. +[64] Hong-Kai Zhao, Stanley Osher, and Ronald Fedkiw. Fast surface reconstruction using the level set method. In Proceedings IEEE Workshop on Variational and Level Set Methods in Computer Vision, pages 194-201. IEEE, 2001. +[65] Qingnan Zhou and Alec Jacobson. 
Thingi10k: A dataset of 10,000 3D-printing models. arXiv preprint arXiv:1605.04797, 2016.

# A More discussion on curl-free term

This section is devoted to validating, both theoretically and empirically, the necessity of the curl-free loss term. One might think that the curl-free term is unnecessary, since a curl-free $G$ can be obtained by reducing the $L^2$ penalty term for the variable splitting constraint $\nabla u = G$. However, this penalty term is not sufficient to train $G$ as a conservative vector field. In the subsequent sections, we prove this theoretically and verify experimentally that this is indeed the case in practice.

# A.1 Theoretical justification

In the following theorem, we show that minimizing the $L^2$ energy of $\| \nabla u - G\|$ in (8) without the curl-free term is not sufficient to obtain a conservative vector field $G$.

Theorem A.1. There is a sequence $\{u_n, G_n\}_{n \in \mathbb{N}}$ such that $\int_{\Omega} \| \nabla u_n - G_n \|^2 d\mathbf{x} \to 0$ as $n \to \infty$, but $G_n$ does not converge to a curl-free vector field.

Proof. For every $\{u_n\}_{n\in \mathbb{N}}$ defined on $\Omega = [0,1]^3$, set

$$
G_{n}(x, y, z) = \nabla u_{n}(x, y, z) + \left(0, \frac{1}{2 \pi n} \sin(2 \pi n x), 0\right) \in \mathbb{R}^{3}.
$$

Then,

$$
\int_{\Omega} \left\| \nabla u_{n} - G_{n} \right\|^{2} d\mathbf{x} = \int_{\Omega} \left\| \left(0, \frac{1}{2 \pi n} \sin(2 \pi n x), 0\right) \right\|^{2} d\mathbf{x} \tag{14}
$$

$$
= \frac{1}{4 \pi^{2} n^{2}} \int_{\Omega} \sin^{2}(2 \pi n x) \, d\mathbf{x} \tag{15}
$$

$$
\to 0 \tag{16}
$$

as $n\to \infty$. However,

$$
\nabla \times G_{n}(x, y, z) = (0, 0, \cos(2 \pi n x))
$$

does not converge to zero. $\blacksquare$

Remark A.2.
Note that for the $G_{n}$ constructed in the proof of Theorem A.1 above, $\int_{\Omega} \| \nabla \times G_{n} \|^{2} d\mathbf{x} = \frac{1}{2}$ is a positive constant independent of $n$. This implies that we can prevent the pathological example above by adding the curl-free loss term. Therefore, the curl-free term is necessary to accurately learn the gradient field $G$.

# A.2 Empirical Validation

To examine the practical effect of the curl-free term on learning a conservative vector field, we include experimental results on a simple example of a sphere of radius 0.5 centered at the origin. Figure 8 depicts the level set contours of the trained $u$ on cross sections cut at the planes $x = 0.2$ and 0.4, along with the vector field of the trained $G$ projected onto these planes, together with the gradient field of the true SDF. We note that the $p$-Poisson equation (1) gives an SDF that is positive on the interior of the surface and negative on the outside; however, the contours depicted in Figures 8 and 9 are of the opposite sign of the trained $u$. As shown in Figure 8, the model trained without the curl-free term learns a vector field $G$ that is not curl-free, resulting in $G$ being distinct from the true gradient field. This ultimately prevents $u$ from correctly learning the SDF. On the other hand, it is evident that the model trained with the curl-free term converges fairly close to the true gradient field, which ultimately helps $u$ to accurately learn the SDF.

# B Implementation Details

In this section, we provide more details about the implementation for reproducibility. Note that our code is built on top of IGR $^{2}$ (MIT License).

![](images/d8fd30ac43ca400522b8da15e3f6280f1bec7df6aa882e0f5e817076d2ff7eee.jpg)
Figure 8: The trained results of a cross section cut in planes $x = 0.2$ (left) and $x = 0.4$ (right).
The level-sets show the signed distance fields $u$ learned by the proposed model with (top) and without (bottom) the curl-free term. Dashed contours depict the learned zero level set. Quivers represent the vector field of the trained auxiliary variable $G$, and the true gradient fields are plotted as red arrows.

![](images/b97b590228a8f28bc399c30e66fa5c4406c67b1a7a8ad0469908669f943c5dea.jpg)

# B.1 Experimental Setup

Parameter Tuning The proposed training loss $\mathcal{L}_{\mathrm{total}}$ (13) is a weighted sum of five loss terms with four regularization parameters $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$. In all surface reconstruction experiments, we use $\lambda_1 = 0.1$, $\lambda_2 = 0.0001$, $\lambda_3 = 0.0005$, and $\lambda_4 = 0.1$. In the proposed model, $p$ is also a hyperparameter to be chosen. Considering the theoretical fact that $p$ should be infinitely large, as well as numerical simplicity, we set $p = \infty$. We empirically confirm that there is no significant difference between $p = 100$ and $p = \infty$. Moreover, we set the smoothing parameter $\epsilon = 1$ for approximating the Dirac delta in (13).

Network Architecture As in previous studies [44, 23, 37], we represent the primary and auxiliary outputs by a single 8-layer multi-layer perceptron (MLP) $\mathbb{R}^3 \to \mathbb{R}^7$ with 512 neurons and a skip connection to the fourth layer; only the output dimension of the last layer is increased by six due to the two auxiliary variables; see Figure 2. We use the softplus activation function $\alpha(x) = \frac{1}{\beta}\ln\left(1 + e^{\beta x}\right)$ with $\beta = 100$. Network weights are initialized by the geometric initialization proposed in [3].

Training details The gradient and the curl of the networks are computed with the automatic differentiation library (autograd) [46]. In all experiments, we use the Adam optimizer [32] with learning rate $10^{-3}$ decayed by 0.99 every 2,000 iterations.
At each iteration, we uniformly sample 16,384 points $\mathbf{x} \in \mathcal{X}$ at random from the point cloud $\mathcal{X}$. We sample the collocation points of $\Omega$ as in [23]. The collocation points consist of global points and local points. The local collocation points are sampled by perturbing each of the 16,384 points drawn from the point cloud with a zero-mean Gaussian distribution with a standard deviation equal to the distance to the 50th nearest neighbor. The global collocation points are made up of approximately 2,000 points from the uniform distribution $U(-\eta, \eta)$ with $\eta = 1.1$. $F = \frac{1}{3}\mathbf{x}$ is utilized in all experiments.

Baseline models For the baseline models on the Thingi10K dataset, we use the official codes of IGR $^{2}$ (MIT License), SIREN $^{3}$ (MIT License), and DiGS $^{4}$ (MIT License). We faithfully follow the official implementations to train each model without a normal prior. For the variable splitting representation of the eikonal equation (9), there is a single auxiliary output. Consequently, we use the same 8-layer MLP with 512 nodes, but with an output dimension of 4. We normalize the auxiliary output to unit norm and use the normalized output to represent $H$.

# B.2 Evaluation

Metrics We measure the distance between two point clouds $\mathcal{X}$ and $\mathcal{Y}$ using the standard one-sided and double-sided $\ell_1$ Chamfer distances $d_{\vec{C}}$, $d_C$ and Hausdorff distances $d_{\vec{H}}$, $d_H$.
Each is defined as follows:

$$
d_{\vec{C}}(\mathcal{X}, \mathcal{Y}) = \frac{1}{|\mathcal{X}|} \sum_{\mathbf{x} \in \mathcal{X}} \min_{\mathbf{y} \in \mathcal{Y}} \|\mathbf{x} - \mathbf{y}\|_2,
$$

$$
d_C(\mathcal{X}, \mathcal{Y}) = \frac{1}{2}\left(d_{\vec{C}}(\mathcal{X}, \mathcal{Y}) + d_{\vec{C}}(\mathcal{Y}, \mathcal{X})\right),
$$

$$
d_{\vec{H}}(\mathcal{X}, \mathcal{Y}) = \max_{\mathbf{x} \in \mathcal{X}} \min_{\mathbf{y} \in \mathcal{Y}} \|\mathbf{x} - \mathbf{y}\|_2,
$$

$$
d_H(\mathcal{X}, \mathcal{Y}) = \max\left\{d_{\vec{H}}(\mathcal{X}, \mathcal{Y}),\; d_{\vec{H}}(\mathcal{Y}, \mathcal{X})\right\}.
$$

When we estimate the distance from a surface, we sample 10M points uniformly at random from the surface and then measure the distance to the sampled point clouds with the metrics defined above.

Furthermore, in order to measure the accuracy of the trained gradient field, we evaluate the Normal Consistency (NC) [41] between the learned $G$ and the surface normals as follows: given an oriented point cloud $\mathcal{X} = \{\mathbf{x}_i, \mathbf{n}_i\}_{i=1}^N$ comprising sampled points $\mathbf{x}_i$ and the corresponding outward normal vectors $\mathbf{n}_i$, NC is defined by

$$
NC(G, \mathbf{n}) = \frac{1}{N} \sum_{i=1}^{N} \left|G(\mathbf{x}_i)^{\mathrm{T}} \mathbf{n}_i\right|, \tag{17}
$$

the average absolute dot product between the trained $G$ and the surface normals.

Level set extraction We extract the zero level set of a trained neural network $u$ by running the classical marching cubes meshing algorithm [38] on a $512 \times 512 \times 512$ uniform grid.
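The metrics above can be sketched directly in NumPy (brute-force nearest neighbors for clarity; the 10M-point evaluation would use a KD-tree instead):

```python
import numpy as np

def pairwise_dist(X, Y):
    # Euclidean distances between all pairs; X: (n, 3), Y: (m, 3).
    return np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)

def chamfer_one_sided(X, Y):
    # d_C->(X, Y): mean distance from each point of X to its nearest neighbor in Y.
    return pairwise_dist(X, Y).min(axis=1).mean()

def chamfer(X, Y):
    # Double-sided l1 Chamfer distance d_C.
    return 0.5 * (chamfer_one_sided(X, Y) + chamfer_one_sided(Y, X))

def hausdorff_one_sided(X, Y):
    # d_H->(X, Y): worst-case distance from X to Y.
    return pairwise_dist(X, Y).min(axis=1).max()

def hausdorff(X, Y):
    # Symmetric Hausdorff distance: the larger of the two one-sided terms.
    return max(hausdorff_one_sided(X, Y), hausdorff_one_sided(Y, X))

def normal_consistency(G, n):
    # NC (17): mean absolute dot product of unit gradients G and unit normals n.
    return np.abs(np.sum(G * n, axis=1)).mean()
```

For identical point clouds both distances are zero, and the NC of perfectly aligned (or anti-aligned) unit fields is 1.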
# C Additional Results

# C.1 Uniqueness of the solution of the $p$-Poisson equation

In this section, we provide a numerical example supporting the strength of the proposed model regarding the uniqueness of the solution to the $p$-Poisson equation. The given points lie on a cube centered at the origin with edge length 1. We consider IGR [23] as an eikonal-rooted baseline, and we train IGR and the proposed PINC with three different network initializations: the geometric initialization [23] that IGR originally used, and the Kaiming uniform initialization [26] with two different random seeds. The results are summarized in Figure 9. They show that IGR converges to different solutions depending on the model initialization. In particular, IGR fails to learn the SDF of the cube except under the geometric initialization. On the other hand, PINC with the same initializations converges to the SDF in all three cases. These numerical results show that the proposed method pursues the unique solution of the PDE.

# C.2 Additional comparison with models utilizing surface normals

In Section 4, we compared against models that do not use the surface normal $\mathbf{n}$ as supervision. Here, we additionally consider a comparison with models that leverage normal supervision. We

![](images/6d635dbf7df2d0f9b251cfba0b701bc78ba59313a24a2965752aa032112d60fe.jpg)

![](images/a41658b9d5a2439c9d7d0cedad2b97106b32dbcefbe2ffc2632cead8621348ae.jpg)

![](images/14cf83741291179b3d66cd6aa33a53ab38bb93ce2eaee6a99102fae900eef84c.jpg)

![](images/fbe32c395ec30eea8bb3781f3c3db73aaf01ea546c3611d0e9cd2e34649dbc8d.jpg)
Figure 9: Experimental results show whether a method can find the SDF from different network initializations.
IGR and PINC are trained on the synthetic cube data with three network initializations: the geometric initialization (initialization 1) and the Kaiming initialization with two different random seeds (initializations 2 and 3). Each panel depicts the trained level-set contours on the cross-section in the plane $x = 0$. Red contours depict the trained zero level set.

![](images/ad307c75bfb34fc30fa588db8dbe13e766a96ab633b4be56d0986ed601281759.jpg)

![](images/496e154db7b533022d20ee7fe3d5452839cca54484c9a38765e80c72cf9a1b19.jpg)

Table 4: Comparison with models that use surface normal supervision $\mathbf{n}$ on SRB. The proposed model PINC did not utilize the surface normal. Each cell reports $d_C$ / $d_H$.

| Model | Normals | Anchor (GT) | Anchor (Scans) | Daratech (GT) | Daratech (Scans) | DC (GT) | DC (Scans) | Gargoyle (GT) | Gargoyle (Scans) | Lord Quas (GT) | Lord Quas (Scans) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VisCo | w/ $\mathbf{n}$ | 0.21 / 3.00 | 0.15 / 1.07 | 0.26 / 4.06 | 0.14 / 1.76 | 0.15 / 2.22 | 0.09 / 2.76 | 0.17 / 4.40 | 0.11 / 0.96 | 0.12 / 1.06 | 0.70 / 0.64 |
| IGR | w/ $\mathbf{n}$ | 0.22 / 4.71 | 0.12 / 1.32 | 0.25 / 4.01 | 0.08 / 1.59 | 0.17 / 2.22 | 0.09 / 2.61 | 0.16 / 3.52 | 0.06 / 0.81 | 0.12 / 1.17 | 0.07 / 0.98 |
| SAP | w/ $\mathbf{n}$ | 0.34 / 8.83 | 0.09 / 2.93 | 0.22 / 3.09 | 0.08 / 1.66 | 0.17 / 3.30 | 0.04 / 2.23 | 0.18 / 5.54 | 0.05 / 1.73 | 0.13 / 3.49 | 0.04 / 1.17 |
| PINC | wo/ $\mathbf{n}$ | 0.29 / 7.54 | 0.09 / 1.20 | 0.37 / 7.24 | 0.11 / 1.88 | 0.14 / 2.56 | 0.04 / 2.73 | 0.16 / 4.78 | 0.05 / 0.80 | 0.10 / 0.92 | 0.04 / 0.67 |

consider three baseline models as follows: (i) IGR trained with surface normal supervision, (ii) VisCo [49], a grid-based method based on the viscosity-regularized eikonal equation, and (iii) Shape As Points (SAP) [48], a model that revisits the classical Poisson Surface Reconstruction (PSR) [31] using deep learning. The results are reported in Table 4. The proposed model performs on par with the baselines despite not utilizing the surface normal. Considering that all of the baselines are PDE-based INR models, the results demonstrate the effectiveness of the proposed model (11) in reconstructing a surface from raw point clouds alone.

It is worth noting that the proposed model may be interpreted as PSR because of (8). More precisely, the Euler-Lagrange equation of (8) says that the variational problem (8), which seeks a scalar function $u$ whose gradient best approximates a given vector field $G$, transforms into the following Poisson problem:

$$
\left\{ \begin{array}{ll} \triangle u = \nabla \cdot G & \text{in } \Omega \\ u = 0 & \text{on } \Gamma . \end{array} \right. \tag{18}
$$

In the conventional PSR, the gradient field is set to the surface normals. Thus, the auxiliary variable $G$ can be regarded as playing the role of the surface normals in PSR. However, the vector field $G$ in the proposed model is not obtained from the oriented point cloud but is a learnable function trained jointly with $u$. Moreover, since we bake the $p$-Poisson equation into $G$ as a hard constraint in (7), we obtain a continuous SDF rather than an indicator function as in PSR and SAP. The results confirm that simultaneous training of the gradient field and the SDF, that is, the variable

Table 5: Normal consistency of reconstructed surfaces on SRB.
| Model | Anchor | Daratech | DC | Gargoyle | Lord Quas |
| --- | --- | --- | --- | --- | --- |
| IGR | 0.9706 | 0.8526 | 0.9800 | 0.9765 | 0.9901 |
| SIREN | 0.9438 | 0.9682 | 0.9735 | 0.9392 | 0.9762 |
| DiGS | 0.9767 | 0.9680 | 0.9826 | 0.9788 | 0.9907 |
| SAP | 0.9750 | 0.9414 | 0.9636 | 0.9731 | 0.9838 |
| PINC | 0.9754 | 0.9311 | 0.9828 | 0.9803 | 0.9915 |

Table 6: Normal consistency of reconstructed surfaces on Thingi10K.
| Model | Squirrel | Pumpkin | Frogrock | Screstar | Buser head |
| --- | --- | --- | --- | --- | --- |
| IGR | 0.9820 | 0.9565 | 0.9509 | 0.9709 | 0.9249 |
| SIREN | 0.9529 | 0.8996 | 0.9035 | 0.9142 | 0.8860 |
| DiGS | 0.9557 | 0.9353 | 0.9468 | 0.9386 | 0.9171 |
| SAP | 0.9791 | 0.9520 | 0.9319 | 0.9767 | 0.9004 |
| PINC | 0.9816 | 0.9583 | 0.9545 | 0.9805 | 0.9376 |

splitting method, achieves similar or even better surface restoration than SAP, even without using the given surface normal $\mathbf{n}$.

# C.3 Additional quantitative results

We reported Chamfer and Hausdorff distances in Tables 1 and 2, but these two metrics do not reflect the complete quality of the restored surface. Here, we evaluate Normal Consistency (NC) (17), which measures how well the model captures higher-order information of the surface. The results on the SRB and Thingi10K datasets are summarized in Tables 5 and 6, respectively. Overall, the proposed model achieves a better NC score than the baseline models. In particular, the results show that the proposed model achieves higher NC on the tested examples than SAP, even though it does not employ surface normal supervision.

Moreover, we measure the Chamfer and Hausdorff distances for the ablation studies reported in Figures 6 and 7 of Section 4.3 and summarize them in Tables 7 and 8, respectively.

More results on the effect of $p$ Theoretically, an accurate SDF is obtained as $p$ grows to infinity. The results shown in Figure 10 confirm that the same holds in practice: larger values of $p$ induce better reconstructions. This phenomenon is also observed in Figure 7. Moreover, it can be seen that $p = \infty$, which we used in the implementation, gives a qualitative result similar to $p = 100$. These experimental results once again underline the importance of being able to use a large $p$.

Furthermore, we provide numerical verification for the use of $p = \infty$ in Figure 11. For notational convenience, we write $u_p$ to denote the dependence of the solution on the parameter $p$. Figure 11 depicts graphs of the mean squared error (MSE) between $u_p$ and $u_\infty$ over different $p$. MSEs are computed by discretizing the computational domain $\Omega$ into a $100 \times 100 \times 100$ uniform grid.
The results show that the MSE decreases as $p$ increases. In other words, $u_p$ approaches $u_\infty$ as $p$ grows, which justifies the use of $p = \infty$.

Table 7: Quantitative results on the effect of the area loss on daratech. Each cell reports $d_C$ / $d_H$.

| Model | GT | Scans |
| --- | --- | --- |
| PINC wo/ area loss | 4.26 / 53.34 | 0.20 / 2.81 |
| PINC w/ area loss | 0.37 / 7.24 | 0.11 / 1.88 |

Table 8: Quantitative results on the ablation study of $p$.
| Model | Gargoyle GT ($d_C$ / $d_H$) | Gargoyle Scans ($d_C$ / $d_H$) | Anchor GT ($d_C$ / $d_H$) | Anchor Scans ($d_C$ / $d_H$) |
| --- | --- | --- | --- | --- |
| $p = 2$ | 3.96 / 43.13 | 0.51 / 6.31 | 4.32 / 46.66 | 0.77 / 14.58 |
| $p = 10$ | 0.22 / 8.14 | 0.10 / 1.19 | 0.50 / 7.24 | 0.12 / 3.02 |
| $p = 100$ | 0.17 / 4.90 | 0.10 / 0.82 | 0.31 / 7.20 | 0.13 / 1.80 |
| $p = \infty$ | 0.16 / 4.78 | 0.05 / 0.80 | 0.29 / 7.19 | 0.11 / 1.17 |

![](images/d14f24d95ab69c5df1bdc04c53444d41050709559a53ac632eadb01d21f4db61.jpg)
$p = 2$

![](images/7d59984041464a81f240ae1456cac38cdb70f884e2cdf05e65e91fd4f8392bba.jpg)
$p = 10$

![](images/9728b4e515e75710c20a20db6d7f6a05ba0a7c1bcc0e91d539ac2a415efd1892.jpg)
$p = 100$

![](images/596a9e89abedf529ef1d1656447d2aa89e44cf3ca30a9392da3850ed0a52ba00.jpg)
$p = \infty$

Figure 10: Quality of surface reconstruction with varying $p$ from $p = 2$ to $p = \infty$.

![](images/1b8f7391870fa54b59b0f6de66d3ae04556ef8307a755769fe35c4fd0a31a491.jpg)
Figure 11: MSEs between $u_p$ and $u_\infty$ over different $p$.

![](images/ce1b705d8c54e5966bffb20c77243b1616cfed55a5c800b15cc5fac4858b86da.jpg)
Figure 12: Additional qualitative results of the surface reconstruction on the SRB and Thingi10K datasets.

# C.4 Additional qualitative results

Figure 12 provides additional qualitative results of surface reconstruction on SRB and Thingi10K, discussed in Section 4.1.

Reconstruction of large point clouds We further provide qualitative results for surface reconstruction from large models taken from Thingi10K. The adopted point clouds consist of 35K to 980K vertices. Figure 13 depicts the qualitative reconstruction results of PINC on these large point clouds. The model is trained with the same configuration used in Section 4.1.

# C.5 Training/Inference time

To investigate the computational cost of the proposed model, we measured the average execution time of PINC and the baselines. In Table 9, we report the average training time per iteration and the inference time at a resolution of $32^3$ voxels. The proposed model incurs a higher computational cost than the baseline models because of the curl computation via automatic differentiation.

![](images/a79a8e33fa9d1e0943e0b299b37f996d1ca3d19f767f1f7ddba95fc082695803.jpg)
Figure 13: Reconstructed surfaces of large point clouds from Thingi10K.
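Per-iteration timings such as those in Table 9 can be collected with a simple harness like the following. This is an illustrative sketch, not the authors' measurement code; on a GPU, the device must additionally be synchronized before each clock read.

```python
import time

def avg_ms_per_iteration(step_fn, n_warmup=10, n_iters=100):
    # Run warm-up steps first (kernel compilation, autotuning), then average
    # wall-clock time over n_iters steps, returned in milliseconds per iteration.
    for _ in range(n_warmup):
        step_fn()
    start = time.perf_counter()
    for _ in range(n_iters):
        step_fn()
    return (time.perf_counter() - start) / n_iters * 1e3
```

`time.perf_counter` is used rather than `time.time` because it is a monotonic, high-resolution clock intended for interval measurement.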
Table 9: Training and inference times for surface reconstruction on SRB.

| Model | IGR | SIREN | DiGS | PINC |
| --- | --- | --- | --- | --- |
| Training time (ms/iteration) | 48.34 | 13.11 | 52.34 | 295.01 |
| Inference time (ms) | 6.86 | 3.51 | 4.39 | 6.93 |
---

# $p$-value Adjustment for Monotonous, Unbiased, and Fast Clustering Comparison

Kai Klede$^{1}$, Thomas Altstidl$^{1}$, Dario Zanca$^{1}$, Björn Eskofier$^{1,2}$

$^{1}$ Machine Learning and Data Analytics (MaD) Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg

$^{2}$ Translational Digital Health Group, Institute of AI for Health, Helmholtz Zentrum München

{kai.klede; thomas.r.altstidl; dario.zanca; bjoern.eskofier}@fau.de

# Abstract

Popular metrics for clustering comparison, like the Adjusted Rand Index and the Adjusted Mutual Information, are type II biased.
The Standardized Mutual Information removes this bias but suffers from counterintuitive non-monotonicity and poor computational efficiency. We introduce the $p$-value adjusted Rand Index ($\mathrm{PMI}_2$), the first cluster comparison method that is type II unbiased and provably monotonous. The $\mathrm{PMI}_2$ has fast approximations that outperform the Standardized Mutual Information. We demonstrate its unbiased clustering selection, approximation quality, and runtime efficiency on synthetic benchmarks. In experiments on image and social network datasets, we show how the $\mathrm{PMI}_2$ can help practitioners choose better clustering and community detection algorithms.

# 1 Introduction

Clustering is fundamental to unsupervised learning, and practitioners can choose from many algorithms to partition a dataset into homogeneous clusters. Therefore, it is common to annotate parts of an otherwise unlabeled dataset and select the clustering algorithm that best reproduces the annotations [1]. A good selection crucially depends on the clustering comparison method, such as the Mutual Information (MI) [25] or the Rand Index (RI) [39]. The importance of the comparison method further increases with the advent of deep learning methods for applications such as community detection, in which comparison measures serve as components of loss functions during network training [10, 12, 33]. Some use cases admit multiple clustering solutions [23, 38], and clustering comparison can help identify qualitatively different solutions for a single dataset. Other applications of clustering comparison include categorical feature selection and consensus clustering [5, 17].

The MI and RI are biased towards clusterings with particular cluster size distributions [19] (type I bias). For example, the MI favors larger clusters for any reference clustering [24].
The Adjusted Rand Index (ARI) [11] and the Adjusted Mutual Information (AMI) [25] achieve a constant baseline value by subtracting the expected value under random permutation of the cluster labels. However, they still exhibit a bias when multiple clusterings are compared via a fixed ground truth (type II bias) [29], as opposed to comparing two random clusterings with each other. This type II scenario typically arises when selecting the best algorithm for a given task on a labeled subset of the data.

Romano et al. [30] showed, via the Tsallis entropy, that the AMI and ARI are special cases of generalized information-theoretic clustering comparison measures $\mathrm{AMI}_q$ and proposed standardization to resolve both types of biases. However, standardization generally exhibits a substantial time complexity of $\mathcal{O}(N^3 k_A \max(k_A, k_B))$ [30], where $N$ represents the number of data points and $k_A$ and $k_B$ denote the respective numbers of clusters. This complexity is prohibitive for many applications [15]. Gösgens et al. [9] found that the Standardized Mutual Information (SMI) does not increase monotonically as one clustering is adjusted to match a reference and therefore reject the SMI entirely.

This work presents the $p$-value of the $\mathrm{MI}_q$ (denoted $\mathrm{PMI}_q$) as a provably monotonous type II bias correction. We formally define type II bias and prove that the $\mathrm{PMI}_q$ does not suffer from it; for the $\mathrm{SMI}_q$, there is only empirical evidence [29, 30]. We show that the $\mathrm{PMI}_q$ is monotonous for $q \geq 2$. This includes the $p$-value of the RI, but for the MI, the $p$-value is not monotonous. When normalized with the normal CDF, the $\mathrm{SMI}_q$ approximates the $\mathrm{PMI}_q$, which we confirm via Monte Carlo simulation.
We reduce the runtime of the $\mathrm{SMI}_2$ from $\mathcal{O}(N^3 k_A \max(k_A, k_B))$ to a much more practical $\mathcal{O}(k_A k_B)$ by reformulating the variance term. We demonstrate the impact of type II unbiased algorithm selection for community detection on a social network dataset and for clustering on images of handwritten digits and human faces.

# 2 Generalized information theoretic clustering comparison measures

A clustering $A$ of $N$ data points is a partition of the set $\{1, \dots, N\}$ into disjoint subsets $A = \{A_1, \dots, A_{k_A}\}$. $A_i$ denotes the set of points in the $i$-th cluster, of size $a_i := |A_i|$, and $k_A$ is the number of clusters in $A$. Clustering comparison measures quantify the similarity between two clusterings $A$ and $B$ and can be expressed as a function of the contingency table (Table 1).

Table 1: Contingency table for clusterings $A$, $B$ with $k_A$ and $k_B$ clusters, respectively. Lower case $a_i$ and $b_j$ denote the sizes of the $i$-th and $j$-th clusters in $A$ and $B$, while $n_{ij}$ represents the size of their overlap. Clustering comparison measures can be expressed in terms of the elements of this contingency table.

![](images/9ea7d24ef00286acb77e817de5e1839a1fe0927942ce68c9e9f1c3c53abe5169.jpg)

While many clustering comparison methods exist in the literature [2, 9], many well-known methods like the Variation of Information, the Mirkin Index, or the Rand Index belong to the family of generalized information-theoretic clustering comparison measures [30, 31]. When adjusted for chance, these measures reduce to the mutual information with Tsallis $q$-entropy [35], which will be the focus of this work.

Definition 2.1 (Tsallis $q$-entropy). Let $q \in \mathbb{R}_+$, and $A$ be a clustering.
Then, the Tsallis $q$-entropy is

$$
H_q(A) = -\sum_{i=1}^{k_A} \left(\frac{a_i}{N}\right)^q \log_q \frac{a_i}{N}, \tag{1}
$$

with the $q$-logarithm $\log_q(y) \coloneqq (y^{1-q} - 1)/(1-q)$ if $q \neq 1$ and the natural logarithm for $q = 1$, where $x \log_1(x) = 0$ for $x = 0$.

The generalized mutual information is defined in analogy to the mutual information, but with the Tsallis $q$-entropy replacing the Shannon entropy.

Definition 2.2 (Generalized mutual information). Let $A, B$ be two clusterings of the set $\{1, \dots, N\}$ and $q \in \mathbb{R}_+$; then the generalized mutual information is

$$
\mathrm{MI}_q(A, B) = H_q(A) + H_q(B) - H_q(A, B). \tag{2}
$$

Here $H_q(A, B)$ denotes the joint $q$-entropy $H_q(\{A_i \cap B_j \mid A_i \in A \land B_j \in B\})$.

# 3 Adjustment for chance

The bare $\mathrm{MI}_q$ has limited value as a clustering comparison measure. When comparing two clusterings directly with one another, it is biased towards larger or smaller clusters, depending on the value of $q$ (type I bias) [19, 25]. But even after adjusting for type I bias, there is another, more subtle bias when multiple clusterings are compared via a single ground truth [29] (Figure 1). In Section 3.2, we introduce the $p$-value as an adjustment to the latter type II bias.

# 3.1 Type I bias

It is well known throughout the literature that the $\mathrm{MI}_1$ is biased towards smaller clusters in direct clustering comparisons [19, 25]. To make this precise, Gösgens et al. [9] defined a family of clustering distributions for which the expected similarity to a reference clustering should be constant:

Definition 3.1 (Element-symmetric distribution). A distribution over clusterings $\mathcal{B}$ is element-symmetric if every two clusterings $B$ and $B'$ with the same cluster sizes have the same probability.
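Definition 3.1 can be made concrete with the random permutation model used throughout the paper: permuting which elements carry which cluster label leaves the cluster sizes unchanged, so uniformly permuting a fixed label vector samples from an element-symmetric distribution. A minimal sketch (function names are ours, not the paper's):

```python
import numpy as np

def sample_element_symmetric(b, rng):
    # Permute which elements carry which label; cluster sizes are preserved,
    # so every clustering with these sizes is equally likely.
    return b[rng.permutation(len(b))]

rng = np.random.default_rng(0)
b = np.repeat(np.arange(3), [5, 3, 2])   # N = 10 elements, cluster sizes 5, 3, 2
b_perm = sample_element_symmetric(b, rng)
```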
+ +An example of an element-symmetric distribution is the uniform distribution over all clusterings of $N$ elements into $k$ clusters. If the clustering is random, a comparison measure should not favor one particular $k$ over another. If it does, we call it type I biased [9]. + +Definition 3.2 (Type I unbiased). A clustering measure $V$ is type I unbiased if there is a constant $c$ , such that for any clustering $A$ with $1 < k_{A} < N$ and every element-symmetric distribution $\mathcal{B}$ the expected value $\mathbb{E}_{B \sim \mathcal{B}}[V(A, B)] = c$ is constant. + +In other words, type I unbiased means that when comparing a fixed clustering $A$ to all permutations of any clustering $B$ , the average metric value is the same for all $A$ . As the $\mathrm{MI}_q$ has this type I bias, it is commonly adjusted by subtracting its expected value under random permutation, yielding a type I unbiased measure [30]. + +$$ +\mathrm {A M I} _ {q} (A, B) := \frac {\mathrm {M I} _ {q} (A , B) - \mathbb {E} _ {\sigma \in S _ {N}} [ \mathrm {M I} _ {q} (A , \sigma (B)) ]}{\frac {1}{2} \left(H _ {q} (A) + H _ {q} (B)\right) - \mathbb {E} _ {\sigma \in S _ {N}} [ \mathrm {M I} _ {q} (A , \sigma (B)) ]}. \tag {3} +$$ + +$S_{N}$ denotes the symmetric group and $\frac{1}{2} (H_q(A) + H_q(B))$ is an upper bound to the $\mathrm{MI}_q$ such that the $\mathrm{AMI}_q$ is normalized to $c = 0$ for random clusterings and upper bounded by 1 [25, 30]. Adjustments with respect to other random models are possible [8, 18]. However, the random permutation model remains the most popular and is the focus of this work. Due to the generalization using Tsallis entropy, the $\mathrm{AMI}_2$ corresponds to the Adjusted Rand Index [11, 30]. + +# 3.2 Type II bias + +However, in a typical external validation scenario, a single absolute value of the $\mathrm{AMI}_q$ is of little help. 
While it is easy to understand that an $\mathrm{AMI}_q$ of zero means a clustering algorithm is no better than random and a value of one means optimal agreement, the scale of the range in between is unclear. Therefore, the $\mathrm{AMI}_q$ values of multiple candidate solutions with respect to a reference are typically compared against each other to find the best algorithm for a given dataset.

As a toy model for this scenario, we uniformly generate 5000 clusterings of $N = 500$ elements for each number of clusters $k_B \in \{2, 6, 10, 14, 18, 22\}$ [29]. We compare them to a fixed clustering $A$ with $k_A = 10$ evenly sized clusters and plot the selection probabilities for the RI, MI, and $\mathrm{AMI}_q$ for $q \in \{1, 2\}$ (Figures 1a, b, d and e). The bare MI and RI and their adjusted variants $\mathrm{AMI}_q$ are biased towards certain values of $k_B$.

We generalize and formalize this observation by demanding that a clustering comparison measure favor no element-symmetric distribution over another.

Definition 3.3 (Type II unbiased). Let $V$ be a clustering comparison measure and $\mathcal{B}, \mathcal{B}'$ be element-symmetric clustering distributions. $V$ is type II unbiased if

$$
\mathbb{E}_{B \sim \mathcal{B},\, B' \sim \mathcal{B}'}\left[\theta\left(V(A, B) - V(A, B')\right)\right] = \frac{1}{2} \tag{4}
$$

for any clustering $A$ with $1 < k_A < N$, where $\theta$ denotes the Heaviside step function with $\theta(0) = 1/2$.

Intuitively, type I bias means that certain cluster sizes receive higher metric values. Type II bias, on the other hand, gives a higher relative rank to certain cluster sizes when multiple clusterings are compared with a ground truth.
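The generalized mutual information (Definitions 2.1 and 2.2, here for $q \neq 1$) and the left-hand side of (4) can be estimated with a short Monte Carlo sketch. This is illustrative only: a finite number of permutation samples stands in for the exact expectations over element-symmetric distributions, and the function names are ours.

```python
import numpy as np

def contingency(a, b):
    # n_ij = |A_i ∩ B_j| for integer label vectors a, b (Table 1).
    M = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(M, (a, b), 1.0)
    return M

def tsallis_mi(a, b, q=2.0):
    # MI_q(A, B) = H_q(A) + H_q(B) - H_q(A, B) with Tsallis q-entropy (q != 1 here).
    def H(p):
        p = p[p > 0]
        return -np.sum(p**q * (p**(1.0 - q) - 1.0) / (1.0 - q))
    P = contingency(a, b) / len(a)
    return H(P.sum(axis=1)) + H(P.sum(axis=0)) - H(P.ravel())

def type2_lhs(V, A, B, Bp, rng, n_mc=200):
    # Monte Carlo estimate of E[θ(V(A, B) - V(A, B'))] under uniform relabelings
    # of B and B' (Definition 3.3); a type II unbiased V gives ≈ 1/2.
    acc = 0.0
    for _ in range(n_mc):
        d = V(A, B[rng.permutation(len(B))]) - V(A, Bp[rng.permutation(len(Bp))])
        acc += 0.5 if d == 0 else float(d > 0)
    return acc / n_mc
```

For $q = 2$, the Tsallis entropy reduces to the Gini impurity $1 - \sum_i p_i^2$, which connects `tsallis_mi` to the Rand Index family discussed above.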
![](images/e7487146bc354ed76834a2fda4a24babcf343b493d3c99cd767c8f4dd62f2a87.jpg)

![](images/6ff615f3fad5203be0e6017349074253953426a09d09fda11aa8af6f177cf102.jpg)

![](images/9665013fa596444affb10f53ffe7fec53012a390beed6278a9a1130df3383848.jpg)

![](images/27749261f1f181a430b93abeb1ba73900f5cf803b52188a5a2e6081648d46a60.jpg)

![](images/f4ab1e7aed15632c6c58d72b9aee678c95a21c2b1ce50fdfaa82b1b04198dd57.jpg)
Figure 1: We compare a fixed reference clustering with $k_A = 10$ even clusters to random clusterings with $k_B \in \{2, 6, 10, 14, 18, 22\}$ clusters. The plot shows the selection probabilities of each $k_B$ for the MI and RI and their adjusted ($\mathrm{AMI}_q$) and our $p$-value adjusted ($\mathrm{PMI}_q$) variants after 5000 repetitions. The RI, MI, and $\mathrm{AMI}_q$ are type II biased, while our $\mathrm{PMI}_q$ selects each cluster size with equal probability.

![](images/60099909269362f1001f42a9a249b5155390dbf32d562ed735534fe04bbb7889.jpg)

Romano et al. [29] introduced standardization to correct for type II bias:

$$
\mathrm{SMI}_q(A, B) := \frac{\mathrm{MI}_q(A, B) - \mathbb{E}_{\sigma \in S_N}\left[\mathrm{MI}_q(A, \sigma(B))\right]}{\sqrt{\mathbb{E}_{\sigma \in S_N}\left[\mathrm{MI}_q(A, \sigma(B))^2\right] - \mathbb{E}_{\sigma \in S_N}\left[\mathrm{MI}_q(A, \sigma(B))\right]^2}}. \tag{5}
$$

They observed in numerical simulations that, in the toy model above (Figure 1), the $\mathrm{SMI}_q$ selects each $k_B$ with approximately equal probability [30].

However, the $\mathrm{MI}_q$ is not normally distributed under random permutation (Figure 2a), and standardization is only an approximation to the true $p$-value (Figure 2b).
The $p$ -value quantifies what percentage of all permutations of the data would have led to higher mutual information, and we propose to use it for clustering comparison. + +Definition 3.4 ( $p$ -value adjusted, generalized mutual information). Let $A, B$ be two partitions of the set $\{1, \ldots, N\}$ , $q \in \mathbb{R}_+$ . Assuming the random permutation model, the $p$ -value adjusted, generalized mutual information is + +$$ +\mathrm {P M I} _ {q} := \mathbb {E} _ {\sigma \in S _ {N}} [ \theta (\mathrm {M I} _ {q} (A, B) - \mathrm {M I} _ {q} (\sigma (A), B)) ]. \tag {6} +$$ + +Note that as the marginal entropies are independent of the permutation, $\mathrm{PMI}_q = \mathbb{E}_{\sigma \in S_N}[\theta (H_q(\sigma (A),B) - H_q(A,B))]$ . For $q = 1$ , this is the $p$ -value of the mutual information and the variation of information by definition. For $q \neq 1$ the $\mathrm{PMI}_q$ further simplifies to $\mathbb{E}_{\sigma \in S_N}[\theta (n_{ij}' - n_{ij}^q)]$ for $q > 1$ and $\mathbb{E}_{\sigma \in S_N}[\theta (n_{ij}' - n_{ij}'^q)]$ for $q < 1$ with $n_{ij}'$ being the elements of the contingency table for $\sigma (A), B$ . In a sense, the details of the generalized mutual information don't matter under $p$ -value adjustment, except the exponent $q$ of the contingency matrix elements. In fact, the $p$ -value of the Rand Index is equivalent to the $\mathrm{PMI}_2$ by a very similar argument. + +In the experiment in Figure 1, we observe that the $\mathrm{PMI}_1$ and $\mathrm{PMI}_2$ select each $k_B$ with approximately equal probability. + +Proposition 3.1. The $\mathrm{PMI}_q$ is type I and type II unbiased. + +We formally prove this result in Appendix A. + +![](images/62ff323dea9b8e6df9e8248d0d7e5527fef9eb0f00d61459a31ace206793ead7.jpg) +Figure 2: The probability of obtaining a particular $\mathrm{MI}_2$ under random permutation for two fixed clusterings $A, B$ each of 100 elements. 
Our $\mathrm{PMI}_2$ (blue bars in (a)) takes the true distribution of the $\mathrm{MI}_2$ into account, whereas the $\mathrm{SMI}_2$ (shaded blue region in (b)) is based on a continuous normal approximation. However, when normalized with the normal CDF $\Phi$, the $\mathrm{SMI}_2$ is a good approximation of the $\mathrm{PMI}_2$, as shown in panel (c). Here, we sampled 1000 pairs of clusterings uniformly at random for different numbers of elements $N$. We plot the absolute difference between Monte Carlo estimates of the $\mathrm{PMI}_2$ and the normalized $\mathrm{SMI}_2$ values as a function of the two-sided $p$-value. The larger the dataset size $N$, the better $\Phi(\mathrm{SMI}_2)$ approximates the true $\mathrm{PMI}_2$.

![](images/aa6eefe975993427cfbbe574dce8abae896e692f3cd685a56be4857a3e35cee7.jpg)

![](images/edc9d9f9ee72854251709d82ffaa0afa19c19f2c6a6ce953f8fc97c697c73d00.jpg)

# 4 Monotonicity

When one clustering is changed to resemble another more closely, any clustering similarity measure should increase. A major drawback of the $\mathrm{SMI}_1(A, B)$ is its non-monotonous behavior as the number of pairs of elements that agree in $A$ and $B$ increases [9]. We show that the $\mathrm{PMI}_q$, on the other hand, is monotonous for $q \geq 2$.

# 4.1 Definition of monotonicity

The atomic operations that add new pairs of agreeing elements in two clusterings are the perfect split and the perfect merge [9].

Definition 4.1 (Perfect split). $B'$ is a perfect split of $B$ with respect to $A$ if it splits a single cluster $B_1 \in B$ into $B_1', B_2' \in B'$ such that for all $i$ either $A_i \cap B_1 \subset B_1'$ or $A_i \cap B_1 \subset B_2'$.

Definition 4.2 (Perfect merge). $B'$ is a perfect merge of $B$ with respect to $A$ if $B'$ is obtained by merging two clusters $B_1, B_2 \in B$ with $B_1, B_2 \subset A_i$ for some $i$.

Gösgens et al.
[9] require that any clustering similarity measure increases monotonically for any combination of perfect splits and perfect merges.

Definition 4.3 (A-consistent improvement). $B'$ is an $A$-consistent improvement of $B$ iff there exists a series of perfect splits and perfect merges that change $B$ into $B'$.

Definition 4.4 (Monotonicity). A symmetric clustering comparison measure $V$ is monotonous if for every $A, B$ with $1 < k_A < N$ and any $A$-consistent improvement $B'$ of $B$, $V(A, B') > V(A, B)$.

We show that the $\mathrm{PMI}_q$ is monotonous for $q \geq 2$, making it the first known clustering comparison measure to be both type II unbiased and monotonous. The case $q = 2$ is particularly interesting as it corresponds to the well-known Rand Index.

# 4.2 Proof of monotonicity for $\mathrm{PMI}_q$ with $q\geq 2$

The proof can be broken down into monotonicity under perfect splits and perfect merges. We first show that the joint $q$-entropy increases under any split that is not perfect.

Lemma 4.1. Let $A, B$ be clusterings with $1 < k_A < N$ and $B'$ be obtained by splitting a cluster $B_j \in B$ into non-empty clusters $B_{j_1}', B_{j_2}'$. Then $H_q(A, B') = H_q(A, B) + \Delta H_q$ with $\Delta H_q \geq 0$ and equality iff the split is perfect with respect to $A$.

This statement is a direct consequence of the subadditivity of the $q$-entropy [7]. In particular, $h_q: p \mapsto p^q \log_q p$ is a strictly convex function with $h_q(0) = 0$ and hence strictly superadditive, i.e. $h_q(p_1 + p_2) \geq h_q(p_1) + h_q(p_2)$ for any $p_1, p_2 \geq 0$, with equality iff $p_1 = 0 \lor p_2 = 0$.

Proof of Lemma 4.1.
We express $\Delta H_{q}$ as

$$
\begin{aligned}
\Delta H_q &= \sum_i \left[\left(\frac{n_{ij}}{N}\right)^q \log_q\left(\frac{n_{ij}}{N}\right) - \left(\frac{n_{ij_1}'}{N}\right)^q \log_q\left(\frac{n_{ij_1}'}{N}\right) - \left(\frac{n_{ij_2}'}{N}\right)^q \log_q\left(\frac{n_{ij_2}'}{N}\right)\right] \\
&= \sum_i \left[h_q\left(\frac{n_{ij}}{N}\right) - h_q\left(\frac{n_{ij_1}'}{N}\right) - h_q\left(\frac{n_{ij_2}'}{N}\right)\right]. \tag{7}
\end{aligned}
$$

From $n_{ij} = n_{ij_1}' + n_{ij_2}'$ and the strict superadditivity of $h_q$, it follows that $\Delta H_q \geq 0$, with equality iff $n_{ij_1}' = 0 \lor n_{ij_2}' = 0$ for every $i$, i.e. when the split is perfect.

Conversely, a perfect merge maximizes the difference in joint entropy.

Lemma 4.2. Let $A, B, B'$ and $\Delta H_q$ be as in Lemma 4.1. Then, for $q \geq 2$, $\Delta H_q$ is maximal iff $B$ is a perfect merge of $B'$ with respect to $A$.

Proof of Lemma 4.2. When $B$ is a perfect merge of $B'$ with respect to $A$, then

$$
\Delta H_q^{\text{perfect}} = h_q\left(\frac{b_j}{N}\right) - h_q\left(\frac{b_{j_1}'}{N}\right) - h_q\left(\frac{b_j - b_{j_1}'}{N}\right). \tag{8}
$$

To show that $\Delta H_q^{\text{perfect}}$ is superadditive as a function of $b_j$, we take its second derivative:

$$
\frac{\mathrm{d}^2}{\mathrm{d} b_j^2}\, \Delta H_q^{\text{perfect}} = \frac{q}{N^q}\left(b_j^{q-2} - \left(b_j - b_{j_1}'\right)^{q-2}\right) \geq 0 \quad \text{for } q \geq 2 \text{ and } b_j > b_{j_1}' > 0. \tag{9}
$$

For $q > 2$, it is strictly convex and thus strictly superadditive.
For $q = 2$, the second derivative vanishes, but at $b_j = 0$ the difference $\Delta H_2^{\text{perfect}} = -2\left(b_{j_1}'/N\right)^2$ is negative, and thus $\Delta H_2^{\text{perfect}}$ is also strictly superadditive. Now consider $\tilde{A}$ such that $B$ is not a perfect merge with respect to $\tilde{A}$. Then at least two clusters $\tilde{A}_{i_1}, \tilde{A}_{i_2}$ have non-vanishing overlaps $\tilde{n}_{i_1 j}, \tilde{n}_{i_2 j} > 0$ with $B_j$, such that

$$
\Delta H_q^{\text{perfect}} > \sum_i h_q\left(\frac{\tilde{n}_{ij}}{N}\right) - h_q\left(\frac{b_{j_1}'}{N}\right) - h_q\left(\frac{\tilde{n}_{ij} - b_{j_1}'}{N}\right) \quad \text{for } q \geq 2. \tag{10}
$$

From the superadditivity of $h_q$, it follows that $\Delta H_q^{\text{perfect}} > \Delta H_q^{\text{not perfect}}$ (compare Eq. 7).

Now that we know how the joint entropy behaves under perfect splits and perfect merges, we can put together the proof of the monotonicity of the $\mathrm{PMI}_q$.

Theorem 4.3. Let $A, B$ be clusterings with $1 < k_A < N$ and $B'$ an $A$-consistent improvement of $B$. Then $\mathrm{PMI}_q(A, B') > \mathrm{PMI}_q(A, B)$ for $q \geq 2$.

Proof of Theorem 4.3. It suffices to show monotonicity for $B'$ a perfect split or perfect merge, since any $A$-consistent improvement of $B$ can be obtained by a sequence of perfect splits and perfect merges.

Case 1. $B'$ is a perfect split.

Since $A$ is not a singleton cluster, a permutation $\sigma$ exists such that $B'$ is not a perfect split with respect to $\sigma(A)$, and with Lemma 4.1 it follows that

$$
\mathrm{PMI}_q(A, B') > \mathbb{E}_{\sigma \in S_N}\left[\theta\left(H_q(\sigma(A), B) - H_q(A, B')\right)\right]. \tag{11}
$$

![](images/ef704847ee90966df1c1008acab499d109fdd32fad08d2a4533bb281397ab6de.jpg)
Figure 3: Runtime of the Monte Carlo $\mathrm{PMI}_2$ and the $\mathrm{SMI}_2$ for random clusterings of a) $N$ elements into $k_{A} = k_{B} = 10$ clusters and of b) random clusterings with $N = 1000$ and varying approximation error $a$. The $\mathrm{SMI}_1$ calculation, as proposed in [29], is prohibitively expensive for medium-sized datasets. Our exact reformulation of the $\mathrm{SMI}_2$ and the Monte Carlo $\mathrm{PMI}_2$ maintain practical runtimes for high $N$. The $\Phi(\mathrm{SMI}_2)$ is faster, while the Monte Carlo $\mathrm{PMI}_2$ allows for higher accuracy.

![](images/f721d9cd43e306ca6d8fb12404d80d6a49ca06ad0b105bcd603a183e179dd97e.jpg)

However, $B'$ is a perfect split of $B$ with respect to $A$, and equality holds in Lemma 4.1:

$$
\mathrm{PMI}_q(A, B') > \mathbb{E}_{\sigma \in S_N}\left[\theta\left(H_q(\sigma(A), B) - H_q(A, B)\right)\right] = \mathrm{PMI}_q(A, B). \tag{12}
$$

Case 2. $B'$ is a perfect merge.

Let $b_{1}, b_{2} \in B$ denote the merged clusters that form $b_{1}' \in B'$. Using Lemma 4.2, we find

$$
\mathrm{PMI}_q(A, B') = \mathbb{E}_{\sigma \in S_N}\left[\theta\left(H_q(\sigma(A), B) - H_q(A, B) + \Delta H_q^{A} - \Delta H_q^{\sigma(A)}\right)\right] > \mathrm{PMI}_q(A, B), \tag{13}
$$

as there is at least one permutation $\sigma$ for which the merge is not perfect.

# 5 Approximations and runtime

A limitation of the $\mathrm{PMI}_q$ is its computational complexity. Its exact calculation is intractable even for small datasets, as it requires a sum over all contingency tables with given marginals. To mitigate this limitation, we propose two approximation schemes:

1.
Standardized approximation $(q = 2)$: We approximate the true, discrete distribution of $\mathrm{MI}_q$ with a continuous normal distribution that matches its first and second statistical moments (See Figure 2a and b). While this approximation is particularly fast for $q = 2$, it does not preserve the theoretical guarantees of the $\mathrm{PMI}_q$.
2. Monte Carlo approximation: Given two clusterings $A$ and $B$, we sample contingency tables with the same cluster sizes. The fraction of tables with $\mathrm{MI}_q$ lower than $\mathrm{MI}_q(A, B)$ approximates the true $p$-value. In this approach, the theoretical guarantees hold up to a tunable approximation error at the cost of higher runtime.

# 5.1 The standardized Rand Index

We approximate the $\mathrm{PMI}_q$ with the $\mathrm{SMI}_q$, normalized with the normal CDF $\Phi$ (Figure 2b). This can be seen as a truncated, second-order Gram Charlier A series of the $\mathrm{PMI}_q$, and while this could be continued for higher statistical moments, it is difficult to find an exact error term [37]. A more cautious normalization permits the lower bound $\mathrm{PMI}_q(A,B) \geq 1 - \frac{1}{1 + (\mathrm{SMI}_q(A,B))^2}$ for $\mathrm{SMI}_q(A,B) > 0$ but has little practical significance in the context of this work [30]. Therefore, we evaluate the approximation quality experimentally on 1000 pairs of clusterings drawn uniformly from the set of all clusterings with $N \in \{50,100,200,500,1000\}$ using a method described in [16]. We compare $\Phi(\mathrm{SMI}_2) := (1 + \mathrm{erf}(\mathrm{SMI}_2 / \sqrt{2})) / 2$ with a Monte Carlo estimate of the $\mathrm{PMI}_2$ with approximation error 0.001 in Figure 2c. The values are highly correlated ($r_{\mathrm{Pearson}} = 0.9983$ for $N = 50$), and the approximation improves with larger values of $N$ ($r_{\mathrm{Pearson}} = 0.9995$ for $N = 1000$).
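This comparison can be reproduced in miniature with a short permutation experiment. The sketch below is our illustration, not the paper's implementation: it estimates both the Monte Carlo $\mathrm{PMI}_2$ and a standardized score directly from sampled permutations, using the pair-counting statistic $\sum_{ij}\binom{n_{ij}}{2}$, which orders clusterings the same way as $\mathrm{MI}_2$. The paper computes the permutation moments in closed form (Appendix C), so the empirical standardization here is only a stand-in.

```python
import random
from collections import Counter
from math import erf, sqrt

def pair_statistic(a_labels, b_labels):
    """s = sum_ij C(n_ij, 2) over the contingency table: a higher s
    corresponds to a lower joint 2-entropy and hence a higher MI_2."""
    counts = Counter(zip(a_labels, b_labels))
    return sum(n * (n - 1) // 2 for n in counts.values())

def compare(a_labels, b_labels, n_perm=2000, seed=0):
    """Monte Carlo PMI_2 and a Phi-normalized standardized score, with the
    permutation moments estimated empirically instead of in closed form."""
    rng = random.Random(seed)
    s_obs = pair_statistic(a_labels, b_labels)
    a = list(a_labels)
    perms = []
    for _ in range(n_perm):
        rng.shuffle(a)                       # random permutation model
        perms.append(pair_statistic(a, b_labels))
    # fraction of permutations with lower MI_2; ties count 1/2 (theta(0) = 1/2)
    pmi = (sum(s < s_obs for s in perms)
           + 0.5 * sum(s == s_obs for s in perms)) / n_perm
    mean = sum(perms) / n_perm
    var = sum((s - mean) ** 2 for s in perms) / (n_perm - 1)
    smi = (s_obs - mean) / sqrt(var)         # standardized score
    phi = (1 + erf(smi / sqrt(2))) / 2       # normal-CDF normalization
    return pmi, phi

A = [i % 4 for i in range(100)]              # 4 equally sized clusters
B = [(i + 1) % 4 if i % 10 == 0 else A[i] for i in range(100)]  # noisy copy
pmi2, phi_smi2 = compare(A, B)
```

For two strongly aligned clusterings such as these, both scores are close to 1, and, as Figure 2c suggests, the two normalizations agree more closely as $N$ grows.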
So although $\Phi(\mathrm{SMI}_q)$ itself is not monotonous, it closely matches the $\mathrm{PMI}_q$, which is monotonous for $q \geq 2$.

While $\Phi(\mathrm{SMI}_q)$ is a simplification of the $\mathrm{PMI}_q$, its computational complexity $\mathcal{O}(N^3 k_A \max(k_A, k_B))$ for general $q$ is far from practical [15, 30]. In this work, we contribute a novel algorithm for the special case $q = 2$ that improves the computational complexity.

Proposition 5.1. The computational complexity of $\mathrm{SMI}_2$ is $\mathcal{O}(k_A k_B)$.

The proof is in Appendix C. This special case $q = 2$ is of particular interest because of the correspondence with the well-known Rand Index and the monotonicity of $\mathrm{PMI}_2$. Our improved algorithm for the $\mathrm{SMI}_2$ allows comparisons of moderately sized clusterings $N \approx 10,000$ that are computationally out of reach for, e.g., the $\mathrm{SMI}_1$ (Figure 3a).

# 5.2 Monte Carlo approximation

The standardized approximation has two limitations:

- It is computationally efficient only for $q = 2$.
- There is no guarantee that it preserves the desirable theoretical properties of the $\mathrm{PMI}_2$.

We address both of these limitations by introducing a Monte Carlo approximation at the cost of increased runtime. For two clusterings $A, B$, the method samples contingency tables uniformly from all tables with their respective cluster sizes $a_1, \ldots, a_{k_A}$; $b_1, \ldots, b_{k_B}$ (Compare Table 1), using the algorithms proposed in [4, 26]. The fraction of samples with $\mathrm{MI}_q$ lower than $\mathrm{MI}_q(A, B)$ is an unbiased estimator of the $\mathrm{PMI}_q$. The sampling procedure terminates when a given approximation error $a$ is reached. This way, the theoretical properties of the $\mathrm{PMI}_q$ are preserved up to the tunable approximation error.

However, lower approximation errors require more samples:

Proposition 5.2.
The computational complexity of the Monte Carlo $\mathrm{PMI}_q$ is $\mathcal{O}(\min(N, k_A k_B \log N) / a^2)$, with the desired approximation error $a$.

The proof is in Appendix B, and Figure 3 shows an experimental study of the runtime compared to the standardized approximation. The Monte Carlo approach is computationally more expensive, especially for larger datasets. Therefore, the standardized approach is the method of choice when $q = 2$ and moderate approximation quality is acceptable. The Monte Carlo method should be used if $q \neq 2$ or theoretical guarantees are required.

# 6 Algorithm selection on real-world datasets

# 6.1 $k$-means clustering on image datasets

As a first example, we mimic the synthetic experiment in Figure 1. For several numbers of clusters $k$, we apply $k$-means clustering [21] with 1000 different random seeds. We select the clustering with the highest RI, $\mathrm{AMI}_2$, and $\mathrm{PMI}_2$, approximated by $\Phi(\mathrm{SMI}_2)$, and denote the corresponding number of clusters $k_{\text{selected}}$. Figure 4a shows the selection probabilities of $k_{\text{selected}}$ when compared with a ground truth with $k_{\text{true}}$ clusters for a handwritten digit dataset [6] and Figure 4b for a dataset of human faces [27]. Naturally, all measures favor solutions where $k_{\text{selected}} > k_{\text{true}}$, as a higher number of clusters increases the chances of $k$-means matching the reference decision boundaries. However, the RI and $\mathrm{AMI}_2$ additionally suffer from type II bias, leading to a higher overestimation of $k_{\text{selected}}$

![](images/5a99c419a181cdbcc5503505feb8c75eac0dcc2b9c530974a31b4be92b176dff.jpg)
Figure 4: We apply $k$-means clustering [21] with varying $k$ to a) the UCI handwritten digit dataset [6] and b) the Olivetti faces dataset [27] and select the solution with the highest similarity to the ground truth.
We repeat this experiment 1000 times with different random seeds and plot the selection probability under the RI, $\mathrm{AMI}_2$, and normal approximation of the $\mathrm{PMI}_2$. The $\mathrm{PMI}_2$ selects candidates where $k_{\text{selected}}$ is closer to the true number of clusters $k_{\text{true}}$ on average (dashed lines) compared to the RI and $\mathrm{AMI}_2$. In c), we select a connected subset of $k_{\text{true}} = 30$ communities from the email EU core dataset [20] and detect communities using five algorithms with 22 parameter configurations. The RI, $\mathrm{AMI}_2$, and $\mathrm{PMI}_2$ select the best solution, and we plot the selection probability for $k_{\text{selected}}$ after 100 repetitions. The $\mathrm{PMI}_2$ prefers the Leiden algorithm, which produces $k_{\text{selected}}$ on the order of $k_{\text{true}}$. The $\mathrm{AMI}_2$ gives a higher probability for Louvain Map Equation, and the RI sometimes selects low-quality Label Propagation results.

![](images/af3b7cac304f18f51a24f5a7c1d24be86746feca8f34ce3d9.jpg)

![](images/a943bfe117228f5227040eeba30fa734648cafa1845f2b3adc9197069f3879ed.jpg)

(compare with Figure 1). The difference between the $\mathrm{AMI}_2$ and the $\mathrm{PMI}_2$ is subtle, but this is expected. Type II bias correction is just one step forward in clustering comparison and does not turn existing assessments based on metrics like the $\mathrm{AMI}_2$ on their head. In practice, much wider ranges of $k_{\mathrm{selected}}$ can arise from different clustering algorithms, which could potentially amplify the effect. Two additional experiments with spectral clustering instead of $k$-means can be found in Appendix E.

# 6.2 Community detection in social networks

In social networks, detecting communities is ubiquitous and can help to detect fraud, deliver personalized content, or target ads [13]. As a result, many community detection algorithms are known in the literature [3, 28, 34].
However, community detection is inherently unsupervised, and it is a priori unclear which algorithm with which parameters will perform best for a given application. In practice, human experts often annotate a subset of the dataset, and an unsupervised algorithm is selected via a clustering comparison measure on that subset.

We simulate this procedure on a network of email conversations between European research institutions, where each institution is a ground truth cluster [20]. We select a connected subset with $k_{\mathrm{true}} = 30$ institutions and detect communities using Degree Ordered Label Propagation, Label Propagation, Leiden, Louvain, and Louvain Map Equation [32] with 22 parameter configurations (Appendix D). We then select the most similar algorithm to the ground truth using RI, $\mathrm{AMI}_2$, and $\mathrm{PMI}_2 \approx \Phi(\mathrm{SMI}_2)$. This process is repeated for 100 subsets per dataset, and the resulting probabilities are shown in Figure 4c. Label Propagation is a fast but inaccurate method [28] and overestimates $k_{\mathrm{true}}$ by almost an order of magnitude in our experiment. During algorithm selection, the RI was the only metric to choose Label Propagation in some cases. The $\mathrm{PMI}_2$ differs from the $\mathrm{AMI}_2$ in that it selected Leiden more frequently over the Louvain Map Equation, both of which are improvements over the original Louvain method [14, 34]. However, Leiden comes closer to the true number of clusters $k_{\mathrm{true}}$, and in that sense, the $\mathrm{PMI}_2$ led to a better choice of algorithm.

Table 2: Comparison of the $\mathrm{PMI}_q$ to the clustering comparison metrics in the systematic review by Gösgens et al. [9]. Examples for type II biasedness can be found in Appendix A. We consider a metric computationally tractable if its asymptotic complexity is linear in the number of data points $N$ but not necessarily in the numbers of clusters $k_A, k_B$.
The rationale is that in many cases, the number of clusters is much lower than the number of data points and metrics like the $\mathrm{AMI}_1$ with $\mathcal{O}(N \max\{k_A, k_B\})$ are widely used in practice [27, 29]. The $\mathrm{PMI}_2$ is the first metric to be Type II unbiased and monotonous and, while computationally demanding, has efficient approximations. + +
<tr><td></td><td>NMI</td><td>NMI max</td><td>Fair NMI</td><td>VI</td><td>F Measure</td><td>BCubed</td><td>Jaccard</td><td>Wallace</td><td>Dice</td><td>Corr. Coeff.</td><td>Sokal&amp;Sneath</td><td>Corr. Dist.</td><td>Rand Index</td><td>$\mathrm{AMI}_1$</td><td>$\mathrm{AMI}_2$</td><td>$\mathrm{SMI}_1$</td><td>$\mathrm{SMI}_2$</td><td>$\mathrm{PMI}_1$</td><td>$\mathrm{PMI}_2$</td></tr>
<tr><td>Type I unbiased</td><td colspan="19">X X X X X X X X X X X</td></tr>
<tr><td>Type II unbiased</td><td colspan="19">X X X X X X X X X X X X X X X</td></tr>
<tr><td>Monotonicity</td><td colspan="19">X X X X X X X</td></tr>
<tr><td>Comp. tractable</td><td colspan="19">X</td></tr>
+ +# 7 Conclusion and outlook + +Table 2 summarizes our findings for the $\mathrm{PMI}_q$ and compares its theoretical properties to 17 clustering comparison measures from the systematic review by Gösgens et al. [9]. We introduce the first type II unbiased and monotonous cluster comparison method, the $p$ -value adjusted Rand Index $(\mathrm{PMI}_2)$ . Existing methods that addressed type II bias, namely the Standardized Mutual Information $(\mathrm{SMI}_1)$ and the Standardized Rand Index $(\mathrm{SMI}_2)$ are not monotonous, meaning clusterings closer to the ground truth can score worse. In addition, the $\mathrm{SMI}_1$ has high computational complexity, making it unsuitable in practice. For the $\mathrm{SMI}_2$ we showed that an efficient algorithm exists and we leverage this algorithm for an efficient approximation of the proposed $\mathrm{PMI}_2$ . However, our analysis of the errors in this standardized approximation is limited to experimental observations, leaving a theoretical analysis for future work. We devised a Monte Carlo approximation for the $\mathrm{PMI}_2$ with tunable approximation error, for when theoretical guarantees are required. To validate our theoretical findings, synthetic experiments confirm that the presented $\mathrm{PMI}_2$ selects different cluster sizes with equal probability and is not subject to type II bias. In practice, the $\mathrm{PMI}_2$ chooses better clustering algorithms from a set of candidates when a ground truth reference is available. Thanks to its monotonicity and computational efficiency, the $\mathrm{PMI}_2$ is a practical candidate for evaluating cluster similarity without type II bias. While we investigated $p$ -value adjustment for the family of generalized information-theoretic clustering comparison measures, further research is required to understand if other comparison measures, like the Jaccard Index, could benefit from a similar adjustment. 
+ +# Acknowledgments and Disclosure of Funding + +This work was funded by the Digital Europe Grant Testing and Experimentation Facility for Health AI and Robotics (TEF-Health), Project number 101100700 and the Bayerischen Verbundforderprogramm (BayVFP) - Forderlinie Digitalisierung - Forderbereich Informations- und Kommunikationstechnik of the Bavarian Ministry of Economic Affairs, Regional Development and Energy and supported by Bayern Innovativ - Bayerische Gesellschaft für Innovation und Wissentransfer mbH. + +# References + +[1] C. C. Aggarwal and C. K. Reddy, editors. Data Clustering: Algorithms and Applications. CRC Press, 2014. ISBN 978-1-4665-5821-2. +[2] N. Arinik, V. Labatut, and R. Figueiredo. Characterizing and comparing external measures for the assessment of cluster analysis and community detection. IEEE Access, 9:20255-20276, 2021. doi: 10.1109/ACCESS.2021.3054621. + +[3] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre. Fast unfolding of communities in large networks. Journal of statistical mechanics: theory and experiment, 2008(10):P10008, 2008. +[4] J. M. Boyett. Algorithm AS 144: Random $\mathbf{R} \times \mathbf{C}$ tables with given row and column totals. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(3):329-332, 1979. ISSN 00359254, 14679876. +[5] C. Carpineto and G. Romano. Consensus clustering based on a new probabilistic rand index with application to subtopic retrieval. IEEE Trans. Pattern Anal. Mach. Intell., 34(12):2315-2326, 2012. doi: 10.1109/TPAMI.2012.80. +[6] D. Dua and C. Graff. UCI machine learning repository, 2019. +[7] S. Furuichi. Information theoretical properties of Tsallis entropies. Journal of Mathematical Physics, 47(2):023302, Feb. 2006. ISSN 0022-2488. doi: 10.1063/1.2165744. +[8] A. J. Gates and Y.-Y. Ahn. The impact of random models on clustering similarity. Journal of Machine Learning Research, 18:87:1-87:28, 2017. +[9] M. Gösgens, A. Tikhonov, and L. Prokhorenkova. 
Systematic analysis of cluster similarity indices: How to validate validation measures. In M. Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 3799-3808. PMLR, 2021. +[10] D. He, L. Zhai, Z. Li, D. Jin, L. Yang, Y. Huang, and P. S. Yu. Adversarial mutual information learning for network embedding. In C. Bessiere, editor, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3321-3327. ijcai.org, 2020. doi: 10.24963/ijcai.2020/459. +[11] L. Hubert and P. Arabie. Comparing partitions. Journal of Classification, 2(1):193-218, Dec. 1985. ISSN 1432-1343. doi: 10.1007/BF01908075. +[12] D. Jin, Z. Yu, P. Jiao, S. Pan, D. He, J. Wu, P. S. Yu, and W. Zhang. A survey of community detection approaches: From statistical modeling to deep learning. IEEE Trans. Knowl. Data Eng., 35(2):1149-1170, 2023. doi: 10.1109/TKDE.2021.3104155. +[13] A. Karataş and S. Şahin. Application areas of community detection: A review. In 2018 International Congress on Big Data, Deep Learning and Fighting Cyber Terrorism (IBIGDELFT), pages 65–70, 2018. doi: 10.1109/IBIGDELFT.2018.8625349. +[14] Y. Kim and H. Jeong. Map equation for link communities. Physical Review E, 84(2):026110, 2011. +[15] K. Klede, L. Schwinn, D. Zanca, and B. M. Eskofier. Fastami - a monte carlo approach to the adjustment for chance in clustering comparison metrics. In B. Williams, Y. Chen, and J. Neville, editors, Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2023, Washington, DC, USA, February 7-14, 2023, pages 8317-8324. AAAI Press, 2023. doi: 10.1609/aaai.v37i7.26003. +[16] D. Knuth. 
The complexity of nonuniform random number generation. Algorithms and Complexity, New Directions and Results, pages 357-428, 1976.
[17] A. Lancichinetti and S. Fortunato. Consensus clustering in complex networks. Scientific Reports, 2(1):336, Mar. 2012. ISSN 2045-2322. doi: 10.1038/srep00336.
[18] D. Lazarenko and T. Bonald. Pairwise adjusted mutual information. CoRR, abs/2103.12641, 2021.
[19] Y. Lei, J. C. Bezdek, S. Romano, X. V. Nguyen, J. Chan, and J. Bailey. Ground truth bias in external cluster validity indices. Pattern Recognition, 65:58-70, 2017. doi: 10.1016/j.patcog.2016.12.003.

[20] J. Leskovec, J. M. Kleinberg, and C. Faloutsos. Graph evolution: Densification and shrinking diameters. ACM Trans. Knowl. Discov. Data, 1(1):2, 2007. doi: 10.1145/1217299.1217301.
[21] J. MacQueen. Some methods for classification and analysis of multivariate observations. In 5th Berkeley Symp. Math. Statist. Probability, pages 281-297. University of California Los Angeles LA USA, 1967.
[22] T. Mansour, M. Schork, and M. Shattuck. The generalized Stirling and Bell numbers revisited. J. Integer Seq, 15(8):47, 2012.
[23] E. Müller, S. Günnemann, I. Färber, and T. Seidl. Discovering multiple clustering solutions: Grouping objects in different views of the data. In G. I. Webb, B. Liu, C. Zhang, D. Gunopulos, and X. Wu, editors, ICDM 2010, the 10th IEEE International Conference on Data Mining, Sydney, Australia, 14-17 December 2010, page 1220. IEEE Computer Society, 2010. doi: 10.1109/ICDM.2010.85.
[24] X. V. Nguyen, J. Epps, and J. Bailey. Information theoretic measures for clusterings comparison: Is a correction for chance necessary? In A. P. Danyluk, L. Bottou, and M. L. Littman, editors, Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, volume 382 of ACM International Conference Proceeding Series, pages 1073-1080. ACM, 2009. doi: 10.1145/1553374.1553511.
[25] X. V. Nguyen, J. Epps, and J. Bailey.
Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11:2837-2854, 2010.
[26] W. M. Patefield. Algorithm AS 159: An efficient method of generating random $\mathbf{R} \times \mathbf{C}$ tables with given row and column totals. Journal of the Royal Statistical Society. Series C (Applied Statistics), 30(1):91-97, 1981. ISSN 00359254, 14679876.
[27] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
[28] U. N. Raghavan, R. Albert, and S. Kumara. Near linear time algorithm to detect community structures in large-scale networks. Physical Review E, 76(3):036106, 2007.
[29] S. Romano, J. Bailey, X. V. Nguyen, and K. Verspoor. Standardized mutual information for clustering comparisons: One step further in adjustment for chance. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, volume 32 of JMLR Workshop and Conference Proceedings, pages 1143-1151, 2014.
[30] S. Romano, X. V. Nguyen, J. Bailey, and K. Verspoor. Adjusting for chance clustering comparison measures. Journal of Machine Learning Research, 17:134:1-134:32, 2016.
[31] D. A. Simovici. On generalized entropy and entropic metrics. J. Multiple Valued Log. Soft Comput., 13(4-6):295-320, 2007.
[32] C. L. Staudt, A. Sazonovs, and H. Meyerhenke. NetworKit: A tool suite for large-scale complex network analysis. Network Science, 4(4):508-530, 2016. doi: 10.1017/nws.2016.20.
[33] X. Su, S. Xue, F. Liu, J. Wu, J. Yang, C. Zhou, W. Hu, C. Paris, S. Nepal, D. Jin, Q. Z. Sheng, and P. S. Yu. A comprehensive survey on community detection with deep learning. CoRR, abs/2105.12584, 2021.
[34] V. A.
Traag, L. Waltman, and N. J. Van Eck. From Louvain to Leiden: Guaranteeing well-connected communities. Scientific Reports, 9(1):5233, 2019.
[35] C. Tsallis. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics, 52(1):479-487, July 1988. ISSN 1572-9613. doi: 10.1007/BF01016429.
[36] J. Vanschoren, J. N. van Rijn, B. Bischl, and L. Torgo. OpenML: networked science in machine learning. SIGKDD Explorations, 15(2):49-60, 2013. doi: 10.1145/2641190.2641198.

[37] D. L. Wallace. Asymptotic Approximations to Distributions. The Annals of Mathematical Statistics, 29(3):635-654, Sept. 1958. ISSN 0003-4851, 2168-8990. doi: 10.1214/aoms/1177706528.
[38] S. Wei, G. Han, R. Wang, Y. Yang, H. Zhang, and S. Li. Inductive multi-view multiple clusterings. In 2021 7th International Conference on Big Data and Information Analytics (BigDIA), pages 308-315, 2021. doi: 10.1109/BigDIA53151.2021.9619704.
[39] W. M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846-850, 1971. doi: 10.1080/01621459.1971.10482356.

# Appendix

# A Type II Bias

We show that the $\mathrm{PMI}_q$ is both type I and type II unbiased:

Proof of Proposition 3.1. The choice of $\theta(0) = \frac{1}{2}$ allows us to use $\theta(x) = 1 - \theta(-x)$ such that

$$
\begin{aligned}
\mathrm{PMI}_q(A, B) &= \mathbb{E}_{\sigma \in S_N}\left[\theta\left(H_q(\sigma(A), B) - H_q(A, B)\right)\right] \\
&= \mathbb{E}_{\tilde{\sigma} \in S_N}\left[\theta\left(H_q(A, \tilde{B}) - H_q(\tilde{\sigma}(A), \tilde{B})\right)\right] \\
&= 1 - \mathbb{E}_{\tilde{\sigma} \in S_N}\left[\theta\left(H_q(\tilde{\sigma}(A), \tilde{B}) - H_q(A, \tilde{B})\right)\right], \tag{14}
\end{aligned}
$$

with $\tilde{B} = \sigma^{-1}(B)$ and $\tilde{\sigma} = \sigma^{-1}$.
Now in the definition of type I unbiasedness, $\mathcal{B}$ is element-symmetric, such that $\tilde{B} \in \mathcal{B}$ and

$$
\mathbb{E}_{B \sim \mathcal{B}}\left[\mathrm{PMI}_q(A, B)\right] = 1 - \mathbb{E}_{\tilde{B} \sim \mathcal{B}}\left[\mathrm{PMI}_q(A, \tilde{B})\right] = \frac{1}{2}. \tag{15}
$$

Hence, $\mathrm{PMI}_q$ is type I unbiased.

For type II unbiasedness, $\mathcal{B}, \mathcal{B}'$ are element-symmetric, and with Eq. 14 we get

$$
\begin{aligned}
\mathbb{E}_{B \sim \mathcal{B}, B' \sim \mathcal{B}'}\left[\theta\left(\mathrm{PMI}_q(A, B) - \mathrm{PMI}_q(A, B')\right)\right] &= 1 - \mathbb{E}_{B \sim \mathcal{B}, B' \sim \mathcal{B}'}\left[\theta\left(\mathrm{PMI}_q(A, B') - \mathrm{PMI}_q(A, B)\right)\right] \\
&= 1 - \mathbb{E}_{\tilde{B} \sim \mathcal{B}, \tilde{B}' \sim \mathcal{B}'}\left[\theta\left(\mathrm{PMI}_q(A, \tilde{B}) - \mathrm{PMI}_q(A, \tilde{B}')\right)\right] = \frac{1}{2}. \tag{16}
\end{aligned}
$$

For all other clustering comparison measures in Table 2, the proof or a counterexample to type I unbiasedness is given in [9]. Here, we provide an example for their type II bias. For $N = 4$ data points, we compare the clustering $A = \{\{1,4\}, \{2,3\}\}$ with all clusterings with 2 clusters, drawn from the uniform distribution $\mathcal{B}$. We also compare $A$ to all clusterings from the uniform distribution $\mathcal{B}'$ of $N = 4$ data points into 3 clusters. All comparison measures except the $\mathrm{PMI}_2$ prefer either 2 or 3 clusters in the sense that their expectation value in Eq. 4 is not equal to $1/2$, and are thus type II biased (See Table 3).

Table 3: Example for type II bias of other clustering comparison measures.
For $A = \{\{1,4\}, \{2,3\}\}$ and $\mathcal{B}, \mathcal{B}'$ the uniform distribution of all clusterings with 4 elements into 2, 3 clusters respectively, the expected value $\mathbb{E}_{B \sim \mathcal{B}, B' \sim \mathcal{B}'}[\theta(V(A, B) - V(A, B'))] \neq \frac{1}{2}$ and hence these measures are type II biased. + +
<tr><td>NMI</td><td>NMI max</td><td>Fair NMI</td><td>VI</td><td>F Measure</td><td>BCubed</td><td>Jaccard</td><td>Wallace</td><td>Dice</td><td>Corr. Coeff.</td><td>Sokal&amp;Sneath</td><td>Corr. Dist.</td><td>Rand Index</td><td>$\mathrm{AMI}_1$</td><td>$\mathrm{AMI}_2$</td><td>$\mathrm{SMI}_1$</td><td>$\mathrm{SMI}_2$</td></tr>
<tr><td>1/7</td><td>1/7</td><td>11/21</td><td>10/21</td><td>11/21</td><td>13/21</td><td>13/21</td><td>5/7</td><td>13/21</td><td>11/21</td><td>11/21</td><td>10/21</td><td>1/3</td><td>11/21</td><td>11/21</td><td>25/42</td><td>25/42</td></tr>
+ +# B Monte Carlo $\mathrm{PMI}_q$ + +Proof of proposition 5.2. Random contingency tables with fixed marginals can be generated in $\mathcal{O}(\min (N,k_Ak_B\log N))$ [4, 26]. The error $a = \sqrt{\mathrm{PMI}_q(1 - \mathrm{PMI}_q) / N_{\mathrm{samples}}}$ decreases with the inverse square root of the number of Monte Carlo samples $N_{\mathrm{samples}}$ . Hence we need $N_{\mathrm{samples}}\leq a^{-2} / 4$ samples to reach the approximation error $a$ . + +# C The standardized Rand Index + +For the proof of the runtime complexity of the $\mathrm{SMI}_2$ , we use the fact that it is equivalent to the SRI (Corollary 2 in [30]) + +$$ +\mathrm {S R I} := \frac {\mathrm {R I} - \mathbb {E} [ \mathrm {R I} ]}{\sqrt {\operatorname {V a r} [ \mathrm {R I} ]}} = \frac {\sum_ {i j} \binom {n _ {i j}} {2} - \mathbb {E} \left[ \sum_ {i j} \binom {n _ {i j}} {2} \right]}{\mathbb {E} \left[ \left(\sum_ {i j} \binom {n _ {i j}} {2}\right) ^ {2} \right] - \mathbb {E} \left[ \sum_ {i j} \binom {n _ {i j}} {2} \right] ^ {2}} = \mathrm {S M I} _ {2}. \tag {17} +$$ + +Proof of Proposition 5.1. Hubert and Arabie [11] derived the expected value under random permutation as + +$$ +\mathbb {E} \left[ \sum_ {i j} \binom {n _ {i j}} {2} \right] = \sum_ {i j} \frac {\binom {a _ {i}} {2} \binom {b _ {j}} {2}}{\binom {N} {2}}. \tag {18} +$$ + +For the variance, we also need the second moment. While Romano et al. [30] give a general formula in Eq. (13), it is impractical in the given form as the authors themselves note the runtime complexity is $\mathcal{O}(N^3 k_A\max (k_A,k_B))$ . 
We observe that in the special case of the SRI, the higher moments can be simplified by leveraging the identity [39]

$$
\mathbb{E}\left[\binom{n_{ij}}{2}^{m} \,\middle|\, n_{ij} \sim \operatorname{Hyp}(a_i, b_j, N)\right] = \sum_{l=2}^{\min(N, 2m)} \frac{S_{2,2}(m, l)}{2^{m}} \prod_{p=0}^{l-1} \frac{(a_i - p)(b_j - p)}{N - p}, \tag{19}
$$

with the generalized Stirling number $S_{2,2}(m,l)$ [22] and the hypergeometric distribution $\operatorname{Hyp}$. Note that the right-hand side is completely independent of $n_{ij}$. Hence the expected values under the hypergeometric distributions in Eq. (13) in [30] can be calculated in $\mathcal{O}(\max\{k_A, k_B\})$ time, giving

$$
\begin{aligned}
\mathbb{E}\left[\left(\sum_{ij} \binom{n_{ij}}{2}\right)^{2}\right] = \frac{1}{4 N (N-1)(N-2)(N-3)} \Bigg( & 2\gamma_a \sum_{j=1}^{k_B} (N - b_j)\big(N - 3(b_j - 1)\big)(b_j - 1)\, b_j \\
& + \sum_{i=1}^{k_A} a_i^{2}(a_i - 1) \sum_{j=1}^{k_B} (4N - 5 b_j + 3)(b_j - 2)(b_j - 1)\, b_j \\
& + \sum_{i=1}^{k_A} a_i^{3}(a_i - 1) \sum_{j=1}^{k_B} (b_j - 3)(b_j - 2)(b_j - 1)\, b_j \\
& + \left(\gamma_a^{2} - \gamma_{a,2}\right) \sum_{j=1}^{k_B} b_j (b_j - 1)(b_j - 2)(b_j - 3) \\
& + \left(\gamma_b^{2} - \gamma_{b,2}\right) \sum_{i=1}^{k_A} a_i (a_i - 1)(a_i - 2)(a_i - 3) \\
& + \left(\gamma_a^{2} - \gamma_{a,2}\right)\left(\gamma_b^{2} - \gamma_{b,2}\right) \Bigg), \tag{20}
\end{aligned}
$$

for the most general case $N \geq 4$ and no cluster larger than $N - 2$, with

$$
\gamma_a = \sum_{i=1}^{k_A} a_i (a_i - 1), \quad \gamma_{a,2} = \sum_{i=1}^{k_A} a_i^{2} (a_i - 1)^{2}, \quad \gamma_b = \sum_{j=1}^{k_B} b_j (b_j - 1), \quad \gamma_{b,2} = \sum_{j=1}^{k_B} b_j^{2} (b_j - 1)^{2}. \tag{21}
$$

The other cases can be treated analogously using Eq. (19). Hence the dominant factor in the runtime complexity of the SRI is the RI itself, with $\mathcal{O}(k_A k_B)$ for the sum over all contingency matrix elements.

# D Community detection parameter configurations

For the experiment in Figure 4c, we compared five community detection algorithms implemented in NetworKit [32]. Label Propagation and Degree Ordered Label Propagation do not have any parameters. The parameter choices for the other algorithms are listed in Table 4.

Table 4: Parameter configurations for the experiment on the email EU core dataset. For Leiden, we used all combinations of $\gamma$ and randomize.
| Algorithm | Parameters |
| --- | --- |
| Louvain | $\gamma \in \{0.001, 0.01, 0.1, 1.0\}$ |
| Louvain Map Equation | hierarchical ∈ {True, False} |
| Leiden | $\gamma \in \{1 \times 10^{-6}, 1 \times 10^{-5}, 0.0001, 0.001, 0.01, 0.1, 1.0\}$; randomize ∈ {True, False} |
# E Spectral clustering on image datasets

We conducted two additional experiments similar to the ones in Section 6.1, but using spectral clustering instead of $k$-means. We apply spectral clustering with a varying number of clusters $k$ to the UCI image segmentation dataset [6] and a texture classification dataset from OpenML [36]. We compare each clustering solution with the ground truth labels and select the clustering with the highest RI, $\mathrm{AMI}_2$, and $\mathrm{PMI}_2$. Figure 5 shows the selection frequency of each number of clusters $k_{\mathrm{selected}}$ after 1000 trials with different random seeds. In the case of the image segmentation dataset, different subsets of 1000 samples were chosen for each trial due to the steep runtime requirements of spectral clustering. The results align with the $k$-means experiment in Section 6.1: the $\mathrm{PMI}_2$ selects clusterings in a less biased way, in the sense that the selected number of clusters $k_{\mathrm{selected}}$ is closer to the true number of clusters $k_{\mathrm{true}}$ than for the RI and $\mathrm{AMI}_2$.

![](images/a3f75f4e632c2096e4773219944c82778287f81ac5c817a5b8d78f6c1b732649.jpg)
Figure 5: We apply spectral clustering to a) the UCI image segmentation dataset and b) a texture classification dataset. The number-of-clusters parameter $k$ is set to eleven values between $k_{\mathrm{true}}/2$ and $3k_{\mathrm{true}}/2$. We compare the resulting clusterings with the ground truth via RI, $\mathrm{AMI}_2$, and $\mathrm{PMI}_2$ and select the best clustering $k_{\mathrm{selected}}$ according to each metric. We repeat the experiment with 1000 different random seeds and plot the selection probabilities of $k_{\mathrm{selected}}$ for each metric. The $\mathrm{PMI}_2$ selects candidates where the number of clusters is closer to the true number of clusters $k_{\mathrm{true}}$ on average (dashed lines) compared to the RI and $\mathrm{AMI}_2$.
![](images/b0723db34c85f3c80867ce779be0754982ed054d9de451c1d4c643a98bd70d2c.jpg)

# $\mathbf{S}^{3}$ : Increasing GPU Utilization during Generative Inference for Higher Throughput

Yunho Jin

Harvard University

Chun-Feng Wu

National Yang Ming Chiao Tung University

David Brooks

Harvard University

Gu-Yeon Wei

Harvard University

# Abstract

Generating texts with a large language model (LLM) consumes massive amounts of memory.
Apart from the already-large model parameters, the key/value (KV) cache that holds information about previous tokens in a sequence can grow to be even larger than the model itself. This problem is exacerbated in one of the current LLM serving frameworks, which reserves memory for the maximum sequence length for the KV cache to guarantee generating a complete sequence, since it does not know the output sequence length. This restricts us to smaller batch sizes, leading to lower GPU utilization and, above all, lower throughput. We argue that designing a system with a priori knowledge of the output sequence length can mitigate this problem. To this end, we propose $S^3$, which predicts the output sequence length, schedules generation queries based on the prediction to increase device resource utilization and throughput, and handles mispredictions. Our proposed method achieves $6.49\times$ the throughput of systems that assume the worst case for the output sequence length.

# 1 Introduction

Text generation has become increasingly popular in various services, leading to a surge in the use of large language models (LLMs) known for their high accuracy. LLMs have distinct features that differentiate them from non-Transformer-based models, including a requirement for massive amounts of compute and memory resources. Specifically, we observe that Transformer-based LLMs are often limited by memory capacity and bandwidth, resulting in significant underutilization of compute resources. When serving GPT-J on an NVIDIA A100 GPU, the utilization of GPU compute resources can be as low as $0.4\%$. This highlights the memory-bound nature of LLMs and the need for efficient memory utilization to increase the utilization of GPU compute resources.

A common approach to boost GPU utilization and enhance throughput is to increase the batch size.
This is due to the fact that inputs within a batch share the same model weights, thus the GPU only needs to load the model weight from its high bandwidth memory (HBM) to the on-chip SRAM once and reuse it for all inputs within the batch. The GPU uses more of its compute resources when processing the same model weights. Increasing batch size is a simple optimization technique and is, therefore, commonly used in serving convolutional and fully-connected neural network models [1,2]. + +However, the self-attention layer in Transformer-based text generation LLMs presents a challenge to this simple optimization due to its autoregressive nature. Specifically, when generating a new token in a sequence, the model needs to attend to all previous tokens in the sequence, requiring the model to retain all information from previous tokens and store them in HBM. We call this region in the HBM holding the information key/value cache (KV cache). The size of the KV cache grows with larger batch sizes and longer sequences, which limits the maximum batch size, thereby lowering GPU utilization and ultimately reducing throughput. To support the growing KV cache size, Huggingface's Transformers library [3] constantly allocates new memory at each token generation + +![](images/ee60757e857fc679b0e3a80303f608531dc48f7ec03c87aa70e3b74309bbc241.jpg) +Figure 1: Latency versus throughput trade-off among different models (left, online scenario with single GPU running models within the size confines of the GPU) and the number of GPUs (right, offline scenario distributing GPT-3 175B to 6, 8, and 10 GPUs) when generating 60 tokens, inspired by FlexGen [6]. The markers in the lines represent batch sizes, from 1 to the maximum batch size that can be loaded on an A100 GPU, incrementing by the power of two. Allocating the exact amount of memory for each sequence expands the curve to higher throughput at the cost of higher latency. 
The solid lines show the trade-off in vanilla systems and dotted lines show how much $\mathbf{S}^3$ can expand the trade-off. The numbers represent the maximum batch sizes for $\mathbf{S}^3$ and vanilla systems. The vertical line in the left figure denotes the latency SLO for reading a 60-token-long sequence. + +![](images/df288c2ebe021dbb8994201789b8040c09815392911a809211d31eaf812771c2.jpg) + +and incurs latency overhead associated with memory allocation. This improves usability since users do not have to know the output sequence length but suffers long inference latency. Alternatively, NVIDIA's FasterTransformer library [4] pre-allocates memory for the maximum sequence length, which ends up wasting memory by allocating more than is necessary. This approach limits batch size and prioritizes latency over throughput. The trend towards long maximum sequence lengths in recent models (e.g., 8K or 32K) [5] amplifies the KV cache overhead, demanding more efficient memory utilization in serving Transformer-based text generation models. + +In light of these challenges, we propose $S^3$ , **scheduling** sequences with speculation, a framework that maximizes throughput via predicting the output sequence length and reducing memory waste. The frequent memory allocations in Huggingface's Transformers and the limited batch size in FasterTransformer stem from the fact that we lack prior knowledge of the output sequence length. $S^3$ addresses these issues by predicting the expected output sequence length and allocating the corresponding amount of memory for each query. It also schedules sequences to increase the GPU utilization. Finally, $S^3$ runs a supervisor in the background that detects mispredictions and adjusts the size of allocated memory to be more accurate. By integrating these components together, $S^3$ optimizes memory usage and scheduling to maximize throughput during deployment of Transformer-based LLMs for text generation on GPUs. 
There are two types of LLM deployment scenarios: online and offline. Online scenarios such as chatbots [7, 8] require service providers to generate a sequence within a tight latency service level objective (SLO) constraint. Offline scenarios include applications such as scoring [9] or data wrangling [10], and have loose latency SLOs, emphasizing throughput over end-to-end latency. In contrast, FasterTransformer [4] and xFormers [11] prioritize reducing latency. We argue that as long as the latency remains below the SLO constraint, there is no need to prioritize further latency reduction. To this end, we design $\mathbf{S}^3$ to achieve higher throughput under those latency SLOs. Figure 1 highlights how much $\mathbf{S}^3$ can improve throughput when trading off latency. For online scenarios, we assume a latency SLO set to the average reading speed of English readers: 4 words per second [12], or 0.1875 seconds per token [13]. The models in the left figure that are smaller than 100 billion parameters satisfy this SLO for all possible batch sizes, which gives service providers room to improve throughput across all of these models. In fact, for LLAMA-33B [14], there are opportunities to gain throughput with no latency penalty. The right figure, sweeping over different numbers of GPUs, shows opportunities to maximize throughput in offline scenarios that have loose latency SLOs.

We evaluate $S^3$, assessing both its throughput and cost-efficiency. Our analysis includes both offline and online scenarios. In online scenarios under the average-reading-speed latency SLO constraint, we find that $\mathrm{S}^3$ can generate up to $6.49\times$ more sequences while adhering to the same SLO constraint. In offline scenarios, we observe that $\mathrm{S}^3$ achieves a speedup of up to $6.49\times$ across different models. $\mathrm{S}^3$ does not affect the models' perplexity, as it does not change the models' architectures.
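The reading-speed SLO above translates into a simple per-sequence latency budget; a tiny sketch (the constants are the numbers quoted in the text, the helper name is ours):

```python
# Latency SLO from the average English reading speed assumed in the
# text: 4 words per second, i.e. 0.1875 seconds per generated token.
SECONDS_PER_TOKEN = 0.1875

def latency_slo(num_tokens: int) -> float:
    """Latency budget (seconds) for reading a num_tokens-long sequence."""
    return num_tokens * SECONDS_PER_TOKEN

# Budget for the 60-token sequences used in Figure 1:
print(latency_slo(60))  # 11.25 seconds
```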
Furthermore, we evaluate the cost-efficiency of $\mathrm{S}^3$ and find that with 6 GPUs, $\mathrm{S}^3$ provides almost identical throughput to a vanilla system with 10 GPUs.

To summarize, we make the following contributions:

- We increase the achievable batch size under longer latency SLOs, allowing service providers to serve with higher throughput in both online and offline scenarios by using larger batch sizes.
- We fine-tune a Distilbert model to predict output sequence lengths given an input prompt.
- We provide a mechanism to recover from mispredictions. $S^3$ preempts sequences that exceed their allocated memory and retrains the predictor to learn from its mistakes.

# 2 Background and Motivation

# 2.1 Generative AI Models

A Transformer-based generative model is autoregressive: it predicts the most probable token based on past tokens. Since the model generates one token at a time, it has to iterate over itself $n$ times to generate a sequence that is $n$ tokens long. One iteration involves an input token traversing the model, which is a stack of transformer layers containing one attention, two layer-norm, and two feed-forward layers. In particular, the self-attention layer uses information on the past tokens to generate the next token.

For example, the model at the $i^{th}$ iteration attends the current token $(t_i)$ to every token it has already generated $(t_0, \dots, t_{i-1})$ in the self-attention layer. We can express the self-attention layer as:

$$
h _ {o u t} = s o f t m a x (\frac {q _ {i} \cdot K ^ {T}}{\sqrt {d _ {h}}}) \cdot V
$$

where $d_h$ is the hidden dimension of the model, $h_{out}, q_i \in \mathbb{R}^{d_h}$ are the output hidden vector and current query vector, respectively, and $K, V \in \mathbb{R}^{i \times d_h}$ are the key and value matrices. The $j$-th rows in the $K$ and $V$ matrices represent the key and value vectors of $t_j$, respectively.
The two dot products attend $t_i$ to all the key and value vectors in the current sequence. + +The model stores $K$ and $V$ matrices as the key/value (KV) cache to avoid having to generate key and value vectors at each iteration. Otherwise, it has to store hidden states of every previous token and multiply it with a weight matrix $W_{QKV} \in \mathbb{R}^{d_h \times 2d_h}$ at every transformer layer. This would require $2(i - 1)d_h$ FLOPs per layer almost identical to $2id_h$ FLOPs for the self-attention layer at $t_i$ . The size of the KV cache is $4ld_h$ bytes per token when using half-precision numbers. The cache uses 2 bytes for every number for both the key and value cache where $l$ is the number of transformer layers in a model. For example, GPT-NEOX [15], a 20 billion parameter model, has 44 layers and 6144 hidden dimensions and thus uses 1MB per KV cache per token. + +# 2.2 KV Cache Management on GPUs + +The KV cache is relatively small (e.g., several MBs) and can be easily stored in the GPU HBM (high-bandwidth memory) when the sequence is short as the cache stores information about the previous tokens in the sequence. It grows as the model generates more tokens. Using this dynamic nature, Huggingface's Transformers [3] (HF-Transformers) constantly allocates more memory to the KV cache and stalls the computation until it is complete. This approach allows the library to allocate the exact amount of memory for each cache at the cost of frequent memory accesses. + +To mitigate this, NVIDIA's FasterTransformer library reserves the maximum sequence length of memory for every sequence [4, 16]. It removes redundant memory accesses by simply filling in the reserved memory in an append-only fashion. However, this approach comes with its own drawback as it reserves more than strictly-necessary memory for sequences. 
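To make the memory arithmetic above concrete, the following sketch (our own illustration, not the paper's code) computes the per-token KV-cache footprint from Section 2.1 and runs one cached decode step of the self-attention formula with toy sizes:

```python
import math
import random

# Per-token KV-cache size from Section 2.1: 4 * l * d_h bytes in half
# precision (2 bytes per number, for both the key and the value vector,
# in each of the l layers).
def kv_bytes_per_token(num_layers: int, d_h: int) -> int:
    return 4 * num_layers * d_h

# GPT-NEOX: 44 layers, hidden dimension 6144 -> ~1 MB per token, and
# ~2.2 GB when the maximum sequence length of 2048 tokens is reserved.
per_token = kv_bytes_per_token(44, 6144)
print(per_token)               # 1081344 bytes
print(per_token * 2048 / 1e9)  # ~2.21 GB reserved per sequence

# One decode step of cached self-attention (pure Python, toy sizes):
# h_out = softmax(q_i . K^T / sqrt(d_h)) . V over the i cached tokens.
d_h, i = 8, 5
q_i = [random.gauss(0, 1) for _ in range(d_h)]              # current query
K = [[random.gauss(0, 1) for _ in range(d_h)] for _ in range(i)]  # key cache
V = [[random.gauss(0, 1) for _ in range(d_h)] for _ in range(i)]  # value cache

scores = [sum(q * k for q, k in zip(q_i, row)) / math.sqrt(d_h) for row in K]
m = max(scores)
w = [math.exp(s - m) for s in scores]
z = sum(w)
w = [x / z for x in w]                 # attention weights over cached tokens
h_out = [sum(w[j] * V[j][d] for j in range(i)) for d in range(d_h)]
assert len(h_out) == d_h               # output hidden vector
```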
For GPT-NEOX with a maximum sequence length of 2048 tokens, FasterTransformer reserves 2048 tokens even for sequences that end up being 50 tokens long. The maximum batch size that FasterTransformer can use for this model on an 80GB A100 GPU is less than 20, given the 40GB model size and 2.2GB per sequence. The small batch size underutilizes the massive compute resources in the GPU. The rationale for reserving the maximum sequence length of memory despite the underutilization problem is to ensure that full sequences are generated, enhancing the user experience.

![](images/bb64e25aa6c8cacc6998e2677c6d6abc2478be556564005307d1a52749b6e51b.jpg)
Figure 2: (a) GPT-J's and (b) GPT-NEOX's NVIDIA A100 compute utilization.

# 2.3 Observation

Language models are memory bound We demonstrate the extent of GPU resource underutilization when we run GPT-J with 6B parameters on an A100 GPU. The relatively small model shows a wider spectrum of batch sizes and sequence lengths, since we can fit larger batches with longer sequences on the GPU. Fig. 2 (a) shows the GPU utilization swept over different batch sizes and sequence lengths. As the figure shows, increasing the batch size achieves higher utilization but eventually faces a memory cliff, where an out-of-memory (OOM) error kills the process. Take the batches with 1024 sequence length for example: 32 sequences is the maximum batch size, and thus $12.56\%$ is the maximum utilization that we can achieve with this sequence length. Fig. 2 (b) shows similar underutilization in the larger GPT-NEOX model due to the memory cliff problem. This model faces the memory cliff at smaller batch sizes and shorter sequences since the model consumes more of the GPU memory. Please note that HF-Transformers still needs to know the output sequence length before batching inputs to avoid the memory cliff.

As the figures illustrate, increasing the batch size can enhance throughput in neural networks.
This approach does not require intricate optimizations and enables the GPU to load the model weight from its HBM to the on-chip SRAM only once and share it among a larger number of inputs. By doing so, the GPU can activate its idle compute resources and concurrently handle multiple inputs. Nevertheless, the memory cliff poses a challenge, limiting the utilization of additional resources. However, as we elaborate, this issue can be resolved. + +Reasons behind the inefficiencies Both the frequent memory allocations in HF-Transformers and the limited batch size in FasterTransformer come from the fact that we are not aware of the generated sequence length. If we know the precise length of the generated sequence, we can allocate exact memory to each sequence and resolve the repetitive memory reservation and the unnecessary memory allocation problems in HF-Transformers and FasterTransformer, respectively. + +# 3 S $^3$ Design + +$S^3$ is a system-algorithm co-design framework that maximizes GPU utilization with sequence length prediction to achieve higher throughput. $S^3$ has three components as shown in Fig. 3: 1) predictor, 2) scheduler, and 3) supervisor. A text generation query arrives in a request pool in the host DRAM. The predictor then predicts its output sequence length which the scheduler uses to batch requests. The scheduler dispatches the batch to the GPU and the text generator model generates texts in the + +![](images/126f86e3b939dd4661508beca050b58e9c92b911214eca596539a75aeea052de.jpg) +Figure 3: Overview of $\mathbf{S}^3$ . The boxes in yellow denote new components proposed by $\mathbf{S}^3$ . + +batch. The supervisor oversees the GPU utilization and handles mispredictions. We describe each component in detail and how they interact with each other in this section. + +Output sequence length predictor We use a predictor to predict the output sequence length and resolve the frequent and redundant memory allocation problems. 
Specifically, we fine-tune a Distilbert [17] model that was trained for sequence classification to classify which length bucket the output sequence length falls into. We bucketize the sequence lengths because machine learning models are known to be better at narrowing candidates down to a small range than at the "last mile" of pinpointing an exact value. Each bucket covers a range of $\frac{\text{max sequence length}}{\text{number of buckets}}$ tokens, and we use 10 buckets.

To this end, we fine-tune the model on the Alpaca dataset [18], one of the representative question-answering datasets, using the questions as inputs and the lengths of the answers as labels. We observe that this predictor predicts the correct bucket with $98.61\%$ accuracy. The average distance between the wrong predictions and the correct bucket is 1.03, meaning that the error converges to 0 when we double the bucket size. We also evaluate the predictor on a model fine-tuned with the Google Natural Questions dataset [19] and observe an accuracy of $77.13\%$; it makes smaller mistakes more often than larger ones. For completeness, we fine-tune a model on the Pile dataset [20], a non-question-answering dataset, and see $65.6\%$ accuracy. The predictor shows surprisingly high accuracy compared to randomly guessing the bins, as the latter is correct only $10\%$ of the time.

We choose Distilbert, a 66-million-parameter model, for its small size and fast inference. The model size is negligible since it is smaller than even a single transformer layer in the billion-scale models (e.g., 214 million for the 6-billion-parameter GPT-J [21] model). The latency is also negligible since the predictor model runs only once when a request arrives at the server, while the text generation model runs $n$ times to generate an $n$-token-long output sequence. We measure that Distilbert takes 3.7ms to run, compared to several seconds for the text generation models on an NVIDIA A100 GPU.
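The bucketing scheme can be sketched as follows; the bucket count and maximum sequence length follow the text, while the helper names are our own illustration:

```python
import math

# Bucketized length prediction: with 10 buckets over a maximum sequence
# length of 2048 tokens, each bucket spans 2048 / 10 = 204.8 tokens.
MAX_SEQ_LEN = 2048
NUM_BUCKETS = 10

def bucket_of(length: int) -> int:
    """0-based index of the length bucket a sequence falls into."""
    return min(length * NUM_BUCKETS // MAX_SEQ_LEN, NUM_BUCKETS - 1)

def reserved_tokens(bucket: int) -> int:
    """Reserve KV-cache memory up to the bucket's upper edge."""
    return math.ceil((bucket + 1) * MAX_SEQ_LEN / NUM_BUCKETS)

print(bucket_of(50), reserved_tokens(bucket_of(50)))      # 0 205
print(bucket_of(1024), reserved_tokens(bucket_of(1024)))  # 5 1229
```

Reserving the bucket's upper edge instead of the point prediction is what makes a prediction error within the same bucket harmless; only sequences that overflow their bucket need the supervisor's recovery path.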
Length-aware sequence scheduler The scheduler batches and schedules sequences based on the predicted lengths to maximize GPU utilization without exceeding the GPU HBM capacity. We can formulate this problem as a variant of the bin packing problem with a single bin: the bin capacity is the HBM size, and the item weight is the size of the KV cache for each sequence.

We use the first-fit-decreasing algorithm [22] as the solution to the bin packing problem for its simplicity. The scheduler queues the lengthiest sequences first, reserving room for shorter sequences within the GPU's available HBM. It orders sequences in the request pool by length in decreasing order and iterates through the pool, checking whether the KV cache of the current sequence fits in the available HBM. If so, it includes the sequence in the current batch and reduces the available HBM by the size of the KV cache. The scheduler continues this process until either there is no available HBM or it has iterated through the entire request pool. This approach has been consciously adopted for its minimal overhead, while still maintaining an approximately optimal solution to the problem at hand.

The resulting batch is irregularly shaped, with some sequences longer than others. Unfortunately, current frameworks either do not support irregularly shaped batches [3, 4] or support them with limited performance [23]. Those that do not support this functionality pad the short sequences with padding tokens to match the length of the longest sequence in the batch. The padding tokens waste both computation and memory since they do not hold any useful information. ORCA [16] introduces an interesting solution to this problem termed selective batching. The authors of the work identify

Table 1: Model architecture used in the evaluations
| Model | Num Params | Num layers | Model dim | Num heads |
| --- | --- | --- | --- | --- |
| GPT-J [21] | 6B | 28 | 5120 | 16 |
| LLAMA 13B [14] | 13B | 40 | 4096 | 40 |
| GPT-NEOX [15] | 20B | 44 | 6144 | 64 |
| LLAMA 33B [14] | 30B | 60 | 6656 | 52 |
| GPT3 175B [26] | 175B | 96 | 12288 | 96 |
that inputs to certain layers (e.g., feed-forward) share identical weights, in contrast to inputs to other layers (i.e., self-attention) which do not share weights. This enables a streamlined batch processing flow, wherein layers with shared weights are batch-processed and the batch is momentarily unpacked to process each input serially through the attention layers. ORCA shows that this has a negligible impact on latency, since the inputs to the self-attention layers do not share weights and do not benefit from batching. As such, we follow ORCA and use its selective batching technique.

Also borrowing the iteration-level scheduling technique from ORCA, $S^3$ does not wait until all sequences in the current batch finish generation. Instead, it checks whether any sequence in the batch has finished generation at every iteration. This grants $S^3$ higher scheduling flexibility and removes redundant waiting time. Finally, if a model cannot fit on one GPU, $S^3$ uses pipeline parallelism and shards the model at Transformer-layer granularity.

Supervisor The supervisor is in charge of supervising GPU utilization and handling mispredictions. It runs in the background to check the available space in the HBM and passes this information to the scheduler. The scheduler then appends a sequence to the running batch if the available memory is large enough for the sequence.

The supervisor is also responsible for handling mispredictions. In the case of underpredictions, the supervisor preempts the sequences that exceed their reserved memory. It monitors the length of the current output sequences and evicts those that are not finished but have used up their reserved memory. It asynchronously moves the current state of those sequences, including the KV cache and the generated tokens, to the request pool and frees up the GPU memory. Now the $K$ and $V$ matrices are fragmented, with blank rows where the evicted KV cache was originally stored.
The supervisor shifts the rows below the blank ones so that all rows are stored contiguously. This memory format is required by current libraries [24, 25] and also resolves the fragmentation issue. Finally, the supervisor doubles the assigned memory for the evicted sequences to correct the underprediction.

The supervisor also constantly retrains the predictor in the background. It uses the sequences that the predictor mispredicted for training, so that the predictor can learn from its mistakes. This training time is relatively short: our measurements show that each training iteration takes 11ms on average, while sequence generation takes several seconds or more. This implies that the retraining overhead is less than $10\%$ even if we train the predictor for 10 epochs.

Putting it all together We summarize this section with an explanation of how $\mathrm{S}^3$ uses each component to serve a sequence-generation request. First, text-generation requests arrive at the request pool. The predictor predicts the output sequence length of the sequences in the pool. The supervisor runs in the background and checks the current HBM usage. Next, the scheduler uses both the predictions and the available HBM to batch requests for maximum GPU utilization. It finishes its job by scheduling the batch to the GPU, which generates the scheduled sequences.

# 4 Evaluation

We show that $S^3$ achieves higher throughput by predicting the output sequence length. It does so by using larger batch sizes and hence fewer iterations, where one iteration refers to processing the Transformer model once to generate a token. We also show that $S^3$ can reduce the cost of serving models by using fewer GPUs.
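The length-aware scheduler of Section 3 amounts to a single-bin first-fit-decreasing pass over the request pool; a simplified sketch (function and variable names are our own, not from the $S^3$ code):

```python
# Single-bin first-fit-decreasing batching: sort requests by predicted
# KV-cache size (descending) and greedily admit those that still fit in
# the HBM budget left over after the model weights.
def schedule_batch(predicted_kv_bytes, hbm_budget):
    """Return (indices of admitted requests, remaining HBM budget)."""
    order = sorted(range(len(predicted_kv_bytes)),
                   key=lambda i: predicted_kv_bytes[i], reverse=True)
    batch, free = [], hbm_budget
    for i in order:
        if predicted_kv_bytes[i] <= free:   # first fit: admit if it fits
            batch.append(i)
            free -= predicted_kv_bytes[i]
    return batch, free

# Four requests with predicted caches of 3, 5, 2, and 4 GB against an
# 8 GB budget: the 5 GB and then the 3 GB request are admitted.
batch, free = schedule_batch([3, 5, 2, 4], 8)
print(sorted(batch), free)  # [0, 1] 0
```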
+ +![](images/0ae23048c9449f65770affaf324c82716c917f23717941130f6bb744534d157d.jpg) +(a) Maximum throughput (Alpaca [18]) + +![](images/dc2a2451849d37895604d7ae00f1e18ce82431f510ac525e82700dd846c19ea2.jpg) +(b) Maximum throughput (Google-NQ [19]) + +![](images/eed1ec20b3911ab4e534e5bad4ecbfdf997f4a96a9db2107bddefb2de8ce2865.jpg) +(c) Maximum throughput (The Pile [20]) + +![](images/16f52417c8d7c5bf19f26a9267c625b628d09c72e17472171ec78a8ffb14a547.jpg) +(d) Generated sequences under latency SLO +Figure 4: Latency and throughput of different models and datasets. + +Table 2: Maximum throughput of the three systems measured in tokens/s + +
| Model | Baseline | $S^3$ | Oracle |
| --- | --- | --- | --- |
| GPT-J | 2061.16 | 2349.67 | 2569.09 |
| LLAMA-13B | 1018.15 | 1641.55 | 1907.09 |
| GPT-NEOX | 490.15 | 1344.94 | 1530.05 |
| LLAMA-30B | 91.46 | 593.73 | 834.30 |
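The speedups implied by Table 2 can be recovered directly from its rows. A small sketch (the variable names are ours):

```python
# Tokens/s from Table 2: (baseline, s3, oracle) per model.
table2 = {
    "GPT-J":     (2061.16, 2349.67, 2569.09),
    "LLAMA-13B": (1018.15, 1641.55, 1907.09),
    "GPT-NEOX":  (490.15,  1344.94, 1530.05),
    "LLAMA-30B": (91.46,   593.73,  834.30),
}

for model, (baseline, s3, oracle) in table2.items():
    speedup = s3 / baseline   # S^3 over the ORCA baseline
    gap = oracle / s3         # headroom left to the perfect predictor
    print(f"{model}: {speedup:.2f}x over baseline, {gap:.2f}x below Oracle")
```

The largest-model row reproduces the up-to-$6.49\times$ figure quoted in the text; the gap to Oracle also grows with model size, since most of the HBM goes to weights and mispredicted allocations cost relatively more.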
Environment We run our evaluation on an NVIDIA 80GB A100 GPU connected to the host DRAM via PCIe $4.0 \times 8$ in a Lenovo ThinkSystem SD650-N V2 Server [27].

Baselines We compare with ORCA [16], a Transformer serving system that increases throughput with iteration-level scheduling and selective batching. Since ORCA is not aware of the output sequence length, it has to allocate memory for the maximum sequence length to guarantee full sequence generation. $S^3$ predicts the output sequence length and allocates memory based on the prediction. We implement both systems on top of FasterTransformer [4], since this library is faster than HF-Transformers [3] due to more optimizations. We also compare $S^3$ with an ideal system with a perfect predictor, which we term Oracle.

Models We use models ranging from 6 billion to 175 billion parameters for the evaluation. The specifics of these models are explained in table 1.

# 4.1 Throughput Analysis

We evaluate $S^3$'s throughput using the Alpaca [18], Google Natural Questions (Google-NQ) [19], and The Pile [20] datasets. Specifically, we query $S^3$ with questions and ask it to generate the answers.

Offline scenario: Maximum throughput Fig. 4 (a) - (c) reports the maximum throughput in sequences per second for different models and datasets. We measure the throughput using the maximum batch size of each configuration. It shows that $S^3$ outperforms ORCA by $1.13 \times$ and up to $6.49 \times$, and closely matches Oracle, trailing it by $9.34\%$ to $40.52\%$. The difference is magnified with larger models because the batch size is limited even for $S^3$, since most of the HBM

Table 3: Average batch size and number of iterations for different models
| Model | ORCA batch size | ORCA num iter | $S^3$ batch size | $S^3$ num iter | Oracle batch size | Oracle num iter |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-J | 69.94 | 7988 | 530 | 1054 | 564.88 | 989 |
| LLAMA 13B | 31.61 | 17675 | 274.66 | 2034 | 527.04 | 1060 |
| GPT-NEOX | 16.89 | 33069 | 157.59 | 3545 | 292.19 | 1912 |
| LLAMA 33B | 4 | 139790 | 42.47 | 13155 | 77.35 | 7223 |
![](images/b49328969c19129ee590f3dbfdbd9d3f1fe6e9027aeb5db8919f3766c82356.jpg)
Figure 5: Maximum throughput of GPT3 running on different numbers of GPUs.

![](images/92d055dabfaa6d142e6e48d645e8f79bde2950a2fe1634f0857520e4ccb9a4f4.jpg)
Figure 6: Latency breakdown of $\mathbf{S}^3$

is used to hold model weights. The batch size in $S^3$ gets cut off before saturating the throughput, as shown in Fig. 1. We also report the throughput in tokens/s, evaluated on the Alpaca dataset, in table 2.

We can notice that the maximum throughput increases by up to $6.49 \times$ while the batch size increases by nearly $10 \times$ for every model. This comes from the unique behavior of Transformers: feed-forward layers benefit from batching while self-attention layers do not, because inputs to feed-forward layers share the same weights whereas inputs to self-attention layers attend to their own sequences. $\mathbf{S}^{3}$'s performance benefit grows when the parallelizable portion is larger than the serialized portion; however, increasing the batch size adds more serialized inputs and thereby grows the serialized portion. GPT-J is a good example of this characteristic: it shows similar throughput jumps of $1.13 \times$ from ORCA to $\mathbf{S}^{3}$ and $1.09 \times$ from $\mathbf{S}^{3}$ to Oracle, while the batch size differs by $8.69 \times$ and $1.97 \times$, respectively.

Online scenario: SLO-aware throughput We now consider a scenario with a latency SLO constraint. We set the SLO to 0.1875 seconds per generated token, given that average English readers read 4 words per second [12], or 0.25 seconds per word, and that a token corresponds to 0.75 words [13]. We then calculate the latency SLO for each sequence by multiplying 0.1875 by its sequence length (e.g., 11.25s for a sequence with 60 tokens).

Fig. 4d reports the number of sequences that each model generates using ORCA, $S^3$, and Oracle.
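The per-sequence SLO used in this comparison follows from a simple unit conversion; a sketch, with a function name of our own choosing:

```python
READ_WORDS_PER_S = 4.0   # average English reading speed [12]
WORDS_PER_TOKEN = 0.75   # a token corresponds to ~0.75 words [13]

# Seconds allowed per generated token: (1/4 s per word) * (0.75 words per token).
SLO_PER_TOKEN_S = (1.0 / READ_WORDS_PER_S) * WORDS_PER_TOKEN  # 0.1875 s

def sequence_slo_s(num_tokens: int) -> float:
    """Latency SLO for a whole sequence of `num_tokens` tokens."""
    return SLO_PER_TOKEN_S * num_tokens

print(sequence_slo_s(60))  # 11.25 s for a 60-token sequence, as in the text
```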
Oracle exceeds the SLO for GPT-J, LLAMA 13B, and GPT-NEOX when it chooses the maximum batch size, so we limit the batch size for these models during this evaluation. $S^3$ generates a similar number of sequences to the ideal case and $1.13 \times$ to $6.49 \times$ more sequences than ORCA. The throughput increase is similar to the offline scenario, since $S^3$ meets the SLO in most cases with its maximum batch size. However, the difference between $S^3$ and Oracle shrinks by $10\%$ compared to the offline scenario because we limit the batch size, and hence the throughput, of Oracle.

# 4.2 Cost Analysis

We evaluate $S^3$ with different numbers of GPUs. We partition GPT-3 across 6 and 8 GPUs in a pipeline-parallel manner, allocating 16 and 12 transformer layers per GPU, respectively. We also evaluate a 10-GPU setting where we allocate 10 layers to each of 9 GPUs and 6 layers to the remaining GPU. $S^3$ pipelines each batch and schedules a new batch whenever the GPU processing the first partition passes its result to the next GPU. This reduces GPU idle time by having every GPU process batches concurrently.

Table 4: Average batch size and number of iterations for different system configurations on Alpaca
| Num GPUs | ORCA batch size | ORCA num iter | $S^3$ batch size | $S^3$ num iter | Oracle batch size | Oracle num iter |
| --- | --- | --- | --- | --- | --- | --- |
| 6 | 11.95 | 46734 | 114.22 | 4891 | 209.47 | 2667 |
| 8 | 28.68 | 19482 | 247.85 | 2254 | 470.65 | 1187 |
| 10 | 48.99 | 11403 | 399.62 | 1398 | 564.88 | 989 |
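The layer allocations used in the cost analysis (for GPT-3's 96 transformer layers) are just ceiling division; a quick sketch, with names of our own choosing:

```python
import math

TOTAL_LAYERS = 96  # transformer layers in GPT-3 [26]

def layers_per_gpu(num_gpus: int) -> int:
    """Layers assigned to each fully loaded GPU in the pipeline."""
    return math.ceil(TOTAL_LAYERS / num_gpus)

for n in (6, 8, 10):
    per_gpu = layers_per_gpu(n)
    remainder = TOTAL_LAYERS - per_gpu * (n - 1)  # what the last GPU gets
    print(f"{n} GPUs: {per_gpu} layers/GPU, last GPU holds {remainder}")
```

With 10 GPUs this yields 10 layers on nine GPUs and 6 on the last, matching the allocation in the text; it is also the $\lceil l / \text{number of GPUs} \rceil$ batch-completion interval discussed below.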
+ +Table 5: Accuracies and runtime of different predictors on different datasets + +
| Model | Model size | Alpaca [18] accuracy (%) | Google [19] accuracy (%) | The Pile [20] accuracy (%) | Runtime (ms) |
| --- | --- | --- | --- | --- | --- |
| MS-minibert | 22M | 98.06 | 77.99 | 60.1 | 2.3 |
| Distilbert-base | 66M | 98.6 | 82.68 | 65.6 | 4.1 |
| Bert-base | 110M | 99.54 | 85.08 | 68.2 | 7.6 |
| Bert-large | 340M | 99.6 | 89.25 | 71.9 | 14.5 |
Fig. 5 reports the maximum throughput using different numbers of GPUs. First, we can see that $S^3$ with 6 GPUs achieves throughput similar to ORCA with 10 GPUs; ORCA's throughput is only $0.92\%$ higher, to be specific. More GPUs shard the model into finer pieces and leave more space for storing the KV cache, allowing a larger batch size. Table 4 supports this claim by reporting larger batch sizes with more GPUs. $S^3$ achieves a similar effect with fewer GPUs by optimizing its memory allocation strategy.

Naturally, this leads to a question similar to the one in the throughput evaluation: why does $S^3$ with 6 GPUs show throughput similar to ORCA with 10 GPUs even with $2.33 \times$ the batch size? The answer lies in the pipelined execution and the sequential nature of self-attention layers in Transformer-based models, as explained in 4.1. The systems complete a batch every $\left\lceil \frac{l}{\text{number of GPUs}} \right\rceil$ layers instead of every $l$ layers. For example, $S^3$ with 6 GPUs completes a batch every 16 layers instead of every 96 for GPT-3, while ORCA with 10 GPUs completes one every 10 layers and thus finishes each batch more quickly. This increase in latency negates $S^3$'s throughput benefit, so the two systems show almost identical throughput even though $S^3$ uses a $2.33 \times$ larger batch.

# 4.3 Overhead: Latency Breakdown

We evaluate the runtime latency of each component in $S^3$. We classify the runtime latency into three categories: generation, penalty, and overhead. Generation is the time $S^3$ spends processing the model to generate tokens. Penalty denotes the time it takes $S^3$ to preempt the KV cache and hidden states from the GPU and load them back to the GPU. Overhead includes the time taken by the predictor, scheduler, and supervisor combined. Fig. 6 shows that penalty and overhead combined are negligible (i.e., $11\%$ on average) compared to the generation.
Of course, the penalty would increase if the predictor were less accurate and triggered more data traffic between the GPU and the host. In contrast, the overhead would increase if we employed a more accurate but heavier predictor, introducing a new trade-off.

# 4.4 Predictor Ablation Study

We vary the predictor size from 22M to 340M parameters and report the accuracies on different datasets in table 5. We observe a similar trend across all three datasets: a larger predictor generates more accurate predictions. The accuracies differ among datasets because their length distributions differ. Specifically, Alpaca [18] has the smallest variance in its length distribution among the three datasets, while The Pile [20] has the greatest.

# 5 Related Works

Machine learning serving systems The high interest in machine learning has sparked extensive research on serving platforms [1, 4, 6, 16, 28-38]. In particular, the surprising performance of Transformer-based language models has led many researchers to develop Transformer-specific serving systems [4, 6, 34, 37-39]. Most of these systems focus on reducing latency without much concern for throughput, with the exceptions of [6, 31]. The throughput-oriented systems use the memory hierarchy to store parts of the model in slower memory and thereby increase the batch size. However, they all allocate the same memory to every sequence and do not consider preempting a sequence based on a prediction. $S^3$ can improve these systems by reducing the required memory size, removing the need for larger but slower memories such as SSDs, and reducing the monetary cost of memory overall.

FastServe [39] tackles the head-of-line blocking problem caused by query-level scheduling, while $S^3$ addresses the inefficient memory allocation issue. It proactively manages the KV cache, similar to $S^3$'s supervisor, by migrating sequences between the host memory and the GPU HBM.
However, FastServe does not have an output sequence length predictor and thus uses a skip-join Multi-Level Feedback Queue, since it is unaware of the job execution time. We expect $S^3$ to work with FastServe, with $S^3$'s predictor delivering more information to FastServe for better scheduling.

Reducing the KV cache overhead The issue of attention layers in Transformers requiring quadratic computation and memory with respect to the sequence length has been extensively studied, and various approaches have been proposed to address it. Low-rank approximation [40, 41] and exploiting sparsity [42-44] are among the methods that aim to mitigate the issue. Another approach, known as multi-query attention [37, 45], reduces the cache size by utilizing one attention head per key-value (KV) cache instead of multiple attention heads, as illustrated in Table 1. Additionally, work on model size reduction through compression [46] and quantization [6, 47, 48] can also contribute to reducing the size of the KV cache. These approaches are complementary to our work and can be employed to reduce the penalty caused by mispredictions, allowing for the use of a less accurate but lighter predictor.

Sequence length prediction While few works predict output sequence length from an input sequence, Yang et al. [49] propose a small convolution-based network with embedding layers to forecast output sequence length in machine translation. We employ a similar strategy but utilize a small Transformer-based predictor to accurately estimate output sequence lengths in text generation tasks.

# 6 Limitations and Conclusion

Limitations We make the following assumptions in this work.
We assume that text generation request traces mimic the publicly available question-and-answering datasets, since no production traces are publicly available. Analyzing actual traces would facilitate deploying $S^3$ in commercial workloads. Similarly, the text generation task does not have a standardized latency SLO constraint, as other machine learning workloads do [50], since it is a relatively new service; we therefore assume the average reading speed of an English reader as our SLO. We will be able to evaluate $S^3$ in broader scenarios if organizations publicly release SLOs for different text generation applications.

Conclusion In summary, we introduce $\mathbf{S}^3$, a framework designed to achieve high throughput when serving Transformer-based generative models. $\mathbf{S}^3$ leverages a predictor to estimate the output length of generated sequences and schedules them accordingly to maximize throughput. Additionally, $\mathbf{S}^3$ handles potential prediction errors to guarantee reliability. By allocating varying memory sizes to different inputs, $\mathbf{S}^3$ acknowledges that not all sequences should be treated equally. This approach expands the conventional latency-throughput frontier, paving the way for new possibilities in optimizing the latency-throughput trade-off.

# 7 Acknowledgement

We thank the anonymous reviewers for their thoughtful comments and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. 1704834 and No. 2118985 and supported in part by the Application Driving Architectures (ADA) Research Center and the National Science and Technology Council, Taiwan, under Grant No. 112-2222-E-A49-002-MY2. Chun-Feng Wu acknowledges the support from the Yushan Young Scholar Program by the Ministry of Education (MOE) in Taiwan.

# References

[1] D. Crankshaw, X. Wang, G. Zhou, M. J. Franklin, J. E. Gonzalez, and I.
Stoica, "Clipper: A Low-Latency online prediction serving system," in 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17). Boston, MA: USENIX Association, Mar. 2017, pp. 613-627. [Online]. Available: https://www.usenix.org/conference/nsdi17/technical-sessions/presentation/crankshaw +[2] R. Chard, Z. Li, K. Chard, L. Ward, Y. Babuji, A. Woodard, S. Tuecke, B. Blaiszik, M. J. Franklin, and I. Foster, "Dlhub: Model and data serving for science," in 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2019, pp. 283-292. +[3] H. Face, "Transformers," https://github.com/huggingface/transformers. +[4] NVIDIA, "FasterTransformer," https://github.com/NVIDIA/FasterTransformer. +[5] OpenAI, "OpenAI research GPT4," https://openai.com/research/gpt-4. +[6] Y. Sheng, L. Zheng, B. Yuan, Z. Li, M. Ryabinin, D. Y. Fu, Z. Xie, B. Chen, C. Barrett, J. E. Gonzalez, P. Liang, C. Ré, I. Stoica, and C. Zhang, "High-throughput Generative Inference of Large Language Models with a Single GPU," arXiv e-prints, p. arXiv:2303.06865, Mar. 2023. +[7] Google, "Bard," https://bard.google.com/. +[8] OpenAI, "ChatGPT," https://chat.openai.com/. +[9] P. Liang, R. Bommasani, T. Lee, D. Tsipras, D. Soylu, M. Yasunaga, Y. Zhang, D. Narayanan, Y. Wu, A. Kumar, B. Newman, B. Yuan, B. Yan, C. Zhang, C. Cosgrove, C. D. Manning, C. Ré, D. Acosta-Navas, D. A. Hudson, E. Zelikman, E. Durmus, F. Ladhak, F. Rong, H. Ren, H. Yao, J. Wang, K. Santhanam, L. Orr, L. Zheng, M. Yuksekgonul, M. Suzgun, N. Kim, N. Guha, N. Chatterji, O. Khattab, P. Henderson, Q. Huang, R. Chi, S. M. Xie, S. Santurkar, S. Ganguli, T. Hashimoto, T. Icard, T. Zhang, V. Chaudhary, W. Wang, X. Li, Y. Mai, Y. Zhang, and Y. Koreeda, “Holistic Evaluation of Language Models,” arXiv e-prints, p. arXiv:2211.09110, Nov. 2022. +[10] A. Narayan, I. Chami, L. Orr, S. Arora, and C. Ré, "Can Foundation Models Wrangle Your Data?" arXiv e-prints, p. arXiv:2205.09911, May 2022. +[11] B. Lefaudeau, F. 
Massa, D. Liskovich, W. Xiong, V. Caggiano, S. Naren, M. Xu, J. Hu, M. Tintore, S. Zhang, P. Labatut, and D. Haziza, "xformers: A modular and hackable transformer modelling library," https://github.com/facebookresearch/xformers, 2022. +[12] M. Brysbaert, “How many words do we read per minute? a review and meta-analysis of reading rate,” Journal of Memory and Language, vol. 109, p. 104047, 2019. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0749596X19300786 +[13] OpenAI, "OpenAI API Documentation," https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them. +[14] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Roziere, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample, "Llama: Open and efficient foundation language models," arXiv preprint arXiv:2302.13971, 2023. +[15] S. Black, S. Biderman, E. Hallahan, Q. Anthony, L. Gao, L. Golding, H. He, C. Leahy, K. McDonell, J. Phang, M. Pieler, U. S. Prashanth, S. Purohit, L. Reynolds, J. Tow, B. Wang, and S. Weinbach, “GPT-NeoX-20B: An open-source autoregressive language model,” in Proceedings of BigScience Episode #5 – Workshop on Challenges & Perspectives in Creating Large Language Models. virtual+Dublin: Association for Computational Linguistics, May 2022, pp. 95–136. [Online]. Available: https://aclanthology.org/2022/bigscience-1.9 +[16] G.-I. Yu, J. S. Jeong, G.-W. Kim, S. Kim, and B.-G. Chun, "Orca: A distributed serving system for Transformer-Based generative models," in 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22). Carlsbad, CA: USENIX Association, Jul. 2022, pp. 521-538. [Online]. Available: https://www.usenix.org/conference/osdi22/presentation/yu +[17] V. Sanh, L. Debut, J. Chaumont, and T. Wolf, “DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter,” arXiv e-prints, p. arXiv:1910.01108, Oct. 2019. +[18] R. Taori, I. Gulrajani, T. Zhang, Y. 
Dubois, X. Li, C. Guestrin, P. Liang, and T. B. Hashimoto, "Stanford alpaca: An instruction-following llama model," https://github.com/tatsu-lab/stanford_alpaca, 2023. + +[19] T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, M. Kelcey, J. Devlin, K. Lee, K. N. Toutanova, L. Jones, M.-W. Chang, A. Dai, J. Uszkoreit, Q. Le, and S. Petrov, "Natural questions: a benchmark for question answering research," 2019. +[20] L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite, N. Nabeshima, S. Presser, and C. Leahy, "The Pile: An 800GB Dataset of Diverse Text for Language Modeling," p. arXiv:2101.00027, Dec. 2020. +[21] B. Wang and A. Komatsuzaki, “GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model,” https://github.com/kingofolz/mesh-transformer-jax, May 2021. +[22] D. S. Johnson, "Near-optimal bin packing algorithms," Ph.D. dissertation, Massachusetts Institute of Technology, 1973. +[23] PyTorch, "Pytorch Nested Tensor," https://pytorch.org/docs/stable/nested.html. +[24] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. Devito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems 32. Curran Associates, Inc., 2019, pp. 8024–8035. [Online]. Available: http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf +[25] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al., "Tensorflow: A system for large-scale machine learning," in 12th {USENIX} Symposium on Operating Systems Design and Implementation (\{OSDI\} 16), 2016, pp. 265-283. +[26] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. 
Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, "Language Models are Few-Shot Learners," arXiv e-prints, p. arXiv:2005.14165, May 2020. +[27] Lenovo, "Lenovo ThinkSystem SD650-N V2 Server," https://lenovopress.lenovo.com/lp1396-thinksystem-sd650-n-v2-server. +[28] F. Romero, Q. Li, N. J. Yadwadkar, and C. Kozyrakis, "INFaaS: Automated modelless inference serving," in 2021 USENIX Annual Technical Conference (USENIX ATC 21). USENIX Association, Jul. 2021, pp. 397-411. [Online]. Available: https://www.usenix.org/conference/atc21/presentation/romero +[29] A. Gujarati, R. Karimi, S. Alzayat, W. Hao, A. Kaufmann, Y. Vigfusson, and J. Mace, "Serving DNNs like clockwork: Performance predictability from the bottom up," in 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20). USENIX Association, Nov. 2020, pp. 443-462. [Online]. Available: https://www.usenix.org/conference/osdi20/presentation/gujarati +[30] S. Choi, S. Lee, Y. Kim, J. Park, Y. Kwon, and J. Huh, "Serving heterogeneous machine learning models on Multi-GPU servers with Spatio-Temporal sharing," in 2022 USENIX Annual Technical Conference (USENIX ATC 22). Carlsbad, CA: USENIX Association, Jul. 2022, pp. 199-216. [Online]. Available: https://www.usenix.org/conference/atc22/presentation/choi-seungbeom +[31] R. Yazdani Aminabadi, S. Rajbhandari, M. Zhang, A. A. Awan, C. Li, D. Li, E. Zheng, J. Rasley, S. Smith, O. Ruwase, and Y. He, "DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale," arXiv e-prints, p. arXiv:2207.00032, Jun. 2022. +[32] U. Gupta, S. Hsia, V. Saraph, X. Wang, B. Reagen, G.-Y. Wei, H.-H. S. Lee, D. Brooks, and C.-J. 
Wu, "DeepRecSys: A System for Optimizing End-To-End At-scale Neural Recommendation Inference," arXiv e-prints, p. arXiv:2001.02772, Jan. 2020. +[33] S. Hsia, U. Gupta, B. Acun, N. Ardalani, P. Zhong, G.-Y. Wei, D. Brooks, and C.-J. Wu, "Mp-rec: Hardware-software co-design to enable multi-path recommendation," in + +Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, ser. ASPLOS 2023. New York, NY, USA: Association for Computing Machinery, 2023, p. 449-465. [Online]. Available: https://doi.org/10.1145/3582016.3582068 +[34] J. Fang, Y. Yu, C. Zhao, and J. Zhou, "TurboTransformers: An Efficient GPU Serving System For Transformer Models," arXiv e-prints, p. arXiv:2010.05680, Oct. 2020. +[35] C.-F. Wu, C.-J. Wu, G.-Y. Wei, and D. Brooks, "A joint management middleware to improve training performance of deep recommendation systems with sds," in Proceedings of the 59th ACM/IEEE Design Automation Conference (DAC 22), 2022, pp. 157-162. +[36] C. Olston, N. Fiedel, K. Gorovoy, J. Harmsen, L. Lao, F. Li, V. Rajashekhar, S. Ramesh, and J. Soyke, “TensorFlow-Serving: Flexible, High-Performance ML Serving,” arXiv e-prints, p. arXiv:1712.06139, Dec. 2017. +[37] R. Pope, S. Douglas, A. Chowdhery, J. Devlin, J. Bradbury, A. Levskaya, J. Heek, K. Xiao, S. Agrawal, and J. Dean, "Efficiently scaling transformer inference," arXiv preprint arXiv:2211.05102, 2022. +[38] X. Wang, Y. Xiong, Y. Wei, M. Wang, and L. Li, "LightSeq: A High Performance Inference Library for Transformers," arXiv e-prints, p. arXiv:2010.13887, Oct. 2020. +[39] B. Wu, Y. Zhong, Z. Zhang, G. Huang, X. Liu, and X. Jin, "Fast Distributed Inference Serving for Large Language Models," arXiv e-prints, p. arXiv:2305.05920, May 2023. +[40] K. Choromanski, V. Likhosherstov, D. Dohan, X. Song, A. Gane, T. Sarlos, P. Hawkins, J. Davis, A. Mohiuddin, L. Kaiser, D. Belanger, L. Colwell, and A. 
Weller, "Rethinking Attention with Performers," arXiv e-prints, p. arXiv:2009.14794, Sep. 2020. +[41] V. Likhosherstov, K. Choromanski, J. Davis, X. Song, and A. Weller, "Sub-Linear Memory: How to Make Performers SLiM," arXiv e-prints, p. arXiv:2012.11346, Dec. 2020. +[42] I. Beltagy, M. E. Peters, and A. Cohan, "Longformer: The Long-Document Transformer," arXiv e-prints, p. arXiv:2004.05150, Apr. 2020. +[43] M. Zaheer, G. Guruganesh, A. Dubey, J. Ainslie, C. Alberti, S. Ontanon, P. Pham, A. Ravula, Q. Wang, L. Yang, and A. Ahmed, "Big Bird: Transformers for Longer Sequences," arXiv e-prints, p. arXiv:2007.14062, Jul. 2020. +[44] H. Wang, Z. Zhang, and S. Han, "SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning," arXiv e-prints, p. arXiv:2012.09852, Dec. 2020. +[45] N. Shazeer, "Fast Transformer Decoding: One Write-Head is All You Need," arXiv e-prints, p. arXiv:1911.02150, Nov. 2019. +[46] J. W. Rae, A. Potapenko, S. M. Jayakumar, and T. P. Lillicrap, "Compressive Transformers for Long-Range Sequence Modelling," arXiv e-prints, p. arXiv:1911.05507, Nov. 2019. +[47] S. Kim, A. Gholami, Z. Yao, M. W. Mahoney, and K. Keutzer, "I-BERT: Integer-only BERT Quantization," arXiv e-prints, p. arXiv:2101.01321, Jan. 2021. +[48] Z. Liu, Y. Wang, K. Han, W. Zhang, S. Ma, and W. Gao, “Post-training quantization for vision transformer,” in Advances in Neural Information Processing Systems, M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, Eds., vol. 34. Curran Associates, Inc., 2021, pp. 28 092–28 103. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2021/file/ec8956637a99787bd197eacd77acce5e-Paper.pdf +[49] Z. Yang, Y. Gao, W. Wang, and H. Ney, “Predicting and using target length in neural machine translation,” in Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. 
Suzhou, China: Association for Computational Linguistics, Dec. 2020, pp. 389–395. [Online]. Available: https://aclanthology.org/2020.aacl-main.41 +[50] V. Janapa Reddi, C. Cheng, D. Kanter, P. Mattson, G. Schmuelling, C.-J. Wu, B. Anderson, M. Breughe, M. Charlebois, W. Chou, R. Chukka, C. Coleman, S. Davis, P. Deng, G. Diamos, J. Duke, D. Fick, J. S. Gardner, I. Hubara, S. Idgunji, T. B. Jablin, J. Jiao, T. St. John, P. Kanwar, D. Lee, J. Liao, A. Lokhmotov, F. Massa, P. Meng, P. Micikevicius, C. Osborne, G. Pekhimenko, A. Tejusve Raghunath Rajan, D. Sequeira, A. Sirasao, F. Sun, H. Tang, M. Thomson, F. Wei, E. Wu, L. Xu, K. Yamada, B. Yu, G. Yuan, A. Zhong, P. Zhang, and Y. Zhou, “MLPerf Inference Benchmark,” arXiv e-prints, p. arXiv:1911.02549, Nov. 2019. \ No newline at end of file diff --git a/s3increasinggpuutilizationduringgenerativeinferenceforhigherthroughput/images.zip b/s3increasinggpuutilizationduringgenerativeinferenceforhigherthroughput/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0483670fbafe4eed7ff974f6487f54c2488d7bab --- /dev/null +++ b/s3increasinggpuutilizationduringgenerativeinferenceforhigherthroughput/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1faace79cb04583dfe2346baa2079c1226235f4b15d17f929dcbc8ab84193a89 +size 365003 diff --git a/s3increasinggpuutilizationduringgenerativeinferenceforhigherthroughput/layout.json b/s3increasinggpuutilizationduringgenerativeinferenceforhigherthroughput/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9564629607737071be5fa0a820457e83f8317926 --- /dev/null +++ b/s3increasinggpuutilizationduringgenerativeinferenceforhigherthroughput/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc6f84b4b6ac41b3143646d60a6b13ef41c99910c9cb4e3d1bb98f3b64217a34 +size 393061 diff --git a/se3equivariantconvolutionandtransformerinrayspace/eecd66a8-a3dd-455e-9a7c-5c22a80b3be3_content_list.json 
b/se3equivariantconvolutionandtransformerinrayspace/eecd66a8-a3dd-455e-9a7c-5c22a80b3be3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..720ce531503e8ec349ea2a6802ed0048d0865bac --- /dev/null +++ b/se3equivariantconvolutionandtransformerinrayspace/eecd66a8-a3dd-455e-9a7c-5c22a80b3be3_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb3871e072c634f2f3056223224bf87208137b2fee8d6c59f11bd48e8f173c63 +size 281246 diff --git a/se3equivariantconvolutionandtransformerinrayspace/eecd66a8-a3dd-455e-9a7c-5c22a80b3be3_model.json b/se3equivariantconvolutionandtransformerinrayspace/eecd66a8-a3dd-455e-9a7c-5c22a80b3be3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e2bd98dd1119873280e1f141c8bd41b81d85db05 --- /dev/null +++ b/se3equivariantconvolutionandtransformerinrayspace/eecd66a8-a3dd-455e-9a7c-5c22a80b3be3_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01a0de2c98fecb292c55d2a2bdd05f8940f7ce694ab720b0185558437a39def1 +size 324869 diff --git a/se3equivariantconvolutionandtransformerinrayspace/eecd66a8-a3dd-455e-9a7c-5c22a80b3be3_origin.pdf b/se3equivariantconvolutionandtransformerinrayspace/eecd66a8-a3dd-455e-9a7c-5c22a80b3be3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..93491aca4e0b8be3cf96cf1f6c977eb4f95d7665 --- /dev/null +++ b/se3equivariantconvolutionandtransformerinrayspace/eecd66a8-a3dd-455e-9a7c-5c22a80b3be3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7deb5a35f264e4f3adf544a09916456d5701e6ef8318a0680db30f9601a00493 +size 34079955 diff --git a/se3equivariantconvolutionandtransformerinrayspace/full.md b/se3equivariantconvolutionandtransformerinrayspace/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7de7939c17af4b793e5925a7e3d67166ce68532e --- /dev/null +++ b/se3equivariantconvolutionandtransformerinrayspace/full.md @@ -0,0 
+1,1301 @@ +# Equivariant Light Field Convolution and Transformer in Ray Space + +Yinshuang Xu + +University of Pennsylvania xuyin@seas.upenn.edu + +Jiahui Lei + +University of Pennsylvania leijh@seas.upenn.edu + +Kostas Daniilidis + +University of Pennsylvania and Archimedes, Athena RC kostas@cis.upenn.edu + +# Abstract + +3D reconstruction and novel view rendering can greatly benefit from geometric priors when the input views are not sufficient in terms of coverage and inter-view baselines. Deep learning of geometric priors from 2D images requires each image to be represented in a $2D$ canonical frame and the prior to be learned in a given or learned $3D$ canonical frame. In this paper, given only the relative poses of the cameras, we show how to learn priors from multiple views equivariant to coordinate frame transformations by proposing an $SE(3)$ -equivariant convolution and transformer in the space of rays in 3D. We model the ray space as a homogeneous space of $SE(3)$ and introduce the $SE(3)$ -equivariant convolution in ray space. Depending on the output domain of the convolution, we present convolution-based $SE(3)$ -equivariant maps from ray space to ray space and to $\mathbb{R}^3$ . Our mathematical framework allows us to go beyond convolution to $SE(3)$ -equivariant attention in the ray space. We showcase how to tailor and adapt the equivariant convolution and transformer in the tasks of equivariant $3D$ reconstruction and equivariant neural rendering from multiple views. We demonstrate $SE(3)$ -equivariance by obtaining robust results in roto-translated datasets without performing transformation augmentation. + +# 1 Introduction + +Recent years have seen significant advances in learning-based techniques [67, 68, 60, 69, 61, 73, 11, 53] harnessing the power of deep learning for extraction of geometric priors from multiple images and associated ground-truth shapes. 
Such approaches extract features from each view and aggregate them into a geometric prior. However, these approaches are not $SE(3)$-equivariant to transformations of the frame where the priors and images are defined. While view pooling or calculating variance [69, 72, 46, 73, 11] can be used to aggregate features and address equivariance, view pooling discards the rich geometric information contained in a multiple-view setup.

In this paper, we address the problem of learning geometric priors that are $SE(3)$-equivariant with respect to transformations of the reference coordinate frame. We argue that all information needed for tasks like novel view rendering or 3D reconstruction is contained in the light field [6, 37]. Our input is a light field, a function defined on oriented rays in 3D whose values can be the radiance or features extracted from pixel values. We will use the term light field, and we will be specific when it is a radiance field or a feature field. Images are discrete samples of this field: the camera position determines which rays are sampled, while the camera orientation leaves the sample of the light field unchanged up to pixel discretization. We model the light field as a field over a homogeneous space of $SE(3)$, the ray space $\mathcal{R}$ parameterized by Plücker coordinates. The ray space $\mathcal{R}$ is the space of oriented light rays; any ray $x \in \mathcal{R}$ has Plücker coordinates $x = (d, m)$, where $d \in \mathbb{S}^2$ is the direction of the ray and $m = p \times d$ for any point $p$ on the ray.

We define a convolution in the continuous ray space as an equivariant convolution on a homogeneous space [18]. Since our features are not limited to scalar values, we will draw upon the tools of tensor field networks and representation theory, discussed in detail in the Appendix. In Sec.
3.1 we study the group action of $SE(3)$ on $\mathcal{R}$ , the stabilizer group for $\mathcal{R}$ , and how $SE(3)$ transforms the feature field over $\mathcal{R}$ . In Sec. 3.4, we focus on developing the equivariant convolution in $\mathcal{R}$ , providing analytical solutions for the kernels under the derived constraints for convolution from $\mathcal{R}$ to $\mathcal{R}$ and from $\mathcal{R}$ to $\mathbb{R}^3$ , respectively. Meanwhile, we make the kernel locally supported without breaking the equivariance. By varying the output domain of the convolution, we introduce equivariant convolutions from the ray space to the ray space and from the ray space to the $3D$ Euclidean space.

The constraint on the kernel limits the expressiveness of equivariant convolution when used without a deep structure. In Sec. 3.5, we introduce an equivariant transformer in $\mathcal{R}$ . The equivariant transformer generates the equivariant key, query, and value by leveraging the kernel derived in the convolution, thus yielding invariant attention weights and, hence, equivariant outputs. We provide a detailed derivation of two cases of cross-attention: the equivariant transformer from $\mathcal{R}$ to $\mathcal{R}$ and the equivariant transformer from $\mathcal{R}$ to $\mathbb{R}^3$ . In the first case, the features that generate the key and value are attached to source rays, while the feature generating the query is attached to the target ray. In the second case, the feature generating the query is attached to the target point.
If an object or a scene undergoes a rigid transformation and is resampled by the same multiple cameras, the $SE(3)$ group action is not transitive in the light field sample. This lack of transitivity can significantly impact the computation of equivariant features, mainly because the views are sparse, unlike densely sampled point clouds. Object motion introduces new content, resulting in previously non-existing rays in the light field sampling. Hence, our equivariance is an exact equivariance with respect to the choice of coordinate frame. In the 3D reconstruction task, we experimentally show that equivariance is effective for small camera motions or arbitrary object rotations and generally provides more expressive representations. In the $3D$ object reconstruction application, we first apply an equivariant convolutional network in ray space to obtain the equivariant features attached to rays. We then apply equivariant convolution and an equivariant transformer from $\mathcal{R}$ to $\mathbb{R}^3$ to obtain equivariant features attached to the query point, which are used to calculate the signed distance function (SDF) values and ultimately reconstruct the object. In the generalized rendering task, our model queries a target ray and obtains neighboring rays from source views. Our composition of equivariant modules is based on IBRNet [61], which consists of view feature aggregation and a ray transformer. We replace the view feature aggregation in [61] with the equivariant convolution and transformer over rays, and the ray transformer part with the equivariant transformer over the points along the ray to get the density and color of the point; see Sec. 3.3. We summarize here our main contributions:

(1) We model the ray space as a homogeneous space with $SE(3)$ as the acting group, and we propose the $SE(3)$ -equivariant generalized convolution as the fundamental operation on a light field whose values may be radiance or features.
We derive two $SE(3)$ -equivariant convolutions, which take ray features as input and produce output ray features and point features, respectively.
(2) To enhance the feature expressiveness, we extend the equivariant convolution to an equivariant transformer in $\mathcal{R}$ , in particular, a transformer from $\mathcal{R}$ to $\mathcal{R}$ and a transformer from $\mathcal{R}$ to $\mathbb{R}^3$ .
(3) We adapt and compose the equivariant convolution and transformer modules for $3D$ reconstruction from multiple views and generalized rendering from multi-view features. The experiments demonstrate the equivariance of our models.

# 2 Related Work

Equivariant Networks Group equivariant networks [15, 65, 62, 55, 63, 12, 19, 17, 22, 24, 23] provide deep learning pipelines that are equivariant by design with respect to group transformations of the input. While inputs like point clouds, 2D and 3D images, and spherical images have been studied extensively, our work is the first, as far as we know, to study equivariant convolution and cross-attention on light fields. The convolutional structure on homogeneous spaces or groups is sufficient and necessary for equivariance with respect to compact group actions, as proved in [18, 1, 36]. Recently, Cesa et al. [8] and Xu et al. [70] provided a uniform way to design the steerable kernel in an equivariant convolutional neural network on a homogeneous space using Fourier analysis of the stabilizer group and the acting group, respectively, while Finzi et al. [26] proposed a numerical algorithm to compute a kernel by solving the linear equivariant map constraint. For arbitrary Lie groups, Finzi et al. [25], MacDonald et al. [39], and Bekkers [4] designed uniform group convolutional neural networks. The fact that any $O(n)$ -equivariant function can be expressed in terms of a collection of scalars is shown in [59]. For general manifolds, Cohen et al. [16] and Weiler et al.
[64] derived the general steerable kernel from a differential geometry perspective, where the group convolution on homogeneous space is a special case. The equivalent derivation for the light field is in the Appendix. Recently, equivariant transformers drew increasing attention, in particular for 3D point cloud analysis and reconstruction [27, 48, 10, 7]. A general equivariant self-attention mechanism for arbitrary groups was proposed in [44, 43], while an equivariant transformer model for Lie groups was introduced in Hutchinson et al. [32]. We are the first to propose an equivariant attention model in the $3D$ ray space. + +Light Field and Neural Rendering from Multiple Views The plenoptic function introduced in perception [6] and later in graphics [37] brought a new light into the scene representation problem and was directly applicable to the rendering problem. Instead of reconstructing and then rendering, light fields enabled rendering just by sampling the right rays. Recently, learning-based light field reconstruction [41, 34, 5, 66, 51, 2] became increasingly popular for novel view synthesis, while [50, 54, 53] proposed non-equivariant networks in the ray space. Due to the smaller dimension of the ray space, the networks in the ray space are more efficient compared to neural radiance fields [42], which leverages volumetric rendering. Several studies [73, 61, 53, 50, 11, 38, 13, 31, 58, 20] concentrate on generalizable rendering. These works are similar to ours in that they obtain the $3D$ prior from the $2D$ images, but they are not equivariant since they explicitly use the coordinates of the points or the rays in the network. + +The most related equivariant rendering approaches to us are [21, 47, 45]. The equivariance in the paper [21] is not the equivariance we mean in this work: [21] enforces the geometric consistency via a loss function. 
[47] is not strictly equivariant and it depends on the assumption of upright views and the camera ID embeddings, which results in its non-equivariance to camera permutation. It achieves data-driven equivariance by randomly choosing the first camera frame as the canonical frame. While [45] addresses the frame problem through relative pose, it is not theoretically equivariant in cases of individual camera rotations around their axes or minor individual rotations accompanied by small content changes. We want to emphasize that our central contribution is to propose an equivariant convolution and transformer on ray space, which can be integrated into a wide range of 3D learning models. + +Reconstruction from Multiple Views Dense reconstruction from multiple views is a well-established field of computer vision with advanced results even before the introduction of deep learning [28]. Such approaches cannot take advantage of shape priors and need a lot of views to provide a dense reconstruction. Deep learning enabled semantic reconstruction, i.e., the reconstruction from single or multiple views by providing the ground-truth 3D shape during training [14, 67, 68, 40]. These approaches decode the object from a global code without using absolute or relative camera poses. Regression of absolute or relative poses applied in [35, 71, 56, 69, 72, 57, 3, 46, 33, 20] is non-equivariant. + +# 3 Method + +In this section, we will first introduce the (feature) field on the ray space and $3D$ Euclidean space, respectively, and the corresponding $SE(3)$ group actions on the values of the fields in Sec. 3.1. To offer readers a holistic grasp—from a broad overview down to the intricate specifics and from foundational concepts to advanced techniques—we will present the reconstruction and the generalized rendering with the neural components of their architectures (convolutional and attentional) and their inputs and outputs in Sec. 3.2 and Sec. 3.3. 
Following that, we expose our central contribution: equivariant convolution and attention in ray space in Sec. 3.4 and Sec. 3.5. + +# 3.1 Feature field on Ray Space and $3D$ Euclidean Space + +# 3.1.1 Ray Space + +The ray space is the space of oriented light rays. As introduced in the introduction and App. Ex. 1, we use Plücker coordinates to parameterize the ray space $\mathcal{R}$ : for any ray $x \in \mathcal{R}$ , $x$ can be denoted as $(d, m)$ , where $d \in \mathbb{S}^2$ is the direction of the ray, and $m = x \times d$ is the moment of the ray with $x$ + +![](images/f1e90f537bee7618d76c44425acedeedb791115223c2795db594397ae80d6b85.jpg) +Figure 1: Feature attached to rays: we show the scalar feature and type-1 feature. When $\rho_{2}$ is the trivial representation, tensor features can be viewed in the plane orthogonal to the ray (the blue plane). When rotations act on the feature field, the scalar feature only changes position as attached to the rays: $(\mathcal{L}_g f)(x) = f(g^{-1}x)$ ; while the type-1 feature changes position and is itself rotated: $(\mathcal{L}_g f)(x) = \rho (\mathrm{h}(g^{-1},x)^{-1})f(g^{-1}x)$ , where $\rho (\gamma ,t) = e^{i\gamma}$ . + +![](images/dc00f4c0dfd3915d12eb3b9e606dd0f5af9a01e96495899b9fe5807bfb0ec52c.jpg) +Figure 2: Features attached to points: we show scalars and vectors (type-1 features). The black dot in the figure is the point, and the square and the vectors are the scalar and type-1 features attached to the point. When $g \in SE(3)$ acts on the feature field, we will see that the scalars are kept the same while the attached position is rotated, and the vector features change their position and alter their direction. + +being a point on the ray. Then any $g = (R, t) \in SE(3)$ acts on the ray space as: + +$$ +g x = g (\boldsymbol {d}, \boldsymbol {m}) = (R \boldsymbol {d}, R \boldsymbol {m} + \boldsymbol {t} \times (R \boldsymbol {d})). 
\tag {1}
$$

The ray space $\mathcal{R}$ is a homogeneous space with a transitive group action by $SE(3)$ . Given the origin of the homogeneous space as $\eta = ([0,0,1]^T,[0,0,0]^T)$ (the line representing the $z$ -axis), the stabilizer group $H$ that leaves $\eta$ unchanged is $SO(2)\times \mathbb{R}$ (the rotations around and translations along the ray). The ray space is, thus, isomorphic to the quotient space $\mathcal{R}\cong SE(3) / (SO(2)\times \mathbb{R})$ . We parameterize the stabilizer group $H$ as $H = \{(\gamma ,t)\mid \gamma \in [0,2\pi),t\in \mathbb{R}\}$ .

We follow the generalized convolution derivation for other homogeneous spaces in [18], which requires the use of principal bundles, section maps, and twists [29], explained in appendix section A.2 and onwards. $SE(3)$ can be viewed as a principal $SO(2)\times \mathbb{R}$ -bundle with the projection $p:SE(3)\to \mathcal{R}$ given by $p(g) = g\eta$ for any $g\in SE(3)$ ; a section map $s:\mathcal{R}\rightarrow SE(3)$ can be defined such that $p\circ s = id_{\mathcal{R}}$ . In App. Example 6, we elaborate on how we define the section map from the ray space to $SE(3)$ in our model. In general, the action of $SE(3)$ induces a twist, as $gs(x)\neq s(gx)$ . The twist is characterized by the twist function $\mathrm{h}:SE(3)\times \mathcal{R}\to SO(2)\times \mathbb{R}$ with $gs(x) = s(gx)\mathrm{h}(g,x)$ ; we provide the twist function in our model and its visualization in App. Example 6.

# 3.1.2 Light Field

The light field can be modeled as a function from the ray space to a vector space, $f: \mathcal{R} \to V$ . We also need to define the $SE(3)$ group action on the values of that field. Since the group action will be on a vector space $V$ , we will use the corresponding group representation of the stabilizer group $\rho: SO(2) \times \mathbb{R} \to GL(V)$ ; see details in App. A.3.
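Before turning to light field examples, the ray-space action of Eq. 1 can be sanity-checked numerically. The NumPy sketch below (our own helper names, not from the paper's code) verifies that acting twice agrees with acting by the composed group element $g_1 g_2 = (R_1 R_2,\, R_1 t_2 + t_1)$ , and that the Plücker constraint $d \cdot m = 0$ is preserved:

```python
import numpy as np

def random_rotation(rng):
    """Random matrix in SO(3) via QR decomposition."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.linalg.det(q)  # flip sign if det = -1

def act(R, t, d, m):
    """Action of g = (R, t) in SE(3) on a ray (d, m), per Eq. 1."""
    d_new = R @ d
    return d_new, R @ m + np.cross(t, d_new)

rng = np.random.default_rng(0)
p = rng.normal(size=3)                       # a point on the ray
d = rng.normal(size=3); d /= np.linalg.norm(d)
m = np.cross(p, d)                           # moment of the ray

(R1, t1), (R2, t2) = [(random_rotation(rng), rng.normal(size=3)) for _ in range(2)]

# g1 (g2 x) must equal (g1 g2) x, with g1 g2 = (R1 R2, R1 t2 + t1)
d_seq, m_seq = act(R1, t1, *act(R2, t2, d, m))
d_cmp, m_cmp = act(R1 @ R2, R1 @ t2 + t1, d, m)
assert np.allclose(d_seq, d_cmp) and np.allclose(m_seq, m_cmp)

# the action preserves the Pluecker constraint d . m = 0
d_g, m_g = act(R1, t1, d, m)
assert abs(np.dot(d_g, m_g)) < 1e-9
```

The moment check follows because $(Rd)\cdot(Rm) = d\cdot m$ and $(Rd)\cdot(t\times Rd) = 0$.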
For example, a light field can be a radiance field $f$ that maps the ray space of oriented rays to their observed radiance (RGB), $f: \mathcal{R} \to \mathbb{R}^3$ , which is a concatenation of three scalar fields over $\mathcal{R}$ . The group representation $\rho$ in this case is the identity, and $g \in SE(3)$ acts on the radiance field $f$ as $(\mathcal{L}_g f)(x) = f(g^{-1}x)$ , shown as the scalar features in Fig. 1. Given that the stabilizer $H = SO(2) \times \mathbb{R}$ is a product group, the stabilizer representation can be written as the product $\rho(\gamma, t) = \rho_1(\gamma) \otimes \rho_2(t)$ , where $\rho_1$ is a group representation of $SO(2)$ and $\rho_2$ is a group representation of $\mathbb{R}$ . If the light field is a feature field (Fig. 1) with $\rho_2$ being the identity representation and $\rho_1$ corresponding to a type-1 field, $\rho_1(\gamma) = e^{i\gamma}$ , then the type-1 features change position and orientation when $g \in SE(3)$ acts on the field. Having explained the examples of scalar (type-0) and type-1 fields, we introduce the action on any feature field $f$ as [18]:

$$
(\mathcal{L}_g f)(x) = \rho\left(\mathrm{h}(g^{-1}, x)^{-1}\right) f(g^{-1} x), \tag{2}
$$

where $\rho$ is the group representation of $SO(2) \times \mathbb{R}$ corresponding to the space $V$ , determined by the field type of $f$ , and $\mathrm{h}$ is the twist function introduced by $SE(3)$ as shown in App. Example 6.

# 3.1.3 Feature Field on $\mathbb{R}^3$

$\mathbb{R}^3$ is also a homogeneous space of $SE(3)$ , like the ray space $\mathcal{R}$ , with stabilizer group $SO(3)$ , as stated in App. Example 2. Any $g = (R, t) \in SE(3)$ acts on a field $f$ over $\mathbb{R}^3$ as follows

![](images/12c5deacee63284da96ca273ef91cc18de02f5dd63596dd8c72b2f0a73c96020.jpg)
Figure 3: The pipeline of equivariant $3D$ reconstruction: Firstly, we obtain the feature field over the ray space.
Secondly, we perform an equivariant convolution from ray space to point space. Thirdly, we apply an $SE(3)$ -equivariant cross-attention module to obtain an equivariant feature for a query.

[18]:

$$
\left(\mathcal{L}_g f\right)(x) = \rho(R) f\left(R^{-1}(x - t)\right)
$$

where $\rho$ is the group representation of $SO(3)$ , since the twist function can be independent of the $3D$ position due to the fact that $SE(3) = \mathbb{R}^3 \rtimes SO(3)$ is a semidirect product group, as stated in App. Example 4. The feature field over $\mathbb{R}^3$ and the corresponding group action are also used in [55, 63]. Fig. 2 visualizes the scalar feature ( $l_{out} = 0$ ) and vector feature ( $l_{out} = 1$ ) attached to one point, offering an intuitive understanding of the feature field over $\mathbb{R}^3$ .

Given the feature fields on the ray space and on $3D$ Euclidean space and the corresponding group actions of $SE(3)$ , we will show two 3D multi-view applications of the equivariant convolution and transformer: $3D$ reconstruction and generalized neural rendering. In each application, we start with the specific definition of equivariance and then outline the corresponding pipeline.

# 3.2 Equivariant 3D Reconstruction

The radiance field serves as the input for the $3D$ reconstruction, which ultimately generates a signed distance field (SDF) denoted by the function $e:\mathbb{R}^3\to \mathbb{R}$ . As aforementioned, the radiance field is the multi-channel scalar field over $\mathcal{R}$ , while the SDF is a scalar field over $\mathbb{R}^3$ .
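For intuition on fields over $\mathbb{R}^3$ : a transformed scalar field satisfies $(\mathcal{L}_g e)(x) = e(g^{-1}x)$, and its gradient transforms as a type-1 (vector) field with $\rho(R) = R$. A small NumPy check with a toy field (the field $f(x) = \sin(a \cdot x)$ and all names are our own illustrative choices, not part of the paper's pipeline):

```python
import numpy as np

a = np.array([0.3, -1.2, 0.7])            # toy scalar field f(x) = sin(a . x)
f      = lambda x: np.sin(a @ x)
grad_f = lambda x: np.cos(a @ x) * a      # its gradient, a type-1 field

rng = np.random.default_rng(1)
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = q * np.linalg.det(q)                  # random rotation in SO(3)
t = rng.normal(size=3)
x = rng.normal(size=3)

g_inv_x = R.T @ (x - t)                   # g^{-1} x

# vector (type-1) field action: (L_g grad_f)(x) = R grad_f(g^{-1} x)
lhs = R @ grad_f(g_inv_x)

# same answer obtained by differentiating the transformed scalar field:
# f_g(x) = sin(a . R^T (x - t)) = sin((R a) . (x - t)),
# so grad f_g(x) = cos((R a) . (x - t)) * (R a)
rhs = np.cos((R @ a) @ (x - t)) * (R @ a)
assert np.allclose(lhs, rhs)
```

The assertion confirms that transforming the scalar field and then taking the gradient agrees with transforming the gradient field with $\rho(R) = R$.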
A $3D$ reconstruction $\Phi :\mathcal{F}\rightarrow \mathcal{E}$ , where $\mathcal{F}$ denotes the space of radiance fields and $\mathcal{E}$ denotes the space of signed distance fields, is equivariant when for any $g\in SE(3)$ , any $x\in \mathbb{R}^3$ , and any $f\in \mathcal{F}$ , $\Phi (\mathcal{L}_g f)(x) = \mathcal{L}_g^\prime (\Phi (f))(x)$ , where $\mathcal{L}_g$ and $\mathcal{L}_g^\prime$ are the group actions on the light field and the SDF, respectively. Specifically, as $f$ and $e$ are scalar fields, $(\mathcal{L}_g f)(x) = f(g^{-1}x)$ for any $x\in \mathcal{R}$ , and $(\mathcal{L}_g^\prime e)(x) = e(g^{-1}x)$ for any $x\in \mathbb{R}^3$ .

In practice, we have a finite sampling of the radiance field corresponding to the pixels of the multiple views, $V = \{f(x)\mid x\in L_V\}$ , where $L_{V}$ denotes the set of rays of the multiple views and $f\in \mathcal{F}$ is the radiance field from which the multi-view pixels are sampled. The 3D reconstruction $\Phi$ is equivariant when for any $g\in SE(3)$ and any $x\in \mathbb{R}^3$ : $\Phi (g\cdot V)(x) = \Phi (V)(g^{-1}x)$ . If we denote $V$ as $(L_V,f)$ , then $g\cdot V = (g\cdot L_V,\mathcal{L}_gf)$ , where $g\cdot L_V$ is $g$ acting on the rays as defined in Eq. 1.

We achieve equivariance using three steps as illustrated in Fig. 3: (1) the transition from pixel colors to a feature-valued light field (equi-CNN over rays), (2) the computation of features in $\mathbb{R}^3$ from features on the ray space by equivariant convolution from $\mathcal{R}$ to $\mathbb{R}^3$ , and (3) the equivariant transformer with the query generated by the feature on the point at which we want to compute the SDF and the key/value generated by features on rays. Note that we need (3) following (2) because the output feature of a single convolution layer is not expressive enough due to the constrained kernel. For the detailed practical adaptation of the convolution and transformer in $3D$ reconstruction, please see the App.
B, where we approximate the intra-view convolution with an $SE(2)$ -equivariant convolution.

# 3.3 Generalized Neural Rendering

The light feature field $f_{in}:\mathcal{R}\to V$ serves as the input for neural rendering, which ultimately generates the light field $f:\mathcal{R}\rightarrow \mathbb{R}^3$ , a multi-channel scalar field over $\mathcal{R}$ . A neural rendering $\Psi :\mathcal{I}\rightarrow \mathcal{F}$ , where $\mathcal{I}$ denotes the space of light feature fields and $\mathcal{F}$ denotes the space of light fields, is equivariant when for any $g\in SE(3)$ , any $x\in \mathcal{R}$ , and any $f_{in}\in \mathcal{I}$ , $\Psi (\mathcal{L}_gf_{in})(x) = \Psi (f_{in})(g^{-1}x)$ , where $\mathcal{L}_g$ is the group operator on the light feature field $f_{in}$ , as shown in Eq. 2, depending on the feature type. In the experiments of this paper, the input light feature field is

![](images/ad4226fec644e3739f9ff83a8f95dd7abaaa594f202e20be8f29633ba108fe62.jpg)
Figure 4: The pipeline of equivariant neural rendering. Firstly, we obtain the features of the points along the target ray through convolution over rays. Secondly, we apply the equivariant cross-attention module to obtain features for generating the color of the points. Finally, we use equivariant self-attention over the points along the ray to obtain features for generating the density of points.
Similar to reconstruction, in practice, the neural rendering $\Psi$ is equivariant when for any $g \in SE(3)$ and any $x \in \mathcal{R}$ : $\Psi(g \cdot V)(x) = \Psi(V)(g^{-1}x)$ , where $V = \{f_{in}(x) | x \in L_V\}$ , and if we denote $V$ as $(L_V, f_{in})$ , then $g \cdot V = (g \cdot L_V, \mathcal{L}_g f_{in})$ , + +By restricting the field type of the output field over rays to have a group representation of $SO(2) \times \mathbb{R}$ as $\rho(\gamma, t) = \rho_1(\gamma) \otimes \rho_2(t)$ , where $\rho_2$ is the regular representation, we can obtain the feature of points along the ray by convolution or transformer from $\mathcal{R}$ to $\mathcal{R}$ . See App. Example 9 for more explanation of the regular representation. Alternatively, we can obtain the desired feature by applying convolution or transformer from $\mathcal{R}$ to $\mathcal{R}$ , with output features attached to the target ray corresponding to different irreducible representations of the stabilizer group. These features can be interpreted as Fourier coefficients of the function of the points along the ray. The Inverse Fourier Transform yields features for the points along the ray. More details are in the App. I.1. + +The feature of the points along the ray can be used to generate density and color for volumetric rendering [61, 73], or fed into attention and pooling for the final ray feature [58]. In this paper, we opt to generate the density and color and utilize volumetric rendering, which can be viewed as a specialized equivariant convolution from points to the ray. Method details are in App. I. + +We achieve the equivariant rendering through three steps as shown in Fig. 
4: (1) we apply equivariant convolution from rays to rays to get the equivariant features for points along the rays, which form a specific field type over $\mathcal{R}$ ; (2) to enhance the feature expressivity, we apply an equivariant transformer from rays to rays to get the color for each point; (3) we apply equivariant self-attention over the points along the ray to reason over the points on the same ray; the output features of the points are fed to multiple perceptron layers to get the density of the points.

# 3.4 Convolution in Ray Space

# 3.4.1 Convolution from Rays to Rays

The convolution, as stated in App. A.4 and [18], is then defined as

$$
f^{l_{out}}(x) = \int_{\mathcal{R}} \kappa(s(x)^{-1} y)\, \rho_{in}\left(\mathrm{h}(s(x)^{-1} s(y))\right) f^{l_{in}}(y)\, dy, \tag{3}
$$

where $\mathrm{h}(g)$ is the simplified form of the twist $\mathrm{h}(g,\eta)$ . Eq. 3 is equivariant to $SE(3)$ if and only if the convolution kernel $\kappa$ satisfies $\kappa(hx) = \rho_{out}(h)\kappa(x)\rho_{in}(\mathrm{h}^{-1}(h,x))$ , where $\rho_{in}$ and $\rho_{out}$ are the group representations of $SO(2) \times \mathbb{R}$ corresponding to the input feature type $l_{in}$ and the output feature type $l_{out}$ , respectively. We derive the solutions of the kernel in App. Example 9.

Local kernel support The equivariance holds even if we constrain the kernel to be local: we require $\kappa(x) \neq 0$ only when $x = (d_x, m_x)$ satisfies $\angle(d_x, [0, 0, 1]^T) \leq \beta_0$ and $d(x, \eta) \leq d_0$ . This local support does not violate the constraint $\kappa(hx) = \rho_{out}(h)\kappa(x)\rho_{in}(\mathrm{h}^{-1}(h,x))$ .

Then, the convolution in Eq. 3 is computed over the neighbors only, as visualized in Fig. 5. In Fig.
5, any ray $y = (d_y, m_y)$ (denoted in blue) in the neighborhood of a ray $x = (d_x, m_x)$ goes through the cylinder with $x$ as the axis and $d_0$ as the radius, since $d(x, y) \leq d_0$ .

Moreover, for any such $y$ , $\angle (\pmb{d}_y,\pmb{d}_x)\leq \beta_0$ . Any ray $y\in \mathcal{N}(x)$ lies on a tangent plane of the cylinder with $x$ as the axis and $d(x,y)$ as the radius when $d(x,y) > 0$ .

# 3.4.2 Convolution from Rays to Points

![](images/a6f178c3a59be21a19556d7553d0f2ad4e956a092217ffef97bb6f0eea6f5373.jpg)
Figure 5: Neighborhood of a ray $x$ in the convolution.

In applications such as $3D$ reconstruction, key point detection, and $3D$ segmentation, we expect the output to be a field over $\mathbb{R}^3$ . Using a convolution, we will define an equivariant map from light fields (fields on $\mathcal{R}$ ) to fields on $\mathbb{R}^3$ . We denote with $H_{1}$ and $H_{2}$ the stabilizer groups for the input and output homogeneous spaces, respectively, i.e., $SO(2)\times \mathbb{R}$ and $SO(3)$ in this case. As shown in App. Example 4, we can choose the section map $s_2:\mathbb{R}^3\to SE(3)$ as $s_2(x) = (I,x)$ for any $x\in \mathbb{R}^3$ , where $I$ is the identity matrix. Following [18], the convolution from rays to points becomes:

$$
f_2^{l_{out}}(x) = \int_{\mathcal{R}} \kappa(s_2(x)^{-1} y)\, \rho_{in}\left(\mathrm{h}_1(s_2(x)^{-1} s_1(y))\right) f_1^{l_{in}}(y)\, dy,
$$

where $\mathrm{h}_1$ is the twist function corresponding to the section $s_1: \mathcal{R} \to SE(3)$ defined above, and $\rho_{in}$ is the group representation of $H_1$ ( $SO(2) \times \mathbb{R}$ ) corresponding to the feature type $l_{in}$ . The subscripts 1 and 2 denote the homogeneous spaces the features are defined on.
The convolution is equivariant if and only if the kernel $\kappa$ satisfies $\kappa(h_2x) = \rho_{out}(h_2)\kappa(x)\rho_{in}(\mathrm{h}_1^{-1}(h_2,x))$ for any $h_2 \in H_2$ , where $\rho_{out}$ is the group representation of $H_2$ ( $SO(3)$ ) corresponding to the feature type $l_{out}$ .

In 3D reconstruction, $f^{l_{in}}$ is the scalar field over $\mathcal{R}$ , i.e., $\rho_{in} = 1$ . The convolution is simplified to $f_2^{l_{out}}(x) = \int_{G / H_1} \kappa(s_2(x)^{-1}y)f_1^{l_{in}}(y)dy$ and the corresponding constraint becomes $\kappa(h_2x) = \rho_{out}(h_2)\kappa(x)$ . App. Example 10 provides analytical kernel solutions.

# 3.5 Equivariant Transformer over Rays

We can extend the equivariant convolution to an equivariant transformer model. In general, the equivariant transformer can be formulated as:

$$
f_2^{out}(x) = \sum_{y \in \mathcal{N}(x)} \frac{\exp\left(\left\langle f_q\left(x, f_2^{in}\right), f_k\left(x, y, f_1^{in}\right)\right\rangle\right)}{\sum_{y' \in \mathcal{N}(x)} \exp\left(\left\langle f_q\left(x, f_2^{in}\right), f_k\left(x, y', f_1^{in}\right)\right\rangle\right)} f_v(x, y, f_1^{in}), \tag{4}
$$

where the subscript 1 denotes the homogeneous space $M_1 \cong G / H_1$ of the feature field $f_1^{in}$ that generates the key and value in the transformer; the subscript 2 denotes the homogeneous space $M_2 \cong G / H_2$ of the feature field $f_2^{in}$ that generates the query in the transformer, which is also the homogeneous space of the output feature $f_2^{out}$ ; $x$ and $y$ represent elements in the homogeneous spaces $M_2$ and $M_1$ , respectively, where $y \in \mathcal{N}(x)$ indicates that the attention model is applied over $y$ , the neighbors of $x$ based on a defined metric. $f_k$ , $f_q$ , and $f_v$ are the constructed equivariant keys, queries, and values of the transformer.
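For intuition, here is a toy discrete instance of the rays-to-points attention with scalar (type-0) input and output. With trivial representations the kernel constraint reduces to invariance of $\kappa$ under the stabilizer, so a kernel depending only on the (invariant) point-to-ray distance yields invariant attention weights and an equivariant output. All function names below are ours, and the query here is just a fixed scalar; the paper's full construction with higher-type features is in App. F:

```python
import numpy as np

def dist_point_ray(p, d, m):
    # distance from point p to the ray (d, m): an SE(3)-invariant of the pair
    return np.linalg.norm(np.cross(p, d) - m)

def scalar_attention_to_point(p, rays, feats, wq=1.0, wk=0.8, wv=1.3):
    """Toy scalar attention: keys/values from a distance-based invariant kernel."""
    r = np.array([dist_point_ray(p, d, m) for d, m in rays])
    kernel = np.exp(-r ** 2)                   # invariant kernel
    keys, values = wk * kernel * feats, wv * kernel * feats
    w = np.exp(wq * keys - np.max(wq * keys))  # softmax over invariant scores
    w /= w.sum()
    return float(w @ values)

rng = np.random.default_rng(2)
q_, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R, t = q_ * np.linalg.det(q_), rng.normal(size=3)  # random g = (R, t)

rays, feats = [], rng.normal(size=5)
for _ in range(5):
    pt = rng.normal(size=3)
    d = rng.normal(size=3); d /= np.linalg.norm(d)
    rays.append((d, np.cross(pt, d)))

p = rng.normal(size=3)
g_rays = [(R @ d, R @ m + np.cross(t, R @ d)) for d, m in rays]
out = scalar_attention_to_point(p, rays, feats)
out_g = scalar_attention_to_point(R @ p + t, g_rays, feats)
assert np.isclose(out, out_g)    # Phi(g.V)(g p) == Phi(V)(p)
```

The invariance of the point-to-ray distance follows because the cross product and moment both rotate by $R$, with the translation terms canceling.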
$f_k$ and $f_v$ are constructed with the equivariant kernels $\kappa_k$ and $\kappa_v$ , while $f_q$ is constructed through an equivariant linear map; see App. F for the detailed construction.

When the transformer is a self-attention model, the homogeneous spaces $M_1$ and $M_2$ are the same since $f_2^{in} = f_1^{in}$ . The above equivariant transformer can be applied to homogeneous spaces other than $\mathcal{R}$ and $\mathbb{R}^3$ , and to acting groups other than $SE(3)$ . This paper presents the equivariant cross-attention model over rays, i.e., $M_1$ is $\mathcal{R}$ . When the transformer is the cross-attention from rays to rays, $M_2$ is also $\mathcal{R}$ , and the equivariant kernels $\kappa_k$ and $\kappa_v$ are the convolution kernels we derived in the convolution from rays to rays in Sec. 3.4.1. When the transformer is the cross-attention from rays to points, $M_2$ is $\mathbb{R}^3$ , and the equivariant kernels $\kappa_k$ and $\kappa_v$ are the convolution kernels we derived in the convolution from rays to points in Sec. 3.4.2. With the construction in App. F, we claim that the transformer from rays to rays or from rays to points, as shown in Eq. 4, is equivariant. The proof is in App. G.

![](images/35d39e0ebe02866b49399b4f3da01bbd821bd0eabc54b4c71c33911ce6c1021e.jpg)
Figure 6: In the equivariant transformer (L), positional encoding is not directly used due to its lack of equivariance. Instead, the relative position within the kernel is utilized. To generate the query $f_{q}$ , we multiply the feature $f_{2}^{in}(x)$ (pre-existing or produced by convolution) attached to $x$ (in $\mathcal{R}$ or $\mathbb{R}^3$ , depending on the task) by the designed equivariant linear matrix $W_{q}$ (see App. F). The key $f_{k}$ and value $f_{v}$ are constructed using the designed equivariant kernels $\kappa_{k}$ and $\kappa_{v}$ . The transformer is equivariant due to the equivariant $f_{k}$ , $f_{q}$ , and $f_{v}$ .
The conventional transformer (R) uses point position encoding for the query feature and obtains the query, key, and value through non-equivariant conventional linear mappings.

![](images/7f1733c071cbb1e2345ca567d5ae9f79c27ea3bc06c04244985db9a86f255f0b.jpg)

![](images/cda847612a89ad610c4ae50570bf75593d3ed4e02337d2a137099dfe1fa18ce5.jpg)
Figure 7: In the equivariant transformer (U), the query, key, and value are equivariant and can be composed of different types of features; they can be scalars, vectors, or higher-order tensors. The inner product, determined by the feature type, should apply to features of the same type. In contrast, the features in a conventional transformer (D) are not equivariant, contain no vectors or tensors, and the inner product is the conventional one.

To better understand the equivariant transformer, we visualize the comparison of the equivariant cross-attention transformer and the conventional transformer in Fig. 6. Meanwhile, as stated in App. F, the key, query, and value are generally composed of different types of features and are multi-channel, allowing for a multi-head attention mechanism. In Fig. 7, we visualize the comparison of the equivariant multi-head attention module from rays to points with the conventional multi-head attention module. The attention module from rays to rays follows a similar concept but with variations in the feature types due to the differing group representations of $SO(2) \times \mathbb{R}$ and $SO(3)$ .

# 4 Experiment

# 4.1 3D Object Reconstruction from Multiple Views

Datasets and Implementation We use the same train/val/test split of the ShapeNet dataset [9] and render the views ourselves for the equivariance test. We fix eight cameras at the eight corners of a cube, all pointing toward the object's center.
We use the following notation to denote the variety of transformations in training and testing: $I$ (no transformation), $Z$ (optical-axis rotation), $R$ (bounded 3-DoF camera rotation), $Y$ (vertical-axis object rotation), $SO(3)$ (full object rotation). The details of the five settings are in App. J.1.

As described in App. B, we use $SE(2)$ -equivariant CNNs to approximate the equivariant convolution over the rays. For the fusion from the ray space to the point space, we use one layer of convolution and three combined blocks of updating ray features and $SE(3)$ transformers. For more details, please see App. J.2.

Results We evaluate our model in seven experiment settings: $I/I$ , $I/Z$ , $I/R$ , $R/R$ , $Y/Y$ , $Y/SO(3)$ , and $SO(3)/SO(3)$ . The setting A/B indicates training the model on the A setup of the dataset and evaluating it on the B setup. Following previous works, we use IoU and Chamfer-L1 distance as the evaluation metrics. Quantitative results are reported in Table 1, and qualitative results in Fig. 8.

We compare with two other approaches: [69], which follows a classic paradigm that queries 3D positions that are then back-projected to obtain image features for aggregation, and [72], which was the state of the art in 3D object reconstruction from multiple views. Note that both baselines originally estimate the object poses, but we directly provide ground-truth poses to them. See App. J.4 for more qualitative results.

In Table 1, our model outperforms [72] and [69] by a large margin in the $I/Z$ , $I/R$ , and $Y/SO(3)$ settings. Although our model is theoretically not equivariant to arbitrary rotations of the object, $Y/SO(3)$ shows the robustness of our model to object rotation and its generalization ability to some extent. Our model outperforms the other models for the chair and car categories in $R/R$ and
**Chair**

| Method | I/I | I/Z | I/R | R/R | Y/Y | Y/SO(3) | SO(3)/SO(3) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FvOR w/ gt pose [72] | 0.691/0.099 | 0.409/0.253 | 0.398/0.257 | 0.669/0.113 | 0.687/0.103 | 0.518/0.194 | 0.664/0.114 |
| DISN w/ gt pose [69] | 0.725/0.094 | 0.335/0.396 | 0.322/0.405 | 0.500/0.201 | 0.659/0.120 | 0.419/0.303 | 0.549/0.174 |
| Ours | 0.731/0.090 | 0.631/0.130 | 0.592/0.137 | 0.689/0.105 | 0.698/0.102 | 0.589/0.142 | 0.674/0.113 |

**Airplane**

| Method | I/I | I/Z | I/R | R/R | Y/Y | Y/SO(3) | SO(3)/SO(3) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FvOR w/ gt pose [72] | 0.770/0.051 | 0.534/0.168 | 0.533/0.174 | 0.766/0.053 | 0.760/0.052 | 0.579/0.147 | 0.746/0.056 |
| DISN w/ gt pose [69] | 0.752/0.058 | 0.465/0.173 | 0.462/0.171 | 0.611/0.104 | 0.706/0.069 | 0.530/0.151 | 0.631/0.103 |
| Ours | 0.773/0.050 | 0.600/0.092 | 0.579/0.100 | 0.759/0.051 | 0.734/0.052 | 0.597/0.101 | 0.722/0.056 |

**Car**

| Method | I/I | I/Z | I/R | R/R | Y/Y | Y/SO(3) | SO(3)/SO(3) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FvOR w/ gt pose [72] | 0.837/0.090 | 0.466/0.254 | 0.484/0.258 | 0.816/0.107 | 0.830/0.094 | 0.496/0.240 | 0.798/0.111 |
| DISN w/ gt pose [69] | 0.822/0.089 | 0.610/0.232 | 0.567/0.236 | 0.772/0.135 | 0.802/0.098 | 0.614/0.205 | 0.769/0.123 |
| Ours | 0.844/0.081 | 0.739/0.142 | 0.741/0.150 | 0.836/0.089 | 0.830/0.089 | 0.744/0.137 | 0.813/0.097 |
Table 1: Results for the seven experiments of 8-view 3D reconstruction on the ShapeNet dataset. The metrics in each cell are IoU $\uparrow$ and Chamfer-L1 distance $\downarrow$ . We implement [72] and [69] ourselves on our equivariant dataset. For [69], we follow their work and conduct the multi-view reconstruction by pooling over the features of every view. Chamfer-L1 distance values are $\times 10$ .

$SO(3)/SO(3)$ settings, while it is slightly inferior to [72] in the airplane category. Notably, our model only requires relative camera poses, while [72] and [69] utilize camera poses relative to the object frame, leveraging an explicit positional encoding of the query point in the object frame, which is concatenated to the point feature. In addition, our model performs better in several experiments in the $I/I$ and $Y/Y$ settings. This superiority can be attributed to the $SE(3)$ equivariant attention model, which considers scalar features and ray directions. For a detailed discussion of the results, please see App. J.3.

We provide an ablation study of the effectiveness of the $SE(2)$ CNNs, the equivariant convolution, the transformer, and the type-1 (vector) features in our model. Meanwhile, we compare our method with a model that explicitly encodes the direction of rays. Please see App. J.5 for the details of the ablation study.

# 4.2 Neural Rendering

Datasets and Implementation We use the same training and test dataset as in [61], which consists of both synthetic and real data. Two experiment settings illustrate our model's equivariance: $I / I$ and $I / SO(3)$ . $I / I$ is the canonical setting, where we train and test the model in the same canonical frame defined in the dataset.
In the $I / SO(3)$ setting, we test the model trained in the canonical frame under arbitrarily rotated coordinate frames while preserving relative camera poses and the relative poses between the camera and the scene, thereby preserving the content of the multiple views. Each individual view itself is not transformed. Note that this experiment's $SO(3)$ setup differs from the $R$ and $SO(3)$ setups used in the reconstruction. Further details and discussions on this difference can be found in App. K.1.

![](images/96f5edcdbfa369bce0be0c7cdbaa880d70963767b2772f2a7be043cb8827febd.jpg)
Figure 8: Qualitative results for equivariant reconstruction. Left: input views. Right: reconstructed meshes of the different models and ground-truth meshes; how each model is trained and tested is explained in the text.

![](images/6ca43a43f953b38c5104e3099fce6b52caecd8749d9f79841d0e95198540b16b.jpg)
GT

![](images/415773baff04bb66f7303439d20e651437d0498d2cbff9257217d7c2f930b794.jpg)

![](images/3d4a912e725ae7bd0083f28446a056d0444df837bfbd2e0bde65786712ad1aff.jpg)
IBRNet (I/I)

![](images/bf43b77e0568e52a29d87d4641982267d8ebdee43421d38779694e989f6a9b32.jpg)

![](images/982601d8ce0589db98c2e8524f368c16454211878d2b52ba96fe60779b1d2b7c.jpg)
IBRNet (I/SO(3))
Figure 9: Qualitative results for generalized rendering. We observe a performance drop for IBRNet from $I$ to $SO(3)$ , while ours is robust to rotations in the test set.
![](images/7487515285ab5f55e64416c7cd4af2ad6ee27d627b1869d5b81ed6b1a3f9173d.jpg)

![](images/ecbfe9a230156f94475a715628bd8b16f6d862fb28673a38b613042b34e02f9e.jpg)
Ours (I/I)

![](images/34ab11bb1f54bbe9dac84729e12b65f6b7ed514af0ef271ef75d1b712239732b.jpg)

![](images/e844cb7d60c006c8f243c4c56aeb799d11bce81957a5f1bdc72272b1a7591b31.jpg)
Ours (I/SO(3))

![](images/094917dc252bc44cf97284eaffebe4b25ea7a0a930e46dbd9fe308fc45fdb00c.jpg)
GT

![](images/4621694c1597c40447b536eb324faa40c9cf29749bf62ecc99f1bfcdb2941251.jpg)

![](images/f734dcba9c50a557bed63d4d1c695158ad36be8ee7e4229463fb83692d781c30.jpg)
IBRNet (I/I)

![](images/6756c8bbd8a9c9207024c48d19453a8d665c98db11b3c72d43ba0abcff0b769c.jpg)

![](images/ad0d4989675c9ba33987eeca448f8e0a42d1c6578b579fc61649e4be29d35e2c.jpg)
IBRNet (I/SO(3))

![](images/22468406e584e2e98cde7eb05467ccd7f7658b6c6b99325768f1f7f3aeb2d3c5.jpg)

![](images/b4077cb434049b5923da2c58f3d7c9c5dc4a487b0e0cf07c2252b676ff9a0462.jpg)
Ours (I/I)

![](images/b9f96cda5e35489dfdfbea789631641b5d565bacd7946ead12b044deb2411f0c.jpg)

![](images/3129d36145a8e01ca61dd7bd99cbc2135500d65772354f30df1a5acf84c80534.jpg)
Ours (I/SO(3))
| Dataset | Method | PSNR↑ (I/I) | SSIM↑ (I/I) | LPIPS↓ (I/I) | PSNR↑ (I/SO(3)) | SSIM↑ (I/SO(3)) | LPIPS↓ (I/SO(3)) | Pix-Var↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Realistic Synthetic 360° [42] | IBRNet [61] | 26.91 | 0.928 | 0.084 | 26.77 | 0.923 | 0.091 | 66.58 |
| Realistic Synthetic 360° [42] | Ours | 26.90 | 0.929 | 0.086 | 26.90 | 0.929 | 0.086 | 0.00 |
| Real Forward-Facing [41] | IBRNet [61] | 25.13 | 0.817 | 0.205 | 24.60 | 0.797 | 0.223 | 52.66 |
| Real Forward-Facing [41] | Ours | 24.93 | 0.808 | 0.212 | 24.93 | 0.808 | 0.212 | 0.00 |
| Diffuse Synthetic 360° [49] | IBRNet [61] | 37.21 | 0.989 | 0.019 | 37.07 | 0.988 | 0.019 | 34.51 |
| Diffuse Synthetic 360° [49] | Ours | 37.11 | 0.987 | 0.019 | 37.11 | 0.987 | 0.019 | 0.00 |
Table 2: Results for the experiments on generalized rendering without per-scene tuning. The metrics are PSNR $\uparrow$ , SSIM $\uparrow$ , LPIPS $\downarrow$ , and pixel variance $\downarrow$ (denoted Pix-Var). The evaluation of IBRNet [61] is performed by testing the released model on both the canonical and the rotated test datasets.

Our model architecture is based on IBRNet [61], with modifications to the view feature aggregation and ray transformer components. Specifically, we replace the view feature aggregation in [61] with the equivariant convolution and transformer over rays, and the ray transformer with the equivariant self-attention over the points along the ray. For more information on the implementation details, please refer to App. K.2.

Results We compare with IBRNet in the $I/I$ and $I/SO(3)$ settings to show that our proposed modules can be embedded in an existing rendering framework and achieve equivariance. Following previous works on novel view synthesis, our evaluation metrics are PSNR, SSIM, and LPIPS [74]. At $I/SO(3)$ test time, we randomly rotate each test example six times and report the average metrics. Meanwhile, we record the maximum pixel variance and report its average value. We show a qualitative result in Fig. 9, where IBRNet presents several blurred transverse lines in the $I/SO(3)$ setting while ours is robust to the rotation. In Table 2, our model performs comparably with IBRNet [61] in the $I/I$ setting and shows no performance drop in the $I/SO(3)$ setting. The slight decrease in PSNR/SSIM/LPIPS for IBRNet from $I/I$ to $I/SO(3)$ can be attributed to its training process, which involves multiple datasets with different canonical frames and includes transformation augmentation, making the model more robust to coordinate frame changes. Additionally, conventional metrics like PSNR/SSIM may not directly capture image variations. Therefore, we introduce an additional metric, pixel variance, to better illustrate the changes.
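To make the pixel-variance metric concrete, here is a minimal numpy sketch of one way to compute it. The exact reduction (per-pixel variance across the six rotated renders of a view, then the maximum over pixels, averaged over the test set) is our reading of the text, not code from the paper:

```python
import numpy as np

def pixel_variance(renders):
    """renders: array of shape (n_rotations, H, W, 3), pixel values in [0, 255].

    Per-pixel variance across the rotated renders of the same view, reduced by
    the maximum over pixels (one image's "max pixel variance"); averaging this
    over the test set would give the reported Pix-Var.
    """
    var = np.var(np.asarray(renders, dtype=np.float64), axis=0)  # (H, W, 3)
    return var.max()

# A perfectly equivariant model renders the same image for every rotation,
# so its pixel variance is 0, matching the 0.00 entries in Table 2.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(1, 4, 4, 3))
assert pixel_variance(np.tile(img, (6, 1, 1, 1))) == 0.0
```

Unlike PSNR or SSIM against a fixed reference, this quantity compares the model's outputs against each other, so it directly measures the failure of equivariance.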
We observe that IBRNet [61] exhibits nonzero pixel variance across rotations, whereas our approach remains robust to rotation. Our method also performs comparably with IBRNet in the $I / SO(3)$ setting on DeepVoxels [49], because this synthetic data consists of Lambertian objects with simple geometry, where the ray directions do not significantly affect the radiance. For more qualitative results, see App. K.3.

# 5 Conclusion and Broader Impacts

To learn equivariant geometric priors from multiple views, we modeled the convolution on the light field as a generalized convolution on the homogeneous space of rays with $SE(3)$ as the acting group. To obtain expressive point features, we extended the convolution to equivariant attention over rays. The main limitation of the approach is the finite sampling of the light field: sampling the light field with sparse views cannot account for large object motions with drastic aspect changes, leading to a breakdown of equivariance. This novel general equivariant representation framework for light fields can inspire further work on 3D vision and graphics tasks. We do not see any direct negative impact of our work, but it could have negative societal consequences if misused without authorization, for example, on images that violate privacy.

# 6 Acknowledgement

The authors gratefully acknowledge support by the following grants: NSF FRR 2220868, NSF IIS-RI 2212433, NSF TRIPODS 1934960, NSF CPS 2038873.

# References

[1] Jimmy Aronsson. Homogeneous vector bundles and g-equivariant convolutional neural networks. Sampling Theory, Signal Processing, and Data Analysis, 20(2):1-35, 2022.
[2] Benjamin Attal, Jia-Bin Huang, Michael Zollhöfer, Johannes Kopf, and Changil Kim. Learning neural light fields with ray-space embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19819-19829, 2022.
+[3] Miguel Angel Bautista, Walter Talbott, Shuangfei Zhai, Nitish Srivastava, and Joshua M Susskind. On the generalization of learning-based 3d reconstruction. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2180-2189, 2021. +[4] Erik J Bekkers. B-spline cnns on lie groups. arXiv preprint arXiv:1909.12057, 2019. +[5] Mojtaba Bemana, Karol Myszkowski, Hans-Peter Seidel, and Tobias Ritschel. X-fields: Implicit neural view-, light-and time-image interpolation. ACM Transactions on Graphics (TOG), 39(6):1-15, 2020. +[6] James R Bergen and Edward H Adelson. The plenoptic function and the elements of early vision. Computational models of visual processing, 1:8, 1991. +[7] Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik Bekkers, and Max Welling. Geometric and physical quantities improve e (3) equivariant message passing. arXiv preprint arXiv:2110.02905, 2021. +[8] Gabriele Cesa, Leon Lang, and Maurice Weiler. A program to build e (n)-equivariant steerable cnns. In International Conference on Learning Representations, 2021. +[9] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. +[10] Evangelos Chatzipantazis, Stefanos Pertigkiozoglou, Edgar Dobriban, and Kostas Daniilidis. SE(3)-equivariant attention networks for shape reconstruction in function space. arXiv preprint arXiv:2204.02394, 2022. +[11] Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, and Hao Su. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14124-14133, 2021. +[12] Haiwei Chen, Shichen Liu, Weikai Chen, Hao Li, and Randall Hill. Equivariant point network for 3d point cloud analysis. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14514-14523, 2021. +[13] Yuedong Chen, Haofei Xu, Qianyi Wu, Chuanxia Zheng, Tat-Jen Cham, and Jianfei Cai. Explicit correspondence matching for generalizable neural radiance fields. arXiv preprint arXiv:2304.12294, 2023. +[14] Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In European conference on computer vision, pages 628-644. Springer, 2016. +[15] Taco Cohen and Max Welling. Group equivariant convolutional networks. In International conference on machine learning, pages 2990-2999. PMLR, 2016. +[16] Taco Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral cnn. In International conference on Machine learning, pages 1321-1330. PMLR, 2019. +[17] Taco S Cohen, Mario Geiger, Jonas Köhler, and Max Welling. Spherical cnns. arXiv preprint arXiv:1801.10130, 2018. +[18] Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant cnns on homogeneous spaces. Advances in neural information processing systems, 32, 2019. + +[19] Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea Tagliasacchi, and Leonidas Guibas. Vector neurons: A general framework for so (3)-equivariant networks. arXiv preprint arXiv:2104.12229, 2021. +[20] Yilun Du, Cameron Smith, Ayush Tewari, and Vincent Sitzmann. Learning to render novel views from wide-baseline stereo pairs. arXiv preprint arXiv:2304.08463, 2023. +[21] Emilien Dupont, Miguel Bautista Martin, Alex Colburn, Aditya Sankar, Josh Susskind, and Qi Shan. Equivariant neural rendering. In International Conference on Machine Learning, pages 2761-2770. PMLR, 2020. +[22] Carlos Esteves, Christine Allen-Blanchette, Ameesh Makadia, and Kostas Daniilidis. Learning so (3) equivariant representations with spherical cnns. 
In Proceedings of the European Conference on Computer Vision (ECCV), pages 52–68, 2018. +[23] Carlos Esteves, Yinshuang Xu, Christine Allen-Blanchette, and Kostas Daniilidis. Equivariant multiview networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1568-1577, 2019. +[24] Carlos Esteves, Ameesh Makadia, and Kostas Daniilidis. Spin-weighted spherical cnns. arXiv preprint arXiv:2006.10731, 2020. +[25] Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. In International Conference on Machine Learning, pages 3165-3176. PMLR, 2020. +[26] Marc Finzi, Max Welling, and Andrew Gordon Wilson. A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups. arXiv preprint arXiv:2104.09459, 2021. +[27] Fabian B Fuchs, Daniel E Worrall, Volker Fischer, and Max Welling. Se (3)-transformers: 3d rototranslation equivariant attention networks. arXiv preprint arXiv:2006.10503, 2020. +[28] Yasutaka Furukawa, Carlos Hernández, et al. Multi-view stereo: A tutorial. Foundations and Trends® in Computer Graphics and Vision, 9(1-2):1-148, 2015. +[29] Jean Gallier and Jocelyn Quaintance. Differential geometry and Lie groups: a computational perspective, volume 12. Springer Nature, 2020. +[30] Jiaming Han, Jian Ding, Nan Xue, and Gui-Song Xia. Redet: A rotation-equivariant detector for aerial object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2786-2795, 2021. +[31] Xin Huang, Qi Zhang, Ying Feng, Xiaoyu Li, Xuan Wang, and Qing Wang. Local implicit ray function for generalizable radiance field representation. arXiv preprint arXiv:2304.12746, 2023. +[32] Michael J Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, and Hyunjik Kim. Lietransformer: Equivariant self-attention for lie groups. 
In International Conference on Machine Learning, pages 4533–4543. PMLR, 2021. +[33] Hanwen Jiang, Zhenyu Jiang, Kristen Grauman, and Yuke Zhu. Few-view object reconstruction with unknown categories and camera poses. arXiv preprint arXiv:2212.04492, 2022. +[34] Nima Khademi Kalantari, Ting-Chun Wang, and Ravi Ramamoorthi. Learning-based view synthesis for light field cameras. ACM Transactions on Graphics (TOG), 35(6):1-10, 2016. +[35] Abhishek Kar, Christian Hane, and Jitendra Malik. Learning a multi-view stereo machine. Advances in neural information processing systems, 30, 2017. +[36] Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In International Conference on Machine Learning, pages 2747-2755. PMLR, 2018. +[37] Marc Levoy and Pat Hanrahan. Light field rendering. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 31-42, 1996. +[38] Xiaoxiao Long, Cheng Lin, Peng Wang, Taku Komura, and Wenping Wang. Sparseneus: Fast generalizable neural surface reconstruction from sparse views. arXiv preprint arXiv:2206.05737, 2022. + +[39] Lachlan E MacDonald, Sameera Ramasinghe, and Simon Lucey. Enabling equivariance for arbitrary lie groups. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8183-8192, 2022. +[40] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4460-4470, 2019. +[41] Ben Mildenhall, Pratul P Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, and Abhishek Kar. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics (TOG), 38(4):1-14, 2019. 
[42] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021.
[43] David Romero, Erik Bekkers, Jakub Tomczak, and Mark Hoogendoorn. Attentive group equivariant convolutional networks. In International Conference on Machine Learning, pages 8188-8199. PMLR, 2020.
[44] David W Romero and Jean-Baptiste Cordonnier. Group equivariant stand-alone self-attention for vision. arXiv preprint arXiv:2010.00977, 2020.
[45] Aleksandr Safin, Daniel Duckworth, and Mehdi SM Sajjadi. Repast: Relative pose attention scene representation transformer. arXiv preprint arXiv:2304.00947, 2023.
[46] Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, and Hao Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2304-2314, 2019.
[47] Mehdi SM Sajjadi, Henning Meyer, Etienne Pot, Urs Bergmann, Klaus Greff, Noha Radwan, Suhani Vora, Mario Lucic, Daniel Duckworth, Alexey Dosovitskiy, et al. Scene representation transformer: Geometry-free novel view synthesis through set-latent scene representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6229-6238, 2022.
[48] Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E (n) equivariant graph neural networks. In International conference on machine learning, pages 9323-9332. PMLR, 2021.
[49] Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, and Michael Zollhofer. Deepvoxels: Learning persistent 3d feature embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2437-2446, 2019.
[50] Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Fredo Durand.
Light field networks: Neural scene representations with single-evaluation rendering. Advances in Neural Information Processing Systems, 34:19313-19325, 2021.
[51] Pratul P Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, and Ren Ng. Learning to synthesize a 4d rgbd light field from a single image. In Proceedings of the IEEE International Conference on Computer Vision, pages 2243-2251, 2017.
[52] Norman Steenrod. The topology of fibre bundles, volume 27. Princeton university press, 1999.
[53] Mohammed Suhail, Carlos Esteves, Leonid Sigal, and Ameesh Makadia. Generalizable patch-based neural rendering. arXiv preprint arXiv:2207.10662, 2022.
[54] Mohammed Suhail, Carlos Esteves, Leonid Sigal, and Ameesh Makadia. Light field neural rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8269-8279, 2022.
[55] Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018.
[56] Shubham Tulsiani, Alexei A Efros, and Jitendra Malik. Multi-view consistency as supervisory signal for learning shape and pose prediction. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2897-2905, 2018.
[57] Michal J Tyszkiewicz, Kevis-Kokitsi Maninis, Stefan Popov, and Vittorio Ferrari. Raytran: 3d pose estimation and shape reconstruction of multiple objects from videos with ray-traced transformers. arXiv preprint arXiv:2203.13296, 2022.
[58] Mukund Varma, Peihao Wang, Xuxi Chen, Tianlong Chen, Subhashini Venugopalan, and Zhangyang Wang. Is attention all that NeRF needs? In The Eleventh International Conference on Learning Representations, 2022.
[59] Soledad Villar, David W Hogg, Kate Storey-Fisher, Weichi Yao, and Ben Blum-Smith. Scalars are universal: Equivariant machine learning, structured like classical physics.
Advances in Neural Information Processing Systems, 34:28848-28863, 2021. +[60] Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2mesh: Generating 3d mesh models from single rgb images. In Proceedings of the European conference on computer vision (ECCV), pages 52-67, 2018. +[61] Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul P Srinivasan, Howard Zhou, Jonathan T Barron, Ricardo Martin-Brualla, Noah Snavely, and Thomas Funkhouser. Ibrnet: Learning multi-view image-based rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4690-4699, 2021. +[62] Maurice Weiler and Gabriele Cesa. General $e(2)$ -equivariant steerable cnns. arXiv preprint arXiv:1911.08251, 2019. +[63] Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco Cohen. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. arXiv preprint arXiv:1807.02547, 2018. +[64] Maurice Weiler, Patrick Forre, Erik Verlinde, and Max Welling. Coordinate independent convolutional networks-isometry and gauge equivariant convolutions on riemannian manifolds. arXiv preprint arXiv:2106.06020, 2021. +[65] Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow. Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5028-5037, 2017. +[66] Gaochang Wu, Yebin Liu, Lu Fang, and Tianyou Chai. Revisiting light field rendering with deep anti-aliasing neural network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. +[67] Haozhe Xie, Hongxun Yao, Xiaoshuai Sun, Shangchen Zhou, and Shengping Zhang. Pix2vox: Context-aware 3d reconstruction from single and multi-view images. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2690-2698, 2019. +[68] Haozhe Xie, Hongxun Yao, Shengping Zhang, Shangchen Zhou, and Wenxiu Sun. 
Pix2vox++: Multi-scale context-aware 3d object reconstruction from single and multiple images. International Journal of Computer Vision, 128(12):2919-2935, 2020. +[69] Qiangeng Xu, Weiyue Wang, Duygu Ceylan, Radomir Mech, and Ulrich Neumann. Disn: Deep implicit surface network for high-quality single-view 3d reconstruction. Advances in Neural Information Processing Systems, 32, 2019. +[70] Yinshuang Xu, Jiahui Lei, Edgar Dobriban, and Kostas Daniilidis. Unified fourier-based kernel and nonlinearity design for equivariant networks on homogeneous spaces. In International Conference on Machine Learning, pages 24596-24614. PMLR, 2022. +[71] Mingyue Yang, Yuxin Wen, Weikai Chen, Yongwei Chen, and Kui Jia. Deep optimized priors for 3d shape modeling and reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3269-3278, 2021. +[72] Zhenpei Yang, Zhile Ren, Miguel Angel Bautista, Zaiwei Zhang, Qi Shan, and Qixing Huang. Fvor: Robust joint shape and pose optimization for few-view object reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2497-2507, 2022. +[73] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4578-4587, 2021. +[74] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586-595, 2018. + +# Appendix + +The introduction of convolution and attention to the space of rays in 3D required additional geometric representations for which there was no space in the main paper to elaborate. We will introduce here all the necessary notations and definitions. 
We have accompanied this presentation with examples of specific groups to elucidate the abstract concepts needed in the definitions.

# A Preliminary

# A.1 Group Actions and Homogeneous Spaces

![](images/1f40fcee705d99828f45e718247ba480bab9681781ed7a80b059cb126d8bac93.jpg)
Figure 10: The visualization of Plücker coordinates: a ray $x$ can be denoted as $(\pmb{d}, \pmb{m})$ , where $\pmb{d}$ is the direction of the ray $x$ and $\pmb{m}$ is defined as $\pmb{x} \times \pmb{d}$ for any point $\pmb{x}$ on the ray.

Given the action of the group $G$ on a homogeneous space $X$ , and given $x_0$ as the origin of $X$ , the stabilizer group $H$ of $x_0$ in $G$ is the group that leaves $x_0$ intact, i.e., $H = \{h \in G \mid hx_0 = x_0\}$ . The group $G$ can be partitioned into the quotient space (the set of left cosets) $G / H$ , and $X$ is isomorphic to $G / H$ , since all group elements in the same coset transform $x_0$ to the same element of $X$ ; that is, for any element $g' \in gH$ we have $g'x_0 = gx_0$ .

Example 1. $SE(3)$ acting on the ray space $\mathcal{R}$ : Take $SE(3)$ as the acting group and the ray space $\mathcal{R}$ as its homogeneous space. We use Plücker coordinates to parameterize the ray space $\mathcal{R}$ : any $x \in \mathcal{R}$ can be denoted as $(\pmb{d}, \pmb{m})$ , where $\pmb{d} \in \mathbb{S}^2$ is the direction of the ray and $\pmb{m} = \pmb{x} \times \pmb{d}$ , where $\pmb{x}$ is any point on the ray, as shown in Figure 10. A group element $g = (R, \pmb{t}) \in SE(3)$ acts on the ray space as:

$$
g x = g (\boldsymbol {d}, \boldsymbol {m}) = (R \boldsymbol {d}, R \boldsymbol {m} + \boldsymbol {t} \times (R \boldsymbol {d})). \tag {5}
$$

We can choose the fixed origin of the homogeneous space to be $\eta = ([0,0,1]^T,[0,0,0]^T)$ , the line coincident with the $z$ -axis of the coordinate system.
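To make the action in Eq. (5) concrete, the following minimal numpy sketch (ours, not code from the paper) applies a rigid motion to a ray in Plücker coordinates and checks that the result is the ray through the transformed points:

```python
import numpy as np

def plucker(p, d):
    """Plücker coordinates (d, m) of the ray through point p with direction d."""
    d = d / np.linalg.norm(d)
    return d, np.cross(p, d)

def act(R, t, ray):
    """SE(3) action on a ray, Eq. (5): g(d, m) = (R d, R m + t x (R d))."""
    d, m = ray
    Rd = R @ d
    return Rd, R @ m + np.cross(t, Rd)

# Sanity check: acting on the ray agrees with transforming a point on the ray.
rng = np.random.default_rng(0)
p, d = rng.normal(size=3), rng.normal(size=3)
ray = plucker(p, d)
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # rotation about z
t = np.array([0.5, -1.0, 2.0])
d2, m2 = act(R, t, ray)
d1, m1 = plucker(R @ p + t, R @ ray[0])  # ray through the transformed point
assert np.allclose(d1, d2) and np.allclose(m1, m2)
```

Note that the moment $\pmb{m}$ does not depend on which point of the ray is used, since $(\pmb{x} + \lambda\pmb{d}) \times \pmb{d} = \pmb{x} \times \pmb{d}$ ; the check above relies on this.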
Then, the stabilizer group $H$ (the rotations around and translations along the ray) can be parameterized as $H = \{(R_Z(\gamma),t[0,0,1]^T) \mid \gamma \in [0,2\pi), t\in \mathbb{R}\}$ , i.e., $H\simeq SO(2)\times \mathbb{R}$ . We can abbreviate $H$ as $H = \{(\gamma ,t) \mid \gamma \in [0,2\pi), t\in \mathbb{R}\}$ . $\mathcal{R}$ is the quotient space $SE(3) / (SO(2)\times \mathbb{R})$ up to isomorphism.

Example 2. $SE(3)$ acting on the 3D Euclidean space $\mathbb{R}^3$ : $\mathbb{R}^3$ is isomorphic to $SE(3)/SO(3)$ . Consider the case when $SE(3)$ acts on the homogeneous space $\mathbb{R}^3$ : for any $g = (R, \pmb{t}) \in SE(3)$ and $\pmb{x} \in \mathbb{R}^3$ , $g\pmb{x} = R\pmb{x} + \pmb{t}$ . If the fixed origin is $[0,0,0]^T$ , the stabilizer subgroup is $H = SO(3)$ , since any rotation $g = (R,0)$ leaves $[0,0,0]^T$ unchanged.

Example 3. $SO(3)$ acting on the sphere $\mathbb{S}^2$ : $\mathbb{S}^2$ is isomorphic to $SO(3)/SO(2)$ . The last example is $SO(3)$ acting on the homogeneous space $\mathbb{S}^2$ . Given the fixed origin point $[0,0,1]^T$ , the stabilizer group is $SO(2)$ .

# A.2 Principal Bundle

As stated in [29, 18], the partition of the group $G$ into cosets allows us to treat $G$ as a principal bundle where the total space is $G$ , the base space is the homogeneous space $G / H$ , the canonical fiber is the stabilizer group $H$ , and the projection map $p: G \to G / H$ reads $p(g) = gH = gx_0 = x$ . A section $s: G / H \to G$ of $p$ must satisfy $p \circ s = id_{G / H}$ , where $id_{G / H}$ is the identity map on $G / H$ . Note that non-trivial principal bundles do not have a continuous global section, but we can define a continuous section locally on an open set $U \subseteq G / H$ . The action of $G$ causes a twist of the fiber, i.e., $gs(x)$ might not be equal to $s(gx)$ , though they are in the same coset. We use the twist function $\mathfrak{h}: G \times G / H \to H$ to denote this twist: $gs(x) = s(gx)\mathfrak{h}(g,x)$ .
Same as [18], we simplify $\mathfrak{h}(g,eH)$ to $\mathfrak{h}(g)$ , where $e$ is the identity element of $G$ and $eH = x_0$ .

![](images/89e540543515cbc6e8264d8ca5a8e89f57d906b79acfd2020038d7bd684c1386.jpg)
Figure 11: We can view $SE(3)$ as an $SO(2) \times \mathbb{R}$ -principal bundle, where the projection map $p: SE(3) \to \mathcal{R}$ is $p(R, \pmb{t}) = (R[0,0,1]^T, \pmb{t} \times (R[0,0,1]^T))$ , and the preimage of $p$ is $p^{-1}(x) = \{(R, \pmb{t}) \mid (R, \pmb{t}) \eta = x\}$ . We use coordinate frames (the red axis denotes the $Z$ -axis, the green axis the $X$ -axis, and the purple axis the $Y$ -axis) to denote elements of $SE(3)$ , because we can use the position of the coordinate origin to denote the translation $\pmb{t}$ and the $X$ -, $Y$ -, and $Z$ -axes to represent the first, second, and third columns of the rotation $R$ . When we later say "the coordinate frame on the line/ray", we mean that its origin is on the line/ray. By this convention, the coordinate frames representing elements of $H = SO(2) \times \mathbb{R}$ are the frames whose $Z$ -axis aligns with $[0,0,1]^T$ and whose origin is $[0,0,t]^T$ for some $t \in \mathbb{R}$ , i.e., the frames on the yellow line in the left of the figure. For a ray $x = (\pmb{d}, \pmb{m})$ (illustrated as the chosen blue ray), the coordinate frames on the ray $x$ whose $Z$ -axis aligns with the ray direction $\pmb{d}_x$ are in $p^{-1}(x)$ . As shown in the figure, there exists a bijection (gray double-arrow line) between $p^{-1}(x)$ and $H = SO(2) \times \mathbb{R}$ ; that is, $p^{-1}(x)$ is isomorphic to $H = SO(2) \times \mathbb{R}$ .

Example 4. Projection, section map, and twist function for $\mathbb{R}^3$ and $SE(3)$ : According to Ex. 2, we can consider a bundle with total space $SE(3)$ , base space $\mathbb{R}^3$ , and fiber $SO(3)$ .
For any $g = (R,t) \in SE(3)$ , the projection map $p: SE(3) \to \mathbb{R}^3$ projects $g$ as $p(R,t) = t$ . For any $\boldsymbol{x} \in \mathbb{R}^3$ , we can define the section map $s: \mathbb{R}^3 \to SE(3)$ as $s(\boldsymbol{x}) = (I,\boldsymbol{x})$ . The twist function $\mathrm{h}: SE(3) \times \mathbb{R}^3 \to SO(3)$ is that $\mathrm{h}(g,\boldsymbol{x}) = s(g\boldsymbol{x})^{-1}gs(\boldsymbol{x}) = R$ for any $\boldsymbol{x} \in \mathbb{R}^3$ and any $g = (R,t) \in SE(3)$ . This twist function is independent of $\boldsymbol{x}$ due to the fact that $SE(3) = \mathbb{R}^3 \rtimes SO(3)$ is a semidirect product group as stated in [18]. + +Example 5. Projection, section map, and twist function for $\mathbb{S}^2$ and $SO(3)$ : As shown in Ex. 3, $SO(3)$ can be viewed as a principal bundle with the base space as $\mathbb{S}^2$ and the fiber as $SO(2)$ . With the rotation $R \in SO(3)$ parameterized as $R = R_Z(\alpha)R_Y(\beta)R_Z(\gamma)$ , the projection $p: G \to G / H$ maps $R$ as follows: + +$$ +\begin{array}{l} p (R) = R _ {Z} (\alpha) R _ {Y} (\beta) R _ {Z} (\gamma) [ 0, 0, 1 ] ^ {T} \\ = R _ {Z} (\alpha) R _ {Y} (\beta) [ 0, 0, 1 ] ^ {T} \\ = \left[ \sin (\beta) \cos (\alpha), \sin (\beta) \sin (\alpha), \cos (\beta) \right] ^ {T}. \\ \end{array} +$$ + +For any $\pmb{d} \in \mathbb{S}^2$ , the section map $s: \mathbb{S}^2 \to SO(3)$ of $p$ should satisfy that $p \circ s = id_{\mathbb{S}^2}$ as mentioned above, i.e., $s(\pmb{d})[0, 0, 1]^T = \pmb{d}$ . For instance, we could define the section map $s$ as: + +$$ +s (\boldsymbol {d}) = R _ {Z} \left(\alpha_ {\boldsymbol {d}}\right) R _ {Y} \left(\beta_ {\boldsymbol {d}}\right), +$$ + +where $\alpha_{d}$ and $\beta_{d}$ satisfies that + +$$ +\boldsymbol {d} = \left[ \sin \left(\beta_ {\boldsymbol {d}}\right) \cos \left(\alpha_ {\boldsymbol {d}}\right), \sin \left(\beta_ {\boldsymbol {d}}\right) \sin \left(\alpha_ {\boldsymbol {d}}\right), \cos \left(\beta_ {\boldsymbol {d}}\right) \right] ^ {T}. 
$$

Specifically, when $\pmb{d} = [0,0,1]^T$, $\alpha_{\pmb{d}} = 0$ and $\beta_{\pmb{d}} = 0$; when $\pmb{d} = -[0,0,1]^T$, $\alpha_{\pmb{d}} = 0$ and $\beta_{\pmb{d}} = \pi$. As defined, the twist function $\mathrm{h}:SO(3)\times \mathbb{S}^2\to SO(2)$ is given by $\mathrm{h}(R,\pmb{d}) = s(R\pmb{d})^{-1}Rs(\pmb{d})$.

Example 6. Projection, section map, and twist function for $\mathcal{R}$ and $SE(3)$: The final example is $SE(3)$ with $\mathcal{R}$ as the base space and $SO(2) \times \mathbb{R}$ as the fiber, which is the focus of this work, as shown in figure 11.

![](images/657be27871b3f9cff80c411f3aa7df0535bf913e9d3fd6c0a943851363e9b27c.jpg)
Figure 12: For a ray $x = (\pmb{d}, \pmb{m})$, we need to choose an element $(R, t) \in SE(3)$ as the representative element $s(x)$ such that $s(x)([0,0,1]^T, [0,0,0]^T) = x$. This figure shows one example of the section map $s$ from ray space to $SE(3)$; this map is also the section used in this paper. The axes of the coordinate frame in the figure represent $R = s_a(\pmb{d}) = R_Z(\alpha_{\pmb{d}})R_Y(\beta_{\pmb{d}})$, where the green, purple, and red axes represent the first, second, and third columns of the rotation matrix $R$, respectively. The origin of the frame, $t = s_b(\pmb{d}, \pmb{m}) = \pmb{d} \times \pmb{m}$, denotes the translation. Here

$$
\boldsymbol{d} = [ \sin (\beta) \cos (\alpha), \sin (\beta) \sin (\alpha), \cos (\beta) ] ^ {T}.
$$

According to the group action defined in Eq. 5, the projection map $p: SE(3) \to \mathcal{R}$ is:

$$
p \left(\left(R, \boldsymbol {t}\right)\right) = \left(R, \boldsymbol {t}\right) \eta = \left(R [ 0, 0, 1 ] ^ {T}, \boldsymbol {t} \times \left(R [ 0, 0, 1 ] ^ {T}\right)\right).
$$

This represents the ray direction $\pmb{d}$ by the third column of a rotation matrix, and the moment $\pmb{m}$ by the cross product of the translation and the ray direction.
We can construct a section $s: G / H \to G$ using the Plücker coordinates:

$$
s \left(\left(\boldsymbol {d}, \boldsymbol {m}\right)\right) = \left(s _ {a} (\boldsymbol {d}), s _ {b} (\boldsymbol {d}, \boldsymbol {m})\right),
$$

where $s_a(\pmb{d}) \in SO(3)$ is a rotation such that $s_a(\pmb{d})[0,0,1]^T = \pmb{d}$, i.e., $s_a$ is a section map from $\mathbb{S}^2$ to $SO(3)$ as shown in Ex. 5, and $s_b(\pmb{d},\pmb{m}) \in \mathbb{R}^3$ is a point on the ray $(\pmb{d},\pmb{m})$. In this paper, we define the section map as $s((\pmb{d},\pmb{m})) = (R_Z(\alpha_d)R_Y(\beta_d),\pmb{d} \times \pmb{m})$, where $\alpha_d$ and $\beta_d$ satisfy $\pmb{d} = R_Z(\alpha_d)R_Y(\beta_d)[0,0,1]^T$, which is the same as in Ex. 5. Figure 12 displays a visualization of the section map.

Given the section map, for any $g = (R_g, t_g) \in SE(3)$ and $x = (\pmb{d}_x, \pmb{m}_x) \in \mathcal{R}$, the twist function $\mathrm{h}: SE(3) \times \mathcal{R} \to SO(2) \times \mathbb{R}$ is $\mathrm{h}(g, x) = s^{-1}(gx)gs(x) = (\mathrm{h}_a(R_g, \pmb{d}_x), \mathrm{h}_b(g, x))$, where $\mathrm{h}_a: SO(3) \times \mathbb{S}^2 \to SO(2)$ is the twist function corresponding to $s_a$, as shown in Ex. 5, and $\mathrm{h}_b(g, x) = \langle R_g s_b(x) + t_g - s_b(gx), R_g \pmb{d}_x \rangle$. With the section $s$ defined above, the twist function evaluates to

$$
\mathrm {h} (g, x) = s ^ {- 1} (g x) g s (x) = \left(R _ {Z} \left(R _ {g}, \boldsymbol {d} _ {x}\right), \langle \boldsymbol {t} _ {g}, \left(R _ {g} \boldsymbol {d} _ {x}\right) \rangle\right),
$$

where $R_Z(R_g, \pmb{d}_x) = R_Y^{-1}(\beta_{R_g \pmb{d}_x}) R_Z^{-1}(\alpha_{R_g \pmb{d}_x}) R_g R_Z(\alpha_{\pmb{d}_x}) R_Y(\beta_{\pmb{d}_x})$.
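As a sanity check, this bundle structure can be verified numerically. The sketch below is our own code (assuming NumPy; the names `act`, `section`, and `twist` are ours): it builds the section map above and confirms that $\mathrm{h}(g,x) = s(gx)^{-1}gs(x)$ always lands in the stabilizer $H = SO(2)\times\mathbb{R}$ of $\eta$, with translation component $\langle \boldsymbol{t}_g, R_g\boldsymbol{d}_x\rangle$:

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def compose(g1, g2):
    # (R1, t1)(R2, t2) = (R1 R2, R1 t2 + t1)
    return g1[0] @ g2[0], g1[0] @ g2[1] + g1[1]

def inv(g):
    return g[0].T, -g[0].T @ g[1]

def act(g, x):
    # g = (R, t) acting on a ray x = (d, m) in Pluecker coordinates (m = p x d)
    R, t = g
    d, m = x
    return R @ d, R @ m + np.cross(t, R @ d)

def section(x):
    # s((d, m)) = (Rz(alpha_d) Ry(beta_d), d x m), as in Ex. 5 / Ex. 6
    d, m = x
    alpha = np.arctan2(d[1], d[0])
    beta = np.arctan2(np.hypot(d[0], d[1]), d[2])
    return Rz(alpha) @ Ry(beta), np.cross(d, m)

def twist(g, x):
    # h(g, x) = s(gx)^{-1} g s(x); should lie in the stabilizer of eta
    return compose(inv(section(act(g, x))), compose(g, section(x)))

rng = np.random.default_rng(0)
R_g = Rz(rng.uniform(0, 2 * np.pi)) @ Ry(rng.uniform(0, np.pi)) @ Rz(rng.uniform(0, 2 * np.pi))
t_g = rng.normal(size=3)
d = rng.normal(size=3); d /= np.linalg.norm(d)   # random ray direction
p = rng.normal(size=3)                           # random point on the ray
x = (d, np.cross(p, d))

Rh, th = twist((R_g, t_g), x)
ez = np.array([0.0, 0.0, 1.0])
assert np.allclose(Rh @ ez, ez)              # rotation part is a rotation about z
assert np.allclose(th[:2], 0.0)              # translation part lies along z
assert np.isclose(th[2], t_g @ (R_g @ d))    # equals <t_g, R_g d_x>
```

The checks pass for any generic ray because $\mathrm{h}(g,x)$ stabilizes $\eta$ by construction, and the perpendicular-foot property $s_b(x)\perp\boldsymbol{d}_x$ kills all terms except $\langle \boldsymbol{t}_g, R_g\boldsymbol{d}_x\rangle$.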
To understand the twist function clearly, we visualize a twist induced by a translation in $SE(3)$ in figure 13.

# A.3 Associated Vector Bundle

Given the principal bundle $G$, we can construct the associated vector bundle by replacing the fiber $H$ with a vector space $V$, where $V \simeq \mathbb{R}^n$ and $H$ acts on $V$ through a group representation

![](images/a849e2c7fac4c9233fcf92ce13dc567f336f1dc732adeab28ab82ff33b3a0c05.jpg)
Figure 13: When we translate a ray $x = (\pmb{d}, \pmb{m})$ with $g = (I, t) \in SE(3)$, we find that $gs(x)$ does not agree with $s(gx)$. As defined in figure 12, we have $s_b(x) \perp \pmb{d}$ and $s_b(gx) \perp \pmb{d}$. Following the geometry of the figure, we obtain $\mathrm{h}_b(g, x) = \langle t, \pmb{d} \rangle [0, 0, 1]^T$, i.e., $\mathrm{h}(g, x) = s(gx)^{-1} gs(x) = (I, \langle t, \pmb{d} \rangle [0, 0, 1]^T) = (0, \langle t, \pmb{d} \rangle)$.

$\rho : H \to GL(V)$. The group representation corresponds to the type of geometric quantity in the vector space $V$, for example, a scalar, a vector, or a higher-order tensor.

The quotient space $E = G \times_{\rho} V = (G \times V) / H$ is defined through the right action of $H$ on $G \times V$: $(g, v)h = (gh, \rho(h)^{-1}v)$ for any $h \in H$, $g \in G$, and $v \in V$. With the projection map $p: G \times_{\rho} V \to G / H$ defined by $p([g, v]) = gH$, where $[g, v] = \{(gh, \rho(h)^{-1}v) \mid h \in H\}$ denotes an element of $G \times_{\rho} V$, we obtain the fiber bundle $E = G \times_{\rho} V$ associated to the principal bundle $G$. For more background and details on the associated vector bundle, we refer the reader to [52] and [18].

The feature function $f: U \subseteq G / H \to V$ can encode the local section of the associated vector bundle $s_v: U \subseteq G / H \to G \times_{\rho} V$: $s_v(x) = [s(x), f(x)]$, where $s$ is the section map of the principal bundle as defined in Sec. A.2.
The group $G$ acts on the field $f$ as shown in [18]:

$$
\left(\mathcal {L} _ {g} f\right) (x) = \rho \left(\mathrm {h} \left(g ^ {- 1}, x\right)\right) ^ {- 1} f \left(g ^ {- 1} x\right), \tag {6}
$$

where $\mathrm{h}: G \times G / H \to H$ is the twist function as defined in Sec. A.2.

# A.4 Equivariant Convolution Over Homogeneous Space

The generalized equivariant convolution over a homogeneous space, as stated in [18], which maps a feature field $f^{l_{in}}$ over the homogeneous space $G / H_1$ to a feature field $f^{l_{out}'}$ over the homogeneous space $G / H_2$ by convolving with a kernel $\kappa$, is defined as:

$$
f ^ {l _ {out} ^ {\prime}} (x) = \int_ {G / H _ {1}} \kappa \left(s _ {2} (x) ^ {- 1} y\right) \rho_ {i n} \left(\mathrm {h} _ {1} \left(s _ {2} (x) ^ {- 1} s _ {1} (y)\right)\right) f ^ {l _ {i n}} (y) d y, \tag {7}
$$

where $l_{in}$ and $l_{out}'$ denote the input and output feature types, respectively, $\rho_{in}$ is the group representation of $H_{1}$ corresponding to the feature type $l_{in}$, $s_1$ is the section map from $G / H_{1}$ to $G$ (see Sec. A.2), $s_2$ is the section map from $G / H_{2}$ to $G$ (see Sec. A.2), and $\mathrm{h}_1$ is the twist function corresponding to $s_1$ (see Sec. A.2).

The convolution is equivariant with respect to $G$, that is,

$$
\mathcal {L} _ {g} ^ {\text {o u t}} f ^ {l _ {out} ^ {\prime}} = \kappa * \mathcal {L} _ {g} ^ {\text {i n}} f ^ {l _ {\text {i n}}},
$$

if and only if $\kappa(h_2x) = \rho_{out}(h_2)\kappa(x)\rho_{in}(\mathrm{h}_1^{-1}(h_2,x))$ for any $h_2 \in H_2$, where $\rho_{out}$ is the group representation of $H_2$ corresponding to the feature type $l_{out}'$.

In the following examples, we will illustrate three instances where the input and output homogeneous spaces, denoted as $G / H_{1}$ and $G / H_{2}$, respectively, are identical, meaning that $H_{1} = H_{2}$.
These examples involve convolutions from $\mathbb{R}^3$ to $\mathbb{R}^3$, from $\mathbb{S}^2$ to $\mathbb{S}^2$, and from $\mathcal{R}$ to $\mathcal{R}$. Furthermore, we will show an example where $H_{1}$ and $H_{2}$ differ, explicitly focusing on the convolution from $\mathcal{R}$ to $\mathbb{R}^3$.

Example 7. $SE(3)$ equivariant convolution from $\mathbb{R}^3$ to $\mathbb{R}^3$: If we use the section map as stated in Ex. 4, we find that $\mathrm{h}(s(x)^{-1}s(y)) = I$; therefore, the convolution in Eq. 7 becomes:

$$
\begin{array}{l} f ^ {l _ {o u t}} (x) = \int_ {\mathbb {R} ^ {3}} \kappa (s (x) ^ {- 1} y) f ^ {l _ {i n}} (y) d y \\ = \int_ {\mathbb {R} ^ {3}} \kappa (y - x) f ^ {l _ {i n}} (y) d y \\ \end{array}
$$

and $\kappa$ should satisfy

$$
\begin{array}{l} \kappa (R x) = \rho_ {o u t} (R) \kappa (x) \rho_ {i n} (\mathrm {h} ^ {- 1} (R, x)) \\ = \rho_ {o u t} (R) \kappa (x) \rho_ {i n} \left(\mathrm {h} ^ {- 1} (R)\right) \\ = \rho_ {o u t} (R) \kappa (x) \rho_ {i n} ^ {- 1} (R) \\ \end{array}
$$

for any $R \in SO(3)$. When the feature types $l_{in}$ and $l_{out}$ correspond to irreducible representations, we have

$$
\kappa (R x) = D ^ {l _ {o u t}} (R) \kappa (x) D ^ {l _ {i n}} (R) ^ {- 1}
$$

where $D^{l_{in}}$ and $D^{l_{out}}$ are the Wigner-D matrices, i.e., the irreducible representations corresponding to the feature types $l_{in}$ and $l_{out}$, which is the same as the analytical result in [63].

Example 8. $SO(3)$ equivariant spherical convolution from $\mathbb{S}^2$ to $\mathbb{S}^2$: For spherical convolution, when we substitute the section in Eq. 7 with the section we defined in Ex.
5, the convolution integral takes the following form:

$$
\begin{array}{l} f ^ {l _ {\text {o u t}}} (\alpha , \beta) \\ = \int_ {\alpha^ {\prime} \in [ 0, 2 \pi), \beta^ {\prime} \in [ 0, \pi)} \kappa \left(R _ {Y} ^ {- 1} (\beta) R _ {Z} ^ {- 1} (\alpha) R _ {Z} \left(\alpha^ {\prime}\right) R _ {Y} \left(\beta^ {\prime}\right) [ 0, 0, 1 ] ^ {T}\right) \\ \quad \rho_ {i n} \left(\mathrm {h} \left(R _ {Y} ^ {- 1} (\beta) R _ {Z} ^ {- 1} (\alpha) R _ {Z} \left(\alpha^ {\prime}\right) R _ {Y} \left(\beta^ {\prime}\right)\right)\right) f ^ {l _ {i n}} \left(\alpha^ {\prime}, \beta^ {\prime}\right) d \alpha^ {\prime} \sin \left(\beta^ {\prime}\right) d \beta^ {\prime} \\ \end{array}
$$

where $[0,0,1]^T$ is the fixed original point as stated in Ex. 3, and $\rho_{in}$ is the group representation of $SO(2)$ corresponding to the feature type $l_{in}$. When $\rho_{in}$ and $\rho_{out}$ are irreducible representations of $SO(2)$, they can be written as $\rho_{in}(\theta) = e^{-il_{in}\theta}$ and $\rho_{out}(\theta) = e^{-il_{out}\theta}$.

To simplify the notation, we utilize $R(\theta)$ to represent $R_Z(\theta) \in SO(2)$, where $\theta \in [0, 2\pi)$. When $x = [0, 0, 1]^T$, $\mathrm{h}(R(\theta), x) = R(\theta)$; when $x = -[0, 0, 1]^T$, $\mathrm{h}(R(\theta), x) = R(-\theta)$; and when $x \in \mathbb{S}^2 - \{[0, 0, 1]^T, -[0, 0, 1]^T\}$, $\mathrm{h}(R(\theta), x) = I$. Therefore, the kernel $\kappa$ should satisfy the following conditions: $\kappa(R(\theta)x) = e^{-il_{out}\theta}\kappa(x)$ for any $R(\theta) \in SO(2)$ and any $x \in \mathbb{S}^2 - \{[0, 0, 1]^T, -[0, 0, 1]^T\}$; $\kappa(x) = e^{-i(l_{out} - l_{in})\theta}\kappa(x)$ for $x = [0, 0, 1]^T$; and $\kappa(x) = e^{-i(l_{out} + l_{in})\theta}\kappa(x)$ for $x = -[0, 0, 1]^T$.
Specifically, when the input and output are scalar feature fields over the sphere, the convolution reads

$$
\begin{array}{l} f ^ {o u t} (\alpha , \beta) \\ = \int_ {\alpha^ {\prime} \in [ 0, 2 \pi), \beta^ {\prime} \in [ 0, \pi)} \kappa \left(R _ {Y} ^ {- 1} (\beta) R _ {Z} ^ {- 1} (\alpha) R _ {Z} \left(\alpha^ {\prime}\right) R _ {Y} \left(\beta^ {\prime}\right) [ 0, 0, 1 ] ^ {T}\right) \\ f ^ {i n} \left(\alpha^ {\prime}, \beta^ {\prime}\right) d \alpha^ {\prime} \sin \left(\beta^ {\prime}\right) d \beta^ {\prime} \\ \end{array}
$$

and $\kappa$ has the constraint

$$
\kappa (R (\theta) x) = \kappa (x)
$$

for any $R(\theta) \in SO(2)$, which is consistent with the isotropic kernel of the convolution in [22].

Example 9. $SE(3)$ equivariant convolution from $\mathcal{R}$ to $\mathcal{R}$: In our case, the equivariant convolution from ray space to ray space is also based on the generalized equivariant convolution over a homogeneous space. See Sec. 3.4.1 for the details. We solve the constraint on the kernel here:

$$
\kappa (h x) = \rho_ {\text {o u t}} (h) \kappa (x) \rho_ {\text {i n}} \left(\mathrm {h} ^ {- 1} (h, x)\right), \tag {8}
$$

for any $h\in SO(2)\times \mathbb{R}$.

The irreducible group representation $\rho_{in}$ for the feature type $l_{in} = (\omega_{in}^{1},\omega_{in}^{2})$, where $\omega_{in}^{1}\in \mathbb{N}$ and $\omega_{in}^{2}\in \mathbb{R}$, can be written as $\rho_{in}(\gamma ,t) = e^{-i(\omega_{in}^{1}\gamma +\omega_{in}^{2}t)}$ for any $h = (\gamma ,t)\in SO(2)\times \mathbb{R}$. Similarly, the irreducible group representation for the feature type $l_{out} = (\omega_{out}^{1},\omega_{out}^{2})$, where $\omega_{out}^{1}\in \mathbb{N}$ and $\omega_{out}^{2}\in \mathbb{R}$, is $\rho_{out}(\gamma ,t) = e^{-i(\omega_{out}^{1}\gamma +\omega_{out}^{2}t)}$ for any $h = (\gamma ,t)\in SO(2)\times \mathbb{R}$.

To simplify the notation, we utilize $R(\gamma)$ to represent $R_Z(\gamma) \in SO(2)$, where $\gamma \in [0, 2\pi)$
. For any $h = (\gamma, t) \in SO(2) \times \mathbb{R}$ and any $x = (\pmb{d}_x, \pmb{m}_x) \in \mathcal{R}$, we have $\mathrm{h}(h, x) = s(hx)^{-1}hs(x) = (R_Z(R(\gamma), \pmb{d}_x), \langle t[0, 0, 1]^T, \pmb{d}_x \rangle)$ according to Ex. 6. Since $SO(2) \times \mathbb{R}$ is a product group, we can factor $\kappa(x) = \kappa_1(x)\kappa_2(x)$, where

$$
\kappa_ {1} ((\gamma , t) x) = \rho_ {\text {o u t}} ((\gamma , 0)) \kappa_ {1} (x) \rho_ {i n} ^ {- 1} \left(\left(R _ {Z} (R (\gamma), \boldsymbol {d} _ {x}), 0\right)\right) \tag {9}
$$

$$
\kappa_ {2} ((\gamma , t) x) = \rho_ {o u t} ((0, t)) \kappa_ {2} (x) \rho_ {i n} ^ {- 1} ((0, \langle t [ 0, 0, 1 ] ^ {T}, \boldsymbol {d} _ {x} \rangle)) \tag {10}
$$

Now we solve the constraint for the kernel $\kappa_{1}$.

One can check that for any $\pmb{d}_x \in \mathbb{S}^2 - \left\{ [0, 0, 1]^T, -[0, 0, 1]^T \right\}$, $R_Z(R(\gamma), \pmb{d}_x) = I$; when $\pmb{d}_x = [0, 0, 1]^T$, $R_Z(R(\gamma), \pmb{d}_x) = R(\gamma)$; and when $\pmb{d}_x = -[0, 0, 1]^T$, $R_Z(R(\gamma), \pmb{d}_x) = R(-\gamma)$.

Therefore, we obtain the constraint that

$$
\kappa_ {1} ((\gamma , t) x) = e ^ {- i \omega_ {o u t} ^ {1} \gamma} \kappa_ {1} (x) \tag {11}
$$

when $\pmb{d}_x\in \mathbb{S}^2 -\left\{[0,0,1]^T, - [0,0,1]^T\right\}$;

$$
\kappa_ {1} ((\gamma , t) x) = e ^ {- i \left(\omega_ {o u t} ^ {1} - \omega_ {i n} ^ {1}\right) \gamma} \kappa_ {1} (x) \tag {12}
$$

when $\pmb{d}_x = [0, 0, 1]^T$; and

$$
\kappa_ {1} ((\gamma , t) x) = e ^ {- i \left(\omega_ {o u t} ^ {1} + \omega_ {i n} ^ {1}\right) \gamma} \kappa_ {1} (x) \tag {13}
$$

when $\pmb{d}_x = -[0,0,1]^T$.

The solution for Eq.
11 is that $\kappa_{1}(x) = f(d(\eta, x), \angle([0,0,1]^{T}, \boldsymbol{d}_{x})) e^{-i\omega_{out}^{1} \text{atan2}([0,1,0]\boldsymbol{d}_{x}, [1,0,0]\boldsymbol{d}_{x})}$, where $\text{atan2}$ is the 2-argument arctangent function, and $f$ is an arbitrary function that maps $(d(\eta, x), \angle([0,0,1]^{T}, \boldsymbol{d}_{x}))$ to the complex domain.

The solution for Eq. 12 is that when $\omega_{out}^{1} = \omega_{in}^{1}$, $\kappa_{1}(x) = C$, where $C$ is any constant value; when $\omega_{out}^{1} \neq \omega_{in}^{1}$ and $x = \eta$, $\kappa_{1}(x) = 0$; and when $\omega_{out}^{1} \neq \omega_{in}^{1}$ and $x \neq \eta$, $\kappa_{1}(x) = f(d(\eta, x)) e^{-i(\omega_{out}^{1} - \omega_{in}^{1}) \text{atan2}([0, 1, 0] \boldsymbol{m}_{x}, [1, 0, 0] \boldsymbol{m}_{x})}$, where $f$ is an arbitrary function that maps $d(x, \eta)$ to the complex domain.

The solution for Eq. 13 is that when $\omega_{out}^{1} = -\omega_{in}^{1}$, $\kappa_{1}(x) = C$, where $C$ is any constant value; when $\omega_{out}^{1} \neq -\omega_{in}^{1}$ and $x = -\eta$, $\kappa_{1}(x) = 0$; and when $\omega_{out}^{1} \neq -\omega_{in}^{1}$ and $x \neq -\eta$, $\kappa_{1}(x) = f(d(\eta, x)) e^{-i(\omega_{out}^{1} + \omega_{in}^{1}) \text{atan2}([0, 1, 0] \boldsymbol{m}_{x}, [1, 0, 0] \boldsymbol{m}_{x})}$, where $f$ is an arbitrary function that maps $d(x, \eta)$ to the complex domain.

Next, we solve the constraint for the kernel $\kappa_{2}$, which is that $\kappa_{2}((\gamma ,t)x) = e^{-i(\omega_{out}^{2} - \omega_{in}^{2}\langle [0,0,1]^{T},\pmb{d}_{x}\rangle)t}\kappa_{2}(x)$.
When $\pmb{d}_x = [0,0,1]^T$ and $\omega_{out}^2 \neq \omega_{in}^2$, $\kappa_2(x) = 0$. When $\pmb{d}_x = -[0,0,1]^T$ and $\omega_{out}^2 \neq -\omega_{in}^2$, $\kappa_2(x) = 0$. When $\pmb{d}_x = [0,0,1]^T$ and $\omega_{out}^2 = \omega_{in}^2$, $\kappa_2(x) = f(d(x,\eta))$, where $f$ is an arbitrary function that maps $d(x, \eta)$ to the complex domain; likewise, when $\pmb{d}_x = -[0, 0, 1]^T$ and $\omega_{out}^2 = -\omega_{in}^2$, $\kappa_2(x) = f(d(x, \eta))$. When $\pmb{d}_x \in \mathbb{S}^2 - \{[0, 0, 1]^T, -[0, 0, 1]^T\}$,

$$
\kappa_ {2} (x) = f \left(d (\eta , x), \angle \left([ 0, 0, 1 ] ^ {T}, \boldsymbol {d} _ {x}\right)\right) e ^ {- i \left(\omega_ {\text {o u t}} ^ {2} - \omega_ {\text {i n}} ^ {2} \langle [ 0, 0, 1 ] ^ {T}, \boldsymbol {d} _ {x} \rangle\right) g (x)}, \tag {14}
$$

where $f$ is an arbitrary function that maps $(d(\eta, x), \angle([0, 0, 1]^T, \boldsymbol{d}_x))$ to the complex domain, and $g(x) = [0, 0, 1](\boldsymbol{x}_Q - [0, 0, 0]^T)$, where $\boldsymbol{x}_Q$ represents the 3D coordinates of a point $Q$. This point $Q$ is defined as the intersection of $x$ and $\eta$ if $x$ and $\eta$ intersect. Alternatively, if $x$ and $\eta$ do not intersect, $Q$ is the intersection of $\eta$ and the line $y$ that is perpendicular to both $x$ and $\eta$ and intersects both of them. Refer to Figure 16 for a visual representation. One can easily check that $g((\gamma, t)x) = t + g(x)$, as shown in figure 16, which makes the solution valid.
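The relation $g((\gamma, t)x) = t + g(x)$ can also be verified numerically by computing $Q$ directly as the point on $\eta$ (the $z$-axis) closest to the ray $x$, which coincides with the intersection point when the rays meet. This is our own sketch (assuming NumPy; the helper name `g_of_x` is ours), using the standard closest-point formula for two lines:

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def g_of_x(p, d):
    # z-coordinate of Q: the point on eta (the z-axis) closest to the
    # line through p with unit direction d
    ez = np.array([0.0, 0.0, 1.0])
    b = ez @ d               # cosine of the angle between the two lines
    w = -p                   # vector from p to the origin of the z-axis
    denom = 1.0 - b * b      # nonzero unless d is parallel to ez
    return (b * (d @ w) - (ez @ w)) / denom

rng = np.random.default_rng(1)
d = rng.normal(size=3); d /= np.linalg.norm(d)   # generic ray direction
p = rng.normal(size=3)                           # a point on the ray
gamma, t = 1.3, 0.8

# apply h = (gamma, t) in SO(2) x R: rotate about z, then translate along z
p2 = Rz(gamma) @ p + t * np.array([0.0, 0.0, 1.0])
d2 = Rz(gamma) @ d

assert np.isclose(g_of_x(p2, d2), t + g_of_x(p, d))
```

Intuitively, a rotation about $\eta$ leaves the foot point $Q$ at the same height, while a translation by $t$ along $\eta$ shifts it by exactly $t$.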
If $x$ and $\eta$ intersect, i.e., $[0, 0, 1] \pmb{m}_x = 0$, then

$$
g (x) = [ 0, 0, 1 ] \left(\boldsymbol {d} _ {x} \times \boldsymbol {m} _ {x} - \frac {[ 1 , 0 , 0 ] (\boldsymbol {d} _ {x} \times \boldsymbol {m} _ {x})}{[ 1 , 0 , 0 ] \boldsymbol {d} _ {x}} \boldsymbol {d} _ {x}\right)
$$

when $[1,0,0]\pmb{d}_x\neq 0$, and

$$
g (x) = [ 0, 0, 1 ] \left(\boldsymbol {d} _ {x} \times \boldsymbol {m} _ {x} - \frac {[ 0 , 1 , 0 ] (\boldsymbol {d} _ {x} \times \boldsymbol {m} _ {x})}{[ 0 , 1 , 0 ] \boldsymbol {d} _ {x}} \boldsymbol {d} _ {x}\right)
$$

when $[1,0,0]\pmb{d}_x = 0$.

When $x$ and $\eta$ do not intersect,

$$
g (x) = [ 0, 0, 1 ] (\boldsymbol {d} _ {x} \times \boldsymbol {m} _ {x} - \frac {[ 1 , 0 , 0 ] (\boldsymbol {d} _ {x} \times \boldsymbol {m} _ {x}) [ 1 , 0 , 0 ] \boldsymbol {d} _ {x} + [ 0 , 1 , 0 ] (\boldsymbol {d} _ {x} \times \boldsymbol {m} _ {x}) [ 0 , 1 , 0 ] \boldsymbol {d} _ {x}}{([ 1 , 0 , 0 ] \boldsymbol {d} _ {x}) ^ {2} + ([ 0 , 1 , 0 ] \boldsymbol {d} _ {x}) ^ {2}} \boldsymbol {d} _ {x}).
$$

Regular Representation Here, we delve into the case where the output field type corresponds to the group representation of $SO(2)\times \mathbb{R}$ given by $\rho (\gamma ,t) = \rho_1(\gamma)\otimes \rho_2(t)$ for any $(\gamma ,t)\in SO(2)\times \mathbb{R}$, where $\rho_{2}$ is the regular representation. The regular representation of a group $G$ is the linear representation that arises from the action of $G$ on itself by translation. That is, when $\rho_{2}:\mathbb{R}\to GL(V)$ is the regular representation, for any $v\in V$ and any $t,t'\in \mathbb{R}$ we have $(\rho_{2}(t^{\prime})v)_{t} = v_{t - t^{\prime}}$; in other words, $v\in V$ can be viewed as a function defined on $\mathbb{R}$, i.e., an infinite-dimensional vector. Then according to Ex.
6, the group $SE(3)$ acts on the field $f$ as:

$$
\begin{array}{l} \left(\mathcal {L} _ {g} f\right) (x) _ {t} = \left(\rho (\mathrm {h} \left(g ^ {- 1}, x\right)) ^ {- 1} f \left(g ^ {- 1} x\right)\right) _ {t} \\ = \rho_ {1} \left(\mathrm {h} _ {a} \left(R _ {g ^ {- 1}}, \boldsymbol {d} _ {x}\right)\right) ^ {- 1} f \left(g ^ {- 1} x\right) _ {t + \mathrm {h} _ {b} \left(g ^ {- 1}, x\right)} \\ = \rho_ {1} \left(R _ {Z} \left(R _ {g ^ {- 1}}, \boldsymbol {d} _ {x}\right)\right) ^ {- 1} f \left(g ^ {- 1} x\right) _ {t + \langle \boldsymbol {t} _ {g ^ {- 1}}, \left(R _ {g ^ {- 1}} \boldsymbol {d} _ {x}\right) \rangle} \\ \end{array}
$$

for any $t\in \mathbb{R}$, $x\in \mathcal{R}$, and $g\in SE(3)$.

The points $\pmb{x}$ on the ray $x = (\pmb{d}_x, \pmb{m}_x)$ can be uniquely expressed as $\pmb{x} = s_b(x) + t_x \pmb{d}_x = \pmb{d}_x \times \pmb{m}_x + t_x \pmb{d}_x$; therefore, for any $x \in \mathcal{R}$ and any $t \in \mathbb{R}$, $f(x)_t$ can be expressed as a feature attached to the point $s_b(x) + t \pmb{d}_x$ along the ray $x$, i.e., $f(x)_t = f'(s_b(x) + t \pmb{d}_x, \pmb{d}_x)$, as shown in figure 14.
Therefore, we have $f^{\prime}(\pmb {x},\pmb {d}) = f((\pmb {d},\pmb {x}\times \pmb {d}))_{\langle \pmb {x} - \pmb {d}\times (\pmb {x}\times \pmb {d}),\pmb {d}\rangle}$. One can easily check that

$$
\left(\mathcal {L} _ {g} f ^ {\prime}\right) (\boldsymbol {x}, \boldsymbol {d}) = \rho_ {1} \left(\mathrm {h} _ {a} \left(R _ {g ^ {- 1}}, \boldsymbol {d}\right)\right) ^ {- 1} f ^ {\prime} \left(R _ {g ^ {- 1}} \boldsymbol {x} + \boldsymbol {t} _ {g ^ {- 1}}, R _ {g ^ {- 1}} \boldsymbol {d}\right) = \rho_ {1} \left(R _ {Z} \left(R _ {g ^ {- 1}}, \boldsymbol {d}\right)\right) ^ {- 1} f ^ {\prime} \left(g ^ {- 1} \boldsymbol {x}, R _ {g ^ {- 1}} \boldsymbol {d}\right) \tag {15}
$$

We should note the difference between the point $\pmb{x}$ along the ray and the independent point $\pmb{x}$: as shown in the above equation, the point $\pmb{x}$ along the ray $x = (\pmb{d}, \pmb{x} \times \pmb{d})$ is denoted as $(\pmb{x}, \pmb{d})$ instead of $\pmb{x}$. In fact, it can be viewed as a homogeneous space of $SE(3)$ larger than $\mathbb{R}^3$, whose elements are in $\mathbb{R}^3 \times \mathbb{S}^2$, as shown in figure 15.

To summarize, the features attached to the ray, whose type corresponds to the regular representation of translation, can be considered as features attached to the points along the ray. The action of $SE(3)$ on features attached to these points can be expressed as shown in Eq. 15.

The solution $\kappa$ can also be expressed as

$$
\kappa (x) _ {t} = \kappa_ {1} (x) \kappa_ {2} (x) _ {t} \tag {16}
$$

![](images/5def014b4625271c2f3b24e69b05555e643e398b471bc9dce665980659bf8182.jpg)
Figure 14: The feature attached to the ray, which corresponds to the regular representation of translation, can also be treated as features attached to the points along the ray.

for any $t \in \mathbb{R}$, and the constraints are the same as Eq. 9 and Eq. 10. As a result, the solution for $\kappa_{1}$ is the same.
We only need to solve $\kappa_{2}$:

$$
\kappa_ {2} \left(\left(\gamma , t ^ {\prime}\right) x\right) _ {t} = e ^ {i \omega_ {i n} ^ {2} \langle [ 0, 0, 1 ] ^ {T}, d _ {x} \rangle t ^ {\prime}} \kappa_ {2} (x) _ {t - t ^ {\prime}} \tag {17}
$$

for any $(\gamma ,t^{\prime})\in SO(2)\times \mathbb{R}$.

When $\pmb{d}_x\in \mathbb{S}^2 -\left\{[0,0,1]^T, - [0,0,1]^T\right\}$,

$$
\kappa_ {2} (x) _ {t} = f \left(d (\eta , x), \angle \left([ 0, 0, 1 ] ^ {T}, \boldsymbol {d} _ {x}\right)\right) e ^ {i \omega_ {i n} ^ {2} \langle [ 0, 0, 1 ] ^ {T}, \boldsymbol {d} _ {x} \rangle g (x)} \delta (t - g (x)), \tag {18}
$$

where $f$ and $g$ are the same functions as defined in Eq. 14, and $\delta(t) = 1$ only when $t = 0$.

When $\pmb{d}_x\in \left\{\left[0,0,1\right]^T, - \left[0,0,1\right]^T\right\}$, $\kappa_2(x)_t = 0$ for any $t\in \mathbb{R}$.

Example 10. $SE(3)$ equivariant convolution from $\mathcal{R}$ to $\mathbb{R}^3$: Following [18], the convolution from rays to points becomes:

$$
f _ {2} ^ {l _ {\text {o u t}}} (x) = \int_ {\mathcal {R}} \kappa \left(s _ {2} (x) ^ {- 1} y\right) \rho_ {i n} \left(\mathrm {h} _ {1} \left(s _ {2} (x) ^ {- 1} s _ {1} (y)\right)\right) f _ {1} ^ {l _ {\text {i n}}} (y) d y, \tag {19}
$$

where $\mathrm{h}_1$ is the twist function corresponding to the section $s_1: \mathcal{R} \to SE(3)$ defined above, $\rho_{in}$ is the group representation of $SO(2) \times \mathbb{R}$ corresponding to the feature type $l_{in}$, and $s_2: \mathbb{R}^3 \to SE(3)$ is the section map defined as $s_2(\pmb{x}) = (I, \pmb{x})$.

In this paper, we give the analysis and solutions for the kernel where the input is a scalar field over the ray space, i.e., $\rho_{in} = 1$, the trivial group representation, which is also the case in our reconstruction application.
The convolution is equivariant if and only if

$$
\kappa (h _ {2} x) = \rho_ {\text {o u t}} (h _ {2}) \kappa (x),
$$

![](images/d34da03027471d11c2fb1057f3c3fa445a5c447cfc772d3e98c700f90df2550f.jpg)

![](images/87c60aecf6c0c467234abf818a9bd31f84b43cccc520498bb51d1f5be0d01818.jpg)
Figure 15: As shown in the figure, the point along the ray is distinct from the independent point. Moreover, we can observe that the type-1 (vector) feature of the point along the ray differs from that of the independent point. Specifically, the type-1 feature for the point along the ray can be interpreted as a vector on the plane orthogonal to the ray direction. In contrast, the type-1 feature for the independent point can be interpreted as a three-dimensional vector.

for any $h_2 \in SO(3)$, where $\rho_{out}$ is the group representation of $SO(3)$ corresponding to the feature type $l_{out}$.

We can solve the constraint $\kappa(h_2x) = \rho_{out}(h_2)\kappa(x)$ analytically. For an irreducible representation $\rho_{out}$ and any $x = (\pmb{d}_x, \pmb{m}_x) \in \mathcal{R}$, if $\|\pmb{m}_x\| = 0$, then $\kappa(x) = cY^{l_{out}}(\pmb{d}_x)$, where $c$ is an arbitrary constant, $Y^{l_{out}}$ denotes the spherical harmonics, and $l_{out}$ is the order (type) of the output tensor corresponding to the representation $\rho_{out}$. If $\|\pmb{m}_x\| \neq 0$, $\kappa(x)$ becomes $\rho_{out}(\hat{x})f(\|\pmb{m}_x\|)$, where $\hat{x}$ denotes the element $(\pmb{d}_x, \frac{\pmb{m}_x}{\|\pmb{m}_x\|}, \pmb{d}_x \times \frac{\pmb{m}_x}{\|\pmb{m}_x\|})$ in $SO(3)$ and $f: \mathbb{R} \to \mathbb{R}^{(2l_{out} + 1) \times 1}$.

Similar to the convolution from rays to rays, we can also give the kernel local support. We set $\kappa(x) \neq 0$ when $\|\pmb{m}_x\| \leq d_0$, and $\kappa(x) = 0$ otherwise. One can easily check that this does not break the equivariance constraint on the kernel.
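For a concrete instance of the $\|\pmb{m}_x\| \neq 0$ branch, consider $l_{out} = 1$, for which $\rho_{out}(R)$ is (up to a change of basis between real and spherical-harmonic conventions) the rotation matrix $R$ itself. The sketch below is our own code (assuming NumPy; `ray_hat` and `kappa` are our names): it builds $\hat{x}$ from the Plücker coordinates and checks the constraint $\kappa(h_2 x) = \rho_{out}(h_2)\kappa(x)$ for a random rotation $h_2$:

```python
import numpy as np

def ray_hat(d, m):
    # x_hat: the rotation whose columns are (d, m/|m|, d x m/|m|);
    # d and m are orthogonal for Pluecker coordinates, so this is in SO(3)
    mh = m / np.linalg.norm(m)
    return np.stack([d, mh, np.cross(d, mh)], axis=1)

def kappa(d, m, f):
    # type-1 output (rho_out(R) = R): kappa(x) = rho_out(x_hat) f(|m|)
    return ray_hat(d, m) @ f(np.linalg.norm(m))

# an arbitrary radial profile f: R -> R^3 (any choice satisfies the constraint)
f = lambda r: np.array([np.exp(-r), r, 1.0 / (1.0 + r)])

rng = np.random.default_rng(2)
d = rng.normal(size=3); d /= np.linalg.norm(d)
p = rng.normal(size=3)
m = np.cross(p, d)                 # Pluecker moment, orthogonal to d

# a random rotation h2 in SO(3) (the fiber of R^3 at the origin)
A = rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(A)
R = Q * np.sign(np.linalg.det(Q))  # force det = +1
assert np.allclose(kappa(R @ d, R @ m, f), R @ kappa(d, m, f))
```

The check works because rotating the ray rotates both $\pmb{d}_x$ and $\pmb{m}_x$, so $\hat{x}$ is simply left-multiplied by $R$ while $\|\pmb{m}_x\|$ is unchanged.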
Specifically, when we set $d_0 = 0$, the neighborhood of the target point in the convolution only includes the rays from all views going through the point. Hence, we can simplify the convolution to $f_2^{l_{out}}(x) = \int_{d(y,x) = 0} Y^{l_{out}}(\pmb{d}_{s_2(x)^{-1}y}) f_1^{in}(y) dy$. This equation shows that for every point $x$, we can treat the ray $y$ going through $x$ with feature $f_1^{in}$ as a point $y'$, where $y' - x = \pmb{d}_{s_2(x)^{-1}y}$, as shown in figure 17.

# B Equivariant 3D Reconstruction

# B.1 Approximation of the Equivariant Convolution from Rays to Rays

In practical 3D reconstruction, we have multiple views instead of the whole light field. Although the convolution above is defined on the continuous ray space, the equivariance still strictly holds when the ray sampling (pixels from camera views) is the same up to a coordinate change. In this case, we show how we adjust the equivariant convolution from rays to rays and approximate it by an intra-view $SE(2)$-convolution.

![](images/5e0c37662dd3e149a2c96fb87d46fa6aede2230142f0be3499c06591c2b909eb.jpg)
Figure 16: Visualization of $g(x)$. On the left is the case where the ray $x$ and the ray $\eta$ intersect, and on the right is the case where they do not. On the left, the point $Q$ is the intersection of $x$ and $\eta$, and $Q = (0,0,g(x))$; on the right, the point $Q$ is the intersection of the line $y$ and the ray $\eta$, where $y$ is perpendicular to both $\eta$ and $x$ and intersects both of them. From the figure, in both cases, we can see that for any $t\in \mathbb{R}$, $g((0,t)x) = t + g(x)$. In general, we actually have $g((\gamma ,t)x) = t + g(x)$ for any $(\gamma ,t)\in SO(2)\times \mathbb{R}$.
![](images/70dbbd1a19692853f267f80d2debe17dd9a04012a04061b3232e5a176fbe0d7f.jpg)
Figure 17: Interpreting rays $y_{i}$ as points $y_{i}'$.

# B.1.1 From Light Field to Intra-view Convolution

Following Fig. 18, neighboring rays are composed of two parts: a set of rays from the same view and another set of rays from different views. For a ray $x$ in view A, the neighboring rays from view B are in the neighborhood of the epipolar line of $x$ in view B. When the two views are close, this neighborhood in view B would be very large.

The kernel solution in Ex. 9 suggests that $\kappa(x)$ is related to $\angle(\pmb{d}_x, [0,0,1]^T)$ and $d(x,\eta)$, where $\eta = ([0,0,1]^T, [0,0,0]^T)$ as mentioned before. It would be memory- and time-consuming to store these two quantities beforehand or to compute the angles and distances on the fly. Practically, the light field is only sampled from a few sparse viewpoints, which causes the relative angles of the rays in different views to be large and allows them to be excluded from the kernel neighborhood; therefore, in our implementation, the ray neighborhood is composed only of rays in the same view.

![](images/6944f709950ff712d34b5bf633971ec6e858c16e82342f539a7ad6475c03587e.jpg)
Figure 18: For simplicity, we show a situation with two views. For a ray $x$ from view A, one part of the neighboring rays is from view A (the blue rays in the figure), $\mathcal{N}_A(x)$. For any ray $y \in \mathcal{N}_A(x)$, we have $d(y, x) = 0$, and we require $\angle(\pmb{d}_y, \pmb{d}_x) \leq \beta_0$. The other part is from the other view B (the red rays in the figure). As illustrated in figure 5, the neighboring rays always cross a cylinder around $x$; therefore, the neighboring rays from view B are the projection of the cylinder with radius $r = d_0$ in view B, that is, $\mathcal{N}_B$ is composed of the neighboring pixels of the epipolar line (the black dotted dash) corresponding to $x$ in view B.
For any ray $y$ in the projection of the cylinder, we have $d(y, x) \leq d_0$. Since we require that $\angle(\pmb{d}_y, \pmb{d}_x) \leq \beta_0$ for any ray $y \in \mathcal{N}_B(x)$, $\mathcal{N}_B(x)$ is part of the projection of the cylinder, denoted as the shaded yellow part in view B.

# B.1.2 From Intra-view Light Field to Spherical Convolution

After showing that a small kernel support in the case of sparse views affects only intra-view rays, we can prove that an intra-view light-field convolution is equivalent to a spherical convolution when we constrain the feature field types over $\mathcal{R}$.

We exploit the desired property that a feature defined on a ray is constant along the ray. This means that the translation part of the stabilizer group (translation along the ray) leaves the feature as is. In mathematical terms, the irreducible representation for the translation $\mathbb{R}$ is the identity, which means that the field function is a scalar field for the translation group, with the formula $(\mathcal{L}_t f)(x) = f(t^{-1} x)$. We prove that, in this case, the intra-view convolution over rays is equivalent to the spherical convolution; please see Sec. C.

# B.1.3 From SO(3)- to SE(2)-convolution

While there is an established framework for spherical convolution using a Fourier transform [17, 22, 24], it is not applicable in our case because the boundaries of the constrained field of view cause an explosion in the high frequencies of the spherical harmonics. We make a compromise here and approximate the SO(3) convolution with an SE(2) convolution on the image plane by assuming that the field of view is small. One can see the rationale behind this approximation by keeping only the first-order terms in the optical flow equation: the rotational term is only due to $\Omega_z$, while the translational term is $(-T_x - \Omega_y, - T_y + \Omega_x)$, with $(\Omega_{x},\Omega_{y},\Omega_{z})$ the angular velocity.
We provide a justification using the formalism of the previous paragraphs in appendix Sec. E.

# B.2 Ray Fusion: Equivariant Convolution and Transformer

To reconstruct a 3D object, we use an implicit function known as the signed distance function (SDF) defined on $\mathbb{R}^3$. As a result, we require an equivariant model that can transform features from rays to points to obtain the SDF. This can be achieved using the equivariant convolution in Sec. 3.4.2 and the transformer in Sec. 3.5 of the paper, which allows us to transform features from the ray space to points in 3D space while maintaining equivariance.

# B.2.1 Equivariant Convolution from Rays to Points

In this paper, we obtain the scalar feature field over rays after the SE(2)-equivariant CNNs. As illustrated in figure 3, we utilize the equivariant convolution (discussed in Sec. 3.4.2) to compute features for a query point by convolving over neighboring rays. Our experiments have shown that convolving only over rays that go through the point achieves the best results, and the equivariant kernel used for this convolution is provided in Ex. 10. Moreover, in the implementation, we can concatenate the input feature $f_1^{in}$ with the depth embedding of the query point $x$. While this theoretically breaks the ideal equivariance for continuous light fields, it does not affect the practical equivariance, as it is rare for two cameras to share the same ray.

# B.2.2 Equivariant Transformer from Rays to Points

For the third step, we introduce an equivariant transformer to alleviate the loss of expressivity due to the constrained kernel $\kappa$ in Eq. 19. Again, the attention keys and values are generated from the features attached to rays, while the query is generated from the feature attached to the point.

In the implementation, we apply a transformer over the rays going through the query point.
We can continue to use the interpretation that treats any ray $y$ passing through the point $x$ as a point $y'$ such that $y' - x = d_{s_2(x)^{-1}y}$, as shown in figure 17. Since $y$ becomes the point $y'$, the ray feature $f_1^{in}$ becomes a feature over $\mathbb{R}^3$ attached to the "points" $y'$. We can update the neighboring ray features by directly concatenating the equivariant feature of the point to every ray feature and passing the result through an $SO(3)$-equivariant MLP. The transformer in Eq. 4 is then converted to the transformer in [27] over $\mathbb{R}^3$. See appendix Sec. H for details. The composition of the ray updating block and transformer block is shown in figure 22.

# C Proof of Equivalence of Intra-view Light Field Convolution and Spherical Convolution

The property that a feature defined on a ray is constant along the ray means that the translation part of the stabilizer group (translation along the ray) leaves the feature unchanged. In mathematical terms, the irreducible representation of the translation group $\mathbb{R}$ is the identity, which means that the field function is a scalar field for the translation group, with the formula $(\mathcal{L}_t f)(x) = f(t^{-1} x)$. The equivariance condition on the kernel can then be simplified as

$$
\kappa((h, t) x) = \rho_{out}(h)\, \kappa(x)\, \rho_{in}\left(\mathrm{h}_a^{-1}(h, \boldsymbol{d}_x)\right),
$$

where $h \in SO(2)$ and $t \in \mathbb{R}$, $\rho_{in}$ and $\rho_{out}$ are irreducible representations of $SO(2)$, and $\mathrm{h}_a$ is the twist function from Ex. 6, i.e., $\mathrm{h}(g, x) = (\mathrm{h}_a(R_g, \pmb{d}_x), \mathrm{h}_b(g, x))$: the twist of the fiber introduced by the action of $SO(3)$ corresponding to the section map $s_a$ of $SO(3)$ in Ex. 5 and Ex. 6. Now we describe the relationship between the intra-view light-field convolution and the spherical convolution:

Proposition C.1.
When the translation group acts on a feature $f: \mathcal{R} \to V$ as $(\mathcal{L}_t f)(x) = f(t^{-1} x)$ for any $x \in \mathcal{R}$, the equivariant intra-view light-field convolution

$$
f^{l_{out}}(x) = \int_{y \in \mathcal{N}(x)} \kappa(s(x)^{-1} y)\, \rho_{in}(\mathrm{h}(s(x)^{-1} s(y)))\, f^{l_{in}}(y)\, dy
$$

becomes a spherical convolution:

$$
f^{l_{out}}(x) = \int_{\boldsymbol{d}_y \in \mathbb{S}^2} \kappa'(s_a(\boldsymbol{d}_x)^{-1} \boldsymbol{d}_y)\, \rho_{in}(\mathrm{h}_a(s_a(\boldsymbol{d}_x)^{-1} s_a(\boldsymbol{d}_y)))\, f^{\prime l_{in}}(\boldsymbol{d}_y)\, d\boldsymbol{d}_y, \tag{20}
$$

where $f^{\prime l_{in}}(\pmb{d}_y) = f^{l_{in}}(\pmb{d}_y, \pmb{c}_x \times \pmb{d}_y)$, $\pmb{c}_x$ denotes the camera center that $x$ goes through, $s_a$ is the section map of $SO(3)$ as defined in appendix Ex. 5, and $\kappa'(s_a(\pmb{d}_x)^{-1}\pmb{d}_y) = \kappa(s_a(\pmb{d}_x)^{-1}\pmb{d}_y, (s(x)^{-1}\pmb{c}_x) \times (s_a(\pmb{d}_x)^{-1}\pmb{d}_y))$.

Proof. The $SE(3)$ equivariant convolution over rays becomes an intra-view convolution when the neighboring rays are in the same view.
Moreover, the simplified kernel constraint derived in the paper is that for any $(h,t)\in SO(2)\times \mathbb{R}$ and $x = (\boldsymbol{d}_x, \boldsymbol{m}_x)\in \mathcal{R}$:

$$
\kappa((h, t) x) = \rho_{out}(h)\, \kappa(x)\, \rho_{in}\left(\mathrm{h}_a^{-1}(h, \boldsymbol{d}_x)\right),
$$

where $\mathrm{h}_a:SO(3)\times \mathbb{S}^2\to SO(2)$ is the twist function: $\mathrm{h}_a(g,\pmb{d}) = s_a(g\pmb{d})^{-1}\,g\,s_a(\pmb{d})$ for any $g\in SO(3)$ and $\pmb{d}\in \mathbb{S}^2$.

With the simplified kernel constraint, we can prove that the intra-view light field convolution is equivalent to a spherical convolution:

$$
\begin{array}{l}
f^{l_{out}}(x) \\
= \int_{d(y, \boldsymbol{c}_x) = 0} \kappa(s(x)^{-1} y)\, \rho_{in}(\mathrm{h}(s(x)^{-1} s(y)))\, f^{l_{in}}(y)\, dy \quad (21) \\
= \int_{d(y, \boldsymbol{c}_x) = 0} \kappa(s(x)^{-1} y)\, \rho_{in}\left(\mathrm{h}_a\left(s_a(\boldsymbol{d}_x)^{-1} s_a(\boldsymbol{d}_y)\right)\right) f^{l_{in}}(y)\, dy \quad (22) \\
= \int_{\boldsymbol{d}_y \in \mathbb{S}^2} \kappa\left(s_a(\boldsymbol{d}_x)^{-1} \boldsymbol{d}_y,\; (s(x)^{-1}\boldsymbol{c}_x) \times (s_a(\boldsymbol{d}_x)^{-1} \boldsymbol{d}_y)\right) \rho_{in}\left(\mathrm{h}_a\left(s_a(\boldsymbol{d}_x)^{-1} s_a(\boldsymbol{d}_y)\right)\right) f^{l_{in}}\left(\boldsymbol{d}_y, \boldsymbol{c}_x \times \boldsymbol{d}_y\right) d\boldsymbol{d}_y \quad (23) \\
= \int_{\boldsymbol{d}_y \in \mathbb{S}^2} \kappa'(s_a(\boldsymbol{d}_x)^{-1} \boldsymbol{d}_y)\, \rho_{in}\left(\mathrm{h}_a(s_a(\boldsymbol{d}_x)^{-1} s_a(\boldsymbol{d}_y))\right) f^{\prime l_{in}}\left(\boldsymbol{d}_y\right) d\boldsymbol{d}_y. \quad (24) \\
\end{array}
$$

In line 21, $\pmb{c}_x$ is the camera center that $x$ goes through.
Line 21 equals line 22 because we assume that the irreducible representation of the translation group $\mathbb{R}$ is the identity, as mentioned in the paper.

From line 22 to line 23, we can replace $s(x)^{-1}y$ with

$$
\left(s_a\left(\boldsymbol{d}_x\right)^{-1} \boldsymbol{d}_y,\; \left(s(x)^{-1} \boldsymbol{c}_x\right) \times \left(s_a\left(\boldsymbol{d}_x\right)^{-1} \boldsymbol{d}_y\right)\right)
$$

due to the facts that $s_a(\pmb{d}_x)^{-1}\pmb{d}_y = \pmb{d}_{s(x)^{-1}y}$ and that the point $s(x)^{-1}\pmb{c}_x$ lies on the ray $s(x)^{-1}y$. Since $y$ goes through $\pmb{c}_x$, we can replace $y$ with $(\pmb{d}_y, \pmb{c}_x \times \pmb{d}_y)$.

From line 23 to line 24, we have $f^{\prime l_{in}}(\pmb{d}_y) = f^{l_{in}}(\pmb{d}_y, \pmb{c}_x \times \pmb{d}_y)$ because $\pmb{c}_x$ is fixed for any view. Additionally, from line 23 to line 24 we replace

$$
\kappa\left(s_a\left(\boldsymbol{d}_x\right)^{-1} \boldsymbol{d}_y,\; \left(s(x)^{-1} \boldsymbol{c}_x\right) \times \left(s_a\left(\boldsymbol{d}_x\right)^{-1} \boldsymbol{d}_y\right)\right)
$$

with $\kappa^{\prime}(s_a(\pmb{d}_x)^{-1}\pmb{d}_y)$.
This is because, according to

$$
\kappa((h, t) x) = \rho_{out}(h)\, \kappa(x)\, \rho_{in}\left(\mathrm{h}_a^{-1}(h, \boldsymbol{d}_x)\right),
$$

we have $\kappa((e, t)x) = \kappa(x)$ for any $t \in \mathbb{R}$, where $e$ is the identity element of $SO(2)$; thus, when $t = (-s(x)^{-1}\pmb{c}_x)^T[0, 0, 1]^T$, we have

$$
\begin{array}{l}
\kappa\left(s_a(\boldsymbol{d}_x)^{-1} \boldsymbol{d}_y,\; (s(x)^{-1} \boldsymbol{c}_x) \times \left(s_a(\boldsymbol{d}_x)^{-1} \boldsymbol{d}_y\right)\right) \\
= \kappa\left(s_a(\boldsymbol{d}_x)^{-1} \boldsymbol{d}_y,\; \left(s(x)^{-1} \boldsymbol{c}_x + t[0, 0, 1]^T\right) \times \left(s_a(\boldsymbol{d}_x)^{-1} \boldsymbol{d}_y\right)\right) \quad (25) \\
= \kappa\left(s_a(\boldsymbol{d}_x)^{-1} \boldsymbol{d}_y,\; [0, 0, 0]^T\right) \quad (26) \\
= \kappa'\left(s_a(\boldsymbol{d}_x)^{-1} \boldsymbol{d}_y\right). \\
\end{array}
$$

Line 25 is equal to line 26 because $s(x)^{-1}\pmb{c}_x$ always lies on the $z$ axis, and thus $s(x)^{-1}\pmb{c}_x + t[0,0,1]^T = [0,0,0]^T$.

# D Spherical Convolution Expressed in Gauge Equivariant Convolution Format

Group convolution is a particular case of gauge equivariant convolution [64], where gauge equivariance means equivariance with respect to the transformation of the section map (transformation of the tangent frame).

![](images/3869931013f322a2739202fa8a0629a2fba07ab88076db3879febc7498d84481.jpg)
Figure 19: Illustration of $h_{x \to y}$. $s(x)[1,0,0]^T$ and $s(x)[0,1,0]^T$ (yellow) attached to $x$ are tangent vectors at $x$. We parallel transport $s(x)[1,0,0]^T$ and $s(x)[0,1,0]^T$ along the geodesic (black dashed line) between $x$ and $y$. The transported tangent vectors need to undergo a transformation $h_{x \to y}$ in $SO(2)$ to align with the vectors $s(y)[1,0,0]^T$ and $s(y)[0,1,0]^T$ (green) attached to $y$.

![](images/b35c393977e915916e930a82bd2d1765cd608b0b5e0bfa0396faea6fb6e0cee7.jpg)
In the following paragraphs we give an elaborated definition of gauge equivariance for the sphere.

Suppose $f: \mathbb{S}^2 \to V$ is the field function corresponding to the section choice $s_a: \mathbb{S}^2 \to SO(3)$. We use $\mathcal{L}_{s_a \to s_a'}$ acting on $f$ to denote the change of the section map from $s_a$ to $s_a'$: $(\mathcal{L}_{s_a \to s_a'} f)(x) = \rho(s_a(x)^{-1} s_a'(x))^{-1} f(x)$, where $\rho$ is the irreducible representation of $SO(2)$ corresponding to the field type of $f$. The convolution $\Phi$ is gauge equivariant when $\Phi(\mathcal{L}_{s_a \to s_a'} f) = \mathcal{L}_{s_a \to s_a'} (\Phi(f))$.

In this section, we show that the spherical convolution can be expressed in terms of the gauge equivariant convolution [16], which makes it convenient to verify the approximation of the spherical convolution by the $SE(2)$ convolution:

$$
f^{l_{out}}(x) = \int_{y \in \mathcal{N}(x)} \kappa'(s(x)^{-1} y)\, \rho_{in}(h_{y \rightarrow x})^{-1} f^{l_{in}}(y)\, dy,
$$

where $\kappa^{\prime}(hx) = \rho_{out}(h)\kappa^{\prime}(x)\rho_{in}^{-1}(h)$ for any $h\in SO(2)$.

Since the focus of this section is spherical convolution, here we use $s(x)$ to denote $s_a(x)$ for any $x \in \mathbb{S}^2$.
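The transport map $h_{x \to y}$ illustrated in figure 19 can be sketched numerically. In the following illustration (ours, not the paper's code) we assume the section $s_a(x)$ is the rotation about $\eta \times x$ taking $\eta = [0,0,1]^T$ to $x$; parallel transport along the geodesic from $x$ to $y$ is then the rotation about $x \times y$ by the angle between them:

```python
import numpy as np

def rot_axis_angle(axis, angle):
    # Rodrigues' formula for a rotation about a (non-zero) axis
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

def section(x, eta=np.array([0.0, 0.0, 1.0])):
    # s(x) in SO(3) with s(x) @ eta == x, rotating about eta x x (assumed section choice)
    c = np.clip(eta @ x, -1.0, 1.0)
    if np.isclose(c, 1.0):
        return np.eye(3)
    return rot_axis_angle(np.cross(eta, x), np.arccos(c))

def h_x_to_y(x, y):
    # Transport s(x)'s tangent frame along the geodesic x -> y, then express it
    # in s(y)'s tangent frame; the result is an element of SO(2).
    c = np.clip(x @ y, -1.0, 1.0)
    R_geo = np.eye(3) if np.isclose(c, 1.0) else rot_axis_angle(np.cross(x, y), np.arccos(c))
    sx, sy = section(x), section(y)
    e1, e2 = R_geo @ sx[:, 0], R_geo @ sx[:, 1]   # transported frame at y
    return np.array([[sy[:, 0] @ e1, sy[:, 0] @ e2],
                     [sy[:, 1] @ e1, sy[:, 1] @ e2]])

x = np.array([0.3, 0.1, 0.9]); x /= np.linalg.norm(x)
y = np.array([0.0, 0.4, 0.9]); y /= np.linalg.norm(y)
h = h_x_to_y(x, y)   # a 2x2 rotation matrix
```

Both frames are right-handed orthonormal bases of the tangent plane at $y$, so $h$ is indeed a rotation, and $h_{x \to x}$ is the identity.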
For any $x, y \in \mathbb{S}^2$, the vectors $s(x)[1,0,0]^T$ and $s(x)[0,1,0]^T$ attached to $x$ are tangent vectors at $x$. We parallel transport $s(x)[1,0,0]^T$ and $s(x)[0,1,0]^T$ along the geodesic between $x$ and $y$ and obtain two tangent vectors at $y$, denoted $s(x \to y)_1$ and $s(x \to y)_2$, as shown in figure 19. Here, parallel transport along a smooth curve is a way to translate a vector "parallelly" based on the affine connection; that is, for a smooth curve $\gamma : [0,1] \to \mathbb{S}^2$, the parallel transport $X: \mathrm{Im}(\gamma) \to \mathcal{T}\mathbb{S}^2$ along the curve $\gamma$ satisfies $\nabla_{\dot{\gamma}(t)}X = 0$, where $\mathrm{Im}(\gamma) = \{\gamma(t) \mid t \in [0,1]\}$ and $\nabla$ is the affine connection.

$s(x \to y)_1$ and $s(x \to y)_2$ need to undergo a transformation in $SO(2)$ to align with $s(y)[1,0,0]^T$ and $s(y)[0,1,0]^T$ at $y$, as shown in figure 19. We denote this transformation by $h_{x \to y}$.

With the above notation, the spherical convolution can be expressed as:

$$
\begin{array}{l}
f^{l_{out}}(x) = \int_{y \in \mathcal{N}(x)} \kappa(s(x)^{-1} y)\, \rho_{in}(\mathrm{h}(s(x)^{-1} s(y)))\, f^{l_{in}}(y)\, dy \\
= \int_{y \in \mathcal{N}(x)} \kappa(s(x)^{-1} y)\, \rho_{in}\left(h_{s(x)^{-1} y \rightarrow \eta}\right) \rho_{in}\left(h_{s(x)^{-1} y \rightarrow \eta}\right)^{-1} \rho_{in}\left(\mathrm{h}(s(x)^{-1} s(y))\right) f^{l_{in}}(y)\, dy
\\
= \int_{y \in \mathcal{N}(x)} \kappa(s(x)^{-1} y)\, \rho_{in}\left(h_{s(x)^{-1} y \rightarrow \eta}\right) \rho_{in}\left(h_{y \rightarrow x}\right)^{-1} f^{l_{in}}(y)\, dy \\
= \int_{y \in \mathcal{N}(x)} \kappa'(s(x)^{-1} y)\, \rho_{in}\left(h_{y \rightarrow x}\right)^{-1} f^{l_{in}}(y)\, dy, \\
\end{array}
$$

where $\eta = [0,0,1]^T$ is the fixed origin point in $\mathbb{S}^2$, and $\kappa'(x) = \kappa(x)\rho_{in}(h_{x \to \eta})^{-1}$ for any $x \in \mathcal{N}(\eta)$. We can derive the equivariance condition that $\kappa'$ should satisfy:

$$
\begin{array}{l}
\kappa'(hx) = \kappa(hx)\, \rho_{in}\left(h_{hx \rightarrow \eta}\right)^{-1} \\
= \rho_{out}(h)\, \kappa(x)\, \rho_{in}(\mathrm{h}(h, x))^{-1} \rho_{in}\left(h_{hx \rightarrow \eta}\right)^{-1} \\
= \rho_{out}(h)\, \kappa(x)\, \rho_{in}\left(h_{x \rightarrow \eta}\right)^{-1} \rho_{in}\left(h^{-1}\right) \\
= \rho_{out}(h)\, \kappa'(x)\, \rho_{in}^{-1}(h). \\
\end{array}
$$

Therefore, the spherical convolution can be expressed in the gauge equivariant convolution format:

$$
f^{l_{out}}(x) = \int_{y \in \mathcal{N}(x)} \kappa'(s(x)^{-1} y)\, \rho_{in}(h_{y \rightarrow x})^{-1} f^{l_{in}}(y)\, dy,
$$

where $\kappa'(hx) = \rho_{out}(h)\kappa'(x)\rho_{in}^{-1}(h)$ for any $h\in SO(2)$.

# E Converting Spherical Convolution to $SE(2)$ Equivariant Convolution

As stated in Sec. D, spherical convolution is gauge equivariant with respect to the choice of the section map $s_a$, and it can be written as a gauge equivariant convolution. In this section, we use the gauge equivariant convolution to analyze the $SE(2)$ equivariant convolution's approximation of the spherical convolution.

Since each view performs spherical convolution on its own, we only analyze the convolution for one view for the sake of simplicity.
We use $V$ to denote the space of the rays in the same view, where $V \subset \mathbb{S}^2$. For any $x \in V$, we can choose the section map $s_a$ such that $h_{x \to o} = e$, where $o \in \mathbb{S}^2$ is the point aligned with the optical axis, as shown in figure 20. Again, we use $s(x)$ to denote $s_a(x)$ for any $x \in \mathbb{S}^2$ in this section.

When the FOV is small, for any $x, y \in V$ we can make the approximation $h_{x \to y} = e$. The gauge equivariant convolution in Sec. D can then be approximated as

$$
\begin{array}{l}
f^{l_{out}}(x) = \int_{y \in \mathcal{N}(x)} \kappa'(s(x)^{-1} y)\, f^{l_{in}}(y)\, dy \\
\overset{t = s(x)^{-1} y}{=} \int_{t \in \mathcal{N}(\eta)} \kappa'(t)\, f^{l_{in}}(s(x) t)\, dt, \\
\end{array}
$$

where $\eta = [0,0,1]^T$ is the fixed origin in $\mathbb{S}^2$, and $\kappa'(hx) = \rho_{out}(h)\kappa'(x)\rho_{in}^{-1}(h)$ for any $h \in SO(2)$.

Additionally, as illustrated in figure 21, we have a map from $V$ to the projection points on the picture plane, represented as $\omega : V \to \mathbb{R}^2$, where $\omega(o)$ is defined as $[0,0]^T$. When the FOV is small, we have the approximation that for any $h \in SO(2)$, $t \in \mathcal{N}(\eta)$, and $x \in V$,

$$
\omega(s(x) t) \approx \omega(x) + \omega(s(o) t).
$$

![](images/924893b5511fde49f73c6fcd91921c3e7c123716b27e62c4581802debd9f374e.jpg)
Figure 20: Section choice for every view

This is because

$$
\omega(s(x)t) = \omega(x) + \omega(s(o)t) + r\left(\frac{\sin\beta_t}{\cos\beta_t} - \frac{\sin\beta_t}{\cos\beta_x \cos(\beta_x + \beta_t)}\right),
$$

and we have

$$
\lim_{t \to \eta} r\left(\frac{\sin\beta_t}{\cos\beta_t} - \frac{\sin\beta_t}{\cos\beta_x \cos\left(\beta_x + \beta_t\right)}\right) = r\left(\tan\beta_x\right)^2 \beta_t + o\left(\beta_t^2\right),
$$

so when $\beta_x$ is small (i.e., the FOV is small), the approximation holds.

Then $f^{l_{out}}(x) = \int_{t \in \mathcal{N}(\eta)} \kappa'(t) f^{l_{in}}(s(x)t)\, dt$ can be approximately computed in the image plane:

$$
f^{\prime l_{out}}(\omega(x)) = \int_{\omega(s(o)t) \in \mathcal{N}([0,0]^T)} \kappa''(\omega(s(o)t))\, f^{\prime l_{in}}(\omega(x) + \omega(s(o)t))\, d(\omega(s(o)t)), \tag{27}
$$

where $f'(\omega(x)) = f(x)$ for any $x\in \mathbb{S}^2$, and $\kappa''(\omega(s(o)t)) = \kappa'(t)$ for any $t\in \mathcal{N}(\eta)$.

Since $\omega(s(o)ht) = h\,\omega(s(o)t)$ for any $h \in SO(2)$ and any $t \in \mathcal{N}(\eta)$, we have, for any $h \in SO(2)$ and any $t \in \mathcal{N}(\eta)$,

$$
\begin{array}{l}
\kappa''(h\,\omega(s(o)t)) = \kappa''(\omega(s(o)ht)) = \kappa'(ht) \\
= \rho_{out}(h)\, \kappa'(t)\, \rho_{in}^{-1}(h) = \rho_{out}(h)\, \kappa''(\omega(s(o)t))\, \rho_{in}^{-1}(h) \\
\overset{p = \omega(s(o)t) \in \mathbb{R}^2}{\Longrightarrow} \quad \kappa''(hp) = \rho_{out}(h)\, \kappa''(p)\, \rho_{in}^{-1}(h).
\\
\end{array}
$$

![](images/819fff8b22e03bba5a33b3281dc8251f69ac8ba06bb1de7ece41587813213888.jpg)
Figure 21: Illustration of the projection map $\omega$

Therefore, the convolution in Eq. 27 is exactly an $SE(2)$ equivariant convolution, and it can be used to approximate the spherical convolution.

In other words, we can intuitively approximate the equivariant convolution over the partial sphere using an $SE(2)$ equivariant network when the distortion between the sphere and the tangent plane at the optical axis is modest.

# F Construction of Features in Equivariant Light Field Transformer

Note that $f_{2}^{out}$, $f_{2}^{in}$ and $f_{1}^{in}$ are features composed of fields of different types, denoted as $f_{2}^{out} = \oplus_{i}f_{2}^{l_{out_i}}$, $f_{2}^{in} = \oplus_{i}f_{2}^{l_{in_i}}$, and $f_{1}^{in} = \oplus_{i}f_{1}^{l_{in_i}'}$. $f_{k}$, $f_{q}$, and $f_{v}$ are the constructed equivariant key, query, and value features, respectively, which are composed of fields of different types as well.

We write $f_{k} = \oplus_{i}f_{k}^{l_{k_i}}$, $f_{q} = \oplus_{i}f_{q}^{l_{k_i}}$, and $f_{v} = \oplus_{i}f_{v}^{l_{v_i}}$.
We construct the features $f_{k}$, $f_{q}$ and $f_{v}$ through the equivariant kernels $\kappa_{k} = \oplus_{j,i}\kappa_{k}^{l_{k_j},l_{in_i}'}$, $\kappa_{v} = \oplus_{j,i}\kappa_{v}^{l_{v_j},l_{in_i}'}$ and the equivariant matrix $W_{q} = \oplus_{j,i}W_{q}^{l_{k_j},l_{in_i}}$:

$$
f_{k}^{l_{k_j}}(x, y, f_{1}^{in}) = \sum_{i} \kappa_{k}^{l_{k_j}, l_{in_i}'}\left(s_2(x)^{-1} y\right) \rho_{1}^{l_{in_i}'}\left(\mathrm{h}_1\left(s_2(x)^{-1} s_1(y)\right)\right) f_{1}^{l_{in_i}'}(y); \tag{28}
$$

$$
f_{v}^{l_{v_j}}(x, y, f_{1}^{in}) = \sum_{i} \kappa_{v}^{l_{v_j}, l_{in_i}'}\left(s_2(x)^{-1} y\right) \rho_{1}^{l_{in_i}'}\left(\mathrm{h}_1\left(s_2(x)^{-1} s_1(y)\right)\right) f_{1}^{l_{in_i}'}(y); \tag{29}
$$

$$
f_{q}^{l_{k_j}}\left(x, f_{2}^{in}\right) = \sum_{i} W_{q}^{l_{k_j}, l_{in_i}} f_{2}^{l_{in_i}}(x), \tag{30}
$$

where for any $i,j$, any $h_2 \in SO(3)$, and any $x \in \mathcal{R}$, $\kappa_k^{l_{k_j}, l_{in_i}'}$ and $\kappa_v^{l_{v_j}, l_{in_i}'}$ should satisfy

$$
\kappa_{k}^{l_{k_j}, l_{in_i}'}(h_2 x) = \rho_{2}^{l_{k_j}}(h_2)\, \kappa_{k}^{l_{k_j}, l_{in_i}'}(x)\, \rho_{1}^{l_{in_i}'}(\mathrm{h}_1^{-1}(h_2, x));
$$

$$
\kappa_{v}^{l_{v_j}, l_{in_i}'}(h_2 x) = \rho_{2}^{l_{v_j}}(h_2)\, \kappa_{v}^{l_{v_j}, l_{in_i}'}(x)\, \rho_{1}^{l_{in_i}'}(\mathrm{h}_1^{-1}(h_2, x)),
$$

where $\mathrm{h}_1(h_2,x) = s_1(h_2x)^{-1}h_2s_1(x)$ is the twist function, and for any $i,j$ and any $h_2\in SO(3)$,
$W_{q}^{l_{k_j},l_{in_i}}$ satisfies

$$
\rho_{2}^{l_{k_j}}\left(h_2\right) W_{q}^{l_{k_j}, l_{in_i}} = W_{q}^{l_{k_j}, l_{in_i}} \rho_{1}^{l_{in_i}}\left(h_2\right). \tag{31}
$$

Since the group representations are irreducible, by Schur's Lemma we have $W_{q}^{l_{k_j},l_{in_i}} = cI$ when $l_{k_j} = l_{in_i}$, where $c$ is an arbitrary real number; otherwise $W_{q}^{l_{k_j},l_{in_i}} = 0$.

# G Proof for Equivariance of Light Field Transformer

The equivariant light field transformer defined in the paper is stated in a general form:

$$
f_{2}^{out}(x) = \sum_{y \in \mathcal{N}(x)} \frac{\exp\left(\left\langle f_{q}\left(x, f_{2}^{in}\right), f_{k}\left(x, y, f_{1}^{in}\right)\right\rangle\right)}{\sum_{y' \in \mathcal{N}(x)} \exp\left(\left\langle f_{q}\left(x, f_{2}^{in}\right), f_{k}\left(x, y', f_{1}^{in}\right)\right\rangle\right)} f_{v}(x, y, f_{1}^{in}). \tag{32}
$$
According to [18], one can prove that $f_{q}$, $f_{k}$ and $f_{v}$ are equivariant; that is, for any $g \in SE(3)$, $x \in \mathbb{R}^3$ and $y \in \mathcal{R}$,

$$
\begin{array}{l}
f_{q}^{l_{k_j}}(g \cdot x, \mathcal{L}_{g}^{in}(f_{2}^{in})) = \rho_{2}^{l_{k_j}}\left(\mathrm{h}_2(g^{-1}, g \cdot x)^{-1}\right) f_{q}^{l_{k_j}}(x, f_{2}^{in}); \\
f_{k}^{l_{k_j}}(g \cdot x, g \cdot y, \mathcal{L}_{g}^{\prime in}(f_{1}^{in})) = \rho_{2}^{l_{k_j}}\left(\mathrm{h}_2(g^{-1}, g \cdot x)^{-1}\right) f_{k}^{l_{k_j}}(x, y, f_{1}^{in}); \\
f_{v}^{l_{v_j}}(g \cdot x, g \cdot y, \mathcal{L}_{g}^{\prime in}(f_{1}^{in})) = \rho_{2}^{l_{v_j}}\left(\mathrm{h}_2(g^{-1}, g \cdot x)^{-1}\right) f_{v}^{l_{v_j}}(x, y, f_{1}^{in}), \\
\end{array}
$$

where $\mathcal{L}^{in}$ and $\mathcal{L}^{\prime in}$ are the group actions of $SE(3)$ on $f_{2}^{in}$ and $f_{1}^{in}$, respectively.

The inner product $\langle f_q, f_k \rangle = \sum_i (\overline{f_q^{l_{k_i}}})^T f_k^{l_{k_i}}$ is invariant due to the unitarity of the representations, which results in the equivariance of the transformer.

# H From $SE(3)$ Equivariant Transformer in Ray Space to $SE(3)$ Equivariant Transformer in Euclidean Space

In our implementation for the reconstruction task, the attention model is only applied over the rays going through the points. We can continue to use the interpretation from the convolution from ray space to $\mathbb{R}^3$ in Ex. 10 that treats any ray $y$ passing through the point $x$ as a point $y'$ such that $y' - x = d_{s_2(x)^{-1}y}$, as shown in figure 17.
After we get the initial features of the query points through the equivariant convolution from $\mathcal{R}$ to $\mathbb{R}^3$, we update the neighboring ray features by directly concatenating the query point feature to every ray feature and passing the result through an $SO(3)$ equivariant MLP, as shown in figure 22. The $SO(3)$ equivariant MLP is composed of an equivariant nonlinear layer and a self-interaction layer, as in tensor field networks [55].

Since $y$ becomes the point $y'$, and $f_1^{in}$ is the feature over $\mathbb{R}^3$ attached to the "points" $y'$, it becomes $\oplus_i f_1^{l_{in_i}}$. The transformer in Eq. 32 is then converted to the transformer in [27] over $\mathbb{R}^3$:

![](images/dab118b2ded195f2cf67409801cf4fbe5cd1eb73b194d729054fb4a0e8247cf5.jpg)
Figure 22: The structure of ray updating and the $SE(3)$ transformer. We treat any ray $y$ going through point $x$ as a point $y' \in \mathbb{R}^3$ such that $y' - x = d_{s_2(x)^{-1}y}$. The blue block indicates the ray feature update, and the pink block is the equivariant attention model. For the ray feature update, the point feature (lavender) is concatenated to every ray feature (light yellow, light blue, and light red) and passed through an equivariant MLP. For the transformer, we obtain the equivariant query, key, and value features through the designed linear matrix $W_q$ and the designed kernels $\kappa_k$ and $\kappa_v$, then apply multi-head attention to obtain the output point feature, which can subsequently be fed into the next ray feature updating and $SE(3)$ transformer block.
$$
f_{2}^{out}(x) = \sum_{y' \in \mathcal{N}(x)} \frac{\exp\left(\left\langle f_{q}\left(x, f_{2}^{in}\right), f_{k}\left(x, y', f_{1}^{in}\right)\right\rangle\right)}{\sum_{z' \in \mathcal{N}(x)} \exp\left(\left\langle f_{q}\left(x, f_{2}^{in}\right), f_{k}\left(x, z', f_{1}^{in}\right)\right\rangle\right)} f_{v}\left(x, y', f_{1}^{in}\right), \tag{33}
$$

where the subscript denotes the points to which the feature is attached, i.e., $x$ and $y'$.

The features $f_{k}$ and $f_{v}$ are constructed by the equivariant kernels $\kappa_{k} = \oplus_{j,i}\kappa_{k}^{l_{k_j},l_{in_i}}$, $\kappa_{v} = \oplus_{j,i}\kappa_{v}^{l_{v_j},l_{in_i}}$:

$$
f_{k}^{l_{k_j}}(x, y, f_{1}^{in}) = \sum_{i} \kappa_{k}^{l_{k_j}, l_{in_i}}(y' - x)\, f_{1}^{l_{in_i}}(y);
$$

$$
f_{v}^{l_{v_j}}(x, y, f_{1}^{in}) = \sum_{i} \kappa_{v}^{l_{v_j}, l_{in_i}}(y' - x)\, f_{1}^{l_{in_i}}(y),
$$

where for any $i,j$, any $h_2 \in SO(3)$, and any $x \in \mathbb{R}^3$, $\kappa_k^{l_{k_j},l_{in_i}}$ and $\kappa_v^{l_{v_j},l_{in_i}}$ should satisfy

$$
\kappa_{k}^{l_{k_j}, l_{in_i}}(h_2 x) = \rho_{2}^{l_{k_j}}(h_2)\, \kappa_{k}^{l_{k_j}, l_{in_i}}(x)\, \rho_{1}^{l_{in_i}}(h_2^{-1});
$$

$$
\kappa_{v}^{l_{v_j}, l_{in_i}}(h_2 x) = \rho_{2}^{l_{v_j}}(h_2)\, \kappa_{v}^{l_{v_j}, l_{in_i}}(x)\, \rho_{1}^{l_{in_i}}(h_2^{-1}),
$$

as stated in [27].

The feature $f_{q}$ is constructed in the same way as in Equation 30.

![](images/fa4d8fcf7064d11e762d3a85a6741a83fd1b1502e82d5e431d1b44b255fd3e87.jpg)
Figure 23: The comparison of the equivariant light field transformer and the conventional transformer. The left is the equivariant light field transformer, and the right is the conventional transformer.
In our light field transformer, the position encoding is not directly concatenated to the features, because that would not be equivariant. We first obtain the equivariant feature attached to the point by equivariant convolution over the rays. We then construct the features $f_{k}$ and $f_{v}$ with the derived designed kernels $\kappa_{k}$ and $\kappa_{v}$ to keep them equivariant; we construct $f_{q}$ by the designed equivariant linear layer $W_{q}$. Since $f_{k}$, $f_{q}$, and $f_{v}$ are all equivariant, the inner product of $f_{k}$ and $f_{q}$ is invariant, which results in invariant attention weights. Therefore, the whole transformer is equivariant. In contrast, the conventional transformer concatenates the ray position encoding with the feature attached to the ray, uses the point position encoding for the query feature of the point, and applies multi-head attention using $f_{k}$, $f_{q}$, and $f_{v}$ obtained by a linear layer. We should note that $W_{q}$ in the light field transformer is designed to be equivariant, satisfying equation 31, which differs from the conventional linear map $W_{q}$ in the conventional transformer. For the attention blocks after the first block, the query features of the point in our model and in the conventional model are both the output of the last attention block. The difference is that our query feature remains equivariant, while the feature in the conventional transformer does not.

![](images/20ccc08e6bbabb640e9848d257249accb00de70fa7f4cb8a5c1e1e4b3f12f39e.jpg)

Figure 22 shows the structures of the ray feature update and the $SE(3)$ equivariant transformer.

In figure 23, we compare the $SE(3)$ equivariant transformer and the conventional transformer to illustrate how equivariance is guaranteed in the equivariant transformer. In figure 24, we present the types of features in the $SE(3)$ equivariant attention head and the conventional attention head, respectively.
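The core of the argument above (invariant inner products give invariant attention weights, hence an equivariant output) can be sketched with type-1 (vector) features; the following minimal numpy illustration is ours, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 3                       # number of neighboring rays, feature dimension
q = rng.normal(size=d)            # query feature at the point
k = rng.normal(size=(n, d))       # key features from neighboring rays
v = rng.normal(size=(n, d))       # value features from neighboring rays

def attention(q, k, v):
    # softmax weights from inner products, then a weighted sum of values
    w = np.exp(k @ q)
    w /= w.sum()
    return w @ v

# a rotation acting on all type-1 features simultaneously
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])

out = attention(q, k, v)
out_rot = attention(R @ q, k @ R.T, v @ R.T)
# (R k_i) . (R q) = k_i . q, so the weights are invariant and the output rotates
assert np.allclose(out_rot, R @ out)
```

The same argument goes through for any unitary representation and any direct sum of field types, since the inner product is taken per type.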
Figure 24 indicates that geometric information is aggregated equivariantly in the multi-head attention of the equivariant transformer.

# I Equivariant Neural Rendering

Equivariant rendering relates to equivariant 3D reconstruction, where we focus on multiple views instead of the entire light field. The equivariance property is maintained when the ray sampling is invariant up to a coordinate change.

# I.1 Convolution from Rays to Rays

For neural rendering tasks, we query one ray and apply the convolution over the neighboring rays to obtain the feature attached to the target query ray. Similar to the reconstruction, we utilize a kernel with local support. However, there is a distinction in that, for neural rendering, the kernel $\kappa$ is constrained to be nonzero only when $d(x,\eta) = 0$, while there are no constraints on $\angle (d_x,[0,0,1]^T)$.

![](images/2139ea90830058a12f90214002f37324bb51e25aa86733c98a917ded1e3c2ec3.jpg)
Figure 24: The comparison of the multi-head attention modules in the equivariant light field transformer and in the conventional transformer. The figure above is the multi-head attention module in the equivariant light field transformer, and the figure below is the conventional transformer. In the light field transformer, the query, key, and value features are composed of different types of features; they can be scalars, vectors, or higher-order tensors. The inner product applies to features of the same type, and the type of feature determines how the inner product is applied. In contrast, the features in a conventional transformer do not contain vectors or tensors, and the inner product is the conventional one.

As a result, the neighboring rays exclusively encompass the rays on the epipolar line of the target ray in each source view, as depicted in Figure 25.

The scalar field over rays serves as the input to the convolution. The output field type corresponds to the regular representation of translation.
This is because this field type serves as the input to the cross-attention module later on. If this field type were not used, the transformer would attend over the entire neighboring set, leading to inferior performance compared to applying the transformer for each point individually and then applying it over the points along the ray. A similar observation is made in [58], which states that the two-stage transformer outperforms the one-stage transformer. Using the field type corresponding to the regular representation of the translation as the input, the transformer from rays to rays is equivalent to performing a transformer for each point separately, as explained in the following section.

In Eq. 18, we already provide the solution for the kernel. We give a detailed explanation for this case and show that it is equivalent to performing a convolution from rays to rays with output field types corresponding to irreducible representations, followed by applying the Inverse Fourier Transform. Given that the input field is a scalar field, we have $\omega_{in}^{1} = 0$ and $\omega_{in}^{2} = 0$. When considering an output field type of $(\omega_{out}^{1},reg)$, where $reg$ represents the regular representation of translation, the convolution can be expressed as follows:

$$
\begin{array}{l}
(f_{out}^{(\omega_{out}^{1}, reg)})_t = \int_{y \in \mathcal{N}(x)} \kappa_1(s(x)^{-1} y)\, (\kappa_2(s(x)^{-1} y))_t\, f_{in}(y)\, dy \\
= \int_{y \in \mathcal{N}(x)} \kappa_1(s(x)^{-1} y)\, f(d(\eta, s(x)^{-1} y), \angle([0,0,1]^T, \boldsymbol{d}_{s(x)^{-1} y}))\, \delta(t - g(s(x)^{-1} y))\, f_{in}(y)\, dy \\
= \int_{g(s(x)^{-1} y) = t} \kappa_1(s(x)^{-1} y)\, f(d(\eta, s(x)^{-1} y), \angle([0,0,1]^T, \pmb{d}_{s(x)^{-1} y}))\, f_{in}(y)\, dy.
\\ \end{array} +$$ + +From the above equation, we can intuitively find that when the output field corresponds to the regular representation of the translation, the convolution happens at every point along the ray, respectively. We can treat $f_{out}^{(\omega_{out}^{1},reg)}$ as a function over $\mathbb{R}$ , and for any $\omega \in \mathbb{R}$ we apply the Fourier Transform to $f_{out}^{(\omega_{out}^{1},reg)}$ : + +![](images/604b29995e45b7eb9d0551227fea59f7bbeca997eed20a8cf2ef0423970b7298.jpg) +Figure 25: For simplification, we show two source views. For a target query ray $x$ , the neighboring rays (denoted by red rays) are on the epipolar lines (denoted as yellow dotted dashes) for the target ray in each source view. For any ray $y \in \mathcal{N}(x)$ , $d(x,y) = 0$ . + +$$ +\begin{array}{l} \mathcal {F} (\omega) = \int_ {t} f _ {o u t} ^ {\left(\omega_ {o u t} ^ {1}, r e g\right)} (t) e ^ {- i \omega t} d t \\ = \int_ {t} \int_ {g (s (x) ^ {- 1} y) = t} \kappa_ {1} (s (x) ^ {- 1} y) f (d (\eta , s (x) ^ {- 1} y), \angle ([ 0, 0, 1 ] ^ {T}, \boldsymbol {d} _ {s (x) ^ {- 1} y})) f _ {i n} (y) d y e ^ {- i \omega t} d t \\ = \int_ {y} \kappa_ {1} (s (x) ^ {- 1} y) f (d (\eta , s (x) ^ {- 1} y), \angle ([ 0, 0, 1 ] ^ {T}, \boldsymbol {d} _ {s (x) ^ {- 1} y})) e ^ {- i \omega g (s (x) ^ {- 1} y)} f _ {i n} (y) d y \\ = \int_ {y} \kappa_ {1} (s (x) ^ {- 1} y) \kappa_ {2} ^ {\prime} (s (x) ^ {- 1} y) f _ {i n} (y) d y, \\ \end{array} +$$ + +where $\kappa_2' = f(d(\eta, s(x)^{-1}y), \angle([0, 0, 1]^T, \pmb{d}_{s(x)^{-1}y}))e^{-i\omega g(s(x)^{-1}y)}$ , which is exactly the kernel corresponding to $\omega_{out}^2 = \omega$ and $\omega_{in}^2 = 0$ as stated in Eq. 14. Therefore, we know that the field corresponding to the irreducible representation of the translation can be treated as the Fourier coefficients of the field corresponding to the regular representation. 
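This correspondence can be checked numerically: treating the per-frequency (irreducible-representation) features attached to a ray as Fourier coefficients, an inverse transform recovers the per-point (regular-representation) features. The discretisation below is an illustrative sketch of this relation, not code from the method:

```python
import numpy as np

# Illustrative 1D sketch: features attached to a ray at a discrete set of
# frequencies -- the irreducible representations of translation -- are the
# Fourier coefficients of the per-point features along the ray, which carry
# the regular representation.
t = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)  # points along the ray
f_points = np.cos(3 * t) + 0.5 * np.sin(7 * t)          # regular-rep field f(t)

# Forward: one coefficient F(omega) per irreducible representation.
F_irrep = np.fft.fft(f_points)

# Inverse Fourier Transform recovers the per-point features along the ray.
f_recovered = np.fft.ifft(F_irrep).real
assert np.allclose(f_points, f_recovered)
```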
We can first obtain the features of different irreducible representations attached to the ray and subsequently apply the Inverse Fourier Transform to get the features for points along the ray, as shown in figure 26. + +# I.2 Cross-attention over Rays + +The feature that generates the query in the transformer is the feature attached to the target ray, whose feature type corresponds to the regular representation of the translation. The feature that generates the key and value in the transformer is attached to the neighboring rays in the source view, whose + +![](images/b684350c7f88e4c7bb13bc34b608cab00a72875d15f2765f62c4d79c64c7873a.jpg) +Figure 26: The features for points along the ray (the field type corresponds to the regular representation) can be obtained by the Inverse Fourier Transform of features attached to the ray, where the types of feature fields correspond to the irreducible representation of the translation. + +feature type corresponds to the scalar field. The output is the feature attached to the target ray, whose feature type corresponds to the regular representation. 
Therefore, the transformer becomes:

$$
\left(f_{2}^{out}(x)\right)_{t} = \sum_{y \in \mathcal{N}(x)} \frac{\exp\left(\left\langle \left(f_{q}\left(x, f_{2}^{in}\right)\right)_{t}, \left(f_{k}\left(x, y, f_{1}^{in}\right)\right)_{t} \right\rangle\right)}{\sum_{y' \in \mathcal{N}(x)} \exp\left(\left\langle \left(f_{q}\left(x, f_{2}^{in}\right)\right)_{t}, \left(f_{k}\left(x, y', f_{1}^{in}\right)\right)_{t} \right\rangle\right)} \left(f_{v}(x, y, f_{1}^{in})\right)_{t}, \tag{34}
$$

where

$$
\left(f_{k}(x, y, f_{1}^{in})\right)_{t} = \left(\kappa_{k}\left(s_{2}(x)^{-1} y\right)\right)_{t} f_{1}^{in}(y),
$$

$$
\left(f_{v}(x, y, f_{1}^{in})\right)_{t} = \left(\kappa_{v}\left(s_{2}(x)^{-1} y\right)\right)_{t} f_{1}^{in}(y),
$$

$$
\left(f_{q}\left(x, f_{2}^{in}\right)\right)_{t} = C \left(f_{2}^{in}(x)\right)_{t}.
$$

In the equations above, $\kappa_{k}$ and $\kappa_{v}$ are the kernels derived in Ex. 9, Eq. 16, and $C$ is the equivariant weight matrix satisfying Eq. 31.

The expression above indicates that the feature types of the key and the value, like that of the query, correspond to the regular representation of translation. Moreover, the transformer operates on each point along the ray independently. It should be noted that the features $(f_{k})_{t}$, $(f_{q})_{t}$, $(f_{v})_{t}$ and $(f_{2}^{in})_{t}$ may have multiple channels and may consist of different types of features corresponding to various representations of $SO(2)$. The inner product $\langle \cdot, \cdot \rangle$ is only taken between features of the same representation type of $SO(2)$. This allows for the implementation of a multi-head attention module, where each head attends to a specific feature type and multiple channels.
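A minimal numerical sketch of this per-point attention (the shapes and names below are our own illustration, not from the paper): for every point index $t$ along the target ray, the softmax is taken over the neighbouring rays only.

```python
import numpy as np

def ray_cross_attention(f_q, f_k, f_v):
    """Per-point cross-attention along a ray, in the spirit of Eq. 34.

    f_q: (T, C) query features for T points along the target ray.
    f_k, f_v: (N, T, C) key/value features from N neighbouring rays.
    Attention is computed independently at every point index t.
    """
    logits = np.einsum('tc,ntc->nt', f_q, f_k)     # <(f_q)_t, (f_k)_t> per ray
    logits -= logits.max(axis=0, keepdims=True)    # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=0, keepdims=True)              # softmax over the N rays
    return np.einsum('nt,ntc->tc', w, f_v)         # weighted sum of values

rng = np.random.default_rng(0)
out = ray_cross_attention(rng.standard_normal((16, 8)),
                          rng.standard_normal((5, 16, 8)),
                          rng.standard_normal((5, 16, 8)))
assert out.shape == (16, 8)
```

Since the attention weights are non-negative and sum to one over the neighbouring rays, each output feature is a convex combination of the value features at that point index.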
# I.3 Self-attention over Points Along the Ray

After the cross-attention over rays, we obtain the features of the points along the ray, i.e., the feature attached to the ray corresponding to the regular representation of translation. $SE(3)$ acts on the feature $f^{\prime}$ attached to a point along the ray as in Eq. 15:

$$
\left(\mathcal{L}_{g} f^{\prime}\right)(\boldsymbol{x}, \boldsymbol{d}) = \rho_{1}\left(R_{Z}\left(R_{g^{-1}}, \boldsymbol{d}\right)\right)^{-1} f^{\prime}\left(g^{-1}\boldsymbol{x}, R_{g^{-1}}\boldsymbol{d}\right),
$$

where $\rho_{1}$ is the group representation of $SO(2)$.

We apply the self-attention model to the points along the same ray. For two points $\pmb{x}_1$ and $\pmb{x}_2$ on the same ray $(\pmb{d}, \pmb{x}_1 \times \pmb{d})$, one can observe that for the same type of feature, $\langle (\mathcal{L}_g f')(\pmb{x}_1, \pmb{d}), (\mathcal{L}_g f')(\pmb{x}_2, \pmb{d}) \rangle = \langle f'(g^{-1} \pmb{x}_1, R_{g^{-1}} \pmb{d}), f'(g^{-1} \pmb{x}_2, R_{g^{-1}} \pmb{d}) \rangle$, which makes the attention weights invariant. The transformer can therefore be formulated as:

$$
f^{out}(x) = \sum_{y \text{ on the same ray as } x} \frac{\exp\left(\left\langle f_{q}\left(f^{in}, x\right), f_{k}\left(f^{in}, x, y\right) \right\rangle\right)}{\sum_{y' \text{ on the same ray as } x} \exp\left(\left\langle f_{q}\left(f^{in}, x\right), f_{k}\left(f^{in}, x, y'\right) \right\rangle\right)} f_{v}(x, y, f^{in}), \tag{35}
$$

where

$$
f_{k}^{l}(x, y, f^{in}) = c_{k}(d(x, y)) I \left(f^{in}\right)^{l}(y),
$$

$$
f_{v}^{l}(x, y, f^{in}) = c_{v}(d(x, y)) I \left(f^{in}\right)^{l}(y),
$$

$$
f_{q}^{l}(x, f^{in}) = c_{q} I (f^{in})^{l}(x),
$$

and $x$ and $y$ are the points
along the same ray with direction $\mathbf{d}$; we denote $x$ by $(\mathbf{x}, \mathbf{d})$ and $y$ by $(\mathbf{y}, \mathbf{d})$. Here $d(x, y)$ is the signed distance $\langle \mathbf{d}, \mathbf{y} - \mathbf{x} \rangle$, $c_k$ and $c_v$ are arbitrary complex-valued functions of the signed distance, and $c_q$ is an arbitrary complex constant. It should be noted that the features $f_k$, $f_q$, $f_v$, and $f^{in}$ may have multiple channels and consist of different types of features corresponding to various representations of $SO(2)$; the inner product $\langle \cdot, \cdot \rangle$ is only taken between features of the same field type. This allows for implementing a multi-head attention module, where each head attends to a specific feature type and multiple channels. Here, $f_k^l$, $f_v^l$, $f_q^l$, and $(f^{in})^l$ denote the type-$l$ features of $f_k$, $f_v$, $f_q$, and $f^{in}$, respectively.

Note that this transformer architecture also follows the general format of the transformer in Eq. 4. We only simplify the kernels $\kappa_{k}$ and $\kappa_{v}$ to be trivial equivariant kernels.

To obtain a scalar density feature for each point, the feature output of each point can be fed through an equivariant MLP, which includes equivariant linear layers and gated/norm nonlinear layers. These layers are similar to the ones used in [62] and [63].

# J 3D Reconstruction Experiment

# J.1 Generation of the Dataset

The I dataset is obtained by fixing the orientation of the object as well as the eight camera orientations. With the object orientation fixed, we can independently rotate each camera around its optical axis by a random angle drawn uniformly from $(-\pi, \pi]$ to obtain the Z dataset. For the R dataset, we rotate every camera randomly by any rotation in $SO(3)$ while fixing the object.
Equivariance holds only when the image content is unchanged; therefore, in practice, we require that the object projection after the rotation does not reveal new parts of the object. We satisfy this assumption by forcing the camera to fixate on a new random point inside a small neighborhood and subsequently rotating each camera around its optical axis by a uniformly random angle in $(-\pi, \pi]$. We generate the $Y$ dataset by rotating the object only with azimuthal rotations while keeping the camera orientations the same. The $SO(3)$ dataset is generated by rotating the object with a random rotation in $SO(3)$ with the orientations of the cameras unchanged, which can result in new image content. Equivariance is not theoretically guaranteed in this setup, but we still want to test the performance of our method.

![](images/b2174e2abb4e359b671f2a2f4ca39bd4abf6517c68abf6997a4653e725961fc9.jpg)
Figure 27: The number of parameters and FLOPs of $SE(2)$ equivariant CNNs. We set the batch size to one to calculate the number of FLOPs.

# J.2 Implementation Details

We use $SE(2)$ equivariant CNNs to approximate the equivariant convolution over the rays. We use the same ResNet backbone as implemented in [30], which is equivariant to the finite group $C_8$ and which we find achieves the best result compared with other $SE(2)$ equivariant CNNs. We use a pyramid structure similar to [69] that concatenates the output feature of every block. Since every hidden feature is the regular representation, in the final layer we use $1 \times 1$ $SE(2)$-equivariant convolutional layers to transfer the hidden representation to scalar type.

For the fusion from ray space to point space, we use one layer of convolution and three combined blocks of ray-feature updates and $SE(3)$ transformers. For the equivariant $SE(3)$ multi-head attention, we only use the scalar feature and the vector (type-1) feature in the hidden layer.
The kernel matrix includes the spherical harmonics of degrees 0 and 1. We also concatenate the output point features of every block, as in the $2D$ backbone. Since the output feature of every block includes the vector feature, we transfer it to a scalar feature through one vector neuron layer followed by an inner vector product. We use the same weighted SDF loss as in [69] during training, which applies both uniform and near-surface sampling. We report the number of parameters and floating-point operations (FLOPs) of our $2D$ backbone and light fusion networks in Fig. 27 and Fig. 28, respectively.

![](images/bc8ce45c540d23ab1dd2e7e3974f624738d6f7ebaf60f67ed6d89cb23d2e2aff.jpg)
Figure 28: The number of parameters and FLOPs of the ray fusion model, which is composed of the convolution from rays to points and the transformer from rays to points. We set the batch size to one to calculate the number of FLOPs.

# J.3 Discussion of Results

There is still a performance gap between $I/I$ and $I/Z$. Although $SE(2)$ equivariant networks are theoretically strictly equivariant, error is introduced in practice by the finite sampling of the image and by the pooling layers. Additionally, we use a ResNet that is equivariant to $C_8$, a finite approximation of $SO(2)$; this contributes to the gap but improves the performance of the whole pipeline in the other tasks. There is no significant difference between $I/Z$ and $I/R$, which shows that approximating the spherical field convolution by an $SE(2)$ equivariant convolution is reasonable in practice.

# J.4 Qualitative Results

Figure 29 shows a qualitative result for the chair category. More qualitative results are shown in Fig. 37, Fig. 38, and Fig. 39.

# J.5 Ablation Study

First, we replace the $SE(2)$ CNN backbone with conventional CNNs to test the effectiveness of $SE(2)$ CNNs. Second, we remove the equivariant convolution/transformer part and use trivial aggregation (max-pooling) combined with an MLP.
Finally, we run the equivariant convolution and transformer without the type-1 (vector) feature while keeping the number of parameters similar to our model.

![](images/f73700061af025856fb88fca9efbda104f12dbe59c8587b0748c95caac98360c.jpg)
Figure 29: Qualitative results for equivariant reconstruction. Left: input views; Right: reconstruction meshes of different models and ground truth meshes. The captions below the meshes show how the model is trained and tested, explained in the text.

Table 3 summarizes the results on the chair category. In the $I/I$ and $Y/Y$ trials, the $SE(2)$ CNN is less expressive than a traditional CNN, but it contributes to the equivariance of our model, as seen in the results of $I/Z$, $I/R$, and $Y/SO(3)$. The equivariant ray convolution and transformer improve both the reconstruction performance and the equivariance outcome. We also compare the ray convolution and transformer with models operating only on scalar features without vector features, and again we see a drop in performance in every setting, demonstrating the value of taking ray directions into account.

We also compare to a baseline where the ray difference information is encoded in the feature explicitly. Most models that encode ray directions, like IBRNet, aim at rendering. Here, we modified IBRNet (Fig. 2 of the IBRNet paper) to query 3D points only for their SDF value instead of querying all densities along the ray, as would be necessary for rendering. We replaced the ray direction differences with the ray directions themselves because we use a query point and not a query ray. We report in Table 4 the IoU results for Y/Y and Y/SO(3) (where Y is augmentation only along the vertical axis) for two models: IBRNet with conventional CNNs as the 2D backbone and IBRNet with $SE(2)$-equivariant CNNs as the 2D backbone.
For the $SO(3)$ setting, we rotate all 8 cameras by the same rotation, which is equivalent to rotating the object by the inverse rotation, and we use the object's canonical frame to encode the ray information.

The baseline is not equivariant: it explicitly uses the ray directions as inputs to MLPs. Ray directions or their differences change when the coordinate system is transformed, thus breaking equivariance. Table 4 demonstrates that our model is more resilient to object rotations. We can enhance equivariance
| Method | w/o SE(2) | w/o conv & trans | w/o type-1 | Full model |
| --- | --- | --- | --- | --- |
| I/I | 0.767/0.079 | 0.695/0.105 | 0.722/0.093 | 0.731/0.090 |
| I/Z | 0.430/0.234 | 0.533/0.175 | 0.553/0.158 | 0.631/0.130 |
| I/R | 0.417/0.249 | 0.442/0.241 | 0.466/0.203 | 0.592/0.137 |
| R/R | 0.672/0.112 | 0.658/0.122 | 0.682/0.109 | 0.689/0.105 |
| Y/Y | 0.731/0.090 | 0.644/0.124 | 0.677/0.111 | 0.698/0.102 |
| Y/SO(3) | 0.467/0.217 | 0.534/0.170 | 0.569/0.163 | 0.589/0.142 |
| SO(3)/SO(3) | 0.655/0.120 | 0.616/0.142 | 0.636/0.130 | 0.674/0.113 |
Table 3: Ablation: "w/o SE(2)" means replacing the $SE(2)$ equivariant network with a conventional one; "w/o conv & trans" denotes the model where we replace the light field convolution and the light field equivariant transformer with max-pooling; "w/o type-1" means using only scalar features in the convolution and transformers.
| Method | Y/Y | Y/SO(3) | SO(3)/SO(3) |
| --- | --- | --- | --- |
| IBRNet [61] w/o SE(2) | 0.689 | 0.432 | 0.611 |
| IBRNet [61] w/ SE(2) | 0.652 | 0.501 | 0.619 |
| Ours | 0.698 | 0.598 | 0.674 |
Table 4: Comparison of our model and a baseline that encodes the ray information explicitly. "IBRNet w/o SE(2)" is the modified IBRNet with a conventional CNN backbone; "IBRNet w/ SE(2)" replaces the conventional CNN backbone with the $SE(2)$ equivariant CNN.

by using $SE(2)$ equivariant modeling, and our model outperforms the baseline in the Y/Y setting. We believe that the transformer in our model is responsible for the performance improvement.

# K Neural Rendering Experiment

# K.1 Experiment Settings Discussion

Two experiment settings illustrate our model's equivariance: $I/I$ and $I/SO(3)$. $I/I$ is the canonical setting, where we train and test the model in the same canonical frame defined in the dataset. In $I/SO(3)$, we test the model trained in the canonical frame under arbitrarily rotated coordinate frames: all the camera poses in one scene are transformed by the same rotation, which changes neither the relative camera poses nor the relative poses between the cameras and the scene, and therefore does not change the content of the multiple views. We do not apply translation to the cameras because both our model and the comparison baseline [61] sample points within a fixed depth range, which effectively mitigates the impact of translation.

We should note that the $SO(3)$ setting in this experiment differs from the $R$ and $SO(3)$ settings in reconstruction. $R$ changes the relative poses of the cameras, and each image is transformed due to the rotation of its camera without altering the content, i.e., the sampling of the light field is nearly unchanged. The $R$ setting aims to demonstrate that replacing the conventional method with ray-based convolution removes the need for a canonical frame for each view.
In reconstruction, the $SO(3)$ setting rotates the object pose randomly without changing the camera poses, which is equivalent to transforming the cameras by the inverse rotation while fixing the object. This changes the relative poses between the cameras and the object, the content of the images, and therefore the sampling of the light field. This setting shows that even in cases that are not theoretically equivariant, our reconstruction model still demonstrates robustness.

In the rendering experiment using the $SO(3)$ setting, each image itself is not transformed, unlike in the $R$ setting of the reconstruction. The content of the images remains unchanged, including the light field sampling, unlike in the $SO(3)$ setting of the reconstruction. Since each image is not transformed, even if a conventional $2D$ convolution is applied to the image, the scalar feature attached to the ray is not altered, and the light feature field sampling remains the same up to the transform of the coordinate frame. This setting demonstrates that our model is $SE(3)$-equivariant when the input is the scalar light feature field.
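The $I/SO(3)$ evaluation above amounts to left-multiplying every camera-to-world pose in a scene by one shared rotation; `rotate_scene_poses` below is our own illustrative helper, not code from the paper:

```python
import numpy as np

def rotate_scene_poses(c2w, R):
    """Apply one shared world rotation R (3x3) to all 4x4 camera-to-world poses.

    Relative poses between the cameras (and between cameras and scene) are
    unchanged, so the multi-view content is identical up to the world frame.
    """
    out = c2w.copy()
    out[:, :3, :3] = R @ c2w[:, :3, :3]   # rotate camera orientations
    out[:, :3, 3] = c2w[:, :3, 3] @ R.T   # rotate camera centres
    return out
```

A quick sanity check on the claim: for any two cameras $i, j$, the relative pose $T_i^{-1} T_j$ is the same before and after applying the shared rotation.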
However, IBRNet experiences a performance drop in the rotated frame, while our model remains robust to the rotation. + +![](images/4347a43959b197455f65bd662ffe3e8ac345df88497dfb6f8f9b47abae30c968.jpg) + +![](images/99d2e1f082728a9ea0a15c8a3c98168c878c173f314f0e74f746567a9158f54a.jpg) + +![](images/e1f11ae9a5fdd481d0814b934d2b6bea229d366b1ec938bc572b6c90faddbdb0.jpg) + +![](images/5371f6ce16482925170a731d9b4fffe52c0f9f3a67883e521a3e0ada2a083972.jpg) + +![](images/cf03d833ba930236cc98e86818a6cd347b1ce4be77d87643f04c4e558c6a8fc3.jpg) + +![](images/1b05100acd4c8bde9d49ba26aff62c18a7e1cf3c62fa93c7ed073d44527b1077.jpg) + +![](images/7de437924aed398268a5b2eb50be98d853a700e286c9f7559ae30b0f0bf322cd.jpg) + +![](images/b93f0d57d7eacf2d1a259c865ff30683ba750816de67828111742d9fa3429548.jpg) + +# K.2 Implementation Details + +As described in the paper, we use a similar architecture as [61], where we replace the aggregation of view features by equivariant convolution and equivariant transformer over rays. In equivariant convolution, the input is scalar feature field over rays, which means that $\omega_{in}^{1} = 0$ and $\omega_{in}^{2} = 0$ ; for the output field, we use regular representation of translation as described in Sec. 3.3, and we use $\omega_{out}^{1} = 0, 2^{1}, \dots, 2^{7}$ for group representation of $SO(2)$ , each field type has 4 channels. In equivariant transformer over rays, we update the key and value before going to the attention module in the experiment; the specific operation is that we concatenate key $f_{k}$ and query $f_{q}$ , we concatenate + +![](images/15094ebd126168a7e6f0bb16589a5553380248a71a56d50431a6ea81cd3ff0b8.jpg) +Figure 32: In terms of qualitative results for rendering, we compare the performance of IBRNet and our model in both the given canonical frame (denoted as "IBRNet(I)" and "Ours(I)" respectively) and a rotated frame (denoted as "IBRNet(SO(3))" and "Ours(SO(3))" respectively). 
Our model performs comparably to IBRNet in the canonical setting. However, IBRNet experiences a performance drop in the rotated frame, while our model remains robust to the rotation. + +![](images/bdac13d31f2549ac8771797f017356fc5aec377e937cdfaed10be478a5cf9245.jpg) +Figure 33: In terms of qualitative results for rendering, we compare the performance of IBRNet and our model in both the given canonical frame (denoted as "IBRNet(I)" and "Ours(I)" respectively) and a rotated frame (denoted as "IBRNet(SO(3))" and "Ours(SO(3))" respectively). Our model performs comparably to IBRNet in the canonical setting. However, IBRNet experiences a performance drop in the rotated frame, while our model remains robust to the rotation. + +$f_{v}$ and query $f_{q}$ , and then we feed the concatenated key and value into two equivariant MLPs (equivariant linear layers and gated/norm nonlinear layers, similar to the ones used in [62]) to get the newly updated key and updated value, which will be fed into attention module. In line with [61], our approach does not involve generating features for the color of every point. In our implementation, we directly multiply the attention weights obtained from the softmax operator in the transformer with the corresponding colors in each view to perform color regression. + +We replace the ray transformer with the equivariant transformer over the points along the ray; the input features comprise the feature types corresponding to the group representations $\omega_{in} = 0,2^{1},\dots ,2^{7}$ for $SO(2)$ . Each feature type has 4 channels; the output comprises the same feature type, and each type has 2 channels. We will first convert the feature into a scalar feature by an equivariant MLP (equivariant linear layers and gated/norm nonlinear layers, similar to the ones used in [62].) and then feed it into a conventional MLP to get the density. We report in Fig. 
30 the number of parameters and floating-point operations (FLOPs) of the model composed of the convolution and transformers. + +# K.3 Qualitative Results + +Fig. 32, Fig. 33, Fig. 34, Fig. 31, Fig. 35 and Fig. 36 show the qualitative results on Real-Forward-Facing [41] and Realistic Synthetic $360^{\circ}$ [49] data. Our model performs comparably to IBRNet in the canonical setting. However, IBRNet experiences a performance drop in the rotated frame, while our model remains robust to the rotation. + +![](images/935c8639357868b4f78d8963f3e3f70f6cb5835b20427e64d28eaf4ef6d16696.jpg) +Ground Truth + +![](images/42685f60a35d9627e331ad7cacfe28a2ee726d1392c91c55798ef709f15220ad.jpg) + +![](images/4ba7b41ec8d999ad6d850cd9b16ea1aba32f409bc115a54e7fac497ff8a29449.jpg) +Figure 34: In terms of qualitative results for rendering, we compare the performance of IBRNet and our model in both the given canonical frame (denoted as "IBRNet(I)" and "Ours(I)" respectively) and a rotated frame (denoted as "IBRNet(SO(3))" and "Ours(SO(3))" respectively). Our model performs comparably to IBRNet in the canonical setting. However, IBRNet experiences a performance drop in the rotated frame, while our model remains robust to the rotation. 
+ +![](images/2b8ebbd5e2f2ca9871344f9133e723ff6099b3a33bbaabff141a96aa11c89aab.jpg) + +![](images/626b31d4c50cd1859804aba0403be6cce3b3a23539ab4c5ba9ed51143d3186d6.jpg) +IBRNet(I) + +![](images/b3f6bd00377fc423f0687cf076a3e5a6413caae21a84689b19ee7e7d6fffe124.jpg) + +![](images/0a508111d67f8e473164831db1445bd06c358b36e76e17134378a2f6eab00aef.jpg) +IBRNet(SO(3)) + +![](images/0a740cd7093610207a7aa21844749884c9d1a5b334607a3a710b76b89f73bc35.jpg) + +![](images/60e953ebb4ce3a307045577acaca46386718f5ad69460475e1c6d11d8e9af59d.jpg) +Ours(I) +Ours(SO(3)) + +![](images/e7d8f70e70f75c57aa763277ffdf14da6f32f2fb7b73acc190f09e21a5f1ac58.jpg) +Figure 35: In terms of qualitative results for rendering, we compare the performance of IBRNet and our model in both the given canonical frame (denoted as "IBRNet(I)" and "Ours(I)" respectively) and a rotated frame (denoted as "IBRNet(SO(3))" and "Ours(SO(3))" respectively). Our model performs comparably to IBRNet in the canonical setting. However, IBRNet experiences a performance drop in the rotated frame, while our model remains robust to the rotation. 
+ +![](images/636a8275ae18bcb822f89743041e342694ffb606472511a67b3f4863969a81be.jpg) + +![](images/b9e0a02df6cff2f7c504be85fe0dbd6b0284487bccc9014c7239da03cf2a8482.jpg) + +![](images/a88fe50fa6fe3ec0b26a32105747616a5974f41d5d9c4bf238bd56c3c77e7f0f.jpg) + +![](images/9482e73539ed64c90421707863617c3122453ceb58f3c4c88df491a2333e0365.jpg) + +![](images/cee20902624e577967f5c9fedec641d084bd63edfb936ab720fe4414cc69830f.jpg) + +![](images/192ba939aa5ae3aec1d052da37bbb9f4f80b694e56a9a0472b4549b5933bf813.jpg) + +![](images/04b3ae71e486390bd3e68a980196682b6eb662939a23b12dc0ad340140dc60a5.jpg) + +![](images/e07cb3f8fcec1a644efd0bcbd96f613baa35e88c728e547030bfd6b30dc752df.jpg) + +![](images/8548ecb0e8314951adbeaa19a27604e0339c301f660d61fe4924761a787fed2e.jpg) + +![](images/7eb9021c05c22356e98c863c641896b0b30fa6936e80784c65abbc1b7ad5e911.jpg) + +![](images/f48c2ff9d480e2b2cf8256c1d0e5bd552654a11db190783705c477e50acf4703.jpg) + +![](images/5b5dcd03939d8d8438806ff5a7783cc857c182ce0e9c0510f7908fc5f777122a.jpg) + +![](images/568ef309efcbdb994c56f27f851811d6e62731e942135eb6f97960ece2513b64.jpg) + +![](images/054688354eeaf224ebb6bbfd4bbe773f477f6177fc46785e4a4c8f9db9983a13.jpg) +Figure 36: In terms of qualitative results for rendering, we compare the performance of IBRNet and our model in both the given canonical frame (denoted as "IBRNet(I)" and "Ours(I)" respectively) and a rotated frame (denoted as "IBRNet(SO(3))" and "Ours(SO(3))" respectively). Our model performs comparably to IBRNet in the canonical setting. However, IBRNet experiences a performance drop in the rotated frame, while our model remains robust to the rotation. 
+ +![](images/db8bbf3720a0a73f6c367e5e06e377b38c7b9bf6ad8b39a8a5aa747260efb76f.jpg) + +![](images/416a3f1dcdc70bf0603b9f90de806568823c960ae78fc1281cbfd48ee0cb906f.jpg) + +![](images/8d57a867e8092166caa7593bc1bd23655d5261deab4622a541761e734ba21aef.jpg) + +![](images/74c46cc6e335c390e9da366a27b26c89e2f12ebc4e99cec97614eacb625c40bd.jpg) +Figure 37: Qualitative Result for the chair. Left: input views; Right: reconstruction meshes of different models. The captions below the meshes show how the model is trained and tested. + +![](images/af49213129cf9d39329374df32e5bec77a1dbcde925b6412b1c91bf696d81554.jpg) +Figure 38: Qualitative Result for the car. Left: input views; Right: reconstruction meshes of different models. The captions below the meshes show how the model is trained and tested. + +![](images/bf04c5be390000d030ce6aad788b0c9e5ef72e5a1c3c8cb26070f0fa25902839.jpg) +Figure 39: Qualitative Result for the car. Left: input views; Right: reconstruction meshes of different models. The captions below the meshes show how the model is trained and tested. 
\ No newline at end of file diff --git a/se3equivariantconvolutionandtransformerinrayspace/images.zip b/se3equivariantconvolutionandtransformerinrayspace/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..cd85553d9e554a709857515c89ecc6583a4d8c7f --- /dev/null +++ b/se3equivariantconvolutionandtransformerinrayspace/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:450025b8facc90c33213ac3bdb87846c7270c6b78197a540ac7d7b19f914d7d8 +size 2525908 diff --git a/se3equivariantconvolutionandtransformerinrayspace/layout.json b/se3equivariantconvolutionandtransformerinrayspace/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..351cbcc12d6168db4d4f77d0264b035203ff7752 --- /dev/null +++ b/se3equivariantconvolutionandtransformerinrayspace/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1a2d5f26776800f156ef78b0f2e432285b34601ba45de61d824e6bddb9bc7d0 +size 2246260 diff --git a/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/c4d1a5b7-943a-4c24-b9c1-cfb8188682fc_content_list.json b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/c4d1a5b7-943a-4c24-b9c1-cfb8188682fc_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e3426f00577d2dc31318f4cd301e18b81d208e0d --- /dev/null +++ b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/c4d1a5b7-943a-4c24-b9c1-cfb8188682fc_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27a531f39c394ac012534a27550d21a87289bfd0b1ea1487b4b4218708fd5686 +size 348671 diff --git a/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/c4d1a5b7-943a-4c24-b9c1-cfb8188682fc_model.json b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/c4d1a5b7-943a-4c24-b9c1-cfb8188682fc_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..23468fea26bb0f76aea4337b97ac7201e7138fdf --- /dev/null +++ b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/c4d1a5b7-943a-4c24-b9c1-cfb8188682fc_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf8f805ab34d44fbf1d80de504ccb29d8b2f9e520287eb404f553b0e022bcc0b +size 397157 diff --git a/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/c4d1a5b7-943a-4c24-b9c1-cfb8188682fc_origin.pdf b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/c4d1a5b7-943a-4c24-b9c1-cfb8188682fc_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5acc0864cb142c56494b40cdf60a50e4de3c06a0 --- /dev/null +++ b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/c4d1a5b7-943a-4c24-b9c1-cfb8188682fc_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad91a42eb99523dae1e743666487d4899fbbce8220dd4426041988d46085e49a +size 4858698 diff --git a/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/full.md b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0209d846cfcc898de93f9d5b9607c0516e6d81a7 --- /dev/null +++ b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/full.md @@ -0,0 +1,1898 @@ +# (S)GD over Diagonal Linear Networks: Implicit Bias, Large Stepsizes and Edge of Stability + +Mathieu Even* +Inria - ENS Paris + +Scott Pesme* EPFL + +Suriya Gunasekar +Microsoft Research + +Nicolas Flammarion EPFL + +# Abstract + +In this paper, we investigate the impact of stochasticity and large step sizes on the implicit regularisation of gradient descent (GD) and stochastic gradient descent (SGD) over 2-layer diagonal linear networks. 
We prove the convergence of GD and SGD with macroscopic step sizes in an overparametrised regression setting and provide a characterisation of their solution through an implicit regularisation problem. Our characterisation provides insights into how the choice of minibatch size and step size leads to qualitatively distinct behaviours in the solutions. Specifically, we show that for sparse regression learned with 2-layer diagonal linear networks, large step sizes consistently benefit SGD, whereas they can hinder the recovery of sparse solutions for GD. These effects are amplified for step sizes in a tight window just below the divergence threshold, known as the "edge of stability" regime.

# 1 Introduction

The stochastic gradient descent algorithm (SGD) [51] is the foundational algorithm for almost all neural network training. Though a remarkably simple algorithm, it has led to many impressive empirical results and is a key driver of deep learning. However, the performance of SGD is quite puzzling from a theoretical point of view since (1) its convergence is highly non-trivial and (2) there exist many global minima of the training objective which generalise very poorly [66].

To explain this second point, the concept of implicit regularisation has emerged: if overfitting is harmless in many real-world prediction tasks, it must be because the optimisation process implicitly favours solutions with good generalisation properties for the task. The canonical example is overparametrised linear regression with more trainable parameters than samples: although there are infinitely many solutions that fit the samples, GD and SGD explore only a small subspace of all possible parameters. As a result, it can be shown that they implicitly converge to the closest solution in terms of the $\ell_2$ distance, without any explicit regularisation [66, 24].
Currently, most theoretical works on implicit regularisation have focused on continuous-time approximations of (S)GD in which the impact of crucial hyperparameters, such as the stepsize and the minibatch size, is ignored. One common simplification is to analyse gradient flow, the continuous-time limit of GD and minibatch SGD with an infinitesimal stepsize. By definition, this analysis cannot capture the effect of the stepsize or of stochasticity. Another approach is to approximate SGD by a stochastic gradient flow [60, 48], which tries to capture the noise and the stepsize using an appropriate stochastic differential equation. However, there are no theoretical guarantees that these results transfer to minibatch SGD as used in practice. This is a limitation in our understanding, since the performance of most deep learning models is often sensitive to the choice of stepsize and minibatch size. The importance of the stepsize and of the SGD minibatch size is common knowledge in practice and has also been systematically established in controlled experiments [36, 42, 20].

![](images/38cc8f524f8229816691324369382c0406eade31f7cc97c18f3d0544b7aa95d2.jpg)
Figure 1: Noiseless sparse regression with a diagonal linear network using SGD and GD, with parameters initialised at the scale of $\alpha = 0.1$ (Section 2). The test losses at convergence for various step sizes are plotted for GD and SGD. Small step sizes correspond to gradient flow (GF) performance. We see that increasing the step size improves the generalisation properties of SGD, but deteriorates that of GD. The dashed vertical lines at step sizes $\tilde{\gamma}_{\mathrm{max}}^{\mathrm{SGD}}$ and $\tilde{\gamma}_{\mathrm{max}}^{\mathrm{GD}}$ denote the largest step sizes for which SGD and GD, respectively, converge. See Section 2 for the precise experimental setting.
In this work, we aim to expand our understanding of the impact of stochasticity and stepsizes by analysing the (S)GD trajectory on 2-layer diagonal linear networks (DLNs). In Fig. 1, we show that even in our simple network there are significant differences between the nature of the solutions recovered by SGD and GD at macroscopic step sizes. We discuss this behaviour further in the later sections.

The 2-layer diagonal linear network which we consider is a simplified neural network that has received significant attention lately [61, 57, 26, 50]. Despite its simplicity, it surprisingly reveals training characteristics which are observed in much more complex architectures, such as the role of the initialisation [61], the role of noise [48, 50], and the emergence of saddle-to-saddle dynamics [6, 49]. It therefore serves as an ideal proxy model for gaining a deeper understanding of complex phenomena such as the roles of stepsizes and of stochasticity, as highlighted in this paper. We also point out that the implicit bias and convergence of more complex architectures, such as 2-layer ReLU networks or matrix factorisation, are not yet fully understood, even for simple gradient flow. Studying the subtler effects of large stepsizes and stochasticity in these settings is therefore currently out of reach.

# 1.1 Main results and paper organisation

The overparametrised regression setting and diagonal linear networks are introduced in Section 2. We formulate our theoretical results (Theorems 1 and 2) in Section 3: we prove that for macroscopic step sizes, gradient descent and stochastic gradient descent over 2-layer diagonal linear networks converge to a zero-training-loss solution $\beta_{\infty}^{\star}$. We further provide a refined characterisation of $\beta_{\infty}^{\star}$ through a trajectory-dependent implicit regularisation problem, which captures the effects of the algorithm's hyperparameters, such as step sizes and batch sizes, in useful and analysable ways.
In Section 4, we then leverage this crisp characterisation to explain the influence of crucial parameters such as the step size and the batch size on the recovered solution. Importantly, our analysis shows a stark difference between the generalisation performances of GD and SGD for large step sizes, hence explaining the numerical results seen in Fig. 1 for the sparse regression setting. Finally, in Section 5, we use our results to shed new light on the Edge of Stability (EoS) phenomenon [14].

# 1.2 Related works

Implicit bias. The implicit bias of optimisation algorithms in neural networks has been studied extensively in the past few years, starting with the early works of Telgarsky [55], Neyshabur et al. [45], Keskar et al. [36], Soudry et al. [53]. The theoretical results on implicit regularisation have been extended to multiplicative parametrisations [23, 25], linear networks [34], and homogeneous networks [40, 35, 13]. For the regression loss on diagonal linear networks studied in this work, Woodworth et al. [61] demonstrate that the scale of the initialisation determines the type of solution obtained, with large initialisations yielding minimum-$\ell_2$-norm solutions (the neural tangent kernel regime [30]) and small initialisations resulting in minimum-$\ell_1$-norm solutions (the rich regime [13]). The analysis relies on the link between gradient descent and mirror descent established by Ghai et al. [21] and further explored by Vaskevicius et al. [56], Wu and Rebeschini [62]. These works focus on full-batch gradients, often in the infinitesimal-stepsize limit (gradient flow), leading to general insights and results that do not take into account the effects of stochasticity and large step sizes.

The effect of stochasticity in SGD on generalisation. The relationship between stochasticity in SGD and generalisation has been studied in various works [41, 29, 11, 38, 64].
Empirically, models generated by SGD exhibit better generalisation performance than those generated by GD [37, 31, 27].

Explanations related to the flatness of the minima picked by SGD have been proposed [28]. Label noise has been shown to influence the implicit bias of SGD [26, 8, 15, 50] by implicitly regularising the sharp minimisers. Recently, by studying a stochastic gradient flow that models the noise of SGD in continuous time through a Brownian diffusion, Pesme et al. [48] characterised, for diagonal linear networks, the limit of this stochastic process as the solution of an implicit regularisation problem. However, a similar explicit characterisation of the implicit bias remains unknown for SGD with large step sizes.

The effect of stepsizes in GD and SGD. Recent efforts to understand how the choice of stepsize affects the learning process and the properties of the recovered solution suggest that larger stepsizes lead to the minimisation of some notion of flatness of the loss function [52, 37, 44, 33, 64, 43], backed by empirical evidence or stability analyses. Larger stepsizes have also been proven to be beneficial for specific architectures or problems: two-layer networks [39], regression [63], kernel regression [7] and matrix factorisation [59]. For large stepsizes, it has been observed that GD enters an Edge of Stability (EoS) regime [32, 14], in which the iterates and the train loss oscillate before converging to a zero-training-error solution; this phenomenon has since been studied on simple toy models [1, 67, 12, 16] for GD. Recently, [2] presented empirical evidence that large stepsizes can lead to loss stabilisation and to simpler predictors.

# 2 Setup and preliminaries

Overparametrised linear regression. We consider linear regression over inputs $X = (x_{1},\ldots ,x_{n})\in (\mathbb{R}^{d})^{n}$ and outputs $y = (y_{1},\dots ,y_{n})\in \mathbb{R}^{n}$.
We consider overparametrised problems where the input dimension $d$ is (much) larger than the number of samples $n$. In this case, there exist infinitely many linear predictors $\beta^{\star}\in \mathbb{R}^{d}$ which perfectly fit the training set, i.e., $y_{i} = \langle \beta^{\star},x_{i}\rangle$ for all $1\leqslant i\leqslant n$. We call such vectors interpolating predictors or interpolators, and we denote by $S$ the set of all interpolators: $S = \{\beta^{\star}\in \mathbb{R}^{d}$ s.t. $\langle \beta^{\star},x_i\rangle = y_i,\forall i\in [n]\}$. Note that $S$ is an affine space of dimension greater than $d - n$, equal to $\beta^{\star} + \mathrm{span}(x_1,\ldots ,x_n)^{\perp}$ for any $\beta^{\star}\in S$. We consider the quadratic loss $\mathcal{L}(\beta) = \frac{1}{2n}\sum_{i = 1}^{n}(\langle \beta ,x_i\rangle -y_i)^2$ for $\beta \in \mathbb{R}^d$.

2-layer linear diagonal network. We parametrise regression vectors $\beta$ as functions $\beta_w$ of trainable parameters $w \in \mathbb{R}^p$. Although the final prediction function $x \mapsto \langle \beta_w, x \rangle$ is linear in the input $x$, the choice of the parametrisation drastically changes the solution recovered by the optimisation algorithm [25]. In the case of the linear parametrisation $\beta_w = w$, many first-order methods (SGD, GD, with or without momentum) converge towards the same solution, and the choice of stepsize does not impact the recovered solution beyond convergence. In an effort to better understand the effects of stochasticity and large stepsizes, we consider the next simplest parametrisation, that of a 2-layer diagonal linear neural network:

$$
\beta_{w} = u \odot v \quad \text{where} \quad w = (u, v) \in \mathbb{R}^{2d}.
\tag{1}
$$

This parametrisation can be viewed as a simple neural network $x \mapsto \langle u, \sigma(\mathrm{diag}(v)x) \rangle$ where the output weights are represented by $u$, the inner weights by the diagonal matrix $\mathrm{diag}(v)$, and the activation $\sigma$ is the identity. In this spirit, we refer to the entries of $w = (u,v) \in \mathbb{R}^{2d}$ as the weights and to $\beta := u \odot v \in \mathbb{R}^d$ as the prediction parameter. Despite the simplicity of the parametrisation (1), the loss function $F$ over the parameters $w = (u,v) \in \mathbb{R}^{2d}$ is non-convex (which makes the corresponding optimisation problem challenging to analyse), and is given by:

$$
F(w) := \mathcal{L}(u \odot v) = \frac{1}{2n} \sum_{i = 1}^{n} \left(y_{i} - \langle u \odot v, x_{i} \rangle\right)^{2}. \tag{2}
$$

Mini-batch SGD. We minimise $F$ using mini-batch SGD: let $w_{0} = (u_{0}, v_{0})$ and, for $k \geqslant 0$,

$$
w_{k+1} = w_{k} - \gamma_{k} \nabla F_{\mathcal{B}_{k}}\left(w_{k}\right), \quad \text{where} \quad F_{\mathcal{B}_{k}}(w) := \frac{1}{2b} \sum_{i \in \mathcal{B}_{k}} \left(y_{i} - \langle u \odot v, x_{i} \rangle\right)^{2}, \tag{3}
$$

where $\gamma_{k}$ are the stepsizes, $\mathcal{B}_k\subset [n]$ are mini-batches of $b\in [n]$ distinct samples sampled uniformly and independently, and $\nabla F_{\mathcal{B}_k}(w_k)$ are the minibatch gradients of the partial loss $F_{\mathcal{B}_k}(w) = \mathcal{L}_{\mathcal{B}_k}(u\odot v)$ defined above. Classical SGD and full-batch GD are special cases with $b = 1$ and $b = n$, respectively. For $k\geqslant 0$, we consider the successive prediction parameters $\beta_{k}\coloneqq u_{k}\odot v_{k}$ built from the weights
We analyse SGD initialised at $u_0 = \sqrt{2}\alpha \in \mathbb{R}_{>0}^d$ and $v_{0} = \mathbf{0}\in \mathbb{R}^{d}$ , resulting in $\beta_0 = \mathbf{0}\in \mathbb{R}^d$ independently of the chosen weight initialisation $\alpha^2$ . + +Experimental details. We consider the noiseless sparse regression setting where $(x_{i})_{i\in [n]}\sim$ $\mathcal{N}(0,I_d)$ and $y_{i} = \langle \beta_{\ell_{1}}^{\star},x_{i}\rangle$ for some $s$ -sparse vector $\beta_{\ell_1}^\star$ . We perform (S)GD over the DLN with a uniform initialisation $\alpha = \alpha \mathbf{1}\in \mathbb{R}^{d}$ where $\alpha >0$ Fig. 1 and Fig. 2 (left) correspond to the setup $(n,d,s,\alpha) = (20,30,3,0.1)$ , Fig. 2 (right) to $(n,d,s,\alpha) = (50,100,4,0.1)$ and Fig. 3 to $(n,d,s,\alpha) = (50,100,2,0.1)$ + +Notations. Let $H := \nabla^2\mathcal{L} = \frac{1}{n}\sum_i x_i x_i^\top$ denote the Hessian of $\mathcal{L}$ , and for a batch $\mathcal{B} \subset [n]$ let $H_{\mathcal{B}} := \nabla^2\mathcal{L}_{\mathcal{B}} = \frac{1}{|\mathcal{B}|}\sum_{i\in \mathcal{B}}x_ix_i^\top$ denote the Hessian of the partial loss over the batch $\mathcal{B}$ . Let $L$ denote the "smoothness" such that $\forall \beta, \|H_{\mathcal{B}}\beta\|_2 \leqslant L\|\beta\|_2, \|H_{\mathcal{B}}\beta\|_\infty \leqslant L\|\beta\|_\infty$ for all batches $\mathcal{B} \subset [n]$ of size $b$ . A real function (e.g., log, exp) applied to a vector must be understood as element-wise application, and for vectors $u, v \in \mathbb{R}^d$ , $u^2 = (u_i^2)_{i \in [d]}$ , $u \odot v = (u_iv_i)_{i \in [d]}$ and $u / v = (u_i / v_i)_{i \in [d]}$ . We write 1, 0 for the constant vectors with coordinates 1 and 0 respectively. The Bregman divergence [9] of a differentiable convex function $h: \mathbb{R}^d \to \mathbb{R}$ is defined as $D_h(\beta_1, \beta_2) = h(\beta_1) - (h(\beta_2) + \langle\nabla h(\beta_2), \beta_1 - \beta_2\rangle)$ . 
# 3 Implicit bias of SGD and GD

We start by recalling known results on the implicit bias of gradient flow on diagonal linear networks, before presenting our main theorems characterising the (stochastic) gradient descent solutions (Theorem 1) and proving the convergence of the iterates (Theorem 2).

# 3.1 Warmup: gradient flow

We first review prior findings on gradient flow for diagonal linear neural networks. Woodworth et al. [61] show that the limit $\beta_{\alpha}^{\star}$ of the gradient flow $\mathrm{d}w_{t} = -\nabla F(w_{t})\mathrm{d}t$ initialised at $(u_0,v_0) = (\sqrt{2}\boldsymbol{\alpha} ,\mathbf{0})$ is the solution of the minimal-interpolation problem:

$$
\beta_{\boldsymbol{\alpha}}^{\star} = \underset{\beta^{\star} \in \mathcal{S}}{\operatorname{argmin}} \psi_{\boldsymbol{\alpha}}\left(\beta^{\star}\right), \quad \text{where} \quad \psi_{\boldsymbol{\alpha}}(\beta) = \frac{1}{2} \sum_{i = 1}^{d} \left(\beta_{i} \operatorname{arcsinh}\left(\frac{\beta_{i}}{\alpha_{i}^{2}}\right) - \sqrt{\beta_{i}^{2} + \alpha_{i}^{4}} + \alpha_{i}^{2}\right). \tag{4}
$$

The convex potential $\psi_{\alpha}$ is the hyperbolic entropy function (or hypentropy) [21]. Depending on the structure of the vector $\alpha$, the generalisation properties of $\beta_{\alpha}^{\star}$ vary greatly. We point out the two main characteristics of $\alpha$ that affect the behaviour of $\psi_{\alpha}$, and therefore also the solution $\beta_{\alpha}^{\star}$.

1. The Scale of $\alpha$. For an initialisation vector $\alpha$, we call the $\ell_1$-norm $\| \alpha \|_1$ the scale of the initialisation. It is an important quantity affecting the properties of the recovered solution $\beta_{\alpha}^{\star}$. To see this, let us consider a uniform initialisation of the form $\alpha = \alpha \mathbf{1}$ for a scalar value $\alpha >0$.
In this case, the potential $\psi_{\alpha}$ resembles the $\ell_1$-norm as the scale $\alpha$ vanishes: $\psi_{\alpha}\sim \ln (1 / \alpha)\| \cdot \|_{1}$ as $\alpha \rightarrow 0$. Hence, a small initialisation results in a low $\ell_1$-norm solution, which is known to induce sparse recovery guarantees [10]. This setting is often referred to as the "rich" regime [61]. In contrast, using a large initialisation scale leads to solutions with low $\ell_2$-norm: $\psi_{\alpha}\sim \| \cdot \|_{2}^{2} / (2\alpha^{2})$ as $\alpha \to \infty$, a setting known as the "kernel" or "lazy" regime. Overall, to retrieve the minimum $\ell_1$-norm solution, one should use a uniform initialisation with a small scale $\alpha$; see Fig. 7 in Appendix D for an illustration and [61, Theorem 2] for a precise characterisation.
2. The Shape of $\alpha$. In addition to the scale of the initialisation $\alpha$, a lesser-studied aspect is its "shape", a term we use to refer to the relative distribution of $\{\alpha_i\}_i$ along the $d$ coordinates [3]. It is a crucial property because having $\alpha \rightarrow 0$ does not necessarily lead to the potential $\psi_{\alpha}$ being close to the $\ell_1$-norm. Indeed, we have $\psi_{\alpha}(\beta) \stackrel{\alpha \rightarrow 0}{\sim} \sum_{i=1}^{d} \ln \left( \frac{1}{\alpha_i} \right) |\beta_i|$ (see Appendix D); therefore, if the vector $\ln(1 / \alpha)$ has entries changing at different rates, then $\psi_{\alpha}(\beta)$ is a weighted $\ell_1$-norm. In words, if the entries of $\alpha$ do not go to zero "uniformly", then the resulting implicit bias minimises a
+ +# 3.2 Implicit bias of (stochastic) gradient descent + +In Theorem 1, we prove that for an initialisation $\sqrt{2}\alpha \in \mathbb{R}^d$ and for arbitrary stepsize sequences $(\gamma_k)_{k\geqslant 0}$ if the iterates converge to an interpolator, then this interpolator is the solution of a constrained minimisation problem which involves the hyperbolic entropy $\psi_{\alpha_{\infty}}$ defined in (4), where $\alpha_{\infty}\in \mathbb{R}^{d}$ is an effective initialisation which depends on the trajectory and on the stepsize sequence. Later, we prove the convergence of iterates for macroscopic step sizes in Theorem 2. + +Theorem 1 (Implicit bias of (S)GD). Let $(u_{k},v_{k})_{k\geqslant 0}$ follow the mini-batch SGD recursion (3) initialised at $(u_0,v_0) = (\sqrt{2}\pmb {\alpha},\mathbf{0})$ and with step sizes $(\gamma_k)_{k\geqslant 0}$ . Let $(\beta_{k})_{k\geqslant 0} = (u_{k}\odot v_{k})_{k\geqslant 0}$ and assume that they converge to some interpolator $\beta_{\infty}^{\star}\in S$ . Then, $\beta_{\infty}^{\star}$ satisfies: + +$$ +\beta_ {\infty} ^ {\star} = \underset {\beta^ {\star} \in \mathcal {S}} {\operatorname {a r g m i n}} D _ {\psi_ {\alpha_ {\infty}}} \left(\beta^ {\star}, \tilde {\beta} _ {0}\right), \tag {5} +$$ + +where $D_{\psi_{\alpha_{\infty}}}$ is the Bregman divergence with hyperentropy potential $\psi_{\alpha_{\infty}}$ of the effective initialisation $\alpha_{\infty}$ , and $\tilde{\beta}_0$ is a small perturbation term. The effective initialisation $\alpha_{\infty}$ is given by, + +$$ +\boldsymbol {\alpha} _ {\infty} ^ {2} = \boldsymbol {\alpha} ^ {2} \odot \exp \left(- \sum_ {k = 0} ^ {\infty} q \left(\gamma_ {k} \nabla \mathcal {L} _ {\mathcal {B} _ {k}} \left(\beta_ {k}\right)\right)\right), \tag {6} +$$ + +where $q(x) = -\frac{1}{2}\ln \left((1 - x^2)^2\right)$ satisfies $q(x)\geqslant 0$ for $|x|\leqslant \sqrt{2}$ , with the convention $q(1) = +\infty$ . 
The perturbation term $\tilde{\beta}_0\in \mathbb{R}^d$ is explicitly given by $\tilde{\beta}_{0} = \frac{1}{2}\bigl(\boldsymbol{\alpha}_{+}^{2} - \boldsymbol{\alpha}_{-}^{2}\bigr)$, where $q_{\pm}(x) = \mp 2x-\ln ((1\mp x)^2)$ and $\boldsymbol{\alpha}_{\pm}^{2} = \boldsymbol{\alpha}^{2}\odot \exp (-\sum_{k = 0}^{\infty}q_{\pm}(\gamma_{k}\nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k})))$. Note that $q = \frac{1}{2}(q_{+} + q_{-})$, so that $\boldsymbol{\alpha}_{\infty}^{2} = \boldsymbol{\alpha}_{+}\odot \boldsymbol{\alpha}_{-}$.

Trajectory-dependent characterisation. The characterisation of $\beta_{\infty}^{\star}$ in Theorem 1 holds for any stepsize schedule such that the iterates converge, and goes beyond the continuous-time frameworks previously studied [61, 48]. The result even holds for adaptive stepsize schedules which keep the stepsize scalar, such as AdaDelta [65]. An important aspect of our result is that $\alpha_{\infty}$ and $\tilde{\beta}_0$ depend on the iterates' trajectory. Nevertheless, we argue that our formulation provides useful ingredients for understanding the implicit regularisation effects of (S)GD for this problem, in contrast to trivial characterisations (such as $\min_{\beta} \| \beta - \beta_{\infty}^{\star} \|$). Importantly, the key parameters $\alpha_{\infty}$ and $\tilde{\beta}_0$ depend on crucial quantities such as the stepsize and the noise in a useful and analysable manner: understanding how these affect $\alpha_{\infty}$ and $\tilde{\beta}_0$ coincides with understanding how they affect the recovered solution $\beta_{\infty}^{\star}$ and its generalisation properties. This is precisely the object of Sections 4 and 5, where we discuss the qualitative and quantitative insights from Theorem 1 in greater detail.

The perturbation $\tilde{\beta}_0$ can be ignored. We show in Proposition 16, under reasonable assumptions on the stepsizes, that $|\tilde{\beta}_0| \leqslant \alpha^2$ and $\alpha_{\infty} \leqslant \alpha$ (component-wise). The magnitude of $\tilde{\beta}_0$ is therefore negligible in front of the magnitudes of the $\beta^{\star} \in S$, and one can roughly ignore the term $\tilde{\beta}_0$.
Hence, the implicit regularisation (5) can be thought of as $\beta_{\infty}^{\star} \approx \mathrm{argmin}_{\beta^{\star} \in S} D_{\psi_{\alpha_{\infty}}}(\beta^{\star}, 0) = \mathrm{argmin}_{\beta^{\star} \in S} \psi_{\alpha_{\infty}}(\beta^{\star})$; thus the solution $\beta_{\infty}^{\star}$ minimises the same potential function as the solution of gradient flow (see Eq. (4)), but with an effective initialisation $\alpha_{\infty}$. Also note that for $\gamma_k \equiv \gamma \to 0$ we have $\alpha_{\infty} \to \alpha$ and $\tilde{\beta}_0 \to \mathbf{0}$ (Proposition 19), recovering the previously known result (4) for gradient flow.

**Deviation from gradient flow.** The difference from gradient flow is directly associated with the quantity $\sum_{k} q(\gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}))$. As the (stochastic) gradients converge to $\mathbf{0}$ and $q(x) \stackrel{x \to 0}{\sim} x^{2}$, one should think of this sum as roughly being $\sum_{k} \gamma_k^2 \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k})^{2}$: the larger this sum, the more the recovered solution differs from that of gradient flow. The full picture of how large step sizes and stochasticity impact the generalisation properties of $\beta_{\infty}^{\star}$ and the recovery of the minimum $\ell_{1}$-norm solution is nuanced, as clearly seen in Fig. 1.

# 3.3 Convergence of the iterates

Theorem 1 provides the implicit minimisation problem but says nothing about the convergence of the iterates. Here we show, under very reasonable assumptions on the stepsizes, that the iterates indeed
Recall the "smoothness" parameter $L$ on the minibatch loss defined in the notations. There exist $B > 0$ verifying $B = \tilde{\mathcal{O}} (\min_{\beta^{\star}\in S}\| \beta^{\star}\|_{\infty})$ and a numerical constant $c > 0$ such that for stepsizes satisfying $\gamma_k\leqslant \frac{c}{LB}$ , the iterates $(\beta_{k})_{k\geqslant 0}$ converge almost surely to the interpolator $\beta_{\infty}^{\star}$ solution of Eq. (5). + +In fact, we can be more precise by showing an exponential rate of convergence of the losses as well as characterise the rate of convergence of the iterates as follows. + +Proposition 1 (Quantitative convergence rates). For a uniform initialisation $\alpha = \alpha \mathbf{1}$ and under the assumptions of Theorem 2, we have: + +$$ +\mathbb {E} \left[ \mathcal {L} \left(\beta_ {k}\right) \right] \leqslant \left(1 - \frac {1}{2} \gamma \alpha^ {2} \lambda_ {b}\right) ^ {k} \mathcal {L} \left(\beta_ {0}\right) \quad a n d \quad \mathbb {E} \left[ \left\| \beta_ {k} - \beta_ {\alpha_ {k}} ^ {\star} \right\| ^ {2} \right] \leqslant C \left(1 - \frac {1}{2} \gamma \alpha^ {2} \lambda_ {b}\right) ^ {k}, +$$ + +where $\lambda_{b} > 0$ is the largest value such that $\lambda_{b}H \preceq \mathbb{E}_{\mathcal{B}}[H_{\mathcal{B}}]$ , $C = 2B(\alpha^{2}\lambda_{\min}^{+})^{-1}\left(1 + (4B\lambda_{\max})(\alpha^{2}\lambda_{\min}^{+})^{-1}\right)\mathcal{L}(\beta_{0})$ and $\lambda_{\min}^{+}, \lambda_{\max} > 0$ are respectively the smallest non-null and the largest eigenvalues of $H$ , and $\beta_{\alpha_k}^{\star}$ is the interpolator that minimises the perturbed hypentropy $h_k$ of parameter $\alpha_k$ , as defined in Eq. (7) in the next subsection. + +The convergence of the losses is proved directly using the time-varying mirror structure that we exhibit in the next subsection, the convergence of the iterates is proved by studying the curvature of the mirror maps on a small neighborhood around the affine interpolation space. 
+ +# 3.4 Sketch of proof through a time varying mirror descent + +As in the continuous-time framework, our results heavily rely on showing that the iterates $(\beta_k)_k$ follow a mirror descent recursion with time-varying potentials on the convex loss $\mathcal{L}(\beta)$ . To show this, we first define the following quantities: + +$$ +\boldsymbol {\alpha} _ {k} ^ {2} := \boldsymbol {\alpha} _ {+, k} \odot \boldsymbol {\alpha} _ {-, k} \quad \text {a n d} \quad \phi_ {k} := \frac {1}{2} \operatorname {a r c s i n h} \left(\frac {\boldsymbol {\alpha} _ {+, k} ^ {2} - \boldsymbol {\alpha} _ {- , k} ^ {2}}{2 \boldsymbol {\alpha} _ {k} ^ {2}}\right) \in \mathbb {R} ^ {d}, +$$ + +where $\alpha_{\pm ,k}\coloneqq \alpha \exp \left(-\frac{1}{2}\sum_{i = 0}^{k - 1}q_{\pm}\bigl {(}\gamma_{\ell}\nabla \mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell})\bigr)\right)\in \mathbb{R}^{d}$ . Finally for $k\geqslant 0$ , we define the potentials $(h_k:\mathbb{R}^d\to \mathbb{R})_{k\geqslant 0}$ as: + +$$ +h _ {k} (\beta) = \psi_ {\boldsymbol {\alpha} _ {k}} (\beta) - \langle \phi_ {k}, \beta \rangle . \tag {7} +$$ + +Where $\psi_{\alpha_k}$ is the hyperbolic entropy function defined Eq. (4). Now that all the relevant quantities are defined, we can state the following proposition which explicits the time-varying stochastic mirror descent. + +Proposition 2. The iterates $(\beta_{k} = u_{k}\odot v_{k})_{k\geqslant 0}$ from Eq. (3) satisfy the Stochastic Mirror Descent recursion with varying potentials $(h_k)_k$ .. + +$$ +\nabla h _ {k + 1} \left(\beta_ {k + 1}\right) = \nabla h _ {k} \left(\beta_ {k}\right) - \gamma_ {k} \nabla \mathcal {L} _ {\mathcal {B} _ {k}} \left(\beta_ {k}\right), +$$ + +where $h_k: \mathbb{R}^d \to \mathbb{R}$ for $k \geqslant 0$ are defined Eq. (7). Since $\nabla h_0(\beta_0) = 0$ we have: + +$$ +\nabla h _ {k} \left(\beta_ {k}\right) \in \operatorname {s p a n} \left(x _ {1}, \dots , x _ {n}\right). 
\tag{8}
$$

Theorems 1 and 2 and Proposition 1 follow from this key proposition: by suitably modifying classical convex optimisation techniques to account for the time-varying potentials, we can prove the convergence of the iterates towards an interpolator $\beta_{\infty}^{\star}$, along with that of the relevant quantities $\alpha_{\pm ,k}$, $\alpha_{k}$ and $\phi_{k}$. The implicit regularisation problem then directly follows from: (1) the limit condition $\nabla h_{\infty}(\beta_{\infty}^{\star}) \in \operatorname{span}(x_1, \ldots, x_n)$, as seen from Eq. (8), and (2) the interpolation condition $X\beta_{\infty}^{\star} = y$. Indeed, these two conditions exactly correspond to the KKT conditions of the convex problem Eq. (5).

# 4 Analysis of the impact of the stepsize and stochasticity on $\alpha_{\infty}$

In this section, we analyse the effects of large step sizes and stochasticity on the implicit bias of (S)GD. We focus on how these factors influence the effective initialisation $\alpha_{\infty}$, which plays a key role as shown in Theorem 1. From its definition in Eq. (6), we see that $\alpha_{\infty}$ is a function of the vector $\sum_{k} q(\gamma_k \nabla \mathcal{L}_{\mathcal{B}_k}(\beta_k))$. We henceforth call this quantity the gain vector. For simplicity of the discussion, from now on we consider constant step sizes $\gamma_k = \gamma$ for all $k \geqslant 0$ and a uniform initialisation of the weights $\boldsymbol{\alpha} = \alpha \mathbf{1}$ with $\alpha > 0$. We can then write the gain vector as:

$$
\operatorname{Gain}_{\gamma} := \ln \left(\frac{\boldsymbol{\alpha}^{2}}{\boldsymbol{\alpha}_{\infty}^{2}}\right) = \sum_{k} q(\gamma \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k})) \in \mathbb{R}^{d}.
$$

Following our discussion in Section 3.1 on the scale and the shape of $\alpha_{\infty}$, we recall the link between the scale and shape of $\mathrm{Gain}_{\gamma}$ and the recovered solution:

1.
The scale of $\mathrm{Gain}_{\gamma}$, i.e. the magnitude of $\| \mathrm{Gain}_{\gamma} \|_1$, indicates how much the implicit bias of (S)GD differs from that of gradient flow: $\| \mathrm{Gain}_{\gamma} \|_1 \sim 0$ implies that $\alpha_{\infty} \sim \alpha$, and therefore the recovered solution is close to that of gradient flow. On the contrary, $\| \mathrm{Gain}_{\gamma} \|_1 \gg \ln(1 / \alpha)$ implies that $\alpha_{\infty}$ has an effective scale much smaller than $\alpha$, thereby changing the implicit regularisation Eq. (5).
2. The shape of $\mathrm{Gain}_{\gamma}$ indicates which coordinates of $\beta$ in the associated minimum weighted-$\ell_{1}$ problem are most penalised. First recall from Section 3.1 that a uniformly large $\mathrm{Gain}_{\gamma}$ leads to $\psi_{\alpha_{\infty}}$ being closer to the $\ell_{1}$-norm. However, with a small weight initialisation $\alpha \rightarrow 0$, we have

$$
\psi_{\alpha_{\infty}}(\beta) \sim \ln \left(\frac{1}{\alpha}\right) \| \beta \|_{1} + \sum_{i = 1}^{d} \operatorname{Gain}_{\gamma}(i) |\beta_{i}|. \tag{9}
$$

In this case, having a heterogeneously large vector $\mathrm{Gain}_{\gamma}$ leads to a weighted $\ell_{1}$-norm as the effective implicit regularisation, where the coordinates of $\beta$ corresponding to the largest entries of $\mathrm{Gain}_{\gamma}$ are less likely to be recovered.

# 4.1 The scale of $\mathrm{Gain}_{\gamma}$ is increasing with the stepsize

The following proposition highlights the dependency of the scale of the gain $\| \mathrm{Gain}_{\gamma}\|_1$ on various problem constants.

Proposition 3. Let $\Lambda_b, \lambda_b > 0$ be the largest and smallest values, respectively, such that $\lambda_b H \preceq \mathbb{E}_{\mathcal{B}}[H_{\mathcal{B}}^2] \preceq \Lambda_b H$.
For any stepsize $\gamma > 0$ satisfying $\gamma \leqslant \frac{c}{BL}$ (as in Theorem 2), initialisation $\alpha \mathbf{1}$ and batch size $b \in [n]$ , the magnitude of the gain satisfies:

$$
\lambda_{b} \gamma^{2} \sum_{k} \mathbb{E}[\mathcal{L}(\beta_{k})] \leqslant \mathbb{E}\left[ \| \mathrm{Gain}_{\gamma} \|_{1} \right] \leqslant 2 \Lambda_{b} \gamma^{2} \sum_{k} \mathbb{E}[\mathcal{L}(\beta_{k})], \tag{10}
$$

where the expectation is over a uniform and independent sampling of the batches $(\mathcal{B}_k)_{k\geqslant 0}$ .

The slower the training, the larger the gain. Eq. (10) shows that the slower the training loss converges to 0, the larger the sum of the losses and therefore the larger the scale of $\mathrm{Gain}_{\gamma}$ . This means that the (S)GD trajectory deviates from that of gradient flow whenever the stepsize and/or the noise slows down the training. This supports observations previously made from the stochastic gradient flow analysis of [48].

The bigger the stepsize, the larger the gain. The effect of the stepsize on the magnitude of the gain is not directly visible in Eq. (10), because a larger stepsize tends to speed up the training. For stepsizes $0 < \gamma \leqslant \gamma_{\max} = \frac{c}{BL}$ as in Theorem 2, we have (see Appendix G.1):

$$
\sum_{k} \gamma^{2} \mathcal{L}(\beta_{k}) = \Theta \left(\gamma \ln \left(\frac{1}{\alpha}\right) \left\| \beta_{\ell_{1}}^{\star} \right\|_{1}\right). \tag{11}
$$

Eq. (11) clearly shows that increasing the stepsize boosts the magnitude $\| \mathrm{Gain}_{\gamma}\|_{1}$ up until the limit $\gamma_{\mathrm{max}}$ . Therefore, the larger the stepsize, the smaller the effective scale of $\alpha_{\infty}$ . In turn, a larger gap between $\alpha_{\infty}$ and $\alpha$ leads to a larger deviation of (S)GD from gradient flow.
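The growth of the gain's scale with the stepsize can be checked numerically. The sketch below runs full-batch GD on a toy noiseless sparse-regression problem with a 2-layer diagonal linear network in the $u \odot v$ parametrisation and measures $\gamma^2 \sum_k \mathcal{L}(\beta_k)$ , the quantity that controls $\|\mathrm{Gain}_\gamma\|_1$ in Eq. (10); the problem sizes, initialisation and stepsizes are illustrative choices, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s, alpha = 20, 30, 3, 0.1            # toy sizes (hypothetical, similar to the figures)
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:s] = 1.0
y = X @ beta_star                          # noiseless sparse regression

def L(beta):
    return np.sum((X @ beta - y) ** 2) / (2 * n)

def weighted_loss_sum(gamma, steps=10_000):
    """Run full-batch GD on F(u, v) = L(u * v) and return gamma^2 * sum_k L(beta_k)."""
    u, v = alpha * np.ones(d), np.zeros(d)  # beta_0 = u * v = 0
    total = 0.0
    for _ in range(steps):
        beta = u * v
        total += gamma ** 2 * L(beta)
        g = X.T @ (X @ beta - y) / n        # gradient of L at beta
        u, v = u - gamma * g * v, v - gamma * g * u
    return total

for gamma in (0.005, 0.02, 0.05):
    print(f"gamma={gamma}: gamma^2 * sum_k L(beta_k) = {weighted_loss_sum(gamma):.3f}")
```

In line with Eq. (11), the reported sums grow with $\gamma$ , since the plateau before the iterates escape the saddle at $\beta = 0$ shortens only like $1/\gamma$ while the weighting grows like $\gamma^2$.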
![](images/f458d970cb054567bb35e323e3c71f4e46a3fdde5ba75ee579f7da53491402c8.jpg)
Figure 2: Left: the scale of $\mathrm{Gain}_{\gamma}$ explodes as $\gamma \rightarrow \tilde{\gamma}_{\mathrm{max}}$ for both GD and SGD. Right: $\beta_{\mathrm{sparse}}^{\star}$ is fixed, we perform 100 runs of GD and SGD with different feature matrices, and we plot the $d$ coordinates of $\mathrm{Gain}_{\gamma}$ (for GD and SGD) on the $x$ -axis (which is in log scale for better visualisation). The shape of $\mathrm{Gain}_{\gamma}^{\mathrm{SGD}}$ is homogeneous, whereas that of GD is heterogeneous, with much higher magnitude on the support of $\beta_{\mathrm{sparse}}^{\star}$ . The shape of $\mathrm{Gain}_{\gamma}^{\mathrm{GD}}$ is proportional to the squared gradient at initialisation, which is $(\beta_{\mathrm{sparse}}^{\star})^{2}$ .

![](images/76201e4818bd0cc68c5ff813b7b29e6c839cbe358c22fadb1eeb93ecfc03eacd.jpg)

Large step sizes and Edge of Stability. The previous paragraph holds for stepsizes smaller than $\gamma_{\mathrm{max}}$ , for which we can theoretically prove convergence. But what if we use even bigger stepsizes? Let $(\beta_k^\gamma)_k$ denote the iterates generated with stepsize $\gamma$ and let us define $\tilde{\gamma}_{\mathrm{max}} := \sup_{\gamma \geqslant 0} \{\gamma \text{ s.t. } \forall \gamma' \in (0, \gamma), \sum_k \mathcal{L}(\beta_k^{\gamma'}) < \infty\}$ , which corresponds to the largest stepsize such that the iterates still converge for a given problem (even if not provably so). From Proposition 3 we have that $\gamma_{\mathrm{max}} \leqslant \tilde{\gamma}_{\mathrm{max}}$ . As we approach this upper bound on convergence, $\gamma \rightarrow \tilde{\gamma}_{\mathrm{max}}$ , the sum $\sum_k \mathcal{L}(\beta_k^\gamma)$ diverges. For such large stepsizes, the iterates of gradient descent tend to "bounce", and this regime is commonly referred to as the Edge of Stability.
In this regime, the convergence of the loss can be made arbitrarily slow due to these bouncing effects. As a consequence, as seen through Eq. (10), the magnitude of $\mathrm{Gain}_\gamma$ can become arbitrarily large, as observed in Fig. 2 (left). In this regime, the recovered solution tends to dramatically differ from the gradient flow solution, as seen in Fig. 1.

Impact of stochasticity and linear scaling rule. Assuming inputs $x_{i}$ sampled from $\mathcal{N}(0,\sigma^2 I_d)$ with $\sigma^2 > 0$ , we obtain $\mathbb{E}\left[\| \mathrm{Gain}_{\gamma}\| _1\right] = \Theta \left(\gamma \frac{\sigma^2d}{b}\ln \left(\frac{1}{\alpha}\right)\| \beta_{\ell_1}^\star \| _1\right)$ , w.h.p. over the dataset (see Appendix G.3, Proposition 17). The scale of $\mathrm{Gain}_{\gamma}$ decreases with the batch size, and there is a factor $n$ between that of SGD and that of GD. Additionally, the magnitude of $\mathrm{Gain}_{\gamma}$ depends on the ratio $\frac{\gamma}{b}$ , resembling the linear scaling rule commonly used in deep learning [22].

By analysing the magnitude $\| \mathrm{Gain}_{\gamma}\| _1$ , we have explained the distinct behaviour of (S)GD with large stepsizes compared to gradient flow. However, our current analysis does not qualitatively distinguish the behaviour of SGD from that of GD beyond the linear stepsize scaling rule, in contrast with Fig. 1. A deeper understanding of the shape of $\mathrm{Gain}_{\gamma}$ is needed to explain this disparity.

# 4.2 The shape of $\mathrm{Gain}_{\gamma}$ explains the differences between GD and SGD

In this section, we restrict our presentation to single-batch SGD $(b = 1)$ and full-batch GD $(b = n)$ . When visualising the typical shape of $\mathrm{Gain}_{\gamma}$ for large stepsizes (see Fig. 2, right), we note that GD and SGD behave very differently.
For GD, the magnitude of $\mathrm{Gain}_{\gamma}$ is higher for coordinates in the support of $\beta_{\ell_1}^{\star}$ , and thus these coordinates are adversely weighted in the asymptotic limit of $\psi_{\alpha_{\infty}}$ (per Eq. (9)). This explains the distinction seen in Fig. 1, where GD in this regime has poor sparse recovery despite having a small scale of $\alpha_{\infty}$ , as opposed to SGD, which behaves well.

The shape of $\mathrm{Gain}_{\gamma}$ is determined by the sum of the squared gradients $\sum_{k} \nabla \mathcal{L}_{\mathcal{B}_k} (\beta_k)^2$ , and in particular by the degree of heterogeneity among the coordinates of this sum. Precisely analysing the sum over the whole trajectory of the iterates $(\beta_k)_k$ is technically out of reach. However, we empirically observe for the trajectories shown in Fig. 2 that the shape is largely determined within the first few iterates, as formalised in the observation below.

Observation 1. $\sum_{k}\nabla \mathcal{L}_{\mathcal{B}_k}(\beta_k)^2\lesssim \mathbb{E}[\nabla \mathcal{L}_{\mathcal{B}_k}(\beta_0)^2 ]$

In the simple case of a Gaussian noiseless sparse recovery problem (where $y_{i} = \langle \beta_{\mathrm{sparse}}^{\star}, x_{i} \rangle$ for some sparse vector $\beta_{\mathrm{sparse}}^{\star}$ ), we can control these gradients for GD and SGD (Appendix G.4) as:

$$
\nabla \mathcal{L}(\beta_{0})^{2} = \left(\beta_{\mathrm{sparse}}^{\star}\right)^{2} + \varepsilon, \quad \text{for some } \varepsilon \text{ verifying } \| \varepsilon \|_{\infty} \ll \left\| \beta_{\mathrm{sparse}}^{\star} \right\|_{\infty}^{2}, \tag{12}
$$

$$
\mathbb{E}_{i_{0}}\left[ \nabla \mathcal{L}_{i_{0}}(\beta_{0})^{2} \right] = \Theta \left(\| \beta_{\mathrm{sparse}}^{\star} \|_{2}^{2}\, \mathbf{1}\right). \tag{13}
$$

The gradient of GD is heterogeneous. Since $\beta_{\mathrm{sparse}}^{\star}$ is sparse by definition, we deduce from Eq.
(12) that $\nabla \mathcal{L}(\beta_0)$ is heterogeneous, with larger values corresponding to the support of $\beta_{\mathrm{sparse}}^{\star}$ . Along with Observation 1, this means that $\mathrm{Gain}_{\gamma}$ has much larger values on the support of $\beta_{\mathrm{sparse}}^{\star}$ . The corresponding weighted $\ell_1$ -norm therefore penalises the coordinates belonging to the support of $\beta_{\mathrm{sparse}}^{\star}$ , which hinders the recovery of $\beta_{\mathrm{sparse}}^{\star}$ (as explained in Example 1, Appendix D).

The stochastic gradient of SGD is homogeneous. On the contrary, from Eq. (13), we have that the initial stochastic gradients are homogeneous, leading to a weighted $\ell_1$ -norm whose weights are roughly balanced. The corresponding weighted $\ell_1$ -norm is therefore close to the uniform $\ell_1$ -norm and the classical $\ell_1$ recovery guarantees are expected.

Overall summary of the joint effects of the scale and shape. In summary, we have the following trichotomy, which fully explains Fig. 1:

1. for small stepsizes, the scale is small, and the (S)GD solutions are close to that of gradient flow;
2. for large stepsizes the scale is significant and the recovered solutions differ from GF:

- for SGD, the shape of $\alpha_{\infty}$ is uniform, the associated norm is closer to the $\ell_1$ -norm and the recovered solution is closer to the sparse solution;
- for GD, the shape is heterogeneous, and the associated norm is weighted such that it hinders the recovery of the sparse solution.

In the last section, we heuristically relate these findings to the Edge of Stability phenomenon.

# 5 Edge of Stability: the neural point of view

In recent years it has been noticed that, when training neural networks with 'large' step sizes at the limit of divergence, GD enters the Edge of Stability (EoS) regime. In this regime, as seen in Fig. 3, the iterates of GD 'bounce' / 'oscillate'.
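The 'bouncing' is governed by the classical stability threshold of GD: on a quadratic with curvature $\lambda$ , the iterates contract if and only if $\gamma < 2/\lambda$ . A minimal one-dimensional sketch (illustrative, not from the paper's experiments):

```python
import numpy as np

def gd_on_quadratic(gamma, curvature=1.0, w0=1.0, steps=50):
    # f(w) = curvature * w^2 / 2, so each GD step is w <- (1 - gamma * curvature) * w
    w, traj = w0, [w0]
    for _ in range(steps):
        w -= gamma * curvature * w
        traj.append(w)
    return np.array(traj)

stable = gd_on_quadratic(gamma=1.5)    # |1 - 1.5| = 0.5 < 1: oscillates but contracts
unstable = gd_on_quadratic(gamma=2.5)  # |1 - 2.5| = 1.5 > 1: oscillates and diverges
print(abs(stable[-1]), abs(unstable[-1]))
```

On a non-quadratic objective such as the DLN loss $F$ , the divergence is instead tamed because the oscillations drive the iterates towards regions of lower curvature, which is the drift described in the three phases of this section.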
In this section, we come back to the point of view of the weights $w_{k} = (u_{k}, v_{k}) \in \mathbb{R}^{2d}$ and make the connection between our previous results and the common understanding of the EoS phenomenon. The question we seek to answer is: in which case does GD enter the EoS regime, and if so, what are the consequences for the trajectory? Keep in mind that this section aims to provide insights rather than formal statements. We study the GD trajectory starting from a small initialisation $\boldsymbol{\alpha} = \alpha \mathbf{1}$ with $\alpha \ll 1$ , such that we can consider that gradient flow converges close to the sparse interpolator $\beta_{\mathrm{sparse}}^{\star} = \beta_{w_{\mathrm{sparse}}^{\star}}$ corresponding to the weights $w_{\mathrm{sparse}}^{\star} = (\sqrt{|\beta_{\mathrm{sparse}}^{\star}|}, \mathrm{sign}(\beta_{\mathrm{sparse}}^{\star})\sqrt{|\beta_{\mathrm{sparse}}^{\star}|})$ (see Lemma 1 in [49] for the mapping from predictors to weights for gradient flow). The trajectory of GD, as seen in Fig. 3 (left), can be decomposed into up to three phases.

First phase: gradient flow. The stepsize is appropriate for the local curvature around initialisation (as seen in Fig. 3, lower right) and the iterates of GD remain close to the trajectory of gradient flow (in black in Fig. 3). If the stepsize satisfies $\gamma < \frac{2}{\lambda_{\mathrm{max}}(\nabla^2F(w_{\mathrm{sparse}}^\star))}$ , then it is compatible with the local curvature and the iterates can converge: in this case GF and GD converge to the same point (as seen in Fig. 1 for small stepsizes). For larger $\gamma > \frac{2}{\lambda_{\mathrm{max}}(\nabla^2F(w_{\mathrm{sparse}}^\star))}$ (as is the case for $\gamma_{\mathrm{GD}}$ in Fig. 3, lower right), the iterates cannot converge to $\beta_{\mathrm{sparse}}^\star$ and we enter the oscillating phase.

Second phase: oscillations. The iterates start oscillating.
The gradient of $F$ reads $\nabla_{(u,v)}F(w)\sim$ $(\nabla \mathcal{L}(\beta)\odot v,\nabla \mathcal{L}(\beta)\odot u)$ , and for $w$ in the vicinity of $w_{\mathrm{sparse}}^{\star}$ we have that $u_{i}\approx v_{i}\approx 0$ for $i\notin \operatorname {supp}(\beta_{\mathrm{sparse}}^{\star})$ . Therefore, for $w\sim w_{\mathrm{sparse}}^{\star}$ we have that $\nabla_uF(w)_i\approx \nabla_vF(w)_i\approx 0$ for $i\notin \operatorname {supp}(\beta_{\mathrm{sparse}}^{\star})$ and the gradients roughly belong to $\operatorname {Span}(e_i,e_{i + d})_{i\in \operatorname {supp}(\beta_{\mathrm{sparse}}^{\star})}$ . This means

![](images/78e0b57d3d310fb82cd484bd7bdb76ebfc629115489374912204efeb7486e084.jpg)
Figure 3: GD at the $EoS$ . Left: For GD, the coordinates on the support of $\beta_{\mathrm{sparse}}^{\star}$ oscillate and drift towards 0. Right, top: The GD train losses saturate before eventually converging. Bottom: GF converges towards a solution that has a high maximum eigenvalue of the hessian. GD cannot converge towards this solution because of its large stepsize: it therefore drifts towards a solution that has a curvature just below $2 / \gamma$ .

that only the coordinates of the weights $(u_i, v_i)$ for $i \in \operatorname{supp}(\beta_{\mathrm{sparse}}^{\star})$ can oscillate, and similarly for $(\beta_i)_{i \in \operatorname{supp}(\beta_{\mathrm{sparse}}^{\star})}$ (as seen in Fig. 3, left).

Last phase: convergence. Due to the oscillations, the iterates gradually drift towards a region of lower curvature (Fig. 3, lower right: the sharpness decreases) where they may (potentially) converge. Theorem 1 enables us to understand where they converge: the coordinates of $\beta_{k}$ that have oscillated significantly along the trajectory belong to the support of $\beta_{\mathrm{sparse}}^{\star}$ , and therefore $\mathrm{Gain}_{\gamma}(i)$ becomes much larger for $i \in \operatorname{supp}(\beta_{\mathrm{sparse}}^{\star})$ than for the other coordinates.
Thus, the coordinates of the solution recovered in the $EoS$ regime are heavily penalised on the support of the sparse solution. This is observed in Fig. 3 (left): the oscillations of $(\beta_{i})_{i \in \operatorname{supp}(\beta_{\mathrm{sparse}}^{\star})}$ lead to a gradual shift of these coordinates towards 0, hindering an accurate recovery of the solution $\beta_{\mathrm{sparse}}^{\star}$ .

SGD in the $EoS$ regime. In contrast to the behaviour of GD, where the oscillations primarily occur on the coordinates in the support of the ground-truth sparse model, for SGD we see a different behaviour in Fig. 6 (Appendix A). For stepsizes in the $EoS$ regime, just below the non-convergence threshold, the fluctuations occur evenly over all coordinates, leading to a uniform $\alpha_{\infty}$ . These fluctuations are reminiscent of label-noise SGD [2], which has been shown to recover the sparse interpolator in diagonal linear networks [50].

# 6 Conclusion

We study the effect of stochasticity along with large stepsizes when training DLNs with (S)GD. We prove convergence of the iterates and explicitly characterise the recovered solution by exhibiting an implicit regularisation problem which depends on the iterates' trajectory. In essence, the impact of the stepsize and minibatch size is captured by the effective initialisation parameter $\alpha_{\infty}$ , which depends on these choices in an informative way. We then use our characterisation to explain key empirical differences between SGD and GD and provide further insights on the role of the stepsize and stochasticity. In particular, our characterisation explains the fundamentally different generalisation properties of SGD and GD solutions at large stepsizes as seen in Fig. 1: without stochasticity, the use of large stepsizes can prevent the recovery of the sparse interpolator, even though the effective scale of the initialisation decreases with larger stepsizes for both SGD and GD.
We also provide insights on the link between the Edge of Stability regime and our results. + +# Acknowledgements + +M. Even deeply thanks Laurent Massoulié for making it possible to visit Microsoft Research and the Washington state during an internship supervised by Suriya Gunasekar, the MSR Machine Learning Foundations group for hosting him, and Martin Jaggi for inviting him for a week in Lausanne at EPFL, making it possible to meet and discuss with Scott Pesme and Nicolas Flammarion. + +# References + +[1] Kwangjun Ahn, Sébastien Bubeck, Sinho Chewi, Yin Tat Lee, Felipe Suarez, and Yi Zhang. Learning threshold neurons via the "edge of stability". arXiv preprint, 2022. +[2] M. Andriushchenko, A. Varre, L. Pillaud-Vivien, and N. Flammarion. SGD with large step sizes learns sparse features. arXiv preprint, 2022. +[3] Shahar Azulay, Edward Moroshko, Mor Shpigel Nacson, Blake E Woodworth, Nathan Srebro, Amir Globerson, and Daniel Soudry. On the implicit bias of initialization shape: Beyond infinitesimal mirror descent. In International Conference on Machine Learning, pages 468-477. PMLR, 2021. +[4] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253-263, January 2008. +[5] H. H Bauschke, J. Bolte, and M. Teboulle. A descent lemma beyond Lipschitz gradient continuity: first-order methods revisited and applications. Mathematics of Operations Research, 42(2):330-348, 2017. +[6] Raphaël Berthier. Incremental learning in diagonal linear networks. arXiv preprint arXiv:2208.14673, 2022. +[7] G. Beugnot, J. Mairal, and A. Rudi. On the benefits of large learning rates for kernel methods. In Proceedings of Thirty Fifth Conference on Learning Theory, volume 178 of Proceedings of Machine Learning Research, pages 254–282. PMLR, 02–05 Jul 2022. +[8] G. Blanc, N. Gupta, G. Valiant, and P. Valiant. 
Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process. In Proceedings of Thirty Third Conference on Learning Theory, volume 125 of Proceedings of Machine Learning Research, pages 483-513. PMLR, 09-12 Jul 2020. +[9] L.M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200-217, 1967. ISSN 0041-5553. +[10] E. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207-1223, 2006. +[11] Pratik Chaudhari and Stefano Soatto. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. In International Conference on Learning Representations, 2018. +[12] Lei Chen and Joan Bruna. On gradient descent convergence beyond the edge of stability, 2022. +[13] Lénaïc Chizat, Edouard Oyallon, and Francis Bach. On Lazy Training in Differentiable Programming. 2019. +[14] Jeremy Cohen, Simran Kaur, Yuanzhi Li, J Zico Kolter, and Ameet Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability. In International Conference on Learning Representations, 2021. +[15] Alex Damian, Tengyu Ma, and Jason D. Lee. Label noise SGD provably prefers flat global minimizers. In Advances in Neural Information Processing Systems, 2021. +[16] Alex Damian, Eshaan Nichani, and Jason D. Lee. Self-stabilization: The implicit bias of gradient descent at the edge of stability. In International Conference on Learning Representations, 2023. +[17] J. L. Doob. Stochastic Processes. John Wiley & Sons, 1990. +[18] Radu Alexandru Dragomir, Mathieu Even, and Hadrien Hendrikx. Fast stochastic Bregman gradient methods: Sharp analysis and variance reduction. 
In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 2815-2825. PMLR, 18-24 Jul 2021.

[19] Mathieu Even and Laurent Massoulie. Concentration of non-isotropic random tensors with applications to learning and empirical risk minimization. In Proceedings of Thirty Fourth Conference on Learning Theory, volume 134 of Proceedings of Machine Learning Research, pages 1847-1886. PMLR, 15-19 Aug 2021.
[20] Jonas Geiping, Micah Goldblum, Phillip E Pope, Michael Moeller, and Tom Goldstein. Stochastic training is not necessary for generalization. In International Conference on Learning Representations, 2022.
[21] Udaya Ghai, Elad Hazan, and Yoram Singer. Exponentiated gradient meets gradient descent. In Proceedings of the 31st International Conference on Algorithmic Learning Theory, volume 117 of Proceedings of Machine Learning Research, pages 386-407. PMLR, 08 Feb-11 Feb 2020.
[22] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
[23] Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. Advances in Neural Information Processing Systems, 30, 2017.
[24] Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. Characterizing implicit bias in terms of optimization geometry. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 1832-1841. PMLR, 10-15 Jul 2018.
[25] Suriya Gunasekar, Jason D Lee, Daniel Soudry, and Nati Srebro. Implicit bias of gradient descent on linear convolutional networks. In Advances in Neural Information Processing Systems, volume 31, 2018.
[26] Jeff Z. HaoChen, Colin Wei, Jason Lee, and Tengyu Ma.
Shape matters: Understanding the implicit bias of the noise covariance. In Proceedings of Thirty Fourth Conference on Learning Theory, volume 134 of Proceedings of Machine Learning Research, pages 2315-2357. PMLR, 15-19 Aug 2021.
[27] Fengxiang He, Tongliang Liu, and Dacheng Tao. Control batch size and learning rate to generalize well: Theoretical and empirical evidence. In Advances in Neural Information Processing Systems, volume 32, 2019.
[28] Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1-42, January 1997.
[29] Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: Closing the generalization gap in large batch training of neural networks. In Proceedings of the 31st International Conference on Neural Information Processing Systems, page 1729-1739, 2017.
[30] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, page 8580-8589, 2018.
[31] Stanisław Jastrzębski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Three factors influencing minima in SGD, 2017.
[32] Stanisław Jastrzębski, Zachary Kenton, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. On the relation between the sharpest directions of DNN loss and the SGD step length. In International Conference on Learning Representations, 2019.
[33] Stanisław Jastrzębski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Width of minima reached by stochastic gradient descent is influenced by learning rate to batch size ratio. In Artificial Neural Networks and Machine Learning - ICANN 2018, pages 392-402, 2018.
[34] Ziwei Ji and Matus Telgarsky. Gradient descent aligns the layers of deep linear networks. In International Conference on Learning Representations, 2019.
+ +[35] Ziwei Ji and Matus Telgarsky. Directional convergence and alignment in deep learning. In Advances in Neural Information Processing Systems, volume 33, pages 17176-17186, 2020. +[36] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2017. +[37] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2017. +[38] Bobby Kleinberg, Yanzhi Li, and Yang Yuan. An alternative view: When does SGD escape local minima? In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2698-2707. PMLR, 10-15 Jul 2018. +[39] Yuanzhi Li, Colin Wei, and Tengyu Ma. Towards explaining the regularization effect of initial large learning rate in training neural networks. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019. +[40] Kaifeng Lyu and Jian Li. Gradient descent maximizes the margin of homogeneous neural networks. arXiv preprint arXiv:1906.05890, 2019. +[41] Stephan Mandt, Matthew D. Hoffman, and David M. Blei. A variational analysis of stochastic gradient algorithms. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, page 354-363, 2016. +[42] Dominic Masters and Carlo Luschi. Revisiting small batch training for deep neural networks. arXiv preprint arXiv:1804.07612, 2018. +[43] Rotem Mulayoff, Tomer Michaeli, and Daniel Soudry. The implicit bias of minima stability: A view from function space. In Advances in Neural Information Processing Systems, 2021. 
+[44] Mor Shpigel Nacson, Kavya Ravichandran, Nathan Srebro, and Daniel Soudry. Implicit bias of the step size in linear diagonal neural networks. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 16270-16295. PMLR, 17-23 Jul 2022. +[45] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv preprint arXiv:1412.6614, 2014. +[46] Ryan O'Donnell. Analysis of boolean functions, 2021. +[47] Francesco Orabona, Koby Crammer, and Nicolò Cesa-Bianchi. A generalized online mirror descent with applications to classification and regression. Mach. Learn., 99(3):411-435, jun 2015. +[48] S. Pesme, L. Pillaud-Vivien, and N. Flammarion. Implicit bias of SGD for diagonal linear networks: a provable benefit of stochasticity. In Advances in Neural Information Processing Systems, 2021. +[49] Scott Pesme and Nicolas Flammarion. Saddle-to-saddle dynamics in diagonal linear networks. arXiv preprint arXiv:2304.00488, 2023. +[50] L. Pillaud-Vivien, J. Reygner, and N. Flammarion. Label noise (stochastic) gradient descent implicitly solves the lasso for quadratic parametrisation. In Proceedings of Thirty Fifth Conference on Learning Theory, volume 178 of Proceedings of Machine Learning Research, pages 2127-2159. PMLR, 2022. +[51] H. Robbins and S. Monro. A stochastic approximation method. Ann. Math. Statist, 22(3):400-407, 1951. +[52] Samuel L. Smith and Quoc V. Le. A Bayesian perspective on generalization and stochastic gradient descent. In International Conference on Learning Representations, 2018. + +[53] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. J. Mach. Learn. Res., 19(1):2822-2878, jan 2018. +[54] Terrence Tao. Concentration of measure. 254A, Notes 1, Blogpost, 2010. +[55] Matus Telgarsky. 
Margins, shrinkage, and boosting. In International Conference on Machine Learning, pages 307-315. PMLR, 2013. +[56] Tomas Vaskevicius, Varun Kanade, and Patrick Rebeschini. The statistical complexity of early-stopped mirror descent. In Advances in Neural Information Processing Systems, volume 33, pages 253-264, 2020. +[57] Tomas Vaskevicius, Varun Kanade, and Patrick Rebeschini. Implicit regularization for optimal sparse recovery. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019. +[58] Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2018. +[59] Yuqing Wang, Minshuo Chen, Tuo Zhao, and Molei Tao. Large learning rate tames homogeneity: Convergence and balancing effect. In International Conference on Learning Representations, 2022. +[60] Stephan Wojtowytsch. Stochastic gradient descent with noise of machine learning type. part II: Continuous time analysis. arXiv preprint arXiv:2106.02588, 2021. +[61] Blake Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, and Nathan Srebro. Kernel and rich regimes in overparametrized models. In Proceedings of Thirty Third Conference on Learning Theory, volume 125 of Proceedings of Machine Learning Research, pages 3635-3673. PMLR, 09-12 Jul 2020. +[62] Fan Wu and Patrick Rebeschini. A continuous-time mirror descent approach to sparse phase retrieval. In Advances in Neural Information Processing Systems, volume 33, pages 20192-20203, 2020. +[63] Jingfeng Wu, Difan Zou, Vladimir Braverman, and Quanquan Gu. Direction matters: On the implicit bias of stochastic gradient descent with moderate learning rate. In International Conference on Learning Representations, 2021. +[64] Lei Wu, Chao Ma, and Weinan E. 
How SGD selects the global minima in over-parameterized learning: A dynamical stability perspective. In Advances in Neural Information Processing Systems, volume 31, 2018.
[65] Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
[66] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations, 2017.
[67] Xingyu Zhu, Zixuan Wang, Xiang Wang, Mo Zhou, and Rong Ge. Understanding edge-of-stability training dynamics with a minimalist example. International Conference on Learning Representations, 2023.

# Organisation of the Appendix.

1. In Appendix A, we provide additional experiments for uncentered data as well as on the behaviour of the sharpness and trace of the Hessian along the trajectory of the iterates. We finally provide an experiment highlighting the EoS regime for SGD.
2. In Appendix B, we prove that $(\beta_{k})$ follows a mirror descent recursion with varying potentials. We make these potentials explicit and discuss some consequences.
3. In Appendix C, we prove that (S)GD on the $\frac{1}{2} (w_{+}^{2} - w_{-}^{2})$ and $u \odot v$ parametrisations with suitable initialisations leads to the same sequence $(\beta_k)$ .
4. In Appendix D, we show that the hypentropy $\psi_{\alpha}$ converges to a weighted $\ell_1$ -norm when $\alpha$ converges to 0 non-uniformly. We then discuss the effects of this weighted $\ell_1$ -norm for sparse recovery.
5. In Appendix E, we provide our descent lemmas for mirror descent with varying potentials and prove the boundedness of the iterates.
6. In Appendix F, we prove our main results: Theorem 1 and Theorem 2, as well as quantitative convergence (Proposition 1).
7. In Appendix G, we prove the lemmas and propositions given in the main text.
8. In Appendix H, we provide technical lemmas used throughout the proof of Theorem 1 and Theorem 2.
+9. In Appendix I, we provide concentration results for random matrices and random vectors, used to estimate with high probability (w.r.t. the dataset) quantities related to the data. + +# A Additional experiments and results + +# A.1 Uncentered data + +When the data is uncentered, the discussion and the conclusion for GD are somewhat different. This paragraph is motivated by the observation of Nacson et al. [44] who notice that GD with large stepsizes helps to recover low $\ell_{1}$ solutions for uncentered data (Fig. 4). We make the following assumptions on the uncentered inputs. + +Assumption 1. There exist $\mu \in \mathbb{R}^d$ and $\delta, c_0, c_1, c_2 > 0$ such that for all $s$ -sparse vectors $\beta$ verifying $\langle \mu, \beta \rangle \geqslant c_0 \| \beta \|_{\infty} \| \mu \|_{\infty}$ , there exists $\varepsilon \in \mathbb{R}^d$ such that $(X^\top X)\beta = \langle \beta, \mu \rangle \mu + \varepsilon$ where $\| \varepsilon \|_2 \leqslant \delta \| \beta \|_2$ and $c_1 \langle \beta, \mu \rangle^2 \mu^2 \leqslant \frac{1}{n} \sum_i x_i^2 \langle x_i, \beta \rangle^2 \leqslant c_2 \langle \beta, \mu \rangle^2 \mu^2$ . + +Assumption 1 is not restrictive and holds with high probability for $\mathcal{N}(\mu \mathbf{1},\sigma^2 I_d)$ inputs when $\mu \gg \sigma \mathbf{1}$ (see Lemma 9 in Appendix). The following lemma characterises the initial shape of SGD and GD gradients for uncentered data. + +Proposition 4 (Shape of the (stochastic) gradient at initialisation). 
Under Assumption 1 and if $\langle \mu, \beta_{\mathrm{sparse}}^{\star} \rangle \geqslant c_0 \| \beta_{\mathrm{sparse}}^{\star} \|_{\infty} \| \mu \|_{\infty}$, the squared full-batch gradient and the expected squared stochastic gradient at initialisation satisfy, for some $\varepsilon$ satisfying $\| \varepsilon \|_{\infty} \ll \| \beta_{\mathrm{sparse}}^{\star} \|_2$:

$$
\nabla \mathcal{L}(\beta_0)^2 = \langle \beta_{\mathrm{sparse}}^{\star}, \mu \rangle^2 \mu^2 + \varepsilon, \tag{14}
$$

$$
\mathbb{E}_{i \sim \operatorname{Unif}([n])}\left[\nabla \mathcal{L}_i(\beta_0)^2\right] = \Theta\left(\langle \beta_{\mathrm{sparse}}^{\star}, \mu \rangle^2 \mu^2\right). \tag{15}
$$

In this case the initial gradients of SGD and of GD are both homogeneous, which explains the behaviour of gradient descent in Fig. 4: large stepsizes help in the recovery of the sparse solution in the presence of uncentered data, as opposed to centered data. Note that for uncentered data with a $\mu \in \mathbb{R}^d$ orthogonal to $\beta_{\mathrm{sparse}}^{\star}$, decentering has no effect on the recovered solution. If the support of $\mu$ is the same as that of $\beta_{\mathrm{sparse}}^{\star}$, the effect is detrimental and the same discussion as in the centered data case applies.

![](images/149374317011b7369e6206dcc6c5c09545f258660018b22edd16a134ab969921.jpg)
Figure 4: Noiseless sparse regression with a 2-layer DLN with uncentered data $x_{i} \sim \mathcal{N}(\mu \mathbf{1}, I_{d})$ where $\mu = 5$. All the stepsizes lead to convergence to a global solution and the solutions of SGD and GD have similar behaviours, corroborating Proposition 4. The setup corresponds to $(n, d, s, \alpha) = (20, 30, 3, 0.1)$.
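A small numerical sanity check of the homogeneity claim of Proposition 4 can be run with a least-squares loss and the Gaussian setup of Fig. 4. The snippet below is ours (all variable names and the sample size are our choices, not the paper's): it compares the coordinate-wise spread of the full-batch gradient at $\beta_0 = 0$ for uncentered versus centered inputs.

```python
import numpy as np

# Toy check of Proposition 4: with uncentered inputs x_i ~ N(mu*1, I_d),
# the full-batch gradient at beta_0 = 0 is nearly homogeneous across
# coordinates, unlike in the centered case.
rng = np.random.default_rng(0)
n, d, s, mu = 2000, 30, 3, 5.0
beta_star = np.zeros(d)
beta_star[:s] = 1.0

def grad_ratio(center):
    X = rng.normal(0.0, 1.0, (n, d)) + (0.0 if center else mu)
    y = X @ beta_star
    g = -X.T @ y / n          # gradient of (1/2n)||X beta - y||^2 at beta = 0
    a = np.abs(g)
    return a.max() / a.min()  # ratio 1 = perfectly homogeneous coordinates

r_unc = grad_ratio(center=False)   # uncentered: ratio close to 1
r_cen = grad_ratio(center=True)    # centered: support coords dominate
print(r_unc, r_cen)
```

With uncentered data the term $\mu^2 \langle \mathbf{1}, \beta^\star \rangle$ dominates every coordinate of the gradient, so the ratio stays close to 1, whereas in the centered case the off-support coordinates are near zero and the ratio blows up.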
# A.2 Behaviour of the maximal eigenvalue and trace of the Hessian

In Fig. 5, we provide additional experiments on the behaviour of: (1) the maximum eigenvalue of the Hessian $\nabla^2 F(w_\infty^\gamma)$ at the convergence of the iterates of SGD and GD; (2) the trace of the Hessian at the convergence of the iterates. As is clearly observed, increasing the stepsize for GD leads to a 'flatter' minimum in terms of the maximum eigenvalue of the Hessian, while increasing the stepsize for SGD leads to a 'flatter' minimum in terms of its trace. These two notions of flatness select solutions with very different structures. Indeed, from the value of the Hessian Eq. (22) at a global solution, and (very) roughly assuming that $X^{\top}X = I_{d}$ and that $\alpha \sim 0$ (pushing the EoS phenomenon), one can see that minimising $\lambda_{\mathrm{max}}(\nabla^{2}F(w))$ under the constraints $X(w_{+}^{2} - w_{-}^{2}) = y$ and $w_{+}\odot w_{-} = 0$ is equivalent to minimising $\| \beta \|_{\infty}$ under the constraint $X\beta = y$. On the other hand, minimising the trace of the Hessian is equivalent to minimising the $\ell_1$-norm.

![](images/7898a3b13ec8bd61669e4c393d744c27d4ce824547657c44135c4e507d3200da.jpg)
Figure 5: Noiseless sparse regression setting. Diagonal linear network. Centered data. Behaviour of 2 different types of flatness of the recovered solution by SGD and GD depending on the stepsize. The setup corresponds to $(n,d,s,\alpha) = (20,30,3,0.1)$.

![](images/93fa17e1692c607a83b6cfe98e117b2c0cb6a91a319558d87cbf42f338746335.jpg)

# A.3 Edge of Stability for SGD

![](images/e50262f3c26ff16925daaaab003fa778741ab9d376d1816068ce9e4382ac7805.jpg)
Figure 6: SGD at the edge of stability: all coordinates fluctuate, and the sparse solution is recovered. As opposed to GD at the EoS, since all coordinates fluctuate, the coordinates to recover are not more penalised than the others.
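The $\lambda_{\max}$-versus-trace computation of App. A.2 can be made concrete in a toy setting. Under the simplifying assumptions used there ($X^{\top}X = I_d$ after normalisation, $w_+ \odot w_- = 0$, $\alpha \approx 0$), the Hessian at a global minimum is diagonal in $(w_+, w_-)$, so its largest eigenvalue is $2\|\beta\|_\infty$ and its trace is $2\|\beta\|_1$. The sketch below (the finite-difference helper and all names are ours) checks this numerically.

```python
import numpy as np

# At a global minimum of F(w) = 0.5*||beta(w) - beta_hat||^2 (i.e. X^T X = I_d)
# with w_+ ⊙ w_- = 0, the Hessian in (w_+, w_-) is diagonal, so
# lambda_max = 2*||beta_hat||_inf and trace = 2*||beta_hat||_1.
beta_hat = np.array([3.0, -1.0, 0.5, 0.0])
d = beta_hat.size
w = np.concatenate([np.sqrt(2 * np.clip(beta_hat, 0, None)),
                    np.sqrt(2 * np.clip(-beta_hat, 0, None))])  # (w_+, w_-)

def F(w):
    wp, wm = w[:d], w[d:]
    return 0.5 * np.sum((0.5 * (wp**2 - wm**2) - beta_hat) ** 2)

def hessian(f, x, h=1e-4):
    # central finite-difference Hessian
    m = x.size
    H = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            ei, ej = np.eye(m)[i] * h, np.eye(m)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h**2)
    return H

H = hessian(F, w)
lam_max = np.linalg.eigvalsh(H).max()
tr = np.trace(H)
print(lam_max, tr)  # ≈ 2*||beta_hat||_inf = 6 and 2*||beta_hat||_1 = 9
```

This is only a sketch under the stated idealisations; with general data $X$ and $\alpha > 0$ the Hessian is no longer exactly diagonal, but the same trade-off drives the GD/SGD behaviour shown in Fig. 5.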
# B Main ingredients behind the proof of Theorem 1 and Theorem 2

In this section, we show that the iterates $(\beta_{k})_{k\geqslant 0}$ follow a stochastic mirror descent with varying potentials. At the core of our analysis, this result enables us to (i) prove convergence of the iterates to an interpolator and (ii) completely characterise the inductive bias of the algorithm (SGD or GD). Unveiling a mirror-descent-like structure to characterise the implicit bias of a gradient method is classical. For gradient flow over diagonal linear networks [61], the iterates follow a mirror flow with respect to the hypentropy (4) with parameter $\alpha$ the initialisation scale, while for stochastic gradient flow [48] the mirror flow has a continuously evolving potential.

# B.1 Mirror descent and varying potentials

We recall that for a strictly convex reference function $h: \mathbb{R}^d \to \mathbb{R}$, the (stochastic) mirror descent iterates write as [5, 18], where the minimum is assumed to be attained over $\mathbb{R}^d$ and unique:

$$
\beta_{k+1} = \operatorname{argmin}_{\beta \in \mathbb{R}^{d}} \left\{ \gamma_{k} \langle g_{k}, \beta \rangle + D_{h}(\beta, \beta_{k}) \right\}, \tag{16}
$$

for stochastic gradients $g_{k}$ and stepsizes $\gamma_{k} \geqslant 0$, where $D_{h}(\beta, \beta') = h(\beta) - h(\beta') - \langle \nabla h(\beta'), \beta - \beta' \rangle$ is the Bregman divergence associated to $h$. Iteration (16) can also be cast as

$$
\nabla h\left(\beta_{k+1}\right) = \nabla h\left(\beta_{k}\right) - \gamma_{k} g_{k}. \tag{17}
$$

Now, let $(h_k)$ be strictly convex reference functions $\mathbb{R}^d \to \mathbb{R}$.
Whilst in continuous time there is only one natural way to extend mirror flow to varying potentials, in discrete time there are several. The varying potentials could be incorporated in (16) (replacing $h$ by $h_k$, leading to $\nabla h_{k}(\beta_{k + 1}) = \nabla h_{k}(\beta_{k}) - \gamma_{k}g_{k}$); instead, the mirror descent with varying potentials we study in this paper incorporates both $h_{k + 1}$ and $h_k$, in the spirit of (17). The iterates are thus defined through:

$$
\beta_{k+1} = \operatorname{argmin}_{\beta \in \mathbb{R}^{d}} \left\{ \gamma_{k} \langle g_{k}, \beta \rangle + D_{h_{k+1}, h_{k}}(\beta, \beta_{k}) \right\},
$$

where $D_{h_{k + 1},h_k}(\beta ,\beta ') = h_{k + 1}(\beta) - h_k(\beta ') - \langle \nabla h_k(\beta '),\beta -\beta '\rangle$, a recursion that can also be cast as:

$$
\nabla h_{k+1}\left(\beta_{k+1}\right) = \nabla h_{k}\left(\beta_{k}\right) - \gamma_{k} g_{k}.
$$

To derive convergence of the iterates, we prove analogues of classical mirror descent lemmas, generalised to time-varying potentials.

# B.2 The iterates $(\beta_{k})$ follow a stochastic mirror descent with varying potential recursion

In this section we show that the iterates $(\beta_{k})_{k}$ follow a stochastic mirror descent with varying potentials. Before stating the proposition, we recall the definition of the potentials. To do so, we introduce several quantities.

Let $q, q_{\pm} : \mathbb{R} \to \mathbb{R} \cup \{\infty\}$ be defined as:

$$
q_{\pm}(x) = \mp 2x - \ln\left((1 \mp x)^{2}\right),
$$

$$
q(x) = \frac{1}{2}(q_{+}(x) + q_{-}(x)) = -\frac{1}{2}\ln\left((1 - x^{2})^{2}\right),
$$

with the convention that $q(1) = \infty$. Notice that $q(x) \geqslant 0$ for $|x| \leqslant \sqrt{2}$ and $q(x) < 0$ otherwise.
For the iterates $\beta_{k} = u_{k} \odot v_{k} \in \mathbb{R}^{d}$, we recall the definition of the following quantities:

$$
\boldsymbol{\alpha}_{\pm, k} = \boldsymbol{\alpha} \exp\left(-\frac{1}{2} \sum_{\ell = 0}^{k-1} q_{\pm}\left(\gamma_{\ell} \nabla \mathcal{L}_{\mathcal{B}_{\ell}}\left(\beta_{\ell}\right)\right)\right) \in \mathbb{R}_{>0}^{d},
$$

$$
\boldsymbol{\alpha}_{k}^{2} = \boldsymbol{\alpha}_{+,k} \odot \boldsymbol{\alpha}_{-,k},
$$

$$
\phi_{k} = \frac{1}{2} \operatorname{arcsinh}\left(\frac{\boldsymbol{\alpha}_{+,k}^{2} - \boldsymbol{\alpha}_{-,k}^{2}}{2\boldsymbol{\alpha}_{k}^{2}}\right) \in \mathbb{R}^{d}.
$$

Finally, for $k \geqslant 0$, we define the potentials $(h_k: \mathbb{R}^d \to \mathbb{R})_{k \geqslant 0}$ as:

$$
h_{k}(\beta) = \psi_{\boldsymbol{\alpha}_{k}}(\beta) - \left\langle \phi_{k}, \beta \right\rangle, \tag{18}
$$

where $\psi_{\boldsymbol{\alpha}_k}$ is the hyperbolic entropy defined in (4) of scale $\boldsymbol{\alpha}_{k}$:

$$
\psi_{\boldsymbol{\alpha}_{k}}(\beta) = \frac{1}{2} \sum_{i=1}^{d} \left(\beta_{i} \operatorname{arcsinh}\left(\frac{\beta_{i}}{\alpha_{k,i}^{2}}\right) - \sqrt{\beta_{i}^{2} + \alpha_{k,i}^{4}} + \alpha_{k,i}^{2}\right),
$$

where $\alpha_{k,i}$ corresponds to the $i^{th}$ coordinate of the vector $\boldsymbol{\alpha}_{k}$.

Now that all the relevant quantities are defined, we can state the following proposition, which makes explicit the time-varying stochastic mirror descent followed by $(\beta_{k})_{k}$.

Proposition 5. The iterates $(\beta_{k} = u_{k}\odot v_{k})_{k\geqslant 0}$ from Eq. (3) satisfy the Stochastic Mirror Descent recursion with varying potentials $(h_k)_k$:
$$
\nabla h_{k+1}\left(\beta_{k+1}\right) = \nabla h_{k}\left(\beta_{k}\right) - \gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right), \tag{19}
$$

where $h_k: \mathbb{R}^d \to \mathbb{R}$ for $k \geqslant 0$ are defined in Eq. (18). Since $\nabla h_0(\beta_0) = 0$, we have:

$$
\nabla h_{k}\left(\beta_{k}\right) \in \operatorname{span}\left(x_{1}, \dots, x_{n}\right).
$$

Proof. Using Proposition 6, we study the $\frac{1}{2} (w_{+}^{2} - w_{-}^{2})$ parametrisation instead of $u \odot v$: indeed, this is the natural parametrisation to consider when doing the calculations, as it "separates" the recursions on $w_{+}$ and $w_{-}$.

Let us focus on the recursion of $w_{+}$:

$$
w_{+, k+1} = \left(1 - \gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k})\right) \cdot w_{+, k}.
$$

We have:

$$
\begin{array}{l} w_{+, k+1}^{2} = \left(1 - \gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k})\right)^{2} \cdot w_{+, k}^{2} \\ = \exp\left(\ln((1 - \gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}))^{2})\right) \cdot w_{+, k}^{2}, \\ \end{array}
$$

with the convention that $\exp(\ln(0)) = 0$. This leads to:

$$
\begin{array}{l} w_{+, k+1}^{2} = \exp\big(-2\gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}) + 2\gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}) + \ln((1 - \gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}))^{2})\big) \cdot w_{+, k}^{2} \\ = \exp\big(-2\gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}) - q_{+}(\gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}))\big) \cdot w_{+, k}^{2}, \\ \end{array}
$$

since $q_{+}(x) = -2x - \ln((1 - x)^{2})$.
Expanding the recursion and using that $w_{+,k=0}$ is initialised at $w_{+,k=0} = \alpha$ , we thus obtain: + +$$ +\begin{array}{l} w _ {+, k} ^ {2} = \boldsymbol {\alpha} ^ {2} \exp \left(- \sum_ {\ell = 0} ^ {k - 1} q _ {+} \left(\gamma_ {\ell} \nabla \mathcal {L} _ {\mathcal {B} _ {\ell}} (\beta_ {\ell})\right)\right) \exp \left(- 2 \sum_ {\ell = 0} ^ {k - 1} \gamma_ {\ell} \nabla \mathcal {L} _ {\mathcal {B} _ {\ell}} (\beta_ {\ell})\right) \\ = \boldsymbol {\alpha} _ {+, k} ^ {2} \exp \left(- 2 \sum_ {\ell = 0} ^ {k - 1} \gamma_ {\ell} \nabla \mathcal {L} _ {\mathcal {B} _ {\ell}} (\beta_ {\ell})\right), \\ \end{array} +$$ + +where we recall that $\alpha_{\pm ,k}^2 = \alpha^2\exp (-\sum_{\ell = 0}^{k - 1}q_{\pm}(\gamma_\ell g_\ell))$ . One can easily check that we similarly get: + +$$ +w _ {-, k} ^ {2} = \boldsymbol {\alpha} _ {-, k} ^ {2} \exp \left(+ 2 \sum_ {\ell = 0} ^ {k - 1} \gamma_ {\ell} \nabla \mathcal {L} _ {\mathcal {B} _ {\ell}} (\beta_ {\ell})\right), +$$ + +leading to: + +$$ +\begin{array}{l} \beta_ {k} = \frac {1}{2} \left(w _ {+, k} ^ {2} - w _ {-, k} ^ {2}\right) \\ = \frac {1}{2} \boldsymbol {\alpha} _ {+, k} ^ {2} \exp \left(- 2 \sum_ {\ell = 0} ^ {k - 1} \gamma_ {\ell} \nabla \mathcal {L} _ {\mathcal {B} _ {\ell}} (\beta_ {\ell})\right) - \frac {1}{2} \boldsymbol {\alpha} _ {-, k} ^ {2} \exp \left(+ 2 \sum_ {\ell = 0} ^ {k - 1} \gamma_ {\ell} \nabla \mathcal {L} _ {\mathcal {B} _ {\ell}} (\beta_ {\ell})\right). 
\\ \end{array}
$$

Using Lemma 4, the previous equation can be simplified into:

$$
\beta_{k} = \boldsymbol{\alpha}_{+, k} \boldsymbol{\alpha}_{-, k} \sinh\Big(-2\sum_{\ell = 0}^{k-1} \gamma_{\ell} \nabla \mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}) + \operatorname{arcsinh}\left(\frac{\boldsymbol{\alpha}_{+, k}^{2} - \boldsymbol{\alpha}_{-, k}^{2}}{2\boldsymbol{\alpha}_{+, k}\boldsymbol{\alpha}_{-, k}}\right)\Big),
$$

which rewrites as:

$$
\frac{1}{2} \operatorname{arcsinh}\left(\frac{\beta_{k}}{\boldsymbol{\alpha}_{k}^{2}}\right) - \phi_{k} = -\sum_{\ell = 0}^{k-1} \gamma_{\ell} \nabla \mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}) \in \operatorname{span}(x_{1}, \dots, x_{n}),
$$

where $\phi_{k} = \frac{1}{2}\operatorname{arcsinh}\left(\frac{\boldsymbol{\alpha}_{+,k}^{2} - \boldsymbol{\alpha}_{-,k}^{2}}{2\boldsymbol{\alpha}_{k}^{2}}\right)$ and $\boldsymbol{\alpha}_{k}^{2} = \boldsymbol{\alpha}_{+,k} \odot \boldsymbol{\alpha}_{-,k}$. Since the potentials $h_k$ are defined in Eq. (18) as $h_k = \psi_{\boldsymbol{\alpha}_k} - \langle \phi_k, \cdot \rangle$ with

$$
\psi_{\boldsymbol{\alpha}}(\beta) = \frac{1}{2} \sum_{i=1}^{d} \left(\beta_{i} \operatorname{arcsinh}\left(\frac{\beta_{i}}{\boldsymbol{\alpha}_{i}^{2}}\right) - \sqrt{\beta_{i}^{2} + \boldsymbol{\alpha}_{i}^{4}} + \boldsymbol{\alpha}_{i}^{2}\right), \tag{20}
$$

we have precisely $\nabla h_k(\beta_k) = \frac{1}{2}\operatorname{arcsinh}\left(\frac{\beta_k}{\boldsymbol{\alpha}_k^2}\right) - \phi_k$. Hence,

$$
\nabla h_{k}\left(\beta_{k}\right) = -\sum_{\ell < k} \gamma_{\ell} \nabla \mathcal{L}_{\mathcal{B}_{\ell}}\left(\beta_{\ell}\right),
$$

so that:

$$
\nabla h_{k+1}\left(\beta_{k+1}\right) = \nabla h_{k}\left(\beta_{k}\right) - \gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right),
$$

which corresponds to a Mirror Descent with varying potentials $(h_k)_k$.
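The identity $\frac{1}{2}\operatorname{arcsinh}(\beta_k/\boldsymbol{\alpha}_k^2) - \phi_k = -\sum_{\ell<k}\gamma_\ell\nabla\mathcal{L}_{\mathcal{B}_\ell}(\beta_\ell)$ derived above is easy to verify numerically. The sketch below (full-batch gradients for simplicity; the data, scales and names are our choices) runs the $w_\pm$ recursion, accumulates the $q_\pm$ corrections, and checks the identity.

```python
import numpy as np

# Numerical check of the identity in the proof of Proposition 5 for
# full-batch GD on the w_+/w_- parametrisation of a diagonal linear net.
rng = np.random.default_rng(2)
n, d, alpha, gamma = 8, 4, 0.1, 0.01
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -1.0, 0.0, 0.5])
grad_L = lambda b: X.T @ (X @ b - y) / n

q = lambda x, s: -2 * s * x - np.log((1 - s * x) ** 2)  # q_+ (s=+1), q_- (s=-1)
wp, wm = np.full(d, alpha), np.full(d, alpha)
sp, sm, gsum = np.zeros(d), np.zeros(d), np.zeros(d)    # sums of q_+, q_-, gamma*grad
for k in range(200):
    g = grad_L(0.5 * (wp**2 - wm**2))
    sp, sm, gsum = sp + q(gamma * g, +1), sm + q(gamma * g, -1), gsum + gamma * g
    wp, wm = (1 - gamma * g) * wp, (1 + gamma * g) * wm

beta = 0.5 * (wp**2 - wm**2)
a2p, a2m = alpha**2 * np.exp(-sp), alpha**2 * np.exp(-sm)  # alpha_{+,k}^2, alpha_{-,k}^2
ak2 = np.sqrt(a2p * a2m)                                   # alpha_k^2
phi = 0.5 * np.arcsinh((a2p - a2m) / (2 * ak2))
err = np.max(np.abs(0.5 * np.arcsinh(beta / ak2) - phi + gsum))
print(err)  # tiny (the identity is exact in exact arithmetic)
```

The check also confirms the sign convention: the accumulated gradient sum enters with a minus sign on the right-hand side.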
+ +![](images/d2889fcce9acc64190df19ee208a395d4982fbffc9d817aac54ba123dc5ffd31.jpg) + +# C Equivalence of the $u \odot v$ and $\frac{1}{2} (w_{+}^{2} - w_{-}^{2})$ parametrisations + +We here prove the equivalence between the $\frac{1}{2} (w_{+}^{2} - w_{-}^{2})$ and $u \odot v$ parametrisations, that we use throughout the proofs in the Appendix. + +Proposition 6. Let $(\beta_k)_{k\geqslant 0}$ and $(\beta_k')_{k\geqslant 0}$ be respectively generated by stochastic gradient descent on the $u\odot v$ and $\frac{1}{2} (w_{+}^{2} - w_{-}^{2})$ parametrisations: + +$$ +(u _ {k + 1}, v _ {k + 1}) = (u _ {k}, v _ {k}) - \gamma_ {k} \nabla_ {u, v} \left(\mathcal {L} _ {\mathcal {B} _ {k}} (u \odot v)\right) (u _ {k}, v _ {k}), +$$ + +and + +$$ +w _ {\pm , k + 1} = w _ {\pm , k} - \gamma_ {k} \nabla_ {w _ {\pm}} \left(\mathcal {L} _ {\mathcal {B} _ {k}} \left(\frac {1}{2} \left(w _ {+} ^ {2} - w _ {-} ^ {2}\right)\right)\right) \left(w _ {+, k}, w _ {-, k}\right), +$$ + +initialised as $u_0 = \sqrt{2}\pmb {\alpha},v_0 = 0$ and $w_{+,0} = w_{-,0} = \pmb{\alpha}$ . Then for all $k\geqslant 0$ , we have $\beta_{k} = \beta_{k}^{\prime}$ + +Proof. We have: + +$$ +w _ {\pm , 0} = \boldsymbol {\alpha}, \quad w _ {\pm , k + 1} = (1 \mp \gamma_ {k} \nabla \mathcal {L} _ {\mathcal {B} _ {k}} (\beta_ {k} ^ {\prime})) w _ {\pm , k}, +$$ + +and + +$$ +u _ {0} = \sqrt {2} \alpha , \quad v _ {0} = 0, \quad u _ {k + 1} = u _ {k} - \gamma_ {k} \nabla \mathcal {L} _ {\mathcal {B} _ {k}} (\beta_ {k}) v _ {k}, \quad v _ {k + 1} = v _ {k} - \gamma_ {k} \nabla \mathcal {L} (\beta_ {k}) u _ {k}. 
+$$ + +Hence, + +$$ +\beta_ {k + 1} = \left(1 + \gamma_ {k} ^ {2} \nabla \mathcal {L} \left(\beta_ {k}\right) ^ {2}\right) \beta_ {k} - \gamma_ {k} \left(u _ {k} ^ {2} + v _ {k} ^ {2}\right) \nabla \mathcal {L} _ {\mathcal {B} _ {k}} \left(\beta_ {k}\right), +$$ + +and + +$$ +\beta_ {k + 1} ^ {\prime} = \left(1 + \gamma_ {k} ^ {2} \nabla \mathcal {L} _ {\mathcal {B} _ {k}} \left(\beta_ {k} ^ {\prime}\right) ^ {2}\right) \beta_ {k} ^ {\prime} - \gamma_ {k} \left(w _ {+, k} ^ {2} + w _ {-, k} ^ {2}\right) \nabla \mathcal {L} _ {\mathcal {B} _ {k}} \left(\beta_ {k} ^ {\prime}\right). +$$ + +Then, let $z_{k} = \frac{1}{2}\bigl (u_{k}^{2} - v_{k}^{2}\bigr)$ and $z_k^\prime = w_{+,k}w_{-k}$ . We have $z_0 = \alpha^2$ , $z_0' = \alpha^2$ and: + +$$ +z _ {k + 1} = \left(1 - \gamma_ {k} ^ {2} \nabla \mathcal {L} _ {\mathcal {B} _ {k}} (\beta_ {k}) ^ {2}\right) z _ {k}, \quad z _ {k + 1} ^ {\prime} = \left(1 - \gamma_ {k} ^ {2} \nabla \mathcal {L} _ {\mathcal {B} _ {k}} (\beta_ {k} ^ {\prime}) ^ {2}\right) z _ {k} ^ {\prime}. +$$ + +Using $a^2 + b^2 = \sqrt{(2ab)^2 + (a^2 - b^2)^2}$ for $a, b \in \mathbb{R}$ , we finally obtain that: + +$$ +u _ {k} ^ {2} + v _ {k} ^ {2} = \sqrt {(2 \beta_ {k}) ^ {2} + (2 z _ {k}) ^ {2}}, w _ {+, k} ^ {2} + w _ {-, k} ^ {2} = \sqrt {(2 \beta_ {k} ^ {\prime}) ^ {2} + (2 z _ {k} ^ {\prime}) ^ {2}}. +$$ + +We conclude by observing that $(\beta_{k},z_{k})$ and $(\beta_k',z_k')$ follow the exact same recursions, initialised at the same value $(0,\alpha^{2})$ . + +![](images/e24cfd92e1139bdb2820feba06e543179b3db2e0de4ca73f6ac7556a617c7336.jpg) + +# D Convergence of $\psi_{\alpha}$ to a weighted $\ell_1$ norm and harmful behaviour + +We show that when taking the scale of the initialisation to 0, one must be careful in the characterisation of the limiting norm, indeed if each entry does not go to zero "at the same speed", then the limit norm is a weighted $\ell_1$ -norm rather than the classical $\ell_1$ norm. + +Proposition 7. 
For $\alpha \geqslant 0$ and a vector $h\in \mathbb{R}^d$ , let $\tilde{\alpha} = \alpha \exp (-h\ln (1 / \alpha))\in \mathbb{R}^d$ . Then we have that for all $\beta \in \mathbb{R}^d$ + +$$ +\psi_ {\tilde {\alpha}} (\beta) \underset {\alpha \rightarrow 0} {\sim} \ln \left(\frac {1}{\alpha}\right) \cdot \sum_ {i = 1} ^ {d} (1 + h _ {i}) | \beta_ {i} |. +$$ + +Proof. Recall that + +$$ +\psi_ {\tilde {\alpha}} (\beta) = \frac {1}{2} \sum_ {i = 1} ^ {d} \left(\beta_ {i} \mathrm {a r c s i n h} \left(\frac {\beta_ {i}}{\tilde {\alpha} _ {i} ^ {2}}\right) - \sqrt {\beta_ {i} ^ {2} + \tilde {\alpha} _ {i} ^ {4}} + \tilde {\alpha} _ {i} ^ {2}\right) +$$ + +Using that $\operatorname{arcsinh}(x) \underset{|x| \to \infty}{\sim} \operatorname{sgn}(x) \ln(|x|)$ , and that $\ln \left(\frac{1}{\hat{\alpha}_i^2}\right) = (1 + h_i) \ln \left(\frac{1}{\alpha^2}\right)$ we obtain that + +$$ +\begin{array}{l} \psi_ {\tilde {\alpha}} (\beta) \underset {\alpha \rightarrow 0} {\sim} \frac {1}{2} \sum_ {i = 1} ^ {d} \operatorname {s g n} (\beta_ {i}) \beta_ {i} (1 + h _ {i}) \ln \left(\frac {1}{\alpha^ {2}}\right) \\ = \frac {1}{2} \ln (\frac {1}{\alpha^ {2}}) \sum_ {i = 1} ^ {d} (1 + h _ {i}) | \beta_ {i} |. \\ \end{array} +$$ + +![](images/86d3f4ee5aa4989d0d7ead444a65a889f5effe329a3e5f71f049a36880e42b54.jpg) + +The following Fig. 7 illustrates the effect of the non-uniform shape $\alpha$ on the corresponding potential $\psi_{\alpha}$ . + +![](images/a078c199a1b2e5bcfbba53b626b0fc5ccbd8c86a3b3fcd7b0b8697c9f9d79499.jpg) +Figure 7: Left: Uniform $\alpha = \alpha \mathbf{1}$ : a smaller scale $\alpha$ leads to the potential $\psi_{\alpha}$ being closer to the $\ell_1$ -norm. Right: A non uniform $\alpha$ can lead to the recovery of a solution which is very far from the minimum $\ell_1$ -norm solution. The affine line corresponds to the set of interpolators when $n = 1$ , $d = 2$ and $s = 1$ . 
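The convergence of Proposition 7 is also easy to observe numerically. The snippet below (our own illustration; the profile $h$ and test point $\beta$ are arbitrary choices) evaluates $\psi_{\tilde{\boldsymbol{\alpha}}}$ exactly as in the proof above and compares the normalised value to the weighted $\ell_1$ norm.

```python
import numpy as np

# psi_alpha(beta) for a coordinate-wise scale vector alpha (hyperbolic entropy).
def psi(alpha_vec, beta):
    a2 = alpha_vec**2
    return 0.5 * np.sum(beta * np.arcsinh(beta / a2)
                        - np.sqrt(beta**2 + a2**2) + a2)

h = np.array([0.0, 0.5, 1.0])                 # non-uniform decay profile (our choice)
beta = np.array([1.0, -2.0, 3.0])
weighted_l1 = np.sum((1 + h) * np.abs(beta))  # = 10 here
for alpha in [1e-2, 1e-4, 1e-6]:
    alpha_tilde = alpha * np.exp(-h * np.log(1 / alpha))  # = alpha**(1 + h)
    ratio = psi(alpha_tilde, beta) / np.log(1 / alpha)
    print(alpha, ratio)  # ratio -> sum_i (1 + h_i) |beta_i| = 10 as alpha -> 0
```

The ratio approaches the weighted $\ell_1$ norm at rate $\mathcal{O}(1/\ln(1/\alpha))$, which is why the shape of $\boldsymbol{\alpha}$, and not only its magnitude, determines the recovered interpolator.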
More generally, for $\boldsymbol{\alpha}$ such that $\alpha_{i}\rightarrow 0$ for all $i\in [d]$ at rates such that $\ln (1 / \alpha_i)\sim q_i\ln (1 / \max_i\alpha_i)$, we retrieve a weighted $\ell_1$ norm:

$$
\frac{\psi_{\boldsymbol{\alpha}}(\beta)}{\ln\left(1 / \max_i \alpha_i^{2}\right)} \rightarrow \sum_{i=1}^{d} q_{i} \left| \beta_{i} \right|.
$$

Hence, even for arbitrarily small $\max_i\alpha_i$, if the shape of $\boldsymbol{\alpha}$ is 'bad', the interpolator $\beta_{\boldsymbol{\alpha}}^{\star}$ that minimises $\psi_{\boldsymbol{\alpha}}$ can be arbitrarily far away from $\beta_{\ell^1}^{\star}$, the interpolator of minimal $\ell_1$ norm.

We illustrate the importance of the previous proposition in the following example.

Example 1. We illustrate how, even for arbitrarily small $\max_i\alpha_i$, the interpolator $\beta_{\boldsymbol{\alpha}}^{\star}$ that minimises $\psi_{\boldsymbol{\alpha}}$ can be far from the minimum $\ell_1$ norm solution, due to the non-uniform shape of $\boldsymbol{\alpha}$. The message of this example is that for $\boldsymbol{\alpha} \to 0$ non-uniformly across coordinates, if the coordinates of $\boldsymbol{\alpha}$ that go slowly to 0 coincide with the non-null coordinates of the sparse interpolator we want to retrieve, then $\beta_{\boldsymbol{\alpha}}^{\star}$ will be far from the sparse solution.

A simple counterexample can be built: let $\beta_{\mathrm{sparse}}^{\star} = (1, \dots, 1, 0, \dots, 0)$ (with only the $s = o(d)$ first coordinates non-null), and let $(x_i)$, $(y_i)$ be generated as $y_i = \langle \beta_{\mathrm{sparse}}^{\star}, x_i \rangle$ with $x_i \sim \mathcal{N}(0, I_d)$. For $n$ large enough ($n$ of order $s \ln(d)$ where $s$ is the sparsity), the design matrix $X$ is RIP [10], so that the minimum $\ell_1$ norm interpolator $\beta_{\ell^1}^{\star}$ is exactly equal to $\beta_{\mathrm{sparse}}^{\star}$.
However, if $\boldsymbol{\alpha}$ is such that $\max_i\alpha_i\to 0$ with $h_i \gg 1$ for $i\leqslant s$ and $h_i = 1$ for $i\geqslant s + 1$ ($h_i$ as in Proposition 7), $\beta_{\boldsymbol{\alpha}}^{\star}$ will be forced to verify $\beta_{\boldsymbol{\alpha},i}^{\star} = 0$ for $i\leqslant s$, and hence $\| \beta_{\boldsymbol{\alpha}}^{\star} - \beta_{\ell^{1}}^{\star}\|_{1}\geqslant s$.

# E Main descent lemma and boundedness of the iterates

The goal of this section is to prove the following proposition, our main descent lemma: for well-chosen stepsizes, the Bregman divergences $(D_{h_k}(\beta^\star ,\beta_k))_{k\geqslant 0}$ decrease. We then use this proposition to bound the iterates for both SGD and GD.

Proposition 8. There exist a constant $c > 0$ and $B > 0$ such that $B = \mathcal{O}(\inf_{\beta^{\star}\in \mathcal{S}}\| \beta^{\star}\|_{\infty})$ for GD and $B = \mathcal{O}(\ln (1 / \alpha)\inf_{\beta^{\star}\in \mathcal{S}}\| \beta^{\star}\|_{\infty})$ for SGD, such that if $\gamma_k\leqslant \frac{c}{LB}$ for all $k$, then we have, for all $k\geqslant 0$ and any interpolator $\beta^{\star}\in \mathcal{S}$:

$$
D_{h_{k+1}}\left(\beta^{\star}, \beta_{k+1}\right) \leqslant D_{h_{k}}\left(\beta^{\star}, \beta_{k}\right) - \gamma_{k} \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right).
$$

To prove this result, we first provide a general descent lemma for time-varying mirror descent (Proposition 9, Appendix E.1), before proving the proposition for a fixed iteration $k$ and a bound $B > 0$ on the iterates' $\ell^\infty$ norm in Appendix E.2 (Proposition 10). We finally use this to prove a bound on the iterates' $\ell^\infty$ norm in Appendix E.3.

# E.1 Descent lemma for (stochastic) mirror descent with varying potentials

In the following we adapt a classical mirror descent equality to time-varying potentials; it differs from Orabona et al. [47] in that it enables us to prove the decrease of the Bregman divergences of the iterates.
Moreover, as for classical MD, it is an equality.

Proposition 9. For functions $h,g:\mathbb{R}^d\to \mathbb{R}$, let $D_{h,g}(\beta ,\beta^{\prime}) = h(\beta) - g(\beta^{\prime}) - \langle \nabla g(\beta^{\prime}),\beta -\beta^{\prime}\rangle$ for $\beta ,\beta^{\prime}\in \mathbb{R}^{d}$. Let $(h_k)$ be strictly convex functions on $\mathbb{R}^d$ and $\mathcal{L}$ a convex function on $\mathbb{R}^d$. Let $(\beta_{k})$ be defined recursively through $\beta_0\in \mathbb{R}^d$ and

$$
\beta_{k+1} \in \operatorname{argmin}_{\beta \in \mathbb{R}^{d}} \left\{\gamma_{k} \langle \nabla \mathcal{L}(\beta_{k}), \beta - \beta_{k} \rangle + D_{h_{k+1}, h_{k}}(\beta, \beta_{k}) \right\},
$$

where we assume that the minimum is unique and attained in $\mathbb{R}^d$. Then $(\beta_{k})$ satisfies

$$
\nabla h_{k+1}\left(\beta_{k+1}\right) = \nabla h_{k}\left(\beta_{k}\right) - \gamma_{k} \nabla \mathcal{L}\left(\beta_{k}\right),
$$

and for any $\beta \in \mathbb{R}^d$,

$$
\begin{array}{l} D_{h_{k+1}}(\beta, \beta_{k+1}) = D_{h_{k}}(\beta, \beta_{k}) - \gamma_{k} \langle \nabla \mathcal{L}(\beta_{k}), \beta_{k} - \beta \rangle + D_{h_{k+1}}(\beta_{k}, \beta_{k+1}) \\ - \left(h_{k+1} - h_{k}\right)\left(\beta_{k}\right) + \left(h_{k+1} - h_{k}\right)(\beta). \\ \end{array}
$$

Proof. Let $\beta \in \mathbb{R}^d$. Since we assume that the minimum through which $\beta_{k + 1}$ is computed is attained in $\mathbb{R}^d$, the gradient of the function $V_{k}(\beta) = \gamma_{k}\langle \nabla \mathcal{L}(\beta_{k}),\beta -\beta_{k}\rangle +D_{h_{k + 1},h_{k}}(\beta ,\beta_{k})$ evaluated at $\beta_{k + 1}$ is null, leading to $\nabla h_{k + 1}(\beta_{k + 1}) = \nabla h_k(\beta_k) - \gamma_k\nabla \mathcal{L}(\beta_k)$.

Then, since $\nabla V_{k}(\beta_{k + 1}) = 0$, we have $D_{V_k}(\beta ,\beta_{k + 1}) = V_k(\beta) - V_k(\beta_{k + 1})$.
Using $\nabla^2 V_k = \nabla^2 h_{k + 1}$, we also have $D_{V_k} = D_{h_{k + 1}}$. Hence:

$$
D_{h_{k+1}}(\beta, \beta_{k+1}) = \gamma_{k} \langle \nabla \mathcal{L}(\beta_{k}), \beta - \beta_{k+1} \rangle + D_{h_{k+1}, h_{k}}(\beta, \beta_{k}) - D_{h_{k+1}, h_{k}}\left(\beta_{k+1}, \beta_{k}\right).
$$

We write $\gamma_{k}\langle \nabla \mathcal{L}(\beta_{k}),\beta -\beta_{k + 1}\rangle = \gamma_{k}\langle \nabla \mathcal{L}(\beta_{k}),\beta -\beta_{k}\rangle +\gamma_{k}\langle \nabla \mathcal{L}(\beta_{k}),\beta_{k} - \beta_{k + 1}\rangle$. We also have $\gamma_{k}\langle \nabla \mathcal{L}(\beta_{k}),\beta_{k} - \beta_{k + 1}\rangle = \langle \nabla h_{k}(\beta_{k}) - \nabla h_{k + 1}(\beta_{k + 1}),\beta_{k} - \beta_{k + 1}\rangle = D_{h_{k},h_{k + 1}}(\beta_{k},\beta_{k + 1}) + D_{h_{k + 1},h_k}(\beta_{k + 1},\beta_k)$, so that $\gamma_{k}\langle \nabla \mathcal{L}(\beta_{k}),\beta_{k} - \beta_{k + 1}\rangle -D_{h_{k + 1},h_{k}}(\beta_{k + 1},\beta_{k}) = D_{h_{k},h_{k + 1}}(\beta_{k},\beta_{k + 1})$. Thus,

$$
D_{h_{k+1}}(\beta, \beta_{k+1}) = D_{h_{k+1}, h_{k}}(\beta, \beta_{k}) - \gamma_{k} \langle \nabla \mathcal{L}(\beta_{k}), \beta_{k} - \beta \rangle + D_{h_{k}, h_{k+1}}(\beta_{k}, \beta_{k+1}),
$$

and writing $D_{h,g}(\beta ,\beta^{\prime}) = D_{g}(\beta ,\beta^{\prime}) + h(\beta) - g(\beta)$ concludes the proof.

![](images/047bee340b0eb519bad74fe7ef9ced734b8dd1d66b698d2c721105ef6176f532.jpg)

# E.2 Proof of Proposition 10

In the next proposition, we use Proposition 9 to prove our main descent lemma. To that end, we bound the error terms that appear in Proposition 9 as functions of $\mathcal{L}_{\mathcal{B}_k}(\beta_k)$ and norms of $\beta_{k},\beta_{k + 1}$, so that for explicit stepsizes, the error terms can be cancelled by half of the negative quantity $-2\mathcal{L}_{\mathcal{B}_k}(\beta_k)$.
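Before proceeding, note that the equality of Proposition 9 can be sanity-checked numerically whenever the potentials are explicit. The sketch below (entirely ours) uses hypothetical quadratic potentials $h_k(\beta) = \frac{c_k}{2}\|\beta\|^2$, for which $D_{h_k}(a,b) = \frac{c_k}{2}\|a-b\|^2$ and every term of the equality is computable in closed form.

```python
import numpy as np

# One-step check of Proposition 9's equality for quadratic potentials
# h_k(beta) = 0.5 * c_k * ||beta||^2, so D_{h_k}(a, b) = 0.5 * c_k * ||a - b||^2.
rng = np.random.default_rng(3)
d, gamma, c0, c1 = 5, 0.1, 1.0, 1.3
A = rng.normal(size=(7, d))
y = rng.normal(size=7)
grad_L = lambda b: A.T @ (A @ b - y) / len(y)   # convex least-squares loss

D = lambda c, a, b: 0.5 * c * np.sum((a - b) ** 2)
h = lambda c, b: 0.5 * c * np.sum(b ** 2)

beta0 = rng.normal(size=d)
g = grad_L(beta0)
beta1 = (c0 * beta0 - gamma * g) / c1   # grad h_1(beta_1) = grad h_0(beta_0) - gamma*g
beta = rng.normal(size=d)               # arbitrary comparison point

lhs = D(c1, beta, beta1)
rhs = (D(c0, beta, beta0) - gamma * np.dot(g, beta0 - beta) + D(c1, beta0, beta1)
       - (h(c1, beta0) - h(c0, beta0)) + (h(c1, beta) - h(c0, beta)))
print(abs(lhs - rhs))  # ~ 0: the equality is exact, not an inequality
```

The same check goes through for any pair $(h_k, h_{k+1})$ with invertible gradients; the quadratic case is only the simplest instance.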
Additional notation: let $L_2, L_\infty > 0$ be such that $\| H_{\mathcal{B}}\beta \|_2 \leqslant L_2\| \beta \|_2$ and $\| H_{\mathcal{B}}\beta \|_\infty \leqslant L_\infty\| \beta \|_\infty$ for all $\beta$ and all batches $\mathcal{B} \subset [n]$ of size $b$.

Proposition 10. Let $k \geqslant 0$ and $B > 0$. Provided that $\| \beta_k\|_{\infty},\| \beta_{k + 1}\|_{\infty},\| \beta^{\star}\|_{\infty} \leqslant B$ and $\gamma_{k} \leqslant \frac{c}{LB}$ where $c > 0$ is some numerical constant, we have:

$$
D_{h_{k+1}}\left(\beta^{\star}, \beta_{k+1}\right) \leqslant D_{h_{k}}\left(\beta^{\star}, \beta_{k}\right) - \gamma_{k} \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right). \tag{21}
$$

Proof. Let $\beta^{\star} \in \mathcal{S}$ be any interpolator. From Proposition 9, using $\langle \nabla \mathcal{L}_{\mathcal{B}_k}(\beta_k), \beta_k - \beta^\star \rangle = 2\mathcal{L}_{\mathcal{B}_k}(\beta_k)$:

$$
D_{h_{k+1}}(\beta^{\star}, \beta_{k+1}) = D_{h_{k}}(\beta^{\star}, \beta_{k}) - 2\gamma_{k} \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}) + D_{h_{k+1}}(\beta_{k}, \beta_{k+1}) - (h_{k+1} - h_{k})(\beta_{k}) + (h_{k+1} - h_{k})(\beta^{\star}).
$$

We want to bound the last three terms of this equality. First, to bound the last two, we apply Lemma 7, assuming that $\| \beta^{\star}\|_{\infty},\| \beta_{k + 1}\|_{\infty}\leqslant B$:

$$
-\left(h_{k+1} - h_{k}\right)\left(\beta_{k}\right) + \left(h_{k+1} - h_{k}\right)\left(\beta^{\star}\right) \leqslant 24 B L_{2} \gamma_{k}^{2} \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right).
$$

We now bound $D_{h_{k + 1}}(\beta_k,\beta_{k + 1})$.
Classical Bregman manipulations provide that

$$
\begin{array}{l} D_{h_{k+1}}\left(\beta_{k}, \beta_{k+1}\right) = D_{h_{k+1}^{*}}\left(\nabla h_{k+1}\left(\beta_{k+1}\right), \nabla h_{k+1}\left(\beta_{k}\right)\right) \\ = D_{h_{k+1}^{*}}\left(\nabla h_{k}\left(\beta_{k}\right) - \gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right), \nabla h_{k+1}\left(\beta_{k}\right)\right). \\ \end{array}
$$

From Lemma 6, $h_{k + 1}$ is $\min(1 / (4\alpha_{k + 1}^2), 1 / (4B))$-strongly convex on the $\ell^\infty$-centered ball of radius $B$; therefore $h_{k + 1}^*$ is $\max(4\alpha_{k + 1}^2, 4B) = 4B$-smooth on this ball (for $\alpha$ small enough or $B$ big enough), leading to:

$$
\begin{array}{l} D_{h_{k+1}}\left(\beta_{k}, \beta_{k+1}\right) \leqslant 2 B \| \nabla h_{k}\left(\beta_{k}\right) - \gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right) - \nabla h_{k+1}\left(\beta_{k}\right) \|_{2}^{2} \\ \leqslant 4 B \left(\| \nabla h_{k}(\beta_{k}) - \nabla h_{k+1}(\beta_{k}) \|_{2}^{2} + \| \gamma_{k} \nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}) \|_{2}^{2}\right). \\ \end{array}
$$

Using $|\nabla h_k(\beta) - \nabla h_{k + 1}(\beta)|\leqslant 2\delta_k$ where $\delta_{k} = q(\gamma_{k}\nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}))$, we get that:

$$
D_{h_{k+1}}\left(\beta_{k}, \beta_{k+1}\right) \leqslant 8 B \| \delta_{k} \|_{2}^{2} + 4 B L \gamma_{k}^{2} \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right).
$$

Now, $\| \delta_k\|_2^2 \leqslant \| \delta_k\|_1 \| \delta_k\|_\infty$, and using Lemma 5,

$$
\| \delta_k \|_1 \| \delta_k \|_\infty \leqslant 4 \| \gamma_k \nabla \mathcal{L}_{\mathcal{B}_k}(\beta_k) \|_2^2 \, \| \gamma_k \nabla \mathcal{L}_{\mathcal{B}_k}(\beta_k) \|_\infty^2 \leqslant 2 \| \gamma_k \nabla \mathcal{L}_{\mathcal{B}_k}(\beta_k) \|_2^2,
$$

since $\| \gamma_k \nabla \mathcal{L}_{\mathcal{B}_k}(\beta_k) \|_\infty \leqslant \gamma_k L_\infty \| \beta_k - \beta^\star \|_\infty \leqslant \gamma_k \times 2LB \leqslant 1/2$ is verified for $\gamma_{k}\leqslant 1 / (4LB)$. Thus,

$$
D_{h_{k+1}}\left(\beta_{k}, \beta_{k+1}\right) \leqslant 40 B L_{2} \gamma_{k}^{2} \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right).
$$

Hence, provided that $\| \beta_k\|_{\infty}\leqslant B$, $\| \beta_{k + 1}\|_{\infty}\leqslant B$ and $\gamma_{k}\leqslant 1 / (4LB)$, we have:

$$
D_{h_{k+1}}\left(\beta^{\star}, \beta_{k+1}\right) \leqslant D_{h_{k}}\left(\beta^{\star}, \beta_{k}\right) - 2\gamma_{k} \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right) + 64 L_{2} \gamma_{k}^{2} B \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right),
$$

and thus

$$
D_{h_{k+1}}\left(\beta^{\star}, \beta_{k+1}\right) \leqslant D_{h_{k}}\left(\beta^{\star}, \beta_{k}\right) - \gamma_{k} \mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right)
$$

if $\gamma_k \leqslant \frac{c}{BL}$, where $c = \frac{1}{64}$.

![](images/236414be96459fc1c914ce2ae174ad136a2ceec0912c343748014a678b7870bb.jpg)

# E.3 Bound on the iterates

We now bound the iterates $(\beta_{k})$ by an explicit constant $B$ that depends on $\| \beta^{\star}\| _1$ (for any fixed $\beta^{\star}\in \mathcal{S}$).
The first bound we prove holds for both SGD and GD, and is of the form $\mathcal{O}(\|\beta^{\star}\|_{1}\ln(1/\alpha^{2}))$, while the second bound, which holds only for GD $(b = n)$, is of order $\mathcal{O}(\|\beta^{\star}\|_{1})$ (independent of $\alpha$). While a bound independent of $\alpha$ is only proved for GD, we believe that such a result also holds for SGD, and in both cases $B$ should be thought of as being of order $\mathcal{O}(\|\beta^{\star}\|_{1})$.

# E.3.1 Bound that depends on $\alpha$ for GD and SGD

A consequence of Proposition 10 is the boundedness of the iterates, as shown in the next corollary. Hence, Proposition 10 can be applied with $B$ a uniform bound on the $\ell^{\infty}$ norm of the iterates.

Corollary 1. Let $B = 3\|\beta^{\star}\|_{1}\ln\left(1 + \frac{\|\beta^{\star}\|_{1}}{\alpha^{2}}\right)$. For stepsizes $\gamma_{k} \leqslant \frac{c}{BL}$, we have $\|\beta_k\|_\infty \leqslant B$ for all $k \geqslant 0$.

Proof. We proceed by induction. Let $k \geqslant 0$ be such that $\|\beta_k\|_{\infty} \leqslant B$ for some $B > 0$ and $D_{h_k}(\beta^\star, \beta_k) \leqslant D_{h_0}(\beta^\star, \beta_0)$ (note that these two properties are verified for $k = 0$, since $\beta_0 = 0$). For $\gamma_k$ sufficiently small (i.e., satisfying $\gamma_k \leqslant \frac{c}{B'L}$ where $B' \geqslant \|\beta_{k+1}\|_{\infty}, \|\beta_k\|_{\infty}, \|\beta^\star\|_{\infty}$), using Proposition 10, we have $D_{h_{k+1}}(\beta^\star, \beta_{k+1}) \leqslant D_{h_k}(\beta^\star, \beta_k)$, so that $D_{h_{k+1}}(\beta^\star, \beta_{k+1}) \leqslant D_{h_0}(\beta^\star, \beta_0)$, which can be rewritten as:

$$
\sum_{i = 1}^{d}\alpha_{k+1,i}^{2}\Big(\sqrt{1 + \Big(\frac{\beta_{k+1,i}}{\alpha_{k+1,i}^{2}}\Big)^{2}} - 1\Big) \leqslant \sum_{i = 1}^{d}\beta_{i}^{\star}\operatorname{arcsinh}\Big(\frac{\beta_{k+1,i}}{\alpha^{2}}\Big).
$$

Hence, $\|\beta_{k+1}\|_1 \leqslant \|\beta^\star\|_1\ln(1 + \frac{\|\beta_{k+1}\|_1}{\alpha^2})$. We then notice that for $x, y > 0$, $x \leqslant y\ln(1+x)$ implies $x \leqslant 3y\ln(1+y)$: indeed, since $\ln(1+x) \leqslant \sqrt{x}$ for all $x > 0$, we get $x \leqslant y\sqrt{x}$, hence $x \leqslant y^2$, and therefore $x \leqslant y\ln(1+y^2) \leqslant 2y\ln(1+y) \leqslant 3y\ln(1+y)$. In our case, $x = \|\beta_{k+1}\|_1/\alpha^2$ and $y = \|\beta^\star\|_1/\alpha^2$.

Hence, we deduce that $\|\beta_{k+1}\|_1 \leqslant B$, where $B = 3\|\beta^{\star}\|_{1}\ln(1 + \frac{\|\beta^{\star}\|_{1}}{\alpha^{2}})$.

This is true as long as $\gamma_{k}$ is tuned using $B'$, a bound on $\max(\|\beta_k\|_\infty, \|\beta_{k+1}\|_\infty)$. Using the continuity of $\beta_{k+1}$ as a function of $\gamma_{k}$ ($\beta_{k}$ being fixed), we show that $\gamma_{k} \leqslant \frac{1}{2}\times\frac{c}{BL}$ can be used with this $B$. Indeed, let $\phi: \mathbb{R}^{+}\to\mathbb{R}$ be the function that takes as input $\gamma_{k} \geqslant 0$ and outputs the corresponding $\|\beta_{k+1}\|_{\infty}$: $\phi$ is continuous. Let $\gamma_r = \frac{1}{2}\times\frac{c}{rL}$ for $r > 0$ and $\bar{r} = \sup\left\{r \geqslant 0 : B < \phi(\gamma_r)\right\}$ (the set is upper-bounded; if it is empty, we do not need what follows, since it means that any stepsize leads to $\|\beta_{k+1}\|_{\infty} \leqslant B$). By continuity of $\phi$, $\phi(\gamma_{\bar{r}}) = B$. Furthermore, for all $r$ that satisfy $r \geqslant \max(\phi(\gamma_r), B) \geqslant \max(\phi(\gamma_r), \|\beta_k\|_\infty, \|\beta^\star\|_\infty)$, we have, using what is proved just above, that $\|\beta_{k+1}\|_{\infty} \leqslant B$ and thus $\phi(\gamma_r) \leqslant B$ for such an $r$:

Lemma 1.
For $r > 0$ such that $r \geqslant \max(\phi(\gamma_r), B)$, we have $\phi(\gamma_r) \leqslant B$.

Now, if $\bar{r} > B$: by definition of $\bar{r}$ and by continuity of $\phi$, since $\phi(\gamma_{\bar{r}}) = B$, there exists some $B < r < \bar{r}$ such that $\phi(\gamma_r) > B$ (definition of the supremum) and $\phi(\gamma_r) \leqslant 2B$ (continuity of $\phi$). This particular choice of $r$ thus satisfies $r > B$ and $\phi(\gamma_r) \leqslant 2B \leqslant 2r$, leading to $\phi(\gamma_r) \leqslant B$ by Lemma 1, hence a contradiction: we thus have $\bar{r} \leqslant B$.

This concludes the induction: for all $r \geqslant B$, we have $r \geqslant \bar{r}$, so that $\phi(\gamma_r) \leqslant B$, and thus for all stepsizes $\gamma \leqslant \frac{c}{2LB}$, we have $\|\beta_{k+1}\|_\infty \leqslant B$.

![](images/66808889c3f1d58a08fb1dda16a35b6de54c0d4da852d29178326bf6de196361.jpg)

# E.3.2 Bound independent of $\alpha$

In this subsection, we assume that $b = n$. We prove that for gradient descent, the iterates are bounded by a constant that does not depend on $\alpha$.

Proposition 11. Assume that $b = n$ (full batch setting). There exists some $B = \mathcal{O}(\|\beta^{\star}\|_{1})$ such that for stepsizes $\gamma_k \leqslant \frac{c}{BL}$, we have $\|\beta_k\|_{\infty} \leqslant B$ for all $k \geqslant 0$.

Proof. We begin by proving the following proposition: for sufficiently small stepsizes, the loss values decrease. In the following lemma we also provide a bound on the gradient descent iterates $(w_{+,k}, w_{-,k})$, which will be useful to show that the loss is decreasing.

Proposition 12. For $\gamma_k \leqslant \frac{c}{LB}$ where $B \geqslant \max(\|\beta_k\|_{\infty}, \|\beta_{k+1}\|_{\infty})$, we have $\mathcal{L}(\beta_{k+1}) \leqslant \mathcal{L}(\beta_k)$.

Proof. Oddly, using the time-varying mirror descent recursion is not the easiest way to show the decrease of the loss, due to the error terms which come up.
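As a numerical illustration of Proposition 12 (not part of the proof), one can run gradient descent on the reparameterised loss $F(w) = \frac{1}{2}\|y - \frac{1}{2}X(w_+^2 - w_-^2)\|_2^2$ on a small random problem and check that the loss is non-increasing for a small constant stepsize; the sizes, seed and stepsize below are arbitrary illustrative choices.

```python
import random

random.seed(2)
n, d = 3, 6                                   # illustrative sizes (overparameterised)
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
beta_true = [1.0, -2.0, 0.0, 0.0, 0.5, 0.0]
y = [sum(X[i][j] * beta_true[j] for j in range(d)) for i in range(n)]

def loss(beta):
    # L(beta) = 1/2 * sum_i (y_i - <x_i, beta>)^2
    return 0.5 * sum((y[i] - sum(X[i][j] * beta[j] for j in range(d))) ** 2 for i in range(n))

def grad(beta):
    res = [sum(X[i][j] * beta[j] for j in range(d)) - y[i] for i in range(n)]
    return [sum(res[i] * X[i][j] for i in range(n)) for j in range(d)]

alpha, gamma = 0.1, 0.001                     # small initialisation and stepsize
w_plus = [alpha] * d
w_minus = [alpha] * d
losses = []
for _ in range(2000):
    beta = [(w_plus[j] ** 2 - w_minus[j] ** 2) / 2 for j in range(d)]
    losses.append(loss(beta))
    g = grad(beta)
    # a gradient step on F is multiplicative in the w_+ / w_- parameterisation
    w_plus = [(1 - gamma * g[j]) * w_plus[j] for j in range(d)]
    w_minus = [(1 + gamma * g[j]) * w_minus[j] for j in range(d)]

monotone = all(losses[t + 1] <= losses[t] + 1e-12 for t in range(len(losses) - 1))
```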
Therefore, to show that the loss is decreasing, we use the gradient descent recursion. Recall that the iterates $w_{k} = (w_{+,k}, w_{-,k})\in\mathbb{R}^{2d}$ follow gradient descent on the non-convex loss $F(w) = \frac{1}{2}\|y - \frac{1}{2}X(w_{+}^{2} - w_{-}^{2})\|_{2}^{2}$.

For $k \geqslant 0$, using the Taylor formula, we have that $F(w_{k+1}) \leqslant F(w_k) - \gamma_k(1 - \frac{\gamma_k L_k}{2})\|\nabla F(w_k)\|^2$ with the local smoothness $L_{k} = \sup_{w\in[w_{k}, w_{k+1}]}\lambda_{\max}(\nabla^{2}F(w))$. Hence, if $\gamma_k \leqslant 1/L_k$ for all $k$, we get that the loss is non-increasing. We now bound $L_{k}$. Computing the Hessian of $F$, we obtain:

$$
\begin{array}{l} \nabla^{2}F(w_{k}) = \left(\begin{array}{cc} \operatorname{diag}(\nabla\mathcal{L}(\beta_{k})) & 0 \\ 0 & -\operatorname{diag}(\nabla\mathcal{L}(\beta_{k})) \end{array}\right) \\ + \left(\begin{array}{cc} \operatorname{diag}(w_{+,k})H\operatorname{diag}(w_{+,k}) & -\operatorname{diag}(w_{-,k})H\operatorname{diag}(w_{+,k}) \\ -\operatorname{diag}(w_{+,k})H\operatorname{diag}(w_{-,k}) & \operatorname{diag}(w_{-,k})H\operatorname{diag}(w_{-,k}) \end{array}\right). \tag{22} \\ \end{array}
$$

Let us denote by $M = \begin{pmatrix} M_{+} & M_{+,-} \\ M_{+,-} & M_{-} \end{pmatrix}\in\mathbb{R}^{2d\times 2d}$ the second matrix in the previous equality. With this notation, $\|\nabla^2 F(w_k)\| \leqslant \|\nabla\mathcal{L}(\beta_k)\|_{\infty} + 2\|M\|$ (where $\|\cdot\|$ denotes the operator norm, i.e., the largest singular value, which for symmetric matrices is the largest absolute eigenvalue).
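Before bounding $\|M\|$ in general, a small sanity check (illustration only): in the scalar case $d = 1$, $M$ is a symmetric $2\times 2$ matrix, and the block bound $\|M\|^2 \leqslant 2(\|M_+\|^2 + \|M_{+,-}\|^2 + \|M_-\|^2)$ derived below can be verified exactly from the closed-form eigenvalues.

```python
import math
import random

def spectral_norm_2x2(a, b, c):
    # symmetric matrix [[a, b], [b, c]]: eigenvalues are (a+c)/2 +/- sqrt(((a-c)/2)^2 + b^2)
    half_trace = 0.5 * (a + c)
    disc = math.sqrt(0.25 * (a - c) ** 2 + b * b)
    return max(abs(half_trace + disc), abs(half_trace - disc))

random.seed(1)
ok = True
for _ in range(10000):
    m_p, m_pm, m_m = (random.uniform(-3.0, 3.0) for _ in range(3))
    lhs = spectral_norm_2x2(m_p, m_pm, m_m) ** 2
    rhs = 2.0 * (m_p ** 2 + m_pm ** 2 + m_m ** 2)
    ok = ok and lhs <= rhs + 1e-9
print(ok)
```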
Now, notice that:

$$
\begin{array}{l} \|M\|^{2} = \sup_{u\in\mathbb{R}^{2d}, \|u\| = 1}\|Mu\|^{2} \\ = \sup_{\substack{u_{+}\in\mathbb{R}^{d}, \|u_{+}\| = 1 \\ u_{-}\in\mathbb{R}^{d}, \|u_{-}\| = 1 \\ (a,b)\in\mathbb{R}^{2}, a^{2}+b^{2} = 1}}\left\|M\left(\begin{array}{c} a\cdot u_{+} \\ b\cdot u_{-} \end{array}\right)\right\|^{2}. \\ \end{array}
$$

We have:

$$
\begin{array}{l} \left\|M\left(\begin{array}{c} a\cdot u_{+} \\ b\cdot u_{-} \end{array}\right)\right\|^{2} = \left\|\left(\begin{array}{c} aM_{+}u_{+} + bM_{+-}u_{-} \\ aM_{+-}u_{+} + bM_{-}u_{-} \end{array}\right)\right\|^{2} \\ = \left\|aM_{+}u_{+} + bM_{+-}u_{-}\right\|^{2} + \left\|aM_{+-}u_{+} + bM_{-}u_{-}\right\|^{2} \\ \leqslant 2\left(a^{2}\|M_{+}u_{+}\|^{2} + b^{2}\|M_{+-}u_{-}\|^{2} + a^{2}\|M_{+-}u_{+}\|^{2} + b^{2}\|M_{-}u_{-}\|^{2}\right) \\ \leqslant 2\left(\|M_{+}\|^{2} + \|M_{+-}\|^{2} + \|M_{-}\|^{2}\right). \\ \end{array}
$$

Since $\|M_{\pm}\| \leqslant \lambda_{\max}\cdot\|w_{\pm}\|_{\infty}^{2}$ and $\|M_{+-}\| \leqslant \lambda_{\max}\|w_{+}\|_{\infty}\|w_{-}\|_{\infty}$, we finally get that

$$
\begin{array}{l} \left\|M\right\|^{2} \leqslant 6\lambda_{\max}^{2}\cdot\max\left(\left\|w_{+}\right\|_{\infty}^{2}, \left\|w_{-}\right\|_{\infty}^{2}\right)^{2} \\ \leqslant 6\lambda_{\max}^{2}\left(\|w_{+}^{2}\|_{\infty} + \|w_{-}^{2}\|_{\infty}\right)^{2} \\ \leqslant 12\lambda_{\max}^{2}\|w_{+}^{2} + w_{-}^{2}\|_{\infty}^{2}. \\ \end{array}
$$

We now upper bound this quantity in the following lemma.

Lemma 2. For all $k \geqslant 0$, the following identity holds component-wise:

$$
w_{+,k}^{2} + w_{-,k}^{2} = 2\sqrt{\alpha_{k}^{4} + \beta_{k}^{2}}.
$$

Proof.
Notice from the definition of $w_{+,k}$ and $w_{-,k}$ given in the proof of Proposition 5 that:

$$
\left|w_{+,k}\right|\left|w_{-,k}\right| = \alpha_{-,k}\alpha_{+,k} = \alpha_{k}^{2}, \tag{23}
$$

with $\alpha_{0}^{2} = \alpha^{2}$. Now, since $\alpha_{k}$ is decreasing coordinate-wise (under our assumptions on the stepsizes, $\gamma_k^2\nabla\mathcal{L}(\beta_k)^2 \leqslant (1/2)^2 < 1$), we get that:

$$
w_{+,k}^{2} + w_{-,k}^{2} = 2\sqrt{\alpha_{k}^{4} + \beta_{k}^{2}} \leqslant 2\sqrt{\alpha^{4} + \beta_{k}^{2}},
$$

leading to $w_{+,k}^2 + w_{-,k}^2 \leqslant 2\sqrt{\alpha^4 + B^2}$.

![](images/4e90dca79060ffda756300ae62a31e0f0f9685364039c537769f604620b6ded0.jpg)

From Lemma 2, $w_{+,k}^{2} + w_{-,k}^{2}$ is bounded by $2\sqrt{\alpha^4 + B^2}$. Putting things together, we finally get that $\|\nabla^2 F(w)\| \leqslant \|\nabla\mathcal{L}(\beta)\|_{\infty} + 8\lambda_{\max}\sqrt{4\|\alpha\|_{\infty}^4 + B^2}$. Hence,

$$
L_{k} \leqslant \sup_{\|\beta\|_{\infty}\leqslant B}\|\nabla\mathcal{L}(\beta)\|_{\infty} + 8\lambda_{\max}\sqrt{\|\boldsymbol{\alpha}\|_{\infty}^{4} + B^{2}} \leqslant LB + 8\lambda_{\max}\sqrt{\|\boldsymbol{\alpha}\|_{\infty}^{4} + B^{2}} \leqslant 10LB,
$$

for $B \geqslant \|\boldsymbol{\alpha}\|_{\infty}^{2}$.

![](images/cfe302e8eed177d262e9b2cbfe557160d7a61fde7496c94aefb60b990250e4bf.jpg)

We finally prove the bound on $\|\beta_k\|_{\infty}$ independent of $\alpha$ for a uniform initialisation $\boldsymbol{\alpha} = \alpha\mathbf{1}$, using the monotonicity of $\mathcal{L}$.

Proposition 13. Assume that $b = n$ (full batch setting). There exists some $B = \mathcal{O}(\|\beta^{\star}\|_{1})$ such that for stepsizes $\gamma_k \leqslant \frac{c}{BL}$, we have $\|\beta_k\|_{\infty} \leqslant B$ for all $k \geqslant 0$.

Proof. In this proof, we first let $B$ be a bound on the iterates.
Tuning stepsizes using this bound, we prove that the iterates are bounded by some $B' = \mathcal{O}(\|\beta^{\star}\|_1)$. Finally, we conclude by using the continuity of the iterates (at a finite horizon) that this explicit bound can be used to tune the stepsizes.

Writing the mirror descent with varying potentials, we have, since $\nabla h_0(\beta_0) = 0$:

$$
\nabla h_{k}(\beta_{k}) = -\sum_{\ell < k}\gamma_{\ell}\nabla\mathcal{L}(\beta_{\ell}),
$$

leading to, by convexity of $h_k$:

$$
h_{k}\left(\beta_{k}\right) - h_{k}\left(\beta^{\star}\right) \leqslant \langle\nabla h_{k}\left(\beta_{k}\right), \beta_{k} - \beta^{\star}\rangle = -\sum_{\ell < k}\langle\gamma_{\ell}\nabla\mathcal{L}\left(\beta_{\ell}\right), \beta_{k} - \beta^{\star}\rangle.
$$

We then write, using $\nabla\mathcal{L}(\beta) = H(\beta - \beta^{\star})$ for $H = XX^{\top}$, that $-\sum_{\ell < k}\langle\gamma_{\ell}\nabla\mathcal{L}(\beta_{\ell}), \beta_{k} - \beta^{\star}\rangle = -\sum_{\ell < k}\gamma_{\ell}\langle X^{\top}(\bar{\beta}_{k} - \beta^{\star}), X^{\top}(\beta_{k} - \beta^{\star})\rangle \leqslant \sum_{\ell < k}\gamma_{\ell}\sqrt{\mathcal{L}(\bar{\beta}_{k})\mathcal{L}(\beta_{k})}$, leading to:

$$
h_{k}(\beta_{k}) - h_{k}(\beta^{\star}) \leqslant 2\sqrt{\sum_{\ell < k}\gamma_{\ell}\mathcal{L}(\bar{\beta}_{k})\sum_{\ell < k}\gamma_{\ell}\mathcal{L}(\beta_{k})} \leqslant 2\sum_{\ell < k}\gamma_{\ell}\mathcal{L}(\bar{\beta}_{k}) \leqslant 2D_{h_{0}}(\beta^{\star}, \beta^{0}),
$$

where the last inequality holds provided that $\gamma_{k} \leqslant \frac{1}{CLB}$. Thus,

$$
\psi_{\boldsymbol{\alpha}_{k}}(\beta_{k}) \leqslant \psi_{\boldsymbol{\alpha}_{k}}(\beta^{\star}) + 2\psi_{\boldsymbol{\alpha}_{0}}(\beta^{\star}) + \langle\phi_{k}, \beta_{k} - \beta^{\star}\rangle.
$$

Then, $\langle\phi_k, \beta_k - \beta^\star\rangle \leqslant \|\phi_k\|_1\|\beta_k - \beta^\star\|_\infty$ and $\|\phi_k\|_1 \leqslant C\lambda_{\max}\sum_{k < K}\gamma_k^2\mathcal{L}(\beta_k) \leqslant C\lambda_{\max}\gamma_{\max}h_0(\beta^\star)$. Then, using

$$
\|\beta\|_{\infty} - \frac{1}{\ln(1/\alpha^{2})} \leqslant \frac{\psi_{\alpha}(\beta)}{\ln(1/\alpha^{2})} \leqslant \|\beta\|_{1}\big(1 + \frac{\ln(\|\beta\|_{1} + \alpha^{2})}{\ln(1/\alpha^{2})}\big),
$$

we have:

$$
\begin{array}{l} \|\beta_{k}\|_{\infty} \leqslant \frac{1}{\ln(1/\alpha^{2})} + \|\beta^{\star}\|_{1}\left(1 + \frac{\ln(\|\beta^{\star}\|_{1} + \alpha^{2})}{\ln(1/\alpha^{2})}\right) + \|\beta^{\star}\|_{1}\left(1 + \frac{\ln(\|\beta^{\star}\|_{1} + \alpha^{2})}{\ln(1/\alpha^{2})}\right) \\ + B_{0}C\lambda_{\max}\gamma_{\max}h_{0}\left(\beta^{\star}\right)/\ln\left(1/\alpha^{2}\right) \\ \leqslant R + B_{0}C\lambda_{\max}\gamma_{\max}h_{0}\left(\beta^{\star}\right)/\ln\left(1/\alpha^{2}\right), \\ \end{array}
$$

where $R = \mathcal{O}(\|\beta^{\star}\|_{1})$ is independent of $\alpha$. Hence, since $B_0 = \sup_{k < \infty}\|\beta_k\|_{\infty} < \infty$, we have:

$$
B_{0}\left(1 - C\lambda_{\max}\gamma_{\max}h_{0}\left(\beta^{\star}\right)/\ln\left(1/\alpha^{2}\right)\right) \leqslant R \Rightarrow B_{0} \leqslant 2R,
$$

provided that $\gamma_{\max} \leqslant 1/(2C\lambda_{\max}h_0(\beta^{\star})/\ln(1/\alpha^2))$ (note that $h_0(\beta^{\star})/\ln(1/\alpha^2)$ is independent of $\alpha^2$).

Hence, if for all $k$ we have $\gamma_k \leqslant \frac{1}{C'LB}$ where $B$ bounds all $\|\beta_k\|_{\infty}$, we have $\|\beta_k\|_{\infty} \leqslant 2R$ for all $k$, where $R = \mathcal{O}(\|\beta^{\star}\|_{1})$ is independent of $\alpha$ and of the stepsizes $\gamma_k$.
Let $K > 0$ be fixed, and

$$
\bar{\gamma} = \inf\left\{\gamma > 0\quad\text{s.t.}\quad\sup_{k\leqslant K}\|\beta_{k}\|_{\infty} > 2R\right\}.
$$

For $\gamma \geqslant 0$ a constant stepsize, let

$$
\varphi(\gamma) = \sup_{k\leqslant K}\|\beta_{k}\|_{\infty},
$$

which is a continuous function of $\gamma$. For $r > 0$, let $\gamma_r = \frac{1}{C'Lr}$.

An important feature to notice is that if $\gamma < \gamma_{r}$ and $r$ bounds all $\|\beta_k\|_{\infty}$, $k \leqslant K$, then $\varphi(\gamma) \leqslant R$, as shown above. We will show that we have $\bar{\gamma} \geqslant \gamma_{2R}$. Reasoning by contradiction, if $\bar{\gamma} < \gamma_{2R}$: by continuity of $\varphi$, we have $\varphi(\bar{\gamma}) \leqslant R$, and thus there exists some small $0 < \varepsilon < \gamma_{2R} - \bar{\gamma}$ such that for all $\gamma\in[\bar{\gamma}, \bar{\gamma} + \varepsilon]$, we have $\varphi(\gamma) \leqslant 2R$.

However, such $\gamma$'s verify both $\varphi(\gamma) \leqslant 2R$ (since $\gamma\in[\bar{\gamma}, \bar{\gamma} + \varepsilon]$ and by definition of $\varepsilon$) and $\gamma \leqslant \gamma_{2R}$ (by definition of $\varepsilon$), and hence $\varphi(\gamma) \leqslant R$. This contradicts the definition of $\bar{\gamma}$ as an infimum, and hence $\bar{\gamma} \geqslant \gamma_{2R}$. Thus, for $\gamma \leqslant \gamma_{2R} = \frac{1}{2C'LR}$, we have $\|\beta_k\|_{\infty} \leqslant R$.

![](images/fb157526f8c7eb24f09929035c6ca1787f6989ca872cc882bf3675e2f4348c13.jpg)

# F Proof of Theorems 1 and 2, and of Proposition 1

# F.1 Proof of Theorems 1 and 2

We are now equipped to prove Theorem 1 and Theorem 2, condensed in the following theorem.

Theorem 3. Let $(u_{k}, v_{k})_{k\geqslant 0}$ follow the mini-batch SGD recursion (3) initialised at $u_0 = \sqrt{2}\boldsymbol{\alpha}\in\mathbb{R}_{>0}^d$ and $v_{0} = 0$, and let $(\beta_{k})_{k\geqslant 0} = (u_{k}\odot v_{k})_{k\geqslant 0}$.
There exist an explicit $B > 0$ and a numerical constant $c > 0$ such that:

1. For stepsizes satisfying $\gamma_k \leqslant \frac{c}{LB}$, the iterates satisfy $\|\gamma_k\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)\|_{\infty} \leqslant 1$ and $\|\beta_k\|_{\infty} \leqslant B$ for all $k$;
2. For stepsizes satisfying $\gamma_k \leqslant \frac{c}{LB}$, $(\beta_k)_{k\geqslant 0}$ converges almost surely to some $\beta_\infty^\star\in\mathcal{S}$;
3. If $(\beta_k)_k$ and the neurons $(u_k, v_k)_k$ respectively converge to a model $\beta_\infty^\star$ and neurons $(u_\infty, v_\infty)$ satisfying $\beta_\infty^\star\in\mathcal{S}$ (and $\beta_\infty^\star = u_\infty\odot v_\infty$), then for almost all stepsizes (with respect to the Lebesgue measure), the limit $\beta_\infty^\star$ satisfies:

$$
\beta_{\infty}^{\star} = \underset{\beta^{\star}\in\mathcal{S}}{\operatorname{argmin}}\, D_{\psi_{\alpha_{\infty}}}(\beta^{\star}, \tilde{\beta}_{0}),
$$

for $\alpha_{\infty}\in\mathbb{R}_{>0}^{d}$ and $\tilde{\beta}_0\in\mathbb{R}^d$ satisfying

$$
\boldsymbol{\alpha}_{\infty}^{2} = \boldsymbol{\alpha}^{2}\odot\exp\left(-\sum_{k = 0}^{\infty}q\left(\gamma_{k}\nabla\mathcal{L}_{\mathcal{B}_{k}}(\beta_{k})\right)\right),
$$

where $q(x) = -\frac{1}{2}\ln\left(\left(1 - x^2\right)^2\right) \geqslant 0$ for $|x| \leqslant \sqrt{2}$, and $\tilde{\beta}_0$ is a perturbation term equal to:

$$
\tilde{\beta}_{0} = \frac{1}{2}\left(\boldsymbol{\alpha}_{+}^{2} - \boldsymbol{\alpha}_{-}^{2}\right),
$$

where $q_{\pm}(x) = \mp 2x - \ln((1\mp x)^2)$, and $\pmb{\alpha}_{\pm}^{2} = \pmb{\alpha}^{2}\odot\exp(-\sum_{k = 0}^{\infty}q_{\pm}(\gamma_k\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)))$.

Proof. Point 1. The first point of the Theorem is a direct consequence of Corollary 1 and the bounds proved in Appendix E.3.

Point 2.
Then, for stepsizes $\gamma_k \leqslant \frac{c}{LB}$, using Proposition 8 for any interpolator $\beta^\star\in\mathcal{S}$:

$$
D_{h_{k+1}}\left(\beta^{\star}, \beta_{k+1}\right) \leqslant D_{h_{k}}\left(\beta^{\star}, \beta_{k}\right) - \gamma_{k}\mathcal{L}_{\mathcal{B}_{k}}\left(\beta_{k}\right). \tag{24}
$$

Hence, summing:

$$
\sum_{k}\gamma_{k}\mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}) \leqslant D_{h_{0}}(\beta^{\star}, \beta_{0}),
$$

so that the series converges.

Under our stepsize rule, $\|\gamma_k\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)\|_{\infty} \leqslant \frac{1}{2}$, leading to $\|q(\gamma_k\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k))\|_{\infty} \leqslant 3\|\gamma_k\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)\|_{\infty}^2$ by Lemma 5. Using $\|\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)\|^2 \leqslant 2L_2\mathcal{L}_{\mathcal{B}_k}(\beta_k)$, we have that $\ln(\alpha_{\pm,k})$ and $\ln(\alpha_k)$ all converge.

We now show that $\sum_{k}\gamma_{k}\mathcal{L}(\beta_{k}) < \infty$. We have:

$$
\sum_{\ell < k}\gamma_{\ell}\mathcal{L}(\beta_{\ell}) = \sum_{\ell < k}\gamma_{\ell}\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}) + M_{k},
$$

where $M_{k} = \sum_{\ell < k}\gamma_{\ell}(\mathcal{L}(\beta_{\ell}) - \mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}))$. We have that $(M_{k})$ is a martingale with respect to the filtration $(\mathcal{F}_k)$ defined as $\mathcal{F}_k = \sigma(\beta_\ell, \ell\leqslant k)$. Using our upper bound on $\sum_{\ell < k}\gamma_\ell\mathcal{L}_{\mathcal{B}_\ell}(\beta_\ell)$, we have:

$$
M_{k} = \sum_{\ell < k}\gamma_{\ell}\mathcal{L}(\beta_{\ell}) - \sum_{\ell < k}\gamma_{\ell}\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}) \geqslant -D_{h_{0}}(\beta^{\star}, \beta_{0}),
$$

and hence $(M_{k})$ is a lower bounded martingale.
Using Doob's first martingale convergence theorem (a lower bounded super-martingale converges almost surely, Doob [17]), $(M_{k})$ converges almost surely. Consequently, since $\sum_{\ell < k}\gamma_{\ell}\mathcal{L}(\beta_{\ell}) = \sum_{\ell < k}\gamma_{\ell}\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}) + M_{k}$, we have that $\sum_{\ell < k}\gamma_{\ell}\mathcal{L}(\beta_{\ell})$ converges almost surely (the first term is non-decreasing and upper bounded, hence convergent; the second converges almost surely).

We now prove the convergence of $(\beta_{k})$. Since it is a bounded sequence, let $\beta_{\sigma(k)}$ be a convergent sub-sequence and let $\beta_{\infty}^{\star}$ denote its limit: $\beta_{\sigma(k)}\to\beta_{\infty}^{\star}$.

Almost surely, $\sum_{k}\gamma_{k}\mathcal{L}(\beta_{k}) < \infty$ and so $\gamma_{k}\mathcal{L}(\beta_{k})\to 0$, leading to $\mathcal{L}(\beta_k)\to 0$ since the stepsizes are lower bounded, so that $\mathcal{L}(\beta_{\sigma(k)})\to 0$, and hence $\mathcal{L}(\beta_{\infty}^{\star}) = 0$: this means that $\beta_{\infty}^{\star}$ is an interpolator.

Since the quantities $(\alpha_{k})_{k}$, $(\alpha_{\pm,k})_k$ and $(\phi_k)_k$ converge almost surely to $\alpha_{\infty}$, $\alpha_{\pm}$ and $\phi_{\infty}$, we get that the potentials $h_k$ converge uniformly to $h_\infty = \psi_{\alpha_\infty} - \langle\phi_\infty, \cdot\rangle$ on all compact sets. Now notice that we can decompose $\nabla h_{\infty}(\beta_{\infty}^{\star})$ as:

$$
\nabla h_{\infty}(\beta_{\infty}^{\star}) = \left(\nabla h_{\infty}(\beta_{\infty}^{\star}) - \nabla h_{\infty}(\beta_{\sigma(k)})\right) + \left(\nabla h_{\infty}(\beta_{\sigma(k)}) - \nabla h_{\sigma(k)}(\beta_{\sigma(k)})\right) + \nabla h_{\sigma(k)}(\beta_{\sigma(k)}).
$$

The first two terms converge to 0: the first is a direct consequence of the convergence of the extracted subsequence, the second is a consequence of the uniform convergence of $h_{\sigma(k)}$ to $h_\infty$ on compact sets. Finally, the last term always lies in $\operatorname{Span}(x_1, \ldots, x_n)$ due to Proposition 5, leading to $\nabla h_\infty(\beta_\infty^\star)\in\operatorname{Span}(x_1, \ldots, x_n)$. Notice that from the definition of $h_\infty$, we have that $\nabla h_\infty(\beta_\infty^\star) = \nabla\psi_{\alpha_\infty}(\beta_\infty^\star) - \phi_\infty$. Now since $\phi_\infty = \frac{1}{2}\operatorname{arcsinh}\left(\frac{\alpha_+^2 - \alpha_-^2}{2\alpha_\infty^2}\right)$, one can notice that $\tilde{\beta}_0$ is precisely defined such that $\nabla\psi_{\alpha_\infty}(\tilde{\beta}_0) = \phi_\infty$. Therefore $\nabla\psi_{\alpha_\infty}(\beta_\infty^\star) - \nabla\psi_{\alpha_\infty}(\tilde{\beta}_0)\in\operatorname{Span}(x_1, \ldots, x_n)$. This condition, along with the fact that $\beta_\infty^\star$ is an interpolator, is exactly the optimality condition of the convex minimisation problem:

$$
\min_{\beta^{\star}\in\mathcal{S}} D_{\psi_{\alpha_{\infty}}}(\beta^{\star}, \tilde{\beta}_{0}).
$$

Therefore $\beta_{\infty}^{\star}$ must be equal to the unique minimiser of this problem. Since this is true for any sub-sequence, we get that $\beta_{k}$ converges almost surely to:

$$
\beta_{\infty}^{\star} = \underset{\beta^{\star}\in\mathcal{S}}{\operatorname{argmin}}\, D_{\psi_{\alpha_{\infty}}}(\beta^{\star}, \tilde{\beta}_{0}).
$$

Point 3.
From what we just proved, note that it is sufficient to prove that $\alpha_{k}, \alpha_{\pm,k}, \phi_{k}$ converge to limits $\alpha_{\infty}, \alpha_{\pm,\infty}, \phi_{\infty}$ satisfying $\alpha_{\infty}, \alpha_{\pm,\infty}\in\mathbb{R}_{>0}^{d}$ (with positive and non-null coordinates) and $\phi_{\infty}\in\mathbb{R}^{d}$. Indeed, if this holds, since we assume that the iterates converge to some interpolator, we proved just above that this interpolator is uniquely defined through the desired implicit regularization problem. We thus prove the convergence of $\alpha_{k}, \alpha_{\pm,k}, \phi_{k}$.

Note that the convergence of $u_{k}, v_{k}$ is equivalent to the convergence of $w_{\pm,k}$ in the $w_{+}^{2} - w_{-}^{2}$ parameterisation used in our proofs, which we also use here. We have:

$$
w_{\pm,k+1} = \left(1 \mp \gamma_{k}\nabla\mathcal{L}_{\mathcal{B}_{k}}(\beta_{k})\right)\odot w_{\pm,k},
$$

so that

$$
\ln(w_{\pm,k}^{2}) = \sum_{\ell < k}\ln((1 \mp \gamma_{\ell}\nabla\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}))^{2}).
$$

We now assume that the stepsizes are such that for all $\ell \geqslant 0$ and $i\in[d]$ we have $|\gamma_{\ell}\nabla_{i}\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell})| \neq 1$: this holds for all stepsizes except a countable set of them, and hence for almost all stepsizes. Since we assume that the iterates $\beta_{k}$ converge to some interpolator, this leads to $\gamma_{\ell}\nabla\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell})\to 0$ if we assume that the stepsizes do not diverge.

Taking the limit, we have

$$
\ln(w_{\pm,\infty}^{2}) = \sum_{\ell < \infty}\ln((1 \mp \gamma_{\ell}\nabla\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}))^{2}).
$$

This limit is in $(\{-\infty\}\cup\mathbb{R})^d$ (since $w_{\pm,\infty}\in\mathbb{R}^d$), and a coordinate of the limit is equal to $-\infty$ if and only if the sum on the RHS diverges to $-\infty$ (note that, from our assumption just above, no term of the sum can be equal to $-\infty$).

We have $\ln((1\mp\gamma_{\ell}\nabla\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}))^{2})\sim \mp 2\gamma_{\ell}\nabla\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell})$ as $\ell\to\infty$, so that if for some coordinate $i$ we have $\sum_{\ell}\gamma_{\ell}\nabla_{i}\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}) = \mp\infty$, then coordinate $i$ of the limit satisfies $\ln(w_{i,\pm,\infty}^2) = +\infty$, which is impossible. Hence, the sum $\sum_{\ell}\gamma_{\ell}\nabla\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell})$ is in $\mathbb{R}^d$ (and thus converges); consequently, $\sum_{\ell}\gamma_{\ell}^{2}\nabla\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell})^{2}$ converges, and thus $\sum_{\ell}q(\gamma_{\ell}\nabla\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}))$ and $\sum_{\ell}q_{\pm}(\gamma_{\ell}\nabla\mathcal{L}_{\mathcal{B}_{\ell}}(\beta_{\ell}))$ all converge: the sequences $\alpha_{k}, \alpha_{\pm,k}$ thus converge to limits in $\mathbb{R}_{>0}^d$, and $\phi_k$ converges, concluding our proof.

![](images/830a80f66f315df55af6783c09cc0ca8d937401ae2996236630c4f6530248d82.jpg)

# F.2 Proof of Proposition 1

We begin with the following lemma, which makes explicit the curvature of $D_h$ around the set of interpolators.

Lemma 3. For all $k \geqslant 0$, if $\mathcal{L}(\beta_k) \leqslant \frac{1}{2\lambda_{\max}}(\alpha^2\lambda_{\min}^+)^2$, we have $\left\|\beta_k - \beta_{\alpha_k}^\star\right\|^2 \leqslant 2B(\alpha^2\lambda_{\min}^+)^{-1}\mathcal{L}(\beta_k)$.

Proof.
Recall that the sequence $\mathbf{z}^k = \nabla h_k(\beta^k)$ satisfies $\mathbf{z}^0 = 0$ and $\mathbf{z}^{k+1} = \mathbf{z}^k - \gamma_k\nabla\mathcal{L}(\beta^k)$, so that we have that $\mathbf{z}^k\in V = \mathrm{Im}(\mathbf{X}\mathbf{X}^\top)$ for all $k \geqslant 0$. Then, let $\beta_{k}^{\alpha}$ be the unique minimizer of $h_k$ over $\mathcal{S}$, the space of interpolators: $\beta_{k}^{\alpha}$ is exactly characterized by $\mathbf{X}^{\top}\beta_{k}^{\alpha} = \mathbf{Y}$ and $\nabla h_k(\beta_k^\alpha)\in V$. We define $\mathbf{z}_k^\alpha\in V$ as $\mathbf{z}_k^\alpha = \nabla h_k(\beta_k^\alpha)$.

Now, fix $\mathbf{z}^{\alpha} = \mathbf{z}_k^\alpha$ and $h = h_k$, and let us define $\psi: \mathbf{z}\in V\to D_{h^*}(\mathbf{z}, \mathbf{z}^\alpha)$ and $\phi: \mathbf{z}\in V\to\mathcal{L}(\nabla h^*(\mathbf{z}))$. We next show that for all $\mathbf{z}\in V$, there exists $\mu_z$ such that $\nabla^2\phi(\mathbf{z}) \succeq \mu_z\nabla^2\psi(\mathbf{z})$, and that $\mu_z \geqslant \mu$ for $\mathbf{z}$ in an open convex set of $V$ around $\mathbf{z}^\alpha$, for some $\mu > 0$. For $A\in\mathbb{R}^{d\times d}$ an operator/matrix on $\mathbb{R}^d$, let us denote by $A_V$ its restriction/co-restriction to $V$.

First, for $\mathbf{z}\in V$, we have $\nabla^2\psi(\mathbf{z}) = \nabla^2\left(h^*(\cdot) - h^*(\mathbf{z}^\alpha) - \langle\nabla h^*(\mathbf{z}^\alpha), \cdot - \mathbf{z}^\alpha\rangle\right)(\mathbf{z}) = \nabla^2 h^*(\mathbf{z})_V$. Then, $\nabla\phi(\mathbf{z}) = \nabla^2 h^*(\mathbf{z})\nabla\mathcal{L}(\nabla h^*(\mathbf{z}))$, so that $\nabla^2\phi(\mathbf{z}) = \left(\nabla^2 h^*(\mathbf{z})\nabla^2\mathcal{L}(\nabla h^*(\mathbf{z}))\nabla^2 h^*(\mathbf{z})\right)_V + \nabla^3 h^*(\mathbf{z})(\nabla\mathcal{L}(\nabla h^*(\mathbf{z})), \cdot, \cdot)_V$.

Since $h$ is $1/(2\alpha^2)$-smooth (on $\mathbb{R}^d$ and thus on $V$), $h^*$ is $2\alpha^2$-strongly convex (on $V$ and on $\mathbb{R}^d$).
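As a sanity check of these dual computations (illustration only), one may take the closed form $h^*(\mathbf{z}) = \sum_i 2\alpha_i^2(\cosh(\mathbf{z}_i) - 1)$, which is consistent with the gradient $\nabla h^*(\mathbf{z}) = 2\alpha^2\odot\sinh(\mathbf{z})$ appearing below; finite differences then recover the gradient, the diagonal Hessian $2\alpha^2\cosh(\mathbf{z})$, and the $2\alpha^2$ strong convexity (the values of $\alpha^2$ and $\mathbf{z}$ are arbitrary).

```python
import math

alpha2 = [0.3, 1.2, 0.7]          # illustrative values of alpha_i^2
z = [0.4, -1.1, 2.0]

def h_star(v):
    # assumed closed form, consistent with grad h*(z) = 2 alpha^2 sinh(z)
    return sum(2 * a2 * (math.cosh(vi) - 1) for a2, vi in zip(alpha2, v))

eps = 1e-4
ok = True
for i, (a2, zi) in enumerate(zip(alpha2, z)):
    zp = list(z); zp[i] += eps
    zm = list(z); zm[i] -= eps
    d1 = (h_star(zp) - h_star(zm)) / (2 * eps)                  # central 1st difference
    d2 = (h_star(zp) - 2 * h_star(z) + h_star(zm)) / eps ** 2   # central 2nd difference
    ok = ok and abs(d1 - 2 * a2 * math.sinh(zi)) < 1e-6
    ok = ok and abs(d2 - 2 * a2 * math.cosh(zi)) < 1e-4
    ok = ok and d2 >= 2 * a2 - 1e-4                             # 2*alpha^2 strong convexity
print(ok)
```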
Using $V = \operatorname{Im}(\mathbf{X}\mathbf{X}^\top)$ and $\nabla^2\mathcal{L}\equiv\mathbf{X}\mathbf{X}^\top$, we have $\left(\nabla^2h^*(\mathbf{z})\nabla^2\mathcal{L}(\nabla h^*(\mathbf{z}))\nabla^2h^*(\mathbf{z})\right)_V = \nabla^2h^*(\mathbf{z})_V\nabla^2\mathcal{L}(\nabla h^*(\mathbf{z}))_V\nabla^2h^*(\mathbf{z})_V$, and thus $\left(\nabla^2h^*(\mathbf{z})\nabla^2\mathcal{L}(\nabla h^*(\mathbf{z}))\nabla^2h^*(\mathbf{z})\right)_V \succeq 2\alpha^2\lambda_{\min}^+\nabla^2h^*(\mathbf{z})_V$.

For the other term of $\nabla^2\phi$, namely $\nabla^3 h^*(\mathbf{z})(\nabla\mathcal{L}(\nabla h^*(\mathbf{z})), \cdot, \cdot)_V$, we compute $\nabla_{ijk}^3 h^*(\mathbf{z}) = \mathbf{1}_{i = j = k}\, 2\alpha_{k,i}^2\sinh(\mathbf{z}_i)$, leading to: $\nabla^3 h^*(\mathbf{z})(\nabla\mathcal{L}(\nabla h^*(\mathbf{z})), \cdot, \cdot)_V = \operatorname{diag}\left(2\alpha_k^2\sinh(\mathbf{z})\odot\left(\mathbf{X}\mathbf{X}^\top(2\alpha_k^2\sinh(\mathbf{z}) - \beta_k^\alpha)\right)\right)_V$. Thus, writing $\beta_{\mathbf{z}} = 2\alpha_k^2\sinh(\mathbf{z}) = \nabla h^*(\mathbf{z})$ for the primal surrogate of $\mathbf{z}$, we have:

$$
\begin{array}{l} \nabla^{3}h^{*}(\mathbf{z})\left(\nabla\mathcal{L}\left(\nabla h^{*}(\mathbf{z})\right), \cdot, \cdot\right)_{V} = \operatorname{diag}\left(2\alpha_{k}^{2}\sinh(\mathbf{z})\odot\left(\mathbf{X}\mathbf{X}^{\top}\left(\beta_{\mathbf{z}} - \beta_{k}^{\alpha}\right)\right)\right)_{V} \\ \succeq -\left\|\mathbf{X}\mathbf{X}^{\top}\left(\beta_{\mathbf{z}} - \beta_{k}^{\alpha}\right)\right\|_{\infty}\operatorname{diag}\left(2\alpha_{k}^{2}\odot|\sinh(\mathbf{z})|\right)_{V} \\ \succeq -\left\|\mathbf{X}\mathbf{X}^{\top}\left(\beta_{\mathbf{z}} - \beta_{k}^{\alpha}\right)\right\|_{\infty}\operatorname{diag}\left(2\alpha_{k}^{2}\odot\cosh(\mathbf{z})\right)_{V} \\ = -\left\|\mathbf{X}\mathbf{X}^{\top}\left(\beta_{\mathbf{z}} -
\beta_ {k} ^ {\alpha}\right) \right\| _ {\infty} \nabla^ {2} \psi (\mathbf {z}). \\ \end{array} +$$ + +Wrapping things together, + +$$ +\nabla^ {2} \phi (\mathbf {z}) \succeq \left(2 \alpha^ {2} \lambda_ {\min } ^ {+} - \left\| \mathbf {X X} ^ {\top} \left(\beta_ {\mathbf {z}} - \beta^ {\alpha}\right) \right\| _ {\infty}\right) \nabla^ {2} \psi (\mathbf {z}). +$$ + +Let $\mathcal{Z} = \left\{\mathbf{z}\in V:\left\| \mathbf{X}\mathbf{X}^{\top}(\beta_{\mathbf{z}} - \beta_{k}^{\alpha})\right\|_{\infty} < \alpha^{2}\lambda_{\min}^{+}\right\}$ that satisfies + +$\left\{\beta \in V:\mathcal{L}(\beta_{\mathbf{z}}) < \frac{1}{2\lambda_{\max}} (\alpha^{2}\lambda_{\min}^{+})^{2}\right\} \subset \mathcal{Z}$ . $\mathcal{Z}$ is an open convex set of $V$ containing $\mathbf{z}^{\alpha}$ . On $\mathcal{Z}$ , $\nabla^2\phi \succeq \alpha^2\lambda_{\min}^+ \nabla^2\psi$ , and $\psi(\mathbf{z}^{\alpha}) = \phi(\mathbf{z}^{\alpha}) = 0$ , so that for all $\mathbf{z} \in \mathcal{Z}$ , we have $\phi(\mathbf{z}) \geqslant \alpha^2\lambda_{\min}^+\psi(\mathbf{z})$ . Hence, for all $\mathbf{z} \in \mathcal{Z}$ , we have $D_{h_k}(\beta_k^\alpha, \beta_\mathbf{z}) \leqslant D_{h^\star}(\mathbf{z}, \mathbf{z}^\alpha) \leqslant (\alpha^2\lambda_{\min}^+)^{-1}\mathcal{L}(\beta_\mathbf{z})$ , and using the fact that $D_{h_k}$ is $\frac{1}{4B}$ strongly convex, we obtain, for $\beta_\mathbf{z} = \beta_k$ (since $\mathbf{z}^k \in V$ ): if $\mathcal{L}(\beta_k) \leqslant \frac{1}{2\lambda_{\max}} (\alpha^2\lambda_{\min}^+)^2$ , we have $\| \beta_k^\alpha - \beta_k \|_2^2 \leqslant (\alpha^2\lambda_{\min}^+)^{-1}\mathcal{L}(\beta_k)$ . + +Proposition 14. As assume $\mathcal{L}$ is $L_{r}$ -relatively smooth with respect to all the $h_k$ 's. Then for all $\beta$ we have the following inequality. 
+ +$$ +\begin{array}{l} \gamma_ {k} (\mathcal {L} (\beta_ {k + 1}) - \mathcal {L} (\beta)) \leqslant D _ {h _ {k}} (\beta , \beta_ {k}) - D _ {h _ {k + 1}} (\beta , \beta_ {k + 1}) - (1 - \gamma_ {k} L _ {r}) D _ {h _ {k}} (\beta_ {k + 1}, \beta_ {k}) \\ + \left(h _ {k + 1} - h _ {k}\right) (\beta) - \left(h _ {k + 1} - h _ {k}\right) \left(\beta_ {k + 1}\right). \\ \end{array} +$$ + +Proof. For any $\beta, \beta_k, \beta_{k+1}$ , the following holds (three points identity for time varying potentials, Proposition 9): + +$$ +\begin{array}{l} D _ {h _ {k}} (\beta , \beta_ {k}) - D _ {h _ {k + 1}} (\beta , \beta_ {k + 1}) = \left[ h _ {k} (\beta) - \left(h _ {k} (\beta_ {k}) + \langle \nabla h _ {k} (\beta_ {k}), \beta - \beta_ {k} \rangle\right) \right] \\ \left. - \left[ h _ {k + 1} (\beta) - \left(h _ {k + 1} \left(\beta_ {k + 1}\right) + \langle \nabla h _ {k + 1} \left(\beta_ {k + 1}\right), \beta - \beta_ {k + 1} \rangle\right) \right] \right. \\ = h _ {k} (\beta) - h _ {k + 1} (\beta) + \left\langle \nabla h _ {k + 1} \left(\beta_ {k + 1}\right) - \nabla h _ {k} \left(\beta_ {k}\right), \beta - \beta_ {k + 1} \right\rangle \\ + h _ {k + 1} \left(\beta_ {k + 1}\right) - \left[ h _ {k} \left(\beta_ {k}\right) + \left\langle \nabla h _ {k} \left(\beta_ {k}\right), \beta_ {k + 1} - \beta_ {k} \right\rangle \right] \\ = h _ {k} (\beta) - h _ {k + 1} (\beta) + \left\langle \nabla h _ {k + 1} \left(\beta_ {k + 1}\right) - \nabla h _ {k} \left(\beta_ {k}\right), \beta - \beta_ {k + 1} \right\rangle \\ + h _ {k + 1} \left(\beta_ {k + 1}\right) - h _ {k} \left(\beta_ {k + 1}\right) + D _ {h _ {k}} \left(\beta_ {k + 1}, \beta_ {k}\right). 
\\ \end{array} +$$ + +Rearranging and plugging in our mirror update we obtain that for all $\beta$ : + +$$ +\begin{array}{l} \gamma_ {k} \langle \nabla \mathcal {L} (\beta_ {k}), \beta_ {k + 1} - \beta \rangle = D _ {h _ {k}} (\beta , \beta_ {k}) - D _ {h _ {k + 1}} (\beta , \beta_ {k + 1}) \\ - D _ {h _ {k}} \left(\beta_ {k + 1}, \beta_ {k}\right) - \left(h _ {k + 1} - h _ {k}\right) \left(\beta_ {k + 1}\right) + \left(h _ {k + 1} - h _ {k}\right) (\beta). \\ \end{array} +$$ + +From the convexity of $\mathcal{L}$ and its $L_{r}$ -relative smoothness we also have that: + +$$ +\mathcal {L} \left(\beta_ {k + 1}\right) \leqslant \mathcal {L} (\beta) + \left\langle \nabla \mathcal {L} \left(\beta_ {k}\right), \beta_ {k + 1} - \beta \right\rangle + L _ {r} D _ {h _ {k}} \left(\beta_ {k + 1}, \beta_ {k}\right), +$$ + +Finally: + +$$ +\begin{array}{l} \gamma_ {k} (\mathcal {L} (\beta_ {k + 1}) - \mathcal {L} (\beta)) \leqslant D _ {h _ {k}} (\beta , \beta_ {k}) - D _ {h _ {k + 1}} (\beta , \beta_ {k + 1}) - (1 - \gamma_ {k} L _ {r}) D _ {h _ {k}} (\beta_ {k + 1}, \beta_ {k}) \\ + (h _ {k + 1} - h _ {k}) (\beta) - (h _ {k + 1} - h _ {k}) (\beta_ {k + 1}). \\ \end{array} +$$ + +Note that in our setting, for any $\beta$ , $k \mapsto h_k(\beta)$ is increasing. We can therefore write that: + +$$ +\gamma_ {k} (\mathcal {L} (\beta_ {k + 1}) - \mathcal {L} (\beta)) \leqslant D _ {h _ {k}} (\beta , \beta_ {k}) - D _ {h _ {k + 1}} (\beta , \beta_ {k + 1}) - (1 - \gamma_ {k} L _ {r}) D _ {h _ {k}} (\beta_ {k + 1}, \beta_ {k}) + (h _ {k + 1} - h _ {k}) (\beta). 
$$

In particular, for $\beta = \beta^*$,

$$
\begin{array}{l} \gamma_k \mathcal{L}(\beta_{k+1}) \leqslant D_{h_k}(\beta^*, \beta_k) - D_{h_{k+1}}(\beta^*, \beta_{k+1}) - (1 - \gamma_k L_r) D_{h_k}(\beta_{k+1}, \beta_k) + (h_{k+1} - h_k)(\beta^*) \\ - (h_{k+1} - h_k)(\beta_{k+1}) \\ \leqslant D_{h_k}(\beta^*, \beta_k) - D_{h_{k+1}}(\beta^*, \beta_{k+1}) - (1 - \gamma_k L_r) D_{h_k}(\beta_{k+1}, \beta_k) + (h_{k+1} - h_k)(\beta^*), \end{array}
$$

and for $\beta = \beta_k$,

$$
\begin{array}{l} \gamma_k \mathcal{L}(\beta_{k+1}) \leqslant \gamma_k \mathcal{L}(\beta_k) - D_{h_{k+1}}(\beta_k, \beta_{k+1}) - (1 - \gamma_k L_r) D_{h_k}(\beta_{k+1}, \beta_k) + (h_{k+1} - h_k)(\beta_k) \\ - (h_{k+1} - h_k)(\beta_{k+1}) \\ \leqslant \gamma_k \mathcal{L}(\beta_k) - D_{h_{k+1}}(\beta_k, \beta_{k+1}) - (1 - \gamma_k L_r) D_{h_k}(\beta_{k+1}, \beta_k) + (h_{k+1} - h_k)(\beta_k). \end{array}
$$

□

Proof of Proposition 1.
We apply Proposition 14 for $\beta = \beta_{k}$ , with $L_{r} = 4BL$ (using Lemma 6) and replacing $\mathcal{L}$ by $\mathcal{L}_{\mathcal{B}_k}$ , to obtain: + +$$ +\begin{array}{l} \gamma_ {k} \left(\mathcal {L} _ {\mathcal {B} _ {k}} \left(\beta_ {k + 1}\right) - \mathcal {L} _ {\mathcal {B} _ {k}} \left(\beta_ {k}\right)\right) \leqslant - D _ {h _ {k + 1}} \left(\beta_ {k}, \beta_ {k + 1}\right) - \left(1 - \gamma_ {k} L _ {r}\right) D _ {h _ {k}} \left(\beta_ {k + 1}, \beta_ {k}\right) \\ + \left(h _ {k + 1} - h _ {k}\right) \left(\beta_ {k}\right) - \left(h _ {k + 1} - h _ {k}\right) \left(\beta_ {k + 1}\right), \\ \end{array} +$$ + +and thus, taking the mean wrt $\mathcal{B}_k$ + +$$ +\begin{array}{l} \gamma_ {k} \left(\mathbb {E} _ {\mathcal {B} _ {k}} \mathcal {L} \left(\beta_ {k + 1}\right) - \mathcal {L} \left(\beta_ {k}\right)\right) \leqslant - \mathbb {E} _ {\mathcal {B} _ {k}} D _ {h _ {k + 1}} \left(\beta_ {k}, \beta_ {k + 1}\right) - \left(1 - \gamma_ {k} L _ {r}\right) \mathbb {E} _ {\mathcal {B} _ {k}} D _ {h _ {k}} \left(\beta_ {k + 1}, \beta_ {k}\right) \\ + \mathbb {E} _ {\mathcal {B} _ {k}} \left(h _ {k + 1} - h _ {k}\right) \left(\beta_ {k}\right) - \mathbb {E} _ {\mathcal {B} _ {k}} \left(h _ {k + 1} - h _ {k}\right) \left(\beta_ {k + 1}\right) \\ \leqslant - \left(1 - \gamma_ {k} L _ {r}\right) \mathbb {E} _ {\mathcal {B} _ {k}} D _ {h _ {k}} \left(\beta_ {k + 1}, \beta_ {k}\right) \\ + \mathbb {E} _ {\mathcal {B} _ {k}} \left(h _ {k + 1} - h _ {k}\right) \left(\beta_ {k}\right) - \mathbb {E} _ {\mathcal {B} _ {k}} \left(h _ {k + 1} - h _ {k}\right) \left(\beta_ {k + 1}\right). 
\\ \end{array} +$$ + +First, as in the proof of Proposition 10, using the fact that $h_k$ is $\ln(1 / \alpha_k)$ smooth, + +$$ +\begin{array}{l} D _ {h _ {k}} (\beta_ {k + 1}, \beta_ {k}) \geqslant \frac {1}{2 \ln (1 / \alpha_ {k})} \| \nabla h _ {k} (\beta_ {k}) - \gamma_ {k} \nabla \mathcal {L} _ {\mathcal {B} _ {k}} (\beta_ {k}) - \nabla h _ {k} (\beta_ {k}) + \nabla h _ {k + 1} (\beta_ {k + 1}) - \nabla h _ {k} (\beta_ {k + 1}) \| _ {2} ^ {2} \\ \geqslant - \frac {1}{2 \ln \left(1 / \alpha_ {k}\right)} \| \nabla h _ {k} (\beta_ {k}) - \nabla h _ {k + 1} (\beta_ {k}) \| _ {2} ^ {2} + \frac {1}{4 \ln \left(1 / \alpha_ {k}\right)} \| \gamma_ {k} \nabla \mathcal {L} _ {\mathcal {B} _ {k}} (\beta_ {k}) \| _ {2} ^ {2}, \\ \end{array} +$$ + +and thus + +$$ +\mathbb {E} D _ {h _ {k}} (\beta_ {k + 1}, \beta_ {k}) \geqslant \mathbb {E} \left[ - \frac {1}{2 \ln (1 / \alpha_ {k})} \| \nabla h _ {k} (\beta_ {k}) - \nabla h _ {k + 1} (\beta_ {k}) \| _ {2} ^ {2} + \frac {\lambda_ {b}}{2 \ln (1 / \alpha_ {k})} \gamma_ {k} ^ {2} \mathcal {L} _ {\mathcal {B}} (\beta_ {k}) \right]. +$$ + +Now, we apply Lemma 7 assuming that $\| \beta^{\star}\|_{\infty},\| \beta_{k + 1}\|_{\infty}\leqslant B$ (which is satisfied since we are under the assumption of Theorem 2): + +$$ +\left(h _ {k + 1} - h _ {k}\right) \left(\beta_ {k}\right) - \left(h _ {k + 1} - h _ {k}\right) \left(\beta^ {\star}\right) \leqslant 2 4 B L \gamma_ {k} ^ {2} \mathcal {L} _ {\mathcal {B} _ {k}} \left(\beta_ {k}\right). +$$ + +Using $|\nabla h_k(\beta) - \nabla h_{k + 1}(\beta)|\leqslant 2\delta_k$ where $\delta_{k} = q(\gamma_{k}\nabla \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}))$ as in Proposition 10, we have: + +$$ +\mathbb {E} \| \nabla h _ {k} (\beta_ {k}) - \nabla h _ {k + 1} (\beta_ {k}) \| _ {2} ^ {2} \leqslant 1 6 B \gamma_ {k} ^ {2} \mathbb {E} \| \nabla \mathcal {L} _ {\mathcal {B} _ {k}} (\beta_ {k}) \| ^ {2} \leqslant 3 2 B L \gamma_ {k} ^ {2} \mathbb {E} \mathcal {L} (\beta_ {k}). 
+$$ + +Wrapping everything together, + +$$ +\begin{array}{l} \mathbb {E} \left[ \mathcal {L} \left(\beta_ {k + 1}\right) - \mathcal {L} \left(\beta_ {k}\right) \right] \leqslant - (1 - \gamma_ {k} 4 B L) \frac {\lambda_ {b}}{2 \ln \left(1 / \alpha_ {k}\right)} \gamma_ {k} \mathbb {E} \mathcal {L} \left(\beta_ {k}\right) \\ + \left(\gamma_ {k} ^ {2} (1 - 4 \gamma_ {k} B L) 2 4 B L + \frac {3 2 B L}{\ln \left(1 / \alpha_ {k}\right)}\right) \gamma_ {k} ^ {2} \mathbb {E} \mathcal {L} (\beta_ {k}). \\ \end{array} +$$ + +Thus, for $\gamma_{k} \leqslant \frac{c'}{LB\ln(1 / (\min_{i}\alpha_{k,i}))}$ , we have the first part of Proposition 1. + +Using Lemma 3, we then have: + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| \beta_ {k} - \beta_ {\alpha_ {k}} ^ {\star} \right\| ^ {2} \right] = \mathbb {E} \left[ \mathbf {1} _ {\left\{\mathcal {L} \left(\beta_ {k}\right) \leqslant \frac {1}{2 \lambda_ {\max }} \left(\alpha^ {2} \lambda_ {\min } ^ {+}\right) ^ {2} \right\}} \left\| \beta_ {k} - \beta_ {\alpha_ {k}} ^ {\star} \right\| ^ {2} \right] \\ + \mathbb {E} \left[ \mathbf {1} _ {\left\{\mathcal {L} \left(\beta_ {k}\right) > \frac {1}{2 \lambda_ {\max }} \left(\alpha^ {2} \lambda_ {\min } ^ {+}\right) ^ {2} \right\}} \left\| \beta_ {k} - \beta_ {\alpha_ {k}} ^ {\star} \right\| ^ {2} \right] \\ \leqslant \mathbb {E} \left[ \mathbf {1} _ {\left\{\mathcal {L} \left(\beta_ {k}\right) \leqslant \frac {1}{2 \lambda_ {\max }} \left(\alpha^ {2} \lambda_ {\min } ^ {+}\right) ^ {2} \right\}} 2 B \left(\alpha^ {2} \lambda_ {\min } ^ {+}\right) ^ {- 1} \mathcal {L} \left(\beta_ {k}\right) \right] \\ + \mathbb {P} \left(\mathcal {L} \left(\beta_ {k}\right) > \frac {1}{2 \lambda_ {\max }} \left(\alpha^ {2} \lambda_ {\min } ^ {+}\right) ^ {2}\right) \times 4 B ^ {2} \\ \leqslant 2 B \left(\alpha^ {2} \lambda_ {\min } ^ {+}\right) ^ {- 1} \mathbb {E} \left[ \mathcal {L} \left(\beta_ {k}\right) \right] \\ + \frac {\mathbb {E} [ \mathcal {L} (\beta_ {k}) ]}{\frac {1}{2 \lambda_ {\max}} 
(\alpha^2\lambda_{\min}^+)^2} \times 4B^2 \\ = 2B\left(\alpha^2\lambda_{\min}^+\right)^{-1}\left(1 + \frac{4B\lambda_{\max}}{\alpha^2\lambda_{\min}^+}\right)\mathbb{E}\left[\mathcal{L}(\beta_k)\right]. \\ \end{array}
$$

□

# G Proof of miscellaneous results mentioned in the main text

In this section, we provide proofs for results mentioned in the main text that are not directly related to the proof of Theorem 3.

# G.1 Proof of Proposition 3 and the sum of the losses

We start by proving the following proposition, stated as is in the main part of this paper. We then derive upper and lower bounds (of similar magnitude) on the sum of the losses.

Proposition 3. Let $\Lambda_b, \lambda_b > 0$ be the largest and smallest values, respectively, such that $\lambda_b H \preceq \mathbb{E}_{\mathcal{B}}[H_{\mathcal{B}}^2] \preceq \Lambda_b H$. For any stepsize $\gamma > 0$ satisfying $\gamma \leqslant \frac{c}{BL}$ (as in Theorem 2), initialisation $\alpha\mathbf{1}$ and batch size $b \in [n]$, the magnitude of the gain satisfies:

$$
\lambda_b \gamma^2 \sum_k \mathbb{E}\mathcal{L}(\beta_k) \leqslant \mathbb{E}\left[\|\operatorname{Gain}_\gamma\|_1\right] \leqslant 2\Lambda_b \gamma^2 \sum_k \mathbb{E}\mathcal{L}(\beta_k), \tag{10}
$$

where the expectation is over a uniform and independent sampling of the batches $(\mathcal{B}_k)_{k\geqslant 0}$.

Proof. From Lemma 5, for all $-1/2 \leqslant x \leqslant 1/2$, it holds that $x^2 \leqslant q(x) \leqslant 2x^2$.
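As a quick numerical aside, the bound of Lemma 5 can be sanity-checked. We assume here the explicit form $q(x) = -\ln(1 - x^2)$, i.e. $q(x) = \tilde{q}(x) + \tilde{q}(-x)$ with $\tilde{q}(x) = x - \ln(1+x)$; this form is our reading of the setting (consistent with the multiplicative updates used in the proof of Lemma 7) and is not stated explicitly in this excerpt.

```python
import numpy as np

# Sanity check of Lemma 5: x^2 <= q(x) <= 2 x^2 on [-1/2, 1/2],
# assuming q(x) = -ln(1 - x^2), i.e. q(x) = q~(x) + q~(-x)
# with q~(x) = x - ln(1 + x)  (our assumed form).
def q(x):
    return -np.log(1.0 - x**2)

xs = np.linspace(-0.5, 0.5, 1001)
assert np.all(xs**2 <= q(xs) + 1e-12), "lower bound x^2 <= q(x) fails"
assert np.all(q(xs) <= 2 * xs**2 + 1e-12), "upper bound q(x) <= 2 x^2 fails"
```

The upper bound is tightest near the endpoints: at $x = 1/2$, $q(1/2) = -\ln(3/4) \approx 0.288$, comfortably below $2x^2 = 1/2$.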
We have, using $\|\gamma_k \nabla \mathcal{L}_{\mathcal{B}_k}(\beta_k)\|_\infty \leqslant 1/2$ (which holds under the stepsize assumption):

$$
\begin{array}{l} \mathbb{E}\|\operatorname{Gain}_\gamma\|_1 = -\mathbb{E}\sum_i \ln\left(\frac{\alpha_{\infty,i}}{\alpha}\right) \\ = \sum_{\ell < \infty}\sum_i \mathbb{E}\, q\left(\gamma_\ell \nabla_i \mathcal{L}_{\mathcal{B}_\ell}(\beta_\ell)\right) \\ \leqslant 2\sum_{\ell < \infty}\sum_i \mathbb{E}\left(\gamma_\ell \nabla_i \mathcal{L}_{\mathcal{B}_\ell}(\beta_\ell)\right)^2 \\ = 2\sum_{\ell < \infty}\gamma_\ell^2\, \mathbb{E}\|\nabla\mathcal{L}_{\mathcal{B}_\ell}(\beta_\ell)\|_2^2 \\ \leqslant 4\Lambda_b \sum_{\ell < \infty}\gamma_\ell^2\, \mathbb{E}\mathcal{L}_{\mathcal{B}_\ell}(\beta_\ell), \end{array}
$$

since $\mathbb{E}\|\nabla\mathcal{L}_{\mathcal{B}_\ell}(\beta_\ell)\|_2^2 \leqslant 2\Lambda_b\, \mathbb{E}\mathcal{L}_{\mathcal{B}_\ell}(\beta_\ell)$. For the left-hand side we use $q(x) \geqslant x^2$ for $|x| \leqslant 1/2$ and $\mathbb{E}\|\nabla\mathcal{L}_{\mathcal{B}_\ell}(\beta_\ell)\|_2^2 \geqslant 2\lambda_b\, \mathbb{E}\mathcal{L}_{\mathcal{B}_\ell}(\beta_\ell)$. Finally, since $\mathcal{B}_\ell$ is independent of $\beta_\ell$, we have $\mathbb{E}\mathcal{L}_{\mathcal{B}_\ell}(\beta_\ell) = \mathbb{E}\mathcal{L}(\beta_\ell)$.

Proposition 15. For stepsizes $\gamma_k \equiv \gamma \leqslant \frac{c}{LB}$ (as in Theorem 2), we have:

$$
\sum_{k\geqslant 0}\gamma^2\mathbb{E}\mathcal{L}(\beta_k) = \Theta\left(\gamma\|\beta^\star\|_1\ln(1/\alpha)\right).
$$

Proof. We first lower bound $\sum_{k<\infty}\gamma_k^2\mathcal{L}_{\mathcal{B}_k}(\beta_k)$.
We have the following equality, which holds for any $k$:

$$
\begin{array}{l} D_{h_{k+1}}(\beta^\star, \beta_{k+1}) = D_{h_k}(\beta^\star, \beta_k) - 2\gamma\mathcal{L}_{\mathcal{B}_k}(\beta_k) + D_{h_{k+1}}(\beta_k, \beta_{k+1}) \\ + (h_k - h_{k+1})(\beta_k) - (h_k - h_{k+1})(\beta^\star), \end{array}
$$

leading to, by summing over $k \in \mathbb{N}$:

$$
\sum_{k<\infty} 2\gamma\mathcal{L}_{\mathcal{B}_k}(\beta_k) = D_{h_0}(\beta^\star, \beta_0) - \lim_{k\to\infty} D_{h_k}(\beta^\star, \beta_k) + \sum_{k<\infty} D_{h_{k+1}}(\beta_k, \beta_{k+1}) + \sum_{k<\infty}\left(h_k - h_{k+1}\right)(\beta_k) - \left(h_k - h_{k+1}\right)(\beta^\star).
$$

First, since $h_k \to h_\infty$ and $\beta_k \to \beta_\infty$, we have $\lim_{k\to\infty} D_{h_k}(\beta^\star, \beta_k) = 0$. Then, $D_{h_{k+1}}(\beta_k, \beta_{k+1}) \geqslant 0$.

Finally, $\left|(h_k - h_{k+1})(\beta_k) - (h_k - h_{k+1})(\beta^\star)\right| \leqslant 16BL_2\gamma^2\mathcal{L}_{\mathcal{B}_k}(\beta_k)$. Hence:

$$
\sum_{k<\infty} 2\gamma(1 + 16\gamma BL_2)\mathcal{L}_{\mathcal{B}_k}(\beta_k) \geqslant D_{h_0}(\beta^\star, \beta_0),
$$

and thus $\sum_{k<\infty}\gamma\mathcal{L}_{\mathcal{B}_k}(\beta_k) \geqslant D_{h_0}(\beta^\star, \beta_0)/4$ for $\gamma \leqslant c/(BL)$ (with $c \leqslant 1/16$). This gives the lower bound; the upper bound is a direct consequence of bounds proved in previous subsections.

Hence, we have that

$$
\gamma^2\sum_k \mathbb{E}\mathcal{L}(\beta_k) = \Theta\left(\gamma D_{h_0}(\beta^\star, \beta_0)\right).
$$

Noting that $D_{h_0}(\beta^\star, \beta_0) = h_0(\beta^\star) = \Theta\left(\ln(1/\alpha)\|\beta^\star\|_1\right)$ concludes the proof. □

# G.2 $\tilde{\beta}_0$ is negligible

In the following proposition, we show that $\tilde{\beta}_0$ is close to $\mathbf{0}$, and therefore one should think of the implicit regularization problem as $\beta_\infty^\star = \operatorname{argmin}_{\beta\in S}\psi_{\alpha_\infty}(\beta)$.

Proposition 16. Under the assumptions of Theorem 2,

$$
|\tilde{\beta}_0| \leqslant \alpha^2,
$$

where the inequality must be understood coordinate-wise.

Proof.

$$
\begin{array}{l} |\tilde{\beta}_0| = \frac{1}{2}\left|\alpha_+^2 - \alpha_-^2\right| \\ = \frac{1}{2}\alpha^2\left|\exp\left(-\sum_k q_+\left(\gamma_k\nabla\mathcal{L}(\beta_k)\right)\right) - \exp\left(-\sum_k q_-\left(\gamma_k\nabla\mathcal{L}(\beta_k)\right)\right)\right| \\ \leqslant \alpha^2, \end{array}
$$

where the inequality holds because $q_+(\gamma_k\nabla\mathcal{L}(\beta_k)) \geqslant 0$ and $q_-(\gamma_k\nabla\mathcal{L}(\beta_k)) \geqslant 0$ for all $k$. □

# G.3 Impact of stochasticity and linear scaling rule

Proposition 17. With probability $1 - 2ne^{-d/16} - 3/n^2$ over the inputs $x_i \sim_{\mathrm{iid}} \mathcal{N}(0, \sigma^2 I_d)$, we have

$$
c_1\frac{d\sigma^2}{b}(1 + o(1)) \leqslant \lambda_b \leqslant \Lambda_b \leqslant c_2\frac{d\sigma^2}{b}(1 + o(1)),
$$

so that under these assumptions,

$$
\sum_k \gamma_k\mathbb{E}\mathcal{L}(\beta_k) = \Theta\left(\frac{\gamma}{b}\sigma^2\|\beta^\star\|_1\ln(1/\alpha)\right).
$$

Proof.
The bound on $\lambda_b, \Lambda_b$ is a direct consequence of the concentration bound provided in Lemma 13. □

# G.4 (Stochastic) gradients at the initialisation

To understand the effects of the stochasticity and of the stepsize on the shape of $\mathrm{Gain}_\gamma$, we analyse a noiseless sparse recovery problem under the following standard Assumption 2 [10]; as is common in the sparse recovery literature, we also make the following Assumption 3 on the inputs.

Assumption 2. There exists an $s$-sparse ground truth vector $\beta_{\mathrm{sparse}}^\star$, where $s$ verifies $n = \Omega(s\ln(d))$, such that $y_i = \langle\beta_{\mathrm{sparse}}^\star, x_i\rangle$ for all $i \in [n]$.

Assumption 3. There exist $\delta, c_1, c_2 > 0$ such that for all $s$-sparse vectors $\beta$, there exists $\varepsilon \in \mathbb{R}^d$ such that $(X^\top X)\beta = \beta + \varepsilon$, where $\|\varepsilon\|_\infty \leqslant \delta\|\beta\|_2$ and $c_1\|\beta\|_2^2\mathbf{1} \leqslant \frac{1}{n}\sum_i x_i^2\langle x_i, \beta\rangle^2 \leqslant c_2\|\beta\|_2^2\mathbf{1}$.

The first part of Assumption 3 closely resembles the classical restricted isometry property (RIP) and is relevant for GD, while the second part is relevant for SGD. Such an assumption is not restrictive and holds with high probability for Gaussian inputs $\mathcal{N}(0, \sigma^2 I_d)$ (see Lemma 10 in the Appendix).

Based on the claim above, we analyse the shape of the (stochastic) gradient at initialisation. For GD and SGD it respectively writes, with $g_0 = \nabla\mathcal{L}_{i_0}(\beta_0)^2$ and $i_0 \sim \mathrm{Unif}([n])$:

$$
\nabla\mathcal{L}(\beta_0)^2 = [X^\top X\beta^\star]^2, \qquad \mathbb{E}_{i_0}[g_0] = \frac{1}{n}\sum_i x_i^2\langle x_i, \beta^\star\rangle^2.
$$

The following proposition then shows that while the initial stochastic gradients of SGD are homogeneous across coordinates, those of GD are not.

Proposition 18. Under Assumption 3, the squared full-batch gradient and the expected stochastic gradient at initialisation satisfy, for some $\varepsilon$ verifying $\|\varepsilon\|_\infty \ll \left\|\beta_{\mathrm{sparse}}^\star\right\|_\infty^2$:

$$
\nabla\mathcal{L}(\beta_0)^2 = \left(\beta_{\mathrm{sparse}}^\star\right)^2 + \varepsilon, \tag{25}
$$

$$
\mathbb{E}_{i_0}\left[\nabla\mathcal{L}_{i_0}(\beta_0)^2\right] = \Theta\left(\|\beta^\star\|_2^2\mathbf{1}\right). \tag{26}
$$

Proof of Proposition 18. Under Assumption 3, we have:

$$
\begin{array}{l} \nabla\mathcal{L}(\beta_0)^2 = \left(X^\top X\beta_{\mathrm{sparse}}^\star\right)^2 \\ = \left(\beta_{\mathrm{sparse}}^\star + \varepsilon\right)^2 \\ = \left(\beta_{\mathrm{sparse}}^\star\right)^2 + \varepsilon^2 + 2\varepsilon\beta_{\mathrm{sparse}}^\star. \end{array}
$$

We have $\left\|\varepsilon^2 + 2\varepsilon\beta_{\mathrm{sparse}}^\star\right\|_\infty \leqslant \|\varepsilon\|_\infty^2 + 2\|\varepsilon\|_\infty\left\|\beta_{\mathrm{sparse}}^\star\right\|_\infty$, and we conclude by using $\|\varepsilon\|_\infty \leqslant \delta\left\|\beta_{\mathrm{sparse}}^\star\right\|_2$.

Then,

$$
\mathbb{E}_{i\sim\mathrm{Unif}([n])}\left[\nabla\mathcal{L}_i(\beta_0)^2\right] = \frac{1}{n}\sum_i x_i^2\langle x_i, \beta_{\mathrm{sparse}}^\star\rangle^2,
$$

and we conclude using Assumption 3. □

Proof of Proposition 4. The proof proceeds as that of Proposition 18. □
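The contrast behind Proposition 18 can be illustrated numerically. The toy setup below (our own illustration, with Gaussian inputs and a noiseless sparse model) shows that the squared full-batch gradient at $\beta_0 = 0$ concentrates on the support of the sparse ground truth, while the averaged squared per-sample gradient is of the same order in every coordinate.

```python
import numpy as np

# Toy illustration of Proposition 18: full-batch vs stochastic gradients
# at initialisation, for a noiseless sparse recovery problem.
rng = np.random.default_rng(0)
n, d, s = 4000, 40, 2
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:s] = 1.0                  # s-sparse ground truth
y = X @ beta_star                    # noiseless labels (Assumption 2)

H = X.T @ X / n
full_grad_sq = (H @ beta_star) ** 2  # ~ (beta_star)^2 + eps  (eq. 25)
sgd_grad_sq = np.mean(X**2 * (X @ beta_star)[:, None] ** 2, axis=0)  # eq. 26

# Full-batch gradient: spiky, large on the support, small off it.
assert full_grad_sq[:s].min() > 10 * full_grad_sq[s:].max()
# Stochastic gradient: homogeneous, Theta(||beta_star||_2^2) everywhere.
assert sgd_grad_sq.max() < 5 * sgd_grad_sq.min()
```

For standard Gaussians the expected per-sample squared gradient is about $4$ on the support and $2$ off it (a bounded ratio), whereas the full-batch squared gradient is close to $1$ on the support and $O(1/n)$ off it.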
# G.5 Convergence of $\alpha_\infty$ and $\tilde{\beta}_0$ for $\gamma \to 0$

Proposition 19. Let $\tilde{\beta}_0(\gamma), \alpha_\infty(\gamma)$ be as defined in Theorem 1, for constant stepsizes $\gamma_k \equiv \gamma$. We have:

$$
\tilde{\beta}_0(\gamma) \to 0, \quad \boldsymbol{\alpha}_\infty \to \alpha\mathbf{1},
$$

as $\gamma \to 0$.

Proof. We have, as proved previously, that

$$
\begin{array}{l} \left\|\sum_k \gamma^2\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)^2\right\|_1 \leqslant \sum_k \gamma^2\left\|\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)^2\right\|_1 \\ = \sum_k \gamma^2\|\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)\|_2^2 \\ \leqslant 2L\gamma^2\sum_k\mathcal{L}_{\mathcal{B}_k}(\beta_k) \\ \leqslant 2L\gamma D_{h_0}(\beta^\star, \beta_0), \end{array}
$$

for $\gamma \leqslant \frac{c}{BL}$. Thus, $\sum_k\gamma^2\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)^2 \to 0$ as $\gamma \to 0$ (note that $\beta_k$ implicitly depends on $\gamma$, so that this result is not immediate).

Then, for $\gamma \leqslant \frac{c}{LB}$,

$$
\left\|\ln\left(\boldsymbol{\alpha}_\infty^2/\alpha^2\right)\right\|_1 \leqslant \sum_k\left\|q\left(\gamma\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)\right)\right\|_1 \leqslant 2\sum_k\gamma^2\left\|\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)^2\right\|_1,
$$

which tends to $0$ as $\gamma \to 0$. Similarly, $\left\|\ln(\alpha_{+,\infty}^2/\alpha^2)\right\|_1 \to 0$ and $\left\|\ln(\alpha_{-,\infty}^2/\alpha^2)\right\|_1 \to 0$ as $\gamma \to 0$, leading to $\tilde{\beta}_0(\gamma) \to 0$ as $\gamma \to 0$.
□

# H Technical lemmas

In this section we present a few technical lemmas, used and referred to throughout the preceding proofs.

Lemma 4. Let $\alpha_+, \alpha_- > 0$ and $x \in \mathbb{R}$, and $\beta = \alpha_+^2 e^x - \alpha_-^2 e^{-x}$. We have:

$$
\operatorname{arcsinh}\left(\frac{\beta}{2\alpha_+\alpha_-}\right) = x + \ln\left(\frac{\alpha_+}{\alpha_-}\right) = x + \operatorname{arcsinh}\left(\frac{\alpha_+^2 - \alpha_-^2}{2\alpha_+\alpha_-}\right).
$$

Proof. First,

$$
\begin{array}{l} \frac{\beta}{2\alpha_+\alpha_-} = \frac{1}{2}\left(\frac{\alpha_+}{\alpha_-}e^x - \left(\frac{\alpha_+}{\alpha_-}\right)^{-1}e^{-x}\right) \\ = \frac{e^{x + \ln(\alpha_+/\alpha_-)} - e^{-x - \ln(\alpha_+/\alpha_-)}}{2} \\ = \sinh\left(x + \ln(\alpha_+/\alpha_-)\right), \end{array}
$$

hence the result by taking the $\operatorname{arcsinh}$ of both sides. Note also that we have $\ln(\alpha_+/\alpha_-) = \operatorname{arcsinh}\left(\frac{\alpha_+^2 - \alpha_-^2}{2\alpha_+\alpha_-}\right)$.

Lemma 5. If $|x| \leqslant 1/2$, then $x^2 \leqslant q(x) \leqslant 2x^2$.

Lemma 6. On the $\ell_\infty$ ball of radius $B$, the quadratic loss function $\beta \mapsto \mathcal{L}(\beta)$ is $4\lambda_{\max}\max(B, \alpha^2)$-relatively smooth w.r.t. all the $h_k$'s.

Proof. We have:

$$
\nabla^2 h_k(\beta) = \operatorname{diag}\left(\frac{1}{2\sqrt{\alpha_k^4 + \beta^2}}\right) \succeq \operatorname{diag}\left(\frac{1}{2\sqrt{\alpha^4 + \beta^2}}\right),
$$

since $\alpha_k \leqslant \alpha$ component-wise.
Thus, $\nabla^2 h_k(\beta) \succeq \frac{1}{2}\min\left(\min_{1\leqslant i\leqslant d}\frac{1}{2|\beta_i|}, \frac{1}{2\alpha^2}\right)I_d = \frac{1}{\max(4\|\beta\|_\infty, 4\alpha^2)}I_d$, and $h_k$ is $\frac{1}{\max(4B, 4\alpha^2)}$-strongly convex on the $\ell_\infty$ ball of radius $B$. Since $\mathcal{L}$ is $\lambda_{\max}$-smooth over $\mathbb{R}^d$, we have our result. □

Lemma 7. For $k \geqslant 0$ and for all $\beta \in \mathbb{R}^d$,

$$
\left|h_{k+1}(\beta) - h_k(\beta)\right| \leqslant 8L_2\gamma_k^2\mathcal{L}_{\mathcal{B}_k}(\beta_k)\|\beta\|_\infty.
$$

Proof. We have $\alpha_{+,k+1}^2 = \alpha_{+,k}^2 e^{-\delta_{+,k}}$ and $\alpha_{-,k+1}^2 = \alpha_{-,k}^2 e^{-\delta_{-,k}}$, for $\delta_{+,k} = \tilde{q}(\gamma_k\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k))$ and $\delta_{-,k} = \tilde{q}(-\gamma_k\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k))$, and $\alpha_{k+1} = \alpha_k\exp(-\delta_k)$ where $\delta_k \coloneqq \delta_{+,k} + \delta_{-,k} = q(\gamma_k\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k))$.

To prove the result we will use that for $\beta \in \mathbb{R}^d$, we have $|(h_{k+1} - h_k)(\beta)| \leqslant \sum_{i=1}^d\int_0^{|\beta_i|}\left|\nabla_i h_{k+1}(x) - \nabla_i h_k(x)\right|\mathrm{d}x$.

First, using that $|\operatorname{arcsinh}(a) - \operatorname{arcsinh}(b)| \leqslant |\ln(a/b)|$ for $ab > 0$, we have that

$$
\left|\operatorname{arcsinh}\left(\frac{x}{\alpha_{k+1}^2}\right) - \operatorname{arcsinh}\left(\frac{x}{\alpha_k^2}\right)\right| \leqslant \ln\left(\frac{\alpha_k^2}{\alpha_{k+1}^2}\right) = \delta_k,
$$

since $\delta_k \geqslant 0$ due to our stepsize condition.

We now prove that $|\phi_{k+1} - \phi_k| \leqslant \frac{|\delta_{+,k} - \delta_{-,k}|}{2}$.
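As an aside, the elementary inequality $|\operatorname{arcsinh}(a) - \operatorname{arcsinh}(b)| \leqslant |\ln(a/b)|$ for $ab > 0$, used here and again below, can be checked numerically (a sketch of ours, not part of the proof; it follows from $\frac{\mathrm{d}}{\mathrm{d}t}\operatorname{arcsinh}(t) = 1/\sqrt{1+t^2} \leqslant 1/|t|$).

```python
import numpy as np

# Numerical check: |arcsinh(a) - arcsinh(b)| <= |ln(a/b)|
# whenever a and b have the same sign.
rng = np.random.default_rng(1)
for _ in range(1000):
    a, b = rng.uniform(1e-3, 1e3, size=2)
    for sgn in (1.0, -1.0):
        lhs = abs(np.arcsinh(sgn * a) - np.arcsinh(sgn * b))
        assert lhs <= abs(np.log(a / b)) + 1e-12
```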
We have $\phi_k = \operatorname{arcsinh}\left(\frac{\alpha_{+,k}^2 - \alpha_{-,k}^2}{2\alpha_{+,k}\alpha_{-,k}}\right)$ and hence,

$$
\left|\phi_{k+1} - \phi_k\right| = \left|\operatorname{arcsinh}\left(\frac{\alpha_{+,k}^2 - \alpha_{-,k}^2}{2\alpha_{+,k}\alpha_{-,k}}\right) - \operatorname{arcsinh}\left(\frac{\alpha_{+,k+1}^2 - \alpha_{-,k+1}^2}{2\alpha_{+,k+1}\alpha_{-,k+1}}\right)\right|.
$$

Then, assuming that $\alpha_{+,k,i} \geqslant \alpha_{-,k,i}$, we have:

$$
\frac{\alpha_{+,k+1,i}^2 - \alpha_{-,k+1,i}^2}{2\alpha_{+,k+1,i}\alpha_{-,k+1,i}} = e^{\delta_{k,i}/2}\,\frac{\alpha_{+,k,i}^2 e^{-\delta_{+,k,i}} - \alpha_{-,k,i}^2 e^{-\delta_{-,k,i}}}{2\alpha_{+,k,i}\alpha_{-,k,i}} \begin{cases} \leqslant e^{\frac{\left|\delta_{+,k,i} - \delta_{-,k,i}\right|}{2}}\,\dfrac{\alpha_{+,k,i}^2 - \alpha_{-,k,i}^2}{2\alpha_{+,k,i}\alpha_{-,k,i}}, \\ \geqslant e^{-\frac{\left|\delta_{+,k,i} - \delta_{-,k,i}\right|}{2}}\,\dfrac{\alpha_{+,k,i}^2 - \alpha_{-,k,i}^2}{2\alpha_{+,k,i}\alpha_{-,k,i}}. \end{cases}
$$

We thus have $\frac{\alpha_{+,k+1,i}^2 - \alpha_{-,k+1,i}^2}{2\alpha_{+,k+1,i}\alpha_{-,k+1,i}} \in \left[e^{-\frac{\left|\delta_{+,k,i} - \delta_{-,k,i}\right|}{2}}, e^{\frac{\left|\delta_{+,k,i} - \delta_{-,k,i}\right|}{2}}\right]\times\frac{\alpha_{+,k,i}^2 - \alpha_{-,k,i}^2}{2\alpha_{+,k,i}\alpha_{-,k,i}}$, and this holds similarly if $\alpha_{+,k,i} \leqslant \alpha_{-,k,i}$. Then, using $|\operatorname{arcsinh}(a) - \operatorname{arcsinh}(b)| \leqslant |\ln(a/b)|$, we obtain that:

$$
\left|\phi_{k+1} - \phi_k\right| = \left|\operatorname{arcsinh}\left(\frac{\alpha_{+,k}^2 - \alpha_{-,k}^2}{2\alpha_{+,k}\alpha_{-,k}}\right) - \operatorname{arcsinh}\left(\frac{\alpha_{+,k+1}^2 - \alpha_{-,k+1}^2}{2\alpha_{+,k+1}\alpha_{-,k+1}}\right)\right| \leqslant \frac{\left|\delta_{+,k} - \delta_{-,k}\right|}{2}.
$$

Wrapping things up, we have:

$$
\left|\nabla h_k(\beta) - \nabla h_{k+1}(\beta)\right| \leqslant \delta_k + \frac{\left|\delta_{+,k} - \delta_{-,k}\right|}{2} \leqslant 2\delta_k.
$$

This leads to the following bound:

$$
\left|h_{k+1}(\beta) - h_k(\beta)\right| \leqslant \left\langle 2\delta_k, |\beta|\right\rangle \leqslant 2\|\delta_k\|_1\|\beta\|_\infty.
$$

Recall that $\delta_k = q(\gamma_k\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k))$, hence from Lemma 5, if $\gamma_k\|\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)\|_\infty \leqslant 1/2$, we get that

$$
\left\|\delta_k\right\|_1 \leqslant 2\gamma_k^2\left\|\nabla\mathcal{L}_{\mathcal{B}_k}(\beta_k)\right\|_2^2 \leqslant 4L_2\gamma_k^2\mathcal{L}_{\mathcal{B}_k}(\beta_k).
+$$
+
+Putting things together, we obtain that
+
+$$
+\left| h_{k+1}(\beta) - h_{k}(\beta) \right| \leqslant \langle |2\delta_{k}|, |\beta| \rangle \leqslant 8 L_{2} \gamma_{k}^{2} \mathcal{L}_{\mathcal{B}_{k}}(\beta_{k}) \|\beta\|_{\infty}.
+$$
+
+# I Concentration inequalities for matrices
+
+In this last section of the appendix, we state and prove several concentration bounds for random vectors and matrices with (possibly uncentered) isotropic Gaussian inputs. These inequalities can easily be generalized to subgaussian random variables via more refined concentration bounds, and to non-isotropic subgaussian random variables [19], leading to a dependence on an effective dimension and on the subgaussian covariance matrix $\Sigma$ . We first state all the lemmas, and then prove them one after the other.
+
+The next two lemmas closely resemble the RIP assumption, first for centered and then for uncentered Gaussians.
+
+Lemma 8. Let $x_{1},\ldots ,x_{n}\in \mathbb{R}^{d}$ be i.i.d. random variables of law $\mathcal{N}(0,I_d)$ and $H = \frac{1}{n}\sum_{i = 1}^{n}x_{i}x_{i}^{\top}$ . Then, denoting by $\mathcal{C}$ the set of all $s$-sparse vectors $\beta \in \mathbb{R}^d$ satisfying $\| \beta \|_2\leqslant 1$ , there exist $C_4,C_5 > 0$ such that for any $\varepsilon > 0$ , if $n\geqslant C_4 s\ln(d)\varepsilon^{-2}$ ,
+
+$$
+\mathbb{P}\left(\sup_{\beta \in \mathcal{C}}\| H\beta - \beta \|_{\infty}\geqslant \varepsilon\right)\leqslant e^{-C_{5}n}.
+$$
+
+Lemma 9. Let $x_{1},\ldots ,x_{n}\in \mathbb{R}^{d}$ be i.i.d. random variables of law $\mathcal{N}(\mu ,\sigma^2 I_d)$ and $H = \frac{1}{n}\sum_{i = 1}^{n}x_{i}x_{i}^{\top}$ .
Then, denoting by $\mathcal{C}$ the set of all $s$-sparse vectors $\beta \in \mathbb{R}^d$ satisfying $\| \beta \|_2\leqslant 1$ , there exist $C_4,C_5 > 0$ such that for any $\varepsilon > 0$ , if $n\geqslant C_4 s\ln(d)\varepsilon^{-2}$ ,
+
+$$
+\mathbb{P}\left(\sup_{\beta \in \mathcal{C}} \left\| H\beta - \mu \langle \mu , \beta \rangle - \sigma^{2} \beta \right\|_{\infty} \geqslant \varepsilon\right) \leqslant e^{-C_{5} n}.
+$$
+
+We then provide two lemmas that estimate the mean Hessian of SGD.
+
+Lemma 10. Let $x_{1},\ldots ,x_{n}$ be i.i.d. random variables of law $\mathcal{N}(0,I_d)$ . Then, there exist $c_{1},c_{2} > 0$ such that with probability $1 - \frac{1}{d^2}$ and if $n = \Omega (s^{5 / 4}\ln (d))$ , we have for all $s$-sparse vectors $\beta$ :
+
+$$
+c_{1} \| \beta \|_{2}^{2} \mathbf{1} \leqslant \frac{1}{n} \sum_{i = 1}^{n} x_{i}^{2} \left\langle x_{i}, \beta \right\rangle^{2} \leqslant c_{2} \| \beta \|_{2}^{2} \mathbf{1},
+$$
+
+where the inequality is meant component-wise.
+
+Lemma 11. Let $x_{1},\ldots ,x_{n}$ be i.i.d. random variables of law $\mathcal{N}(\mu ,\sigma^2 I_d)$ . Then, there exist $c_{0},c_{1},c_{2} > 0$ such that with probability $1 - \frac{c_0}{d^2} - \frac{1}{nd}$ and if $n = \Omega (s^{5 / 4}\ln (d))$ and $\mu \geqslant 4\sigma \sqrt{\ln(d)}\mathbf{1}$ , we have for all $s$-sparse vectors $\beta$ :
+
+$$
+\frac{\mu^{2}}{2} \left(\langle \mu , \beta \rangle^{2} + \frac{1}{2} \sigma^{2} \| \beta \|_{2}^{2}\right) \leqslant \frac{1}{n} \sum_{i} x_{i}^{2} \langle x_{i}, \beta \rangle^{2} \leqslant 4 \mu^{2} \left(\langle \mu , \beta \rangle^{2} + 2 \sigma^{2} \| \beta \|_{2}^{2}\right),
+$$
+
+where the inequality is meant component-wise.
+
+Finally, the next two lemmas are used to estimate $\lambda_{b},\Lambda_{b}$ in our paper.
+
+Lemma 12. Let $x_{1},\ldots ,x_{n}\in \mathbb{R}^{d}$ be i.i.d. random variables of law $\mathcal{N}(\mu \mathbf{1},\sigma^2 I_d)$ .
Let $H = \frac{1}{n}\sum_{i = 1}^{n}x_{i}x_{i}^{\top}$ and $\tilde{H} = \frac{1}{n}\sum_{i = 1}^{n}\| x_i\|^2 x_ix_i^\top$ . There exist numerical constants $C_2,C_3 > 0$ such that
+
+$$
+\mathbb{P}\left(C_{2}\left(\mu^{2} + \sigma^{2}\right) d H \preceq \tilde{H} \preceq C_{3}\left(\mu^{2} + \sigma^{2}\right) d H\right) \geqslant 1 - 2 n e^{-d/16}.
+$$
+
+Lemma 13. Let $x_{1},\ldots ,x_{n}\in \mathbb{R}^{d}$ be i.i.d. random variables of law $\mathcal{N}(\mu \mathbf{1},\sigma^2 I_d)$ for some $\mu \in \mathbb{R}$ . Let $H = \frac{1}{n}\sum_{i = 1}^{n}x_{i}x_{i}^{\top}$ and for $1\leqslant b\leqslant n$ let $\tilde{H}_b = \mathbb{E}_{\mathcal{B}}\left[\left(\frac{1}{b}\sum_{i\in \mathcal{B}}x_{i}x_{i}^{\top}\right)^{2}\right]$ where $\mathcal{B}\subset [n]$ is sampled uniformly at random in $\{\mathcal{B}\subset [n]$ s.t. $|\mathcal{B}| = b\}$ . With probability $1 - 2ne^{-d / 16} - 3 / n^2$ , we have, for some numerical constants $c_{1},c_{2},c_{3},C > 0$ :
+
+$$
+\left(c_{1}\frac{d(\mu^{2} + \sigma^{2})}{b} - c_{2}\frac{(\sigma^{2} + \mu^{2})\ln(n)}{\sqrt{d}} - c_{3}\frac{\mu^{2} d}{n}\right) H \preceq \tilde{H}_{b} \preceq C\left(\frac{d(\mu^{2} + \sigma^{2})}{b} + \frac{(\sigma^{2} + \mu^{2})\ln(n)}{\sqrt{d}} + \mu^{2} d\right) H.
+$$
+
+Proof of Lemma 8. For $j \in [d]$ , we have:
+
+$$
+\begin{array}{l}
+(H\beta)_{j} = \frac{1}{n} \sum_{i = 1}^{n} x_{ij} \langle x_{i}, \beta \rangle \\
+= \frac{1}{n} \sum_{i = 1}^{n} \sum_{j' = 1}^{d} x_{ij} x_{ij'} \beta_{j'} \\
+= \frac{1}{n} \sum_{i = 1}^{n} x_{ij}^{2} \beta_{j} + \frac{1}{n} \sum_{i = 1}^{n} \sum_{j' \neq j} x_{ij} x_{ij'} \beta_{j'} \\
+= \frac{\beta_{j}}{n} \sum_{i = 1}^{n} x_{ij}^{2} + \frac{1}{n} \sum_{i = 1}^{n} x_{ij} \sum_{j' \neq j} x_{ij'} \beta_{j'}.
\\ \end{array}
+$$
+
+We thus notice that $\mathbb{E}[H\beta] = \beta$ , and
+
+$$
+(H\beta)_{j} = \beta_{j} + \frac{\beta_{j}}{n} \sum_{i = 1}^{n} \left(x_{ij}^{2} - 1\right) + \frac{1}{n} \sum_{i = 1}^{n} z_{i},
+$$
+
+where $z_{i} = x_{ij}\sum_{j'\neq j}x_{ij'}\beta_{j'}$ , and $\sum_{j'\neq j}x_{ij'}\beta_{j'}\sim \mathcal{N}(0,\| \beta \|^2 - \beta_j^2)$ with $\| \beta \|^2 - \beta_j^2\leqslant 1$ . Hence, $\beta_{j}(x_{ij}^{2} - 1) + z_{i}$ is a centered subexponential random variable (with a subexponential parameter of order 1). Thus, for $t\leqslant 1$ :
+
+$$
+\mathbb{P}\left(\left| \frac{\beta_{j}}{n} \sum_{i = 1}^{n} (x_{ij}^{2} - 1) + \frac{1}{n} \sum_{i = 1}^{n} z_{i} \right| \geqslant t\right) \leqslant 2 e^{-c n t^{2}}.
+$$
+
+Hence, using an $\varepsilon$-net of $\mathcal{C} = \{\beta \in \mathbb{R}^d : \| \beta \|_0 \leqslant s, \| \beta \|_2 \leqslant 1\}$ (of cardinality less than $d^s \times (C / \varepsilon)^s$ , and for $\varepsilon$ of order 1), we have, using the classical $\varepsilon$-net trick explained in [58, Chapter 9] or in Even and Massoulié [19, Appendix C]:
+
+$$
+\mathbb{P}\left(\sup_{\beta \in \mathcal{C}, j \in [d]} | (H\beta)_{j} - \beta_{j} | \geqslant t\right) \leqslant d \times d^{s} (C / \varepsilon)^{s} \times 2 e^{-c n t^{2}} = \exp\left(-c \ln(2) n t^{2} + (s + 1) \ln(d) + s \ln(C / \varepsilon)\right).
+$$
+
+Consequently, for $t = \varepsilon$ and if $n \geqslant C_4 s \ln(d) / \varepsilon^2$ , we have:
+
+$$
+\mathbb{P}\left(\sup_{\beta \in \mathcal{C}, j \in [d]} | (H\beta)_{j} - \beta_{j} | \geqslant t\right) \leqslant \exp\left(-C_{5} n t^{2}\right).
+$$
+
+Proof of Lemma 9. We write $x_{i} = \sigma z_{i} + \mu$ where $z_{i}\sim \mathcal{N}(0,I_{d})$ .
We have:
+
+$$
+\begin{array}{l}
+H\beta = \frac{1}{n} \sum_{i = 1}^{n} (\mu + \sigma z_{i}) \langle \mu + \sigma z_{i}, \beta \rangle \\
+= \mu \langle \mu , \beta \rangle + \frac{\sigma^{2}}{n} \sum_{i = 1}^{n} z_{i} \langle z_{i}, \beta \rangle + \frac{\sigma}{n} \sum_{i = 1}^{n} \mu \langle z_{i}, \beta \rangle + \frac{\sigma}{n} \sum_{i = 1}^{n} z_{i} \langle \mu , \beta \rangle \\
+= \mu \langle \mu , \beta \rangle + \frac{\sigma^{2}}{n} \sum_{i = 1}^{n} z_{i} \langle z_{i}, \beta \rangle + \sigma \mu \left\langle \frac{1}{n} \sum_{i = 1}^{n} z_{i}, \beta \right\rangle + \frac{\sigma \langle \mu , \beta \rangle}{n} \sum_{i = 1}^{n} z_{i}.
+\end{array}
+$$
+
+The first term is deterministic and is to be kept. The second one is of order $\sigma^2 \beta$ with high probability, using Lemma 8. Then, $\frac{1}{n}\sum_{i = 1}^{n} z_{i}\sim \mathcal{N}(0, I_{d}/n)$ , so that
+
+$$
+\mathbb{P}\left(\left| \left\langle \frac{1}{n} \sum_{i = 1}^{n} z_{i}, \beta \right\rangle \right| \geqslant t\right) \leqslant 2 e^{-n t^{2} / \left(2 \| \beta \|_{2}^{2}\right)},
+$$
+
+and
+
+$$
+\mathbb{P}\left(\left| \frac{1}{n} \sum_{i = 1}^{n} z_{ij} \right| \geqslant t\right) \leqslant 2 e^{-n t^{2} / 2}.
+$$
+
+Hence,
+
+$$
+\mathbb{P}\left(\left\| \frac{1}{n} \sum_{i = 1}^{n} z_{i} \right\|_{\infty} \geqslant t \ \text{ or } \ \sup_{\beta \in \mathcal{C}} \left| \left\langle \frac{1}{n} \sum_{i = 1}^{n} z_{i}, \beta \right\rangle \right| \geqslant t\right) \leqslant 4 e^{c s \ln(d)} e^{-n t^{2} / 2}.
+$$
+
+Thus, with probability $1 - Ce^{-n\varepsilon^2}$ and under the assumptions of Lemma 8, we have $\left\| H\beta - \mu \langle \mu, \beta \rangle - \sigma^2\beta \right\|_\infty \leqslant \varepsilon$ .
+
+Proof of Lemma 10. To ease notation, we assume that $\sigma = 1$ . We recall (O'Donnell [46], Chapter 9, and Tao [54]) that for i.i.d.
real random variables $a_1, \ldots, a_n$ that satisfy a tail inequality of the form
+
+$$
+\mathbb{P}\left(\left| a_{1} - \mathbb{E} a_{1} \right| \geqslant t\right) \leqslant C e^{-c t^{p}}, \tag{27}
+$$
+
+for $p < 1$ , then for all $\varepsilon > 0$ there exist $C', c' > 0$ such that for all $t$ ,
+
+$$
+\mathbb{P}\left(\left| \frac{1}{n} \sum_{i = 1}^{n} a_{i} - \mathbb{E} a_{1} \right| \geqslant t\right) \leqslant C' e^{-c' n t^{p - \varepsilon}}.
+$$
+
+We now expand $\frac{1}{n}\sum_{i = 1}^{n}x_i^2\langle x_i,\beta \rangle^2$ :
+
+$$
+\begin{array}{l}
+\frac{1}{n} \sum_{i = 1}^{n} x_{i}^{2} \langle x_{i}, \beta \rangle^{2} = \frac{1}{n} \sum_{i \in [n], k, \ell \in [d]} x_{i}^{2} x_{ik} x_{i\ell} \beta_{k} \beta_{\ell} \\
+= \frac{1}{n} \sum_{i \in [n], k \in [d]} x_{i}^{2} x_{ik}^{2} \beta_{k}^{2} + \frac{1}{n} \sum_{i \in [n], k \neq \ell \in [d]} x_{i}^{2} x_{ik} x_{i\ell} \beta_{k} \beta_{\ell}.
+\end{array}
+$$
+
+Thus, for $j\in [d]$ ,
+
+$$
+\left(\frac{1}{n} \sum_{i = 1}^{n} x_{i}^{2} \langle x_{i}, \beta \rangle^{2}\right)_{j} = \sum_{k \in [d]} \frac{\beta_{k}^{2}}{n} \sum_{i \in [n]} x_{ij}^{2} x_{ik}^{2} + \sum_{k \neq \ell \in [d]} \frac{\beta_{k} \beta_{\ell}}{n} \sum_{i \in [n]} x_{ij}^{2} x_{ik} x_{i\ell}.
+$$
+
+We notice that for all indices, all $x_{ij}^{2} x_{ik} x_{i\ell}$ and $x_{ij}^{2} x_{ik}^{2}$ satisfy the tail inequality Eq.
(27) for $C = 8$ , $c = 1/2$ and $p = 1/2$ , so that for $\varepsilon = 1/4$ :
+
+$$
+\mathbb{P}\left(\left| \frac{1}{n} \sum_{i = 1}^{n} x_{ij}^{2} x_{ik} x_{i\ell} \right| \geqslant t\right) \leqslant C' e^{-c' n t^{1/4}} \quad , \quad \mathbb{P}\left(\left| \frac{1}{n} \sum_{i = 1}^{n} x_{ij}^{2} x_{ik}^{2} - \mathbb{E}\left[ x_{ij}^{2} x_{ik}^{2} \right] \right| \geqslant t\right) \leqslant C' e^{-c' n t^{1/4}}.
+$$
+
+For $j \neq k$ , we have $\mathbb{E}\left[x_{ij}^{2}x_{ik}^{2}\right] = 1$ , while for $j = k$ , we have $\mathbb{E}\left[x_{ij}^{2}x_{ik}^{2}\right] = \mathbb{E}\left[x_{ij}^{4}\right] = 3$ . Hence,
+
+$$
+\mathbb{P}\left(\exists j, k \neq \ell , \ \left| \frac{1}{n} \sum_{i = 1}^{n} x_{ij}^{2} x_{ik} x_{i\ell} \right| \geqslant t \ \text{ or } \ \left| \frac{1}{n} \sum_{i = 1}^{n} x_{ij}^{2} x_{ik}^{2} - \mathbb{E}\left[ x_{ij}^{2} x_{ik}^{2} \right] \right| \geqslant t\right) \leqslant C' d^{2} e^{-c' n t^{1/4}}.
+$$
+
+Thus, with probability $1 - C' d^{2} e^{-c' n t^{1/4}}$ , for all $j\in [d]$ ,
+
+$$
+\left| \left(\frac{1}{n} \sum_{i = 1}^{n} x_{i}^{2} \langle x_{i}, \beta \rangle^{2}\right)_{j} - 2 \beta_{j}^{2} - \| \beta \|_{2}^{2} \right| \leqslant t \sum_{k, \ell} | \beta_{k} | | \beta_{\ell} | = t \| \beta \|_{1}^{2}.
+$$
+
+Using the classical technique of Baraniuk et al. [4] to obtain a union bound over all $s$-sparse vectors, we consider an $\varepsilon$-net of the set of $s$-sparse vectors of $\ell^2$-norm smaller than 1. This $\varepsilon$-net is of cardinality less than $(C_0 / \varepsilon)^s d^s$ , and we only need to take $\varepsilon$ of order 1 to obtain the result for all $s$-sparse vectors.
This leads to:
+
+$$
+\mathbb{P}\left(\exists \beta \in \mathbb{R}^{d} \ s\text{-sparse with } \| \beta \|_{2} \leqslant 1, \ \exists j \in [d], \quad \left| \left(\frac{1}{n} \sum_{i = 1}^{n} x_{i}^{2} \langle x_{i}, \beta \rangle^{2}\right)_{j} - 2 \beta_{j}^{2} - \| \beta \|_{2}^{2} \right| \geqslant t \| \beta \|_{1}^{2}\right) \leqslant C' d^{2} e^{c_{1} s + s \ln(d)} e^{-c' n t^{1/4}}.
+$$
+
+This probability is equal to $C' / d^2$ for $t = \left(\frac{(s + 4)\ln(d) + c_1 s}{c' n}\right)^4$ . We conclude that with probability $1 - C' / d^2$ , all $s$-sparse vectors $\beta$ satisfy:
+
+$$
+\left| \left(\frac{1}{n} \sum_{i = 1}^{n} x_{i}^{2} \langle x_{i}, \beta \rangle^{2}\right)_{j} - 2 \beta_{j}^{2} - \| \beta \|_{2}^{2} \right| \leqslant \left(\frac{(s + 4) \ln(d) + c_{1} s}{c' n}\right)^{4} \| \beta \|_{1}^{2} \leqslant \left(\frac{(s + 4) \ln(d) + c_{1} s}{c' n}\right)^{4} s \| \beta \|_{2}^{2},
+$$
+
+and the RHS is smaller than $\| \beta \|_2^2 / 2$ for $n = \Omega (s^{5/4}\ln (d))$ .
+
+Proof of Lemma 11. We write $x_{i} = \mu + \sigma z_{i}$ where $z_{i} \sim \mathcal{N}(0, I_d)$ . We have:
+
+$$
+\mathbb{P}\left(\exists i \in [n], \exists j \in [d], \ | z_{ij} | \geqslant t\right) \leqslant e^{\ln(nd) - t^{2}/2} = \frac{1}{nd},
+$$
+
+for $t = 2\sqrt{\ln(nd)}$ . Thus, if $\mu \geqslant 4\sigma \sqrt{\ln(nd)}$ , with probability $1 - \frac{1}{nd}$ we have $\frac{\mu}{2} \leqslant x_{ij} \leqslant 2\mu$ for all $i, j$ , so that
+
+$$
+\frac{\mu^{2}}{2n} \sum_{i} \langle x_{i}, \beta \rangle^{2} \leqslant \frac{1}{n} \sum_{i} x_{i}^{2} \langle x_{i}, \beta \rangle^{2} \leqslant \frac{4\mu^{2}}{n} \sum_{i} \langle x_{i}, \beta \rangle^{2}.
+$$
+
+Then, $\langle x_i,\beta \rangle \sim \mathcal{N}(\langle \mu ,\beta \rangle ,\sigma^2\| \beta \|_2^2)$ .
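As a quick sanity check (not part of the proof), the projection law just stated — a Gaussian vector projected onto a fixed sparse direction is a one-dimensional Gaussian with the stated mean and variance — can be verified by Monte Carlo; all sizes below are arbitrary toy choices:

```python
import numpy as np

# For x ~ N(mu, sigma^2 I_d) and a fixed s-sparse beta,
# <x, beta> should be N(<mu, beta>, sigma^2 ||beta||_2^2).
rng = np.random.default_rng(0)
d, n, sigma = 20, 200_000, 0.5
mu = rng.normal(size=d)
beta = np.zeros(d)
beta[:3] = [0.6, -0.8, 0.1]  # an s-sparse direction with s = 3

x = mu + sigma * rng.normal(size=(n, d))  # n i.i.d. samples of N(mu, sigma^2 I_d)
proj = x @ beta

th_mean, th_var = mu @ beta, sigma**2 * (beta @ beta)
print(abs(proj.mean() - th_mean), abs(proj.var() - th_var))
```

The empirical mean and variance of the projections match the theoretical values up to the usual $O(1/\sqrt{n})$ Monte Carlo error.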
For now, we assume that $\| \beta \|_2 = 1$ . We have $\mathbb{P}(|\langle x_i,\beta \rangle^2 - \langle \mu ,\beta \rangle^2 - \sigma^2\| \beta \|_2^2 | \geqslant t) \leqslant Ce^{-ct / \sigma^2}$ , and for $t\leqslant 1$ , using concentration of subexponential random variables [58]:
+
+$$
+\mathbb{P}\left(\left| \frac{1}{n} \sum_{i} \langle x_{i}, \beta \rangle^{2} - \langle \mu , \beta \rangle^{2} - \sigma^{2} \| \beta \|_{2}^{2} \right| \geqslant t\right) \leqslant C' e^{-n c' t^{2} / \sigma^{4}},
+$$
+
+and using the $\varepsilon$-net trick of Baraniuk et al. [4],
+
+$$
+\mathbb{P}\left(\sup_{\beta \in \mathcal{C}} \left| \frac{1}{n} \sum_{i} \langle x_{i}, \beta \rangle^{2} - \langle \mu , \beta \rangle^{2} - \sigma^{2} \| \beta \|_{2}^{2} \right| \geqslant t\right) \leqslant C' e^{s \ln(d) - n c' t^{2} / \sigma^{4}} = \frac{C'}{d^{2}},
+$$
+
+for $t = \sigma^2\| \beta \|_2^2\sqrt{\frac{2(cs + 2)\ln(d)}{n}}$ . Consequently, we have, with probability $1 - \frac{C'}{d^2} - \frac{1}{nd}$ :
+
+$$
+\frac{\mu^{2}}{2} \left(\langle \mu , \beta \rangle^{2} + \frac{1}{2} \sigma^{2} \| \beta \|_{2}^{2}\right) \leqslant \frac{1}{n} \sum_{i} x_{i}^{2} \langle x_{i}, \beta \rangle^{2} \leqslant 4 \mu^{2} \left(\langle \mu , \beta \rangle^{2} + 2 \sigma^{2} \| \beta \|_{2}^{2}\right).
+$$
+
+Proof of Lemma 12.
First, we write $x_{i} = \mu \mathbf{1} + \sigma z_{i}$ , where $z_{i}\sim \mathcal{N}(0,I)$ , leading to:
+
+$$
+\frac{1}{n} \sum_{i \in [n]} \| x_{i} \|_{2}^{2} x_{i} x_{i}^{\top} = \frac{1}{n} \sum_{i \in [n]} \left(\sigma^{2} \| z_{i} \|_{2}^{2} + d \mu^{2} + 2 \sigma \mu \langle \mathbf{1}, z_{i} \rangle\right) x_{i} x_{i}^{\top}.
+$$
+
+We use concentration of $\chi_d^2$ random variables around $d$ :
+
+$$
+\mathbb{P}\left(\chi_{d}^{2} \geqslant d + 2t + 2\sqrt{dt}\right) \leqslant e^{-t} \quad \text{and} \quad \mathbb{P}\left(\chi_{d}^{2} \leqslant d - 2\sqrt{dt}\right) \leqslant e^{-t},
+$$
+
+so that for all $i \in [n]$ ,
+
+$$
+\mathbb{P}\left(\| z_{i} \|_{2}^{2} \notin [d - 2\sqrt{dt}, d + 2t + 2\sqrt{dt}]\right) \leqslant 2 e^{-t}.
+$$
+
+Thus,
+
+$$
+\mathbb{P}(\forall i \in [n], \ \| z_{i} \|_{2}^{2} \in [d - 2\sqrt{dt}, d + 2t + 2\sqrt{dt}]) \geqslant 1 - 2 n e^{-t}.
+$$
+
+Taking $t = d/16$ ,
+
+$$
+\mathbb{P}(\forall i \in [n], \ \| z_{i} \|_{2}^{2} \in [\tfrac{d}{2}, 13d/8]) \geqslant 1 - 2 n e^{-d/16}.
+$$
+
+Then, for all $i$ , $\langle \mathbf{1}, z_i \rangle$ is of law $\mathcal{N}(0, d)$ , so that $\mathbb{P}(|\langle \mathbf{1}, z_i \rangle| \geqslant t) \leqslant 2e^{-t^2/(2d)}$ and
+
+$$
+\mathbb{P}\left(\exists i \in [n], \ | \langle \mathbf{1}, z_{i} \rangle | \geqslant t\right) \leqslant 2 n e^{-\frac{t^{2}}{2d}}.
+$$
+
+Taking $t = \sqrt{2} d^{3/4}$ ,
+
+$$
+\mathbb{P}\left(\exists i \in [n], \ | \langle \mathbf{1}, z_{i} \rangle | \geqslant \sqrt{2} d^{3/4}\right) \leqslant 2 n e^{-d^{1/2}}.
+$$
+
+Thus, with probability $1 - 2n(e^{-d/16} + e^{-\sqrt{d}})$ , we have for all $i \in [n]$ that $|\langle \mathbf{1}, z_i \rangle| \leqslant \sqrt{2}\, d^{3/4}$ and $\| z_i \|_2^2 \in [\frac{d}{2}, 13d/8]$ , so that
+
+$$
+\left(\frac{d}{2} \sigma^{2} + d \mu^{2} - 2\sqrt{2}\, \mu \sigma d^{3/4}\right) H \preceq \tilde{H} \preceq \left(\frac{13d}{8} \sigma^{2} + d \mu^{2} + 2\sqrt{2}\, \mu \sigma d^{3/4}\right) H,
+$$
+
+leading to the desired result.
+
+Proof of Lemma 13.
We have: + +$$ +\begin{array}{l} \tilde {H} _ {b} = \mathbb {E} \left[ \frac {1}{b ^ {2}} \sum_ {i, j \in \mathcal {B}} \langle x _ {i}, x _ {j} \rangle x _ {i} x _ {j} ^ {\top} \right] \\ = \mathbb {E} \left[ \frac {1}{b ^ {2}} \sum_ {i \in \mathcal {B}} \| x _ {i} \| _ {2} ^ {2} x _ {i} x _ {i} ^ {\top} + \frac {1}{b ^ {2}} \sum_ {i, j \in \mathcal {B}, i \neq j} \langle x _ {i}, x _ {j} \rangle x _ {i} x _ {j} ^ {\top} \right] \\ = \frac {1}{b ^ {2}} \sum_ {i \in [ n ]} \mathbb {P} (i \in \mathcal {B}) \| x _ {i} \| _ {2} ^ {2} x _ {i} x _ {i} ^ {\top} + \frac {1}{b ^ {2}} \sum_ {i \neq j} \mathbb {P} (i, j \in \mathcal {B}) \langle x _ {i}, x _ {j} \rangle x _ {i} x _ {j} ^ {\top}. \\ \end{array} +$$ + +Then, since $\mathbb{P}(i\in \mathcal{B}) = \frac{b}{n}$ and $\mathbb{P}(i,j\in \mathcal{B}) = \frac{b(b - 1)}{n(n - 1)}$ for $i\neq j$ , we get that: + +$$ +\tilde {H} _ {b} = \frac {1}{b n} \sum_ {i \in [ n ]} \| x _ {i} \| _ {2} ^ {2} x _ {i} x _ {i} ^ {\top} + \frac {(b - 1)}{b n (n - 1)} \sum_ {i \neq j} \langle x _ {i}, x _ {j} \rangle x _ {i} x _ {j} ^ {\top}. +$$ + +Using Lemma 12, the first term satisfies: + +$$ +\mathbb {P} \Big (\frac {d (\mu^ {2} + \sigma^ {2})}{b} C _ {2} H \preceq \frac {1}{b n} \sum_ {i \in [ n ]} \| x _ {i} \| _ {2} ^ {2} x _ {i} x _ {i} ^ {\top} \preceq \frac {d (\mu^ {2} + \sigma^ {2})}{b} C _ {3} H \Big) \geqslant 1 - 2 n e ^ {- d / 1 6}. +$$ + +We now show that the second term is of smaller order. 
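Before bounding it, the exact batch-expectation identity for $\tilde{H}_b$ derived above (which holds before any concentration argument) can be verified numerically by enumerating all batches of size $b$; the sizes below are toy values chosen only to make exhaustive enumeration cheap:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, b, d = 6, 3, 4
x = rng.normal(size=(n, d))

# Left-hand side: E_B[(1/b sum_{i in B} x_i x_i^T)^2] over uniform batches |B| = b.
subsets = list(itertools.combinations(range(n), b))
lhs = np.zeros((d, d))
for B in subsets:
    M = sum(np.outer(x[i], x[i]) for i in B) / b
    lhs += M @ M
lhs /= len(subsets)

# Right-hand side: closed form using P(i in B) = b/n, P(i,j in B) = b(b-1)/(n(n-1)).
rhs = sum((x[i] @ x[i]) * np.outer(x[i], x[i]) for i in range(n)) / (b * n)
rhs = rhs + (b - 1) / (b * n * (n - 1)) * sum(
    (x[i] @ x[j]) * np.outer(x[i], x[j])
    for i in range(n) for j in range(n) if i != j
)
print(np.max(np.abs(lhs - rhs)))  # agreement up to floating-point error
```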
Writing $x_{i} = \mu \mathbf{1} + \sigma z_{i}$ where $z_{i}\sim \mathcal{N}(0,I_{d})$ , we have:
+
+$$
+\begin{array}{l}
+\frac{(b-1)}{bn(n-1)} \sum_{i \neq j} \langle x_{i}, x_{j} \rangle x_{i} x_{j}^{\top} = \frac{(b-1)}{bn(n-1)} \sum_{i \neq j} \left(\sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle + \mu^{2} d\right) x_{i} x_{j}^{\top} \\
+= \frac{(b-1)}{bn(n-1)} \sum_{i \neq j} \left(\sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle\right) x_{i} x_{j}^{\top} + \frac{(b-1)}{bn(n-1)} \mu^{2} d \sum_{i \neq j} x_{i} x_{j}^{\top}.
+\end{array}
+$$
+
+For $i \neq j$ , $\langle z_i, z_j \rangle = \sum_{k=1}^d z_{ik} z_{jk} = \sum_{k=1}^d a_k$ where $a_k = z_{ik} z_{jk}$ satisfies $\mathbb{E}a_k = 0$ , $\mathbb{E}a_k^2 = 1$ and $\mathbb{P}(a_k \geqslant t) \leqslant 2\mathbb{P}(|z_{ik}| \geqslant \sqrt{t}) \leqslant 4e^{-t/2}$ . Hence, $a_k$ is a centered subexponential random variable. Using concentration of subexponential random variables [58], for $t \leqslant 1$ ,
+
+$$
+\mathbb{P}\left(\frac{1}{d} | \langle z_{i}, z_{j} \rangle | \geqslant t\right) \leqslant 2 e^{-c d t^{2}}.
+$$
+
+Thus,
+
+$$
+\mathbb{P}\left(\forall i \neq j, \ \frac{1}{d} | \langle z_{i}, z_{j} \rangle | \leqslant t\right) \geqslant 1 - n (n - 1) e^{-c d t^{2}}.
+$$
+
+Then, taking $t = d^{-1/2} \cdot 4\ln(n)/c$ we have:
+
+$$
+\mathbb{P}\left(\forall i \neq j, \ \frac{1}{d} | \langle z_{i}, z_{j} \rangle | \leqslant \frac{4 \ln(n)}{c \sqrt{d}}\right) \geqslant 1 - \frac{1}{n^{2}}.
+$$
+
+For $i\in [n]$ , $\langle \mathbf{1},z_i\rangle \sim \mathcal{N}(0,d)$ , so that $\mathbb{P}(|\langle \mathbf{1},z_i\rangle |\geqslant t)\leqslant 2e^{-t^2 /(2d)}$ , and
+
+$$
+\mathbb{P}(\forall i \in [n], \ | \langle \mathbf{1}, z_{i} \rangle | \leqslant t) \geqslant 1 - 2 n e^{-t^{2} / (2d)} \geqslant 1 - \frac{2}{n^{2}},
+$$
+
+for $t = 3\sqrt{d}\ln (n)$ . Hence, with probability $1 - 3 / n^2$ , for all $i \neq j$ we have $|\sigma^2\langle z_i,z_j\rangle +\sigma \mu \langle \mathbf{1},z_i + z_j\rangle |\leqslant (\sigma^2 +\sigma \mu)C\ln (n) / \sqrt{d}$ .
+
+Now,
+
+$$
+\begin{array}{l}
+\frac{(b-1)}{bn(n-1)} \sum_{i \neq j} \left(\sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle\right) x_{i} x_{j}^{\top} = \frac{(b-1)}{bn(n-1)} \sum_{i < j} \left(\sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle\right) \left(x_{i} x_{j}^{\top} + x_{j} x_{i}^{\top}\right) \\
+\preceq \frac{(b-1)}{bn(n-1)} \sum_{i < j} \left| \sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle \right| \left(x_{i} x_{i}^{\top} + x_{j} x_{j}^{\top}\right),
+\end{array}
+$$
+
+where we used $x_{i}x_{j}^{\top} + x_{j}x_{i}^{\top} \preceq x_{i}x_{i}^{\top} + x_{j}x_{j}^{\top}$ .
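The PSD inequality invoked here holds because $x_ix_i^\top + x_jx_j^\top - x_ix_j^\top - x_jx_i^\top = (x_i - x_j)(x_i - x_j)^\top \succeq 0$; a quick numerical check on arbitrary vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
xi, xj = rng.normal(size=5), rng.normal(size=5)

diff = np.outer(xi, xi) + np.outer(xj, xj) - np.outer(xi, xj) - np.outer(xj, xi)
# The difference factors as a rank-one PSD matrix (x_i - x_j)(x_i - x_j)^T.
factored = np.outer(xi - xj, xi - xj)

print(np.allclose(diff, factored), np.linalg.eigvalsh(diff).min())
```

All eigenvalues of the difference are nonnegative (up to rounding), confirming the PSD order.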
Thus,
+
+$$
+\begin{array}{l}
+\frac{(b-1)}{bn(n-1)} \sum_{i \neq j} \left(\sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle\right) x_{i} x_{j}^{\top} \preceq \sup_{i \neq j} \left| \sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle \right| \times \frac{(b-1)}{bn(n-1)} \sum_{i < j} \left(x_{i} x_{i}^{\top} + x_{j} x_{j}^{\top}\right) \\
+= \sup_{i \neq j} \left| \sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle \right| \times \frac{b-1}{bn} \sum_{i = 1}^{n} x_{i} x_{i}^{\top} \\
+= \sup_{i \neq j} \left| \sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle \right| \times \frac{b-1}{b} H.
+\end{array}
+$$
+
+Similarly, we have
+
+$$
+\frac{(b-1)}{bn(n-1)} \sum_{i \neq j} \left(\sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle\right) x_{i} x_{j}^{\top} \succeq -\sup_{i \neq j} \left| \sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle \right| \times \frac{b-1}{b} H.
+$$
+
+Hence, with probability $1 - 3 / n^2$ ,
+
+$$
+-\frac{(\sigma^{2} + \sigma \mu) C \ln(n)}{\sqrt{d}} \cdot \frac{b-1}{b} H \preceq \frac{(b-1)}{bn(n-1)} \sum_{i \neq j} \left(\sigma^{2} \langle z_{i}, z_{j} \rangle + \sigma \mu \langle \mathbf{1}, z_{i} + z_{j} \rangle\right) x_{i} x_{j}^{\top} \preceq \frac{(\sigma^{2} + \sigma \mu) C \ln(n)}{\sqrt{d}} \cdot \frac{b-1}{b} H.
+$$
+
+We have thus shown that this term (the middle term of the above inequality) is of smaller order.
+
+We are hence left with $\frac{(b - 1)}{bn(n - 1)}\mu^2 d\sum_{i\neq j}x_ix_j^\top$ .
Denoting $\bar{x} = \frac{1}{n}\sum_{i}x_{i}$ , we have $\frac{1}{n^2}\sum_{i\neq j}x_ix_j^\top = \frac{1}{n^2}\sum_{i,j}x_ix_j^\top - \frac{1}{n^2}\sum_i x_ix_i^\top = \bar{x}\bar{x}^\top - \frac{1}{n}H$ , so that:
+
+$$
+\frac{(b-1)}{bn(n-1)} \mu^{2} d \sum_{i \neq j} x_{i} x_{j}^{\top} = \frac{(b-1)n}{b(n-1)} \mu^{2} d \left(\bar{x}\bar{x}^{\top} - \frac{1}{n} H\right).
+$$
+
+We note that $nH = \sum_{i} x_{i}x_{i}^{\top} = \frac{1}{n-1}\sum_{i < j}\left(x_{i}x_{i}^{\top} + x_{j}x_{j}^{\top}\right) \succeq \frac{1}{n-1}\sum_{i < j}\left(x_{i}x_{j}^{\top} + x_{j}x_{i}^{\top}\right) = \frac{1}{n-1}\left(n^{2}\bar{x}\bar{x}^{\top} - nH\right)$ , using $x_{i}x_{i}^{\top} + x_{j}x_{j}^{\top} \succeq x_{i}x_{j}^{\top} + x_{j}x_{i}^{\top}$ ; rearranging gives $H \succeq \bar{x}\bar{x}^{\top} \succeq 0$ , and:
+
+$$
+-\frac{(b-1)n}{b(n-1)} \mu^{2} d \, \frac{1}{n} H \preceq \frac{(b-1)}{bn(n-1)} \mu^{2} d \sum_{i \neq j} x_{i} x_{j}^{\top} \preceq \frac{(b-1)n}{b(n-1)} \mu^{2} d \left(1 - 1/n\right) H.
+$$
+
+We are now able to wrap everything together.
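Before wrapping up, the two matrix facts just used, $\sum_{i\neq j} x_i x_j^\top = n^2 \bar{x}\bar{x}^\top - \sum_i x_i x_i^\top$ and $H \succeq \bar{x}\bar{x}^\top \succeq 0$, can be checked numerically on toy data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 7, 4
x = rng.normal(size=(n, d))
xbar = x.mean(axis=0)

gram = x.T @ x  # sum_i x_i x_i^T
H = gram / n
cross = sum(np.outer(x[i], x[j]) for i in range(n) for j in range(n) if i != j)

print(np.allclose(cross, n**2 * np.outer(xbar, xbar) - gram))
print(np.linalg.eigvalsh(H - np.outer(xbar, xbar)).min())  # nonnegative up to rounding
```

The second check succeeds because $H - \bar{x}\bar{x}^\top = \frac{1}{n}\sum_i (x_i - \bar{x})(x_i - \bar{x})^\top$ is an empirical covariance, hence PSD.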
With probability $1 - 2ne^{-d / 16} - 3 / n^{2}$ , we have, for some numerical constants $c_{1},c_{2},c_{3},C > 0$ :
+
+$$
+\left(c_{1}\frac{d(\mu^{2} + \sigma^{2})}{b} - c_{2}\frac{(\sigma^{2} + \mu^{2})\ln(n)}{\sqrt{d}} - c_{3}\frac{\mu^{2} d}{n}\right) H \preceq \tilde{H}_{b} \preceq C\left(\frac{d(\mu^{2} + \sigma^{2})}{b} + \frac{(\sigma^{2} + \mu^{2})\ln(n)}{\sqrt{d}} + \mu^{2} d\right) H.
+$$
+
+□
\ No newline at end of file
diff --git a/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/images.zip b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a8569e50d076476244d7da2117bf6ef3929b6bed
--- /dev/null
+++ b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c4029423e5040a677ddf5f2819072dcd5e2b125c602806ae8bd4a5b2dd65856
+size 2323661
diff --git a/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/layout.json b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ea20db79bbd225aaeae0bbede08a42c284d879b0
--- /dev/null
+++ b/sgdoverdiagonallinearnetworksimplicitbiaslargestepsizesandedgeofstability/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08eef8faf336bc83be933298a332026f313fb135d4fa62c9c67ec90bcc399a30
+size 2362238
diff --git a/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/ecbde937-119a-421c-bc7d-4a093edc93d8_content_list.json b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/ecbde937-119a-421c-bc7d-4a093edc93d8_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..246f9a6071fea8e2c73f3db5ff9028000b74e9ac
--- /dev/null
+++
b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/ecbde937-119a-421c-bc7d-4a093edc93d8_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55cd2153574b9a06d1f4ddd52c96dfe41edc4ec90f2cd54706c79b2e96caaeaf +size 164815 diff --git a/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/ecbde937-119a-421c-bc7d-4a093edc93d8_model.json b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/ecbde937-119a-421c-bc7d-4a093edc93d8_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2aa61ec4bc3795d97c3963df574e2d9bf5cc3a44 --- /dev/null +++ b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/ecbde937-119a-421c-bc7d-4a093edc93d8_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fae4747860de87771b25daf1649471ac811b7bc6e384b06f5d28e81e9d993bfa +size 193608 diff --git a/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/ecbde937-119a-421c-bc7d-4a093edc93d8_origin.pdf b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/ecbde937-119a-421c-bc7d-4a093edc93d8_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d99784e12ddda46fbbbdd6c34cdb239c7cfc7c56 --- /dev/null +++ b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/ecbde937-119a-421c-bc7d-4a093edc93d8_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e7acf9b80e5da6c7fed861c7398ad19b05b6048ffe6afe9eb106e35ea3bb87a +size 1270395 diff --git a/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/full.md b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..13fbb439e5f771eab3b4aca1b61189fdea80f2d8 --- /dev/null +++ 
b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/full.md @@ -0,0 +1,889 @@ +# $\mathbf{A}^{2}\mathbf{C}\mathbf{i}\mathbf{D}^{2}$ : Accelerating Asynchronous Communication in Decentralized Deep Learning + +Adel Nabli + +Concordia University, Mila + +Sorbonne University, ISIR, CNRS + +adel.nabli@sorbonne-universite.fr + +Eugene Belilovsky + +Concordia University, Mila + +Edouard Oyallon + +Sorbonne University, ISIR, CNRS + +# Abstract + +Distributed training of Deep Learning models has been critical to many recent successes in the field. Current standard methods primarily rely on synchronous centralized algorithms which induce major communication bottlenecks and synchronization locks at scale. Decentralized asynchronous algorithms are emerging as a potential alternative but their practical applicability still lags. In order to mitigate the increase in communication cost that naturally comes with scaling the number of workers, we introduce a principled asynchronous, randomized, gossip-based optimization algorithm which works thanks to a continuous local momentum named $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ . Our method allows each worker to continuously process mini-batches without stopping, and run a peer-to-peer averaging routine in parallel, reducing idle time. In addition to inducing a significant communication acceleration at no cost other than adding a local momentum variable, minimal adaptation is required to incorporate $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ into standard asynchronous approaches. Our theoretical analysis proves accelerated rates compared to previous asynchronous decentralized baselines, and we empirically show that using our $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ momentum significantly decreases communication costs in poorly connected networks.
In particular, we show consistent improvement on the ImageNet dataset using up to 64 asynchronous workers (A100 GPUs) and various communication network topologies. + +# 1 Introduction + +As Deep Neural Networks (DNNs) and their training datasets become larger and more complex, the computational demands and the need for efficient training schemes continue to escalate. Distributed training methods offer a solution by enabling the parallel optimization of model parameters across multiple workers. Yet, many of the current distributed methods in use are synchronous, and have significantly influenced the design of cluster computing environments. Thus, both the environments and algorithms rely heavily on high synchronicity in machine computations and near-instantaneous communication in high-bandwidth networks, favoring the adoption of centralized algorithms [7]. + +However, several studies [27, 44, 2, 28] are challenging this paradigm, proposing decentralized asynchronous algorithms that leverage minor time-delay fluctuations between workers to enhance the parallelization of computations and communications. Unlike centralized algorithms, decentralized approaches allow each node to contribute proportionally to its available resources, eliminating the necessity for a global central worker to aggregate results. Combined with asynchronous peer-to-peer (p2p) communications, these methods can streamline the overall training process, mitigating common bottlenecks. This includes the Straggler Problem [42], the synchronization between computations and communications [9], or bandwidth limitations [47], potentially due to particular network topologies like a ring graph [43]. However, due to the large number of parameters which are optimized, training DNNs with these methods still critically requires a considerable amount of communication [22], presenting an additional challenge [32].
+ +This work aims to address these challenges by introducing a principled acceleration method for pair-wise communications in peer-to-peer training of DNNs, in particular for cluster computing. While conventional synchronous settings accelerate communications by integrating a Chebyshev acceleration followed by Gradient Descent steps [37], the potential of accelerated asynchronous pair-wise gossip for Deep Learning (DL) remains largely unexplored. Notably, the sophisticated theory of Stochastic Differential Equations (SDEs) offers an analytical framework for the design and study of the convergence of these algorithms [12]. We introduce a novel algorithm $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ (standing for Accelerating Asynchronous Communication in Decentralized Deep Learning) that requires minimal overhead and effectively decouples communications and computations, accelerating pair-wise communications via a provable, accelerated, randomized gossip procedure based on continuous momentum (i.e., a mixing ODE) and time [12, 34]. We emphasize that beyond the aforementioned hardware superiority, stochastic algorithms also allow us to theoretically reach sublinear rates in convex settings [10], which opens the possibility of further principled accelerations. In practice, our method enables a virtual doubling of the communication rate in challenging network topologies without any additional cost, simply by maintaining a local momentum variable in each worker (see Fig. 1). + +Our key contributions are as follows: (1) We extend the continuized framework [12] to the nonconvex setting, in order to obtain a neat framework to describe asynchronous decentralized DL training. (2) This framework allows us to refine the analysis of a baseline asynchronous decentralized optimization algorithm.
(3) We propose a novel and simple continuized momentum which significantly improves communication efficiency in challenging settings, which we name $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ . (4) We demonstrate that our method effectively minimizes the gap with centralized settings in environments hosting up to 64 asynchronous GPUs. (5) Our code is implemented in PyTorch [35], removes the locks put on previous asynchronous implementations by circumventing their deadlocks, and can be found in an open-source repository: https://github.com/AdelNabli/ACiD. + +This paper is structured as follows: Sec. 3.1 outlines our model for asynchronous decentralized learning, while Sec. 3.2 discusses the training dynamic used to optimize our Deep models. Sec. 3.4 offers a comprehensive theoretical analysis of our method, which is validated empirically in Sec. 4. + +# 2 Related Work + +Large-scale distributed DL. Two paradigms allow maintaining high parallelization. On one side, model parallelism [9, 25] splits a neural network across independent machines, enabling the use of local learning methods [4, 3]. On the other, data parallelism accelerates learning by splitting larger mini-batches across multiple nodes [38] to maximally use GPU capacities. Since this parallelization entails the use of larger batch sizes, hyper-parameters, and in particular the learning-rate schedule, must be carefully adapted [16]. Developed for this setting, methods such as [16, 46] stabilize training while maintaining good generalization performance. However, they have been introduced in the context of centralized synchronous training using All-Reduce schemes for communication, which is still the default setting of many approaches to data parallelism. + +Decentralized DL. The pioneering work [27] is one of the first studies to suggest the potential superiority of synchronous decentralized training strategies in practice.
![](images/2ff8f461c8b971a0b24170828a581226a0c3937b7946efbafa6af60b7785bf98.jpg) +Figure 1: Adding $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ has the same effect as doubling the communication rates on ImageNet on the ring graph with 64 workers. See Sec. 4. + +In terms of implementation in the cluster setting, decentralized frameworks have been shown to achieve higher throughput than optimized All-Reduce strategies [45, 38]. From the theoretical side, [21] propose a framework covering many settings of synchronous decentralized learning. However, as it consistently relies on a global discrete iteration count, the notion of time is more difficult to exploit, which proves crucial in our setting. Furthermore, no communication acceleration is incorporated in these algorithms. [22] provides a comprehensive methodology relating the consensus distance, i.e. the average distance of the local parameters to the global average, to the communication rate necessary to avoid degrading performance, and could be easily applied to our method. [29] is a method focusing on improving performance via a discrete momentum modification, which indicates that momentum variables are key to decentralized DL. + +Asynchronous Decentralized DL. There exist many attempts to incorporate asynchrony in decentralized training [48, 5, 8, 28, 2], which typically aim at removing the lock barriers of synchronous decentralized algorithms. To the best of our knowledge, none of them introduce communication acceleration, yet they could be simply combined with our approach. Although recent approaches such as [2, 28] perform peer-to-peer averaging of parameters instead of gradients, thus allowing communication in parallel with computation (as there is no need to wait for the gradients before communicating), they are still coupled: parameter updates resulting from computations and communications are scheduled in a specific order, limiting their speed.
Furthermore, in practice, both those works only implement a periodic averaging on the exponential graph (more favorable, see [43]) instead of investigating the influence of the graph's topology on the convergence of a randomized gossip method, as we do. In fact, AD-PSGD [28], the baseline algorithm in asynchronous decentralized DL, comes with a major caveat to avoid deadlocks in practice: it requires a bipartite graph and schedules p2p communications in a pseudo-random manner instead of basing the decision on workers' current availability, hindering the advantage given by asynchronous methods in the mitigation of stragglers. Contrary to them, our implementation allows pairing workers in real time based on their availability, minimizing idle time for communications. + +Communication reduction. Reducing communication overhead is an important topic for scalability [36]. For instance, [19, 20] allow the use of compression in limited-bandwidth settings, and the local SGD communication schedule of [30] is shown to be beneficial. Those methods could be independently and simply combined with ours to potentially benefit from an additional communication acceleration. By leveraging key properties of the resistance of the communication network [14], [12] showed that standard asynchronous gossip [6] can be accelerated, even yielding efficient primal algorithms in the convex setting [34]. However, this acceleration has never been deployed in the DL context, until now. RelaySum [41] is an approach which allows exact averaging of parameters produced at different, and thus potentially delayed, time steps. However, it requires either a tree graph topology or the construction of ad-hoc spanning trees, and it has inherent synchronous locks as it averages neighbor messages in a specific order.
+ +Notations: Let $n \in \mathbb{N}^*$ and $d \in \mathbb{N}^*$ an ambient dimension, for $x = (x^1, \ldots, x^n) \in \bigotimes_{i=1}^{n} \mathbb{R}^d$ , we write $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x^i$ and $\mathbf{1}$ the tensor of ones such that $\bar{x} = \frac{1}{n} \mathbf{1}^\top x$ . $\Xi$ is a probability space with measure $\mathcal{P}$ . $f(t) = \mathcal{O}(1)$ means there is a $C > 0$ such that for $t$ large enough, $|f(t)| \leq C$ , whereas $\tilde{\mathcal{O}}$ -notation hides constants and polylogarithmic factors. + +# 3 Method + +# 3.1 Model for a decentralized environment + +We consider a network of $n$ workers whose connectivity is given by edges $\mathcal{E}$ . Local computations are modeled as (stochastic) point-wise processes $N_{t}^{i}$ , and communications between nodes $(i,j) \in \mathcal{E}$ as $M_{t}^{ij}$ . We assume that the communications are symmetric, meaning that if a message is sent from node $i$ to $j$ , then the reverse is true. In practice, such processes are potentially highly correlated and could follow any specific law, and could involve delays. For the sake of simplicity, we do not model lags, though it is possible to obtain guarantees via dedicated Lyapunov functions [13]. In our setting, we assume that all nodes have similar buffer variables which correspond to a copy of a common model (e.g., a DNN). For a parameter $x$ , we write $x_{t}^{i}$ the model's parameters at node $i$ and time $t$ and $x_{t} = (x_{t}^{1},\dots,x_{t}^{n})$ their concatenation. In the following, we assume that each worker computes + +Table 1: Comparison of convergence rates for strongly convex and non-convex objectives against concurrent works in the fixed topology setting. We neglect logarithmic terms. Observe that thanks to the maximal resistance $\chi_{2} \leq \chi_{1}$ , our method obtains substantial acceleration for the bias term. 
Moreover, while our baseline is strongly related to AD-PSGD [28], our analysis refines its complexity when workers sample data from the same distribution. + +
| Method | Strongly Convex | Non-Convex |
| --- | --- | --- |
| Koloskova et al. [21] | $\frac{\sigma^2}{n\mu^2\varepsilon} + \frac{\sqrt{L}\left(\chi_1\zeta + \sqrt{\chi_1}\sigma\right)}{\mu^{3/2}\sqrt{\varepsilon}} + \frac{L}{\mu}\chi_1$ | $\frac{L\sigma^2}{n\varepsilon^2} + \frac{L\left(\chi_1\zeta + \sqrt{\chi_1}\sigma\right)}{\varepsilon^{3/2}} + \frac{L\chi_1}{\varepsilon}$ |
| AD-PSGD [28] | - | $\frac{L(\sigma^2 + \zeta^2)}{\varepsilon^2} + \frac{n^2 L\chi_1}{\varepsilon}$ |
| Baseline (Ours) | $\frac{\sigma^2 + \chi_1\zeta^2}{\mu^2\varepsilon} + \frac{L}{\mu}\chi_1$ | $\frac{L(\sigma^2 + \chi_1\zeta^2)}{\varepsilon^2} + \frac{L\chi_1}{\varepsilon}$ |
| $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ (Ours) | $\frac{\sigma^2 + \sqrt{\chi_1\chi_2}\zeta^2}{\mu^2\varepsilon} + \frac{L}{\mu}\sqrt{\chi_1\chi_2}$ | $\frac{L(\sigma^2 + \sqrt{\chi_1\chi_2}\zeta^2)}{\varepsilon^2} + \frac{L\sqrt{\chi_1\chi_2}}{\varepsilon}$ |
+ +about 1 mini-batch of gradient per unit of time (not necessarily simultaneously), which is a standard homogeneity assumption [18], and we denote by $\lambda^{ij}$ the instantaneous expected frequency of edge $(i,j)$ , which we assume time homogeneous. + +Definition 3.1 (Instantaneous expected Laplacian). We define the Laplacian $\Lambda$ as: + +$$ +\Lambda \triangleq \sum_ {(i, j) \in \mathcal {E}} \lambda^ {i j} \left(e _ {i} - e _ {j}\right) \left(e _ {i} - e _ {j}\right) ^ {\top}. \tag {1} +$$ + +In this context, a natural quantity is the algebraic connectivity [6] given by: + +$$ +\chi_ {1} \triangleq \sup _ {\| x \| = 1, x \perp \mathbf {1}} \frac {1}{x ^ {\top} \Lambda x}. \tag {2} +$$ + +For a connected graph (i.e., $\chi_{1} < +\infty$ ), we will also use the maximal resistance of the network: + +$$ +\chi_ {2} \triangleq \frac {1}{2} \sup _ {(i, j) \in \mathcal {E}} (e _ {i} - e _ {j}) ^ {\mathsf {T}} \Lambda^ {+} (e _ {i} - e _ {j}) \leq \chi_ {1}. \tag {3} +$$ + +The next sections will show that it is possible to accelerate the asynchronous gossip algorithms from $\chi_{1}$ to $\sqrt{\chi_1\chi_2} \leq \chi_1$ , while [12] or [34] emphasize the superiority of accelerated asynchronous gossips over accelerated synchronous ones. + +# 3.2 Training dynamic + +The goal of a typical decentralized algorithm is to minimize the following quantity: + +$$ +\inf _ {x \in \mathbb {R} ^ {d}} f (x) \triangleq \inf _ {x \in \mathbb {R} ^ {d}} \frac {1}{n} \sum_ {i = 1} ^ {n} f _ {i} (x) = \inf _ {x _ {i} = x _ {1}} \frac {1}{n} \sum_ {i = 1} ^ {n} f _ {i} (x _ {i}). +$$ + +For this, we follow a first order optimization strategy consisting in using estimates of the gradient $\nabla f_{i}(x_{i})$ via i.i.d unbiased Stochastic Gradient (SG) oracles given by $\nabla F_{i}(x_{i},\xi_{i})$ s.t. $\mathbb{E}_{\xi_i}[\nabla F_i(x_i,\xi_i)] = \nabla f_i(x_i)$ . 
The dynamic of updates of our model evolves as the following SDE, for $\eta ,\gamma ,\alpha ,\tilde{\alpha}$ some time-independent scalar hyper-parameters, whose values are found in our theoretical analysis and used in our implementation, and $dN_{t}^{i}(\xi_{i})$ some point processes on $\mathbb{R}_+ \times \Xi$ with intensity $dt \otimes d\mathcal{P}$ : + +$$ +d x _ {t} ^ {i} = \eta \left(\tilde {x} _ {t} ^ {i} - x _ {t} ^ {i}\right) d t - \gamma \int_ {\Xi} \nabla F _ {i} \left(x _ {t} ^ {i}, \xi_ {i}\right) d N _ {t} ^ {i} \left(\xi_ {i}\right) - \alpha \sum_ {j, (i, j) \in \mathcal {E}} \left(x _ {t} ^ {i} - x _ {t} ^ {j}\right) d M _ {t} ^ {i j}, \tag {4} +$$ + +$$ +d \tilde {x} _ {t} ^ {i} = \eta (x _ {t} ^ {i} - \tilde {x} _ {t} ^ {i}) d t - \gamma \int_ {\Xi} \nabla F _ {i} (x _ {t} ^ {i}, \xi_ {i}) d N _ {t} ^ {i} (\xi_ {i}) - \tilde {\alpha} \sum_ {j, (i, j) \in \mathcal {E}} (x _ {t} ^ {i} - x _ {t} ^ {j}) d M _ {t} ^ {i j}. +$$ + +We emphasize that while the dynamic Eq. 4 is formulated using SDEs [1], which brings the power of the continuous-time analysis toolbox, it is still event-based and thus discrete in nature. Hence, it can efficiently model practically implementable algorithms, as shown by Algo. 1. The coupling $\{x_{t},\tilde{x}_{t}\}$ corresponds to a momentum term which will be useful to obtain communication acceleration, as explained in the next section. Again, $\int_{\Xi}\nabla F_i(x_t^i,\xi_i)dN_t^i (\xi_i)$ will be estimated via i.i.d. SGs sampled as $N_{t}^{i}$ spikes. Furthermore, if $\bar{x}_0 = \bar{\tilde{x}}_0$ , then $\bar{x}_t = \bar{\tilde{x}}_t$ for all $t$ , and we obtain a tracker of the average across workers, which is similar to what is achieved through Gradient Tracking methods [19].
This is a key advantage of our method for obtaining convergence guarantees, as the average then evolves as: + +$$ +d \bar {x} _ {t} = - \gamma \frac {1}{n} \sum_ {i = 1} ^ {n} \int_ {\Xi} \nabla F _ {i} \left(x _ {t} ^ {i}, \xi_ {i}\right) d N _ {t} ^ {i} \left(\xi_ {i}\right). \tag {5} +$$ + +# 3.3 Informal explanation of the dynamic through the Baseline case + +To give some practical intuition on our method, we consider a baseline asynchronous decentralized dynamic, close to AD-PSGD [28]. By considering $\eta = 0$ , $\alpha = \tilde{\alpha} = \frac{1}{2}$ , the dynamic (4) simplifies to: + +$$ +d x _ {t} ^ {i} = - \gamma \int_ {\Xi} \nabla F _ {i} \left(x _ {t} ^ {i}, \xi_ {i}\right) d N _ {t} ^ {i} \left(\xi_ {i}\right) - \frac {1}{2} \sum_ {j, (i, j) \in \mathcal {E}} \left(x _ {t} ^ {i} - x _ {t} ^ {j}\right) d M _ {t} ^ {i j}. \tag {6} +$$ + +In a DL setting, $x^{i}$ contains the parameters of the DNN hosted on worker $i$ . Thus, (6) simply says that the parameters of the DNN are updated either by taking local SGD steps, or by pairwise averaging with peers $j$ , $(i,j) \in \mathcal{E}$ . These updates happen independently, at random times: although we assume that all workers compute gradients at the same speed on average (and re-normalize time accordingly), the use of Poisson processes models the inherent variability in the time between these updates. However, the p2p averaging depends on the capabilities of the network, and we allow each link $(i,j)$ to have a different bandwidth, albeit constant through time, modeled through the frequency $\lambda^{ij}$ . The gradient and communication processes are decoupled: there is no need for one to wait for the other, allowing workers to compute stochastic gradients uninterrupted and to run the p2p averaging in parallel, as illustrated by Fig. 2.
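To build intuition for the decoupled dynamic (6), the event-based process can be simulated in a few lines. The sketch below is ours, not the paper's code, and uses illustrative assumptions: scalar quadratics $f_i(x) = \frac{1}{2}(x - a_i)^2$ and a ring with unit rates on every edge and worker.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma, n_events = 8, 0.02, 6000
targets = rng.normal(size=n)                   # f_i(x) = 0.5 * (x - targets[i])^2
x = np.zeros(n)                                # one scalar parameter per worker
edges = [(i, (i + 1) % n) for i in range(n)]   # ring graph, rate 1 per edge

# Events spike as independent Poisson clocks: n gradient clocks (rate 1 each)
# and len(edges) communication clocks (rate 1 each), so the next event's type
# is drawn proportionally to the rates (Eq. 6: eta = 0, alpha = 1/2).
total = n + len(edges)
for _ in range(n_events):
    if rng.random() < n / total:               # a gradient clock fires on worker i
        i = rng.integers(n)
        g = (x[i] - targets[i]) + 0.1 * rng.standard_normal()   # stochastic gradient
        x[i] -= gamma * g
    else:                                      # an edge clock fires: p2p averaging
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])

# all workers drift toward the minimizer of f, i.e. the mean of the targets
consensus_gap = np.max(np.abs(x - targets.mean()))
```

No worker ever waits for another: gradient and averaging events interleave in an arbitrary order, which is exactly the decoupling the text describes.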
Finally, (4) adds a momentum step mixing the local parameters $x^{i}$ and momentum buffer $\tilde{x}^{i}$ before each type of update, allowing for significant savings in communication costs, as we show next. + +![](images/2d8a0e78e5bfce1118a28a0081f7ff1f3bb6756f299ba90ae53f1275b9e28a37.jpg) +Figure 2: Example of worker updates in synchronous (left) and asynchronous (right) optimization methods. We remark that our asynchronous algorithm reduces idle time, and allows communicating in parallel with gradient computation, only synchronizing two workers at a time for parameter averaging. Here, one p2p communication is performed per computation in expectation. + +![](images/057438494de96d2620d15a200cde643c16c557dea67039701f413e3699ce60fb.jpg) + +# 3.4 Theoretical analysis of $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ + +We now provide an analysis of our decentralized, asynchronous algorithm. For the sake of simplicity, we will consider that communications and gradients spike as Poisson processes: + +Assumption 3.2 (Poisson Processes). $N_{t}^{i}$ , $M_{t}^{ij}$ are independent point-wise Poisson processes. The $\{N_{t}^{i}\}_{i = 1}^{n}$ have a rate of 1, and for $(i,j)\in \mathcal{E}$ , $M_t^{ij}$ has a rate $\lambda^{ij}$ . + +We also assume that the communication network is connected during the training: + +Assumption 3.3 (Strong connectivity). We assume that $\chi_{1} < \infty$ . + +We will now consider two generic assumptions obtained from [21], which allow us to specialize our lemmas to the convex and non-convex settings. Note that the non-convex Assumption 3.5 generalizes the assumptions of [28], which are recovered by taking $M = P = 0$ . + +Assumption 3.4 (Strongly convex setting).
Each $f_{i}$ is $\mu$ -strongly convex and $L$ -smooth, and: + +$$ +\frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} _ {\xi_ {i}} \left[ \| \nabla F _ {i} (x, \xi_ {i}) - \nabla f _ {i} (x) \| ^ {2} \right] \leq \sigma^ {2} \quad \text {and} \quad \frac {1}{n} \sum_ {i = 1} ^ {n} \| \nabla f _ {i} (x ^ {*}) - \nabla f (x ^ {*}) \| ^ {2} \leq \zeta^ {2}. +$$ + +Assumption 3.5 (Non-convex setting). Each $f_{i}$ is $L$ -smooth, and there exist $P, M > 0$ such that: + +$$ +\forall x \in \mathbb {R} ^ {d}, \frac {1}{n} \sum_ {i = 1} ^ {n} \| \nabla f _ {i} (x) - \nabla f (x) \| ^ {2} \leq \zeta^ {2} + P \| \nabla f (x) \| ^ {2}, +$$ + +and, + +$$ +\forall x _ {1}, \dots , x _ {n} \in \mathbb {R} ^ {d}, \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {E} _ {\xi_ {i}} \| \nabla F _ {i} (x _ {i}, \xi_ {i}) - \nabla f _ {i} (x _ {i}) \| ^ {2} \leq \sigma^ {2} + \frac {M}{n} \sum_ {i = 1} ^ {n} \| \nabla f _ {i} (x _ {i}) \| ^ {2}. +$$ + +We can now state our convergence guarantees. An informal way to understand our proposition is that while the gradient updates are non-convex, the communication updates are linear and thus benefit from local convexity; the proof is deferred to Appendix C. + +Proposition 3.6 (Convergence guarantees). Assume that $\{x_{t},\tilde{x}_{t}\}$ follows the dynamic Eq. 4 and that Assumptions 3.2-3.3 are satisfied. Assume that $\mathbf{1}\bar{x}_0 = x_0 = \tilde{x}_0$ and let $T$ be the total running time. Then: + +- In the non-accelerated setting, we pick $\eta = 0$ , $\alpha = \tilde{\alpha} = \frac{1}{2}$ and set $\chi = \chi_{1}$ ; +- For acceleration $(\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2)$ , we set $\eta = \frac{1}{2\sqrt{\chi_1\chi_2}}, \alpha = \frac{1}{2}, \tilde{\alpha} = \frac{1}{2}\sqrt{\frac{\chi_1}{\chi_2}}$ , and $\chi = \sqrt{\chi_1\chi_2} \leq \chi_1$ .
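In code, these two parameter regimes read as follows. The sketch below is ours; the ring of 8 workers with unit edge rates is an illustrative choice of $\Lambda$ (Definition 3.1), from which $\chi_1$ (Eq. 2) and $\chi_2$ (Eq. 3) are computed numerically.

```python
import numpy as np

# Laplacian of Definition 3.1 for a ring of n workers with unit edge rates
# (illustrative; any connected topology and rates work the same way).
n = 8
edges = [(i, (i + 1) % n) for i in range(n)]
Lam = np.zeros((n, n))
for i, j in edges:
    e = np.zeros(n)
    e[i], e[j] = 1.0, -1.0
    Lam += np.outer(e, e)                      # rate lambda_ij = 1 for every edge

# chi1 = 1 / (smallest nonzero eigenvalue of Lam)      (Eq. 2)
chi1 = 1.0 / np.sort(np.linalg.eigvalsh(Lam))[1]

# chi2 = (1/2) * max effective resistance over edges   (Eq. 3)
Lp = np.linalg.pinv(Lam)
chi2 = 0.5 * max(Lp[i, i] + Lp[j, j] - 2 * Lp[i, j] for i, j in edges)

# accelerated parameter choice of Prop. 3.6
eta = 1.0 / (2.0 * np.sqrt(chi1 * chi2))
alpha = 0.5
alpha_tilde = 0.5 * np.sqrt(chi1 / chi2)
chi = np.sqrt(chi1 * chi2)
assert chi2 <= chi <= chi1                     # sqrt(chi1 * chi2) <= chi1
```

The non-accelerated baseline simply takes `eta = 0.0` and `alpha = alpha_tilde = 0.5` with `chi = chi1`.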
+ +Then, there exists a constant step size $\gamma > 0$ such that if: + +- (strong convexity) Assumption 3.4 is satisfied, then $\gamma \leq \frac{1}{16L(1 + \chi)}$ and: + +$$ +\mathbb {E} \left[ \| \bar {x} _ {T} - x ^ {*} \| ^ {2} \right] = \tilde {\mathcal {O}} \left(\| \bar {x} _ {0} - x ^ {*} \| ^ {2} e ^ {- \frac {\mu T}{16 L (1 + \chi)}} + \frac {\sigma^ {2} + \zeta^ {2} (1 + \chi)}{\mu^ {2} T}\right), +$$ + +- (non-convexity) Assumption 3.5 is satisfied, then there is $c > 0$ which depends only on $P, M$ from the assumptions such that $\gamma \leq \frac{c}{L(\chi + 1)}$ and: + +$$ +\frac {1}{T} \int_ {0} ^ {T} \mathbb {E} \left[ \| \nabla f (\bar {x} _ {t}) \| ^ {2} \right] d t = \mathcal {O} \left(\frac {L (1 + \chi)}{T} (f (x _ {0}) - f (x ^ {*})) + \sqrt {\frac {L (f (x _ {0}) - f (x ^ {*}))}{T} (\sigma^ {2} + (1 + \chi) \zeta^ {2})}\right). +$$ + +Also, the expected number of gradient steps is $nT$ and the expected number of communications is $\frac{\operatorname{Tr}(\Lambda)}{2} T$ . + +Tab. 1 compares our convergence rates with concurrent works. Compared to every concurrent work, the bias term of $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ is smaller by a factor of at least $\sqrt{\frac{\chi_1}{\chi_2}} \geq 1$ . Yet, as expected, in the non-accelerated setting we would recover rates similar to theirs. Compared to [20], the variance terms exhibit no variance reduction with the number of workers; however, this should not be an issue in a DL setting, where it is well-known that variance reduction techniques degrade generalization during training [15]. Directly comparing the results of [2] is difficult as they only consider the asymptotic rate, even if the proof framework is similar to [28] and should thus lead to similar rates of convergence. + +# 3.5 Informal interpretation and comparison with decentralized synchronous methods + +Here, we informally discuss results from Prop.
3.6 and compare our communication rate with state-of-the-art decentralized synchronous methods such as DeTAG [31], MSDA [37] and OPAPC [23]. + +As we normalize time so that each node takes one gradient step per time unit in expectation, one time unit for us is analogous to one round of computation (one "step") for synchronous methods. Synchronous methods such as [31, 37, 23] perform multiple rounds of communications + +Table 2: # of communications per "step"/time unit on several graphs. + +
| Method | Star | Ring | Complete |
| --- | --- | --- | --- |
| Accelerated Synchronous (e.g., [31, 37, 23]) | $n^{3/2}$ | $n^2$ | $n^2$ |
| $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ | $n$ | $n^2$ | $n$ |
+ +(their Accelerated Gossip procedure) between rounds of gradient computations by using an inner loop inside their main loop (the one counting "steps"), so that the graph connectivity does not impact the total number of "steps" necessary to reach $\epsilon$ -precision. As Prop. 3.6 shows, the quantity $1 + \sqrt{\chi_1[\Lambda]\chi_2[\Lambda]}$ is a factor in our convergence rate. Since $\Lambda$ contains the information of both the topology $\mathcal{E}$ and the edge communication rates $\lambda_{ij}$ , this is analogous to requiring $\sqrt{\chi_1[\Lambda]\chi_2[\Lambda]} = \mathcal{O}(1)$ for our method (i.e., the graph connectivity does not impact the time to converge), which, given the graph's topology, dictates the communication rate; see Appendix D for more details. Tab. 2 compares the resulting communication rates with synchronous methods. + +# 4 Numerical Experiments + +Now, we experimentally compare $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ to a synchronous baseline, All-Reduce SGD (AR-SGD, see [26]), and an asynchronous baseline using randomized pairwise communications (a variant of AD-PSGD [28], traditionally used in state-of-the-art decentralized asynchronous training of DNNs). In our case, the asynchronous baseline corresponds to the dynamic of Eq. (6). Our approach is standard: we empirically study the decentralized training behavior of our asynchronous algorithm by training ResNets [17] for image recognition. Following [2], we pick a ResNet18 for CIFAR-10 [24] and a ResNet50 for ImageNet [11]. To investigate how our method scales with the number of workers, we run multiple experiments using up to 64 NVIDIA A100 GPUs in a cluster with 8 A100 GPUs per node using an Omni-Path interconnection network at $100\mathrm{Gb / s}$ , and set one worker per GPU. + +# 4.1 Experimental Setup + +Hyper-parameters. Training a DNN using multiple workers on a cluster requires several adaptations compared to the standard setting.
As the effective batch-size grows linearly with the + +Table 3: Training times on CIFAR10 (± 6s). + +
| $n$ | 4 | 8 | 16 | 32 | 64 |
| --- | --- | --- | --- | --- | --- |
| Ours, $t$ (min) | 20.9 | 10.5 | 5.2 | 2.7 | 1.5 |
| AR, $t$ (min) | 21.9 | 11.1 | 6.6 | 3.2 | 1.8 |
+ +number of workers $n$ , we use the learning-rate schedule for large batch training of [16] in all our experiments. Following [30], we fixed the local batch size to 128 on both CIFAR-10 and ImageNet. + +![](images/f7ed3168a9f28a3321680f0464e58bbb99cf998606f410ff10d839c2bffee1ce.jpg) +(a) + +![](images/aef6264997d1569eb8c31cdd98c5ee1a1ce65d46e7e810ebce20cd260980740b.jpg) +(b) +Figure 3: (a) Training loss for CIFAR10 with minibatch size 128 on the complete graph, w/o $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ . As the number of workers increases, the loss degrades, especially for $n = 64$ . (b) Focus on the training loss for the complete graph of size $n = 64$ , w/o $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ . As the rate of communication increases, the gap with All-Reduce decreases. With 2 com/ $\nabla$ , a test accuracy of $94.6 \pm 0.04$ is reached. + +Since our goal is to divide the compute load between the $n$ workers, all methods access the same total amount of data samples, regardless of the number of local steps. On CIFAR-10 and ImageNet, this + +Algorithm 1: This block describes the implementation of our asynchronous algorithm with $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ on each local machine. p2p comm. and $\nabla$ comp. are run independently in parallel. +Input: On each machine $i\in \{1,\dots,n\}$ , gradient oracle $\nabla f_{i}$ , parameters $\eta, \alpha, \tilde{\alpha}, \gamma, T$.
+Initialize on each machine $i\in \{1,\dots,n\}$ : +Initialize $x^{i}$ , $\tilde{x}^{i}\gets x^{i}$ , $t^{i}\gets 0$ and put $x^{i}, \tilde{x}^{i}, t^{i}$ in shared memory; +Synchronize the clocks of all machines; +In parallel on workers $i\in \{1,\dots,n\}$ , while $t < T$ , continuously do: +In one thread on worker $i$ continuously do: + $t\gets \text{clock()}$ ; +Sample a batch of data via $\xi_{i}\sim \Xi$ ; + $g_{i}\gets \nabla F_{i}(x^{i},\xi_{i})$ ; // Compute gradients + $\begin{pmatrix} x^{i}\\ \tilde{x}^{i} \end{pmatrix} \gets \exp \left((t - t^{i})\begin{pmatrix} -\eta & \eta \\ \eta & -\eta \end{pmatrix}\right)\begin{pmatrix} x^{i}\\ \tilde{x}^{i} \end{pmatrix}$ ; // Apply A²CiD² momentum + $x^{i}\gets x^{i} - \gamma g_{i}$ ; // Take the grad step + $\tilde{x}^{i}\gets \tilde{x}^{i} - \gamma g_{i}$ ; + $t^{i}\gets t$ ; +In one thread on worker $i$ continuously do: + $t\gets \text{clock()}$ ; +Find available worker $j$ ; // Synchronize workers $i$ and $j$ + $m_{ij}\gets (x^{i} - x^{j})$ ; // Send $x^{i}$ to $j$ and receive $x^{j}$ from $j$ + $\begin{pmatrix} x^{i}\\ \tilde{x}^{i} \end{pmatrix} \gets \exp \left((t - t^{i})\begin{pmatrix} -\eta & \eta \\ \eta & -\eta \end{pmatrix}\right)\begin{pmatrix} x^{i}\\ \tilde{x}^{i} \end{pmatrix}$ ; // Apply A²CiD² momentum + $x^{i}\gets x^{i} - \alpha m_{ij}$ ; // p2p averaging + $\tilde{x}^{i}\gets \tilde{x}^{i} - \tilde{\alpha} m_{ij}$ ; + $t^{i}\gets t$ ; +return $(x_T^i)_{1\leq i\leq n}$ . + +number is set to 300 and 90 epochs respectively, following standard practice [22]. To circumvent the fuzziness of the notion of epoch in the asynchronous decentralized setting, we do not "split the dataset and re-shuffle it among workers at each epoch" as done with our standard All-Reduce baseline. Rather, we give all workers access to the whole dataset, each one shuffling it with a different random seed. We use SGD with a base learning rate of 0.1, a momentum value of 0.9 and a weight decay of $5 \times 10^{-4}$ .
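The $\exp\left((t - t^{i})\begin{pmatrix} -\eta & \eta \\ \eta & -\eta \end{pmatrix}\right)$ mixing used twice in Algo. 1 admits a cheap closed form, since the matrix has eigenvalue 0 on $(1,1)$ and $-2\eta$ on $(1,-1)$: the average of the two buffers is preserved and their difference decays by $e^{-2\eta\tau}$. A minimal sketch of this step (the helper name is ours, not from the released code):

```python
import numpy as np

def mix(x, x_tilde, eta, tau):
    # Closed form of exp(tau * [[-eta, eta], [eta, -eta]]) applied to (x, x_tilde):
    # eigenvalue 0 on (1, 1) preserves the average (x + x_tilde) / 2, while
    # eigenvalue -2*eta on (1, -1) shrinks the difference by exp(-2*eta*tau).
    s = 0.5 * (x + x_tilde)
    d = 0.5 * (x - x_tilde) * np.exp(-2.0 * eta * tau)
    return s + d, s - d

# sanity check against a direct matrix exponential (the matrix is symmetric)
eta, tau = 0.7, 1.3
A = np.array([[-eta, eta], [eta, -eta]])
w, V = np.linalg.eigh(A)
expA = V @ np.diag(np.exp(tau * w)) @ V.T
x, xt = 2.0, -1.0
mx, mxt = mix(x, xt, eta, tau)
assert np.allclose(expA @ np.array([x, xt]), [mx, mxt])
```

In practice the momentum step thus costs one scalar exponential and two elementwise updates per buffer, so no matrix exponential is ever computed at run time; the same helper applies verbatim when `x` and `x_tilde` are full parameter tensors.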
As advocated in [16], we do not apply weight decay on the learnable batch-norm coefficients. For ImageNet training with the SGD baseline, we decay the learning rate by a factor of 10 at epochs 30, 60, 80 (epochs 50, 75 for CIFAR-10), and apply an analogous decay schedule with our asynchronous decentralized methods. All of our neural network parameters are initialized with the default PyTorch settings, and one All-Reduce averaging is performed before and after the training to ensure consensus at initialization and before testing. For our continuous momentum, we also need to set the parameters $\eta$ , $\tilde{\alpha}$ . For all our experiments, we use the values given by Prop. 3.6. As advocated, the asynchronous baseline corresponds to the setting without acceleration, i.e. with $\eta = 0$ and $\alpha = \tilde{\alpha} = \frac{1}{2}$ , whereas using $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ corresponds to $\eta = \frac{1}{2\sqrt{\chi_1\chi_2}}$ , $\alpha = \frac{1}{2}$ , $\tilde{\alpha} = \frac{1}{2}\sqrt{\frac{\chi_1}{\chi_2}}$ , where $\chi_1$ , $\chi_2$ are set to their theoretical values given by (2), (3), which depend on the communication rate and the graph's topology, assuming that each worker chooses its peers uniformly among its neighbors (we verify empirically that this is the case in practice, see Appendix E.2).

Practical implementation of the dynamic. The dynamic studied in Eq. (4) is a model displaying many of the properties sought after in practice. In our implementation, described in Algo. 1, each worker $i$ has two independent processes, and the DNN parameters and momentum variable $\{x^i,\tilde{x}^i\}$ are stored locally such that both processes can update them at any time. One process continuously performs gradient steps, while the other updates $\{x^i,\tilde{x}^i\}$ via peer-to-peer averaging. The gradient process maximizes its throughput by computing forward and backward passes back to back.
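The mixing step of Algo. 1 multiplies the pair $(x^i, \tilde{x}^i)$ by $\exp(\tau \mathcal{A})$ with $\mathcal{A} = \begin{pmatrix} -\eta & \eta \\ \eta & -\eta \end{pmatrix}$ ; since $\mathcal{A}^2 = -2\eta\mathcal{A}$ , this exponential has a simple closed form that pulls both variables toward their average at rate $e^{-2\eta\tau}$ . A minimal NumPy sketch of this step (the function name and parameter values are ours, not taken from the paper's code):

```python
import numpy as np

def a2cid2_mix(x, x_tilde, eta, tau):
    """Apply exp(tau * [[-eta, eta], [eta, -eta]]) to the pair (x, x_tilde).

    Closed form: both variables move toward their average, and their
    difference decays at rate exp(-2 * eta * tau).
    """
    decay = np.exp(-2.0 * eta * tau)
    mean = 0.5 * (x + x_tilde)
    half_gap = 0.5 * (x - x_tilde)
    return mean + decay * half_gap, mean - decay * half_gap
```

For $\tau = 0$ the map is the identity, and for large $\tau$ both variables collapse onto their average, matching the eigenvalues $\{0, -2\eta\}$ of $\mathcal{A}$ ; this avoids computing a matrix exponential at every update.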
Contrary to All-Reduce based methods, which require an increasing number of communications as the number of workers grows, inevitably increasing the time between two rounds of computations, we study the case where each worker has a fixed communication rate, given as a hyperparameter in our implementation. We implement 3 different graph topologies: complete, ring, and exponential [28, 2], see Appendix E.1 for details. To emulate the P.P.Ps for the communications, each worker samples a random number of p2p averagings to perform between each gradient computation, following a Poisson law using the communication rate as mean. To minimize idle time of the communication process, workers are paired with one of their neighbors in a "First In First Out" manner in an availability queue (a worker is available when it has finished its previous averaging and still has some to do before the next gradient step). To implement this, we use a central coordinator to store the availability queues and the graph topology (this is lightweight in a cluster: the coordinator only exchanges integers with the workers), but it could be done in different ways, e.g. by pinging each other at high frequency. As we assumed a unit time for the gradient process in our analysis, and since real time is used in our algorithm to apply our $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ momentum (see Algo. 1), we maintain a running average of the duration of the previous gradient steps to normalize time.

![](images/5dfc5f540573ecd587aed3cb921a2fcee72fe4a08293d297bbda5598ab6ef067.jpg)
Figure 4: Training loss for CIFAR10 using a minibatch size of 128. We display the training loss with up to 64 workers, w/ and w/o $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ , on the challenging ring graph.

# 4.2 Evaluation on large scale datasets

CIFAR10. This simple benchmark allows us to understand the benefits of our method in a well-controlled environment. Tab.
4 reports our numerical accuracy on the test set of CIFAR10, with a standard deviation calculated over 3 runs. Three scenarios are considered: a complete, an exponential and a ring graph. In Fig. 3 (a), we observe that with the asynchronous baseline on the complete graph, the more workers there are, the more the training loss degrades. Fig. 3 (b) hints that this is in part due to an insufficient communication rate, as increasing it lowers the loss and closes the gap with the All-Reduce baseline. However, this is not the only causative factor, as Tab. 4 indicates that accuracy generally degrades as the number of workers increases even for AR-SGD, which is expected for large batch sizes. Surprisingly, even with a worse training loss for $n = 64$ , the asynchronous baseline still leads to better generalization than

Table 5: Accuracy on ImageNet for a batch size of 128. We compared a vanilla asynchronous pairwise gossip approach with and without $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ , demonstrating the improvement of our method. We also varied the communication rate.
| #Workers | #com/#grad | 16 | 32 | 64 |
| --- | --- | --- | --- | --- |
| AR-SGD baseline | - | 75.5 | 75.2 | 74.5 |
| *Complete graph* | | | | |
| Async. baseline | 1 | 74.6 | 73.8 | 71.3 |
| *Ring graph* | | | | |
| Async. baseline | 1 | 74.8 | 71.6 | 64.1 |
| A²CiD² | 1 | 74.7 | 73.4 | 68.0 |
| Async. baseline | 2 | 74.8 | 73.7 | 68.2 |
| A²CiD² | 2 | 75.3 | 74.4 | 71.4 |

Table 4: Accuracy of our method on CIFAR10 for a batch size of 128 with an equal number of pairwise communications and gradient computations per worker. We compared a vanilla asynchronous pairwise gossip approach with and without $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ , demonstrating the improvement of our method.
| #Workers | 4 | 8 | 16 | 32 | 64 |
| --- | --- | --- | --- | --- | --- |
| AR-SGD baseline | 94.5±0.1 | 94.4±0.1 | 94.5±0.2 | 93.7±0.3 | 92.8±0.2 |
| *Complete graph* | | | | | |
| Async. baseline | 94.93±0.11 | 94.91±0.07 | 94.86±0.01 | 94.55±0.01 | 93.38±0.21 |
| *Exponential graph* | | | | | |
| Async. baseline | 95.07±0.01 | 94.89±0.01 | 94.82±0.06 | 94.44±0.02 | 93.41±0.02 |
| A²CiD² | 95.17±0.04 | 95.04±0.01 | 94.87±0.02 | 94.56±0.01 | 93.47±0.01 |
| *Ring graph* | | | | | |
| Async. baseline | 95.02±0.06 | 95.01±0.01 | 95.00±0.01 | 93.95±0.11 | 91.90±0.10 |
| A²CiD² | 94.95±0.02 | 95.01±0.10 | 95.03±0.01 | 94.61±0.02 | 93.08±0.20 |

AR-SGD, and consistently improves the test accuracy across all tested values of $n$ . The communication rate being identified as a critical factor at large scale, we tested our continuous momentum on the ring graph, each worker performing one p2p averaging for each gradient step. Fig. 4 shows that incorporating $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ leads to a significantly better training dynamic for a large number of workers, which translates into better performance at test time, as shown in Tab. 4.

![](images/96db17899e72313f6a6918a36ad5fb13756638dc6b9b83b8a0475dad31be3a64.jpg)
(a)

![](images/8be5a2fa97ff6267ae828f2b5e2d40b2650e8cf291fe4a48d9879af075fcf2f9.jpg)
(b)
Figure 5: (a) Training loss for ImageNet using a batch size of 128, with an equal number of communications and computations per worker. We display the training loss for various numbers of workers (up to 64), using $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ , for the ring graph. (b) Comparison of consensus distances when $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ is applied versus doubling the rate of communications on the ring graph with 64 workers: applying $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ has the same effect as doubling communications.

ImageNet. To validate our method in a real-life environment, we consider the large-scale ImageNet dataset. Tab. 6 confirms the advantage of asynchronous methods, which allocate less compute to the slowest workers, leading to faster training times. Tab. 5 reports our accuracy for the complete and ring graphs. As $\chi_{1} = \chi_{2}$ for the complete graph, we simply run our baseline asynchronous method for reference. The case of the ring graph is much more challenging: for $n = 64$ workers, the accuracy drops by $10\%$ compared to the synchronous baseline given by AR-SGD.
Systematically, with $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ , the final accuracy increases: by up to 4 absolute percentage points in the difficult $n = 64$ setting. This is corroborated by Fig. 5, which indicates that incorporating $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ significantly improves the training dynamic on ImageNet. However, to reduce the gap with the AR-SGD baseline, it will be necessary to increase the communication rate, as discussed next.

Table 6: Statistics of runs on ImageNet with 64 workers (for ours, on the exponential graph).
| Method | t (min) | # ∇ slowest worker | # ∇ fastest worker |
| --- | --- | --- | --- |
| AR-SGD | 1.7·10² | 14k | 14k |
| Baseline (ours) | 1.5·10² | 13k | 14k |
| A²CiD² (ours) | 1.5·10² | 13k | 14k |
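The consensus distance tracked in Fig. 5 (b) is the quantity $\|\pi x\|_F^2 = \sum_{i=1}^{n} \|x_i - \bar{x}\|^2$ defined in Appendix A. A minimal NumPy sketch of how it can be measured from the stacked worker parameters (the function name is ours, not from the paper's code):

```python
import numpy as np

def consensus_distance(x):
    """Consensus distance ||pi x||_F^2 = sum_i ||x_i - x_bar||^2.

    x has shape (n, d): one row of flattened parameters per worker.
    """
    x_bar = x.mean(axis=0)  # average model across workers, shape (d,)
    return float(np.sum((x - x_bar) ** 2))
```

The distance is zero exactly when all workers hold the same parameters, which is why it is a natural proxy for how well gossip averaging keeps the models in agreement.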

Consensus improvement. The bottom of Tab. 5, as well as Fig. 5 (b), study the virtual acceleration provided by $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ . Not only does increasing the number of communications combined with $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ yield competitive performance, but Fig. 1 shows that doubling the rate of communication has the same effect on the training loss as adding $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ . This is verified in Fig. 5 (b) by tracking the consensus distance between workers: $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ significantly reduces it, which validates the results of Sec. 3.4.

# 5 Conclusion

In this work, we confirmed that the communication rate is a key performance factor for successfully training DNNs at large scale with decentralized asynchronous methods. We introduced $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ , a continuous momentum which adds only a minor local memory overhead while mitigating this need. We demonstrated, both theoretically and empirically, that $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ substantially improves performance, especially on challenging network topologies. As we only focused on data-parallel methods for training deep neural networks in a cluster environment, in future work we would like to extend our empirical study to more heterogeneous compute and data sources, as our theory could encompass local SGD methods [39] and the data heterogeneity inherent in Federated Learning [33].

# Acknowledgements

EO, AN, and EB's work was supported by Project ANR-21-CE23-0030 ADONIS and EMERG-ADONIS from Alliance SU. This work was granted access to the HPC/AI resources of IDRIS under the allocation AD011013743 made by GENCI. EB and AN acknowledge funding and support from NSERC Discovery Grant RGPIN-2021-04104, FRQNT New Scholar, and resources from Compute Canada and Calcul Quebec.
In addition, the authors would like to thank Olexa Bilaniuk and Louis Fournier for their helpful insights regarding our code implementation. + +# References + +[1] L. Arnold. Stochastic differential equations. New York, 1974. +[2] M. Assran, N. Loizou, N. Ballas, and M. Rabbat. Stochastic gradient push for distributed deep learning. In International Conference on Machine Learning, pages 344-353. PMLR, 2019. +[3] E. Belilovsky, M. Eickenberg, and E. Oyallon. Decoupled greedy learning of cnns. In International Conference on Machine Learning, pages 736-745. PMLR, 2020. +[4] E. Belilovsky, L. Leonte, L. Caccia, M. Eickenberg, and E. Oyallon. Decoupled greedy learning of cnns for synchronous and asynchronous distributed learning. arXiv preprint arXiv:2106.06401, 2021. +[5] M. Blot, D. Picard, M. Cord, and N. Thome. Gossip training for deep learning. In Advances in Neural Information Processing Systems, volume 30, 2016. +[6] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah. Randomized gossip algorithms. IEEE Transactions on Information Theory, 52(6):2508-2530, 2006. +[7] J. Chen, X. Pan, R. Monga, S. Bengio, and R. Jozefowicz. Revisiting distributed synchronous sgd. arXiv preprint arXiv:1604.00981, 2016. +[8] J. Daily, A. Vishnu, C. Siegel, T. Warfel, and V. Amatya. Gossipgrad: Scalable deep learning using gossip communication based asynchronous gradient descent, 2018. +[9] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, et al. Large scale distributed deep networks. Advances in neural information processing systems, 25, 2012. +[10] A. Defazio, F. Bach, and S. Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. Advances in neural information processing systems, 27, 2014. +[11] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. 
In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009. +[12] M. Even, R. Berthier, F. Bach, N. Flammarion, H. Hendrikx, P. Gaillard, L. Massoulie, and A. Taylor. A continuized view on nesterov acceleration for stochastic gradient descent and randomized gossip. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, 2021. +[13] M. Even, H. Hendrikx, and L. Massoulie. Decentralized optimization with heterogeneous delays: a continuous-time approach. arXiv preprint arXiv:2106.03585, 2021. +[14] A. Ghosh, S. Boyd, and A. Saberi. Minimizing effective resistance of a graph. SIAM Review, 50(1):37-66, 2008. +[15] R. M. Gower, M. Schmidt, F. Bach, and P. Richtárik. Variance-reduced methods for machine learning. Proceedings of the IEEE, 108(11):1968-1983, 2020. +[16] P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. +[17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2015. +[18] H. Hendrikx, F. Bach, and L. Massoulie. An accelerated decentralized stochastic proximal algorithm for finite sums. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. + +[19] A. Koloskova, T. Lin, and S. U. Stich. An improved analysis of gradient tracking for decentralized machine learning. Advances in Neural Information Processing Systems, 34:11422-11435, 2021. +[20] A. Koloskova, T. Lin, S. U. Stich, and M. Jaggi. Decentralized deep learning with arbitrary communication compression. arXiv preprint arXiv:1907.09356, 2019. +[21] A. Koloskova, N. Loizou, S. Boreiri, M. Jaggi, and S. Stich. 
A unified theory of decentralized sgd with changing topology and local updates. In International Conference on Machine Learning, pages 5381-5393. PMLR, 2020. +[22] L. Kong, T. Lin, A. Koloskova, M. Jaggi, and S. Stich. Consensus control for decentralized deep learning. In International Conference on Machine Learning, pages 5686-5696. PMLR, 2021. +[23] D. Kovalev, A. Salim, and P. Richtarik. Optimal and practical algorithms for smooth and strongly convex decentralized optimization. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 18342-18352. Curran Associates, Inc., 2020. +[24] A. Krizhevsky. Learning multiple layers of features from tiny images. 2009. +[25] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. Commun. ACM, 60(6):84-90, may 2017. +[26] S. Li and T. Hoefler. Near-optimal sparse allreduce for distributed deep learning. In Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 135-149, 2022. +[27] X. Lian, C. Zhang, H. Zhang, C.-J. Hsieh, W. Zhang, and J. Liu. Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. Advances in neural information processing systems, 30, 2017. +[28] X. Lian, W. Zhang, C. Zhang, and J. Liu. Asynchronous decentralized parallel stochastic gradient descent. In International Conference on Machine Learning, pages 3043-3052. PMLR, 2018. +[29] T. Lin, S. P. Karimireddy, S. U. Stich, and M. Jaggi. Quasi-global momentum: Accelerating decentralized deep learning on heterogeneous data. arXiv preprint arXiv:2102.04761, 2021. +[30] T. Lin, S. U. Stich, K. K. Patel, and M. Jaggi. Don't use large mini-batches, use local sgd. In International Conference on Learning Representations, 2020. +[31] Y. Lu and C. De Sa. Optimal complexity in decentralized training. In M. 
Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 7111-7123. PMLR, 18-24 Jul 2021.
+[32] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pages 1273-1282. PMLR, 2017.
+[33] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In International Conference on Artificial Intelligence and Statistics, 2016.
+[34] A. Nabli and E. Oyallon. DADAO: Decoupled accelerated decentralized asynchronous optimization. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett, editors, Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 25604-25626. PMLR, 23-29 Jul 2023.
+[35] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019.
+[36] M. Ryabinin, E. Gorbunov, V. Plokhotnyuk, and G. Pekhimenko. Moshpit SGD: Communication-efficient decentralized training on heterogeneous unreliable devices. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 18195-18211. Curran Associates, Inc., 2021.
Optimal algorithms for smooth and strongly convex distributed optimization in networks. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3027-3036. PMLR, 06-11 Aug 2017. +[38] A. Sergeev and M. D. Balso. Horovod: fast and easy distributed deep learning in TensorFlow. arXiv preprint arXiv:1802.05799, 2018. +[39] S. U. Stich. Local SGD converges fast and communicates little. In International Conference on Learning Representations, 2019. +[40] S. U. Stich. Unified optimal analysis of the (stochastic) gradient method, 2019. +[41] T. Vogels, L. He, A. Koloskova, S. P. Karimireddy, T. Lin, S. U. Stich, and M. Jaggi. Relaysum for decentralized deep learning on heterogeneous data. Advances in Neural Information Processing Systems, 34:28004-28015, 2021. +[42] Y. Yakimenka, C.-W. Weng, H.-Y. Lin, E. Rosnes, and J. Kliewer. Straggler-resilient differentially-private decentralized learning. In 2022 IEEE Information Theory Workshop (ITW), pages 708-713. IEEE, 2022. +[43] B. Ying, K. Yuan, Y. Chen, H. Hu, P. Pan, and W. Yin. Exponential graph is provably efficient for decentralized deep training. Advances in Neural Information Processing Systems, 34:13975-13987, 2021. +[44] B. Ying, K. Yuan, H. Hu, Y. Chen, and W. Yin. Bluefog: Make decentralized algorithms practical for optimization and deep learning. arXiv preprint arXiv:2111.04287, 2021. +[45] B. Ying, K. Yuan, H. Hu, Y. Chen, and W. Yin. Bluefog: Make decentralized algorithms practical for optimization and deep learning. 2021. +[46] Y. You, I. Gitman, and B. Ginsburg. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017. +[47] B. Yuan, Y. He, J. Davis, T. Zhang, T. Dao, B. Chen, P. S. Liang, C. Re, and C. Zhang. Decentralized training of foundation models in heterogeneous environments. Advances in Neural Information Processing Systems, 35:25464-25477, 2022. +[48] S. Zhang, A. 
E. Choromanska, and Y. LeCun. Deep learning with elastic averaging sgd. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.

# Appendix

# Table of Contents

A Notations
B Technical Preliminaries
C Proof of the main result of this paper

C.1 Some useful upper-bounds
C.2 Resolution: putting everything together
C.3 Optimizing the step-size

D Comparison with accelerated synchronous methods

E Experimental details

E.1 Graph topologies
E.2 Uniform neighbor selection check

# A Notations

For $n \in \mathbb{N}^*$ the number of workers and $d \in \mathbb{N}^*$ an ambient dimension, for all $t > 0$ , the variable $x_t \in \mathbb{R}^{n \times d}$ is a matrix such that $x_t = [x_t^1, \dots, x_t^n]^\top$ , with $x_t^i \in \mathbb{R}^d$ for all $i \in \{1, \dots, n\}$ . Recall that $\mathbf{1}$ is the vector of $n$ ones, so that $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i = \frac{1}{n} x^\top \mathbf{1} \in \mathbb{R}^d$ . With $\mathbf{I}$ the identity and $\|\cdot\|_F$ the matrix Frobenius norm, we write $\pi = \mathbf{I} - \frac{1}{n} \mathbf{1} \mathbf{1}^\top$ for the projection, so that $\| \pi x \|_F^2 = \sum_{i=1}^{n} \| x_i - \bar{x} \|^2$ .

For a continuously differentiable function $f: \mathbb{R}^d \to \mathbb{R}$ and $a, b \in \mathbb{R}^d$ , the Bregman divergence is defined as $d_f(a, b) = f(a) - f(b) - \langle \nabla f(b), a - b \rangle$ . For $\Lambda \in \mathbb{R}^{n \times n}$ , we denote by $\Lambda^+$ its pseudo-inverse. For a positive semi-definite $\Lambda \in \mathbb{R}^{n \times n}$ and $x \in \mathbb{R}^{n \times d}$ , we introduce $\| x \|_{\Lambda}^2 \triangleq \mathrm{Tr}(x^\top \Lambda x)$ , and $\Lambda^{1/2}$ denotes its square root. We recall that the connectivity between workers is given by a set of edges $\mathcal{E}$ , and denote by $e_i$ the $i^{th}$ basis vector of $\mathbb{R}^n$ .

For $x\in \mathbb{R}^{n\times d}$ , we introduce:

$$
\nabla F(x) \triangleq [\nabla f_1(x_1), \dots, \nabla f_n(x_n)]^{\mathsf{T}} \in \mathbb{R}^{n \times d} \quad \text{and} \quad \nabla \tilde{F}(x, \xi) \triangleq [\nabla F_1(x_1, \xi_1), \dots, \nabla F_n(x_n, \xi_n)]^{\mathsf{T}} \in \mathbb{R}^{n \times d}.
$$

Finally, to study the gradient steps taken on each individual worker independently, we introduce:

$$
\nabla \tilde{F}_i(x, \xi) \triangleq [0, \dots, 0, \nabla F_i(x_i, \xi_i), 0, \dots, 0]^{\mathsf{T}} \in \mathbb{R}^{n \times d}.
$$

# B Technical Preliminaries

We recall some basic properties that we will use throughout our proofs.

Lemma B.1 (Implications of Assumption 3.4). If each $f_{i}$ is $\mu$ -strongly convex and $L$ -smooth, we have, for any $a, b \in \mathbb{R}^d$ :

$$
\frac{1}{2L} \|\nabla f_i(a) - \nabla f_i(b)\|^2 \leq d_{f_i}(a, b) \leq \frac{L}{2} \|a - b\|^2,
$$

and

$$
\frac{\mu}{2} \|a - b\|^2 \leq d_{f_i}(a, b) \leq \frac{1}{2\mu} \|\nabla f_i(a) - \nabla f_i(b)\|^2.
$$

Lemma B.2 (Generalized triangle inequality). For any $a, b, c \in \mathbb{R}^d$ and continuously differentiable function $f: \mathbb{R}^d \to \mathbb{R}$ , by definition of the Bregman divergence, we have:

$$
d_f(a, b) + d_f(b, c) = d_f(a, c) + \langle a - b, \nabla f(c) - \nabla f(b) \rangle.
$$

Lemma B.3 (Variance decomposition). For a random vector $a \in \mathbb{R}^d$ and any $b \in \mathbb{R}^d$ , the variance of $a$ can be decomposed as:

$$
\mathbb{E}\left[ \|a - \mathbb{E}[a]\|^2 \right] = \mathbb{E}\left[ \|a - b\|^2 \right] - \left\| \mathbb{E}[a] - b \right\|^2.
$$

Lemma B.4 (Jensen's inequality).
For any vectors $a_1, \ldots, a_n \in \mathbb{R}^d$ , we have:

$$
\left\| \sum_{i=1}^{n} a_i \right\|^2 \leq n \sum_{i=1}^{n} \|a_i\|^2.
$$

Lemma B.5. For any vectors $a, b \in \mathbb{R}^d$ and $\alpha > 0$ :

$$
2 \langle a, b \rangle \leq \alpha \|a\|^2 + \alpha^{-1} \|b\|^2.
$$

Lemma B.6. For any vectors $a, b \in \mathbb{R}^d$ and $\alpha > 0$ :

$$
\|a - b\|^2 \leq (1 + \alpha) \|a\|^2 + (1 + \alpha^{-1}) \|b\|^2.
$$

Lemma B.7. For any $A \in \mathbb{R}^{n \times d}$ and $B \in \mathbb{R}^{n \times n}$ , we have:

$$
\|BA\|_F \leq \|A\|_F \|B\|_2.
$$

Lemma B.8 (Effective resistance contraction). For $(i,j) \in \mathcal{E}$ and any $x \in \mathbb{R}^{n \times d}$ , we have:

$$
\left\| (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x \right\|_{\Lambda^+}^2 \leq \chi_2 \left\| (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x \right\|_F^2.
$$

Proof. Indeed, we note that, by definition of $\chi_2$ (3):

$$
\left\| (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x \right\|_{\Lambda^+}^2 = \operatorname{Tr}\left( x^{\mathsf{T}} (e_i - e_j)(e_i - e_j)^{\mathsf{T}} \Lambda^+ (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x \right) \tag{7}
$$

$$
\leq 2 \chi_2 \operatorname{Tr}\left( x^{\mathsf{T}} (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x \right) \tag{8}
$$

$$
= \chi_2 \left\| (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x \right\|_F^2. \tag{9}
$$

□

Lemma B.9.
For any $x \in \mathbb{R}^{n \times d}$ , and $\Lambda$ the Laplacian of a connected graph, we have:

$$
\sum_{(i,j) \in \mathcal{E}} \lambda^{ij} \left\| (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x \right\|_F^2 = 2 \|x\|_{\Lambda}^2.
$$

Proof. Indeed, by definition of the Laplacian $\Lambda$ (3.1), we have:

$$
\sum_{(i,j) \in \mathcal{E}} \lambda^{ij} \left\| (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x \right\|_F^2 = \sum_{(i,j) \in \mathcal{E}} \lambda^{ij} \operatorname{Tr}\left( x^{\mathsf{T}} (e_i - e_j)(e_i - e_j)^{\mathsf{T}} (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x \right) \tag{10}
$$

$$
= 2 \sum_{(i,j) \in \mathcal{E}} \lambda^{ij} \operatorname{Tr}\left( x^{\mathsf{T}} (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x \right) \tag{11}
$$

$$
= 2 \operatorname{Tr}\left( x^{\mathsf{T}} \sum_{(i,j) \in \mathcal{E}} \lambda^{ij} (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x \right) = 2 \|x\|_{\Lambda}^2, \tag{12}
$$

where the last equality uses $\Lambda = \sum_{(i,j) \in \mathcal{E}} \lambda^{ij} (e_i - e_j)(e_i - e_j)^{\mathsf{T}}$ .

□

Lemma B.10. For any $x, y \in \mathbb{R}^{n \times d}$ , and $\Lambda$ the Laplacian of a connected graph, we have:

$$
2 \langle \pi x, y \rangle \leq \frac{1}{4} \|x\|_{\Lambda}^2 + 4 \chi_1 \|y\|_F^2.
$$

Proof. By a property of the Laplacian of a connected graph, we have that $\pi = (\Lambda^+)^{1/2} \Lambda^{1/2}$ . Thus:

$$
2 \langle \pi x, y \rangle = 2 \left\langle \Lambda^{1/2} x, (\Lambda^+)^{1/2} y \right\rangle \tag{13}
$$

$$
\stackrel{(B.5)}{\leq} \frac{1}{4} \|\Lambda^{1/2} x\|_F^2 + 4 \|(\Lambda^+)^{1/2} y\|_F^2 \tag{14}
$$

$$
\stackrel {(B.
7)} {\leq} \frac{1}{4} \|x\|_{\Lambda}^2 + 4 \chi_1 \|y\|_F^2 \tag{15}
$$

# C Proof of the main result of this paper

We recall that we study the following dynamic:

$$
dx_t^i = \eta (\tilde{x}_t^i - x_t^i) \, dt - \gamma \int_{\Xi} \nabla F_i(x_t^i, \xi_i) \, dN_t^i(\xi_i) - \alpha \sum_{j, (i,j) \in \mathcal{E}} (x_t^i - x_t^j) \, dM_t^{ij},
$$

$$
d\tilde{x}_t^i = \eta (x_t^i - \tilde{x}_t^i) \, dt - \gamma \int_{\Xi} \nabla F_i(x_t^i, \xi_i) \, dN_t^i(\xi_i) - \tilde{\alpha} \sum_{j, (i,j) \in \mathcal{E}} (x_t^i - x_t^j) \, dM_t^{ij},
$$

which simplifies to:

$$
dx_t^i = -\gamma \int_{\Xi} \nabla F_i(x_t^i, \xi_i) \, dN_t^i(\xi_i) - \alpha \sum_{j, (i,j) \in \mathcal{E}} (x_t^i - x_t^j) \, dM_t^{ij}
$$

if $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ is not applied. We also recall the main proposition, which we prove next:

Proposition C.1 (Convergence guarantees). Assume that $\{x_{t},\tilde{x}_{t}\}$ follow the dynamic of Eq. (4) and that Assumptions 3.2-3.3 are satisfied. Assume that $\mathbf{1}\bar{x}_0 = x_0 = \tilde{x}_0$ and let $T$ be the total running time. Then:

- In the non-accelerated setting, we pick $\eta = 0$ , $\alpha = \tilde{\alpha} = \frac{1}{2}$ and set $\chi = \chi_{1}$ ;
- With acceleration ( $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ ), we set $\eta = \frac{1}{2\sqrt{\chi_1 \chi_2}}$ , $\alpha = \frac{1}{2}$ , $\tilde{\alpha} = \frac{1}{2}\sqrt{\frac{\chi_1}{\chi_2}}$ , and $\chi = \sqrt{\chi_1 \chi_2} \leq \chi_1$ .

Then, there exists a constant step size $\gamma > 0$ such that:

- (strong convexity) if Assumption 3.4 is satisfied, then $\gamma \leq \frac{1}{16L(1 + \chi)}$ and

$$
\mathbb{E}\left[ \|\bar{x}_T - x^*\|^2 \right] = \tilde{\mathcal{O}}\left( \|\bar{x}_0 - x^*\|^2 e^{-\frac{\mu T}{16L(1+\chi)}} + \frac{\sigma^2 + \zeta^2 (1+\chi)}{\mu^2 T} \right),
$$

- (non-convexity) if Assumption 3.5 is satisfied, then there is $c > 0$ , which depends only on $P, M$ from the assumptions, such that $\gamma \leq \frac{c}{L(\chi + 1)}$ and:

$$
\frac{1}{T} \int_0^T \mathbb{E}\left[ \|\nabla f(\bar{x}_t)\|^2 \right] dt = \mathcal{O}\left( \frac{L(1+\chi)}{T} (f(x_0) - f(x^*)) + \sqrt{\frac{L(f(x_0) - f(x^*))}{T} \left( \sigma^2 + (1+\chi) \zeta^2 \right)} \right).
$$

Also, the expected number of gradient steps is $nT$ and the expected number of communications is $\frac{\operatorname{Tr}(\Lambda)}{2} T$ .

Proof. The core of the proof is to introduce appropriate Lyapunov potentials $\phi_k(t, X)$ , where $X$ can be $(x, \tilde{x})$ or $x$ depending on whether we apply $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ or not. If we apply $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ , we introduce the momentum matrix $\mathcal{A} = \begin{pmatrix} -\eta & \eta \\ \eta & -\eta \end{pmatrix}$ .
Then, by Ito's lemma, given that all the functions are smooth, and remembering that all Point-wise Poisson Processes $N_t^i$ have unit intensity, that the $M_t^{ij}$ have intensity $\lambda^{ij}$ and that they are all independent, we obtain in a similar fashion to [12, 34]:

$$
\begin{array}{l}
\phi_k(T, X_T) - \phi_k(0, X_0) = \displaystyle\int_0^T \partial_t \phi_k(t, X_t) + \underbrace{\left\langle \mathcal{A} X_t, \partial_X \phi_k(t, X_t) \right\rangle}_{\text{momentum term}} \, dt \\
\quad + \displaystyle\sum_{i=1}^n \int_0^T \underbrace{\int_{\Xi} \phi_k\left(t, X_t - \gamma \begin{pmatrix} \nabla \tilde{F}_i(x_t, \xi) \\ \nabla \tilde{F}_i(x_t, \xi) \end{pmatrix}\right) - \phi_k(t, X_t)}_{\text{variation due to each independent gradient update}} \, dt \, d\mathcal{P}(\xi) \\
\quad + \displaystyle\sum_{(i,j) \in \mathcal{E}} \int_0^T \underbrace{\left[ \phi_k\left(t, X_t - \begin{pmatrix} \alpha (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x_t \\ \tilde{\alpha} (e_i - e_j)(e_i - e_j)^{\mathsf{T}} x_t \end{pmatrix}\right) - \phi_k(t, X_t) \right] \lambda^{ij}}_{\text{variation due to each independent p2p communication}} \, dt \\
\quad + M_T,
\end{array}
$$

where $M_T$ is a martingale. In the case where $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ is not applied, we set $\mathcal{A} = 0$ to remove the momentum term, and all updates are done only along $x$ as there is no $\tilde{x}$ . We recall that:

$$
\int_0^t e^{\alpha u} \, du = \frac{1}{\alpha}\left( e^{\alpha t} - 1 \right) \tag{16}
$$

We now present our choice of potential for each case:

- For the convex case in the non-accelerated setting, we introduce:

$$
\phi_1(t, x) \triangleq A_t \|\bar{x} - x^*\|^2 + B_t \|\pi x\|_F^2.
$$

- For the convex case with $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$, we introduce:

$$
\phi_{2}(t, x, \tilde{x}) \triangleq A_{t} \|\bar{x} - x^{*}\|^{2} + B_{t} \|\pi x\|_{F}^{2} + \tilde{B}_{t} \|\tilde{x}\|_{\Lambda^{+}}^{2}.
$$

- For the non-convex case in the non-accelerated setting, we introduce:

$$
\phi_{3}(t, x) \triangleq A_{t} d_{f}(\bar{x}, x^{*}) + B_{t} \|\pi x\|_{F}^{2}.
$$

- For the non-convex case with $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$, we introduce:

$$
\phi_{4}(t, x, \tilde{x}) \triangleq A_{t} d_{f}(\bar{x}, x^{*}) + B_{t} \|\pi x\|_{F}^{2} + \tilde{B}_{t} \|\tilde{x}\|_{\Lambda^{+}}^{2}.
$$

# C.1 Some useful upper-bounds

As the same terms appear in several potentials, we now prepare some intermediate results which will be helpful for the proofs.

Study of the $\|\bar{x} - x^{*}\|^{2}$ terms:

First, we study the variations of the $\|\bar{x} - x^{*}\|^{2}$ term appearing in $\phi_1$ and $\phi_2$. As the updates due to communication lie in the orthogonal complement of $\mathbf{1}$, it is only necessary to study the variations induced by the gradient steps.
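This last claim reduces to $\mathbf{1}^{\intercal}(e_i - e_j) = 0$: a p2p update on edge $(i,j)$ shifts coordinates $i$ and $j$ by opposite amounts, so the average is untouched. A minimal numerical check (the indices, seed and values are arbitrary test choices):

```python
import random
random.seed(0)

n, alpha = 5, 0.5
x = [random.gauss(0, 1) for _ in range(n)]
mean_before = sum(x) / n

# p2p update on edge (i, j): x <- x - alpha * (e_i - e_j)(e_i - e_j)^T x,
# i.e. only coordinates i and j move, by opposite amounts.
i, j = 1, 3
d = alpha * (x[i] - x[j])
x[i] -= d
x[j] += d

# The update lies in the orthogonal of 1: the average is unchanged,
# so only gradient steps can move ||x_bar - x*||^2.
assert abs(sum(x) / n - mean_before) < 1e-12
```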
Thus, we define: + +$$ +\Delta x \triangleq \sum_ {i = 1} ^ {n} \| \overline {{x - \gamma \nabla \tilde {F} _ {i} (x , \xi)}} - x ^ {*} \| ^ {2} - \| \bar {x} - x ^ {*} \| ^ {2} +$$ + +We note that $\overline{\nabla\tilde{F}_i(x,\xi)} = \frac{1}{n}\nabla F_i(x_i,\xi_i)$ , which, using $\sum_{i}\nabla f_{i}(x^{*}) = 0$ , leads to: + +$$ +\begin{array}{l} \mathbb {E} _ {\xi_ {1}, \dots , \xi_ {n}} [ \Delta x ] = \sum_ {i = 1} ^ {n} - \frac {2 \gamma}{n} \left\langle \bar {x} - x ^ {*}, \nabla f _ {i} \left(x _ {i}\right) \right\rangle + \frac {\gamma^ {2}}{n ^ {2}} \mathbb {E} _ {\xi_ {i}} \left[ \| \nabla F _ {i} \left(x _ {i}, \xi_ {i}\right) \| ^ {2} \right] (17) \\ = \sum_ {i = 1} ^ {n} - \frac {2 \gamma}{n} \left\langle \bar {x} - x ^ {*}, \nabla f _ {i} \left(x _ {i}\right) - \nabla f _ {i} \left(x ^ {*}\right) \right\rangle + \frac {\gamma^ {2}}{n ^ {2}} \mathbb {E} _ {\xi_ {i}} \left[ \| \nabla F _ {i} \left(x _ {i}, \xi_ {i}\right) \| ^ {2} \right] (18) \\ \stackrel {(3. 4), (B. 3)} {\leq} \frac {\gamma^ {2}}{n} \sigma^ {2} + \sum_ {i = 1} ^ {n} - \frac {2 \gamma}{n} \left\langle \bar {x} - x ^ {*}, \nabla f _ {i} \left(x _ {i}\right) - \nabla f _ {i} \left(x ^ {*}\right) \right\rangle + \frac {\gamma^ {2}}{n ^ {2}} \| \nabla f _ {i} \left(x _ {i}\right) \| ^ {2} (19) \\ \stackrel {(B. 2)} {=} \frac {\gamma^ {2}}{n} \sigma^ {2} + \sum_ {i = 1} ^ {n} - \frac {2 \gamma}{n} \left(d _ {f _ {i}} (\bar {x}, x ^ {*}) + d _ {f _ {i}} \left(x ^ {*}, x _ {i}\right) - d _ {f _ {i}} (\bar {x}, x _ {i})\right) + \frac {\gamma^ {2}}{n ^ {2}} \| \nabla f _ {i} (x _ {i}) \| ^ {2} (20) \\ \stackrel {(B. 
4)} {\leq} \frac {\gamma^ {2}}{n} \sigma^ {2} + \sum_ {i = 1} ^ {n} - \frac {2 \gamma}{n} \left(d _ {f _ {i}} (\bar {x}, x ^ {*}) + d _ {f _ {i}} \left(x ^ {*}, x _ {i}\right) - d _ {f _ {i}} (\bar {x}, x _ {i})\right) \\ + \frac {2 \gamma^ {2}}{n ^ {2}} \| \nabla f _ {i} \left(x ^ {*}\right) - \nabla f _ {i} \left(x _ {i}\right) \| ^ {2} + \frac {2 \gamma^ {2}}{n ^ {2}} \| \nabla f _ {i} \left(x ^ {*}\right) \| ^ {2} (21) \\ \stackrel {(3. 4), (B. 1)} {\leq} \frac {\gamma^ {2}}{n} \sigma^ {2} + \frac {2 \gamma^ {2}}{n} \zeta^ {2} + \sum_ {i = 1} ^ {n} - \frac {2 \gamma}{n} \left(d _ {f _ {i}} (\bar {x}, x ^ {*}) + d _ {f _ {i}} \left(x ^ {*}, x _ {i}\right) - d _ {f _ {i}} (\bar {x}, x _ {i})\right) + \frac {4 L \gamma^ {2}}{n ^ {2}} d _ {f _ {i}} \left(x ^ {*}, x _ {i}\right) (22) \\ \stackrel {(B. 1)} {\leq} \frac {\gamma^ {2} \sigma^ {2}}{n} + \frac {2 \gamma^ {2}}{n} \zeta^ {2} - \gamma \mu \| \bar {x} - x ^ {*} \| ^ {2} + \frac {L \gamma}{n} \| \pi x \| _ {F} ^ {2} + \sum_ {i = 1} ^ {n} \left(- \frac {2 \gamma}{n} + \frac {4 L \gamma^ {2}}{n ^ {2}}\right) d _ {f _ {i}} \left(x ^ {*}, x _ {i}\right) (23) \\ \end{array} +$$ + +# Study of the $d_{f}(\bar{x},x^{*})$ terms: + +Next, using the same reasoning as for $\Delta x$ , we also need to only study the gradient updates in the non-convex setting for the first part of $\phi_3, \phi_4$ . Thus, we set: + +$$ +\Delta f \triangleq \sum_ {i = 1} ^ {n} d _ {f} (\bar {x} - \gamma \frac {1}{n} \nabla F _ {i} \left(x _ {i}, \xi_ {i}\right), x ^ {*}) - d _ {f} (\bar {x}, x ^ {*}). \tag {24} +$$ + +First, it is useful to note that under Assumption 3.5, using (B.3), we have: + +$$ +\mathbb {E} _ {\xi_ {1}, \dots , \xi_ {n}} \| \nabla \tilde {F} (x, \xi) \| _ {F} ^ {2} \leq n \sigma^ {2} + (1 + M) \sum_ {i = 1} ^ {n} \| \nabla f _ {i} \left(x _ {i}\right) \| ^ {2}. 
\tag{25}
$$

Then, using $\sum_{i}\nabla f_{i}(x^{*}) = 0$ and the $L$-smoothness of $f = \frac{1}{n}\sum_{i}f_{i}$, we get:

$$
\begin{array}{l} \mathbb{E}_{\xi_{1}, \dots, \xi_{n}}[\Delta f] \stackrel{(B.2)}{=} \sum_{i=1}^{n} \mathbb{E}\left[d_{f}\left(\bar{x} - \gamma \frac{1}{n}\nabla F_{i}(x_{i}, \xi_{i}), \bar{x}\right)\right] - \frac{\gamma}{n}\langle \nabla f_{i}(x_{i}), \nabla f(\bar{x})\rangle (26) \\ \leq \sum_{i=1}^{n} \frac{1}{2n^{2}} L\gamma^{2} \mathbb{E}\left[\|\nabla F_{i}(x_{i}, \xi_{i})\|^{2}\right] - \frac{\gamma}{n}\langle \nabla f_{i}(x_{i}), \nabla f(\bar{x})\rangle (27) \\ \stackrel{(25)}{\leq} \frac{L\gamma^{2}}{2n}\sigma^{2} - \gamma\|\nabla f(\bar{x})\|^{2} + \sum_{i=1}^{n} \frac{M+1}{2n^{2}} L\gamma^{2} \|\nabla f_{i}(x_{i})\|^{2} - \sum_{i=1}^{n} \frac{\gamma}{n}\langle \nabla f_{i}(x_{i}) - \nabla f_{i}(\bar{x}), \nabla f(\bar{x})\rangle (28) \\ \stackrel{(B.5)}{\leq} \frac{L\gamma^{2}}{2n}\sigma^{2} + \frac{\gamma}{2n} L^{2}\|\pi x\|_{F}^{2} - \frac{\gamma}{2}\|\nabla f(\bar{x})\|^{2} + \sum_{i=1}^{n} \frac{M+1}{2n^{2}} L\gamma^{2}\|\nabla f_{i}(x_{i})\|^{2} (29) \\ \end{array}
$$

As we also have:

$$
\sum_{i=1}^{n} \left\|\nabla f_{i}(x_{i})\right\|^{2} \leq \sum_{i=1}^{n} 3\left(\left\|\nabla f_{i}(x_{i}) - \nabla f_{i}(\bar{x})\right\|^{2} + \left\|\nabla f_{i}(\bar{x}) - \nabla f(\bar{x})\right\|^{2} + \left\|\nabla f(\bar{x})\right\|^{2}\right) \tag{30}
$$

$$
\stackrel {(3.
5)} {\leq} 3 L ^ {2} \| \pi x \| _ {F} ^ {2} + 3 n \zeta^ {2} + 3 n (1 + P) \| \nabla f (\bar {x}) \| ^ {2} \tag {31} +$$ + +We get in the end: + +$$ +\begin{array}{l} \mathbb {E} _ {\xi_ {1}, \dots , \xi_ {n}} [ \Delta f ] \leq \frac {L \gamma^ {2}}{2 n} \sigma^ {2} + \frac {3 (M + 1)}{2 n} \gamma^ {2} L \zeta^ {2} + \left(\frac {3 (M + 1)}{2 n ^ {2}} L ^ {3} \gamma^ {2} + \frac {\gamma}{2 n} L ^ {2}\right) \| \pi x \| _ {F} ^ {2} \\ + \left(\frac {3 (M + 1)}{2 n} (1 + P) L \gamma^ {2} - \frac {\gamma}{2}\right) \| \nabla f (\bar {x}) \| ^ {2} \tag {32} \\ \end{array} +$$ + +Remark C.2. As observed in (5), note that for both the terms $\| \bar{x} - x^{*}\|^{2}$ and $d_{f}(\bar{x},x^{*})$ , as we are considering $\bar{x}$ and $\frac{1}{n}\mathbf{11}^{\mathrm{T}}\pi = 0$ , Poisson updates from the communication process amount to zero. Moreover, as $\frac{1}{n}\mathbf{11}^{\mathrm{T}}(x - \tilde{x}) = 0$ , the update from the momentum is also null for these terms. + +Study of the $\| \pi x\| _F^2$ terms: + +We get from the Poisson updates for the gradient processes: + +$$ +\begin{array}{l} \mathbb {E} _ {\xi_ {1}, \dots , \xi_ {n}} \left[ \sum_ {i = 1} ^ {n} \| \pi (x - \gamma \nabla \tilde {F} _ {i} (x, \xi)) \| _ {F} ^ {2} - \| \pi x \| _ {F} ^ {2} \right] = - 2 \gamma \langle \pi x, \nabla F (x) \rangle + \gamma^ {2} \sum_ {i = 1} ^ {n} \mathbb {E} _ {\xi_ {1}, \dots , \xi_ {n}} \| \pi \nabla \tilde {F} _ {i} (x, \xi) \| _ {F} ^ {2} \\ \leq - 2 \gamma \langle \pi x, \nabla F (x) \rangle + \gamma^ {2} \mathbb {E} _ {\xi_ {1}, \dots , \xi_ {n}} \| \nabla \tilde {F} (x, \xi) \| _ {F} ^ {2}, \\ \end{array} +$$ + +and, using the definition of the Laplacian $\Lambda$ (3.1), we get from the communication processes: + +$$ +\begin{array}{l} \sum_ {(i, j) \in \mathcal {E}} \lambda^ {i j} \left(\| \pi (x - \alpha (e _ {i} - e _ {j}) (e _ {i} - e _ {j}) ^ {\mathsf {T}} x) \| _ {F} ^ {2} - \| \pi x \| _ {F} ^ {2}\right) = - 2 \alpha \langle x, \Lambda x \rangle + \sum_ {(i, j) 
\in \mathcal{E}} \lambda^{ij} \alpha^{2} \|(e_{i} - e_{j})(e_{i} - e_{j})^{\mathsf{T}} x\|_{F}^{2} \\ \stackrel{(B.9)}{=} -2\alpha\langle x, \Lambda x\rangle + 2\alpha^{2}\|x\|_{\Lambda}^{2} \\ = 2\alpha(\alpha - 1)\|x\|_{\Lambda}^{2}. \\ \end{array}
$$

Putting together both types of Poisson updates, we define:

$$
\begin{array}{l} \mathbb{E}[\Delta_{\pi}] \triangleq -2\gamma\langle \pi x, \nabla F(x)\rangle + \gamma^{2}\mathbb{E}_{\xi_{1}, \dots, \xi_{n}}\|\pi \nabla \tilde{F}(x, \xi)\|_{F}^{2} + 2\alpha(\alpha - 1)\|x\|_{\Lambda}^{2} (33) \\ \stackrel{(B.10)}{\leq} 4\chi_{1}\gamma^{2}\|\nabla F(x)\|_{F}^{2} + \left(\frac{1}{4} - 2\alpha(1 - \alpha)\right)\|x\|_{\Lambda}^{2} + \gamma^{2}\mathbb{E}_{\xi_{1}, \dots, \xi_{n}}\|\nabla \tilde{F}(x, \xi)\|_{F}^{2} (34) \\ \end{array}
$$

For $\alpha = \frac{1}{2}$, we get:

$$
\mathbb{E}\left[\Delta_{\pi}\right] \leq 4\chi_{1}\gamma^{2}\|\nabla F(x)\|_{F}^{2} - \frac{1}{4\chi_{1}}\|\pi x\|_{F}^{2} + \gamma^{2}\mathbb{E}_{\xi_{1}, \dots, \xi_{n}}\|\nabla \tilde{F}(x, \xi)\|_{F}^{2} \tag{35}
$$

For $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$, we add the momentum term $\langle \eta (\tilde{x} - x), 2\pi x\rangle$ to define:

$$
\begin{array}{l} \Delta_{\Lambda}^{1} \triangleq 2\eta\langle \tilde{x}, \pi x\rangle - 2\eta\|\pi x\|_{F}^{2} - 2\gamma\langle \pi x, \nabla F(x)\rangle + \gamma^{2}\mathbb{E}_{\xi_{1}, \dots, \xi_{n}}\|\nabla \tilde{F}(x, \xi)\|_{F}^{2} + 2\alpha(\alpha - 1)\|x\|_{\Lambda}^{2} (36) \\ \leq 2\eta\langle \tilde{x}, \pi x\rangle - \frac{3}{2}\eta\|\pi x\|_{F}^{2} + \frac{2}{\eta}\gamma^{2}\|\nabla F(x)\|_{F}^{2} - 2\alpha(1 - \alpha)\|x\|_{\Lambda}^{2} + \gamma^{2}\mathbb{E}_{\xi_{1}, \dots, \xi_{n}}\|\nabla \tilde{F}(x, \xi)
\|_{F}^{2}. (37) \\ \end{array}
$$

Study of the $\|\tilde{x}\|_{\Lambda^{+}}^{2}$ terms:

These terms only appear in the Lyapunov potentials used when applying $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$. From the Poisson updates and momentum, we get:

$$
\begin{array}{l} \Delta_{\Lambda}^{2} \triangleq 2\eta\langle x - \tilde{x}, \tilde{x}\rangle_{\Lambda^{+}} - 2\gamma\langle \tilde{x}, \Lambda^{+}\nabla \tilde{F}(x, \xi)\rangle + \gamma^{2}\|\nabla \tilde{F}(x, \xi)\|_{\Lambda^{+}}^{2} - 2\tilde{\alpha}\langle \pi x, \tilde{x}\rangle \\ + \tilde{\alpha}^{2} \sum_{(i,j) \in \mathcal{E}} \lambda^{ij}\|\left(e_{i} - e_{j}\right)\left(e_{i} - e_{j}\right)^{\mathsf{T}} x\|_{\Lambda^{+}}^{2}. \tag{38} \\ \end{array}
$$

Taking the expectation and using (B.9), (B.8), (B.7), (B.5) leads to:

$$
\begin{array}{l} \mathbb{E}\left[\Delta_{\Lambda}^{2}\right] \leq \chi_{1}\eta\|\pi x\|_{F}^{2} + \left(\frac{\eta}{2} - \eta\right)\|\tilde{x}\|_{\Lambda^{+}}^{2} + \frac{2}{\eta}\chi_{1}\gamma^{2}\|\pi \nabla F(x)\|_{F}^{2} - 2\tilde{\alpha}\langle \pi x, \tilde{x}\rangle \\ + 2\chi_{2}\tilde{\alpha}^{2}\|x\|_{\Lambda}^{2} + \chi_{1}\gamma^{2}\mathbb{E}_{\xi_{1}, \dots, \xi_{n}}\|\nabla \tilde{F}(x, \xi)\|_{F}^{2} \tag{39} \\ \end{array}
$$

# C.2 Resolution: putting everything together

In this part, we combine the terms for each potential.
We remind that with Assumption 3.4:

$$
\begin{array}{l} \sum_{i=1}^{n} \mathbb{E}_{\xi_{i}}\left[\|\nabla F_{i}(x_{i}, \xi_{i})\|^{2}\right] = n\sigma^{2} + \sum_{i=1}^{n} \|\nabla f_{i}(x_{i})\|^{2} (40) \\ \leq n\sigma^{2} + \sum_{i=1}^{n} 2\|\nabla f_{i}(x^{*}) - \nabla f_{i}(x_{i})\|^{2} + 2\|\nabla f_{i}(x^{*})\|^{2} (41) \\ \leq n\sigma^{2} + 2n\zeta^{2} + 4L \sum_{i=1}^{n} d_{f_{i}}(x^{*}, x_{i}) (42) \\ \end{array}
$$

Convex case, non-accelerated. We remind that

$$
\phi_{1}(t, x) = A_{t} \|\bar{x} - x^{*}\|^{2} + B_{t} \|\pi x\|^{2}.
$$

Then, using (23), (35) and defining $\Psi_{1} \triangleq \partial_{t}\phi_{1}(t, X_{t}) + \mathbb{E}[A_{t}\Delta x + B_{t}\Delta_{\pi}]$, we have:

$$
\begin{array}{l} \Psi_{1} \leq \|\bar{x} - x^{*}\|^{2}\left(A_{t}^{\prime} - \mu\gamma A_{t}\right) (43) \\ + \|\pi x\|^{2}\left(B_{t}^{\prime} + \frac{L\gamma}{n} A_{t} - \frac{1}{4\chi_{1}} B_{t}\right) (44) \\ + \sum_{i=1}^{n} d_{f_{i}}\left(x^{*}, x_{i}\right)\left(-\frac{2\gamma}{n} A_{t} + \frac{4L\gamma^{2}}{n^{2}} A_{t} + 4L\left(4\chi_{1}\gamma^{2} + \gamma^{2}\right) B_{t}\right) (45) \\ + \left(\frac{\gamma^{2}\sigma^{2}}{n} + \frac{2\gamma^{2}}{n}\zeta^{2}\right) A_{t} + \left(n\gamma^{2}\sigma^{2} + 2n\zeta^{2}\left(4\chi_{1}\gamma^{2} + \gamma^{2}\right)\right) B_{t} (46) \\ \end{array}
$$

We pick $\alpha = \frac{1}{2}$, $B_{t} = \frac{1}{n} A_{t}$, with $A_{t} = e^{-rt}$ (we denote by $r$ the rate of the exponentials $A_{t}, B_{t}$).
Then (43), (44) imply:

$$
r \leq \min\left(\mu\gamma, \frac{1}{4\chi_{1}} - L\gamma\right) \tag{47}
$$

As we want (45) to be negative, we have:

$$
-1 + \gamma\left(\frac{2L}{n} + 2L\left(4\chi_{1} + 1\right)\right) \leq 0 \tag{48}
$$

which leads to:

$$
\gamma \leq \frac{1}{2L\left(\frac{1}{n} + 4\chi_{1} + 1\right)} \tag{49}
$$

and taking $\gamma \leq \frac{1}{2}\frac{1}{2L(3 + 4\chi_1 + 1)} = \frac{1}{16L(1 + \chi_1)}$ works. Now, as $\frac{1}{4\chi_1} - L\gamma \geq \frac{3}{16\chi_1}$ and $\mu\gamma \leq \frac{1}{16\chi_1}$, we pick $r = \mu\gamma$. As we have:

$$
\mathbb{E}\left[\phi_{1}\left(T, x_{T}\right) - \phi_{1}\left(0, x_{0}\right)\right] = \int_{0}^{T}\left(A_{t}^{\prime}\|\bar{x}_{t} - x^{*}\|^{2} + B_{t}^{\prime}\|\pi x_{t}\|^{2} + A_{t}\mathbb{E}_{\xi_{1}, \dots, \xi_{n}}[\Delta x] + B_{t}\mathbb{E}[\Delta_{\pi}]\right) dt \tag{50}
$$

using (46) and (16) leads to:

$$
\mathbb{E}\|\bar{x}_{t} - x^{*}\|^{2} \leq e^{-\gamma\mu t}\left(\|\bar{x}_{0} - x^{*}\|^{2} + \frac{1}{n}\|\pi x_{0}\|^{2}\right) + \frac{\gamma}{\mu}\left(\sigma^{2}\left(\frac{1}{n} + 1\right) + 2\zeta^{2}\left(\frac{1}{n} + 4\chi_{1} + 1\right)\right) \tag{51}
$$

Convex case with $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$. We remind that

$$
\phi_{2}(t, x, \tilde{x}) = A_{t}\|\bar{x} - x^{*}\|^{2} + B_{t}\|\pi x\|^{2} + \tilde{B}_{t}\|\tilde{x}\|_{\Lambda^{+}}^{2}.
+$$ + +Then, using (23), (37), (39) and defining $\Psi_{2} \triangleq \partial_{t}\phi_{2}(t,X_{t}) + \mathbb{E}[A_{t}\Delta x + B_{t}\Delta_{\Lambda}^{1} + \tilde{B}_{t}\Delta_{\Lambda}^{2}]$ , we have: + +$$ +\begin{array}{l} \Psi_ {2} \leq \| \bar {x} - x ^ {*} \| ^ {2} \left(A _ {t} ^ {\prime} - \mu \gamma A _ {t}\right) (52) \\ + \| \pi x \| ^ {2} \left(B _ {t} ^ {\prime} + \frac {L \gamma}{n} A _ {t} - \frac {3}{2} \eta B _ {t} + \eta \chi_ {1} \tilde {B} _ {t}\right) (53) \\ + \| \tilde {x} \| _ {\Lambda^ {+}} ^ {2} \left(\tilde {B} _ {t} ^ {\prime} - \frac {\eta}{2} \tilde {B} _ {t}\right) (54) \\ + \| x \| _ {\Lambda} ^ {2} \left(2 \tilde {\alpha} ^ {2} \chi_ {2} \tilde {B} _ {t} - 2 \alpha (1 - \alpha) B _ {t}\right) (55) \\ + \left\langle \tilde {x}, \pi x \right\rangle \left(2 \eta B _ {t} - 2 \tilde {\alpha} \tilde {B} _ {t}\right) (56) \\ + \sum_ {i = 1} ^ {n} d _ {f _ {i}} \left(x ^ {*}, x _ {i}\right) \left(- \frac {2 \gamma}{n} A _ {t} + \frac {4 L \gamma^ {2}}{n ^ {2}} A _ {t} + 4 L \left(\frac {2 \gamma^ {2}}{\eta} + \gamma^ {2}\right) \left(B _ {t} + \chi_ {1} \tilde {B} _ {t}\right)\right) (57) \\ + \left(\frac {\gamma^ {2} \sigma^ {2}}{n} + \frac {2 \gamma^ {2}}{n} \zeta^ {2}\right) A _ {t} + \left(n \gamma^ {2} \sigma^ {2} + 2 n \zeta^ {2} \left(\gamma^ {2} + \frac {2 \gamma^ {2}}{\eta}\right)\right) \left(B _ {t} + \chi_ {1} \tilde {B} _ {t}\right) (58) \\ \end{array} +$$ + +Then, we assume $\alpha = \frac{1}{2}$ , $\tilde{\alpha} = \frac{1}{2}\sqrt{\frac{\chi_1}{\chi_2}}$ , $\eta = \frac{1}{2\sqrt{\chi_1\chi_2}}$ , $B_{t} = \frac{1}{n} A_{t}$ , $\tilde{B}_{t} = \frac{1}{\chi_{1}} B_{t}$ , $A_{t} = e^{-rt}$ (we denote by $r$ the rate of the exponentials $A_{t}, B_{t}, \tilde{B}_{t}$ ), which satisfies (55) and (56). 
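That this choice indeed cancels (55) and (56) reduces to the identities $2\tilde{\alpha}^{2}\chi_{2}\tilde{B}_{t} = 2\alpha(1-\alpha)B_{t}$ and $\eta B_{t} = \tilde{\alpha}\tilde{B}_{t}$, which can be verified numerically. A small check (the test values of $\chi_1, \chi_2, B_t$ are arbitrary positive numbers):

```python
from math import sqrt

# Check that alpha = 1/2, alpha~ = (1/2) sqrt(chi1/chi2),
# eta = 1/(2 sqrt(chi1 chi2)), B~_t = B_t / chi1 cancels the
# ||x||_Lambda^2 coefficient (55) and the <x~, pi x> coefficient (56).
for chi1, chi2, B_t in [(1.0, 1.0, 1.0), (13.0, 1.0, 0.25), (4.0, 9.0, 2.0)]:
    alpha = 0.5
    alpha_t = 0.5 * sqrt(chi1 / chi2)
    eta = 1.0 / (2.0 * sqrt(chi1 * chi2))
    Bt_tilde = B_t / chi1
    coeff_55 = 2 * alpha_t**2 * chi2 * Bt_tilde - 2 * alpha * (1 - alpha) * B_t
    coeff_56 = 2 * eta * B_t - 2 * alpha_t * Bt_tilde
    assert abs(coeff_55) < 1e-12 and abs(coeff_56) < 1e-12
```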
Then (52), (53), (54) imply: + +$$ +r \leq \min (\mu \gamma , \frac {\eta}{2}, \frac {\eta}{2} - L \gamma) = \min (\mu \gamma , \frac {\eta}{2} - L \gamma) \tag {59} +$$ + +As we want (57) to be negative, we have: + +$$ +- 1 + \gamma \left(\frac {2 L}{n} + 4 L \left(\frac {2}{\eta} + 1\right)\right) \leq 0 \tag {60} +$$ + +which leads to: + +$$ +\gamma \leq \frac {1}{2 L \left(\frac {1}{n} + \frac {4}{\eta} + 2\right)} \tag {61} +$$ + +and taking $\gamma \leq \frac{1}{2L\left(6 + \frac{4}{\eta} + 2\right)} = \frac{1}{16L\left(1 + \sqrt{\chi_1\chi_2}\right)}$ works. Now, we have: + +$$ +\frac {\eta}{2} - L \gamma \geq \frac {1}{4 \sqrt {\chi_ {1} \chi_ {2}}} \left(1 - \frac {4 \sqrt {\chi_ {1} \chi_ {2}}}{1 6 (1 + \sqrt {\chi_ {1} \chi_ {2}})}\right) \geq \frac {3}{1 6 \sqrt {\chi_ {1} \chi_ {2}}} \tag {62} +$$ + +As $\mu \gamma \leq \frac{\mu}{16L\left(1 + \sqrt{\chi_1\chi_2}\right)} \leq \frac{1}{16\sqrt{\chi_1\chi_2}}$ , taking $r = \mu \gamma$ works. Finally, using (58) and (16) leads to: + +$$ +\mathbb {E} \| \bar {x} _ {t} - x ^ {*} \| ^ {2} \leq e ^ {- \gamma \mu t} \left(\| \bar {x} _ {0} - x ^ {*} \| ^ {2} + \frac {2}{n} \| \pi x _ {0} \| ^ {2}\right) + \frac {\gamma}{\mu} \left(\sigma^ {2} (\frac {1}{n} + 2) + 2 \zeta^ {2} (\frac {1}{n} + 8 \sqrt {\chi_ {1} \chi_ {2}} + 2)\right) \tag {63} +$$ + +Non-convex case, non-accelerated. We remind that: + +$$ +\phi_ {3} (t, x) = A _ {t} d _ {f} \left(\bar {x}, x ^ {*}\right) + B _ {t} \| \pi x \| ^ {2} +$$ + +Here, we pick $\alpha = \frac{1}{2}, A_{t} = 1, B_{t} = \frac{L}{n} A_{t}$ . Thus, $A_{t}' = B_{t}' = 0$ . Then, using (32), (35), (31), (25) we obtain: + +$$ +\begin{array}{l} \left. 
A_{t}\mathbb{E}[\Delta f] + B_{t}\mathbb{E}[\Delta_{\pi}]\right) \leq \|\nabla f(\bar{x})\|^{2}\left(-\frac{\gamma}{2} A_{t} + \frac{3}{2n} L\gamma^{2}(M + 1)(P + 1) A_{t} + 3n\gamma^{2}\left(4\chi_{1} + M + 1\right)(P + 1) B_{t}\right) (64) \\ + \|\pi x\|^{2}\left(\frac{L^{2}\gamma}{2n}\left(1 + \frac{3}{n}(M + 1)L\gamma\right) A_{t} + 3L^{2}\gamma^{2}\left(4\chi_{1} + M + 1\right) B_{t} - \frac{1}{4\chi_{1}} B_{t}\right) (65) \\ + \gamma^{2}\left(\sigma^{2} + 3(M + 1)\zeta^{2}\right)\left(\frac{L}{2n} A_{t} + n B_{t}\right) + 12 n\chi_{1}\gamma^{2}\zeta^{2} B_{t} (66) \\ \end{array}
$$

Our goal is to use half of the negative term of (64) to cancel the positive ones, so that there remains at least $-\frac{\gamma}{4} A_t \| \nabla f(\bar{x}) \|^2$ in the end. Thus, we want:

$$
3L\gamma^{2}\left(\frac{(M + 1)(P + 1)}{2n} + \left(4\chi_{1} + M + 1\right)(P + 1)\right) \leq \frac{\gamma}{4} \tag{67}
$$

and taking $\gamma \leq \frac{1}{48L(M + 1)(P + 1)(1 + \chi_1)}$ works. We verify that with $\gamma$ defined as such, (65) is also negative. Finally, we upper bound (66) with $3L\gamma^2\left(\sigma^2 + 3(M + 1 + 4\chi_1)\zeta^2\right)$.
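Condition (67) can be evaluated numerically for the step size $\gamma = \frac{1}{48L(M+1)(P+1)(1+\chi_1)}$ (including the smoothness factor $L$, consistent with the bound $\gamma \leq \frac{c}{L(\chi_1 + 1)}$ used in the sequel). A small check, where the test values of the constants are arbitrary:

```python
# Numerical check of condition (67): the step size
# gamma = 1 / (48 L (M+1)(P+1)(1+chi1)) satisfies
# 3 L gamma^2 [ (M+1)(P+1)/(2n) + (4 chi1 + M + 1)(P+1) ] <= gamma / 4.
# The (L, M, P, chi1) triples below are arbitrary test points.
for L, M, P, chi1 in [(1.0, 0.0, 0.0, 1.0), (10.0, 2.0, 3.0, 50.0), (0.1, 5.0, 0.5, 7.0)]:
    n = 4
    gamma = 1.0 / (48 * L * (M + 1) * (P + 1) * (1 + chi1))
    lhs = 3 * L * gamma**2 * ((M + 1) * (P + 1) / (2 * n)
                              + (4 * chi1 + M + 1) * (P + 1))
    assert lhs <= gamma / 4  # (67) holds with this step size
```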
As we have:

$$
\mathbb{E}\left[\phi_{3}(T, x_{T}) - \phi_{3}\left(0, x_{0}\right)\right] = \int_{0}^{T}\left(A_{t}^{\prime} d_{f}\left(\bar{x}_{t}, x^{*}\right) + B_{t}^{\prime}\|\pi x_{t}\|^{2} + A_{t}\mathbb{E}[\Delta f] + B_{t}\mathbb{E}[\Delta_{\pi}]\right) dt \tag{68}
$$

we note that if $\gamma \leq \frac{c}{L(\chi_1 + 1)}$ for some constant $c > 0$ which depends on $M, P$, we will get:

$$
\frac{\gamma}{4}\int_{0}^{T}\mathbb{E}[\|\nabla f(\bar{x}_{t})\|^{2}] dt \leq \frac{L}{n}\|\pi x_{0}\|^{2} + d_{f}\left(x_{0}, x^{*}\right) + \mathcal{O}\left(LT\gamma^{2}\left(\sigma^{2} + (1 + \chi_{1})\zeta^{2}\right)\right) \tag{69}
$$

which can be rewritten as:

$$
\frac{1}{T}\int_{0}^{T}\mathbb{E}[\|\nabla f(\bar{x}_{t})\|^{2}] dt \leq \frac{4}{\gamma T}\left(f\left(x_{0}\right) - f\left(x^{*}\right)\right) + \mathcal{O}\left(L\gamma\left(\sigma^{2} + (1 + \chi_{1})\zeta^{2}\right)\right) \tag{70}
$$

Non-convex case, with $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$. We have:

$$
\phi_{4}(t, x, \tilde{x}) = A_{t} d_{f}(\bar{x}, x^{*}) + B_{t}\|\pi x\|^{2} + \tilde{B}_{t}\|\tilde{x}\|_{\Lambda^{+}}^{2}
$$

Here, we pick $\alpha = \frac{1}{2}$, $\eta = \frac{1}{2\sqrt{\chi_1\chi_2}}$, $A_t = 1$, $B_t = \frac{L}{n} A_t$, $B_t = \chi_1\tilde{B}_t$, and a reasoning identical to the convex setting shows that we can find a constant $c > 0$ such that if $\gamma \leq \frac{c}{L(1 + \sqrt{\chi_1\chi_2})}$, then:

$$
\frac{1}{T}\int_{0}^{T}\mathbb{E}[\|\nabla f(\bar{x}_{t})\|^{2}] dt = \mathcal{O}\left(\frac{1}{\gamma T}\left(f\left(x_{0}\right) - f\left(x^{*}\right)\right) + L\gamma\left(\sigma^{2} + \left(1 + \sqrt{\chi_{1}\chi_{2}}\right)\zeta^{2}\right)\right) \tag{71}
$$

# C.3 Optimizing the step-size

In this part, we follow [40, 21] and
optimize the step-size a posteriori. We set $\chi = \chi_{1}$ for the non-accelerated setting and $\chi = \sqrt{\chi_1\chi_2}$ with $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ . + +Strongly-convex cases: From (51) and (63), we can write that, for $\gamma \leq \frac{1}{16L(1 + \chi)}$ and initializing $x_0$ such that $\pi x_0 = 0$ , we have: + +$$ +\mathbb {E} \| \bar {x} _ {t} - x ^ {*} \| ^ {2} = \mathcal {O} \left(\| \bar {x} _ {0} - x ^ {*} \| ^ {2} e ^ {- \gamma \mu t} + \frac {\gamma}{\mu} \left(\sigma^ {2} + \zeta^ {2} (1 + \chi)\right)\right) \tag {72} +$$ + +Then, taking the proof of [40] and adapting the threshold, we consider two cases (with $r_0 \triangleq \| \bar{x}_0 - x^* \|^2$ ): + +- if $\frac{1}{16L(1 + \chi)} \geq \frac{\log(\max\{2, \mu^2 r_0 T / \sigma^2\})}{\mu T}$ , then we set $\gamma = \frac{\log(\max\{2, \mu^2 r_0 T / \sigma^2\})}{\mu T}$ . + +In this case, (72) gives: + +$$ +\mathbb {E} \| \bar {x} _ {T} - x ^ {*} \| ^ {2} = \tilde {\mathcal {O}} \left(\frac {1}{\mu^ {2} T} \left(\sigma^ {2} + \zeta^ {2} (1 + \chi)\right)\right) \tag {73} +$$ + +- if $\frac{1}{16L(1 + \chi)} < \frac{\log(\max\{2, \mu^2 r_0 T / \sigma^2\})}{\mu T}$ , then we set $\gamma = \frac{1}{16L(1 + \chi)}$ . 
+ +Then, (72) gives: + +$$ +\begin{array}{l} \mathbb {E} \| \bar {x} _ {T} - x ^ {*} \| ^ {2} = \mathcal {O} \left(r _ {0} e ^ {- \frac {\mu T}{1 6 L (1 + \chi)}} + \frac {1}{\mu} \frac {1}{1 6 L (1 + \chi)} \left(\sigma^ {2} + \zeta^ {2} (1 + \chi)\right)\right) (74) \\ = \tilde {\mathcal {O}} \left(r _ {0} e ^ {- \frac {\mu T}{1 6 L (1 + \chi)}} + \frac {1}{\mu^ {2} T} \left(\sigma^ {2} + \zeta^ {2} (1 + \chi)\right)\right) (75) \\ \end{array} +$$ + +Non-convex cases: From (70) and (71), we can write that, for some constant $c > 0$ depending on $M, P$ such that $\gamma \leq \frac{c}{L(1 + \chi)}$ , we have: + +$$ +\frac {1}{T} \int_ {0} ^ {T} \mathbb {E} [ \| \nabla f (\bar {x} _ {t}) \| ^ {2} ] d t = \mathcal {O} \left(\frac {1}{\gamma T} \left(f \left(x _ {0}\right) - f \left(x ^ {*}\right)\right) + L \gamma \left(\sigma^ {2} + (1 + \chi) \zeta^ {2}\right)\right) \tag {76} +$$ + +Then, taking the proof of Lemma 17 in [21] and adapting the threshold, we consider two cases (with $f_0 \triangleq f(x_0) - f(x^*)$ ): + +- if $\frac{c}{L(1 + \chi)} < \left(\frac{f_0}{TL(\sigma^2 + (1 + \chi)\zeta^2)}\right)^{1/2}$ , then we take $\gamma = \frac{c}{L(1 + \chi)}$ , giving: + +$$ +\begin{array}{l} \frac {1}{T} \int_ {0} ^ {T} \mathbb {E} [ \| \nabla f (\bar {x} _ {t}) \| ^ {2} ] d t = \mathcal {O} \left(\frac {L (1 + \chi)}{T} f _ {0} + L \left(\frac {f _ {0}}{T L \left(\sigma^ {2} + (1 + \chi) \zeta^ {2}\right)}\right) ^ {1 / 2} \left(\sigma^ {2} + (1 + \chi) \zeta^ {2}\right)\right) (77) \\ = \mathcal {O} \left(\frac {L (1 + \chi)}{T} f _ {0} + \sqrt {\frac {L f _ {0}}{T} \left(\sigma^ {2} + (1 + \chi) \zeta^ {2}\right)}\right) (78) \\ \end{array} +$$ + +- if $\frac{c}{L(1 + \chi)} \geq \left(\frac{f_0}{TL(\sigma^2 + (1 + \chi)\zeta^2)}\right)^{1/2}$ , then we take $\gamma = \left(\frac{f_0}{TL(\sigma^2 + (1 + \chi)\zeta^2)}\right)^{1/2}$ , giving: + +$$ +\begin{array}{l} \frac {1}{T} \int_ {0} ^ {T} \mathbb {E} [ \| \nabla f (\bar {x} _ {t}) \| ^ {2} ] d t = \mathcal 
{O}\left(\frac{f_{0}}{T}\left(\frac{TL\left(\sigma^{2} + (1 + \chi)\zeta^{2}\right)}{f_{0}}\right)^{1/2} + \sqrt{\frac{L f_{0}}{T}\left(\sigma^{2} + (1 + \chi)\zeta^{2}\right)}\right) (79) \\ = \mathcal{O}\left(\sqrt{\frac{L f_{0}}{T}\left(\sigma^{2} + (1 + \chi)\zeta^{2}\right)}\right) (80) \\ \end{array}
$$

![](images/584cfe04764587f723e5b52e6f11217eeb9a9b307f2864747a51f3222eaa7fef.jpg)

# D Comparison with accelerated synchronous methods

By definition of $\Lambda$ (3.1), our communication complexity (the expected number of communications) is simply given by $\frac{\mathrm{Tr}(\Lambda)}{2}$ per time unit. As discussed in Sec. 3.5, our goal is to replicate the behaviour of accelerated synchronous methods such as DeTAG [31], MSDA [37] and OPAPC [23] by communicating sufficiently so that the graph connectivity does not impact the time to converge, leading to the condition $\sqrt{\chi_1[\Lambda]\chi_2[\Lambda]} = \mathcal{O}(1)$.

Now, let us consider a gossip matrix $W$ as in [31, 37, 23] (i.e., $W$ is symmetric doubly stochastic) and its Laplacian $\mathcal{L} = I_n - W$. Then, using $\Lambda = \sqrt{\chi_1[\mathcal{L}]\chi_2[\mathcal{L}]}\mathcal{L}$ is sufficient for having $\sqrt{\chi_1[\Lambda]\chi_2[\Lambda]} = \mathcal{O}(1)$.

- Synchronous methods: between two rounds of computations ("steps"), the number of communication edges used is $\frac{|\mathcal{E}|}{\sqrt{1 - \theta}}$, with $\theta = \max\{|\lambda_2|, |\lambda_n|\}$, where $\lambda_1 \geq \dots \geq \lambda_n$ are the eigenvalues of $W$.
- Ours: the number of communication edges used per time unit for our method is $\frac{\operatorname{Tr}(\Lambda)}{2} = \frac{1}{2}\sqrt{\chi_1[\mathcal{L}]\chi_2[\mathcal{L}]}\operatorname{Tr}(\mathcal{L})$.

As, in [31, 37, 23], each communication edge is used at the same rate, we can apply Lemma 3.3 of [34], stating: $\sqrt{\chi_1[\mathcal{L}]\chi_2[\mathcal{L}]}\,\mathrm{Tr}(\mathcal{L}) \leq \sqrt{\|\mathcal{L}\|\,\chi_1[\mathcal{L}]\,(n - 1)\,|\mathcal{E}|}$.
We have:

- $W$ is stochastic: $\|\mathcal{L}\| \leq 2$.
- the graph is connected: $n - 1 \leq |\mathcal{E}|$.
- by definition of $\chi_{1}$ and $\theta$: $1 - \theta \leq \frac{1}{\chi_1[\mathcal{L}]}$.

Thus, $\sqrt{\chi_1[\mathcal{L}]\chi_2[\mathcal{L}]}\mathrm{Tr}(\mathcal{L})\leq \frac{\sqrt{2}|\mathcal{E}|}{\sqrt{1 - \theta}}$, which proves that our communication complexity per time unit is at least as good as that of any accelerated synchronous method.

# E Experimental details

# E.1 Graph topologies

![](images/c5704b4106eb763e7585e0e0ba7e05500dce1d1b3027b27880003521aabef77e.jpg)
Figure 6: The three types of graph topology implemented. From left to right: complete, exponential, cycle, all with 16 nodes. From left to right, the approximate values of $(\chi_1,\chi_2)$ with a communication rate of "1 p2p comm./ $\nabla$ comp." for each worker are: $(1,1)$, $(2,1)$, $(13,1)$.

![](images/3846fa285ae4044b7e4dea3bdb92b07d505a780471ed3e8d293.jpg)
Fig. 6 displays an example of each of the three graph topologies implemented. The exponential graph follows the architecture described in [28, 2]. Note the discrepancy between the values of $\chi_{1}$ and $\chi_{2}$ for the cycle graph, highlighting the advantage of using $\mathbf{A}^2\mathbf{C}\mathbf{i}\mathbf{D}^2$ in the asynchronous setting (to lower the complexity from $\chi_{1}$ to $\sqrt{\chi_1\chi_2}$).

![](images/1432ba359a5afb18062d9e66863219b57328a15146c5e84edbe2be27ba56f34e.jpg)

# E.2 Uniform neighbor selection check

![](images/da5be7e31ab63c103c5a11fde6baaaf019565f1bbb43d98fcb07e2b546b03aeb.jpg)
Figure 7: Heat-map of the communication history (shown through a weighted adjacency matrix) during the asynchronous training on CIFAR10 with 32 workers. We display the results for the complete graph (left), exponential (centre) and ring (right) graph.
![](images/ab268f48a673060040b9f148635bb076711c5e96fb091e7da93b52178ad40a4e.jpg)

![](images/313846b2aeea7b94eff0a1fea9dcb2ab5fa9dfc518fcc3e732209ad477cdea18.jpg)

Our asynchronous algorithm acts as follows: to reduce latency, the first two workers (i.e., GPUs) in the whole pool that declare themselves ready to communicate (i.e., that have finished their previous communication and still have communications to perform before their next gradient step) are paired together for a p2p communication, provided they are neighbors in the connectivity network. During training, we recorded the history of the pairwise communications that occurred. Fig. 7 displays the heat-map of the resulting adjacency matrix, confirming that our assumption of "uniform pairing among neighbors" (used to compute the values of $\chi_{1},\chi_{2}$ ) seems sound. \ No newline at end of file diff --git a/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/images.zip b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..46762f2448634362b036aa1ec60c50fc0c010d40 --- /dev/null +++ b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a105493c6b3121f7e3b50be919bb29f16e85d386c6116d777ee756d21575007c +size 1463115 diff --git a/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/layout.json b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3f986a6b38577a23299e0ef77a010d1bfcad70f4 --- /dev/null +++ b/textbfa2textbfcid2acceleratingasynchronouscommunicationindecentralizeddeeplearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a46aaa57bae5b1824f56873463bdb0bb9ba02ff54053530f85facce4dd7e645c +size 959887 diff
--git a/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/0ce887f9-3ab5-41eb-882f-db76387e5243_content_list.json b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/0ce887f9-3ab5-41eb-882f-db76387e5243_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f33fbb1cb377c587fcf64bbbb053bc4971a1a377 --- /dev/null +++ b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/0ce887f9-3ab5-41eb-882f-db76387e5243_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42d9016d5348fe0cee1f9ab8bda1f626b8adaa2a17beb39920831612a5772cee +size 134146 diff --git a/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/0ce887f9-3ab5-41eb-882f-db76387e5243_model.json b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/0ce887f9-3ab5-41eb-882f-db76387e5243_model.json new file mode 100644 index 0000000000000000000000000000000000000000..66a0e080ae1693c5ad7cff71b4b0d72b98408c61 --- /dev/null +++ b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/0ce887f9-3ab5-41eb-882f-db76387e5243_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c70e03204453b9a08be260f91c3110f6eafbd3a65b28ef84b3dfde2152c234b +size 158598 diff --git a/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/0ce887f9-3ab5-41eb-882f-db76387e5243_origin.pdf b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/0ce887f9-3ab5-41eb-882f-db76387e5243_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ce01b2e9cbb3bd1306d6c22d0a340af67534b883 --- /dev/null +++ b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/0ce887f9-3ab5-41eb-882f-db76387e5243_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:d66772bad3abbbc7d36eba93e089736657d0d6782fac5d88ba1de619cebcb875 +size 7524549 diff --git a/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/full.md b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c3052137636740952232743b3cdab16794515415 --- /dev/null +++ b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/full.md @@ -0,0 +1,595 @@ +# TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning + +Ruijie Zheng1 Xiyao Wang1 + +Yanchao Sun $^{1}$ Shuang Ma $^{4}$ Jieyu Zhao $^{1,2}$ + +Huazhe $\mathbf{X}\mathbf{u}^{3,4,\S}$ Hal Daumé III1,5,8 Furong Huang1,8 + +1 University of Maryland, College Park 2 University of Southern California + +$^{3}$ Tsinghua University $^{4}$ Shanghai Qi Zhi Institute + +5 Microsoft Research + +rzheng12@umd.edu + +# Abstract + +Despite recent progress in reinforcement learning (RL) from raw pixel data, sample inefficiency continues to present a substantial obstacle. Prior works have attempted to address this challenge by creating self-supervised auxiliary tasks, aiming to enrich the agent's learned representations with control-relevant information for future state prediction. However, these objectives are often insufficient to learn representations that can represent the optimal policy or value function, and they often consider tasks with small, abstract discrete action spaces and thus overlook the importance of action representation learning in continuous control. In this paper, we introduce TACO: Temporal Action-driven COntrastive Learning, a simple yet powerful temporal contrastive learning approach that facilitates the concurrent acquisition of latent state and action representations for agents. 
TACO simultaneously learns a state and an action representation by optimizing the mutual information between representations of current states paired with action sequences and representations of the corresponding future states. Theoretically, TACO can be shown to learn state and action representations that encompass sufficient information for control, thereby improving sample efficiency. For online RL, TACO achieves a $40\%$ performance boost after one million environment interaction steps on average across nine challenging visual continuous control tasks from Deepmind Control Suite. In addition, we show that TACO can also serve as a plug-and-play module added to existing offline visual RL methods, establishing new state-of-the-art performance for offline visual RL across offline datasets of varying quality. + +# 1 Introduction + +Developing reinforcement learning (RL) agents that can perform complex continuous control tasks from high-dimensional observations, such as pixels, has been a longstanding challenge. A central aspect of this challenge lies in addressing the sample inefficiency problem in visual RL. Despite significant progress in recent years [33, 34, 53, 52, 16, 15, 17, 18, 21, 48], there remains a considerable gap in sample efficiency between RL with physical state-based features and RL with pixel inputs. This disparity is particularly pronounced in complex tasks; tackling it is therefore crucial for advancing visual RL algorithms and enabling their practical application in real-world scenarios. + +State representation learning has become an essential aspect of RL research, aiming to improve sample efficiency and enhance agent performance. Initial advancements like CURL [33] utilized a self-supervised contrastive InfoNCE objective [47] for state representation, yet it overlooks the temporal dynamics of the environment.
Subsequent works, including CPC [23], ST-DIM [2], and ATC [44], made progress in rectifying this by integrating temporal elements into the contrastive loss, linking pairs of observations with short temporal intervals. The objective here was to develop a state representation capable of effectively predicting future observations. A more comprehensive approach was taken by DRIML [37], which incorporated the first action of the action sequence into the temporal contrastive learning framework. However, these methods, while innovative, have their shortcomings. The positive relations in their contrastive loss designs are often policy-dependent, potentially leading to instability during policy updates throughout the training process. Consequently, they lack the theoretical foundation needed to capture all information representing the optimal policy. Furthermore, these methods, except for CURL and ATC, typically focus on environments such as Atari games with well-represented, abstract discrete action spaces, thereby overlooking the importance of action representation in continuous control tasks [1, 28]. By learning an action representation that groups semantically similar actions together in the latent action space, the agent can better generalize its knowledge across various state-action pairs, enhancing the sample efficiency of RL algorithms. Therefore, learning both state and action representations is crucial for enabling the agent to more effectively reason about its actions' long-term outcomes in continuous control tasks. + +In this paper, we introduce Temporal Action-driven COntrastive Learning (TACO) as a promising approach to visual continuous control tasks. TACO simultaneously learns a state and action representation by optimizing the mutual information between representations of current states paired with action sequences and representations of the corresponding future states.
By optimizing the mutual information between state and action representations, TACO can be theoretically shown to capture the essential information needed to represent the optimal value function. In contrast to approaches such as DeepMDP [12] and SPR [42], which directly model the latent environment dynamics, our method transforms the representation learning objective into a self-supervised InfoNCE objective. This leads to more stable optimization and requires minimal hyperparameter tuning effort. Consequently, TACO yields expressive and concise state-action representations that are better suited for high-dimensional continuous control tasks. + +We demonstrate the effectiveness of representation learning by TACO through extensive experiments on the DeepMind Control Suite (DMC) in both online and offline RL settings. TACO is a flexible plug-and-play module that can be combined with any existing RL algorithm. In the online RL setting, combined with the strong baseline DrQ-v2 [52], TACO significantly outperforms the SOTA model-free visual RL algorithms, and it even surpasses the strongest model-based visual RL baselines such as Dreamer-v3 [18]. As shown in Figure 1, across nine challenging visual continuous control tasks from DMC, TACO achieves a $40\%$ performance boost after one million environment interaction steps on average. For offline RL, TACO can be combined with existing strong offline RL methods to further improve performance. When combined with TD3+BC [11] and CQL [31], TACO outperforms the strongest baselines across offline datasets with varying quality. + +We list our contributions as follows: + +1. We present TACO, a simple yet effective temporal contrastive learning framework that simultaneously learns state and action representations. +2. The framework of TACO is flexible and can be integrated into both online and offline visual RL algorithms with minimal changes to the architecture and minimal hyperparameter tuning effort. +3.
We theoretically show that the objective of TACO is sufficient to capture the essential information in state and action representations for control. +4. Empirically, we show that TACO outperforms prior state-of-the-art model-free RL by $1.4\mathrm{x}$ on nine challenging tasks in Deepmind Control Suite. Applying TACO to offline RL with SOTA algorithms also achieves significant performance gains on 4 selected challenging tasks with pre-collected offline datasets of varying quality. + +![](images/b3805b3b41ab64056cb7e6cd445754593027076a54945a095f0af3ca9686fca4.jpg) +Figure 1: Comparison of average episode reward across nine challenging tasks in Deepmind Control Suite after one million environment steps. + +# 2 Preliminaries + +# 2.1 Visual reinforcement learning + +Let $\mathcal{M} = \langle \mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma \rangle$ be a Markov Decision Process (MDP). Here, $\mathcal{S}$ is the state space, and $\mathcal{A}$ is the action space. The state transition kernel is denoted by $\mathcal{P}:\mathcal{S}\times \mathcal{A}\to \Delta (\mathcal{S})$, where $\Delta (\mathcal{S})$ is a distribution over the state space. $\mathcal{R}:\mathcal{S}\times \mathcal{A}\rightarrow \mathbb{R}$ is the reward function. The objective of the reinforcement learning (RL) algorithm is to identify an optimal policy $\pi^{*}:\mathcal{S}\to \Delta (\mathcal{A})$ that maximizes the expected return $\mathbb{E}_{\pi}[\sum_{t = 0}^{\infty}\gamma^{t}r_{t}]$. Additionally, we can define the optimal Q function as $Q^{*}(s,a) = \mathbb{E}_{\pi^{*}}\left[\sum_{t = 0}^{\infty}\gamma^{t}r(s_{t},a_{t})\mid s_{0} = s,a_{0} = a\right]$, so that the optimal policy is recovered as $\pi^{*}(s) = \arg \max_{a}Q^{*}(s,a)$. In the domain of visual RL, high-dimensional image data are given as state observations, so the simultaneous learning of both representation and control policy becomes the main challenge.
This challenge is exacerbated when the environment interactions are limited and the reward signal is sparse. + +# 2.2 Contrastive learning and the InfoNCE objective + +Contrastive learning, a representation learning approach, imposes similarity constraints on representations, grouping similar/positive pairs and distancing dissimilar/negative ones within the representation space. The contrastive learning objective is often formulated through the InfoNCE loss [47], which maximizes the mutual information between representations of positive pairs by training a classifier. In particular, let $X, Y$ be two random variables. Given an instance $x \sim p(x)$, a corresponding positive sample $y^{+} \sim p(y|x)$, and a collection $Y = \{y_{1}, \dots, y_{N - 1}\}$ of $N - 1$ random samples from the marginal distribution $p(y)$, the InfoNCE loss is defined as + +$$ +\mathcal{L}_{N} = -\mathbb{E}_{x} \left[ \log \frac{f(x, y^{+})}{\sum_{y \in Y \cup \{y^{+}\}} f(x, y)} \right] \tag{1} +$$ + +Minimizing this loss results in $f(x,y) \propto \frac{p(y|x)}{p(y)}$, and one can show that the InfoNCE loss yields a lower bound on the mutual information: $\mathcal{I}(X,Y) \geq \log (N) - \mathcal{L}_N$. + +# 3 TACO: temporal action-driven contrastive loss + +TACO is a flexible temporal contrastive framework that can easily be combined with any existing RL algorithm by interleaving RL updates with its temporal contrastive loss update. In this section, we first present the overall learning objective and theoretical analysis of TACO. Then we provide the architectural design of TACO in detail. + +# 3.1 Temporal contrastive learning objectives and analysis + +In the following, we present the learning objectives of TACO. The guiding principle of our method is to learn state and action representations that capture the essential information about the environment's dynamics sufficient for learning the optimal policy.
This allows the agent to develop a concise and expressive understanding of both its current state and the potential effects of its actions, thereby enhancing sample efficiency and generalization capabilities. + +Let $S_{t}$, $A_{t}$ be the state and action variables at timestep $t$, and let $Z_{t} = \phi(S_{t})$, $U_{t} = \psi(A_{t})$ be their corresponding representations. Then, our method aims to maximize the mutual information between representations of current states paired with action sequences and representations of the corresponding future states: + +$$ +\mathbb{J}_{\mathrm{TACO}} = \mathcal{I}\left(Z_{t + K}; [Z_{t}, U_{t}, \dots, U_{t + K - 1}]\right) \tag{2} +$$ + +Here, $K \geq 1$ is a fixed hyperparameter for the prediction horizon. In practice, we estimate a lower bound on this mutual information via the InfoNCE loss, with details of our practical implementation described in §3.2. + +We introduce the following theorem, extending a result of Rakelly et al. [41], to demonstrate the sufficiency of the TACO objective: + +Theorem 3.1. Let $K \in \mathbb{N}^{+}$, and $\mathbb{J}_{TACO} = \mathcal{I}(Z_{t + K};[Z_t,U_t,\dots ,U_{t + K - 1}])$. If $\mathbb{J}_{TACO}$ is maximized for a given state representation $\phi$ and action representation $\psi$, then for arbitrary state-action pairs $(s_1,a_1),(s_2,a_2)$ such that $\phi (s_1) = \phi (s_2)$ and $\psi (a_1) = \psi (a_2)$, it holds that $Q^{*}(s_{1},a_{1}) = Q^{*}(s_{2},a_{2})$. + +This theorem guarantees that if our mutual information objective, Equation (2), is maximized, then for any two state-action pairs $(s_1, a_1)$ and $(s_2, a_2)$ with equivalent state and action representations, their optimal action-value functions, $Q^*(s_1, a_1)$ and $Q^*(s_2, a_2)$, will be equal. In other words, maximizing this mutual information objective ensures that the learned representations are sufficient for making optimal decisions.
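As a toy numerical illustration of the InfoNCE estimator from §2.2 that is used to lower-bound objectives such as $\mathbb{J}_{\mathrm{TACO}}$ — the Gaussian data, dimensions, and inner-product score function here are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 128, 16  # batch size and embedding dimension (illustrative choices)

# y_i is a noisy copy of x_i, so (x_i, y_i) form the positive pairs.
x = rng.normal(size=(N, d))
y = x + 0.1 * rng.normal(size=(N, d))

def info_nce(x, y):
    """L_N = -E[log f(x_i, y_i) / sum_j f(x_i, y_j)] with f(x, y) = exp(<x, y>)."""
    logits = x @ y.T                              # <x_i, y_j> for all pairs
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives sit on the diagonal

loss = info_nce(x, y)
mi_lower_bound = np.log(N) - loss  # I(X; Y) >= log(N) - L_N
print(loss, mi_lower_bound)
```

With aligned pairs the loss stays well below $\log N$, so the bound $\log(N) - \mathcal{L}_N$ is informative; destroying the pairing (e.g., reversing `y`) pushes the loss toward $\log N$ and the bound toward zero.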
+ +# 3.2 TACO implementation + +![](images/08818a0e16322bc1dac90c62870900a2f2b95cade9a251d87bab73837fb68584.jpg) +Figure 2: A demonstration of our temporal contrastive loss: Given a batch of state-action transition triples $\{(s_t^{(i)},[a_t^{(i)},\dots,a_{t + K - 1}^{(i)}],s_{t + K}^{(i)})\}_{i = 1}^N$, we first apply the state encoder and action encoder to get latent state-action encodings $\{(z_{t}^{(i)},[u_{t}^{(i)},\dots,u_{t + K - 1}^{(i)}],z_{t + K}^{(i)})\}_{i = 1}^N$. Then we apply two different projection layers to map $(z_{t}^{(i)},[u_{t}^{(i)},\dots,u_{t + K - 1}^{(i)}])$ and $z_{t + K}^{(i)}$ into the shared contrastive embedding space. Finally, we learn to predict the correct pairings between $(z_{t},[u_{t},\dots,u_{t + K - 1}])$ and $z_{t + K}$ using an InfoNCE loss. + +Here we provide a detailed description of the practical implementation of TACO. In Figure 2, we illustrate the architecture design of TACO. Our approach minimally adapts a base RL algorithm by incorporating the temporal contrastive loss as an auxiliary loss during the batch update process. Specifically, given a batch of state and action sequence transitions $\{(s_t^{(i)},[a_t^{(i)},\dots,a_{t' - 1}^{(i)}],s_{t'}^{(i)})\}_{i = 1}^N$ $(t^{\prime} = t + K)$, we optimize: + +$$ +\mathcal{J}_{\mathrm{TACO}}(\phi, \psi, W, G_{\theta}, H_{\theta}) = -\frac{1}{N} \sum_{i = 1}^{N} \log \frac{\exp\left(g_{t}^{(i)\top} W h_{t'}^{(i)}\right)}{\sum_{j = 1}^{N} \exp\left(g_{t}^{(i)\top} W h_{t'}^{(j)}\right)} \tag{3} +$$ + +Here let $z_{t}^{(i)} = \phi(s_{t}^{(i)})$ and $u_{t}^{(i)} = \psi(a_{t}^{(i)})$ be state and action embeddings, respectively.
$g_{t}^{(i)} = G_{\theta}(z_{t}^{(i)}, u_{t}^{(i)}, \dots, u_{t' - 1}^{(i)})$, and $h_{t'}^{(i)} = H_{\theta}(z_{t'}^{(i)})$, where $G_{\theta}$ and $H_{\theta}$ denote two learnable projection layers that map the latent state-and-action sequence $(z_{t}^{(i)}, u_{t}^{(i)}, \dots, u_{t' - 1}^{(i)})$ and the future latent state $z_{t'}^{(i)}$, respectively, to a common contrastive embedding space. $W$ is a learnable parameter providing a similarity measure between $g_{t}^{(i)}$ and $h_{t'}^{(j)}$ in the shared contrastive embedding space. Subsequently, both state and action representations are fed into the agent's $Q$ network, allowing the agent to effectively reason about the long-term effects of its actions and to better leverage its past experience through state-action abstractions. + +In addition to the main TACO objective, in our practical implementation we find that the inclusion of two auxiliary objectives yields a further enhancement of the algorithm's overall performance. The first is the CURL [33] loss: + +$$ +\mathcal{J}_{\mathrm{CURL}}(\phi, \psi, W, H_{\theta}) = -\frac{1}{N} \sum_{i = 1}^{N} \log \frac{\exp\left(h_{t}^{(i)\top} W h_{t}^{(i)+}\right)}{\exp\left(h_{t}^{(i)\top} W h_{t}^{(i)+}\right) + \sum_{j \neq i} \exp\left(h_{t}^{(i)\top} W h_{t}^{(j)}\right)} \tag{4} +$$ + +Here, $h_{t}^{(i)+} = H_{\theta}(\phi(s_{t}^{(i)+}))$, where $s_{t}^{(i)+}$ is the augmented view of $s_{t}^{(i)}$ obtained by applying the same random shift augmentation as DrQ-v2 [52]. $W$ and $H_{\theta}$ share the same weights as in the TACO objective.
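A minimal numpy sketch of the forward computation behind Equation (3): the encoders $\phi, \psi$ and the projections $G_\theta, H_\theta$ are random linear stand-ins (the paper uses neural networks), and all dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 64, 3                      # batch size, prediction horizon
d_s, d_a = 32, 6                  # raw state / action dimensions (illustrative)
d_z, d_u, d_c = 8, 4, 16          # latent and contrastive-embedding dimensions

# Stand-ins for the learned maps: random linear layers instead of networks.
phi = rng.normal(size=(d_s, d_z)) / np.sqrt(d_s)                     # state encoder
psi = rng.normal(size=(d_a, d_u)) / np.sqrt(d_a)                     # action encoder
G = rng.normal(size=(d_z + K * d_u, d_c)) / np.sqrt(d_z + K * d_u)   # G_theta
H = rng.normal(size=(d_z, d_c)) / np.sqrt(d_z)                       # H_theta
W = rng.normal(size=(d_c, d_c)) / np.sqrt(d_c)                       # bilinear W

s_t = rng.normal(size=(N, d_s))        # states s_t^(i)
a_seq = rng.normal(size=(N, K, d_a))   # action sequences [a_t, ..., a_{t+K-1}]
s_tk = rng.normal(size=(N, d_s))       # future states s_{t+K}^(i)

z_t, z_tk = s_t @ phi, s_tk @ phi
u = (a_seq @ psi).reshape(N, K * d_u)      # concatenated action embeddings
g = np.concatenate([z_t, u], axis=1) @ G   # g_t^(i)
h = z_tk @ H                               # h_{t+K}^(i)

logits = g @ W @ h.T                       # pairwise scores g_i^T W h_j
logits -= logits.max(axis=1, keepdims=True)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
j_taco = -np.mean(np.diag(log_probs))      # InfoNCE: correct pairing is (i, i)
print(j_taco)
```

In training, this quantity would be minimized with respect to all five parameter groups $(\phi, \psi, W, G_\theta, H_\theta)$; the sketch only shows where each symbol of Equation (3) enters the computation.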
The second auxiliary objective is reward prediction: + +$$ +\mathcal{J}_{\text{Reward}}(\phi, \psi, \hat{R}_{\theta}) = \sum_{i = 1}^{N} \left(\hat{R}_{\theta}\left(z_{t}^{(i)}, u_{t}^{(i)}, \dots, u_{t' - 1}^{(i)}\right) - r^{(i)}\right)^{2} \tag{5} +$$ + +Here $r^{(i)} = \sum_{j = t}^{t' - 1} r_{j}^{(i)}$ is the sum of rewards from timestep $t$ to $t^{\prime} - 1$, and $\hat{R}_{\theta}$ is a reward prediction layer. For our final objective, we combine the three losses with equal weights. As verified in Section 4.1, although TACO serves as the central objective that drives notable performance improvements, the inclusion of both the CURL and reward prediction losses can further improve the algorithm's performance. + +We have opted to use DrQ-v2 [52] as the backbone algorithm of TACO, although in principle, TACO could be incorporated into any visual RL algorithm. TACO extends DrQ-v2 with minimal additional hyperparameter tuning. The only additional hyperparameter is the selection of the prediction horizon $K$. Throughout our experiments, we have limited our choice of $K$ to either 1 or 3, depending on the nature of the environment. We refer the readers to Appendix A for a discussion on the choice of $K$. + +# 4 Experiments and results + +This section provides an overview of our empirical evaluation, conducted in both online and offline RL settings. To evaluate our approach under online RL, we apply TACO to a set of nine challenging visual continuous control tasks from Deepmind Control Suite (DMC) [46]. Meanwhile, for offline RL, we combine TACO with existing offline RL methods and test the performance on four DMC tasks, using three pre-collected datasets that differ in the quality of their data collection policies.
+ +# 4.1 Comparison between TACO and strong baselines in online RL tasks + +Environment Settings: In our online RL experiments, we first evaluate the performance of TACO on nine challenging visual continuous control tasks from Deepmind Control Suite [46]: Quadruped Run, Quadruped Walk, Hopper Hop, Reacher Hard, Walker Run, Acrobot Swingup, Cheetah Run, Finger Turn Hard, and Reach Duplo. These tasks demand that the agent acquire and exhibit complex motor skills and present challenges such as delayed and sparse rewards. As a result, these tasks have not been fully mastered by previous visual RL algorithms, and they require the agent to learn an effective policy that balances exploration and exploitation while coping with the challenges presented by the tasks. + +In addition to Deepmind Control Suite, we also present the results of TACO on six additional challenging robotic manipulation tasks from Meta-world [57]: Hammer, Assembly, Disassemble, Stick Pull, Pick Place Wall, and Hand Insert. Unlike the DeepMind Control Suite, which primarily concentrates on locomotion tasks, the Meta-world domain provides tasks that involve complex manipulation and interaction. This represents a different set of challenges, emphasizing precision and control in fine motor tasks rather than broader locomotion skills. In Appendix G, we provide a visualization for each Meta-world task. + +Baselines: We compare TACO with four model-free visual RL algorithms: CURL [33], DrQ [53], DrQ-v2 [52], and A-LIX [5]. A-LIX builds on DrQ-v2 by adding adaptive regularization to the encoder's gradients. While TACO could also extend A-LIX, our reproduction of results from its open-source implementation does not consistently surpass DrQ-v2. As such, we do not choose A-LIX as the backbone algorithm for TACO.
Additionally, we compare with two state-of-the-art model-based RL algorithms for visual continuous control, Dreamer-v3 [18] and TDMPC [21], which learn world models in latent space and select actions using either model-predictive control or a learned policy. + +TACO achieves significantly better sample efficiency and performance compared with SOTA visual RL algorithms. The efficacy of TACO is evident from the findings presented in Figure 4 (DMC), Table 1 (DMC), and Figure 5 (Meta-world). In contrast to preceding model-free visual RL algorithms, TACO exhibits considerably improved sample efficiency. For example, on the challenging Reacher Hard task, TACO achieves optimal performance in just 0.75 million environment steps, whereas DrQ-v2 requires approximately 1.5 million steps. When trained with only 1 million environment steps, TACO on average achieves $40\%$ better performance, and it is even better than the model-based visual RL algorithms Dreamer-v3 and TDMPC on 6 out of 9 tasks. In addition, on more demanding tasks such as Quadruped Run, Hopper Hop, Walker Run, and Cheetah Run, TACO continues to outshine competitors, exhibiting superior overall performance after two or three million steps, as illustrated in Figure 4. For robotic manipulation tasks, as shown in Figure 5, TACO also significantly outperforms the baseline model-free visual RL algorithms, highlighting the broad applicability of TACO.
+ +![](images/0702fc4843bb540af061c1bc9f7e1fe6728791d985c6045310b5215c3bda3aea.jpg) + +![](images/39ba7f05b01df1ad7a844fa3a8a4805883785f23db21a25c444d9648509470d7.jpg) + +![](images/58d9d360324c04c91bd4e59cc7bb41c3f53310f60fb595f595d732ff6d357c98.jpg) + +![](images/55a5944772501c87968394da9418a58237bc3e7ef3d65d3514f8bb03468126e6.jpg) + +![](images/877110d669373b5df38599d3c368de8e93a0c6642823b8d2d3de3b5f283feca6.jpg) + +![](images/b6fea1862728b6ac5b8ee1cd0f57b08321503bbf6a67002f07afd9655e7ba9c4.jpg) + +Figure 4: (Deepmind Control Suite) Performance of TACO against the two strongest model-free visual RL baselines. Results of DrQ-v2 and A-LIX are reproduced from their open-source implementations, and all results are averaged over 6 random seeds. +![](images/4f340b0b017506a5ebb256857b9a62ecd9ecf301f280114630819abf28ffb1af.jpg) + +![](images/a6967ab5aeb57ed3aa86e7baea80ddc316439f42e1c8549dc18029862c42b85e.jpg) + +![](images/88ddd4cacb5194c7852ea2308bcd1e91289c2ce1089b114fb3bfec8194642fd7.jpg) + +Table 1: Episode reward of TACO and SOTA visual RL algorithms on the image-based DMControl 1M benchmark. Results are averaged over 6 random seeds. Within the table, shaded entries represent the best performance of model-free algorithms, while text in bold signifies the highest performance across all baseline algorithms, including model-based algorithms. + +
| Environment (1M Steps) | TACO | DrQ-v2 | A-LIX | DrQ | CURL | Dreamer-v3 | TDMPC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Quadruped Run | 541 ± 38 | 407 ± 21 | 454 ± 42 | 179 ± 18 | 181 ± 14 | 331 ± 42 | 397 ± 37 |
| Hopper Hop | 261 ± 52 | 189 ± 35 | 225 ± 13 | 192 ± 41 | 152 ± 34 | 369 ± 21 | 195 ± 18 |
| Walker Run | 637 ± 11 | 517 ± 43 | 617 ± 12 | 451 ± 73 | 387 ± 24 | 765 ± 32 | 600 ± 28 |
| Quadruped Walk | 793 ± 8 | 680 ± 52 | 560 ± 175 | 120 ± 17 | 123 ± 11 | 353 ± 27 | 435 ± 16 |
| Cheetah Run | 821 ± 48 | 691 ± 42 | 676 ± 41 | 474 ± 32 | 657 ± 35 | 728 ± 32 | 565 ± 61 |
| Finger Turn Hard | 632 ± 75 | 220 ± 21 | 62 ± 54 | 91 ± 9 | 215 ± 17 | 810 ± 58 | 400 ± 113 |
| Acrobot Swingup | 241 ± 21 | 128 ± 8 | 112 ± 23 | 24 ± 8 | 5 ± 1 | 210 ± 12 | 224 ± 20 |
| Reacher Hard | 883 ± 63 | 572 ± 51 | 510 ± 16 | 471 ± 45 | 400 ± 29 | 499 ± 51 | 485 ± 31 |
| Reach Duplo | 234 ± 21 | 206 ± 32 | 199 ± 14 | 36 ± 7 | 8 ± 1 | 119 ± 30 | 117 ± 12 |

TACO, DrQ-v2, A-LIX, DrQ, and CURL are model-free; Dreamer-v3 and TDMPC are model-based.
+ +Concurrently learning state and action representations is crucial for the success of TACO. To demonstrate the effectiveness of action representation learning in TACO, we evaluate its performance on a subset of 4 difficult benchmark tasks and compare it with a baseline method without action representation, as shown in Figure 3. The empirical results underscore the efficacy of the temporal contrastive learning objective even in the absence of action representation. For instance, TACO records an enhancement of $18\%$ on Quadruped Run and a substantial $51\%$ on Reacher Hard, while the remaining tasks showcase performance comparable to DrQ-v2. Furthermore, when comparing against TACO without action representation, TACO achieves a consistent performance gain, e.g., $12.2\%$ on Quadruped Run, with gains on Hopper Hop as well. These results not only emphasize the inherent value of the temporal contrastive objective in TACO, but also underscore the instrumental role of action representation learning in bolstering the performance of the underlying RL algorithm. + +![](images/ca65bec03fb3a0c05d910071f381db09846f847c9a33a4c57220b80e7772c4b0.jpg) +Figure 3: 1M Performance of TACO with and without action representation + +![](images/60f5c46dc73f362697427592420abc2cec13da72c969ecf85484f9a747a8c0d8.jpg) + +![](images/9645b493df42200f5f3c2e36db27332409b35b1fb4df4a6787b014c477874b77.jpg) + +![](images/9116a93d402d4e59e7e85967e4c42b304e611ec8b2528a4b92619ccd12ce7d6e.jpg) + +![](images/f83de9bf07f08c4975832258652eccd5b8ee2d63c1f1b136b31689aafaf17c95.jpg) +Figure 5: (Meta-world) Performance of TACO against DrQ-v2 and A-LIX. All results are averaged over 6 random seeds. + +TACO learns action representations that group semantically similar actions together. To verify that our learned action representation has indeed grouped semantically similar actions together, we conduct an experiment within the Cheetah Run task. We artificially add 20 dimensions to the action
space of task Cheetah Run, although only the first six were utilized in environmental interactions. We first train an agent with TACO online to obtain the action representation. Then we select four actions within the original action space, $a_1, a_2, a_3, a_4$, to act as centroids. For each of the four centroids, we generate 1000 augmented actions by adding standard Gaussian noise to the last 20 dimensions. We aim to determine whether our action representation can disregard these "noisy" dimensions while retaining the information of the first six. Using t-SNE for visualization, we embed the 4000 actions before and after applying the action representation. As shown in Figure 6, our learned action representation indeed groups the four clusters, demonstrating its ability to extract control-relevant information from the raw action space. + +![](images/badd2be141e3960876b64f1bc6435145aac1992d3efd46e582aec44f2560f9a6.jpg) + +![](images/f3e7261130b8429ef627dfdd2271fc20e1524b47b43dbeb9590882e45b28c37c.jpg) + +![](images/6b7cfa2cb3dadea02bb163ff8827d14274396ead8ad6e25eb05f7bb9a87eb11f.jpg) + +The effectiveness of our temporal contrastive loss is enhanced with a larger batch size. As is widely acknowledged in contrastive learning research [8, 22, 40, 13], our contrastive loss sees significant benefits from utilizing a larger batch size. In Figure 7a, we illustrate the performance of our algorithm alongside DrQ-v2 after one million environment steps on the Quadruped Run task. As evident from the plot, batch size greatly influences the performance of our algorithm, while DrQ-v2's baseline performance remains fairly consistent throughout training. In order to strike a balance between time efficiency and performance, we opt for a batch size of 1024, which is 4 times larger than the 256 batch size employed in DrQ-v2, but 4 times smaller than the 4096 commonly used in the contrastive learning literature [8, 22, 13].
For an analysis of how batch size affects the algorithm's runtime, we direct the reader to Appendix B. + +![](images/2a2423c7421ca21d70db77f18165aa52cfa6d7eba4cfd1aa35a7121564fb4cb0.jpg) +Figure 6: Left: t-SNE embedding of actions with distracting dimensions. Right: t-SNE embedding of latent representations for actions with distracting dimensions. + +![](images/f4f7a7bf1241e92c99e97e546e199751d27e90ae71523154d38f682912bd8432.jpg) + +![](images/4137ba241ecc604773e89ac180ea7a58e0533ab2fca327a5624b60d4505adff6.jpg) +(a) TACO and DrQ-v2 across different batch sizes. +![](images/5586dfb2076eb2ed4304774752ca89ac99fa892cbc138836e474390a56c3b949.jpg) +(b) TACO with different learning objectives removed. +Figure 7: TACO with different batch sizes and with different components of its learning objective removed + +Reward prediction and CURL loss serve an auxiliary role in further improving the performance of TACO, while the temporal contrastive loss of TACO is the most crucial component. In the practical deployment of TACO, two additional objectives, namely reward prediction and CURL loss,
+ +InfoNCE-based temporal action-driven contrastive objective in TACO outperforms other representation learning objectives including SPR [43], ATC [44], and DRIML [37]. In Table 2, we have showcased a comparison between our approach and other visual RL representation learning objectives such as SPR, ATC, and DRIML. Given that SPR and DRIML were not initially designed for continuous control tasks, we have re-implemented their learning objectives using the identical backbone algorithm, DrQ-v2. A similar approach was taken for ATC, with their learning objectives also being reimplemented on DrQ-v2 to ensure a fair comparison. (Without the DrQ-v2 backbone algorithm, the performance reproduced by their original implementation is significantly worse.) Furthermore, recognizing the significance of learning action encoding, as discussed earlier, we have integrated action representation learning into all these baselines. Therefore, the model architecture remains consistent across different representation learning objectives, with the sole difference being the design of the temporal contrastive loss. For DRIML, given that only the first action of the action sequence is considered in the temporal contrastive loss, TACO and DRIML differ when the number of steps $K$ is greater than one. Thus, we indicate N/A for tasks where we choose $K = 1$ for TACO. + +Table 2: Comparison with other objectives including SPR [42], ATC [44], and DRIML [37] + +
| Environment | TACO | SPR | ATC | DRIML | DrQ-v2 |
| --- | --- | --- | --- | --- | --- |
| Quadruped Run | 541 ± 38 | 448 ± 79 | 432 ± 54 | N/A | 407 ± 21 |
| Walker Run | 637 ± 21 | 560 ± 71 | 502 ± 171 | N/A | 517 ± 43 |
| Hopper Hop | 261 ± 52 | 154 ± 10 | 112 ± 98 | 216 ± 13 | 192 ± 41 |
| Reacher Hard | 883 ± 63 | 711 ± 92 | 863 ± 12 | 835 ± 72 | 572 ± 51 |
| Acrobot Swingup | 241 ± 21 | 198 ± 21 | 206 ± 61 | 222 ± 39 | 210 ± 12 |
+ +Table 2 shows that while previous representation learning objectives have proven beneficial in helping the agent surpass the DrQ-v2 baseline by learning a superior representation, our approach exhibits consistent superiority over the other representation learning objectives in all five evaluated environments. These results reinforce our claim that TACO is a more effective method for learning state-action representations, allowing agents to reason more efficiently about the long-term outcomes of their actions in the environment. + +# 4.2 Combining TACO with offline RL algorithms + +In this part, we discuss the experimental results of TACO within the context of offline reinforcement learning, emphasizing the benefits our temporal contrastive state/action representation learning objective brings to visual offline RL. Offline visual reinforcement learning poses unique challenges, as algorithms must learn an optimal policy solely from a fixed dataset without further interaction with the environment. This necessitates that the agent effectively generalizes from limited data while handling high-dimensional visual inputs. The state/action representation learning objective of TACO plays a vital role in addressing these challenges by capturing essential information about the environment's dynamics, thereby enabling more efficient generalization and improved performance. TACO can be easily integrated as a plug-and-play module on top of existing strong offline RL methods, such as TD3+BC [11] and CQL [31]. + +For evaluation, we select four challenging visual control tasks from DMC: Hopper Hop, Cheetah Run, Walker Run, and Quadruped Run. For each task, we generate three types of datasets. The medium dataset consists of trajectories collected by a single policy of medium performance. The precise definition of "medium performance" is task-dependent but generally represents an intermediate level of mastery, which is neither too poor nor too proficient.
The medium-replay dataset contains trajectories randomly sampled from the online learning agent's replay buffer before it reaches a medium performance level. The full-replay dataset includes trajectories randomly sampled throughout the online learning phase, from the beginning until convergence. The dataset size for Walker, Hopper, and Cheetah is 100K, while for the more challenging Quadruped Run task, a larger dataset size of 500K is used to account for the increased difficulty. We compute the normalized reward by dividing the offline RL reward by the best reward we obtain during online TACO training.

Table 3: Offline Performance (Normalized Reward) for different offline RL methods. Results are averaged over 6 random seeds. $\pm$ captures the standard deviation over seeds.
| Task | Dataset | TD3+BC w. TACO | TD3+BC | CQL w. TACO | CQL | DT | IQL | BC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Hopper Hop | Medium | 52.4 ± 0.4 | 51.2 ± 0.8 | 47.9 ± 0.6 | 46.7 ± 0.2 | 40.5 ± 3.6 | 2.0 ± 1.7 | 48.2 ± 0.8 |
| | Medium-replay | 67.2 ± 0.1 | 62.9 ± 0.1 | 74.6 ± 0.4 | 68.7 ± 0.1 | 65.3 ± 1.8 | 57.6 ± 1.4 | 25.9 ± 3.2 |
| | Full-replay | 97.6 ± 1.4 | 83.8 ± 2.3 | 101.2 ± 1.9 | 94.2 ± 2.0 | 92.4 ± 0.3 | 47.7 ± 2.2 | 65.7 ± 2.7 |
| Cheetah Run | Medium | 66.6 ± 0.5 | 66.1 ± 0.3 | 70.1 ± 0.4 | 66.7 ± 1.7 | 64.3 ± 0.7 | 1.7 ± 1.1 | 62.9 ± 0.1 |
| | Medium-replay | 62.6 ± 0.2 | 61.1 ± 0.1 | 72.3 ± 1.2 | 67.3 ± 1.1 | 67.0 ± 0.6 | 26.5 ± 3.2 | 48.0 ± 3.1 |
| | Full-replay | 92.5 ± 2.4 | 91.2 ± 0.8 | 86.9 ± 2.4 | 65.0 ± 3.9 | 89.6 ± 1.4 | 14.6 ± 3.7 | 69.0 ± 0.3 |
| Walker Run | Medium | 49.2 ± 0.5 | 48.0 ± 0.2 | 49.6 ± 1.0 | 49.4 ± 0.9 | 47.3 ± 0.3 | 4.4 ± 0.4 | 46.2 ± 0.6 |
| | Medium-replay | 63.1 ± 0.6 | 62.3 ± 0.2 | 62.3 ± 2.6 | 59.9 ± 0.9 | 61.7 ± 1.1 | 41.4 ± 2.8 | 18.5 ± 0.8 |
| | Full-replay | 86.8 ± 0.6 | 84.0 ± 1.6 | 88.1 ± 0.1 | 79.8 ± 0.6 | 81.6 ± 0.8 | 18.1 ± 3.7 | 30.8 ± 1.8 |
| Quadruped Run | Medium | 60.6 ± 0.1 | 60.0 ± 0.2 | 58.1 ± 3.7 | 55.9 ± 9.1 | 14.6 ± 3.8 | 0.8 ± 0.8 | 56.2 ± 1.1 |
| | Medium-replay | 61.3 ± 0.3 | 58.1 ± 0.5 | 61.9 ± 0.2 | 61.2 ± 0.9 | 19.5 ± 2.2 | 58.4 ± 4.4 | 51.6 ± 3.3 |
| | Full-replay | 92.6 ± 0.7 | 89.3 ± 0.4 | 92.1 ± 0.1 | 85.2 ± 2.5 | 14.5 ± 1.1 | 36.3 ± 5.9 | 57.6 ± 0.7 |
| Average Normalized Score | | 71.0 | 68.2 | 72.1 | 66.7 | 48.4 | 25.8 | 54.9 |
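Each entry in Table 3 is an offline return normalized by the best return obtained during online TACO training and aggregated over 6 seeds. A minimal sketch of this bookkeeping follows; the raw return values and the percent scaling are illustrative assumptions, not numbers from the paper:

```python
from statistics import mean, stdev

def normalized_reward(offline_return, best_online_return):
    """Normalize an offline return by the best online TACO return,
    scaled to percent here to match the magnitudes in Table 3 (values
    above 100 are possible when the offline agent beats the best online run)."""
    return 100.0 * offline_return / best_online_return

# Hypothetical raw returns from 6 seeds and a hypothetical best online return.
best_online = 900.0
seed_returns = [450.0, 468.0, 441.0, 459.0, 450.0, 432.0]
scores = [normalized_reward(r, best_online) for r in seed_returns]
print(f"{mean(scores):.1f} +/- {stdev(scores):.1f}")  # -> 50.0 +/- 1.4
```

The `±` column in Table 3 corresponds to the sample standard deviation over seeds, as computed by `stdev` above.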
We compare the performance of TD3+BC and CQL with and without TACO on our benchmark. Additionally, we compare with the decision transformer (DT) [7], a strong model-free offline RL baseline that casts RL as conditional sequence modeling; IQL [30], another commonly used offline RL algorithm; and a behavior cloning (BC) baseline. For TD3+BC, CQL, and IQL, which were originally proposed for offline RL with vector inputs, we add their learning objectives on top of DrQ-v2 to handle image inputs.

Table 3 provides the normalized reward for each dataset. The results underscore that when combined with the strongest baselines, TD3+BC and CQL, TACO achieves consistent performance improvements across all tasks and datasets, setting new state-of-the-art results for offline visual reinforcement learning. This holds both for the medium datasets, collected with a single policy and a narrow data distribution, and for the medium-replay and full-replay datasets with more diverse distributions.

# 5 Related work

# 5.1 Contrastive learning in visual reinforcement learning

Contrastive learning has emerged as a powerful technique for learning effective representations across various domains, particularly in computer vision [47, 8, 22, 23, 49]. This success is attributed to its ability to learn meaningful embeddings by contrasting similar and dissimilar data samples. In visual reinforcement learning, it is used as a self-supervised auxiliary task to improve state representation learning, with InfoNCE [47] being a popular learning objective. CURL [33] treats augmented states as positive pairs, but it neglects the temporal dependency of the MDP. CPC [47], ST-DIM [2], and ATC [44] integrate temporal relationships into the contrastive loss by maximizing mutual information between current state representations (or state histories encoded by an LSTM in CPC) and future state representations.
However, they do not consider actions, making the positive relationships in the learning objective policy-dependent. DRIML [37] addresses this by maximizing mutual information between the state-action pair at the current time step and the resulting future state, but its objective remains policy-dependent as it only provides the first action of the action sequence. ADAT [29] and ACO [59] also incorporate actions into the contrastive loss by labeling observations with similar policy action outputs as positive samples, but these methods do not naturally extend to tasks with nontrivial continuous action spaces. A common downside of these approaches is the potential for unstable encoder updates due to policy-dependent positive relations. In contrast, TACO is theoretically sufficient, and it tackles the additional challenge of continuous control tasks by simultaneously learning state and action representations.

In addition to the InfoNCE objective, other self-supervised learning objectives have also been proposed. Approaches such as DeepMDP [12], SPR [42], SGI [43], and EfficientZero [56] directly learn a latent-space transition model. Notably, these methods predominantly target Atari games, characterized by their small, well-represented, and abstract discrete action spaces. When dealing with continuous control tasks, which often involve a continuous and potentially high-dimensional action space, the relationships between actions and states become increasingly intricate. This complexity poses a significant challenge in effectively capturing the underlying dynamics. In contrast, by framing the latent dynamics model predictions as a self-supervised InfoNCE objective, the mutual-information-guided approach used by TACO is better suited for continuous control tasks, resulting in more stable optimization and thus better state and action representations.
# 5.2 Action representation in reinforcement learning

Although state or observation representations are the main focus of prior research, there is also work discussing the benefits and effects of learning action representations. Chandak et al. [6] propose to learn a policy over a latent action space and transform the latent actions into actual actions, which enables generalization over large action sets. Allshire et al. [1] introduce a variational encoder-decoder model to learn disentangled action representations, improving the sample efficiency of policy learning. In model-based RL, strategies to achieve more precise and stable model-based planning or roll-outs are essential. To this end, Park and Levine [39] propose an approach to train an environment model in the learned latent action space. In addition, action representations also have the potential to improve multi-task learning [25], where latent actions can be shared across tasks to enhance generalization.

# 6 Conclusion

In this paper, we have introduced TACO, a conceptually simple temporal action-driven contrastive learning objective that simultaneously learns state and action representations for image-based continuous control. Theoretically sound, TACO has demonstrated significant practical superiority by outperforming SOTA online visual RL algorithms. Additionally, it can be seamlessly integrated as a plug-in module to enhance the performance of existing offline RL algorithms. Despite the promising results, TACO does present limitations, particularly its need for large batch sizes due to the inherent nature of the contrastive InfoNCE objective, which impacts computational efficiency. Moving forward, we envisage two primary directions for future research. Firstly, the creation of more advanced temporal contrastive InfoNCE objectives that can function effectively with smaller data batches may mitigate the concerns related to computational efficiency.
Secondly, the implementation of a distributed version of TACO, akin to the strategies employed for DDPG in previous works [3, 24], could significantly enhance training speed. These approaches offer promising avenues for further advancements in visual RL.

# 7 Acknowledgement

Zheng, Wang, Sun and Huang are supported by National Science Foundation NSF-IIS-FAI program, DOD-ONR-Office of Naval Research, DOD Air Force Office of Scientific Research, DOD-DARPA-Defense Advanced Research Projects Agency Guaranteeing AI Robustness against Deception (GARD), Adobe, Capital One and JP Morgan faculty fellowships.

# References

[1] Arthur Allshire, Roberto Martin-Martin, Charles Lin, Shawn Manuel, Silvio Savarese, and Animesh Garg. Laser: Learning a latent action space for efficient reinforcement learning. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6650-6656. IEEE, 2021. 2, 10
[2] Ankesh Anand, Evan Racah, Sherjil Ozair, Yoshua Bengio, Marc-Alexandre Côté, and R Devon Hjelm. Unsupervised state representation learning in atari. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. 2, 9
[3] Gabriel Barth-Maron, Matthew W. Hoffman, David Budden, Will Dabney, Dan Horgan, Dhruva TB, Alistair Muldal, Nicolas Heess, and Timothy Lillicrap. Distributed distributional deterministic policy gradients. In International Conference on Learning Representations, 2018. 10
[4] Marc Bellemare, Will Dabney, Robert Dadashi, Adrien Ali Taiga, Pablo Samuel Castro, Nicolas Le Roux, Dale Schuurmans, Tor Lattimore, and Clare Lyle. A geometric perspective on optimal representations for reinforcement learning. Advances in Neural Information Processing Systems, 32:4358-4369, 2019. 22

[5] Edoardo Cetin, Philip J Ball, Stephen Roberts, and Oya Celiktutan. Stabilizing off-policy deep reinforcement learning from pixels.
In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 2784–2810. PMLR, 17–23 Jul 2022. 5 +[6] Yash Chandak, Georgios Theocharous, James Kostas, Scott Jordan, and Philip Thomas. Learning action representations for reinforcement learning. In International conference on machine learning, pages 941-950. PMLR, 2019. 10 +[7] Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 15084-15097. Curran Associates, Inc., 2021. 9, 17 +[8] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 1597-1607. PMLR, 13-18 Jul 2020. 7, 9 +[9] Will Dabney, André Barreto, Mark Rowland, Robert Dadashi, John Quan, Marc G. Bellemare, and David Silver. The value-improvement path: Towards better representations for reinforcement learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8):7160-7168, May 2021. 22 +[10] Benjamin Eysenbach, Tianjun Zhang, Sergey Levine, and Ruslan Salakhutdinov. Contrastive learning as goal-conditioned reinforcement learning. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. 23 +[11] Scott Fujimoto and Shixiang (Shane) Gu. A minimalist approach to offline reinforcement learning. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. 
Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 20132-20145. Curran Associates, Inc., 2021. 2, 8, 17
[12] Carles Gelada, Saurabh Kumar, Jacob Buckman, Ofir Nachum, and Marc G Bellemare. DeepMDP: Learning continuous latent space models for representation learning. In International Conference on Machine Learning, pages 2170-2179. PMLR, 2019. 2, 9, 22
[13] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Remi Munos, and Michal Valko. Bootstrap your own latent - a new approach to self-supervised learning. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 21271-21284. Curran Associates, Inc., 2020. 7, 23
[14] Zhaohan Daniel Guo, Shantanu Thakoor, Miruna Pislar, Bernardo Avila Pires, Florent Altché, Corentin Tallec, Alaa Saade, Daniele Calandriello, Jean-Bastien Grill, Yunhao Tang, Michal Valko, Remi Munos, Mohammad Gheshlaghi Azar, and Bilal Piot. BYOL-Explore: Exploration by bootstrapped prediction. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. 23
[15] Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. In International Conference on Learning Representations, 2020. 1, 22
[16] Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2555–2565. PMLR, 09–15 Jun 2019.
1, 22 + +[17] Danijar Hafner, Timothy P Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering atari with discrete world models. In International Conference on Learning Representations, 2021. 1 +[18] Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models, 2023. 1, 2, 5 +[19] Nicklas Hansen, Hao Su, and Xiaolong Wang. Stabilizing deep q-learning with convnets and vision transformers under data augmentation. In Conference on Neural Information Processing Systems, 2021. 22 +[20] Nicklas Hansen and Xiaolong Wang. Generalization in reinforcement learning by soft data augmentation. In International Conference on Robotics and Automation, 2021. 22 +[21] Nicklas A Hansen, Hao Su, and Xiaolong Wang. Temporal difference learning for model predictive control. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 8387-8406. PMLR, 17-23 Jul 2022. 1, 5 +[22] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9726-9735, 2020. 7, 9, 23 +[23] Olivier Henaff. Data-efficient image recognition with contrastive predictive coding. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4182-4192. PMLR, 13-18 Jul 2020. 2, 9 +[24] Matthew W. 
Hoffman, Bobak Shahriari, John Aslanides, Gabriel Barth-Maron, Nikola Momchev, Danila Sinopalnikov, Piotr Stanczyk, Sabela Ramos, Anton Raichuk, Damien Vincent, Léonard Hussenot, Robert Dadashi, Gabriel Dulac-Arnold, Manu Orsini, Alexis Jacq, Johan Ferret, Nino Vieillard, Seyed Kamyar Seyed Ghasemipour, Sertan Girgin, Olivier Pietquin, Feryal Behbahani, Tamara Norman, Abbas Abdolmaleki, Albin Cassirer, Fan Yang, Kate Baumli, Sarah Henderson, Abe Friesen, Ruba Haroun, Alex Novikov, Sergio Gómez Colmenarejo, Serkan Cabi, Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Andrew Cowie, Ziyu Wang, Bilal Piot, and Nando de Freitas. Acme: A research framework for distributed reinforcement learning. arXiv preprint arXiv:2006.00979, 2020. 10
[25] Pu Hua, Yubei Chen, and Huazhe Xu. Simple emergent action representations from multi-task policy training. arXiv preprint arXiv:2210.09566, 2022. 10
[26] Riashat Islam, Manan Tomar, Alex Lamb, Yonathan Efroni, Hongyu Zang, Aniket Didolkar, Dipendra Misra, Xin Li, Harm van Seijen, Remi Tachet des Combes, and John Langford. Agent-controller representations: Principled offline rl with rich exogenous information, 2022. 22
[27] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In International Conference on Learning Representations, 2017. 22
[28] Hong Jun Jeon, Dylan Losey, and Dorsa Sadigh. Shared Autonomy with Learned Latent Actions. In Proceedings of Robotics: Science and Systems, Corvalis, Oregon, USA, July 2020. 2
[29] Minbeom Kim, Kyeongha Rho, Yong-duk Kim, and Kyomin Jung. Action-driven contrastive representation for reinforcement learning. PLOS ONE, 17(3):1-14, 03 2022. 9
[30] Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicit q-learning. In International Conference on Learning Representations, 2022.
9 +[31] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1179-1191. Curran Associates, Inc., 2020. 2, 8 + +[32] Alex Lamb, Riashat Islam, Yonathan Efroni, Aniket Didolkar, Dipendra Misra, Dylan Foster, Lekan Molu, Rajan Chari, Akshay Krishnamurthy, and John Langford. Guaranteed discovery of control-endogenous latent states with multi-step inverse models, 2022. 22 +[33] Michael Laskin, Aravind Srinivas, and Pieter Abbeel. CURL: Contrastive unsupervised representations for reinforcement learning. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5639-5650. PMLR, 13-18 Jul 2020. 1, 2, 4, 5, 9 +[34] Misha Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, and Aravind Srinivas. Reinforcement learning with augmented data. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 19884-19895. Curran Associates, Inc., 2020. 1 +[35] Alex X Lee, Anusha Nagabandi, Pieter Abbeel, and Sergey Levine. Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model. Advances in Neural Information Processing Systems, 33:741-752, 2020. 22 +[36] Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, and Amy Zhang. VIP: Towards universal visual reward and representation via value-implicit pre-training. In The Eleventh International Conference on Learning Representations, 2023. 23 +[37] Bogdan Mazoure, Remi Tachet des Combes, Thang Long Doan, Philip Bachman, and R Devon Hjelm. Deep reinforcement and infomax learning. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. 
Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 3686-3698. Curran Associates, Inc., 2020. 2, 8, 9 +[38] Simone Parisi, Aravind Rajeswaran, Senthil Purushwalkam, and Abhinav Gupta. The unsurprising effectiveness of pre-trained vision models for control. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 17359-17371. PMLR, 17-23 Jul 2022. 22 +[39] Seohong Park and Sergey Levine. Predictable mdp abstraction for unsupervised model-based rl, 2023. 10 +[40] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748-8763. PMLR, 18-24 Jul 2021. 7 +[41] Kate Rakelly, Abhishek Gupta, Carlos Florensa, and Sergey Levine. Which mutual-information representation learning objectives are sufficient for control? In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. 4, 21, 22 +[42] Max Schwarzer, Ankesh Anand, Rishab Goel, R Devon Hjelm, Aaron Courville, and Philip Bachman. Data-efficient reinforcement learning with self-predictive representations. In International Conference on Learning Representations, 2021. 2, 8, 9, 22 +[43] Max Schwarzer, Nitarshan Rajkumar, Michael Noukhovitch, Ankesh Anand, Laurent Charlin, R Devon Hjelm, Philip Bachman, and Aaron Courville. Pretraining representations for data-efficient reinforcement learning. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. 
Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. 8, 9
[44] Adam Stooke, Kimin Lee, Pieter Abbeel, and Michael Laskin. Decoupling representation learning from reinforcement learning. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 9870-9879. PMLR, 18-24 Jul 2021. 2, 8, 9

[45] Yanchao Sun, Ruijie Zheng, Xiyao Wang, Andrew E Cohen, and Furong Huang. Transfer RL across observation feature spaces via model-based regularization. In International Conference on Learning Representations, 2022. 22
[46] Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, and Martin Riedmiller. Deepmind control suite, 2018. 5
[47] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding, 2019. 2, 3, 9
[48] Xiyao Wang, Ruijie Zheng, Yanchao Sun, Ruonan Jia, Wichayaporn Wongkamjan, Huazhe Xu, and Furong Huang. Coplanner: Plan to roll out conservatively but to explore optimistically for model-based rl, 2023. 1
[49] Zhirong Wu, Yuanjun Xiong, Stella X. Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3733-3742, 2018. 9
[50] Tete Xiao, Ilija Radosavovic, Trevor Darrell, and Jitendra Malik. Masked visual pre-training for motor control, 2022. 23
[51] Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto. Reinforcement learning with prototypical representations. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 11920-11931. PMLR, 18-24 Jul 2021. 23
[52] Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto.
Mastering visual continuous control: Improved data-augmented reinforcement learning. In International Conference on Learning Representations, 2022. 1, 2, 4, 5 +[53] Denis Yarats, Ilya Kostrikov, and Rob Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. In International Conference on Learning Representations, 2021. 1, 5 +[54] Denis Yarats, Ilya Kostrikov, and Rob Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. In International Conference on Learning Representations, 2021. 22 +[55] Denis Yarats, Amy Zhang, Ilya Kostrikov, Brandon Amos, Joelle Pineau, and Rob Fergus. Improving sample efficiency in model-free reinforcement learning from images. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 10674–10681, 2021. 22 +[56] Weirui Ye, Shaohuai Liu, Thanard Kurutach, Pieter Abbeel, and Yang Gao. Mastering atari games with limited data. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. 9, 22 +[57] Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Avnish Narayan, Hayden Shively, Adithya Bellathur, Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning, 2021. 5 +[58] Amy Zhang, Rowan Thomas McAllister, Roberto Calandra, Yarin Gal, and Sergey Levine. Learning invariant representations for reinforcement learning without reconstruction. In International Conference on Learning Representations, 2021. 22 +[59] Qihang Zhang, Zhenghao Peng, and Bolei Zhou. Learning to drive by watching youtube videos: Action-conditioned contrastive policy pretraining. European Conference on Computer Vision (ECCV), 2022. 
9 + +# Appendix + +# A Effects of the choice of prediction horizon $K$ + +TACO functions as a versatile add-on to current online and offline RL algorithms, requiring only the prediction horizon $K$ as an additional hyperparameter. In our investigations, we select $K$ as either 1 or 3 across all tasks. The performance of TACO with $K$ values of 1 and 3 is compared against the DrQ-v2 baselines in Figure 8. As illustrated in the figure, TACO outperforms the baseline DrQ-v2 for both $K = 1$ and 3, indicating the effectiveness of the temporal contrastive loss in concurrent state and action representation learning. In comparing $K = 1$ to $K = 3$ , we observe that a longer prediction horizon ( $K = 3$ ) yields superior results for four out of nine tasks, specifically Hopper Hop, Quadruped Walk, Acrobot Swingup, and Reacher Hard. Conversely, for the Quadruped Run and Walker Run tasks, a shorter temporal contrastive loss interval ( $K = 1$ ) proves more beneficial. For the remaining tasks, the choice between $K = 1$ and $K = 3$ appears to have no discernable impact. + +![](images/5dafbaf5c10cd1f3bb79134b0b27b27c481f2c4b20058a463108bc9632fea907.jpg) +Figure 8: 1M Performance of TACO with step size $K = 1$ and $K = 3$ . + +Our speculation hinges on the rate of changes in the agent's observations. While Theorem 3.1 applies regardless of the prediction horizon $K$ , a shorter horizon (such as $K = 1$ ) can offer a somewhat shortsighted perspective. This becomes less beneficial in continuous control tasks where state transitions within brief time intervals are negligible. Therefore, in environments exhibiting abrupt state transitions, a larger $K$ would be advantageous, whereas a smaller $K$ would suffice for environments with gradual state transitions. + +To substantiate this conjecture, we conduct a simple experiment on the task Hopper Hop where we aim to simulate the different rates of changes in the agent's observations. 
We verify this by varying the size of the action repeat, which corresponds to how many times a chosen action is repeated per environment step. Consequently, a larger action repeat size induces more pronounced observational changes. In our prior experiments, following the settings of DrQ-v2, we fix the size of the action repeat to 2. In this experiment, we set the action repeat for Hopper Hop to 1, 2, and 4. For each action repeat size, we then compare the 1M performance of TACO with the prediction horizon $K$ selected from the set $\{1,3,5\}$. Interestingly, as demonstrated in Figure 9, the optimal $K$ decreases as the action repeat grows: 5 for an action repeat of 1, 3 for an action repeat of 2, and 1 for an action repeat of 4. This observation further substantiates our assertion that the optimal choice of prediction horizon correlates with the rate of change in environmental dynamics.

![](images/c77524465be9bdbaa1d14bca34714490bfa183579cf6617a59f2d3d50716af34.jpg)
Figure 9: 1M Performance of TACO with different prediction horizon $K$ under different action repeat sizes. Shaded columns correspond to the best prediction horizon $K$ under a fixed action repeat size.

# B Time-efficiency of TACO

In this section, we compare the time efficiency of different visual RL algorithms. In Table 4, we present a comprehensive comparison of the speed of these algorithms, measured in frames per second (FPS). Additionally, we provide FPS for TACO with different batch sizes in Table 5 as a reference. To ensure a fair comparison, all the algorithms are tested on Nvidia A100 GPUs.

As mentioned in §4.1, the InfoNCE objective in TACO requires a large batch size of 1024, which is 4 times larger than the batch size used in DrQ-v2. Consequently, this increases the processing time, making our method run about 3.6 times slower than DrQ-v2, with time efficiency similar to that of DrQ.
Therefore, the primary limitation of our method is this time inefficiency caused by the use of large batch sizes. A potential remedy is a distributed implementation of our method to speed up training, which we intend to explore in future work.

Table 4: Frames per second (FPS) for visual RL algorithms. B stands for the batch size used in each algorithm's original implementation.
| TACO (B:1024) | DrQ-v2 (B:256) | A-LIX (B:256) | DrQ (B:512) | CURL (B:512) | Dreamer-v3 (B:256) | TDMPC (B:256) |
| --- | --- | --- | --- | --- | --- | --- |
| 35 | 130 | 98 | 33 | 20 | 31 | 22 |

Table 5: Frames per second (FPS) for TACO with different batch sizes.

| TACO (B:1024) | TACO (B:512) | TACO (B:256) |
| --- | --- | --- |
| 35 | 65 | 94 |
# C Experiment details

# C.1 Online RL

In this section, we describe the implementation details of TACO for the online RL experiments. We implement TACO on top of the released open-source implementation of DrQ-v2, where we interleave the update of the TACO objective with the original actor and critic updates of DrQ-v2. Below we show the pseudo-code of the new update function.

```python
def update(batch):
    ## Extract states, action sequences, rewards, and next states.
    obs, action_sequence, reward, next_obs = batch
    # Update the agent's critic function
    update_critic(obs, action_sequence, reward, next_obs)
    # Update the agent's actor function
    update_actor(obs)
    # Update the TACO loss
    update_taco(obs, action_sequence, reward, next_obs)
```

Listing 1: PyTorch-like pseudo-code showing how TACO is incorporated into the update function of existing visual RL algorithms.

Next, we show PyTorch-like pseudo-code for how the TACO objective is computed.

```python
# stateEncoder: state/observation encoder (CNN)
# actionEncoder: action encoder (MLP with 1 hidden layer)
# sequenceEncoder: action sequence encoder (linear layer)
# rewardPredictor: reward prediction layer (MLP with 1 hidden layer)
# G: projection layer I (MLP with 1 hidden layer)
# H: projection layer II (MLP with 1 hidden layer)
# aug: data augmentation function (random shift)
# W: matrix for computing similarity scores
# n: batch size

def compute_taco_objective(obs, action_sequence, reward, next_obs):
    ## Compute feature representations for both states and actions.
    z = stateEncoder(aug(obs))
    z_anchor = stateEncoder(aug(obs), stop_grad=True)
    next_z = stateEncoder(aug(next_obs), stop_grad=True)
    u_seq = sequenceEncoder(actionEncoder(action_sequence))
    ## Project into the joint contrastive embedding space.
    x = G(torch.cat([z, u_seq], dim=-1))
    y = H(next_z)
    ## Compute the bilinear product x^T W y;
    ## diagonal entries of x^T W y correspond to positive pairs.
    logits = torch.matmul(x, torch.matmul(W, y.T))
    logits = logits - torch.max(logits, dim=1, keepdim=True)[0]
    labels = torch.arange(n)
    taco_loss = cross_entropy_loss(logits, labels)
    ## Compute the CURL loss.
    x = H(z)
    y = H(z_anchor).detach()
    logits = torch.matmul(x, torch.matmul(W, y.T))
    logits = logits - torch.max(logits, dim=1, keepdim=True)[0]
    labels = torch.arange(n)
    curl_loss = cross_entropy_loss(logits, labels)
    ## Reward prediction loss.
    reward_pred = rewardPredictor(z, u_seq)
    reward_loss = torch.mse_loss(reward_pred, reward)
    return taco_loss + curl_loss + reward_loss
```

Listing 2: PyTorch-like pseudo-code for how the TACO objective is computed.

Then, when computing the Q-values for both actor and critic updates, we use the trained state and action encoders. As in DrQ-v2, we use 1024 for the hidden dimension of all encoder layers and 50 for the feature dimension of the state representation. For the action representation, we choose the dimensionality of the action encoding, which corresponds to the output size of the action encoding layer, to be $\lceil 1.25 \times |\mathcal{A}| \rceil$. In practice, we find this works well, as it effectively extracts relevant control information from the raw action space while minimizing the inclusion of control-irrelevant information in the representation. See Appendix D for an additional experiment testing the robustness of TACO to the dimensionality of the action representation.

# C.2 Offline RL

TD3+BC, CQL, and IQL are all originally proposed for vector inputs.
We modify these algorithms on top of DrQ-v2 so that they can deal with image observations. For TD3+BC, the behavior cloning regularizer is incorporated into the actor update, with the regularizer weight $\alpha_{\mathrm{TD3 + BC}} = 2.5$ as defined and used in Fujimoto et al. [11]. In our experiments, no significant performance difference was found for $\alpha_{\mathrm{TD3 + BC}}$ in the range $[2, 3]$. In the case of CQL, we augment the original critic loss with a Q-value regularizer and choose the Q-regularizer weight $\alpha_{\mathrm{CQL}}$ from $\{0.5, 1, 2, 4, 8\}$. Table 6 presents the chosen $\alpha_{\mathrm{CQL}}$ for each dataset.

Table 6: Hyperparameter $\alpha_{\mathrm{CQL}}$ used in different tasks/datasets.
| Task | Dataset | $\alpha_{\mathrm{CQL}}$ |
| --- | --- | --- |
| Hopper Hop | Medium | 0.5 |
| Hopper Hop | Medium-replay | 0.5 |
| Hopper Hop | Replay | 2 |
| Cheetah Run | Medium | 0.5 |
| Cheetah Run | Medium-replay | 2 |
| Cheetah Run | Replay | 4 |
| Walker Run | Medium | 0.5 |
| Walker Run | Medium-replay | 1 |
| Walker Run | Replay | 4 |
| Quadruped Run | Medium | 0.5 |
| Quadruped Run | Medium-replay | 2 |
| Quadruped Run | Replay | 4 |
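For intuition, the conservative Q-value regularizer described above can be sketched in a few lines. The following is our simplified, self-contained NumPy illustration (not the released implementation, which handles continuous actions via sampling): it penalizes out-of-distribution actions via $\alpha_{\mathrm{CQL}} \cdot \mathbb{E}\big[\log \sum_a \exp Q(s,a) - Q(s, a_{\mathrm{data}})\big]$ over a toy discrete batch.

```python
import numpy as np

def cql_penalty(q_values, data_action_idx, alpha):
    """Simplified CQL regularizer for a discrete set of candidate actions.

    q_values: (batch, num_actions) array of Q-values.
    data_action_idx: (batch,) indices of the actions taken in the dataset.
    Returns alpha * mean( logsumexp_a Q(s, a) - Q(s, a_data) ), which is
    added to the critic loss to push down Q on out-of-distribution actions.
    """
    lse = np.log(np.exp(q_values).sum(axis=1))  # logsumexp over actions
    q_data = q_values[np.arange(q_values.shape[0]), data_action_idx]
    return alpha * float(np.mean(lse - q_data))

# Toy batch: 2 states, 3 candidate actions each.
q = np.array([[1.0, 2.0, 0.5],
              [0.0, 0.0, 0.0]])
penalty = cql_penalty(q, np.array([1, 0]), alpha=0.5)
# With a single candidate action the penalty vanishes, since then
# logsumexp_a Q(s, a) == Q(s, a_data).
```

The weight multiplying this term plays the role of the hyperparameter $\alpha_{\mathrm{CQL}}$ reported in Table 6.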
+ +For IQL, we adopt the update functions of both policy and value function from its open-source JAX implementation into DrQ-v2, setting the inverse temperature $\beta$ to 0.1, and $\tau = 0.7$ for the expectile. Lastly, for the Decision Transformer (DT), we adapt from the original open-source implementation by Chen et al. [7] and use a context length of 20. + +# D Sensitivity of TACO to action representation dimensionality: an additional experiment + +In all of our previous experiments, we use $\lceil 1.25 \times |\mathcal{A}| \rceil$ as the latent dimensions of the action space for TACO. In practice, we find this works well so that it retains the rich information of the raw actions while being able to group semantically similar actions together. To test the algorithm's sensitivity to this hyperparameter, we conduct an experiment on the Quadruped Run task, which has a 12-dimensional action space. In Figure 10, we show the 1M performance of TACO with different choices of latent action dimensions. Notably, we observe that as long as the dimensionality of the action space is neither too small (6-dimensional), which could limit the ability of latent actions to capture sufficient information from the raw actions, nor too large (24-dimensional), which might introduce excessive control-irrelevant information, the performance of TACO remains robust to the choice of action representation dimensionality. + +![](images/3ecf140b5e9b88d7d26b89af8cd96e22713e9757cc4d5879de39ce9bccdd14fd.jpg) +Figure 10: 1M Performance of TACO with different dimensionality of latent action representations on Quadruped Run $(|\mathcal{A}| = 12)$ . Error bar represents standard deviation across 8 random seeds. + +# E Sensitivity of TACO vs. 
DrQ-v2 to random seeds

Figure 11: Robustness analysis of TACO vs. DrQ-v2 across multiple (6) random seeds. Panels: (a) Cheetah Run, (b) Acrobot Swingup, (c) Quadruped Run, (d) Quadruped Walk, (e) Reacher Hard, (f) Hopper Hop, (g) Finger Turn Hard, (h) Reach Duplo, (i) Walker Run.

In this section, we conduct a comprehensive analysis of robustness to random seeds, comparing the performance of our proposed method, TACO, with the baseline algorithm, DrQ-v2, across multiple seeds on the nine tasks shown in Figure 4. Our aim is to gain insight into the stability and robustness of both algorithms under varying random seeds, providing valuable information for assessing their reliability.

Remarkably, Figure 11 demonstrates the impressive robustness of TACO compared to DrQ-v2. TACO consistently exhibits smaller performance variation across different seeds, outperforming the baseline on almost every task. In contrast, runs of DrQ-v2 frequently encounter broken seeds. These results provide strong evidence that TACO not only improves average performance but also significantly enhances the robustness of the training process, making it more resilient against failure cases.
This enhanced robustness is crucial for real-world applications, where stability and consistency are essential for successful deployment.

# F Tackling the complex "Manipulator Bring Ball" task: an additional online RL experiment within the DeepMind Control Suite

In addition to the nine tasks depicted in Figure 4, we conduct an additional experiment on an even more complex task, "Manipulator Bring Ball". This task requires the robotic arm to precisely locate, grab, and transport the ball to a specified goal location. A visualization of a successful trajectory is illustrated in Figure 12a.

The challenge of this domain lies in the sparse reward structure, the demand for accurate arm control, and the need for temporally extended skills. As a result, despite training for 30 million frames, as demonstrated in Figure 12b, DrQ-v2 fails to successfully complete this task. In comparison, TACO manages to solve this task for half of the random initializations, i.e., 3 out of 6 seeds.

![](images/21c115429b323b0bfe4bfe2fbc93eda297abc49f2ac943df3740ae76d043189f.jpg)

![](images/676bf2669bf706b61083a67cc4a7cec7ef26822d515adc81b1f0d6411576f15f.jpg)
(a) Visualization of an expert trajectory on manipulator bring ball.
(b) Online learning curves on manipulator bring ball, aggregated over 6 random seeds.
Figure 12: Manipulator Bring Ball Task

# G Visualization of Meta-world Tasks

![](images/3fcd454e70af3afb818e6e43aebb312fc242d07fd5c42d83943c1eaa009aeb22.jpg)
Figure 13: Visualization of expert trajectories for each Meta-world task

# H Proof of Theorem 3.1

In this section, we prove Theorem 3.1, extending the results of Rakelly et al. [41]. We will use the notation $A_{t:t + K}$ to denote the action sequence $A_{t},\ldots ,A_{t + K}$ from timestep $t$ to $t + K$, and $U_{t:t + K}$ to denote the sequence of latent actions $U_{t},\ldots ,U_{t + K}$ from timestep $t$ to $t + K$.

![](images/f2642fecc8c920dae5e6a7e7aaed2c995ee9a5c1a69f73ea1984f034d7334f9b.jpg)
Figure 14: The graphical diagram

Proposition H.1. Let $X$ be the return-to-go $\sum_{i=0}^{H-t-K} \gamma^i R_{t+K+i}$, with the conditional independence assumptions implied by the graphical model in Figure 14. If $I(Z_{t+K}; Z_t, U_{t:t+K-1}) = I(S_{t+K}; S_t, A_{t:t+K-1})$, then $I(X; Z_t, U_{t:t+K-1}) = I(X; S_t, A_{t:t+K-1})$.

Proof. We prove by contradiction.
Suppose there exists a pair of state and action representations $\phi_Z, \psi_U$ and a reward function $r$ such that $I(Z_{t + K}; Z_t, U_{t:t + K - 1}) = I(S_{t + K}; S_t, A_{t:t + K - 1})$, but $I(X; Z_t, U_{t:t + K - 1}) < I(X; S_t, U_{t:t + K - 1}) < I(X; S_t, A_{t:t + K - 1})$. Then it suffices to show that $I(S_{t + K}; Z_t, U_{t:t + K - 1}) < I(S_{t + K}; S_t, U_{t:t + K - 1})$, which gives us the desired contradiction since $I(Z_{t + K}; Z_t, U_{t:t + K - 1}) \leq I(S_{t + K}; Z_t, U_{t:t + K - 1})$ and $I(S_{t + K}; S_t, U_{t:t + K - 1}) \leq I(S_{t + K}; S_t, A_{t:t + K - 1})$.

Now we look at $I(X;Z_t,S_t,U_{t:t + K - 1})$. Applying the chain rule of mutual information:

$$
\begin{array}{rl} I \left(X; Z_t, S_t, U_{t:t+K-1}\right) & = I \left(X; Z_t \mid S_t, U_{t:t+K-1}\right) + I \left(X; S_t, U_{t:t+K-1}\right) \quad (6) \\ & = 0 + I \left(X; S_t, U_{t:t+K-1}\right) \quad (7) \end{array}
$$

Applying the chain rule in the other order, we get

$$
I \left(X; Z_t, S_t, U_{t:t+K-1}\right) = I \left(X; S_t \mid Z_t, U_{t:t+K-1}\right) + I \left(X; Z_t, U_{t:t+K-1}\right) \tag{8}
$$

Therefore, we get

$$
I \left(X; S_t, U_{t:t+K-1}\right) = I \left(X; S_t \mid Z_t, U_{t:t+K-1}\right) + I \left(X; Z_t, U_{t:t+K-1}\right) \tag{9}
$$

By our assumption that $I(X;Z_{t},U_{t:t + K - 1}) < I(X;S_{t},U_{t:t + K - 1})$, we must have

$$
I \left(X; S_t \mid Z_t, U_{t:t+K-1}\right) > 0 \tag{10}
$$

Next, we expand $I(S_{t + K};Z_t,S_t,U_{t:t + K - 1})$:

$$
\begin{array}{rl} I \left(S_{t+K}; Z_t, S_t, U_{t:t+K-1}\right) & = I \left(S_{t+K}; Z_t \mid S_t, U_{t:t+K-1}\right) + I \left(S_{t+K}; S_t, U_{t:t+K-1}\right) \quad (11) \\ & = 0 + I \left(S_{t+K}; S_t, U_{t:t+K-1}\right) \quad (12) \end{array}
$$

On the other hand, we have

$$
I \left(S_{t+K}; Z_t, S_t, U_{t:t+K-1}\right) = I \left(S_{t+K}; S_t \mid Z_t, U_{t:t+K-1}\right) + I \left(S_{t+K}; Z_t, U_{t:t+K-1}\right) \tag{13}
$$

Thus we have

$$
I \left(S_{t+K}; S_t \mid Z_t, U_{t:t+K-1}\right) + I \left(S_{t+K}; Z_t, U_{t:t+K-1}\right) = I \left(S_{t+K}; S_t, U_{t:t+K-1}\right) \tag{15}
$$

But then, since $S_t \to S_{t + K} \to X$ forms a Markov chain, $I(S_{t + K}; S_t \mid Z_t, U_{t:t + K - 1}) \geq I(X; S_t \mid Z_t, U_{t:t + K - 1})$, which is greater than zero by Inequality (10). As a result, $I(S_{t + K}; Z_t, U_{t:t + K - 1}) < I(S_{t + K}; S_t, U_{t:t + K - 1}) \leq I(S_{t + K}; S_t, A_{t:t + K - 1})$. This is exactly the contradiction that we wanted to show.

Before proving Theorem 3.1, we need to cite another proposition, which is proved as Lemma 2 in Rakelly et al. [41].

Proposition H.2. Let $X, Y, Z$ be random variables. Suppose $I(Y; Z) = I(Y; X)$ and $Y \perp Z \mid X$; then $\exists p(Z|X)$ s.t.
$\forall x,\; p(Y \mid X = x) = \int p(Y \mid Z = z)\, p(Z = z \mid X = x)\, dz$.

Proof of Theorem 3.1. Based on the graphical model, it is clear that

$$
\max_{\phi, \psi} I \left(Z_{t+K}; \left[ Z_t, U_t, \ldots, U_{t+K-1} \right]\right) = I \left(S_{t+K}; \left[ S_t, A_t, \ldots, A_{t+K-1} \right]\right) \tag{16}
$$

Now define the return-to-go random variable $\bar{R}_t$ such that

$$
\bar{R}_t = \sum_{k=0}^{H-t} \gamma^k R_{t+k} \tag{17}
$$

Based on Proposition H.1, because

$$
I \left(Z_{t+K}; Z_t, U_{t:t+K-1}\right) = I \left(S_{t+K}; S_t, A_{t:t+K-1}\right)
$$

we can conclude that

$$
I \left(\bar{R}_{t+K}; Z_t, U_{t:t+K-1}\right) = I \left(\bar{R}_{t+K}; S_t, A_{t:t+K-1}\right) \tag{18}
$$

Now, applying Proposition H.2, we get

$$
\mathbb{E}_{p \left(z_t, u_{t:t+K-1} \mid S_t = s, A_{t:t+K-1} = a_{t:t+K-1}\right)} \left[ p \left(\bar{R}_t \mid Z_t, U_{t:t+K-1}\right) \right] = p \left(\bar{R}_t \mid S_t = s, A_{t:t+K-1} = a_{t:t+K-1}\right) \tag{19}
$$

As a result, when $K = 1$, for any reward function $r$ and any state-action pairs $(s_1, a_1)$, $(s_2, a_2)$ such that $\phi(s_1) = \phi(s_2)$ and $\psi(a_1) = \psi(a_2)$, we have $Q_r(s_1, a_1) = \mathbb{E}_{p(\bar{R}_t \mid S_t = s_1, A_t = a_1)}[\bar{R}_t] = \mathbb{E}_{p(\bar{R}_t \mid S_t = s_2, A_t = a_2)}[\bar{R}_t]$. This is because $p(\bar{R}_t \mid S_t = s_1, A_t = a_1) = p(\bar{R}_t \mid S_t = s_2, A_t = a_2)$ by Equation (19), as $p(z_t \mid S_t = s_1) = p(z_t \mid S_t = s_2)$ and $p(u_t \mid A_t = a_1) = p(u_t \mid A_t = a_2)$.
In the case $K > 1$: if $I(Z_{t + K}; [Z_t, U_t, \ldots, U_{t + K - 1}]) = I(S_{t + K}; [S_t, A_t, \ldots, A_{t + K - 1}])$, then by the data processing inequality, for any $1 \leq k \leq K$, $I(Z_{t + k}; [Z_t, U_t, \ldots, U_{t + k - 1}]) = I(S_{t + k}; [S_t, A_t, \ldots, A_{t + k - 1}])$, including $k = 1$. (Intuitively, this implies that if the information about the transition dynamics at a specific step is lost, the mutual information decreases as the timestep progresses, making it impossible to reach its maximum value at horizon $K$.) The same argument as above then applies.

# I Additional related work

# I.1 Visual reinforcement learning

In this paper, we focus primarily on visual-control tasks, and this section reviews relevant prior work in visual RL. For visual-control environments, representation learning has been shown to be key. Many prior works show that learning auxiliary tasks can encourage the representation to be better aligned with the task and thus enhance performance, such as reconstructing pixel observations [55], minimizing bisimulation distances [58], fitting extra value functions [4, 9], learning latent dynamics models [12, 42, 45], multi-step inverse dynamics models [32, 26], or various control-relevant objectives [27]. Model-based methods, which learn the transition dynamics based on the encoded observation, have also been shown to be successful in efficient visual RL [16, 35, 15, 56]. Data augmentation can also be used to smooth out the learned representation or value functions to improve learning performance [54, 20, 19].

# I.2 Additional works on self-supervised/contrastive learning in reinforcement learning

In §5.1, we summarize the works that apply self-supervised/contrastive learning objectives to improve the sample efficiency of visual reinforcement learning.
In this section, we discuss the additional works that apply self-supervised/contrastive learning objectives to a broader set of topics in reinforcement learning. + +Several recent works have investigated the use of self-supervised/contrastive learning objectives to pre-train representations for reinforcement learning (RL) agents. Parisi et al. [38] propose PVR, + +which leverages a pre-trained visual representation from MoCo [22] as the perception module for downstream policy learning. Xiao et al. [50] introduce VIP, a method that pre-trains visual representations using masked autoencoders. Additionally, Ma et al. [36] propose VIP, which formulates the representation learning problem as offline goal-conditioned RL and derives a self-supervised dual goal-conditioned value-function objective. + +Besides pretraining state representations, in goal-conditioned RL, the work by Eysenbach et al. [10] establishes a connection between learning representations with a contrastive loss and learning a value function. Moreover, self-supervised learning has also been employed for effective exploration in RL. Guo et al. [14] introduce BYOL-Explore, a method that leverages the self-supervised BYOL objective [13] to acquire a latent forward dynamics model and state representation. The disagreement in the forward dynamics model is then utilized as an intrinsic reward for exploration. Another approach, ProtoRL by Yarats et al. [51], presents a self-supervised framework for learning a state representation through a clustering-based self-supervised learning objective in the reward-free exploration setting. 
\ No newline at end of file diff --git a/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/images.zip b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..bdfdec72b748965706dd11156037fcfc3ce88bbc --- /dev/null +++ b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:02ff7e6c7a47fbfd21eeae0292f61a37112995d678569f386611b344414b2b97 +size 1045441 diff --git a/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/layout.json b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..15390eba9e13282dfcac7c6a2fea25f726c97846 --- /dev/null +++ b/texttttacotemporallatentactiondrivencontrastivelossforvisualreinforcementlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:923e2d320aba73252d218192e869b7814e1174b16f9aee913c20d776924264d7 +size 669471 diff --git a/varepsilonfractionalcorestabilityinhedonicgames/56ba37a6-85a9-41dd-8ea2-fa1e72804f82_content_list.json b/varepsilonfractionalcorestabilityinhedonicgames/56ba37a6-85a9-41dd-8ea2-fa1e72804f82_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ec4549f1b4a86254eb327229146d093d633f2235 --- /dev/null +++ b/varepsilonfractionalcorestabilityinhedonicgames/56ba37a6-85a9-41dd-8ea2-fa1e72804f82_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab69d71feec15205abf73ca0122916c2eb594a0d5af939b3b05ddb25f9bae4f1 +size 81341 diff --git a/varepsilonfractionalcorestabilityinhedonicgames/56ba37a6-85a9-41dd-8ea2-fa1e72804f82_model.json b/varepsilonfractionalcorestabilityinhedonicgames/56ba37a6-85a9-41dd-8ea2-fa1e72804f82_model.json new file 
mode 100644 index 0000000000000000000000000000000000000000..ebe335285d01f9b8796c3caf6d69d8c65e80fa3b --- /dev/null +++ b/varepsilonfractionalcorestabilityinhedonicgames/56ba37a6-85a9-41dd-8ea2-fa1e72804f82_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6960067c38017305f5e8f2f9329921bd3684adb18c6477ec391c690b83500c72 +size 96301 diff --git a/varepsilonfractionalcorestabilityinhedonicgames/56ba37a6-85a9-41dd-8ea2-fa1e72804f82_origin.pdf b/varepsilonfractionalcorestabilityinhedonicgames/56ba37a6-85a9-41dd-8ea2-fa1e72804f82_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..24e1ce918d8380f94d920cde43150b107c5b1dc8 --- /dev/null +++ b/varepsilonfractionalcorestabilityinhedonicgames/56ba37a6-85a9-41dd-8ea2-fa1e72804f82_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20aaafead0f8db24938f9ebee039b100a02f7cfe91355666966a4f492ecd3569 +size 505038 diff --git a/varepsilonfractionalcorestabilityinhedonicgames/full.md b/varepsilonfractionalcorestabilityinhedonicgames/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4da7e118f640834e5269e59f405cc40b0bb20704 --- /dev/null +++ b/varepsilonfractionalcorestabilityinhedonicgames/full.md @@ -0,0 +1,366 @@ +# $\varepsilon$ -fractional core stability in Hedonic Games + +Simone Fioravanti $^{1}$ Michele Flammini $^{1,2}$ Bojana Kodric $^{3}$ Giovanna Varricchio $^{2,4}$ + +1 Gran Sasso Science Institute (GSSI), L'Aquila, Italy + +2 University of Calabria, Rende, Italy + +$^{3}$ Ca' Foscari University of Venice, Venice, Italy + +4 Goethe-Universität, Frankfurt am Main, Germany + +{simone.fioravanti, michele.flammini}@gssi.it + +bojana.kodric@unive.it giovanna.varricchio@unical.it + +# Abstract + +Hedonic Games (HGs) are a classical framework modeling coalition formation of strategic agents guided by their individual preferences. According to these preferences, it is desirable that a coalition structure (i.e. 
a partition of agents into coalitions) satisfies some form of stability. The most well-known and natural such notion is arguably core-stability. Informally, a partition is core-stable if no subset of agents would like to deviate by regrouping in a so-called core-blocking coalition. Unfortunately, core-stable partitions seldom exist, and even when they do, it is often computationally intractable to find one. To circumvent these problems, we propose the notion of $\varepsilon$ -fractional core-stability, where at most an $\varepsilon$ -fraction of all possible coalitions is allowed to core-block. It turns out that such a relaxation may guarantee both existence and polynomial-time computation. Specifically, we design efficient algorithms returning an $\varepsilon$ -fractional core-stable partition, with $\varepsilon$ exponentially decreasing in the number of agents, for two fundamental classes of HGs: Simple Fractional and Anonymous. From a probabilistic point of view, since the definition of the $\varepsilon$ -fractional core is equivalent to requiring that a uniformly sampled coalition core-blocks with probability lower than $\varepsilon$ , we further extend the definition to handle more complex sampling distributions. Along this line, when valuations have to be learned from samples in a PAC-learning fashion, we give positive and negative results on which distributions allow the efficient computation of outcomes that are $\varepsilon$ -fractional core-stable with arbitrarily high confidence.

# 1 Introduction

Game-theoretic models of coalition formation have drawn significant interest in recent years because of their ability to capture meaningful properties of multi-agent interactions. In Hedonic Games (HGs) [2, 16], agents gather together without any form of externality, that is, minding only the internal composition of their groups.
A solution is then a partition of the agents (or coalition structure) having some desirable properties, which typically stand for stability against some kinds of deviations. Among the many notions existing in the literature, one of the most fundamental is core stability [10, 31]. A partition is said to be core-stable or in the core if no subset of agents would benefit by regrouping and forming a so-called core-blocking coalition. Unfortunately, while being a quite natural requirement, it is notably very difficult to achieve [5, 25], even under the usual unrealistic assumption of full knowledge of agents' preferences. Furthermore, for the few classes of HGs in which the non-emptiness of the core has been established, a stable partition is usually hard to compute. Nonetheless, due to its significance, it is still desirable to find an approximation of the core. + +The core was first considered in the setting of cooperative game theory, where the value of a coalition has to be allocated fairly between its members. In this scenario, the most well-known approximation to the core is the so-called strong $\varepsilon$ -core [26], in which a blocking coalition increases the total value allocated to its members by at least $\varepsilon$ . A derived notion is the one of least-core [22], i.e. the (strong) $\varepsilon$ -core associated to the smallest possible value of $\varepsilon$ guaranteeing its existence. An adaptation of these concepts to the context of HGs has been proposed in [17], where the authors define $k$ -improvement core stability, requiring each member of a blocking coalition to increase her utility by a multiplicative factor strictly greater than $k \geq 1$ . The same authors also propose to bound by a value $q \geq 2$ the number of agents allowed to form a blocking coalition, obtaining what they call $q$ -size core stability. 
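To make these notions concrete, here is an illustrative pure-Python sketch of ours, not from the literature above, of the three blocking tests, where each agent's utility for the candidate coalition and for her current coalition in $\pi$ is given as a positive number (positivity matters for the multiplicative $k$-improvement test):

```python
def core_blocks(coalition_utils, current_utils):
    """C core-blocks pi iff every member strictly prefers C."""
    return all(vc > vp for vc, vp in zip(coalition_utils, current_utils))

def k_improvement_blocks(coalition_utils, current_utils, k):
    """Each member must improve by a multiplicative factor strictly
    greater than k >= 1 (assumes positive utilities)."""
    return all(vc > k * vp for vc, vp in zip(coalition_utils, current_utils))

def q_size_blocks(coalition_utils, current_utils, q):
    """Only coalitions with at most q >= 2 members are allowed to block."""
    return len(coalition_utils) <= q and core_blocks(coalition_utils, current_utils)

# Three agents comparing a candidate coalition against their current ones.
vc = [3.0, 4.0, 2.5]   # utilities in the candidate coalition
vp = [1.0, 2.0, 2.0]   # utilities in the current partition
# This coalition core-blocks and 1.2-improvement-blocks, but it does not
# 2-improvement-block, and it is ignored under a size bound of q = 2.
```

Both relaxations weaken what it means for a single coalition to block; the notion we introduce next instead bounds how many coalitions may block in the plain sense.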
However, the above stability concepts might still be fragile, especially when many coalitions could benefit by deviating anyway, even if not by the required approximation or size factors. In fact, agents might still be inclined to pursue even subtle improvements. Interestingly, works like [11] experimentally show that, when sampling random HGs, the fraction of instances with an empty core is small and decreases significantly as the number of agents increases. This could be related to the fact that, even in instances with an empty core, many or almost all coalitions do not core-block. Consequently, the existence of outcomes having a relatively small number of blocking coalitions appears plausible.

In this work, we investigate the concept of $\varepsilon$ -fractional core-stable partitions, i.e., partitions that can be core-blocked by only an $\varepsilon$ -fraction of all possible coalitions. This notion bears some similarities with PAC-stabilizability as defined in [27]. In that work, the authors lifted for the first time the assumption of complete information in HGs and considered learning preferences and core-stable partitions from samples, employing the probably approximately correct (PAC) learning framework [30]. Specifically, an HG class is PAC-stabilizable if, after seeing a certain number of samples, it is possible either to determine that the core is empty or to return a partition that has probability at most $\varepsilon$ of being core-blocked by a coalition sampled from the same distribution.

Our Contribution. In this paper, we introduce and study the $\varepsilon$ -fractional core stability notion in Hedonic Games. We specifically investigate this concept for two of the most fundamental classes of HGs: Simple fractional and anonymous HGs.

Roughly speaking, a partition of agents is $\varepsilon$ -fractional core-stable, $\varepsilon$ -FC in short, if at most an $\varepsilon$ -fraction of coalitions may core-block it.
Such a definition has a natural probabilistic interpretation: Indeed, for an $\varepsilon$ -FC partition, $\varepsilon$ is an upper bound on the probability of drawing a core-blocking coalition uniformly at random. Along this line, we broaden the definition to any distribution over coalitions by requiring that the probability of sampling a blocking coalition is at most $\varepsilon$ . Notably, if $\varepsilon = 0$ , we are essentially requiring our solution to be core-stable; hence, it is in general impossible to prove the existence of, or efficiently compute, $\varepsilon$ -FC outcomes for values of $\varepsilon$ sufficiently close to 0. Instead, our aim is to efficiently compute $\varepsilon$ -FC solutions for values of $\varepsilon$ as small as possible, proving in turn also their existence.

Unfortunately, as a first result, we prove that when arbitrary sampling distributions are allowed, an $\varepsilon$ -FC partition may fail to exist even for constant values of $\varepsilon$ . On the positive side, for the aforementioned classes of HGs, we show that it is possible to efficiently compute an $\varepsilon$ -FC solution under the uniform distribution, with $\varepsilon$ sub-exponentially small in the number of agents. In particular, in the case of anonymous HGs, we present an algorithm computing an $\varepsilon$ -FC solution under the class of $\lambda$ -bounded distributions, where the ratio of the probabilities of extracting any two coalitions is bounded by the parameter $\lambda$ . Notably, this class includes the uniform distribution as the special case $\lambda = 1$ and can be considered a suitable extension of it. Our algorithms, besides guaranteeing sub-exponentially small values of $\varepsilon$ , are designed to handle possibly incomplete knowledge of agents' preferences.
In fact, in case preferences are unknown, the algorithms can use the sampling distribution to learn them in a PAC-learning fashion while maintaining the very same guarantees on $\varepsilon$ with high confidence. + +# 2 Related Work + +Core stability and Hedonic Games. Hedonic Games have captured considerable research attention from the scientific community over the years. One of the main goals in their study is understanding how to gather agents in a way they won't desire to modify the outcome. For this reason, several stability notions have been considered and studied such as core and Nash stability, or individual rationality. We refer to [2] for a comprehensive overview of the subject. Core stability is a fundamental + +concept in multi-agent systems, first considered in the context of cooperative games [13, 19]. Its properties and the related complexity have been largely investigated in HGs [1, 10, 25, 28] and beyond, such as in house allocation [24], markets [8, 9] and many other settings. Recently Donahue and Kleinberg [14, 15] have modeled federated learning as an HG, where agents evaluate federating coalitions according to the expected mean squared error of the model they obtain by sharing data with the coalition's members. Works like [4, 12] have used core stability to study payoff allocation among team members in collaborative multi-agent settings. + +PAC-stabilizability. Our definition of $\varepsilon$ -fractional core stability is also strictly related to PAC stabilizability as defined in [27]. This notion was further investigated in several papers. Igarashi et al. [20] studied the case of HGs with underlying interaction networks. Jha and Zick [21] defined a general framework for learning game-theoretic solution concepts from samples. Trivedi and Hemachandra [29] considered learning and stabilizing HGs with noisy preferences. Recently, Fioravanti et al. 
[18] proposed to relax the requirements of PAC-stabilizability by considering only restricted distributions and showed stabilizability of $\mathcal{W}$ -games under $\lambda$ -bounded distributions.

# 3 Preliminaries

In this section, we present the fundamental definitions for our work. Given a positive integer $k$ , we use $[k]$ to denote the set $\{1, \ldots, k\}$ .

# 3.1 Hedonic Games

Let $N$ be a set of $n$ agents. We call any non-empty subset $C \subseteq N$ a coalition and any coalition of size one a singleton. We denote by $\succ_{i}$ any binary preference relation of agent $i$ over all coalitions containing $i$ , which is reflexive, transitive, and complete. A Hedonic Game is then a pair $H = (N, \succ)$ , where $\succ = (\succ_{1}, \ldots, \succ_{n})$ is a preference profile, i.e., the collection of all agents' preferences.

Throughout this work, we will assume that preferences are expressed as real numbers by means of valuation functions $v_{i}: 2^{N} \to \mathbb{R}$ for each $i \in N$ . In other words, given two coalitions $C, C'$ containing agent $i$ , $v_{i}(C) \geq v_{i}(C')$ if and only if $C \gtrsim_{i} C'$ . We will denote by $\vec{v} = (v_{1}, \ldots, v_{n})$ the collection of agents' valuations; we assume that $v_{i}$ is not defined for $C$ not containing $i$ and write $v_{i}(C) = \emptyset$ . A coalition structure $\pi$ is a partition of agents into coalitions, and $\pi(i)$ denotes the coalition $i$ is assigned to. We write $v_{i}(\pi) = v_{i}(\pi(i))$ to denote the utility $i$ gets in her assigned coalition $\pi(i)$ inside $\pi$ .

With this paper, we aim to relax the concept of core stability, defined as follows.

Definition 3.1. Given a coalition structure $\pi$ , a coalition $C \subseteq N$ is said to core-block $\pi$ if, for each $i \in C$ , $v_{i}(C) > v_{i}(\pi)$ . A coalition structure $\pi$ is said to be core-stable if no coalition $C \subseteq N$ core-blocks it.

Simple fractional Hedonic Games.
In fractional HGs (FHGs), first introduced in [3], every agent $i \in N$ assigns a value $v_{i}(j)$ to any other agent $j \neq i$ , and her evaluation of any coalition $C \ni i$ is the average value ascribed to the members of $C \setminus \{i\}$ . Formally: $v_{i}(C) = \frac{\sum_{j \in C \setminus \{i\}} v_{i}(j)}{|C|}$ . An FHG is said to be simple if $v_{i}(j) \in \{0,1\}$ for each $i, j \in N$ , $i \neq j$ . A natural representation of these games is a directed, unweighted graph $G = (V,E)$ , where $V = N$ and $(i,j) \in E$ if and only if $v_{i}(j) = 1$ . Despite their name, Aziz et al. [3] show that simple FHGs capture the complexity of the entire class w.r.t. core-stability: In fact, deciding if a core-stable partition exists is $\Sigma_2^p$ -complete.

Anonymous Hedonic Games. An HG is said to satisfy anonymity [6, 10] if agents evaluate coalitions on the sole basis of their size, i.e., $v_{i}(C) = v_{i}(C^{\prime})$ for any $i \in N$ and any $C, C^{\prime}$ containing $i$ such that $|C| = |C^{\prime}|$ . When considering anonymous HGs, we will assume $v_{i}: [n] \to \mathbb{R}$ . An anonymous HG instance is said to be single-peaked if there exists a permutation $(s_1, \ldots, s_n)$ of $\{1, \ldots, n\}$ for which every agent $i \in N$ admits a peak $p(i) \in [n]$ such that $h < k \leq p(i)$ or $h > k \geq p(i)$ imply $v_{i}(s_{k}) \geq v_{i}(s_{h})$ . Roughly speaking, the greater the distance from the peak in the given ordering, the lower the valuation for the coalition size. If the permutation is the identity function, i.e., the ordering is the usual one over $\mathbb{N}$ , we say that the preference is single-peaked in the natural ordering. For anonymous HGs, deciding if a core-stable partition exists has been shown to be NP-complete [5].

# 3.2 Epsilon-core, learning, and computation efficiency

Here we formalize our notion of $\varepsilon$ -fractional core stability.

Definition 3.2.
Given a parameter $\varepsilon \in [0,1]$, we say that a partition $\pi$ is $\varepsilon$-fractional core-stable, $\varepsilon$-FC in short, if at most an $\varepsilon$-fraction of all possible coalitions are core-blocking for $\pi$, i.e.,

$$
\frac{\#\text{ of core-blocking coalitions for } \pi}{\#\text{ of all possible coalitions}} < \varepsilon .
$$

This definition has a natural probabilistic interpretation: A partition is $\varepsilon$-FC if, by sampling u.a.r. a coalition, the probability of sampling a core-blocking one is at most $\varepsilon$. Such an interpretation inspired the following extension.

Definition 3.3. Given a parameter $\varepsilon \in [0,1]$, we say that a partition $\pi$ is $\varepsilon$-fractional core-stable with respect to a distribution $\mathcal{D}$ over $2^{N}$ if

$$
\operatorname*{Pr}_{C \sim \mathcal{D}}\left[C \text{ core-blocking for } \pi\right] < \varepsilon .
$$

We assume that agents' preferences may be unknown and need to be learned by sampling coalitions from a distribution $\mathcal{D}$. Consequently, our algorithms will have a learning phase, where preferences are learned by observing $m$ samples, and a computation phase, where a stable outcome is computed upon the learned preferences. We say that an algorithm exactly learns a family $\mathcal{T} \subseteq 2^{N}$ if, after seeing a sample $\mathcal{S}$, it is able to determine the real valuation of any agent for every coalition in $\mathcal{T}$. Note that this does not necessarily mean that $\mathcal{T} \subseteq \mathcal{S}$; instead, by knowing the properties of the considered HG class, it must be possible to derive complete information on $\mathcal{T}$ from $\mathcal{S}$. As an example, consider the class of anonymous HGs: If $\mathcal{T}$ is the family of coalitions of size $s$, in order to exactly learn it, for each agent $i$ there must exist $S \in \mathcal{S}$ such that $i \in S$ and $|S| = s$.
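The coverage condition in this anonymous-HGs example can be phrased as a direct check over the sample. A minimal Python sketch (the function and variable names are ours, not from the paper):

```python
from itertools import product

def exactly_learns_sizes(samples, agents, sizes):
    """Check whether a sample of coalitions suffices to exactly learn, for
    every agent, the anonymous valuation of every coalition size in `sizes`:
    each agent must appear in at least one sampled coalition of each size."""
    return all(
        any(i in S and len(S) == s for S in samples)
        for i, s in product(agents, sizes)
    )

samples = [{1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]
print(exactly_learns_sizes(samples, {1, 2, 3}, {2}))  # True: every agent appears in a size-2 coalition
print(exactly_learns_sizes(samples, {1, 2, 3}, {1}))  # False: no singleton was sampled
```

Note that, as in the text, coverage is about (agent, size) pairs rather than about the sampled sets themselves containing the target family.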
We aim to design polynomial-time algorithms for computing $\varepsilon$-FC solutions. However, to retrieve enough information on the preferences we may require a large number of samples. Hence, we say that an algorithm is efficient if its computation phase requires polynomial time in the instance size, while the learning phase is polynomial in $n$, $1/\varepsilon$, and $\log 1/\delta$, where $\delta$ is a confidence parameter. Clearly, if the valuations are known in advance, an efficient algorithm requires polynomial time. Moreover, our algorithms will compute an $\varepsilon$-FC partition with confidence $1 - \delta$, and the solution will turn out to be exact as soon as the true agents' preferences are given as input.

# 3.3 Chernoff Bound and $\lambda$-bounded distributions

The analysis of our algorithms is mainly probabilistic and will strongly rely on the classical Chernoff bound (see, e.g., Chapter 4 in [23]), which we summarize hereafter. Let $X = \sum_{i=1}^{n} X_i$ be a sum of independent Poisson trials with mean $\mu$; then, for any constant $b \in (0,1)$, it holds:

$$
\Pr\left[X \geq (1 + b)\mu\right] \leq e^{-\mu b^{2}/3} \quad \text{and} \quad \Pr\left[X \leq (1 - b)\mu\right] \leq e^{-\mu b^{2}/2}. \tag{1}
$$

As already mentioned, we will study $\varepsilon$-fractional core stability subject to distributions $\mathcal{D}$ over $2^{N}$. We will see that, while for general distributions it is not possible to guarantee good enough values of $\varepsilon$, it is indeed possible for $\lambda$-bounded distributions. This class of distributions was first introduced in a learning context by [7], where general continuous distributions over bounded subsets of $\mathbb{R}^d$ are considered. The idea is that a distribution in this class has constraints (parameterized by the value $\lambda$) on how much the probability density function can vary.

Definition 3.4.
A distribution $\mathcal{D}$ over $2^{N}$ is said to be $\lambda$-bounded if there exists $\lambda \geq 1$ such that, for every two coalitions $C_1, C_2$, it holds that

$$
\operatorname*{Pr}_{C \sim \mathcal{D}}\left[C = C_{1}\right] \leq \lambda \operatorname*{Pr}_{C \sim \mathcal{D}}\left[C = C_{2}\right].
$$

A straightforward consequence of this definition is that no coalition has null probability of being sampled. Moreover, setting $\lambda = 1$ yields the uniform distribution over $2^{N}$, while, as $\lambda \to +\infty$, every distribution can be considered $\lambda$-bounded up to an approximation factor. Thus, in order to keep the original intended purpose of this definition, in the rest of this work we will consider $\lambda$ to be constant with respect to $n$.

The following lemma, an adaptation of Lemma 4 in [7], will be useful in our computations.

Lemma 3.5. Let distribution $\mathcal{D}$ be $\lambda$-bounded. Let $\mathcal{F} \subseteq 2^{N}$ be a family of coalitions and let $a = |\mathcal{F}| / 2^n$. Then, the following inequalities hold:

$$
\frac{a}{a + \lambda(1 - a)} \leq \Pr_{C \sim \mathcal{D}}[C \in \mathcal{F}] \leq \frac{\lambda a}{\lambda a + 1 - a}.
$$

# 4 Impossibility results

In this section, we explore the boundaries of feasible $\varepsilon$ for simple fractional and anonymous HGs according to general, $\lambda$-bounded, and uniform distributions.

Simple fractional Hedonic Games. We start by showing that, when dealing with arbitrary distributions, an $\varepsilon$-FC partition may fail to exist even for constant values of $\varepsilon$, i.e., values not decreasing with the number of agents. The informal intuition is that, since the distribution is arbitrary, one may choose it adversarially with respect to a partition having an empty core.

Proposition 4.1.
There exists a distribution $\mathcal{D}$ and a simple fractional HG instance such that no $\varepsilon$-fractional core-stable solution w.r.t. $\mathcal{D}$ exists for $\varepsilon \leq 1/2^{40}$.

Proof. Simple fractional HGs have been shown to possibly have an empty core [3]. In particular, the authors provide an instance $\mathcal{I}$ with 40 agents having an empty core. We can extend this instance to an instance $\mathcal{I}'$, with $N' = [n]$ being the set of agents, still having an empty core. Let $N = \{1, \ldots, 40\}$ be the set of agents in $\mathcal{I}$. Their mutual preferences remain the same, while they assign value 0 to all the other agents in $N' \setminus N$. In turn, the agents in $N' \setminus N$ have mutual preferences equal to 1 and assign value 0 to all the agents in $N$. $\mathcal{I}'$ has an empty core and, in particular, for any coalition structure $\pi$ there exists a core-blocking coalition in $2^{N} \cup \{\{41, \ldots, n\}\} \setminus \{\emptyset\}$. In fact, on the one hand, the agents in $\{41, \ldots, n\}$ will form a blocking coalition whenever we return a partition where they are not in the same coalition. On the other hand, if $\{41, \ldots, n\}$ is a coalition of the considered partition, no matter how the agents in $N$ are partitioned, there exists a blocking coalition in $2^{N} \setminus \{\emptyset\}$ because the instance $\mathcal{I}$ has an empty core. In conclusion, by choosing $\mathcal{D}$ as the uniform distribution over $2^{N} \cup \{\{41, \ldots, n\}\} \setminus \{\emptyset\}$, for any coalition structure $\pi$ we have $\operatorname*{Pr}_{C \sim \mathcal{D}}[C$ blocking for $\pi] \geq 1/2^{40}$ and the thesis follows.

Similarly, by applying Lemma 3.5 we can easily derive the following generalization.

Corollary 4.2.
Given a parameter $\lambda$, there exists a $\lambda$-bounded distribution such that in simple fractional HGs no $\varepsilon$-fractional core-stable solution exists for $\varepsilon < \frac{\lambda}{2^{40}(\lambda - 1) + 2^n}$.

Anonymous Hedonic Games. In the following, we consider single-peaked anonymous HGs. Clearly, all the provided results hold for the more general class of anonymous HGs. Following the same approach as above, and knowing that for anonymous HGs there exists an instance with seven agents and single-peaked preferences having an empty core [6], we can show the following.

Proposition 4.3. There exists a distribution $\mathcal{D}$ and an anonymous (single-peaked) HG instance such that for every $\varepsilon \leq 1/2^7$ there is no $\varepsilon$-fractional core w.r.t. $\mathcal{D}$.

Corollary 4.4. Given a parameter $\lambda$, there exists a $\lambda$-bounded distribution such that for $\varepsilon < \frac{\lambda}{2^7(\lambda - 1) + 2^n}$ no $\varepsilon$-fractional core-stable solution exists in anonymous single-peaked HGs.

As a consequence, both for simple fractional and anonymous HGs, under uniform distributions, where $\lambda = 1$, and constant values of $\lambda$, the provided bound becomes exponentially small. Hence, in the rest of this paper, we will be focusing on these specific classes of distributions.

# 5 Simple Fractional Hedonic Games

In this section, we present an algorithm returning an $\varepsilon$-FC partition for simple FHGs with $\varepsilon$ exponentially small in the number of agents, and prove its correctness, as summarized in the following claim.

Theorem 5.1. Given a parameter $\delta \in (0,1)$, for any simple FHG instance and with confidence $1 - \delta$, we can efficiently compute an $\varepsilon$-fractional core-stable partition for every $\varepsilon \geq 2^{-\Omega(n^{1/3})}$.
# 5.1 Computing an $\varepsilon$-fractional core-stable partition

We will be working only with the uniform distribution over all possible coalitions, denoted by $U(2^{N})$.

The proof of the theorem is quite involved and needs several steps. To guide the reader, the section is organized as follows: First, we provide a high-level explanation of Algorithm 1 for computing an $\varepsilon$-FC partition, then we prove various parts of the claim as separate lemmas, and finally, we assemble everything together to prove Theorem 5.1.

As a first step, Algorithm 1 runs a learning phase where the exact valuations are computed with confidence $1 - \delta$. Then, it starts the computation phase, building the graph representation $G$ based on the learned preferences. In order to compute an $\varepsilon$-FC partition, the algorithm has to pay attention to the density of $G$. In particular, the main idea is to distinguish between instances based on the number of nodes having high out-degrees, treating the two arising cases separately. To this aim, observe first that most (sampled) coalitions have size close to $\frac{n}{2}$. As a consequence, if many nodes have low out-degree, a matching-like partition would be difficult to core-block, since a sampled coalition $C$ will often contain very few neighbors of such nodes, compared to its size; thus, such nodes would have a lower utility in $C$. On the other hand, if a sufficient amount of nodes have a high out-degree, we can form a clique large enough to make any partition containing it hard to core-block. In fact, with high probability, the sampled coalition $C$ would contain at least one such clique node, not connected to all the other agents in $C$; that node would therefore have a lower utility moving to $C$.
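The size-concentration fact underpinning both cases can be checked numerically: a coalition drawn u.a.r. from $2^{N}$ contains each agent independently with probability $1/2$, so its size is binomial and sharply concentrated around $n/2$. A small Monte Carlo sketch (all names are ours):

```python
import random

def sample_coalition(n, rng):
    # C ~ U(2^N): include each agent independently with probability 1/2
    return [i for i in range(n) if rng.random() < 0.5]

def far_from_half_fraction(n, slack, trials, seed=0):
    """Estimate Pr[ | |C| - n/2 | > slack * n ] for C ~ U(2^N)."""
    rng = random.Random(seed)
    far = sum(
        abs(len(sample_coalition(n, rng)) - n / 2) > slack * n
        for _ in range(trials)
    )
    return far / trials

# Sizes concentrate sharply around n/2, as the Chernoff bound (1) predicts:
# for n = 200, deviations beyond n/4 are essentially never observed.
print(far_from_half_fraction(n=200, slack=0.25, trials=5000))
```

This is exactly why a coalition of size at most $3n/4$ (or at least $n/4$) is overwhelmingly likely under $U(2^{N})$, the fact exploited by both branches of the algorithm.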
First, we focus on the learning aspect and show that it is possible to learn the exact valuations in simple FHGs upon sampling a number of sets polynomial in $n$ and $\log \frac{1}{\delta}$.

Lemma 5.2. By sampling $m \geq 16\log \frac{n}{\delta} + 4n$ sets from $U(2^{N})$ it is possible to learn exactly the valuation functions $v_{1}, \ldots, v_{n}$ with confidence $1 - \delta$.

The following definition is crucial for proving the main result.

Definition 5.3. Given a partition $\pi$, an agent $i \in N$ is green with respect to $\pi$ if

$$
\Pr_{C \sim U\left(2^{N}\right)}\left[v_{i}(C) > v_{i}(\pi) \mid i \in C\right] \leq 2^{-\Omega\left(n^{1/3}\right)}. \tag{2}
$$

Since a coalition containing a green agent w.r.t. $\pi$ is unlikely to core-block $\pi$, if we manage to show that there are enough green agents in the partition returned by Algorithm 1, then we also directly prove its $\varepsilon$-fractional core-stability, as in this case any randomly drawn coalition contains at least one green agent with high probability. From now on, let $\phi = |\{i \text{ s.t. } d_i \leq n - 31n^{2/3}\}|$ be as defined in Algorithm 1. As already informally explained above, the algorithm follows two different procedures based on whether $\phi$ is at least or smaller than $\frac{n^{1/3}}{62}$. In both cases, the set Gr defined in Algorithm 1 is meant to contain exclusively green agents w.r.t. the returned partition. We will start by proving that this is indeed true if $\phi < \frac{n^{1/3}}{62}$. Since we will never use in-degrees, from now on we will simply use the term degree in place of out-degree.

We first prove the correctness of the easier case of the algorithm, in the else part of the if statement (line 19).

Lemma 5.4. Let $\phi < \frac{n^{1/3}}{62}$ and $\pi$ be the partition returned by Algorithm 1. Then Gr contains only green agents w.r.t. $\pi$.

Proof.
```
Algorithm 1: Stabilizing Simple FHGs
Input: N, S = ⟨(S_j, v(S_j))⟩_{j=1}^{m}
Output: π, a 2^(−Ω(n^(1/3)))-fractional core-stable partition of N
 1  Compute v and build the corresponding graph G
 2  π ← {{i} | i ∈ N}
 3  Let d_i be the out-degree of each i in G
 4  φ ← |{i s.t. d_i ≤ n − 31·n^(2/3)}|
 5  Gr ← ∅
 6  if φ ≥ n^(1/3)/62 then
 7      Sort agents in non-decreasing order of out-degree
 8      H ← [1, …, n^(1/3)/62]
 9      count ← 1
10      while count ≤ n^(1/3)/124 do
11          Select the first remaining agent i ∈ H in the order
12          Gr ← Gr ∪ {i}
13          Let F_i be a set of ⌈2·d_i/(n − d_i)⌉ neighbors of i, giving priority to singleton agents in N \ H
14          Let C_i = {i} ∪ ⋃_{j ∈ F_i} π(j)
15          π ← π ∪ {C_i} \ {π(j) | j ∈ C_i}
16          H ← H \ (F_i ∪ {i})
17          count ← count + 1
18      All remaining singletons π(i) = {i} in π are grouped together
19  else
20      F ← N
21      for count = 1, …, n^(1/3) do
22          Pick i ∈ F \ Gr of maximal out-degree
23          Delete from F all agents not in the neighborhood N_i of i
24          Gr ← Gr ∪ {i}
25      π ← {F, N \ F}
26  return π
```

Let us say that an agent $i \in N$ has high degree if $d_i \geq n - 31n^{2/3}$. Observe that, if such an agent is picked at any iteration of the algorithm, then at most $31n^{2/3}$ agents are deleted from $F$ at that iteration. By hypothesis, there are at least $n - \frac{n^{1/3}}{62}$ agents with high degree. Since they are picked in non-increasing degree order, after $t$ iterations the agents left in $F$ with high degree are at least $n - \frac{n^{1/3}}{62} - t \cdot 31n^{2/3}$, which is strictly positive for $t \leq \frac{n^{1/3}}{124}$. This means that all agents in Gr
have high degree, so that $|F| \geq n - 31n^{2/3} \cdot \frac{n^{1/3}}{124} = \frac{3n}{4}$. Moreover, since by construction at the end of the for loop each $i \in \mathrm{Gr}$ is connected to all the other agents in $F$, $v_i(F) \geq 1 - \frac{4}{3n}$, which is at least the utility $i$ would have in a clique of $\geq \frac{3n}{4}$ agents. Therefore, any coalition of size at most $\frac{3n}{4}$ containing $i$ cannot core-block $\pi$. As a consequence, for each $i \in \mathrm{Gr}$ it holds that:

$$
\operatorname*{Pr}_{C \sim U(2^{N})}\left[v_{i}(C) > v_{i}(F) \mid i \in C\right] \leq \operatorname*{Pr}_{C \sim U(2^{N})}\left[|C| > \frac{3n}{4} \,\Big|\, i \in C\right] \leq e^{-\frac{n}{24}} < 2^{-\Omega(n^{1/3})},
$$

where we used Equation (1) and the fact that $n > 1$.

The procedure returning a partition in the complementary case $\phi \geq \frac{n^{1/3}}{62}$ is a bit more involved, and we divide the proof into different cases according to the degree of the agents added to Gr.

Lemma 5.5. Let $\phi \geq \frac{n^{1/3}}{62}$ and let $\pi$ be the partition returned by Algorithm 1 in this case. Then the following statements hold:

(i) $H$ is never empty when executing line 11 of the while loop of Algorithm 1.

(ii) each agent $i\in \mathrm{Gr}$ is green.

Proof of Theorem 5.1. We are now able to assemble all the above claims together to prove the main theorem. According to the above lemmas, in both the cases of Algorithm 1, i.e., $\phi < \frac{n^{1/3}}{62}$ and $\phi \geq \frac{n^{1/3}}{62}$, the nodes put in Gr are green and $\gamma := |\mathrm{Gr}| \geq n^{1/3}/124$.
It remains to show that the output $\pi$ of Algorithm 1 is a $2^{-\Omega(n^{1/3})}$-fractional core-stable partition:

$$
\begin{aligned}
\Pr_{C \sim U(2^{N})}[C \text{ blocks } \pi] &\leq \Pr_{C \sim U(2^{N})}[C \cap \mathrm{Gr} = \emptyset] + \Pr_{C \sim U(2^{N})}[C \cap \mathrm{Gr} \neq \emptyset \wedge C \text{ blocks } \pi] \\
&\leq \frac{2^{n-\gamma}}{2^{n}} + \left(1 - \frac{2^{n-\gamma}}{2^{n}}\right)\frac{1}{2^{n^{1/3}}} \\
&\leq \frac{1}{2^{\gamma}} + \frac{1}{2^{n^{1/3}}} \leq \frac{1}{2^{\frac{n^{1/3}}{124}-1}},
\end{aligned}
$$

which concludes the proof.

# 6 Anonymous Hedonic Games

In this section, we discuss the efficient computation of an $\varepsilon$-FC partition in the case of anonymous HGs, and focus on the more general class of $\lambda$-bounded distributions.

The main result of this section is given by the following:

Theorem 6.1. Given a $\lambda$-bounded distribution $\mathcal{D}$ and a parameter $\delta \in (0,1)$, for any anonymous HG instance and with confidence $1 - \delta$, we can efficiently compute an $\varepsilon$-fractional core-stable partition for every $\varepsilon \geq \frac{4\lambda}{2^{c(\lambda)\sqrt[3]{n}}}$, where $c(\lambda) = \frac{1}{\sqrt{13(\lambda + 1)}}$.

Moreover, when agents' preferences are single-peaked, we can refine the bound on $\varepsilon$ as follows.

Theorem 6.2. Given a $\lambda$-bounded distribution $\mathcal{D}$ and a parameter $\delta \in (0,1)$, for any single-peaked anonymous HG instance and with confidence $1 - \delta$, we can efficiently compute an $\varepsilon$-fractional core-stable partition for every $\varepsilon \geq \frac{4\lambda}{2^{n/4}}$.
# 6.1 Distribution over coalition sizes

Since anonymous preferences are uniquely determined by coalition sizes, it is important to establish how the $\lambda$-bounded distribution $\mathcal{D}$ impacts the probability of sampling coalitions of a given size.

Let $X: 2^{N} \to [n] \cup \{0\}$ be the random variable corresponding to the size of $C \sim \mathcal{D}$, i.e., $X(C) := |C|$, and $\mu := E[X]$ be its average. The following lemma shows some useful properties of $X$.

Lemma 6.3. Let $X$ be the random variable representing the size of $C \sim \mathcal{D}$, with $\mathcal{D}$ $\lambda$-bounded, and let $\mu \coloneqq E[X]$. Then,

$$
\frac{n}{\lambda + 1} \leq \mu \leq \frac{\lambda n}{\lambda + 1}
$$

and, for any $0 < \varepsilon < 1$,

$$
\operatorname*{Pr}_{\mathcal{D}}\left[|X - \mu| \geq \Delta \cdot \mu\right] \leq \frac{\varepsilon}{2},
$$

where $\Delta = \sqrt{\frac{3(\lambda + 1)\log\frac{4}{\varepsilon}}{n}}$.

Let us denote by $I_{\mathcal{D}}(\varepsilon) \subseteq [n]$ the open interval $((1 - \Delta)\mu, (1 + \Delta)\mu)$, where $\mu$ is the expected value of the size of a coalition sampled from $\mathcal{D}$. Lemma 6.3 implies that with probability at least $1 - \varepsilon/2$ we draw from $\mathcal{D}$ a coalition whose size is in $I_{\mathcal{D}}(\varepsilon)$.

# 6.2 Properties of $I_{\mathcal{D}}(\varepsilon)$

The interval $I_{\mathcal{D}}(\varepsilon)$ will play a central role in the computation of an $\varepsilon$-FC partition. However, under the uncertainty of a distribution $\mathcal{D}$, our algorithms need to i) estimate it (in fact, $I_{\mathcal{D}}(\varepsilon)$ is uniquely determined by the value $\mu$, which is unknown) and ii) learn exactly the agents' valuations for coalitions whose size is in $I_{\mathcal{D}}(\varepsilon)$. The technical proofs of this subsection are deferred to the Appendix.
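As a sanity check, the mean bounds of Lemma 6.3 can be verified exactly on a small artificial $\lambda$-bounded distribution. The construction below (entirely ours, not from the paper) draws unnormalized weights from $[1, \lambda]$, which guarantees $\lambda$-boundedness in the sense of Definition 3.4:

```python
import random
from itertools import combinations

def lambda_bounded_dist(n, lam, seed=0):
    """Build an artificial λ-bounded distribution over 2^[n]: unnormalized
    weights are drawn from [1, lam], so the ratio between any two coalition
    probabilities is at most lam (Definition 3.4)."""
    rng = random.Random(seed)
    subsets = [frozenset(c)
               for k in range(n + 1)
               for c in combinations(range(n), k)]
    weights = [rng.uniform(1, lam) for _ in subsets]
    total = sum(weights)
    return subsets, [w / total for w in weights]

n, lam = 10, 3.0
subsets, probs = lambda_bounded_dist(n, lam)
mu = sum(len(S) * p for S, p in zip(subsets, probs))  # E[|C|], computed exactly
print(n / (lam + 1) <= mu <= lam * n / (lam + 1))     # True: mean bounds of Lemma 6.3
```

Since the weight ratio never exceeds $\lambda$, the first claim of Lemma 6.3 must hold for any such distribution, which is what the final check confirms.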
Since $\mu$ is unknown, we will work with a superset of $I_{\mathcal{D}}(\varepsilon)$ which can be estimated by simply knowing that $\mathcal{D}$ is $\lambda$-bounded. Let $\mathcal{S} = \{S_j\}_{j=1}^m$ be a sample of size $m$ drawn from $\mathcal{D}$, and let $\bar{\mu} = \frac{1}{m}\sum_{j}|S_j|$ be the frequency estimator. By the Hoeffding bound (see Theorem 4.13 in [23]), it is possible to show that we can estimate $\mu$ with high confidence, as stated in the following.

Lemma 6.4. Given any two constants $\alpha > 0$, $\delta < 1$, if $m > \frac{n^2\log 2/\delta}{2\alpha^2}$, then:

$$
\Pr_{\mathcal{S} \sim \mathcal{D}^{m}}\left[|\bar{\mu} - \mu| < \alpha\right] \geq 1 - \delta. \tag{3}
$$

As a consequence, we can determine a good superset of $I_{\mathcal{D}}(\varepsilon)$ as the interval with extreme points $(1 \pm \Delta)(\bar{\mu} \pm \alpha)$.

We now turn our attention to the exact learning of $I_{\mathcal{D}}(\varepsilon)$.

Lemma 6.5. By sampling $m = \frac{2\lambda(1 + \lambda)n^2\log n^2/\delta}{\varepsilon}$ sets from $\mathcal{D}$ it is possible to learn exactly the valuations in $I_{\mathcal{D}}(\varepsilon)$, with confidence $1 - \delta$.

# 6.3 Computing an $\varepsilon$-fractional core-stable partition for bounded distributions

Our algorithm will consist of a learning phase and a computation phase.

During the learning phase, the algorithm passes through a sample of size $m = \frac{2\lambda(1 + \lambda)n^2\log n^2/\delta}{\varepsilon}$ and, for each coalition $C$ of size $c$ in the sample and each $i \in C$, it stores the value $v_{i}(c)$. Let us denote by $\mathcal{X}$ the coalition sizes that have been learned exactly during this learning phase, that is, the coalition sizes for which the algorithm learned the valuations of each agent. By the previous lemma, with confidence $1 - \delta$, $I_{\mathcal{D}}(\varepsilon) \subseteq \mathcal{X}$.
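The bookkeeping of this learning phase is straightforward; a minimal sketch, in which each sample is assumed to arrive as a pair $(C, \text{values})$ with $\text{values}[i] = v_i(|C|)$ for $i \in C$ (this input format and all names are ours):

```python
def learning_phase(samples, agents):
    """Store v_i(c) for every observed coalition C of size c = |C| and every
    i in C, and return the learned table together with the set X of sizes
    learned exactly, i.e. sizes whose value is known for every agent."""
    v = {i: {} for i in agents}
    for C, values in samples:
        c = len(C)
        for i in C:
            v[i][c] = values[i]  # anonymous HGs: the value depends only on |C|
    X = {c for c in range(1, len(agents) + 1)
         if all(c in v[i] for i in agents)}
    return v, X

samples = [
    ({1, 2}, {1: 5, 2: 3}),
    ({2, 3}, {2: 3, 3: 1}),
    ({1, 3}, {1: 5, 3: 1}),
    ({1, 2, 3}, {1: 0, 2: 2, 3: 4}),
]
v, X = learning_phase(samples, agents={1, 2, 3})
print(X)  # sizes 2 and 3 are exactly learned; size 1 is not
```

A size enters $\mathcal{X}$ only once every agent has been observed in a coalition of that size, mirroring the exact-learning requirement of Section 3.2.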
During the computation phase, our algorithm will make sure that a sufficiently large number of agents have a very small probability of being involved in a core-blocking coalition. Such agents will be called green agents and are defined as follows. An agent $i$ is said to be green in a partition $\pi$ w.r.t. $I \subseteq [n]$ if $|\pi(i)|$ maximizes $v_i(s)$ for $s \in I$. We will show that if the probability of sampling coalitions of sizes $s \in I$ is high enough, then green agents do not want to deviate from their actual partition, with high probability as well. As a consequence, a partition containing many green agents is difficult to core-block, as shown in the following:

Lemma 6.6. Let $\pi$ be any partition of the agents and $I \subseteq [n]$ be such that $I_{\mathcal{D}}(\varepsilon) \subseteq I$. If $\pi$ contains at least $\log_2 \frac{2\lambda}{\varepsilon}$ green agents w.r.t. $I$, then $\pi$ is $\varepsilon$-fractional core-stable.

Proof. Let $C \sim \mathcal{D}$ and $c = |C|$ be its size. We denote by $\mathcal{E}_1$ the event that $C$ core-blocks $\pi$, and by $\mathcal{E}_2$ the event that $C$ does not contain green agents. By definition of green agents, if $c \in I$ and $C$ contains at least one green agent w.r.t. $I$, then it cannot core-block $\pi$. Formally speaking, $\overline{\mathcal{E}_2} \wedge c \in I \Rightarrow \overline{\mathcal{E}_1}$ and therefore $\mathcal{E}_1 \Rightarrow \mathcal{E}_2 \lor c \notin I$. As a consequence,

$$
\operatorname*{Pr}_{\mathcal{D}}\left[\mathcal{E}_{1}\right] \leq \operatorname*{Pr}_{\mathcal{D}}\left[c \notin I \vee \mathcal{E}_{2}\right] \leq \operatorname*{Pr}_{\mathcal{D}}\left[c \notin I\right] + \operatorname*{Pr}_{\mathcal{D}}\left[\mathcal{E}_{2}\right].
$$

Since $I_{\mathcal{D}}(\varepsilon) \subseteq I$, by Lemma 6.3 and Lemma 3.5 we finally get $\operatorname*{Pr}_{\mathcal{D}}[\mathcal{E}_1] \leq \frac{\varepsilon}{2} + \lambda \frac{2^{n - \log_2\frac{2\lambda}{\varepsilon}}}{2^n} = \varepsilon.$

We finally sketch the proof of Theorem 6.1; for a rigorous proof, see the Appendix. The key idea is to create a partition $\pi$ by grouping as many agents as possible in a coalition of their preferred size among a set of sizes, say $I$, that occur with particularly high probability. Hence, by Lemma 6.6 we only need to find appropriate $I$ and $\pi$ such that the number of green agents for $\pi$ w.r.t. $I$ is at least $\log_2\frac{2\lambda}{\varepsilon}$.

Proof of Theorem 6.1. Let us start by defining the set $I$. By Lemma 6.4, we can provide an estimate $\bar{\mu}$ of $\mu$ such that $\mu \in (\bar{\mu} - \alpha, \bar{\mu} + \alpha)$ for a certain $\alpha$ with confidence $1 - \delta$. As a consequence, we can consider $I$ as the intersection of the interval having extreme points $(1 \pm \Delta)(\bar{\mu} \pm \alpha)$ and the set $\mathcal{X}$ of sizes exactly learned during the learning phase. By Lemma 6.5, with confidence $1 - \delta$, we can say that $I_{\mathcal{D}}(\varepsilon) \subseteq \mathcal{X}$, since with such confidence $I_{\mathcal{D}}(\varepsilon)$ has been learned. By union bound, with confidence $1 - 2\delta$, $I_{\mathcal{D}}(\varepsilon) \subseteq I$. W.l.o.g. we can assume our confidence to be $1 - \delta$ by replacing $\delta$ with $\delta/2$. Moreover, since $|I| \leq 2\left(\Delta \bar{\mu} + \alpha\right)$ and $\bar{\mu} \geq 1$, choosing $\alpha = \min\left\{\frac{1}{2\sqrt{n}}, \frac{n}{\lambda + 1}\right\}$ we can derive the following bound:

$$
|I| \leq n \sqrt{\frac{13(\lambda + 1)\log\frac{4}{\varepsilon}}{n}}.
$$

By the pigeonhole principle, there is one $s \in I$ such that at least $\frac{n}{|I|}$ agents prefer $s$ over the other coalition sizes in $I$. Using the hypothesis on $\varepsilon$, we can conclude $\frac{n}{|I|} \geq \log_2\frac{2\lambda}{\varepsilon}$.

Let us now create the desired $\pi$. Let $q, r$ be non-negative integers such that $n = qs + r$ and $r < s$. Giving priority to the agents having $s$ as their most preferred size in $I$, we create $q$ coalitions of size $s$ and one coalition of size $r$. In this way, at least $\log_2\frac{2\lambda}{\varepsilon}$ agents are in a coalition having the most preferred size within $I$ and, therefore, they are green agents for $\pi$ w.r.t. $I$. By Lemma 6.6 the thesis follows.

Finally, we turn our attention to a special yet very popular subclass of anonymous HGs. In this case, thanks to the further assumption of single-peakedness of agents' preferences, we can achieve better values of $\varepsilon$. Because of space constraints, the proof of Theorem 6.2 is deferred to the Appendix.

# 7 Conclusions and Future Work

We introduced and studied the $\varepsilon$-fractional core stability concept, a natural relaxation of core stability where only a small ratio (or a small probability mass) of coalitions is allowed to core-block.

We investigated this concept on two fundamental classes of HGs: Simple FHGs and anonymous HGs. For both these classes the problem of deciding the existence of a core-stable partition is notably hard: NP-complete for anonymous HGs and even $\Sigma_2^p$-complete for simple FHGs. While in Section 4 we showed that very small values of $\varepsilon$ and the choice of the distributions pose limits to the existence of $\varepsilon$-FC solutions, we have still been able to obtain positive results for both classes under different assumptions on the considered distributions.
For simple FHGs we showed that, when sampling from the uniform distribution (which corresponds to $\varepsilon$-FC as in Definition 3.2), it is always possible to construct an $\varepsilon$-FC partition under the assumption that $\varepsilon$ does not decrease faster than a sub-exponential function of $n$. For anonymous HGs instead, we were able to show the existence of $\varepsilon$-FC for a much broader class of distributions (i.e., the $\lambda$-bounded ones), under a similar condition on $\varepsilon$. These encouraging results show that, while having a natural probabilistic nature, the notion of $\varepsilon$-FC stability is flexible enough to allow positive results in very complex settings where core-stability is usually considered unattainable. Moreover, its very definition and its connection with the concept of PAC stabilizability make it resilient to possible uncertainty about agents' preferences, increasing its usability in applications.

Limitations of $\varepsilon$-fractional core stability. Beyond the core, many stability concepts have been introduced and studied in the literature on HGs. Among others, we mention individual rationality (IR), which is considered the minimum requirement of stability: It postulates that no agent strictly prefers being alone, forming a singleton, rather than being in their current coalition. Clearly, core stability implies IR; in fact, if the outcome is not IR, there must exist an agent able to form a blocking coalition on her own. Despite IR being implied by core stability, this is not the case for $\varepsilon$-FC, and the algorithms presented in this paper do not necessarily satisfy IR. Specifically, in the case of simple fractional HGs, any outcome is IR and therefore so is our proposed solution; on the other hand, in the case of anonymous HGs, our algorithm provides an IR solution if coalitions of size 1 are less preferred than any other coalition size.
That said, we believe it is possible to modify our algorithm, under the reasonable assumption that the value of the singleton coalition is known for each agent, so that IR is also satisfied, at the cost of a possibly worse lower bound on $\varepsilon$.

Future directions. This contribution has high potential for future work. A first natural question is whether it is possible to extend our positive results for simple FHGs to the more general class of $\lambda$-bounded distributions. We believe that such an extension is attainable but nonetheless non-trivial. Moreover, although we showed that exponentially small values of $\varepsilon$ are not possible, it is still worth investigating what the best guarantee is for general distributions. More broadly, there are several HG classes that have not been considered in our work, and understanding for which $\varepsilon$ an $\varepsilon$-FC partition exists is certainly of interest. As discussed in the previous paragraph, it would definitely be of interest to come up with an algorithm for anonymous HGs guaranteeing the solutions to be IR. Generally speaking, it would be worth studying $\varepsilon$-FC in conjunction with IR in other HG classes.

# Acknowledgements

We acknowledge the support of the PNRR MIUR project FAIR - Future AI Research (PE00000013), Spoke 9 - Green-aware AI, the PNRR MIUR project VITALITY (ECS00000041), Spoke 2 ASTRA - Advanced Space Technologies and Research Alliance, the Italian MIUR PRIN 2017 project ALGADIMAR - Algorithms, Games, and Digital Markets (2017R9FHSR_002), and the DFG, German Research Foundation, grant (Ho 3831/5-1).

We thank the anonymous reviewers for their insightful comments and suggestions, which helped us to improve the quality of the manuscript.

# References

[1] H. Aziz and F. Brandl. Existence of stability in hedonic coalition formation games. In Proc. 11th Conf. Autonomous Agents and Multi-Agent Systems (AAMAS), pages 763-770, 2012.
[2] H.
Aziz and R. Savani. Hedonic games. In Handbook of Computational Social Choice, pages 356-376. Cambridge University Press, 2016. +[3] H. Aziz, F. Brandt, and P. Harrenstein. Fractional hedonic games. In Proc. 13th Conf. Autonomous Agents and Multi-Agent Systems (AAMAS), pages 5-12, 2014. +[4] E. Balkanski, U. Syed, and S. Vassilvitskii. Statistical cost sharing. In Proc. 30th Conf. Adv. Neural Information Processing Systems (NIPS), pages 6221-6230, 2017. +[5] C. Ballester. Np-completeness in hedonic games. Games Econ. Behav., 49(1):1-30, 2004. +[6] S. Banerjee, H. Konishi, and T. Sonmez. Core in a simple coalition formation game. Social Choice and Welfare, 18(1):135-153, 2001. +[7] P. L. Bartlett and R. C. Williamson. Investigating the distribution assumptions in the PAC learning model. In Proc. 4th Conf. Learning Theory (COLT), 1991. +[8] E. Batziou, M. Bichler, and M. Fichtl. Core-stability in assignment markets with financially constrained buyers. In Proc. 23rd Conf. Econom. Comput. (EC), pages 473-474, 2022. +[9] M. Bichler and S. Waldherr. Core pricing in combinatorial exchanges with financially constrained buyers: Computational hardness and algorithmic solutions. Oper. Res., 70(1):241-264, 2022. +[10] A. Bogomolnaia and M. O. Jackson. The stability of hedonic coalition structures. Games Econom. Behav., 38(2):201-230, 2002. +[11] A. J. Collins, S. Etemadidavan, and W. Khallouli. Generating empirical core size distributions of hedonic games using a monte carlo method. IGTR, 24(3):2250001:1-2250001:28, 2022. +[12] D. Cornelisse, T. Rood, Y. Bachrach, M. Malinowski, and T. Kachman. Neural payoff machines: Predicting fair and stable payoff allocations among team members. In Proc. 36th Conf. Adv. Neural Information Processing Systems (NIPS), 2022. +[13] X. Deng and C. H. Papadimitriou. On the complexity of cooperative solution concepts. Math. Oper. Res., 19(2):257-266, 1994. +[14] K. Donahue and J. M. Kleinberg. 
Model-sharing games: Analyzing federated learning under voluntary participation. In Proc. 35th Conf. Artificial Intelligence (AAAI), pages 5303-5311, 2021. +[15] K. Donahue and J. M. Kleinberg. Optimality and stability in federated learning: A game-theoretic approach. In Proc. 34th Conf. Adv. Neural Information Processing Systems (NIPS), pages 1287-1298, 2021. +[16] J. Dreze and J. Greenberg. Hedonic coalitions: Optimality and stability. *Econometrica*, 48(4): 987-1003, 1980. + +[17] A. Fanelli, G. Monaco, and L. Moscardelli. Relaxed core stability in fractional hedonic games. In Proc. 30th Intl. Joint Conf. Artif. Intell. (IJCAI), pages 182-188, 2021. +[18] S. Fioravanti, M. Flammini, B. Kodric, and G. Varricchio. PAC learning and stabilizing hedonic games: Towards a unifying approach. In Proc. 37th Conf. Artificial Intelligence (AAAI), pages 5641-5648, 2023. +[19] D. B. Gillies. Solutions to general zero-sum games. Contributions to the Theory of Games IV (Annals of Mathematics Studies, 40), pages 47-85, 1959. +[20] A. Igarashi, J. Sliwinski, and Y. Zick. Forming probably stable communities with limited interactions. In Proc. 33rd Conf. Artificial Intelligence (AAAI), pages 2053-2060, 2019. +[21] T. Jha and Y. Zick. A learning framework for distribution-based game-theoretic solution concepts. In Proc. 21st Conf. Econom. Comput. (EC), pages 355-377, 2020. +[22] M. Maschler, B. Peleg, and L. S. Shapley. Geometric properties of the kernel, nucleolus, and related solution concepts. Math. Oper. Res., 4(4):303-338, 1979. +[23] M. Mitzenmacher and E. Upfal. Probability and Computing, 2nd edition. Cambridge University Press, 2017. ISBN 9781107154889. +[24] E. Miyagawa. Strategy-proofness and the core in house allocation problems. Games Econ. Behav., 38(2):347-361, 2002. +[25] D. Peters and E. Elkind. Simple causes of complexity in hedonic games. In Proc. 24th Intl. Joint Conf. Artif. Intell. (IJCAI), pages 617-623, 2015. +[26] L. S. Shapley and M. Shubik. 
Quasi-cores in a monetary economy with nonconvex preferences. Econometrica, 4(34):805--827, 1966. +[27] J. Sliwinski and Y. Zick. Learning hedonic games. In Proc. 26th Intl. Joint Conf. Artif. Intell. (IJCAI), pages 2730-2736, 2017. +[28] S. C. Sung and D. Dimitrov. On core membership testing for hedonic coalition formation games. Oper. Res. Lett., 35(2):155-158, 2007. +[29] P. Trivedi and N. Hemachandra. Noise robust core-stable coalitions of hedonic games. In Proc. 14th Asian Conf. Machine Learning, (ACML), volume 189, pages 1038-1053, 2022. +[30] L. G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134-1142, 1984. +[31] G. J. Woeginger. Core stability in hedonic coalition formation. In Proc. 39th Intl. Conf. Current Trends in Theory & Practice of Comput. Sci. (SOFSEM), pages 33-50, 2013. \ No newline at end of file diff --git a/varepsilonfractionalcorestabilityinhedonicgames/images.zip b/varepsilonfractionalcorestabilityinhedonicgames/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..929c2137e42049339e72920508498bc3fce84cdd --- /dev/null +++ b/varepsilonfractionalcorestabilityinhedonicgames/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed4d5bb0c7320ea1bc5d7ab929df361f767abbfb2f5afffa5f108e96b95aa8c4 +size 89693 diff --git a/varepsilonfractionalcorestabilityinhedonicgames/layout.json b/varepsilonfractionalcorestabilityinhedonicgames/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3eeb877c512c005a6715daaf89112674911dd5f2 --- /dev/null +++ b/varepsilonfractionalcorestabilityinhedonicgames/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24907c9e6685837be7bfe28dc9a2b7bbf4085e5744b0c1340298bac5d0624e7e +size 717319